šŸ“ƒ SLua Alpha

General discussion and feedback on Second Life's SLua Alpha
Script Memory Limits Change
The present system of limiting script memory on a per-script basis gives scripters an incentive to create often incredibly inefficient workarounds when they want or need more memory than the limit allows, such as creating a large number of slave scripts and passing data back and forth via ll.MessageLinked. Simply increasing the limit to something more reasonable could alleviate this somewhat, but it doesn't really address the underlying problem: how to responsibly allocate memory per creation, whether that creation is a single prim or a collection of linksets. I therefore propose the following alternative:

Give each linkset a parameter, adjustable by anyone with modify rights; call it the Linkset Memory Limit (LML). Default it to a reasonable value, say 1 MB. Add this parameter to the collection of items that affect the linkset's LI, at a reasonable cost, say 1 LI/MB. (I don't know if that is actually reasonable, but 1 LI/MB does make a mentally convenient conversion factor.) Per current practice, let the final LI be the max of the per-item LIs. Each script in the linkset then draws memory from the common pool defined by the LML.

The simplest implementation of this feature at the script level would probably be to change ll.SetMemoryLimit to be limited to the currently available memory in the LML pool, with an argument of -1 meaning grab it all. This would allow a script to reserve needed memory and to free it when no longer needed. Adding ll.GetLML, ll.GetFreeLM, and possibly ll.SetLML (see discussion below), together with the existing parcel prim count functions, would constitute a sufficient set of functions. It might be possible to automate dynamically allocating memory to scripts from the common pool as needed, but I'm not prepared to say how desirable this might be.
Automatic allocation would likely improve use of the common memory pool, but it could greatly complicate debugging of out-of-memory conditions, and I'm not sure how it would interact with attempts by a script to reserve needed memory in advance.

This new feature would give scripting creators the ability to make efficient use of as much memory as they might need, with a clearly visible cost that prospective object owners can see and budget against their available resources. It also gives owners with linkset modify rights the ability to adjust this cost, within the limits of the scripts' ability to adapt. It entirely eliminates the creator's incentive to use inefficient workarounds that are costly to sim resources and performance, and it gives creators a reasonable incentive to limit their use of memory in order to improve the market appeal of their products and/or the land impact of creations they use themselves. It doesn't particularly affect the case for using HTTP and external servers for very large or shared datasets when those delays are acceptable.

Since the LI of worn objects is not currently counted against any budget, keeping worn-object impact under control would require a per-agent LI limit as well. This limit would probably apply only to the LML LI, though (grins) one might possibly make the case for applying it to other LIs as well (no doubt with much cussing by our more elaborately decked-out friends). A higher agent LML LI limit could become another perk of higher account levels.

Since dividing and merging of linksets usually occurs at creation time, and we want creators to manage this process, I would not recommend elaborate algorithms for managing the allocation of LML on merge or divide. It probably suffices on merge to keep the LML of the root of the merged object, and on division to give objects with a new root the default LML.
Allowing a script to change the LML of an object would mean the script could dynamically change the LI of the object. This has both pros and cons: pros such as allowing a single script to adapt the object to present limits, or to set the LML of a newly divided/merged/created object to reasonable values; cons such as removing the owner's complete control over the LI of their rezzed-out objects, and making marketplace data about LI less reliable. Perhaps a reasonable compromise would be to limit such functions to objects owned by the parcel owner, per other parcel functions. The prospect and limitations of any ll.SetLML function definitely need more discussion.

It would be nice if this feature also applied to LSL scripts, whether compiled to Mono or to the Lua VM, or possibly just when compiled to the Lua VM. If automatically applied to existing content, it would probably break a significant number of existing inefficient workarounds and other instances where the number of 64 KB scripts in an object exceeds 16. (Beware of furniture with nPose and AvSitter systems and many seats.)
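A minimal sketch of how a script might use the proposed pool functions. Every name here is hypothetical, taken from the proposal above (ll.GetLML, ll.GetFreeLM, and the -1 convention for ll.SetMemoryLimit); none of this exists in SLua today.

```lua
-- Hypothetical sketch of the proposed LML pool API (names from the proposal above).

local NEEDED = 256 * 1024            -- this script wants 256 KB

local pool = ll.GetLML()             -- proposed: total pool size for the linkset, in bytes
local free = ll.GetFreeLM()          -- proposed: bytes not yet reserved by other scripts

if free >= NEEDED then
    ll.SetMemoryLimit(NEEDED)        -- reserve 256 KB from the common pool
else
    ll.OwnerSay(string.format(
        "LML pool exhausted: need %d, only %d of %d bytes free",
        NEEDED, free, pool))
end

-- Under the proposal, an argument of -1 would grab all remaining pool memory:
-- ll.SetMemoryLimit(-1)
```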

tracked

Add the missing other half of IPC, enable proper unit testing, and get SL taken more seriously by game devs: Synchronous script-to-script method calls, in addition to link messages.
You're already working on a major overhaul with Lua. NOW is the right time. It will NEVER, EVER be easier than RIGHT NOW. You will NEVER be able to reduce future capex on inefficient scripting overhead more easily than RIGHT NOW.

Link messages: Good for broadcasting messages between scripts where it doesn't matter much if delivery takes a few milliseconds, and you don't mind scripting an asynchronous handler to respond if necessary. They don't do a good job of supporting proper organization of software in accordance with the modern standards any developer expects. OK for the Single Responsibility Principle and, to some extent, the Interface Segregation Principle. They do absolutely nothing to help with unit or integration tests. No serious programmer is looking at this and saying, "yes, this is good and cool."

Direct, synchronous calls to public interfaces between objects: Java, TypeScript, C#, Lua (not mentioning this by accident), Python, and any other serious programming language you can mention supports this natively. This was introduced in Simula 67 in 1967 and has been INDUSTRY-STANDARD ever since, because it's awesome and it rules. We get to call public methods on other objects (scripts) synchronously, just as fast, or nearly so, as we would call a function in the same script. You DON'T have to engineer a massive comprehensive interface mechanism between scripts, or build an include mechanism. You add ONE transmitting function to the LL library, back-end routing between Luau states, and an in-script handler which receives and responds to these messages. That's it! We don't have to engineer extremely hacky workarounds that force an asynchronous mechanism to serve where a synchronous one would be VASTLY better. We get to write unit and integration tests without cramming them into the same file as production code, where they take up valuable RAM.
We feel like we're using a real programming language, one which allows us to use the same patterns and techniques as ANY serious programming language from the last 58 years.

Real example (my aircraft physics system): A Weight & Balance script scans the linkset to determine where all the components are, plus their volumes and orientations. These are mapped to inertia proxies (cuboid, flat plate, thin hoop, etc.) and used to calculate the inertia tensor. The airfoil prims are then split into segments (blade element theory) and have control surface information assigned. The linear force physics script has to obtain a full copy of all the airfoil parameters and segments from W&B. The moments (angular force) physics script has to do the same thing, and spend the same memory. Now the object has three independent copies of the same data. There are a ton of copy-pasted functions between these scripts because there is no #include mechanism, which requires more storage for the script source, compiled bytecode, and stack and heap segments as well: not just for airfoil parameters, but for MANY, MANY other things that Linear and Moments both have to know about. If Linear and Moments both run timers, then LL pays the CPU cost for that. They are only in separate scripts to work around RAM limitations! If they could call a shared library script in near-real time, they could be ONE script with ONE timer. LL is paying for all that overhead. I am losing huge amounts of time dealing with it. Everybody loses, and it sucks. Every other day I'm fighting stack-heap collisions and wishing I had some way to effectively write unit tests. The constant discouragement is dragging me down unnecessarily.

With direct inter-script calls: llInvoke(link, scriptKey, "method", args...) -> the Luau back-end routes the call from one Lua state (script) to the other, then returns whatever needs to be returned. (Just an example.) You can make this work only within the linkset, or make it work sim-wide.
(Sim-wide would enable a lot of cool stuff that relies on llRegionSay() now.) The transmitting script blocks until a response is received. If access control is desired on the receiving end, you can add a PIN to the invoke message (a la the existing remote-load-script PIN), or whatever.

In my project, Linear and Moments could just ask W&B for airfoil data in near-real time and get an answer back in microseconds. Functions for calculating rho, tangential velocity, dozens of aeronautical equations, etc. could live in a standard library. Scripts could call that library, again getting answers back in microseconds, instead of all that copy-pasted code gobbling up space on YOUR servers. You don't have to provide an #include mechanism, further conserving space. I can write UNIT AND INTEGRATION TESTS!!! and stop relying on horrid llOwnerSay() spam for everything. I can stop feeling so EXHAUSTED while trying to do serious innovation in SL.

More efficient scripts from now on -> reduced capex for LL. LL wins, SL developers win, users win, EVERYBODY WINS. What's not to like?
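A sketch of what the two sides of such a call might look like. Both llInvoke and the on_invoke handler are hypothetical names illustrating the suggestion above, not an existing SLua API:

```lua
-- Caller (e.g. the Linear physics script): blocks until the target
-- script returns, like an ordinary synchronous function call.
local segments = llInvoke(LINK_THIS, wbScriptKey, "getAirfoilData", wingIndex)

-- Receiver (the Weight & Balance script): a handler the back-end would
-- route incoming invocations to; its return value goes back to the caller.
function on_invoke(sender, method, ...)
    if method == "getAirfoilData" then
        local wingIndex = ...
        return airfoilSegments[wingIndex]   -- routed back synchronously
    end
end
```

In this sketch, access control (the PIN idea above) and the linkset-only vs sim-wide question would be handled by the back-end routing layer, not by the scripts themselves.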
Events in SLua
There has already been some discussion of this in other Canny posts (like here), but a centralized issue for feedback is probably a better idea. As the post by Harold Linden says in the above link, LL are considering something like:

LLEvents.touch_start = function ...

Personally I would rather suggest something more akin to:

local handle = llevent.onTouchStart(function(touches: number) end)
llevent.dispose(handle)

or:

local handle = llevent.on(llevent.TOUCH_START, function(touches: number) end)
llevent.dispose(handle)

or (nya's suggestion):

local function touchHandler(touches: number) end
llevent.on(llevent.TOUCH_START, touchHandler)
llevent.off(llevent.TOUCH_START, touchHandler)

This is mostly to allow multiple event handlers to be set up, if not now then at least in the future, and to expand to support things similar to:

function listenHandler(channel, name, key, msg) end
local listener = ll.Listen(0, "", "", "test")
llevent.on(llevent.LISTEN, listenHandler, listener)

or:

function listenHandler(channel, name, key, msg) end
llevent.on(llevent.LISTEN, listenHandler, {channel = 0, message = "test"})

Possibly something similar to Roblox's "standard", or something designed in a way that is compatible with it, so it can be properly extended later. This needs to happen BEFORE a possible beta phase. There should also be NO COMPATIBILITY with the current way of working: all current scripts SHOULD break and need rewriting. Having both is not really a good option, and NOW is the time for breaking that.
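Putting the pieces together, a script using the suggested registration style might look like the sketch below. The entire llevent table is hypothetical, assembled from the variants suggested above; nothing here is a shipped API.

```lua
-- Hypothetical llevent API per the suggestions above; nothing here exists yet.
local function touchHandler(touches: number)
    ll.OwnerSay("touched " .. touches .. " time(s)")
end

local function listenHandler(channel, name, key, msg)
    ll.OwnerSay(name .. " said: " .. msg)
end

-- Multiple handlers can be registered for the same event type:
llevent.on(llevent.TOUCH_START, touchHandler)

-- A handler bound to one specific listener, not every listen event:
local listener = ll.Listen(0, "", "", "test")
llevent.on(llevent.LISTEN, listenHandler, listener)

-- Handlers can later be detached individually:
llevent.off(llevent.TOUCH_START, touchHandler)
```

The main design point this illustrates is that handlers are values that can be attached and detached independently, rather than a single assignable callback slot per event.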

inĀ progress

lljson encode and decode functions with vectors, quaternions, and UUIDs.
Right now, vectors and quaternions encode to strings and decode to strings with the "<...>" format; likewise, UUIDs decode to strings. This is a major nuisance to scripters when decoding tables with embedded vectors, quaternions, or UUIDs, requiring special decoding functions in each and every script using them. The underlying problem is that at present there is no way to differentiate between an encoded vector, quaternion, or UUID and the equivalent string value; i.e., vector(1,2,3) and "<1,2,3>" both encode to "<1,2,3>".

I'd like to propose a variation on encode/decode (call them lljson.pack and lljson.unpack, or better, perhaps, add an optional EncodingType argument to encode and decode) that encodes vectors and quaternions as they are now, and encodes UUIDs by adding the same <> delimiters around the current UUID string. When a string starting with < and ending with > is encountered, encode it with an extra < and > at each end, and decode such strings by removing the added < and >. Vectors, quaternions, and UUIDs can then be uniquely identified by their undoubled delimiters and the appropriate internal format, and decoded directly to the appropriate type.

This would allow encoding and decoding of all SLua types in tables without special intervention by scripters, significantly simplifying such operations and greatly improving performance (one optimized pass through the data in C, rather than one in C followed by one of random scripter quality in SLua) when passing tables between scripts, or storing and retrieving tables in Linkset Data. The only reason for keeping encode and decode as they are now is compatibility with external JSON operations, and even then, if vectors, quaternions, or UUIDs are involved, the proposed operations would likely be superior, since it would be necessary to make accommodations for these types on the remote end anyway.
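The delimiter-doubling rule can be illustrated in plain Lua. The function names are illustrative only; under the proposal this logic would live inside lljson's C encoder/decoder, not in user scripts.

```lua
-- Illustrative sketch of the proposed delimiter-doubling rule.

-- Encoding: a plain string that happens to look like a vector gets an
-- extra pair of delimiters, so it can't be mistaken for a real vector.
local function escapeString(s)
    if s:sub(1, 1) == "<" and s:sub(-1) == ">" then
        return "<" .. s .. ">"        -- "<1,2,3>" becomes "<<1,2,3>>"
    end
    return s
end

-- Decoding: doubled delimiters mean an ordinary string, so strip one
-- pair; undoubled delimiters mean a real vector/quaternion/UUID, which
-- the decoder would parse directly into the appropriate type.
local function unescape(s)
    if s:sub(1, 2) == "<<" and s:sub(-2) == ">>" then
        return s:sub(2, -2)           -- back to the original "<1,2,3>"
    end
    return s                          -- candidate vector/quaternion/UUID
end
```

With this rule, vector(1,2,3) encodes to "<1,2,3>" while the string "<1,2,3>" encodes to "<<1,2,3>>", so the two round-trip unambiguously.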