Script Memory Limits Change
SungAli Resident
The present system of limiting script memory on a per-script basis gives scripters an incentive to create often incredibly inefficient workarounds when they want or need more memory than the limit allows, such as creating a large number of slave scripts and passing data back and forth via ll.MessageLinked. Simply increasing the limit to something more reasonable could alleviate this somewhat, but it doesn't really address the underlying problem: how to responsibly allocate memory per creation, whether that creation is a single prim or a collection of linksets. I therefore propose the following alternative:
Give each linkset a parameter adjustable by anyone with modify rights; call it the Linkset Memory Limit (LML). Default it to a reasonable value, say 1 MB. Add this parameter to the collection of items that affect the linkset's LI, at a reasonable cost, say 1 LI/MB. (I don't know whether that rate is actually reasonable, but 1 LI/MB does make a mentally convenient conversion factor.) Per current practice, let the final LI be the max of the per-item LIs. Each script in the linkset then draws memory from the common memory pool as defined by the LML.
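To make the cost rule concrete, here is a minimal sketch of the proposed LI accounting, assuming the 1 LI/MB rate above and that the LML's LI simply becomes one more per-item value entering the max. The function name and rounding are my own illustration, not part of any actual SL calculation.

```python
import math

# Hypothetical sketch of the proposed LI rule; linkset_li and its
# ceiling-based rounding are illustrative assumptions only.

def linkset_li(per_item_lis, lml_mb):
    lml_li = math.ceil(lml_mb * 1)        # proposed cost: 1 LI per MB of LML
    # Per current practice, the final LI is the max of the per-item LIs.
    return max(per_item_lis + [lml_li])

print(linkset_li([3, 7], 1))    # 7  -> a default 1 MB LML adds nothing extra here
print(linkset_li([3, 7], 12))   # 12 -> a large LML now dominates the object's LI
```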
The simplest implementation of this feature at the script level would probably be to change ll.SetMemoryLimit to be limited by the memory currently available in the LML pool, with an argument of -1 meaning grab it all. This would allow a script to reserve the memory it needs and to free it when no longer needed. Adding ll.GetLML, ll.GetFreeLML, and possibly ll.SetLML (see discussion below), together with the existing parcel prim count functions, would constitute a sufficient set of functions.
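As a sketch of those pool semantics (the LinksetPool class and its dict-based bookkeeping are my own illustration, not real LSL/SLua APIs):

```python
# Hypothetical model of the proposed linkset memory pool; all names here
# are illustrative analogues of the proposed ll.* functions, not real APIs.

class LinksetPool:
    """Common memory pool shared by all scripts in a linkset."""

    def __init__(self, lml_bytes):
        self.lml = lml_bytes              # Linkset Memory Limit
        self.reserved = {}                # script id -> reserved bytes

    def free(self):
        """Memory still available in the pool (ll.GetFreeLML analogue)."""
        return self.lml - sum(self.reserved.values())

    def set_memory_limit(self, script, requested):
        """ll.SetMemoryLimit analogue, clamped to the pool.

        requested == -1 means "grab all remaining memory".
        Returns the amount actually reserved."""
        available = self.free() + self.reserved.get(script, 0)
        granted = available if requested == -1 else min(requested, available)
        self.reserved[script] = granted
        return granted

pool = LinksetPool(1024 * 1024)                  # proposed 1 MB default LML
print(pool.set_memory_limit("a", 256 * 1024))    # 262144
print(pool.set_memory_limit("b", -1))            # 786432 -> grabs the rest
print(pool.free())                               # 0
```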
It might be possible to automate dynamically allocating memory to scripts from the common pool as needed, but I'm not prepared to say how desirable that would be. It would likely improve utilization of the common memory pool, but it could greatly complicate the impact and debugging of out-of-memory conditions, and I'm not sure how it would interact with attempts by a script to reserve needed memory in advance.
This new feature would give scripting creators the ability to make efficient use of as much memory as they might need, with a clearly visible cost that prospective object owners can see and budget against their available resources. It also gives owners with linkset modify rights the ability to adjust this cost within the limits of the scripts' ability to adapt. It entirely eliminates the creator's incentive to use inefficient workarounds that are costly to sim resources and performance. It gives creators a reasonable incentive to limit their use of memory in order to improve the market appeal of their products and/or the land impact of creations they use themselves. It doesn't particularly affect the case for using HTTP and external servers for very large or shared datasets when the delays are acceptable.
Since the LI of worn objects is not currently counted against any budget, keeping worn-object impact under control would require a per-agent LI limit as well. This limit would probably apply only to the LML LI, but *grins* one might possibly make the case for applying it to other LIs as well (no doubt with much cussing from our more elaborately decked-out friends). A higher agent LML LI limit could become another perk of higher account levels.

Since dividing and merging of linksets usually occurs at creation time, and we want creators to manage this process, I would not recommend elaborate algorithms for managing the allocation of LML on merge or divide. It probably suffices on merge to keep the LML of the root of the merged object. On division, it probably suffices to give objects with a new root the default LML.
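The merge/divide rules above can be sketched as follows; the Linkset class and function names are purely illustrative stand-ins, not actual SL types.

```python
# Hypothetical sketch of the proposed merge/divide rules for the LML.

DEFAULT_LML = 1024 * 1024                 # proposed 1 MB default

class Linkset:
    def __init__(self, root, lml=DEFAULT_LML):
        self.root = root
        self.lml = lml

def merge(a, b):
    """Merging keeps the LML of the root of the merged object (a's root)."""
    return Linkset(a.root, a.lml)

def divide(original, new_root):
    """The piece keeping the old root keeps its LML;
    the piece with a new root gets the default LML."""
    keep = Linkset(original.root, original.lml)
    split = Linkset(new_root, DEFAULT_LML)
    return keep, split
```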
Allowing a script to change the LML of an object would mean the script could dynamically change the object's LI. This has both pros and cons: pros, such as allowing a single script to adapt the object to present limits or to set the LML of a newly divided/merged/created object to reasonable values; cons, such as taking complete control over the LI of rezzed-out objects away from the owner, and making marketplace data about LI less reliable. Perhaps a reasonable compromise would be to limit such functions to objects owned by the parcel owner, per other parcel functions. The prospects and limitations of any ll.SetLML function definitely need more discussion.
It would be nice if this feature also applied to LSL scripts, whether compiled to Mono or to the Lua VM, or possibly only when compiled to the Lua VM. If automatically applied to existing content, it would probably break a significant number of existing inefficient workarounds and other instances where the number of 64 KB scripts in an object exceeds 16. (Beware of furniture with nPose and AvSitter systems and many seats.)
Harold Linden
Situationally allowing scripts to request higher memory limits is something we've thought a bit about before; there's preexisting code for it dating back to the initial Mono development, but it never got enabled.
I don't have any description of what expanded memory limits would look like or how they would work, but they definitely won't be in place during the alpha or beta phase.
It's most likely that we would first introduce higher script limits for scripts using key experiences (for "god" objects in scripted experiences) and other "higher-privileged" scripts.
SungAli Resident
It occurs to me that an event warning scripts of a pending LML change would also be needed, so scripts can adapt when the LML is being reduced. Any manual (or, if scripted reductions to the LML are ever allowed, automatic) change to the LML would then be delayed for a short time while the announcing event was handled by any scripts in the object listening for it. This would give scripts in the object time to adjust their memory settings to the new limit ahead of a reduction of the LML. An increase of the LML could likewise produce the same notification event, timed to occur after the change has taken place and the new memory is available. Alternatively, if for any reason the event had to fire before the change in both cases, scripts would then know to wait until the change was complete before making their own adjustments.
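The two orderings described above (warn-then-shrink on a reduction, grow-then-announce on an increase) can be sketched like this; the event delivery is modeled with a plain callback, and none of these names are real LSL/SLua APIs.

```python
# Illustrative sketch of the proposed LML-change notification ordering.
# Event and function names here are hypothetical assumptions.

def change_lml(linkset, new_lml, notify):
    """Apply an LML change with the ordering proposed above.

    notify(old, new) stands in for delivering the warning event to every
    listening script; linkset is a dict with an 'lml' key."""
    old = linkset["lml"]
    if new_lml < old:
        # Reduction: announce first, giving scripts time to shrink
        # their reservations before the memory disappears.
        notify(old, new_lml)
        linkset["lml"] = new_lml
    else:
        # Increase: apply first, then announce once the memory exists.
        linkset["lml"] = new_lml
        notify(old, new_lml)

events = []
ls = {"lml": 2}
change_lml(ls, 1, lambda old, new: events.append(("warned at lml", ls["lml"])))
change_lml(ls, 3, lambda old, new: events.append(("warned at lml", ls["lml"])))
print(events)   # reduction warned while lml was still 2; increase warned at 3
```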
An attempt to manually increase the LML that would push the object's LI over current limits should produce a warning to the user, allowing them to cancel the change, or to accept it with the caveat that accepting will cause the object to be immediately returned to user inventory, per the usual procedure for exceeding LI limits, or detached if attached to an agent.
Note that if the default LML is also the minimum LML, then objects containing only scripts that work within the default memory settings could ignore these features altogether. Adding a new script for which there was insufficient memory in the linkset pool would cause that script to immediately crash with an out-of-memory error, notifying the user that the limits had been exceeded and allowing them to adapt as they chose.
Thunder Rahja
No.
SungAli Resident
Nothing in the proposal limits the number of scripts in an object when that makes practical sense, James. It simply removes the incentive to add large numbers of scripts for the sole purpose of getting very large amounts of memory via methods that are inherently inefficient, slow, and wasteful of system resources for everyone.
Jamesp1989 Resident
Sorry, but I'm going to have to downvote this. We are already getting a bigger memory limit, and this proposal massively overcomplicates literally everything: from marketplace listings to land ownership to managing our objects' LI. They are not going to give us 1 MB of memory, and even if we do keep the SLua test memory levels, we definitely won't be getting more.
If something like this were to exist, it should be an estate/land owner privilege, but I don't see it happening.
Besides, master/slave scripts are important for products that have add-ons.
SuzannaLinn Resident
I like the idea, but I would choose an implementation with fewer options:
- A Linkset Memory Use (LMU) instead of a Linkset Memory Limit (LML):
  - It is informative only, shown in the edit window, read-only
  - It is the total of 64 KB memory blocks used by all the scripts
  - In current objects it will be the same as the number of scripts
- The memory for each script is changed in its properties window only:
  - It is chosen in blocks of 64 KB, perhaps with a limit of 16 blocks (1 MB)
  - The LMU is recalculated with the changes
- Dividing or merging linksets recalculates the LMU from the scripts in each new object
- No new LL functions or changes to the current ones
- Each 64 KB block counts as 0.1 LI
- The first 10 blocks (640 KB) are free:
  - No change for many objects that are lightly scripted (fewer than 11 scripts)
  - Motivation for scripters to stay under this limit
  - Objects could be promoted as "script LI free" (there could be an official badge or icon)
- Applied to all the languages and VMs, because:
  - There can be a mix of scripts in each object
  - LSL/Mono will change to LSL/Lua in the future
  - Scripters should use their preferred language, not the one that fits memory needs
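The block-based accounting above works out to simple arithmetic; here is a sketch assuming 64 KB blocks, 0.1 LI per block, a 16-block per-script cap, and the first 10 blocks free. The function names are my own illustration.

```python
# Illustrative sketch of the block-based LMU/LI accounting proposed above.
# All constants follow the proposal; the function names are hypothetical.

BLOCK = 64 * 1024
FREE_BLOCKS = 10
LI_PER_BLOCK = 0.1
MAX_BLOCKS_PER_SCRIPT = 16               # 1 MB cap per script

def lmu(script_blocks):
    """Linkset Memory Use: total 64 KB blocks used by all scripts."""
    assert all(1 <= b <= MAX_BLOCKS_PER_SCRIPT for b in script_blocks)
    return sum(script_blocks)

def script_li(total_blocks):
    """Only blocks beyond the first 10 add LI."""
    return max(0, total_blocks - FREE_BLOCKS) * LI_PER_BLOCK

blocks = lmu([1, 1, 4, 8])               # four scripts totaling 14 blocks
print(blocks)                             # 14
print(script_li(blocks))                  # 0.4 -> only 4 blocks are paid
print(script_li(lmu([1] * 10)))           # 0.0 -> "script LI free"
```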
jaragleef Resident
SuzannaLinn Resident. Got a Canny link for that suggestion?
SuzannaLinn Resident
jaragleef Resident No, I'm using this same Canny post, since it's closely related.