Scripting Features

Touch Pointer Capture
There is an improvement that could be made to touch events to greatly extend their capabilities. This is a preliminary specification and suggestion; it can be improved and refined later via community/Linden feedback.

New functions

* integer llCaptureTouch(integer detected, integer mode)
* llReleaseTouchCapture(integer handler)
* integer llHasTouchCapture()

llCaptureTouch would be called within touch_start (and perhaps the touch event could be allowed too) to start touch capture. During touch capture the viewer would continue to pass touch events from across any surface, even outside the prim the touch started from.

Background

This feature is heavily influenced by my web development career, here: https://developer.mozilla.org/en-US/docs/Web/API/Element/setPointerCapture

That feature allows an element to capture touch events outside its own boundaries, which is useful for many applications. A basic example would be a slider prim capturing touch events outside its boundaries, a problem that has plagued many scripted HUDs/UIs. Another would be draggable HUDs. But as a longtime SL user I am especially looking forward to having touch events across any in-world surface, which has significant applications for improving user interactions. For example: start a drag to move furniture around the house, then use raycasting to check for walls so the furniture can place itself in an ideal spot, with the furniture actively showing a "ghost" representation of itself while the user is still dragging. This is a common user interaction seen in many games and other 3D applications that feels intuitive and friendly.

Spec

## For llCaptureTouch, the integer mode constant is a bitfield of

* CAPTURE_WORLD
* CAPTURE_SCREEN
* CAPTURE_CONFIRM_TAP
* CAPTURE_PASSTHROUGH

## So how does a user know the pointer capture is ongoing...

* Change the cursor to a "dragging" cursor similar to web browsers (Jakob's Law -- familiarity / existing mental model)
* Apply the "dragging" cursor to existing grab behaviours as well, for emphasis

## ...and how do they exit it?

* (Default / intended behaviour) When the user stops holding down the mouse button and the touch_end event fires, the pointer capture is released
* The script can call llReleaseTouchCapture to release pointer capture (should scripts receive a touch_end here? I think yes, fire touch_end)
* Pressing ESC should be a familiar escape hatch: it always forces release of pointer capture and fires touch_end
* In CAPTURE_CONFIRM_TAP mode, however, the user must touch a second time to stop capture

A sketch of the basic flow follows.
What coordinate system is used by default?

If the touch event started from an in-world object, coordinates are absolute WORLD coordinates, as if you had touched any in-world prim normally. If the touch event started on a HUD attachment, they are SCREEN coordinates.

There are three ways SCREEN coordinate capture could be implemented here:

1. Simplest: pass mouse screen coordinates directly along and ignore any raycasting against HUD attachments (e.g. stop updating llDetectedTouchFace etc.; only llDetectedTouchPos is needed). The use case in a HUD is a bit different, and there isn't really a "world" to raycast against in a HUD.
2. Raycast only against the HUD the touch started from -- the current behaviour, but sending only mouse coordinates outside its boundaries / where the raycast fails to hit any other prim of the linkset.
3. Raycast against all HUD attachments -- this could allow HUDs to snap against the bounding box of another HUD, or enable very interesting HUD-to-HUD interactions. For example, a temp-on-attach experience HUD showing the inventory loot of a dead monster or loot chest, with the user dragging items into an RPG/Minecraft-like inventory game HUD. The llDetected* data could be enough to work out the inventory slot grid and then communicate the item handover to the game HUD.

Privacy concerns: I highly recommend 3) because it is extremely useful, but as a consequence it could also reveal which HUDs are attached, which is a privacy concern. You could provide only select information, but that could hinder intentional use cases like cross-HUD game inventories. Other checks could help, such as only allowing scripts in the same experience to see each other's full info. Another option is to allow only 1) when an in-world object requests pointer capture for screen coordinates, and 3) only when the capture started from a HUD; but that might prevent use cases like intentionally dragging a game object onto a HUD (e.g. into an inventory game HUD) -- again, perhaps allow it only within the same experience.

Overriding the coordinate system

## CAPTURE_WORLD

Using CAPTURE_WORLD allows a HUD to override the default coordinate system and instead use WORLD coordinates for captured touch events. E.g. dragging an item from a HUD to drop into the world, showing a ghost representation during capture as a preview.

## CAPTURE_SCREEN

CAPTURE_SCREEN, on the other hand, would allow an in-world object to capture touch events in SCREEN coordinates, for example to drag an item into an inventory game HUD, among other use cases.

CAPTURE_CONFIRM_TAP?

Another mode of touch capture could require a second click to confirm, and only then release pointer capture.

By default, without confirm tap, the behaviour is as follows:

1. User holds down the mouse -- touch_start is fired
2. Script calls llCaptureTouch; pointer capture is initiated
3. All touches outside the prim boundary are passed to touch events
4. User lets go of the mouse -- touch_end is fired and pointer capture is released

With confirm tap, the cursor changes to something like a hand with a secondary icon, e.g. like Location Select -- https://learn.microsoft.com/en-us/windows/win32/menurc/about-cursors

The behaviour becomes:

1. User holds down the mouse -- touch_start is fired
2. Script calls llCaptureTouch(0, CAPTURE_CONFIRM_TAP)
3. All touches outside the prim boundary are passed to touch events
4. User lets go of the mouse -- touch_end is fired and pointer capture continues; touch() events still fire for ghosting/preview purposes
5. User clicks again to confirm the location -- touch_start / touch_end are fired and pointer capture is released. llHasTouchCapture returns true only in touch_start in this case (otherwise it is false, as pointer capture in any other case is not possible when a touch has just started)

Confirm tap is an optional extension of the feature. It is meant for better accessibility (holding down the mouse is not necessarily easy for everyone) and alternative usability (click once on a button, it shows a furniture ghost outline snapping to the floor and avoiding intersecting walls; click once again to confirm the furniture location). A sketch of this flow follows.
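Here is a hypothetical sketch of the confirm-tap variant for the furniture-placement example. The API and constants are the proposal's, and the commented-out commitPlacement/showGhost calls are placeholders for application logic, not real functions.

```
// Hypothetical confirm-tap flow (proposed API, not existing LSL).
default
{
    touch_start(integer n)
    {
        // Per the proposal, llHasTouchCapture is only non-zero in
        // touch_start when a confirm-tap capture is already active,
        // so this branch is the confirming second click.
        if (llHasTouchCapture())
        {
            // commitPlacement(llDetectedTouchPos(0)); // app logic
        }
        else
        {
            // First click: begin a confirm-tap capture.
            llCaptureTouch(0, CAPTURE_CONFIRM_TAP);
        }
    }

    touch(integer n)
    {
        // Keeps firing even after the mouse button is released, so
        // the ghost preview can follow the pointer until the second,
        // confirming click.
        // showGhost(llDetectedTouchPos(0)); // app logic
    }
}
```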
Privacy concerns: A distinct cursor should be used for good UX, and on-screen text could be shown, similar to how mouselook shows instructions. The concern is a rogue script, such as a malicious vendor script, using this mode to track mouse coordinates. The on-screen text and cursor indicator should make it clear that a script is still capturing touches.

How does a script know how/if pointer capture was released, to avoid a confusing intermediary state? The viewer/sim could have a timeout on confirm-tap mode, or it could be endless / last until the agent leaves the region or logs out. Scripts could also have a function that returns a constant indicating how capture was released on touch_end. I was thinking llHasTouchCapture could return a constant instead of a boolean, e.g.:

* CAPTURE_NONE (no capture happened / initial state)
* CAPTURE_ACTIVE (actively capturing touches)
* CAPTURE_CANCELLED (user pressed ESC / cancelled capture)
* CAPTURE_RELEASED (released via llReleaseTouchCapture)
* CAPTURE_END (touch capture ended successfully: by default, letting go of the mouse button / a normal touch_end, or the second click in confirm-tap mode)

CAPTURE_PASSTHROUGH?

In some cases, a touch capture started from an object might want raycasts to pass through the object itself. For example, a piece on a chessboard no longer cares about touches on itself, only about the board below it. This avoids hacky raycasts or other workarounds such as setting the piece invisible / out of the way / phantom. Another example: when moving furniture to another location, you want to raycast onto the floor of the building rather than against the furniture itself while the user makes a small adjustment.

Multi-users

llCaptureTouch takes an integer detected argument -- since touch events can carry multiple detections from different users at once -- to indicate which user to capture from, matching the touch event's detected index. llReleaseTouchCapture could require an integer handler returned by llCaptureTouch, similar to listen handlers. This is if we want to support multiple captures by different users, for example a strategy board game that multiple users interact with at once. A sketch follows.
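Here is a hypothetical sketch combining the multi-user handles with the release-state constants above; again, everything CAPTURE_* and the two functions are this proposal's, not existing LSL.

```
// Hypothetical multi-user capture with release-state reporting
// (proposed API and constants, not existing LSL).
list gHandles; // capture handle per toucher, for llReleaseTouchCapture

default
{
    touch_start(integer n)
    {
        // One capture per detected toucher, keyed by its handle,
        // following the listen-handle pattern.
        integer i;
        for (i = 0; i < n; ++i)
        {
            gHandles += llCaptureTouch(i, CAPTURE_WORLD);
        }
    }

    touch_end(integer n)
    {
        // Proposed refinement: llHasTouchCapture reports how the
        // capture ended. (How it would select among several active
        // captures is left open here.)
        integer state = llHasTouchCapture();
        if (state == CAPTURE_CANCELLED)
        {
            // User pressed ESC -- roll the drag back.
        }
        else if (state == CAPTURE_END)
        {
            // Normal release -- commit the move.
        }
    }
}
```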
Usecases

The use cases go way beyond this list, but here are a few off the top of my head:

* Game HUD that features an inventory grid (RPG, Minecraft, modern Resident Evil, or Diablo-like) -- what happens when a user wants to drag an item from the inventory grid to another window, another HUD, or even drop something on the ground?
* Game object dragged into that inventory grid -- what if the game wants to allow users to drag a game object into their inventory?
* Strategy game board -- there are rezzed game pieces; the touch could start from a piece or from the board, and tracking touches across different objects can carry different intentions and meanings
* Ghosting/preview -- showing a ghost/preview of a drag operation is a very common design pattern in 3D worlds / games. It works because the script can track touches continuously and thus preview an action based on where the user is dragging -- see https://www.youtube.com/watch?v=_zxU1khDXcU
* Smart furniture placement -- a hypothetical way to place furniture smartly like a game would: showing a ghost preview, snapping to walls and floors, moving away to avoid intersections using raycasts, and showing the preview location
* HUD to drop an element after confirming location: https://www.youtube.com/watch?v=N-Qur11cvYQ
* Tool HUDs that can avoid the workaround of fullscreen prims on a HUD to capture screen location: https://www.youtube.com/watch?v=9mPc_9yX2mM
* Reliably tracking screen coordinates for HUDs, as it takes a moment before a prim can resize itself to fullscreen to capture all input: https://www.youtube.com/watch?v=ZAP-PZJC7v4

Feel free to submit new use cases following the format above ("name -- single sentence if possible").

Wishlist while we are enhancing touch

* integer llDetectedMeta(integer d) -- a bitfield indicating whether the SHIFT and/or CTRL key is held down. This is a very common UI design pattern for extending functionality. For example, while dragging in capture mode, the SHIFT key could be pressed to enable snap-to-grid when placing furniture smartly with a script, among many other use cases. llDetectedMeta could also be used outside of capture mode for normal touch events, enhancing the SL platform as a whole. Alternative name: llDetectedKeyboardMeta. A snap-to-grid sketch follows this list.
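A minimal sketch of the snap-to-grid idea: llDetectedMeta is the proposed function, and META_SHIFT is a constant invented here purely for illustration.

```
// Hypothetical llDetectedMeta usage (proposed function; the
// META_SHIFT constant is invented here for illustration).
default
{
    touch(integer n)
    {
        vector pos = llDetectedTouchPos(0);
        if (llDetectedMeta(0) & META_SHIFT)
        {
            // SHIFT held: snap the dragged position to a 0.5 m grid.
            pos.x = llRound(pos.x * 2.0) / 2.0;
            pos.y = llRound(pos.y * 2.0) / 2.0;
        }
        // ... move the ghost preview to pos ...
    }
}
```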
4 · tracked

Remove rez distance limit on llRezObjectWithParams
As someone who has been scripting in SL for a long time, I have built many workarounds to deal with rez distance limits. Now that I'm working on a region-scale game, I'm building a feature that does a "level change" for the entire region: llDerezObject removes the unscripted objects, and a new level is rezzed in to replace them. It occurred to me that the rez distance limits don't make much sense here, or are at least far too strict.

At least for parcel owners and people making experiences or tools, we are left with setting up messaging systems to reposition rezzed objects to their intended destination. In my case I want to do a complete level change of an entire region by rezzing unscripted objects in place and managing their lifetime via llDerezObject. That's my current primary use case. But in the past I've made tools, HUDs, and other things that could have benefited from unscripted, unlimited-distance rezzing with llDerezObject control -- for example, a HUD that renders markers and icons visible to others, which would have eliminated hundreds of scripts having to be instantiated and listening for commands.

Could you please reconsider removing or increasing the rez distance limits? If there are legacy compatibility concerns, this change could be made only for llRezObjectWithParams, which is still relatively new and unlikely to create problems. Ideally, as an experience tools scripter and region owner, I should not have to deal with any rez distance limits; there's really no point. Likewise, I might not want to impose rez distance limits on visitors either. Playing devil's advocate, I could see a case for keeping the distance limit on the mainland for attachments on avatars, though I admit I feel wary even about that. On the whole, a consistent removal of the limit would be great for innovation, along with messaging that encourages the use of llDerezObject to manage rezzed objects. The messaging workaround this would replace is sketched below.
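For context, here is a minimal sketch of the reposition-after-rez workaround the post alludes to, using only existing LSL. The inventory name "level-piece", channel -48200, and message format are arbitrary choices for illustration.

```
// Rezzer: existing-LSL workaround for the rez distance limit.
// The object is rezzed nearby, then told where it really belongs.
integer CHANNEL = -48200;           // arbitrary example channel
vector gDest = <200.0, 96.0, 25.0>; // true target, beyond rez range

default
{
    touch_start(integer n)
    {
        // Rez within the allowed distance of this object...
        llRezObject("level-piece", llGetPos() + <2.0, 0.0, 0.0>,
                    ZERO_VECTOR, ZERO_ROTATION, 0);
    }

    object_rez(key id)
    {
        // ...then message the new object its real destination.
        llRegionSayTo(id, CHANNEL, (string)gDest);
    }
}

// A companion script inside "level-piece" would listen on CHANNEL
// and call llSetRegionPos((vector)message) -- an extra script in
// every rezzed object, which is exactly the overhead that removing
// the distance limit would eliminate.
```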
4 · tracked

PBR llFunctions
I've spent about the last week updating some scripts to handle both PBR and Blinn-Phong materials simultaneously. The scripts work fine, but the current way you have to update PBR values -- through llSetPrimitiveParams and the full list of values -- is not ideal. To avoid overwriting your existing values with blank or incorrect data, you have to know and store your existing values, modify them, and send the whole list back in. Adding some llFunctions for PBR values, so you don't have to modify the entire parameter list to change one value, would make scripting PBR modifications much more pleasant to work with. If llFunctions are not feasible, a way to update individual values in the parameter lists without data loss (e.g. passing a blank string as a texture and having it skip the value instead of applying it) would be appreciated. A sketch of the read-modify-write dance this would replace appears after the list below.

I've compiled a list of how I would personally pop each value out into a Set function (each would have a matching Get function, a la llGetColor and llSetColor):

---

For PRIM_GLTF_BASE_COLOR:

* llSetGLTFBaseTexture(string texture, vector repeats, vector offsets, float rotation_in_radians, integer face);
* llSetGLTFBaseColor(vector color, integer face);
* llSetGLTFBaseAlphaMode(integer gltf_alpha_mode, integer face);
* llSetGLTFBaseAlpha(float alpha, integer face);
* llSetGLTFBaseAlphaMask(float alpha_mask_cutoff, integer face);
* llSetGLTFBaseDoubleSided(integer double_sided, integer face);

---

For PRIM_GLTF_NORMAL (this really only has one function):

* llSetGLTFNormal(string texture, vector repeats, vector offsets, float rotation_in_radians, integer face);

---

For PRIM_GLTF_METALLIC_ROUGHNESS:

* llSetGLTFMetalRoughTexture(string texture, vector repeats, vector offsets, float rotation_in_radians, integer face);
* llSetGLTFMetallicFactor(float metallic_factor, integer face);
* llSetGLTFRoughnessFactor(float roughness_factor, integer face);

---

For PRIM_GLTF_EMISSIVE:

* llSetGLTFEmissiveTexture(string texture, vector repeats, vector offsets, float rotation_in_radians, integer face);
* llSetGLTFEmissiveTint(vector emissive_tint, integer face);
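For illustration, here is a minimal sketch of the read-modify-write pattern the post describes, using the existing PRIM_GLTF_BASE_COLOR parameters. The ordering of the returned list follows my reading of the current docs and should be verified against the wiki; getting it wrong is precisely the hazard the post complains about.

```
// Existing-LSL workaround: to change only the base color tint, the
// whole PRIM_GLTF_BASE_COLOR list must be read, patched, and rewritten.
setBaseColor(integer face, vector tint)
{
    // Assumed return order (verify against the wiki): texture,
    // repeats, offsets, rotation, color, alpha, gltf_alpha_mode,
    // alpha_mask_cutoff, double_sided.
    list p = llGetPrimitiveParams([PRIM_GLTF_BASE_COLOR, face]);

    // Patch only the color entry (index 4), keeping everything else.
    p = llListReplaceList(p, [tint], 4, 4);

    // Write the full list back; omitting or misordering any value
    // would overwrite it with bad data.
    llSetPrimitiveParams([PRIM_GLTF_BASE_COLOR, face] + p);
}

default
{
    state_entry()
    {
        setBaseColor(0, <1.0, 0.0, 0.0>); // tint face 0 red
    }
}
```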
3 · tracked