✨ Feature Requests

  • Search existing ideas before submitting
  • Use support.secondlife.com for customer support issues
  • Keep posts on-topic

Thank you for your ideas!
Let users downscale materials.
Okay, this is going to be weird; hear me out.

Purpose: Reduce texture load times, VRAM use, and therefore lag.

How: For every texture, have an alternate version exist (or be created on first demand) under a uuid-xxxx identifier, where the texture is scaled down and saved to the asset servers. The suffix is the width of the scaled image, so one texture might have uuid, uuid-512, and uuid-256 variants.

Execution: In the build interface, users would see a list of the textures on a face, with a drop-down menu to select the size of each one.

Why: Not every texture in the materials on a face needs to be maximum size. If you have a mesh item with both Blinn-Phong and PBR materials, a user might have to download up to 6 textures per face (maybe 7 if they use different normal maps), for up to 8 faces per object component. That's 56 possible textures. Depending on how small each face is and whether you can get away with a lower-resolution texture there, you could potentially take 40 or more of them and reduce them in size so each takes 1/4 as many resources. You'd still have up to 56 textures on an 8-face object, but at the memory cost of 26.

Currently this kind of optimization can only happen if you are the creator, if you bought full-perm texture assets and export, scale down, and re-import them, or if you do crime.

How would creators access these auto-scaled textures? Tack the size number onto the end of the UUID, or open the full-perm texture and select "Get scaled-down copies."

Anyhow, thanks for reading.
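The memory arithmetic in the post can be checked with a quick back-of-envelope sketch. All numbers here are the post's own hypothetical worst case (8 faces, 7 texture channels, 40 downscalable textures), not actual Second Life limits:

```python
# Back-of-envelope math for the savings described above.
# Numbers are the post's hypothetical worst case, not real SL limits.

FACES = 8               # faces per object component
TEXTURES_PER_FACE = 7   # Blinn-Phong + PBR channels, incl. separate normals

total = FACES * TEXTURES_PER_FACE   # 56 possible textures
downscaled = 40                     # textures small faces could shrink

# Halving width and height leaves 1/4 of the pixels, hence ~1/4 the memory.
full_cost = total - downscaled      # 16 textures kept at full resolution
reduced_cost = downscaled * 0.25    # 40 textures at quarter cost

effective = full_cost + reduced_cost
print(total, effective)  # 56 textures at the memory cost of 26.0
```

This matches the post's claim: 56 actual textures, but the VRAM footprint of 26 full-size ones.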
0
·
Performance
Request: Script crash logging
"basically a webhook for script errors" -- Nexii Malthus, summarizing what we need

You develop scripts for Second Life. You start hearing reports that your scripts are crashing. Now what? Unless it's a very reliable crash, you're in unexplored territory. SL is vast; many unique and hard-to-reproduce situations could be happening. The crash log is seen by your end user, not by you. The developer cannot know how widespread the problem is, whether it's a bug in an SL server update, or whether it's an edge case in their own programming.

Possible avenue to a solution: An unmodified, compiled script has consistent bytecode and, if I understand it right, a corresponding internal asset ID. The simulator knows when this item has crashed (currently it only generates a message to the owner on a message channel). Perhaps a developer could "flag" a script for logging, or flag the whole set of scripts for a project that is in the wild and reporting problems. The developer could then view (on the web? in the viewer?) a simple log aggregating the crash reports from the currently flagged scripts, grid-wide, perhaps with some information like the simulator version, memory use, or event queue at the time of the crash. The simulator knows all sorts of things about what happened; any log improves the current total-black-box situation. E.g., "hearing that scripts are crashing on region restart? Start logging now and see what comes in."

This would be of limited utility by design: the number of scripts that can be flagged, and the number of log entries each script holds, would be small. This would not be a performance-monitoring tool; it would be turned on when a problem is suspected, to capture what's happening "out there."

Acknowledged implications: Plenty of privacy concerns abound. To avoid adding new means of tracking users, gaining access to personalized data, etc.:
  • The developer should not be able to find out what specific region the script was in when it crashed
  • The developer should not be able to view the script's memory
  • The developer should not be able to view who was running the script

Might be useful:
  • What type of crash
  • List of events queued
  • Simulator version number
  • Region status

Anything else that could be useful without opening bad implications? Any other implications that need calling out?
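To make the privacy constraints concrete, here is a minimal sketch of what one aggregated crash-log record might look like. Every field name is hypothetical (this is not an actual Linden Lab API); the point is which information is included and which is deliberately absent:

```python
# Hypothetical shape of one grid-wide crash-log record, following the
# privacy constraints in the post: no region name, no memory dump,
# no user identity. All field names are illustrative only.

crash_record = {
    "script_asset_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  # flagged script
    "crash_type": "stack_heap_collision",    # what type of crash
    "queued_events": ["timer", "listen"],    # events queued at crash time
    "simulator_version": "2025.03.1234",     # simulator version number
    "region_status": "restarting",           # region status, but not which region
}

# Deliberately absent: anything that could track users or leak script internals.
forbidden = {"region_name", "memory_dump", "owner_key", "user_key"}
assert forbidden.isdisjoint(crash_record)
```

A record shaped like this would let a developer answer "how widespread is this, and does it correlate with a simulator version or region restarts?" without gaining any new means of tracking users.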
1
·
Performance
Vulkan Support – Future-Proofing Second Life for Better Performance & Graphics
🔴 Summary 🔴
Second Life has come a long way, but OpenGL is becoming outdated. To ensure SL remains visually competitive and runs smoothly on modern hardware, I propose that Linden Lab begin development on Vulkan support as a long-term goal. This transition would greatly improve performance, reduce crashes, and allow SL to take full advantage of modern GPUs.

🟡 Why Vulkan? 🟡
✅ Better FPS & Performance – Vulkan is optimized for multi-core CPUs and modern GPUs, meaning higher frame rates and less lag in complex environments.
✅ More Stability & Fewer Crashes – Vulkan manages memory more efficiently than OpenGL, reducing viewer crashes and graphical glitches.
✅ Future-Proofing Second Life – OpenGL’s development has slowed, while Vulkan is the industry standard for new and upcoming graphics engines.
✅ Improved Graphics Potential – Vulkan supports advanced rendering features that could enhance lighting, shadows, reflections, and materials in SL.

🟢 How This Transition Could Work Smoothly 🟢
Instead of a sudden shift, I suggest a gradual development plan (2025-2030):
1️⃣ 2025-2026: Linden Lab researches Vulkan feasibility and starts experimental development.
2️⃣ 2027-2028: An optional Vulkan beta mode is introduced for testing and optimization, running alongside OpenGL.
3️⃣ 2029-2030: Vulkan becomes the default renderer, with OpenGL as a fallback for older systems.
4️⃣ Community Engagement: Regular updates from Linden Lab on progress, plus support for third-party viewers adapting to Vulkan.

🔵 Why Start Now? 🔵
Even though this transition will take years, starting early ensures SL stays ahead rather than falling behind other virtual worlds. A well-planned Vulkan integration could attract new users while making SL smoother for current residents.

If you agree, please upvote and share your thoughts in the comments! Let’s show Linden Lab that the community is ready for a modern and optimized Second Life! 👍 💬
13
·
Performance
·
tracked