Viewer Bug Reports

• Use concise, precise descriptions
• Do not include sensitive information.
• Create a support ticket at https://support.secondlife.com for individual account issues or sensitive information.
Improve PBR viewer performance on Apple Silicon
PBR viewer performance on Apple Silicon—even on high-end Macs—is objectively abysmal. The viewer often drops below 1 fps—making it impossible to type, move, edit or select objects, adjust the camera, etc.—even on low or the absolute lowest graphics settings. Scenes need not be very complicated for this to happen, but the more complex a scene is, the more likely users will encounter an unusable scenario.

Steps to reproduce:
• Run a PBR-enabled LL viewer on Apple Silicon
• Set graphics to LOW
• For giggles, go to a Yavascript pod station and see the results of trying to ride a pod on mainland roads. Here's one: https://maps.secondlife.com/secondlife/Monowai/8/146/71 . Sit on an available pod, then click it. It will depart in one minute.
• Observe the world on LOW

RESULT: On LOW, on an M2 Max with a 12-core CPU and 38-core GPU (hardly a "potato"):
• FPS in World > Improve Graphics Speed peaks at 12 fps
• The entire viewer visibly freezes once a second, every second
• Any text input (chat, inventory filter, etc.) is delayed so badly that characters only appear many seconds after they have been typed
• Opening windows (map, inventory, World > Improve Graphics Speed, anything) takes several seconds
• At no point are any of the Mac's efficiency or performance CPU cores straining. At all. The GPU, however, is pegged.

I can easily direct folks to places where FPS will drop below 1, even if the viewer is somehow reporting 50+ fps. Here, try one: http://maps.secondlife.com/secondlife/Green/37/184/23 . For extra fun, set your graphics to, say, one notch above LOW and try to, I dunno. Walk. Type. Move your camera.

I've tried these experiments on several machines, including stock configurations. Results are consistent. LL seems to have stopped permitting logins from its pre-PBR viewers, so at this point it is effectively impossible for me to do my work or attend meetings/events in SL on an LL viewer. I doubt I am alone.
4 · tracked

Mesh Uploader does not match mesh upload requirements
Steps to reproduce:
• Create a highest-LOD mesh with two materials.
• Create a lower-LOD mesh using one of the materials included in the highest-LOD mesh.
• Attempt to upload.

What happens: Error: "Levels of detail have a different number of textureable faces."

The mesh verification step has a bug, and an error message to go with it, and it prevents the upload of mesh that the server would have accepted as valid. While this error message states a true fact, it is also irrelevant to how SL works; it's a clearly worded explanation of a non-existent problem. Second Life's server does accept lower-LOD mesh files with a smaller number of texturable faces, as long as all their materials are a subset of the highest LOD's. This has ALWAYS been the case, and it's a highly desirable feature (good of LL to have made it possible). I have read varied accounts of how this faulty mesh check made its way into the main viewer, but the point is, it's a breaking error that shouldn't be in there. It prevents a designed and fully working feature.

Firestorm Viewer realized this had happened a few years back and has corrected the issue, with appropriate checks and errors: FS prevents upload if the materials are not a subset of the material list in the highest LOD, as expected. FS also prevents uploading mesh with materials that are not assigned to any faces, which can happen accidentally when we use materials correctly but decimate/optimize too much and wind up with a material slot assigned to zero faces. In Firestorm, these conditions correctly prevent uploading in cases where SL servers would not accept the file. Firestorm does not throw an error if the lower-LOD model uses a subset of the highest LOD's materials and has at least one face assigned to each material. Because of this, mesh like I described in the two steps at the top uploads just fine and displays correctly on all viewers.
This is a very significant issue, but it is not well known, and so we've been accepting SL "as-is". But this bug in the mesh model checking is actually causing models to have significantly more LI weight than they should. It's also causing viewers to try to display more textures than they need, significantly more often than they should.

Consider an object with just one "link" and 8 material slots in use. The bugged mesh validation check requires our lowest-LOD model to include all 8 materials--and they all must be assigned to a polygon--so we must have at least 8 triangles, 3 verts each, all texture islands, for a vertex count of 24. Since Firestorm has removed the erroneous condition, it is possible there to upload a single 3-vertex triangle using a single material from the highest-LOD model. This is a significant LI calculation factor.

Consider a specific use case: a detailed bit of signage, made 3D, but at a distance indistinguishable from a simple 2D image. In Firestorm, I can include a material slot at the highest LOD with a flat picture of the sign and assign it to a small triangle hidden among the mesh. At the lowest LOD, I can use a simple 2-triangle rectangle with that material assigned, making a sign that is perfectly readable from 2 regions away while using almost nothing. In all other viewers, I'm required to include every texture on a separate polygon, so I wind up with a blob of triangles and smeared textures (and all the LI that goes with the higher vert count) for no reason, and everyone's viewers have to fetch and draw all those textures even from a great distance. Again, no reason--the ability to use only a subset of materials at a lower LOD already exists.

Please fix this erroneous rejection; uploading mesh with a subset of faces works just fine and has for years, but only in the one viewer that has corrected this bug.
6 · tracked

Texture memory calculation is a big, inconsistent mess
As the title says, the texture memory calculation currently is a big, inconsistent mess:

LLViewerTexture::isMemoryForTextureLow() and LLViewerTexture::getGPUMemoryForTextures(): The former was used in LLViewerTexture::updateClass() for a more aggressive memory reduction if "texture memory" got low before the PBR release. Now it is only used in the Lag Meter floater. The latter function is only used in LLViewerTexture::isMemoryForTextureLow(). LLViewerTexture::getGPUMemoryForTextures() itself uses LLWindow::getAvailableVRAMMegabytes() to determine the amount of available memory on the GPU.

LLWindow::getAvailableVRAMMegabytes(): This method is platform dependent. In short, it is only more or less correct on Windows, while on macOS it's basically a wild guess. "Wild" is actually a good description of what is going on for either platform.

Let's talk about Windows first: The total amount of VRAM on the GPU is queried via DirectX - and might just override what was initially detected by querying WMI. This query also returns the amount of memory used. If this amount is not returned for some reason, it is estimated as: LLImageGL::getTextureBytesAllocated() * 2 / 1024 / 1024. Remember this - it will be important later. Then a reserve of the total VRAM that should remain available to other processes is calculated. This is 2GB if the total VRAM of the GPU is more than 4GB, and half of the VRAM in any other case - remember this as well. The total available VRAM is then calculated as: total VRAM - reserve - VRAM used.

Now let's switch to macOS: On macOS, the available VRAM is only estimated, based on the total VRAM and the VRAM already used. The latter is calculated - as already seen for Windows - as: LLImageGL::getTextureBytesAllocated() * 2 / 1024 / 1024.
Apparently a reserve is not necessary on macOS, and the available VRAM is calculated as: total VRAM - VRAM used.

Since we have now more or less scientifically determined how much VRAM is currently still available, we would assume the result of LLWindow::getAvailableVRAMMegabytes() is used in the texture pipeline to determine the discard level of the textures the viewer is displaying, correct? WRONG! Apart from the initially mentioned use in LLViewerTexture::getGPUMemoryForTextures(), LLWindow::getAvailableVRAMMegabytes() is only used in one other location: the texture console, for informational purposes.

But what else is used in the texture pipeline to determine the discard level of textures? Well, this brings us to LLViewerTexture::updateClass(): Here we see some more magic happen. First, the total amount of VRAM used by the viewer is calculated/estimated as the sum of LLImageGL::getTextureBytesAllocated() / 1024.0 / 512.0 - which is the same as LLImageGL::getTextureBytesAllocated() * 2 / 1024 / 1024 as seen before - and LLVertexBuffer::getBytesAllocated() / 1024.0 / 512.0. The total amount of VRAM of the GPU is also needed. Of course there should also be some reserve for other applications, so they are granted a reserve of 512MB - if you remember what we found out earlier, this reserve is completely different from the reserve calculated earlier - if there was a reserve at all. Anyway... no matter what, the viewer always claims at least 768MB for itself, which leads to a minimum total VRAM of 768MB. Based on that calculated total VRAM and the estimated VRAM usage of the viewer, the over-usage percentage is calculated to determine by how much the discard level of the textures has to be increased. You might have noticed that here, suddenly, the memory used by vertex buffers is also taken into account, while previously it was not.

Honorable mention: the RenderMaxVRAMBudget debug setting: This setting overrides the total VRAM reported and can be used to cap the amount of VRAM the viewer is actually using.
The setting description says that a restart is required. However, a restart is only required to get a "correct" display in the Lag Meter floater and the texture console, since its value gets passed into LLWindowManager::createWindow() at startup and affects the result of LLWindow::getAvailableVRAMMegabytes() - and that only on Windows, since it is not part of the macOS implementation. For the texture discard level calculation, however, it takes effect instantly, because it is taken into account in LLViewerTexture::updateClass().

So, here's a list of questions:
• What is the point of LLViewerTexture::isMemoryForTextureLow(), LLViewerTexture::getGPUMemoryForTextures() and LLWindow::getAvailableVRAMMegabytes(), if their sole purpose is just informational?
• Why is the displayed data fundamentally different from the data the viewer's decisions are based on?
• If the displayed data should actually reflect what the viewer is really doing, how about having one calculation that is used throughout the viewer?
• Why is this a total mess?
2 · tracked
