Hi, I would like to propose two ideas for improving the Texture Fast Cache system. The first is a simpler design, while the second is a little more elaborate.

Existing System

The Texture Fast Cache currently works by saving a 16x16 (or smaller) texture to a 16x16 buffer, along with a header describing the source texture:

- Width
- Height
- Number of components (RGB = 3, RGBA = 4)
- Discard level of the scaled-down image

Each entry is stored in a fixed 1,028-byte buffer. The raw image data is written to the entry when a texture is loaded after the first time it was fetched, as part of the texture fetch system. The Fast Cache has a fixed capacity of 1024x1024 entries, but the Fast Cache file starts empty and grows by appending each new entry to the end of the file.

The data is read back when an existing texture is loaded again, either on first login and load from a region, or after a region unloads, the textures are purged from memory, and they are requested again from cache. The system creates a brand new texture for the 16x16 image, then loads the actual target texture at the desired discard level. The file is protected by various locks.

Proposal #1

I propose that we make the file a fixed size: pre-allocate all 1024x1024 entries and write a zeroed-out file to disk. There are then two options we could offer the user.

Disk-Based (Memory-Mapped) Texture Fast Cache

Create a memory-mapped pointer to the file and assign the mapped data directly to the LLRawImage, so the texture data does not have to be loaded into a separate allocation and freed again afterward. This would save on disk IO, reduce the amount of RAM being used, and eliminate the data copy needed to load the 16x16 raw image. Writes to the file can be done directly by mapping the next entry to its fixed offset and writing the data to memory.
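To make the fixed-offset idea concrete, here is a minimal sketch of mapping a pre-allocated cache file and computing an entry's position. The names (entryOffset, mapFastCache) and the exact entry layout are my own assumptions for illustration, not the viewer's actual API; the entry size simply mirrors the 1,028-byte figure above, and POSIX mmap stands in for whatever LLAPR would expose.

```cpp
#include <cstddef>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Hypothetical layout: each entry is a small header followed by a 16x16
// payload, padded to a fixed ENTRY_SIZE so entry N sits at N * ENTRY_SIZE.
constexpr std::size_t ENTRY_SIZE  = 1028;          // header + 16x16 pixel data
constexpr std::size_t NUM_ENTRIES = 1024 * 1024;   // fixed capacity
constexpr std::size_t FILE_SIZE   = ENTRY_SIZE * NUM_ENTRIES;

// Byte offset of an entry; with a fixed-size file this never changes,
// so no append bookkeeping is needed.
constexpr std::size_t entryOffset(std::size_t id) { return id * ENTRY_SIZE; }

// Map the pre-allocated cache file and return a pointer to its base. The
// caller can hand (base + entryOffset(id)) to the raw-image object directly
// instead of copying the bytes into a freshly allocated buffer.
inline std::uint8_t* mapFastCache(const char* path)
{
    int fd = ::open(path, O_RDWR);
    if (fd < 0) return nullptr;
    void* base = ::mmap(nullptr, FILE_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    ::close(fd);  // the mapping keeps its own reference to the file
    return (base == MAP_FAILED) ? nullptr : static_cast<std::uint8_t*>(base);
}
```

Note that FILE_SIZE works out to 1028 * 1024 * 1024 bytes, which is the "little more than 1 GB" mentioned below; writes through the mapping are flushed by the OS, and msync() can force a flush if needed.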
The underlying system will handle writing the memory to the file system. The file size would be a little more than 1 GB on disk.

Memory-Based Texture Fast Cache

Load the entire fast cache file into memory and give direct access to the fixed-size data. A background process could write the data back to the fast cache file either on a set interval or after a certain number of updates. This would bypass disk IO and the virtual-memory-to-disk writes happening in the background, but comes at the cost of the system RAM needed to keep the fast cache resident. This would reduce the latency of the Fast Cache. It would take up around 1 GB of system memory as well as 1 GB of disk space.

The first choice would require an updated LLAPR file which supports memory maps (already submitted as a PR) and possibly 64-bit files.

Proposal #2

This would be similar to Proposal #1, with both a memory-mapped version and a memory-based version. But in this case, instead of writing a throw-away 16x16 image that has to go through the same initialization process as the actual texture it needs to load, I propose that we save the MAX_DISCARD_LEVEL texture to a Fast Cache Body file and store the header for the fast cache in a Fast Cache Header file. The header would still store the same information, but in addition it would store an offset into the Fast Cache Body from which to load the texture data. This way, the lowest tier of the texture cache would be in ready-to-read format, and could be memory-mapped upon creation of the LLRawImage, saving memory copies and JPEG2000 decoding while having the needed information for the texture ready to go. This would also prevent VRAM fragmentation, as we would not be allocating and de-allocating 16x16 images all the time.
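A rough sketch of what the split header/body records for Proposal #2 might look like. The struct and field names here are hypothetical, not existing viewer types; the point is that the header keeps the same fields the fast cache stores today, plus a byte offset into the separate body file.

```cpp
#include <cstdint>

// Hypothetical on-disk header record for Proposal #2: the same fields the
// fast cache stores today, plus a byte offset into the separate body file
// where the MAX_DISCARD_LEVEL pixel data for this texture begins.
struct FastCacheHeaderEntry
{
    std::int32_t  width;          // source texture width
    std::int32_t  height;         // source texture height
    std::int32_t  components;     // 3 = RGB, 4 = RGBA
    std::int32_t  discardLevel;   // discard level of the scaled-down image
    std::uint64_t bodyOffset;     // where this entry's pixels live in the body file
};

// Size in bytes of one body entry, derived from the stored dimensions, so
// the reader knows how many bytes of the body file belong to this texture.
constexpr std::uint64_t bodyEntryBytes(std::int32_t w, std::int32_t h,
                                       std::int32_t c)
{
    return static_cast<std::uint64_t>(w) * h * c;
}
```

Appending a new texture would then mean writing its pixels at the current end of the body file and recording that position in bodyOffset.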
The memory version of the same design would have two allocated buffers: the larger fixed-size buffer read from the Fast Cache on disk, and a second buffer where new data is stored in a vector that can grow over time. When the system logs out, the vector data would be committed to the Texture Fast Cache body and header files.

This is partly because each texture may have different MAX_DISCARD_LEVEL dimensions. At discard = 0, a 2Kx2K texture would need a 64x64-pixel body entry, a 1Kx1K texture would need 32x32, and 512x512 and smaller would need a 16x16 image. Mixing these sizes is why the header needs a stored offset to do the calculation.

One option is to have a separate fast cache file per MAX_DISCARD_LEVEL texture size (64x64, 32x32, and 16x16) and store those in three different bodies that can be referenced by the same header. Another option is to use a fixed 64x64-pixel entry size, with the header using 64x64 offsets for the entries; some smaller entries could then be grouped together into an atlas-like image to better utilize the space.
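The per-size bucketing above can be sketched as a small helper. The 2K -> 64, 1K -> 32, 512 -> 16 mapping implies each discard level halves the texture, so I assume MAX_DISCARD_LEVEL = 5 and a 16-pixel floor for 512x512 and smaller sources; both the constant and the function name are illustrative, not existing viewer code.

```cpp
#include <algorithm>

// Assumption: each discard level halves the texture, so the smallest stored
// mip has edge = sourceEdge >> MAX_DISCARD_LEVEL (2048 -> 64, 1024 -> 32,
// 512 -> 16), clamped to the 16x16 floor described for 512x512 and smaller
// sources. The result picks which fixed-size body file an entry belongs to.
constexpr int MAX_DISCARD_LEVEL = 5;  // assumed; matches the 2K -> 64 mapping

constexpr int fastCacheEdge(int sourceEdge)
{
    return std::max(sourceEdge >> MAX_DISCARD_LEVEL, 16);
}
```

With three body files, fastCacheEdge(width) would select the 64, 32, or 16 bucket; with the single fixed 64x64 entry size, it instead tells you how many small images could share one atlas-style entry.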