Web/Marketplace Features

Marketplace and SL websites
  • Search existing ideas before submitting
  • Use support.secondlife.com for customer support issues
  • Keep posts on-topic
Thank you for your ideas!
Disable the Marketplace review system until Linden Lab can provide timely moderation
When a seller believes that a review violates the guidelines, they can flag it for moderation. Linden Lab then checks the review and removes it if the violation is confirmed. Until last year, this process worked reliably and within reasonable delays.

However, over the past few months, customer support response times have grown dramatically. This slowdown directly affects Marketplace review moderation, leading to delays that no longer make sense. Based on my experience, the average wait time is now around three months. As a result, insulting, defamatory, misleading, or completely off-topic reviews — often a combination of all three — can remain publicly visible on a product for months before any action is taken.

For many merchants, Marketplace reviews are already a significant source of stress. But with the current moderation delays, the system has become deeply unbalanced: sellers are exposed for months to harmful comments that should normally be removed quickly, while having no actionable way to protect their brand or business during that time. The review system is essential, but only when it is supported by timely and effective moderation. In its current state, it no longer fulfills its purpose and creates unnecessary harm for merchants who rely on the Marketplace for their income.

Given the current situation, the most reasonable solution is straightforward: until moderation can operate again within acceptable timeframes, the review system should be temporarily disabled entirely. This is the only way to prevent lasting, unjustified damage to merchants while ensuring the integrity of the Marketplace.
Please adopt the C2PA spec. It's a significant step to ensure transparency in genAI
I hardly need to mention that there are a lot of feelings about AI-generated content, and LL has already heard quite a lot of feedback about it from residents. I'd like to suggest a thing we can do about it!

Let me first admit that no solution is perfect, and that an AI-using creator could compile, train, and operate AI independently without the tech I mention below. But the 99% use case is that people generate content using ready-made solutions, and an enormous number of these AI-generation businesses have committed to a measure of transparency and accountability that I want to mention. If a person generates AI content — marketplace images, for example — using Midjourney, OpenAI (ChatGPT), NanoBanana (Google Gemini), Firefly (Adobe), Runway AI, or any of the myriad next-tier platforms implementing Stable Diffusion 3, then ALL of these tools automatically embed a content credential into the work. Cropping, resizing, or trivial modifications to the work do not alter the credential. Only significant human-made alterations to the content of the image result in the credential being changed.

The presence of a content credential does not, I should note, mean "made with AI". It means "there's a content credential here." It's important because all major genAI tools do generate a credential that describes the origin of the work as a genAI tool. The content credential carries a modification history with it. This means that even the creator of an original, handmade work could embed a credential, and if their work were taken and modified by AI, then the credential would list out both versions. Again, all major genAI tools use this. The credentials are already there today.

The credentials are not a value judgement, and they are not an ethical judgement. They are already within the AI-generated content, and they help us see what was done with a piece of content. The customer/end user makes the value judgement for themselves. LL has seen that customers want to know.
The information is already within the file. The platform just needs to expose it. A simple "CR" tag (using the established logo from the C2PA standard) on the corner of any image on the Marketplace, with a hover tip explaining the content credential, would go a long way. Further, adding a property for texture assets generated for use within SL itself would be a nice next step (but less trivial to implement).

For more info about the industry-wide standard C2PA (Coalition for Content Provenance and Authenticity): https://c2pa.org/ The specification is available here: https://spec.c2pa.org/specifications/specifications/2.1/specs/C2PA_Specification.html

This isn't a fun, easy feature, but it feels like an important one.
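To give a sense of how lightweight a presence check could be: per the C2PA spec, a manifest embedded in a JPEG lives in APP11 marker segments as a JUMBF box labeled "c2pa". The sketch below is a heuristic illustration only, not part of any existing LL code — it scans JPEG marker segments for that label, and it does not validate signatures; a real deployment should verify manifests with the official c2pa SDK or c2patool.

```python
def jpeg_segments(data: bytes):
    """Yield (marker, payload) pairs for each JPEG header segment."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # entropy-coded image data reached; stop parsing headers
        marker = data[i + 1]
        if marker == 0xD9:  # EOI
            break
        if 0xD0 <= marker <= 0xD7 or marker == 0x01:  # standalone markers
            i += 2
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes the 2 length bytes
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def has_c2pa_manifest(data: bytes) -> bool:
    """Heuristic: True if any APP11 (0xEB) segment appears to carry a
    C2PA-labeled JUMBF box. Presence only -- no signature validation."""
    return any(marker == 0xEB and b"c2pa" in payload
               for marker, payload in jpeg_segments(data))
```

A check like this could run once at listing-image upload time to set the "CR" badge flag; PNG, WebP, and other containers store the manifest differently, so each format needs its own (equally small) parser, or simply the official SDK.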
Add marketplace policy regarding AI-generated content
So going into this, I want to disclaim that I myself am not against AI. I personally have used it before, and will do so in the future. A specific chat AI helps me regularly with solving problems. However, I am against people intentionally misleading others about content, and polluting search results with content made in mere seconds; I find it no different than spam. If I made something that randomly generated randomly colored squares on a white canvas, and listed it a dozen or so times, it'd be seen as spam. Additionally, this issue was brought up in the SL Discord marketplace chat. I've moved it here so that it can get more recognition and visibility to LL.

Personally, I believe that a policy worded like so would benefit everyone in the long run:

Content that is the sole generation of Artificial Intelligence, Machine Learning, or otherwise Generative Content, except in cases where the generator was written for the sole purpose of generating a specific element, may not be listed on the Marketplace as-is or as the focal point of the content that is listed.

AI must never be used to generate listing images; listing images must present the content as it appears in SL. Overlays, such as permission information, price, branding labels, or sales labels, are permitted so long as they do not misrepresent the content being advertised.

Examples of forbidden listings:
* Singly or in a collection, AI-generated content as a texture, image, mesh, sound, or other type of asset, as the sole content being listed.
* An AI-generated image in a picture frame.
* A museum prefab filled with AI-generated images as a "ready to go" model.
* An object that plays randomly generated AI voice clips.
* Attempting to pass off AI-generated content as original, handcrafted content, or otherwise misleading people about its origin.
* A listing which shows an AI-enhanced product preview.

Examples of allowed listings:
* A house that has an AI-generated poster in it, where the poster is not the selling point of the house.
* A museum model that has some AI-generated images, or is focused on the topic of AI-generated content.
* An NPC that, among other features, plays AI-generated voice clips.
* A model that has AI-generated textures, where the model itself is not AI-generated.

Listings that make use of AI for 50% or more of their creation must be clearly labeled as using AI generation or assistance in the listing description. Interpretation of what counts as "50% or more" is left solely to Linden Lab. Listings that use AI but fall under the 50% threshold are recommended, but not required, to disclose that AI was used in some capacity.

In summary, this forbids listing content on the Marketplace, free or for sale, that is entirely AI generated; requires disclosure of AI-generated content when AI does a majority of the work; and guards against some loopholes. This probably isn't needed (I feel it's implied, but just in case, since this will likely get passed by the lawyers): I hereby grant all permission to Linden Lab to use or transform the above work in any way, without attribution or compensation. Linden Lab has the opportunity to deal with this now before it gets too out of control. Only a handful of residents would currently be in violation of this policy. This also only applies to the Marketplace, not in-world content.