Character Designer

Share feedback/ideas/bugs for Character Designer—a tool to customize companions and NPCs in SL. Be concise, search existing ideas, avoid sensitive info. For account issues visit support.secondlife.com
Feature Request: Efficient Chat Bundling in High-Traffic Environments for LLM API Calls
Summary: In high-traffic environments with multiple users (such as clubs or group activities), the current system sends a separate LLM API call for each chat message. This results in excessive API usage, increased operational costs, and less natural AI character behavior. I propose a dynamic chat-bundling mechanism that aggregates chat messages before sending them to the LLM, based on both time intervals and message-volume thresholds. This would optimize resource usage, reduce costs, and improve the realism of AI character interactions.

Problem Statement:
- Resource Waste: Rapid, individual API calls for every chat message in group settings cause unnecessary API usage and increased costs.
- Unnatural AI Responses: AI characters responding to each individual message can disrupt conversation flow and detract from the user experience.
- Scalability Issues: As the number of users in an SL region grows, the current behavior does not scale efficiently.

Proposed Solution: Implement a smart chat-bundling system that operates as follows:
- Dynamic Intervals and Thresholds: Aggregate chat messages over a set interval (e.g., 30, 45, or 60 seconds) or until a minimum message threshold is reached (e.g., 3, 5, or 10 messages), whichever comes first, or until a contextual trigger fires.
- User Count Awareness: The bundling strategy adapts to the number of users in open chat:
  - 3–8 users: send the bundle every 30 seconds or after 3 messages.
  - 9–14 users: every 45 seconds or after 5 messages.
  - 15+ users: every 60 seconds or after 10 messages.
  (If nobody has written anything during an interval, an empty bundle could still be sent to the LLM, giving the AI character a chance to boost interaction by initiating chat, which could be very useful for AI characters in a club setting.)
- Owner and Direct Mention Override: If the AI character's owner speaks, or someone mentions the AI character by name, immediately forward the current message bundle to the LLM to ensure a timely, context-appropriate response.
- Automatic Mode Switching: If other people leave and only the owner remains present, revert to immediate/individual messaging mode for responsiveness.

Technical Considerations:
- User Presence Detection: The AI character would need a reliable method for counting active users within open chat range.
- Configurable Parameters: Thresholds and intervals should be easily adjustable (perhaps even user-adjustable within a certain range) and, ideally, dynamic based on real-time chat activity.
- Contextual Triggers: Monitoring for owner speech or direct name mentions should take precedence to maintain engagement and responsiveness.

Expected Benefits:
- Cost Reduction: Significantly fewer LLM API calls, lowering operational expenses.
- Improved User Experience: More natural, context-aware AI responses that mirror real user behavior.
- Scalability: The AI Character system becomes more robust in high-traffic environments, ensuring sustainable performance as user interaction in a region grows.

Summary: By batching chat messages and dynamically adjusting the sending strategy based on group size and activity, LL could deliver a more authentic and cost-effective AI character experience. This approach would align with both user expectations and business goals.
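To make the proposed triggers concrete, here is a minimal Python sketch of the bundling logic: the user-count tier table, the timed/threshold flush, and the owner/name-mention override. All class, method, and parameter names are my own illustrations, not part of any existing Second Life or Character Designer API.

```python
import time


class ChatBundler:
    """Illustrative sketch of the proposed bundling behavior.

    Flushes the pending bundle when the interval elapses, the message
    threshold is hit, the owner speaks, or the character is named.
    With only the owner present it falls back to immediate mode.
    """

    def __init__(self, owner: str, character_name: str, clock=time.monotonic):
        self.owner = owner
        self.character_name = character_name.lower()
        self.clock = clock
        self.pending: list[tuple[str, str]] = []
        self.window_start = clock()

    def policy(self, user_count: int) -> tuple[int, int]:
        """Return (interval_seconds, message_threshold) for a user count."""
        if user_count >= 15:
            return 60, 10
        if user_count >= 9:
            return 45, 5
        return 30, 3  # 3-8 users

    def add(self, speaker: str, text: str, user_count: int):
        """Queue a message; return the flushed bundle if a trigger fired."""
        self.pending.append((speaker, text))
        # Owner speech or a direct name mention overrides the timers.
        if speaker == self.owner or self.character_name in text.lower():
            return self.flush()
        # Only the owner left in range: immediate/individual mode.
        if user_count <= 1:
            return self.flush()
        interval, threshold = self.policy(user_count)
        if len(self.pending) >= threshold or self.clock() - self.window_start >= interval:
            return self.flush()
        return None

    def flush(self):
        bundle, self.pending = self.pending, []
        self.window_start = self.clock()
        return bundle
```

Used this way, a burst of ordinary chatter in a 10-user room would queue up until the 45-second/5-message policy fires, while a message naming the character flushes the bundle at once.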
[BUG] Critical Message Loss in AI Character System Breaks Conversational Flow
During a roleplay session yesterday with a group of about 12 human users, I noticed that my AI Character "Etoile" often failed to respond to direct questions, and when she did, her replies were frequently off-topic or disconnected from the broader conversation. To investigate, I conducted a comparative analysis of two chat transcripts: the in-world chat log from my Second Life viewer and the AI Character website's version of the conversation ( https://characters.secondlife.com ).

This analysis revealed that of 201 chat entries made over 27 minutes, 22.4% of the user-generated entries recorded in the Second Life viewer were missing from the AI Character's chat log on the website. More critically, 6 out of 11 messages in which users directly addressed "Etoile" by name were also absent: more than half! In all 6 cases where these direct-address messages were missing, the AI Character had also failed to respond entirely, indicating that the input was neither logged nor processed by the AI.

This points to a serious issue: user messages are getting lost in transmission, not just from the record but from the AI's perception. As a result, the AI cannot respond to direct prompts and loses key context from the ongoing conversation, which severely affects her coherence and ability to engage meaningfully with users.

More information: These are the statements directed at Etoile that got lost. Some of these chat messages were from me, some from other users:

[2025/06/14 14:28] etoile, do you do apple cores?
[2025/06/14 14:31] Etoile Maybe I forget it or she doesn't even prescribe it
[2025/06/14 14:32] etoile can sing tea song
[2025/06/14 14:34] etoile We play tea
[2025/06/14 14:37] etoile, Soc is ZOG only for you, not for everyone else.
[2025/06/14 14:39] Etoile How can I order vegan food I'm vegan

Sometimes the AI Character website's chat log also merges the contributions of several users into one line (these messages were logged, and therefore processed by the system, so this seems to be a separate bug):

User: cheetah Resident: were trying to look respectabiblee gap Resident: Oi waiter, a donut please!

(names altered for privacy)
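For anyone who wants to reproduce this kind of comparison, here is a rough sketch of the check I ran, assuming both logs are plain-text files with "[timestamp] message" lines. The function name and the normalization are simplistic illustrations of my method, not how the website actually stores chat.

```python
def missing_messages(viewer_lines, site_lines):
    """Return viewer chat entries absent from the website log.

    Matches on normalized message text only, which is a rough
    heuristic; real logs may need timestamp-aware matching.
    """
    def norm(line: str) -> str:
        # Strip a leading "[YYYY/MM/DD HH:MM]" timestamp if present.
        if line.startswith("["):
            line = line.split("]", 1)[-1]
        # Lowercase and collapse whitespace so formatting differences
        # between the two logs don't count as missing messages.
        return " ".join(line.lower().split())

    site = {norm(line) for line in site_lines if line.strip()}
    return [line for line in viewer_lines if line.strip() and norm(line) not in site]
```

Dividing the length of the returned list by the total number of viewer entries gives the missing-message percentage quoted above.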
Feature Request: Customizable Open Chat Participation Setting for AI Characters
Recent changes to AI Character chat behavior in open chat have significantly reduced their participation: AI Characters now only respond when directly addressed by name. While this prevents excessive chat activity ("spam"), it also eliminates important use cases for roleplay and event animation where more active participation is desirable. Previously, owners could mitigate overactive behavior with custom backstory phrasing. With the new setting, even key phrases in the backstory and "speaking description" (like "talkative", "likes to share their opinion", etc.) no longer make the AI Character participate in open chat.

Consequences:
- Loss of Functionality: AI Characters no longer initiate or contribute to open chat conversations naturally, making them less engaging and unable to drive dialogue or animate group settings (such as clubs or roleplay events).
- Inflexibility: Owners cannot currently control or adjust their AI Character's group chat participation level to suit the needs of different environments or social contexts.
- One-Size-Fits-All Approach: The new default behavior is too restrictive for many scenarios, limiting user creativity and the usefulness of AI Characters.

Proposed Solution: Introduce a user-configurable "Group Chat Participation" setting for AI Characters, analogous to the existing "Curiosity Level" control. This new setting would allow owners to specify how frequently their AI Character participates in open chat, with options such as:
(1) When Addressed by Name: (current default) only responds when mentioned directly.
(2) Rarely: participates infrequently, favoring listening over speaking.
(3) Sometimes: contributes occasionally, seeking natural conversation openings.
(4) Often: actively engages in group chat and helps drive conversation.
This gives owners fine-grained control over AI Character behavior in group settings, allowing for both quiet and highly interactive personalities.
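The four options could map to something like the following sketch. The enum members mirror the proposal above, but the probability values and function names are purely illustrative assumptions; no such setting exists in the current product.

```python
import random
from enum import Enum


class Participation(Enum):
    """Hypothetical per-character setting; the float is the chance of
    chiming in on a message that does not address the character."""
    WHEN_ADDRESSED = 0.0   # current default: only on direct mention
    RARELY = 0.1
    SOMETIMES = 0.35
    OFTEN = 0.75


def should_respond(level: Participation, mentioned: bool, rng=random.random) -> bool:
    """Direct mentions always get a reply; otherwise the participation
    level sets the probability of joining in."""
    if mentioned:
        return True
    return rng() < level.value
```

At WHEN_ADDRESSED the character behaves exactly like today's default, so the new setting would be strictly additive.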
Expected Benefits:
- Restores Lost Use Cases: Enables AI Characters to again take active roles in roleplay, event animation, and club environments.
- User Empowerment: Owners can tailor AI behavior to their preferences and to the needs of specific groups or situations.
- Greater Flexibility: Supports a wide range of social and creative scenarios, improving the overall user experience.

Possible integration with chat bundling: The proposed "Group Chat Participation" setting could be effectively integrated with the "Efficient Chat Bundling in High-Traffic Environments for LLM API Calls" feature described earlier: https://feedback.secondlife.com/character-designer/p/feature-request-efficient-chat-bundling-in-high-traffic-environments-for-llm-api