2FA activated grants 10 additional groups
Chrissy Skydancer
Reduce Linden Lab's support costs in a simple way,
while motivating people to protect their accounts better.
We all see it weekly, nearly daily — scams involving stolen accounts.
A lot of effort is required to work through the reports and sort out the mess.
(@Linden Lab, I'm okay that you guys give me 10% of the money you spared in L$) ;D
Love you all! (excluding clearly all the SCAMMERs)
Beatrice Voxel
A good idea to encourage 2FA use; another would be to reduce upload costs by a fraction for members with 2FA enabled (this would encourage designers who create a lot of assets to secure their accounts).
However... I think LL is shifting the burden of account security to the residents, and ignoring the gaping holes that could be plugged programmatically. I'll post potential countermeasures as replies to this post.
Right now scammers are a plague on Second Life and other online platforms because it is profitable - the risk of getting caught is outweighed by the potential profit. LL needs to turn that around, so that the financial gain is low, and the risk of capture high. Only then will the scammers leave for greener pastures.
Beatrice Voxel
1) Stop making "chat in group" a default permission for the Everyone group. If a group mod toggles this on, a warning should be displayed "this will expose your group chat to URL scammers and phishing attacks." Instead, create a second group tier, similar to Officers called "Verified" that does allow chats to be sent to the group. The idea is to reinforce the idea that group management is tiered (much like Discord) and that there's no shame in not letting looky-lou's that join your group because they found it chat with members you invited to the group specifically.
Beatrice Voxel
2) Flag links to non-Linden Lab domains as suspect. This is currently done by some TPV's such as Alchemy, where the 'legit' URL's are preceded by the Linden "Hand" icon in the chat display. Perhaps allow for other domains (Primfeed, imgur, social media) to be 'whitelisted' in the viewer but include only the LL domains by default. Any link NOT verified as legit by the viewer would get a warning dialog and not open in the viewer's browser window.
Beatrice Voxel
3) Disallow 'hyperlinking' URL's to text as a default - only allow certain roles in a group to do it. Currently this is allowed for everyone, and is sometimes used creatively to hide a scammer link behind something that looks legit. Note: you currently cannot disguise one URL as another - that pattern is disallowed.
Beatrice Voxel
4) Employ AI and human CS teams to find the "spoke and wheel" botnets that exist - typically a scammer will use disposable alts or compromised accounts to do the advertising, and then 'collect the winnings' on a main account. These Linden payoffs from many accounts to one account could be used to trace the actual scammers in near real time. It would possibly expose legit club and business owners that use a 'banker' alt to consolidate their work and income, but being revealed in such a pattern would not necessarily mean an automatic ban, and "known good" operators and their alts could be white-flagged. (I say "and their alts" because new accounts suddenly sending Lindens and items to a 'legit business' might indicate that the operator has either fallen victim to a scam or decided to join the Dark Side themselves.) This is basically what the forensic bookkeepers in the FBI did to reveal organized crime, "follow the money".
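The "spoke and wheel" pattern described above is essentially a graph query: many distinct senders funneling L$ into one collection account. A minimal sketch, assuming payment records are available as (sender, recipient, amount) tuples; the function name and threshold here are made up for illustration:

```python
from collections import defaultdict

def find_collection_hubs(payments, min_senders=10):
    """Flag accounts receiving L$ from unusually many distinct senders,
    the 'hub' in a spoke-and-wheel payout pattern.

    payments: iterable of (sender, recipient, amount) tuples.
    Returns {recipient: distinct_sender_count} for flagged accounts."""
    senders = defaultdict(set)
    for sender, recipient, _amount in payments:
        senders[recipient].add(sender)
    return {acct: len(s) for acct, s in senders.items() if len(s) >= min_senders}
```

A flagged account is only a lead for human review, not proof of wrongdoing, which matches the "white-flagging known good operators" caveat above.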
AlettaMondragon Resident
Beatrice Voxel I have to say I usually agree with your ideas on this site, but this time I feel like this was a huge overkill. Maybe I don't take this problem seriously enough, but this comes up here like every day and the same suggestions are being pushed each time - these you wrote here now too. My problem with these so-called "countermeasures" is still the same, they suggest removing essential functions, applying more restrictions and justifying discrimination. All the opposite of what SL is supposed to be about.
1) Some groups have been doing this forever. RP groups where chatter (like ATC) is supposed to be restricted to the RP, and also some rental, store and event groups only working one way through notices. That is where it is a good idea. Otherwise it is just an extra step in setting up a group which most people would fail at, and they might rather allow other powers in the everyone role as well if they won't be sure what is needed to enable group chat. Pointless anyway, if any group owner wants to implement this idea, they can do it anytime on their own and take all the time to "verify" their members.
2 and 3) This comes up so often and it is still the worst of all of these suggestions. Flag links automatically if they're not on a list of a select few websites? Who will maintain these whitelists? Who will decide which website will be allowed and which not? And then the hyperlinks. This is what would kill group chats the most. I want my hyperlinks fully functional so I click them and they work, and I can decide which I want to click on. Don't take them away from me just because some people can't be careful enough. It is not bad enough that we don't have link previews (like image previews) in the chat, in one word it's primitive, let's go back to the level of 1990 and have to copy-paste addresses manually or something? The next step: don't turn on your computer so you will be safe from clicking on scam links. One step further: let's go back in time to 1980. Has anyone clicked on a malicious link back then? Sarcasm aside, this would just ensure more people would avoid chatting in groups. I and some others already avoid groups where the moderators don't allow posting links. It's a stupid overkill. Most of us are reasonable and honorable adults, don't take things away from us because others are not.
AlettaMondragon Resident
4) Why flag money transactions in a way that can flag legitimate club owners, creators or just about anyone, too? Why not just follow the money in the cases when someone had it stolen from them? This would turn SL into a police state instantly, where people legitimately transferring money among their accounts wouldn't be their own business anymore and anyone could be suspended for the duration of an investigation just because a bot flagged them for transferring too much (or any) money between accounts. This already happens to people over billing issues which happen by LL's fault and that is bad enough. No need for more options to mistreat people who haven't done anything wrong.
"I think LL is shifting the burden of account security to the residents"
I don't know about others but I have learned a long time ago that if I'm in danger or distress, I can't expect anyone to protect me. If I need to rely on someone else in such a situation, I'm as good as dead. In this case, I was fine without MFA and never felt like anyone could steal my account, and I did my best (and still do) to ensure that, but I was very happy when MFA was implemented. It's the least you can do to protect your account, and then basically the burden of account security is on the MFA to work properly. If someone chooses not to do this, then chooses to click the 6 millionth "free crap that's obviously not a thing but come click" link, and then chooses to enter their login credentials on that website, how can you be sure anything else can be done to keep them safe?
AlettaMondragon Resident
However, here are some ideas that can help in the long run:
1: Make sure MFA and the login servers, etc work properly, as in they both keep our accounts safe and don't make login troublesome for us, so that people would not weigh the risks against using MFA because it is uncomfortable or a lot of trouble. Act on https://feedback.secondlife.com/feature-requests/p/relax-account-security-and-add-trusted-devices so that we wouldn't have to log in the same account on websites several times a day when switching between two or more devices if we use these devices every day, make it an account option at least that the viewer login is exempt from a MFA challenge indefinitely if there is no login attempt from another IP address or device. Once it runs as smoothly and reliably as with other platforms, most people won't reject the idea of using MFA.
2: Educate the users on MFA and the risks of scams. It is in progress, I see it every time on the FS login screens, but it still happens that we're talking about MFA in a group and someone asks "what is MFA?" It is scary, if some people still haven't heard about it, it's either still not emphasized enough or they really avoid information even if it's out there in front of them.
3: Track down the scammers faster. This is where I agree with your last point in the way that AI could be used, but in my opinion it would be better to monitor links in group chat as well as logins and transactions to find suspicious links and patterns, like one of those links was just posted multiple times and then logins to accounts happened from machines and IP addresses that are usually not used for those accounts. Go after specific patterns. The same scam link goes around for so long, it is really bad to watch. Finding such patterns must be fairly easy for AI, and then these links could be blacklisted and prevented from being delivered to the recipients, as well as flagging the sender to account operations so they can investigate.
I just really don't get it when people want to impose more restrictions instead of these proactive solutions. Personally, I just don't want more disadvantages disguised as "protective features" that don't protect me at all, as I was already sufficiently protected. Some people in the comments here amazed me with how far they went to protect their accounts and I really like it. If they could do that, everyone could do the bare minimum, which in most cases would be just enough.
Beatrice Voxel
AlettaMondragon Resident All fine and good, except, having dealt with IT and security for most of my adult life (including back in the late 80's when modems and bulletin boards were a thing) there's ALWAYS going to be someone who does the Dumb Thing.
That BBS system operator that didn't quarantine the uploads (and spread a virus to their subscribers).
That executive who clicked the wrong link in an email (and opened up a backdoor to the network).
That grandma that fell for a tech support scam (and spent her life savings "reimbursing" the sweet tech for her fat fingered mistake).
That user who clicked thru a 'Free Stuff!' form (and had their account turned into Yet Another Untraceable Spam Bot).
My point about AI doing pattern searching isn't just to reveal the bots, but reveal who the bots report to. That's an extremely crucial step - without it, it's an endless game of whack-a-mole.
As for link restrictions, it's really simple to do domain resolution scanning - OpenDNS has been doing it for over a decade. Typically it's completely transparent to users ... until they try to go someplace in the dark shadows and that's when the service says "um, no, that domain's on the naughty list." Yes, you do have to be judicious about what gets blocked and what doesn't, but this is the entire business model - cache what domain a user looks up, send a page request, read the resulting page, and categorize it. With AI able to sift and summarize, it's even easier, and it's all behind the scenes, no handcuffs, no restrictions, just a little flag on the URL being posted. What Alchemy does now JUST flags the Linden Labs domains, that's it. But a viewer (not server!) blacklist or whitelist lets the user decide how much risk they want to take. That part I think was unclear.
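The viewer-side allowlist idea above could be sketched in a few lines. This is a hypothetical illustration, not Alchemy's or any viewer's actual implementation; the TRUSTED_DOMAINS set and function names are assumptions:

```python
from urllib.parse import urlparse

# Hypothetical default allowlist; a real viewer would ship and maintain its own,
# and let the user extend it (the "whitelisted in the viewer" idea above).
TRUSTED_DOMAINS = {"secondlife.com", "lindenlab.com"}

def is_trusted(url: str, extra_allowed=frozenset()) -> bool:
    """True if the URL's host is a trusted domain or a subdomain of one.
    Anything else would get the warning dialog instead of opening directly."""
    host = (urlparse(url).hostname or "").lower()
    allowed = TRUSTED_DOMAINS | set(extra_allowed)
    return any(host == d or host.endswith("." + d) for d in allowed)
```

Note the suffix check uses `"." + domain`, so lookalike hosts such as `secondlife.com.evil.example` do not pass.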
moon Shade
Upvoted cus as many others have said here, anything that concentrates people's awareness of security has to be good. But as others have also pointed out, it's not tackling the core issue of social engineering, which seems to be responsible for the majority of scams, ie links in group chat (happy to be corrected if anyone has the stats on this).
A temporary solution could be to disable group chat links altogether. If a group has an announcement to make there's already a notice system and also a group description where links can be published. So if a group mod wants to announce a sale or a new product or event or whatever, announce it in group chat by saying 'new thing, check group description for link/check the group profile for the latest notice', or something along those lines.
It's not 100% because group mods can (presumably) get their accounts hacked too, and maybe a little draconian (I'm not as a rule in favour of banning stuff). But it would stop group chat scam links dead in their tracks while a better long-term solution gets figured out (maybe a new rule that only group owners/mods can post links to chat or something). Maybe it would also remove a little of the incentive to hack accounts in the first place.
Beachy Piers
moon Shade During SL22B Philip talked about giving group owners the ability to block links from chat or only allow moderators to post them, but I've heard nothing on it since. Not a word.
moon Shade
Beachy Piers Maybe technically it's a bear to implement or something? But yeah, giving group owners more control on how their groups get used by members would seem like an obvious place to start.
EmpressLuna Moonchild
At this time LL needs to first improve their 2FA before I can support such a vote. It has like NO fallback options, unexpected 2FA prompts, and if you lose access to the device where your authenticator app was set up, it can be a big problem too!
Some people reported specific issues with certain phones or authenticator apps that didn't generate working tokens for them.
Bottom line: yes, SL 2FA works as a security feature and is recommended, but many users do find parts of the setup or recovery experience glitchy or frustrating in real-world use.
Gwyneth Llewelyn
EmpressLuna Moonchild I totally agree with that. It's hard to understand why we cannot have at least the option of having backup keys. This avoids the issue of a phisher who just captures our session data, logs in, changes the email address, and requests a link for 2FA to be removed. That's way too insecure IMHO. The point of backup keys is to allow you to still log in without a working authenticator app (or device) but not rely on email/SMS which might have been compromised (or simply changed to different things, at the whim of the phisher). So, backup keys are a must.
Nyx Onyx
Gwyneth Llewelyn And one argument by the OP was to reduce the burden on support... Giving you an independent backup method would also do this.
Gwyneth Llewelyn
Nyx Onyx definitely true!
To be perfectly honest, in order to keep all my backup keys from all services I've ever subscribed to, I store them away on a remote, multiple-encrypted, virtually-unhackable service (Keybase), which I can access from anywhere — but nobody else can :)
But of course anyone can use those 'Safe Vaults' built into OneDrive, Google Drive, Dropbox, and others of their ilk, which are allegedly 'safe enough', for the same purpose.
Or if you are a command-line geek, just PGP-encrypt everything you save locally — and keep your private key inside a YubiKey or similar device :)
AlettaMondragon Resident
EmpressLuna Moonchild Totally agree, I have written a request here about some aspects of this problem but as surprising as it is (not), they didn't even address it. It's not like it's not worth using MFA this way, but it definitely results in some annoying situations.
Nyx Onyx
Gwyneth Llewelyn Again I will refer to the CIA Triad and the ISO 27001.
It's up to the interested parties (the user in this case, for the most part) to make a determination on how to weigh the factors of security, and determine how important the information / account is.
LL currently have a thing in place that doesn't put as much weight on the 'A factor' in their MFA solution as I require, so I will at the moment choose to manage my credentials for a regular login and be as careful as I can in other ways. This includes being careful with links and at login prompts, but also that my payment method is linked to Paypal that in turn is associated with a credit card that has a very low limit, and I have an alt as a deposit for my L$.
If someone gets access to my account, the damage they can do is quite limited, and most of the damage should be possible for LL to restore. Losing access to my account due to MFA failure (lost device / corrupted app / errors on the LL side / whatever else) and having to wait to get on SL until LL gets around to my ticket is not alright with me, not when it's the ONLY backup method. It's good to be able to fall back on support when the currently non-existing backup methods don't work for whatever reason, but it should not be the one and only way.
I mentioned some possible backup methods in an earlier comment. If a backup method is used that is insecure by its very nature (sending codes via e-mail or SMS for example), it should be combined with something (this is called layering security). This layering could include a knowledge test, as well as automated tests such as comparing the subnet you're connecting from with where you usually connect from and assign a score to that depending on how often you connect from that subnet.
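The subnet-familiarity scoring mentioned above could be sketched like this. A toy illustration under the assumption that a history of login IPs is available; a real risk engine would weigh many more signals:

```python
import ipaddress
from collections import Counter

def subnet_score(login_history, current_ip, prefix=24):
    """Score a login by how often its subnet appears in past logins.
    1.0 = every past login came from this subnet, 0.0 = never seen.
    A low score could trigger an extra verification layer."""
    nets = Counter(ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
                   for ip in login_history)
    if not nets:
        return 0.0
    current = ipaddress.ip_network(f"{current_ip}/{prefix}", strict=False)
    return nets[current] / sum(nets.values())
```

This is exactly the "layering" idea: the score alone proves nothing, but combined with a knowledge test or a code sent by email it raises the bar for an attacker.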
I would consider Passkeys an alternative primary method rather than a backup method, but effectively TOTP generator apps and Passkeys will as alternatives serve as backups as well, if both are enabled.
This post turned into a longer one than I intended, sorry. I do recommend a read about the CIA Triad and ISO 27001. You can have your favourite LLM summarize them for you.
Gwyneth Llewelyn
You know, I think that security is of the essence, and we all know how thorough LL has been from the start in protecting in-world assets: the permission system works well, and the more technology advances (such as uploading glTF meshes now), the harder it is to steal content — especially interactive, script-based, rigged mesh content. You can still steal full meshes, but assembling them together into a working object/item/attachment is a completely different story!

So Linden Lab has been quite thorough in improving the security of in-world intellectual property. They have also been compliant with privacy and security legislation regarding one's personal data; for instance, there is no unified "database of avatar names with user names and credit card numbers". Instead, Tilia handles the credit card/real name bits, but never knows which avatar the account is tied to (LL just sends them a token). Conversely, Linden Lab never gets the card number or PayPal/Skrill address from Tilia — they just get a reply for the token they sent saying that the transaction went through. Hack into Tilia's database, and you won't know who the avatar is; hack into LL's database, and you won't know what credit card is being used for payments (or even the name of its owner).
It's a pity that all that protection can be subverted with a relatively simple scam — one that acts as a "middleman" in the authentication procedure, capturing what the user sends, echoing it to LL's servers, and capturing the valid session token from the reply. Currently, there seems to be no mechanism to protect against that sort of attack, simply because it's relatively straightforward to defeat all security — so long as someone clicks on the 'wrong' link.
A lot of things (I've checked!) are also handled locally, in the page's JavaScript. Hackers get access to those and can see what they can safely override to escape detection. It's only server-side that the communication is secure, but only as far as LL's servers do not 'accept' a request coming from practically 'anywhere'.
EmpressLuna Moonchild
Gwyneth Llewelyn
why so much ChatGPT... you got to love the AI — line
Kyrah Abattoir
Gwyneth Llewelyn This is a joke right?
Gwyneth Llewelyn
A combination of a client certificate containing a digital fingerprint, randomly seeded with, say, a TOTP 6-digit number generated locally, could provide a means to exclude third-party "replay" attacks. The idea would be that, once an avatar is first created, it gets a client certificate issued by LL, based on its hardware fingerprint as well as the number in the TOTP authenticator; valid tokens would then be returned encrypted with the user's private key from the client certificate issued by LL. Since TOTP changes every 30 seconds (within a margin), a potential attacker intercepting communication would have about that much time to wreak havoc — depending on how sophisticated their tools are, this might be enough, or LL might build in an extra layer of security (e.g. no L$/item transactions allowed before the first minute has elapsed — which a "regular" user will not really notice, since this is also around the time you'll have to wait for things to rezz anyway).

The major issue with this approach is that changing keys and making sure both parties (client and SL Grid servers) are aware of which keys are valid, as well as additionally encrypting everything, requires expensive computational resources — hardly something you'd like to add to lag. Moreover, every 30 seconds there would be a "hiccup" until the server generates a new pair of keys. A slightly faster option than full encryption would be just to add a hash as a signature — hashes are reasonably fast to compute (although this would need to be stress-tested), and they should provide the two parties enough proof that the communication hasn't been intercepted, at the cost of making the communication less secure (even taking into account that most — not all! — communications between viewer and grid are already encrypted).
Gwyneth Llewelyn
I'm not claiming that this is a perfect solution, though, and I actually worry about the overhead costs of generating new pairs of keys twice per minute. Theoretically speaking, such keys could be pre-generated in advance, at least on the server's side — given the 'secret seed' shared by client and grid via TOTP, you can safely generate 'future' keys in advance, and use moments like teleporting or searching through inventory, when the user does not require near-real-time responsiveness, to generate more keys and place them in a buffer. This could work, but, of course, any 'future keys' generated on the client viewer would be theoretically susceptible to being stolen by the scammer/phisher.

Another possibility would be to use this mechanism only for "sensitive" communication, not for simple things like tracking an avatar's position. Second Life has for almost two decades relied on 'capabilities'. These are requests made by the viewer on behalf of the user to 'do' things: they're more fine-grained session keys, if you wish, but which only unlock a specific 'thing', not the whole range of accesses.
For instance, suppose you wish to have a look at your avatar's inventory. For that, your username must have the authorisation to do so (obviously!). So the viewer requests that capability from the grid, which will check if your username is, indeed, allowed to view your avatar's inventory. Once that is confirmed, the capability is sent back to the viewer, and, from that moment onwards, until the capability expires, your viewer can continue to look at your avatar's inventory.
The same applies for a lot of things, such as requesting access to enter a parcel, or to teleport, to fly, to engage in transactions, even to log in — for each case, the grid will need to see if you're properly allowed to do so, and the viewer will need to retrieve the appropriate capability to get granted access.
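The capability flow described above can be sketched as a signed, expiring token. A toy illustration, with an HMAC signature standing in for LL's actual signing scheme; all names and the server key are hypothetical:

```python
import base64
import hashlib
import hmac
import json
import time

SERVER_KEY = b"grid-signing-key"  # hypothetical server-side secret

def grant_capability(user, action, ttl=300):
    """Check permission ONCE, then hand back a signed, short-lived token
    the viewer presents on every subsequent request for that action."""
    payload = base64.urlsafe_b64encode(json.dumps(
        {"user": user, "action": action,
         "exp": int(time.time()) + ttl}).encode()).decode()
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def check_capability(token, action):
    """Cheap per-request check: verify the signature and expiry locally,
    with no round-trip to the authentication database."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SERVER_KEY, payload_b64.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["action"] == action and claims["exp"] > time.time()
```

The expensive database lookup happens only in `grant_capability`; afterwards every message costs just one HMAC, which is the point of the design.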
Gwyneth Llewelyn
While this seems a bit awkward, the point here is that accessing the core authentication servers requires precious time, and may be a resource-intensive operation. Consider someone in a private chat: you wish that only those people whom you gave access to join the conversation have access to the messages there. In theory, therefore, every time a message is sent, the grid server ought to look up your avatar's username, everybody else's username, and make sure, on the database, that everybody has given permission to everybody else to read the messages — because you cannot just 'assume' people will give each other permission 'forever'. People might get kicked out of the conversation, muted, banned, or something like that. So the system has no choice but to check if every condition is still valid before redistributing a message to others.

This would be a major pain to do with 50 thousand simultaneously logged-in users, all relying on chat — public or otherwise — to be 'immediate' (i.e., taking less than 200 ms to appear) but also 'secure' (i.e. nobody else can listen in to the conversation unless explicitly allowed to). Such requests would flood the database, stall it, possibly crashing the database server(s), and so forth. And that's just for chatting!

Instead, when the communication is established, each participant's viewer requests a capability to join the conversation. The system checks for such permission once. Now the viewer only needs to send the messages with the granted capability — the grid 'knows' that person is cleared to chat, and with whom, so it doesn't need to bother with constantly requesting permission to do so. All parties can independently confirm that the capability was granted and signed by LL's servers and is therefore valid. If someone is muted suddenly, their capability to continue to chat will instantly get revoked, and all parties can confirm that the revocation order is a legitimate one.
Gwyneth Llewelyn
Now, I'm not quite sure if capabilities have an automatic expiration date (I suppose they do!), but since there will always be a period of time required to retrieve the original conversation's capabilities before the conversation can, in fact, be initiated, I suppose that this could be combined with the before-mentioned security service as well. In other words: to request a capability, the user needs to send a request signed with LL's public key for that communication — and get a signed reply, this time encrypted with the user's own personal private key.

And perhaps this is something that already happens — I don't know, I'd have to look for that in the code. If it already does, great, one problem less! If not, it's a question of adding the extra encryption/digital signing on the capabilities themselves (even the request for requesting capabilities might be encrypted that way!).
Gwyneth Llewelyn
Obviously, LL may additionally build a safety mechanism: if such requests are made, say, ten times, and all the responses are invalid, it's reasonable to assume that either the user's clock is completely off the mark (over one minute), or that the user doesn't have the correct TOTP authorisation app. Either way, they may get kicked out by the grid with a warning message that a legitimate user would have no problem in following up — but a phisher/scammer cannot.

That effectively means locking out any potential phisher who might have logged in by stealing all authentication data and retrieving all session keys etc. — at worst, they'll be able to be in-world for a minute or so. But then the suspicious activity could be flagged: the phisher's system would have been fingerprinted, their approximate location identified (it matters little if they use a proxy, a VPN, or even Tor — what matters is that this will not be the 'usual' location for that user, nor will it be the same machine), and the legitimate user may be warned (granted, if the phisher changed the email address, such information might be captured as well... but at least the user may get an in-world IM the next time they log in, explaining what to do).

So, phishers will target a different surface instead: the Marketplace or the user's account. Let's start with the Marketplace, where it is perhaps easier to understand how this safety mechanism might work. Also consider the typical human being using the Marketplace, as opposed to a scripted attack.
Gwyneth Llewelyn
If the same technique is used for changing email addresses, then the user's email will also be protected. Again, the Web server and the client's TOTP application share a common seed, which the phisher can't know. The Web server doesn't even need to ask the user to enter a new code when changing email addresses — it can simply calculate the correct code at the moment of submission, send back a new client certificate (or something equivalent) which will only work if you know that seed, and refuse anything else. The best that the phisher can do is to try to remove TOTP by requesting a reset link by email — but that means intercepting the user's mailbox before hacking into their SL!

What this means is that in order for a scammer to completely bypass everything, they will have no choice but to 'take over' the user's computer using some sort of malware. While the TOTP app might still be inaccessible, what the scammer must do is simply check what keys are being sent and received every 30 seconds, and use exactly the same ones for requesting capabilities of their own. As soon as a change of keys is detected, these have to be instantly relayed to the phisher as well — over a secure channel established between the user's computer and the phisher's, which keeps the exchange of keys in sync.

But that's not phishing/hacking any longer. That's maliciously hacking someone's system with a virus or a Trojan. And that is possible, of course, but it also means that virus protection software will kick in and alert the user about a security vulnerability; not to mention that creating a new virus from scratch, for a specific use-case only (e.g., only attacking SL accounts), takes much longer to implement and is way harder to set up than a simple "phishing website". It's something completely different, and TOTP is not meant to be a solution for computers under remote control by a malicious hacker.
Gwyneth Llewelyn
That said, the above comments are a description of an oversimplified solution. In effect, you cannot use TOTP to generate new encrypted keys based on the next number to be displayed. While the server can do that, the 'secret seed' is shared with the TOTP application, and never with the computer(s) it's used with, or it would be pointless — 'secret seeds' stored in a computer could be hacked if the computer is compromised. But TOTP inside an independent device means that the hacker now has to compromise both the device and the computer used for logging in — and that means understanding the relationship between the two devices — which will not share data (it's the human user, after all, who types the 6-digit TOTP number manually... and you still can't hack human brains directly 😂).

Nevertheless, there are methods for cryptography using rotating keys at short intervals, which cannot be easily 'hacked', even by a phisher who captures the first communication with the first, well-known pair of keys, and just replays them over and over again. HTTPS traffic, for instance, to be fast, uses a shared secret for encryption and decryption, because symmetrical cryptography is way faster than asymmetrical (i.e., public key/private key) cryptography. The way to make HTTPS secure is simply to generate a new shared secret every few seconds — and for that, strong cryptography is used, with a public/private key pair. But you can transfer huge chunks of data at so-called 'wire speed' just using the secret key. The weakness of relying only on HTTPS is that a malicious hacker will not bother to attack the stream of data itself, but rather the endpoints: if they learn about the secret key change at the same time as the user, they can 'listen in' for as long as they want. Better: if they grab the private key from either endpoint, they can generate 'secret seeds' as they like, and 'take over' communications. But hacking into a personal computer using a targeted virus for this purpose is not for amateurs and script kiddies!
Gwyneth Llewelyn
EmpressLuna Moonchild hah — I love it when people think that I use ChatGPT, a tool that I use for summarising long texts. For writing long texts, who needs AI?? All you need is one brain, two hands with several fingers, and time...
Gwyneth Llewelyn
Kyrah Abattoir I hope not, since I'm not really well known for my sense of humour, although I find it amusing that you think it's a joke. Tell me, what's so funny about it?
Beatrice Voxel
Gwyneth Llewelyn Based on the use case, a twice per minute key rotation might be serious overkill. I worked for a company supporting a distributed app (streaming live TV, national and local) and their key rotation was set at 5 minutes for backend security and 30 minutes (!) for content. Apparently the management overhead of keeping track of multiple keys for each of 600+ channels added enough infrastructure to make the system rather ... unwieldy ... should the databases that stored the keys lose connectivity. And it did happen - fiber cuts that affected national to local links often meant that while we could shunt to higher cost networks to get the video through, the latency on those nets was enough to throw the key timings off and the customers couldn't watch it anyway!
Point of the anecdote is this: There's a balance between operability, where the user doesn't notice security overhead affecting the service, and security, where the user is completely protected but the security overhead affects the service itself. LL would need to determine where the sweet spot is for most accounts, not so often that the service is lagged, but not so rarely that a breach causes significant losses.
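The trade-off Beatrice describes can be sketched concretely. Below is a minimal, hypothetical Python sketch of interval-based key rotation with a grace window, so that network latency (as in the fiber-cut scenario above) does not reject otherwise valid keys. The function names and the 300-second interval are illustrative assumptions, not any real deployment:

```python
import hashlib
import hmac

def interval_key(seed: bytes, t: float, interval: int = 300) -> bytes:
    # Derive a rotating key from a long-lived seed and the current time window;
    # everyone holding the seed computes the same key for the same window.
    counter = int(t // interval)
    return hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()

def accept(seed: bytes, presented: bytes, t: float,
           interval: int = 300, grace: int = 1) -> bool:
    # Accept keys from the current window and up to `grace` previous windows,
    # so clock skew or network latency doesn't lock out legitimate clients.
    return any(
        hmac.compare_digest(presented, interval_key(seed, t - i * interval, interval))
        for i in range(grace + 1)
    )
```

Widening `grace` improves operability at the cost of a longer replay window — exactly the sweet-spot question raised above.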
Kyrah Abattoir
Gwyneth Llewelyn Your eloquence is commendable, but that's about it.
Gwyneth Llewelyn
I've upvoted, of course — it's a good, even great, way of incentivising the extra authentication step.
However, I think Ember Ember and EmpressLuna Moonchild raise a fundamental point: people should not be lured into a false sense of security regarding 2FA, not in the way Linden Lab has implemented it right now.
EmpressLuna Moonchild was quite elaborate in detailing all the steps involved in the authentication process, and shows that the scammer doesn't need a password, much less a 2FA token, to get full access to a phished account: all they need to grab is a valid, authenticated session token. With that, they can log in to the website and wreak havoc — namely, remove all the safety measures, which then allows them to log in with the SL Viewer as well.
There is just a minor clarification that is perhaps not that obvious (I certainly didn't get it the first time I read EmpressLuna's explanation): what these fake pages do is not simply "steal the password" — that would not be enough, given that you'd still need 2FA to log in. 2FA, therefore, is "good enough" for protecting a password, but that's all it protects against.
What scammers do is slightly more sophisticated. They don't simply grab login/password information — that, in fact, is of little use to them. Instead, they relay the user's keystrokes to the real SL authentication page (it's actually far less sophisticated than it sounds!), and all they want to capture is the valid session token. The user may do whatever LL requires to log in — 2FA, facial recognition, fingerprints, CAPTCHAs, solving intellectual puzzles still beyond the capability of AIs, donating blood, validating DNA... — all of that is irrelevant when all the scammer needs is the valid token that lets them in. At least for a while. Enough to cause harm.
Of course, such a token will be valid for one session only, but one is enough for the scammers: all they need to do is change the email address and follow the link saying "If you have lost your authenticator and would like to remove your MFA, click here". LL will benevolently send a link to log in with 2FA removed, so the next step is to change the password and log in as usual (automatically using LibreMetaverse, or manually via the SL Viewer).
Chrissy Skydancer
Gwyneth Llewelyn Thanks for the support and explanation. I know it's just a layer of additional protection and not 100% safety.
Also, there is a ticket around for Groups to filter these sites, so people don't need to do it by themselves.
The next puzzle in our game is the rise of Omnifilter - I tested it already in the Firestorm Beta. I personally filtered out all ".shop" and all ".herokuapp" links, which were the ones mostly used, as well as fake Marketplace links. The downside is that at the moment I don't see the scam links anymore, so I can't take further action via abuse reports.
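For illustration, the suffix filtering described above can be sketched in a few lines of Python. The blocklist contents are hypothetical, and matching is done on the hostname rather than the raw URL text, so that a link merely containing "shop" somewhere in its path is not flagged:

```python
from urllib.parse import urlparse

# Hypothetical blocklist mirroring the ".shop"/".herokuapp" filters above.
BLOCKED_SUFFIXES = (".shop", ".herokuapp.com")

def is_blocked(url: str) -> bool:
    # Extract the hostname and compare against each blocked suffix.
    host = (urlparse(url).hostname or "").lower()
    return any(host == s.lstrip(".") or host.endswith(s)
               for s in BLOCKED_SUFFIXES)
```

As later comments in this thread point out, a scammer can evade such filters by pointing a custom domain at the same hosting, so this is a stopgap rather than a real defence.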
Gwyneth Llewelyn
Obviously, having a "fake login page" allows scammers to be even more sophisticated: not only will they easily capture the validated session token, but they will also get the username and the password. So, to delay detection, they can keep the password as it is — just change the email address and remove 2FA. Since a successful 2FA check can be remembered locally for 30 days without the user being bothered again, it will be a while until the user figures out that their email address was changed and 2FA removed: so long as their password continues to work, they have no reason to suspect anything...
Scammers could be even more sophisticated. To prevent accidental revealing (either to prying eyes or to malware), LL does not show what the current email address is: it just shows the bare minimum for a human to recognise that address. Suppose my email address is "fabulous.username@yahoo.com", one that the user has set up just for SL, for instance. LL will obfuscate the address and show it as f*****@y*****.com — enough for the user to remember what the email address is, but impossible to reconstruct the original address from.
All the scammer needs to do is replace the email address with fake.username@yahoo.com — one that they have set up for this purpose — and that's it. The user will just check if their email address is the correct one, see the obfuscated string, and assume that nothing has been changed.
Since the vast majority of users will use @gmail.com, @yahoo.com, @live.com, etc., all the scammer needs to do is register 26 addresses with each of those providers, each starting with a different letter of the alphabet. This should cover 99% of all email addresses used for logging in to SL, thus baffling the user — even if they change something and expect to receive an email from LL, they will just assume it went straight to spam, since the obfuscated address will look exactly as it always did.
It's only when L$ starts disappearing or tickets fail to get an answer from LL that someone might really wonder what's going on — but by then, it's far, far too late.
Chrissy Skydancer
Gwyneth Llewelyn To be fair, you will get a message that your email address got changed - but yeah, that case is not impossible.
Gwyneth Llewelyn
Chrissy Skydancer that's actually a very good first start!
But one can go a bit further than that, i.e., instead of relying solely on manual filters, use something automated. There are plenty of online lists of 'bad' IP addresses and URLs from well-known malicious sources, and discussions on which service to use:
- From Google (embedded into Chrome viewers): https://transparencyreport.google.com/safe-browsing/search
- Netcraft has provided such a service as far back in time as I can remember (I'd guess 1995). Their current API is https://report.netcraft.com/api/v3
- Spamhaus is more famous for their email spammer database, but they have been checking domain names for eons as well: https://www.spamhaus.org/domain-reputation/ (anything positive is good; URLs with outstanding reputation earn scores around 40 or so — such as secondlife.com or google.com). They have both free tiers and paid tiers.
- Bitdefender is a well-established commercial service for anti-scamming measures, but they also have a few free products as well: https://www.bitdefender.com/en-us/consumer/scamio
- Another commercial service but with a free tier available for API testing: https://urlscan.io/pricing/
- Just community-reported malicious/suspicious IP addresses (sometimes that is enough; resolve the domain name to one of those addresses, and if it matches, give a warning): https://www.abuseipdb.com/ (disclaimer: I've registered for this eons ago and contribute as many IP addresses I found to be malicious as I can — currently 110K+ — most of which automatically captured on my websites, some manually added based on logs and reports reviewed by a human, i.e., yours truly)
And here are some lists of more and more sites/services providing anti-scam prevention/identification (some old, some new, some outdated, pick one!):
Gwyneth Llewelyn
However, I'd like to suggest the following, paraphrased from Reddit:
> Short version: Stop what you are doing and subscribe to a threat intelligence service, instead.
I believe that's sound advice for the Omnifilter developers: you should not be reinventing the wheel — at least, not regarding URLs, i.e., anything non-specific to SL. Filtering objects, avatar names, teleport requests, etc., that makes lots of sense, since they're specific to SL; URLs, well, no 'filter'-based approach will really work.
For example, filtering out all herokuapp URLs may have two unintended results:
- False positives, since tons of perfectly legitimate applications are launched via Heroku (as well as many, many other 'container technology' systems, from Kubernetes, DigitalOcean, and Azure to Amazon AWS... the latter of which is used by LL themselves, so you should be careful not to block yourself out of the grid!)
- A false sense of security, since anyone can simply point a valid domain name at a Heroku container, thus hiding it from the filter and tricking someone into feeling "safe" from Heroku-based scamming sites.
Similarly, things like blocking IP addresses from, say, Russia, thinking that will keep most scammers out is a fallacy: most scamming sites are actually located in the United States, simply because the US hosts most sites in the Internet (and aye, even Russian scammers use US sites... more than you might imagine)!
This is not to say that Omnifilter isn't an awesome tool! It's a start, of course, and would have been 90% effective in, say, 2004. But this is 2025 (almost 2026!), and scammers have wildly more sophisticated tools at their disposal that can defeat the most complex 'filtering' systems.
As many of the above links show, you'll really need a considerable amount of Machine Learning (ML) to deal with current-generation scamming/spamming websites. This is not for the faint of heart to develop from scratch as a side project; it's a multi-million-dollar effort taking teams of cybersecurity specialists and ML scientists years to complete, and nobody around here has that amount of disposable income to invest in such a monster. Instead, you should simply rely on those who did that kind of R&D and sell subscription services — often with a free tier that will already be better than any naïve 'filtering' solution.
Gwyneth Llewelyn
Chrissy Skydancer well, I'm learning something new every day, I can't recall ever receiving such a message — then again, the last time I changed email addresses was in May 2006 😂 so I might have missed that tiny detail!
Beatrice Voxel
Gwyneth Llewelyn This actually makes a lot of sense. I'd go one step further - LL can subscribe to DNS Resolution Filter services (OpenDNS) that tag and flag at the domain level, not just on the domain name itself, but heuristically by -content- found on the domain. I've used OpenDNS for years as a DNS resolver, and all devices on my subnet (computers, phones, smart appliances) get a blank page if they try to access any sites that OpenDNS filters based on my criteria.
This means that while a user might fall victim to a link, their access to that link is interrupted at the DNS resolution stage; they get a warning page instead (either from OpenDNS directly, or a redirect to a Linden Lab page) saying, "This might not be a good idea to proceed, and here's why."
The trick, of course, is to get OpenDNS to survey/categorize the domains being used, but one use case for AI is to sample the pages people are accessing via a service and assign them tags based on the content retrieved. A few hits to a phishing page would be all it takes for such an AI to flag it as "phishing."
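The same warn-or-block decision can also be sketched at the viewer level, complementing resolver-side filtering. This is a hypothetical sketch: the list contents are examples, and the registrable-domain parsing is deliberately naive (it would mishandle suffixes like .co.uk, where a real implementation would consult the Public Suffix List):

```python
from urllib.parse import urlparse

ALLOWLIST = {"secondlife.com", "lindenlab.com"}   # default official domains
BLOCKLIST = {"evil-phish.example"}                # hypothetical flagged domain

def link_action(url: str) -> str:
    # Decide what the viewer should do with a clicked link:
    # open known-good domains, block known-bad ones, warn on everything else.
    host = (urlparse(url).hostname or "").lower()
    root = ".".join(host.split(".")[-2:])  # naive registrable-domain guess
    if root in ALLOWLIST:
        return "open"
    if root in BLOCKLIST or host in BLOCKLIST:
        return "block"
    return "warn"   # unknown domains get a confirmation dialog
```

The "warn by default" stance matches the suggestion elsewhere in this thread of shipping only LL domains whitelisted and letting users extend the list.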
Ember Ember
While I agree with EmpressLuna Moonchild about 2FA not being the perfect protective measure because of how it is bypassed altogether if the user trusts a scam url and enters their login information elsewhere anyway, I still upvote this suggestion. Because giving people an incentive to simply think more about securing their account is always a good thing.
Nyx Onyx
All the incentive I need in order to enable MFA again (yes, I had it before) is a backup MFA method in case the one and only fails (as happened to me before: my codes were accepted while setting it up, but not when trying to use them to log in). I'm not going to let it happen again that I have to wait for support to get around to my case in order to get back into my account.
A backup method is one that does not use the same TOTP generator solution; instead it's, for example, passkeys ( https://safety.google/safety/authentication/passkey/ ) or recovery codes, maybe in combination with a knowledge challenge. If using recovery codes, the user should be prompted to regenerate them every so often — once a year, perhaps.
Information security does indeed include making sure that information isn't accessed by unauthorized people, but it also does include the Availability factor - if you can't access your information (your account in this case), it's poor information security. Please look up the CIA Triad and ISO 27001.
I will re-enable MFA the day the 'A factor' is strengthened, no extra groups or other incentives required. In the meantime, I keep my SL passwords complex, long, periodically changed, and stored securely. This while making use of MFA wherever MFA is better implemented. I do hope SL has login bruteforce protection.
You do you.
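Nyx's recovery-code idea is cheap to implement. A minimal sketch, assuming server-side storage that invalidates each code after one use (that part is not shown):

```python
import secrets

def generate_recovery_codes(n: int = 10) -> list[str]:
    # One-time backup codes in the familiar xxxxxxxx-xxxxxxxx shape.
    # Each should be stored hashed server-side, invalidated after use,
    # and the whole set regenerated periodically (e.g. yearly).
    return [f"{secrets.token_hex(4)}-{secrets.token_hex(4)}" for _ in range(n)]
```

Using the `secrets` module (rather than `random`) matters here: recovery codes are credentials and must come from a cryptographically secure source.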
Tonya Souther
Nyx Onyx Yeah, this is the right answer. While Google Authenticator and other TOTP generators are good, passkeys are the way to go.
At least LL isn't stuck in the 2000s like the USPS, which only does 2FA via SMS or email...
Gwyneth Llewelyn
Tonya Souther well, 2FA via SMS/email is better than nothing at all...
Tonya Souther
Gwyneth Llewelyn No, not really. Security experts are pretty much unanimous in recommending that people do away with that. They're just too easy to hijack.
Gwyneth Llewelyn
Tonya Souther I won't disagree with you; I personally prefer passkeys as well (and I also like YubiKeys!). SMS/email is far more cumbersome!
Emails also have this nasty habit of getting flagged as spam... or greylisted... which means they may not appear in your mailbox in time for logging in (SMS is technically instantaneous, but in practice it can be subject to delays as well, depending on the operators involved).
Also, TOTP can, at best, protect the login procedure; once you're "in", you need a different mechanism to keep the transaction secure. A phisher can easily replay a captured TOTP.
Although I wonder if they can't just rely on the session data received after all levels of authentication have been completed on a fake proxy page... in theory, passkeys may be 'as safe as' anything else (or nothing at all) if someone just types things on a phishing website, unaware that everything they do is being captured to be used later...
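To make the replay point concrete, here is a minimal sketch of standard TOTP (RFC 6238) together with the replay defence the RFC itself recommends: the verifier remembers the last accepted time-step per account and refuses a second code from the same or an earlier step. This is illustrative only, not Linden Lab's implementation, and it omits the usual one-step clock-skew tolerance for brevity:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, t: float, step: int = 30, digits: int = 6) -> str:
    # Standard TOTP (RFC 6238): HMAC-SHA1 over the time-step counter,
    # dynamically truncated to a short decimal code.
    key = base64.b32decode(secret_b32)
    counter = int(t // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Replay defence: remember the last accepted time-step per account and refuse
# any code from the same or an earlier step (RFC 6238, section 5.2).
last_step: dict[str, int] = {}

def verify(user: str, secret_b32: str, code: str, t: float, step: int = 30) -> bool:
    current = int(t // step)
    if last_step.get(user, -1) >= current:
        return False            # same window reused: treat as a replay
    if hmac.compare_digest(code, totp(secret_b32, t, step)):
        last_step[user] = current
        return True
    return False
```

Note that this only stops replaying the *code*; as the thread explains, it does nothing against a proxy that forwards the code live and steals the resulting session token.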
EmpressLuna Moonchild
While Linden Lab does promote and document MFA/2FA support, 2-factor authentication does not stop the current Second Life scam, because the attack does not rely on stealing passwords in the first place.
- They bypass passwords entirely
The scammer:
Tricks the user into clicking a fake “viewer login,” “inventory sync,” or “verification” page
That page steals the live login session token, not the password
The attacker now is you, from the server’s perspective
At that point:
No password is entered
No password is checked
2FA is never triggered
2FA only protects the login step.
This attack skips the login step completely.
- Session tokens = full access
Once a valid session token is captured:
Inventory access works
L$ transfers work
Group roles can be abused
Objects can be taken or rezzed
Marketplace actions may still succeed
The system sees a legitimate, already-authenticated user.
- This is the same class of attack used elsewhere
To be very clear, this isn't "weak security"; it's a known attack class used against:
Google Workspace
Steam accounts
Discord accounts
Crypto wallets
It’s called:
Session replay / token hijacking
2FA does not stop it anywhere unless extra safeguards exist.
Trust me, if 2-factor authentication did work, LL would invest a lot more into it and force it on us.
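One of the "extra safeguards" mentioned above is binding the session token to client attributes, so that a token exfiltrated through a phishing proxy is useless from the attacker's machine. A hypothetical sketch (the fingerprint contents, e.g. an IP prefix plus viewer identity, are an assumption; this is not how LL's sessions actually work):

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # server-side secret, never sent to clients

def issue_token(session_id: str, client_fingerprint: str) -> str:
    # The token carries an HMAC over the session ID *and* a client fingerprint;
    # session_id is assumed not to contain '.' in this sketch.
    tag = hmac.new(SERVER_KEY, f"{session_id}|{client_fingerprint}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def validate(token: str, client_fingerprint: str) -> bool:
    # A stolen token presented from a different client fingerprint fails,
    # because the recomputed HMAC no longer matches the embedded tag.
    session_id, _, tag = token.rpartition(".")
    expected = hmac.new(SERVER_KEY, f"{session_id}|{client_fingerprint}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

The trade-off is usability: fingerprinting on a full IP address breaks sessions for mobile users whose address changes, which is why real deployments bind to coarser attributes or re-challenge instead of hard-failing.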
Ember Ember
EmpressLuna Moonchild Just want to say thank you because a lot of folks don't seem to understand this part, so thank you for sharing this info.
Gwyneth Llewelyn
Thank you as well! I've explained a bit further why 2FA doesn't work (at least the way it's implemented right now); it seems to me that LL could make a small change just to avoid token hijacking and make 2FA more robust that way.
Gwyneth Llewelyn
All right, so it wasn't 'just a bit further'. It was 'so much' that people now think I use ChatGPT to write things lol
(Disclaimer: I actually do train LLMs for a living, or rather as part of the gig economy...)
Journey Bunny
Throw in a free mount and I'm sold!
(MMOs have pretty-well established that giving even a small in-world perk helps a lot with getting the accounts secured; love the proposal)
Agath Littlepaws
Dear Santa Linden, I wish me that all my friends are secure and happy... that their Money and Homes are saved and their Body and Soul is blessed and unable to get corrupted by Scam-Demons..... Please Santa secure them with your bless in a small present box!