Becky Holmes

Social media platforms - you need to do better in the fight against fraud

What does a successful online scammer need? You got it - a fake social media account. Or how about two? Oh you know what? Make it unlimited accounts! 


To populate these accounts they need a decent backstory. Romance scammers, for example, love to pretend to be a soldier or an oil rig worker. Surgeons, pilots, doctors and nurses are also popular. And let's not forget celebrities - I mean, who wouldn’t want to be Keanu Reeves or Brad Pitt?


A scammer prepared to put the effort in will likely link several fake accounts together to make their identity look more plausible. This way they can give their soldier plenty of fake friends or followers and fake likes and comments on their fake posts. That's a lot of fakery.

 

The good news for fraudsters, although not so much for the rest of us, is that none of this is difficult or complicated to do. In fact it’s damn easy. You won’t face any barriers or checks from the platforms. Thanks to AI tools, the process of creating the profile, a suitably attractive profile photo and some plausible-looking posts is getting ever easier. Not only that, but you’ll be able to find tips and tricks on how to make your accounts more plausible, and on tactics for luring victims in. And you don’t have to trudge around the dark web for this information - just check out some of the groups on Facebook!

 

For several years now I’ve been exposing romance fraudsters - engaging with the fake accounts, being as nonsensical as possible and then posting screenshots on my own Twitter (no, not X) account. It started as a way for me to kill time during lockdown, but over time I became completely obsessed with the subject of romance fraud. I learned everything I could about the perpetrators and their methods and then delved into the heartbreaking impact romance fraud has on victims. I became so ensconced in that world that I wrote a book, “Keanu Reeves is Not in Love With You”.

 

Something that I talk a lot about is victim blaming and stereotyping. People assume that victims must be lonely, unattractive, desperate - and above all, gullible. Sadly this often results in victims blaming themselves more than the perpetrator. I’ve heard “it’s my own fault for falling for it” more times than I care to remember. Feelings of shame, embarrassment and humiliation at having been defrauded compound their financial loss and their heartbreak. Let me make this clear right now: it is never the victim’s fault.


I’ve spoken to dozens of victims - men and women, gay and straight, old and young, black and white - and most are highly intelligent, articulate, switched-on people. None of them fitted this ridiculous established stereotype. The uncomfortable truth is that these fraudsters are really good at using social media. And I mean really good. They construct online personas and tailored narratives which are incredibly psychologically adept, and lure in people from all walks of life.

 

So whilst I’m absolutely in favour of people doing what they can to protect themselves online, I think it’s imperative we recognise that this problem is bigger than any of us as individuals. We should therefore be asking - why do social media platforms make it so damn easy for scammers to do whatever the hell they like? 


Social media platforms are currently designed in ways which place very few barriers in the way of those who want to use fake accounts. It’s easy to create large numbers of fake accounts, which can then make all kinds of unverified claims, are incredibly plausible, and are almost impossible to trace. It’s simple for scam accounts to blend in with other genuine users and slide into their target’s DMs. And if a fake account is detected, the worst that is likely to happen is that it gets removed, which isn’t a particularly big deal for the fraudster because they’ll have a few others on the go anyway.

 

What’s frustrating is that there are some pretty straightforward steps which platforms could be taking to make it harder for people to use fake accounts to commit fraud. Let’s consider verification, for example. As well as giving users the option to verify their identity, platforms could make it easy for us to see which accounts are and aren’t verified, and give us the option to turn on a personal filter to stop us receiving content and messages from non-verified accounts. Together, these three measures would be an oh-so-unfortunate inconvenience for fraudsters. We’d instantly be able to see that an account claiming to be, say, a handsome oil rig worker with so much in common with us had, for some reason, chosen not to verify its identity. Many of us would draw our own conclusions about what that might mean and decide not to engage with the would-be Adonis.


I want to make it clear that I don’t think verification will eradicate online fraud. Of course it won’t - fraudsters are far too clever for that. However, it’s surely got to be worth putting that little bit of extra inconvenience in their way and using the changes to help educate people at the same time.

 

Sadly, platforms have been unwilling to do any of this voluntarily. Cynical old me thinks this is in part because offering all their users verification would cost them money, although given the scale and wealth of these platforms, the sums involved would be relatively tiny. Probably more significant is that maximising user numbers is hardwired into these platforms’ DNA - more users means more advertising revenue and more profit, whether the users are real or not. Plus let’s face it, the platforms don’t want to come clean on just how many of their users are fake or fraudulent. Who is going to sign up to a platform whose strapline may as well be: “nearly half of our users are probably real”?

 

There were encouraging signs that the UK government recognised this when it passed the Online Safety Act last year. It included a “user identity verification duty” and handed Ofcom new powers to force platforms to tackle illegal behaviour. But, surprise surprise, so far, for users, nothing much has changed. Every day that this problem goes unaddressed means more victims, more heartbreak, more money stolen. Twitter and Instagram are still providing me with a steady stream of scam accounts to play with, and only when that stops, or at least slows down, will I know things are changing.

 

Rather predictably, Ofcom will tell us that this is because these things take time. It is pumping out lengthy consultations and draft proposals, and Rome wasn’t built in a day, blah de blah. But many voices are warning that as well as taking its time, Ofcom is also being very timid when it does get round to proposing action.


Here’s what I mean: rather irritatingly, Ofcom’s draft proposals for the measures which all platforms have to take to tackle fraud (the snazzily titled “illegal content codes of practice”) don’t include much at all on fake accounts. There's nothing to require platforms to offer users verification, nothing on being able to see who is verified, and nothing on being able to avoid non-verified accounts. That’s a glaring gap, or perhaps more accurately a colossal chasm, especially given that in the same consultation Ofcom itself admits that fake and anonymous accounts are a “stand out” cause of online crime, and other government figures suggest fraud is now the most common crime, accounting for an astounding 40% of all crime.


Ofcom justifies this by saying it will come back to user identity verification at a later date, in the extra requirements it will introduce for the largest (helpfully termed ‘category 1’) social media platforms. These extra requirements are going to be ‘phase 3’ of its regulation (which means it’s at a very early stage of writing them, and they won’t come in for at least another 18 months). Perhaps rather than ‘phase 3’ they could opt for a more transparent term - I suggest ‘phase when we can be bothered’.

 

But let’s say we decide this delay is OK, and let’s say we suddenly become confident that Ofcom will do a great job with ‘category 1’ requirements when they finally arrive. Even then, there’s at least one mind-boggling fail in this approach: online dating sites. Ofcom is proposing that for a site to be classed as category 1, it needs to be used by over 10% of the population. That covers Facebook, Twitter and TikTok. But it doesn’t catch Tinder, Hinge or Bumble. Errmm… common sense calling Ofcom. Absurd amounts of romance fraud take place in the online dating world.

 

The thing is, this also doesn’t make sense for Ofcom in terms of building public trust in the Online Safety Act as something worthwhile. The general public gets that fake accounts are a problem. It’s all very well telling us to look out for them - but the regulator needs to step up and do its bit to make it easier for us, or what’s the point of the regulator? And yes, I know we’ve all asked ourselves that question a hundred times. I’m surprised my eyes are still pointing forward given the number of times I’ve rolled them in this industry.


So I finish by removing my dainty satin glove and striking Ofcom, the government and social media platforms across their faces and saying “Sirs. I challenge you to pull your fingers out and take some of the burden of fraud away from the victims. And get a bloody move on.”



This is a guest post from Becky Holmes, who runs the popular Twitter/X account @deathtospinach, where she exposes internet fraudsters, and is the author of Keanu Reeves Is Not in Love With You: The Murky World of Online Romance Fraud.
