On 17th March, an announcement from Meta appeared to confirm that “Meta Verified”, Meta’s paid-for verification service, is being rolled out globally. An initial pilot in Australia and New Zealand had been unveiled a month earlier. The announcement revealed that the service is now being extended to the US. Other users, including those in the UK, are now able to sign up to a waiting list.
Meta Verified allows users of Instagram or Facebook to gain a “blue tick” on their profile, upon completing a verification process and paying a monthly subscription fee ($14.99 per month if purchased on an iOS or Android device, or $11.99 per month if purchased directly on the web). The verification process requires users “to submit a government ID that matches the profile name and photo of the Facebook or Instagram account they’re applying for.”
Meta has so far provided very little information about how it processes a user’s ID document once it has been provided. For example, it has not disclosed which documents are accepted, how the authenticity of a document is assessed, how accurate it expects the process to be, the extent to which the process is automated or relies on human moderation, or what data is retained and what it might be used for.
“Meta Verified” vs Twitter’s “Verified by Blue”
Some have suggested that Meta Verified is “borrowing from Musk’s playbook”, i.e. copying Twitter's so-called verification subscription “Verified by Blue”, which was launched in November 2022, and which we analysed here. There are some similarities. As with Twitter, revenue generation appears to be a major driver. Both schemes require payment of a monthly fee - in Twitter’s case (for UK users) £11 per month, in Meta’s (for US users) up to $15 (approx £12).
Both schemes also bundle the blue tick with other “features”. In Twitter’s case these include seeing fewer adverts, the ability to re-edit tweets or post longer videos, access to two-factor authentication via SMS, and a “ranking boost”. For Facebook and Instagram, Meta states that it is still “exploring elements to add to the subscription”, but it currently includes “stickers”, access to a “real person” for customer support and “more protection from impersonation”. Access to verification and a blue tick is being marketed as part of a package of premium features, monetising the fact that blue ticks have historically been seen as prestigious because of an association with notability/celebrity.
A crucial difference with Twitter’s Verified by Blue appears to be that Meta Verified does at least require something which could reasonably be considered to be verification. Twitter relaunched Verified by Blue in December, with Elon Musk claiming a “painful but necessary” beefing up of processes, including “manual authentication”, in response to widespread and high-profile abuse by scammers and impersonators. These appear to have added some friction to the process of gaining “verified by blue” status, such as an eligibility delay for newly-created accounts. But there is still no apparent process to ensure the holder of a Verified by Blue Twitter account is who their account name/profile claims they are. In January a Washington Post journalist found it was still fairly straightforward to get a blue tick for a fake account impersonating a US senator, using a gmail address and a burner phone number - i.e. a similar process to that which we used to create a fake MP account a year earlier. In late February researchers identified numerous Putin propaganda accounts sporting blue ticks, and presumably benefiting from Twitter Blue’s promised “ranking boost”.
By contrast, with the important proviso that (as usual) we’re having to rely on Meta’s own claims, Meta Verified does seem to include processes intended to ensure a user is genuinely who they purport to be and using a profile photo of a genuine likeness. This is delivered through a requirement to provide government-issued ID. Users are also required to adopt two-factor authentication, and are not permitted to change their profile name, photo, username or date of birth without repeating the verification process.
This is to be welcomed, not least because Twitter has demonstrated that making available a subscription service described as “verification”, which bestows a blue tick symbol previously associated with authenticity, is liable to exacerbate a range of online harms - creating what Wired magazine has dubbed a “scammers’ paradise”. Meta’s system at least appears less likely to exacerbate existing problems with inauthenticity, abuse, and deception. It does on balance seem likely that a user can reasonably consider an Instagram or Facebook blue tick a more reliable indicator of authenticity than a Twitter blue tick (although as we explore below, both may primarily be an indicator of wealth).
However, in many other respects the two services do share common characteristics - including a lack of transparency about how they operate, a high monthly cost and lack of choice for users, and an apparent prioritisation of revenue generation over safety, which we will explore below.
More a revenue-raiser than a safety feature?
As is almost invariably the case with new features launched by social media platforms, there’s a distinct lack of transparency from Meta about how the feature works beyond some very superficial marketing copy. Meta has not published any internal assessments of its likely impact on platform safety, or anticipated levels of take-up. However the pricing, the reliance on government ID documents, the lack of any attempt to set out how the feature is accessible to all demographics, and the bundling with other extra features all suggest that Meta sees Meta Verified as a luxury/premium add-on, which it anticipates being adopted by only a small proportion of users. Meta suggests the service will appeal “especially” to “creators”, by which it presumably means users who aspire in some way to monetise their content or build a reach beyond just their friends.
Meta Verified looks first-and-foremost like an attempt to develop a new income stream to supplement (stalling) advertising revenue, by moving from a “free” to “freemium” model. It is not primarily being promoted as a safety feature, and the aim has not been to develop a verification option which most users could reasonably be expected to take up. To the extent that Meta is linking Meta Verified to improved user safety (e.g. enhanced protection from impersonation) the safety benefits are only available to those willing to pay for them - a situation which one UK commentator has compared to a “protection racket”.
In Australia, commentators have speculated that Meta Verified might also indicate that Meta has broader intentions for the digital identity market. They’ve highlighted that Meta’s approach appears to involve the collection and retention of significant amounts of personal data, and the lack of interoperability with other digital identity services being developed by the government or the banking sector. For example the Managing Director of ConnectID, an Australian Digital Identity Exchange backed by the Australian banking sector, expressed support for social media platforms verifying users, highlighting its potential “to reduce online abuse and trolling” and to “reduce fraud and personal risk”. However he questioned, “Do we really want Meta and other social platforms to store my identity documentation and biometric images? Given the trust profile these organisations have, wouldn't it be prudent to reduce the data they have to store? Adding to this honeypot of data is not a good idea.”
Would “Meta Verified” satisfy the Online Safety Bill’s User Verification Duty?
Meta has yet to offer Meta Verified to UK users, or indeed to offer any confirmation that the service will be made available to UK users beyond the presence of a “wait list”. However, given the size and similarity of the UK social media market to those where it has so far been piloted, it seems fairly likely that Meta plans to offer a version of Meta Verified in the UK at some point. This poses two obvious questions. Would Meta Verified satisfy the requirements of the User Verification Duty in the Online Safety Bill? And, perhaps more importantly given that there is still scope for the Bill to be improved - should the Bill allow that?
The first question is hard to answer definitively, because the Bill foresees Ofcom creating guidance to fill in the many gaps left by the primary legislation. However, it seems likely that with the Bill as it currently stands, Meta would claim that Meta Verified complies. There is an absence of any definition of “user identity verification”, or requirement that particular standards be met, beyond clause 57(2) making it clear that “The verification process may be of any kind (and in particular, it need not require documentation to be provided)”.
This lack of definition will surely embolden platforms to challenge any attempt by Ofcom to declare that a particular system is not compliant. We understood the intention behind the government’s wording of clause 57(2) to be to reassure those concerned about verification being reliant on government ID which might not be accessible to all users - however the “need not” wording is patently deficient or ambiguous, and would not prevent Meta choosing to go down this route.
It is easier to say definitively that the Meta Verified proposal, as currently understood, shouldn’t be allowed to satisfy the User Verification Duty, if that duty is to deliver on its stated aims. When the government announced the User Verification Duty, and the accompanying “empowerment duty” to enable users to filter out non-verified accounts, it stated an aim of “removing the ability for anonymous trolls to target people on the biggest social media platforms”. It promised to require the biggest platforms to “offer ways for their users to verify their identities” coupled with the power to “block people who have not verified their identity”. The version of Meta Verified which Meta is currently rolling out would fall short on these promises in some crucial respects.
Most crucially, for the verification and filtering duties to have their desired impact on UK users’ safety, a critical mass of UK users needs to be realistically able to verify, and to choose to do so. We are confident that if designed correctly, take-up for both these features would be very high - repeated opinion polling commissioned by us and others has found significant majorities of UK social media users expressing a willingness to verify and an interest in filtering non-verified accounts. However, if the vast majority of people remain non-verified, then not only are the vast majority of users no more accountable than at present, but the option to filter out non-verified accounts also ceases to be a helpful safety feature.
£144 per year, per account, would be a huge barrier to take-up of Meta Verified. This would be true in any economic climate. It is particularly true during a cost of living crisis, when real wages have declined and 47% of households report experiencing some level of difficulty paying their gas and electricity bills. Meta Verified would be truly unaffordable for millions of UK social media users, and for many more the cost would act as a powerful disincentive. With that price tag attached to verification, anyone making use of the option to filter out non-verified accounts could be fairly confident of only encountering rather affluent social media users.
Perhaps because there’s such a huge financial barrier anyway, there has been relatively little exploration so far of potential accessibility issues raised by Meta’s requirement that users share government-issued identity documents directly with Meta. None of the countries where Meta Verified has so far been piloted have a universal, mandatory identity card. Meta has not been transparent about what forms of documentation it is accepting, nor shared its own assessment of what proportion of its user base has access to these documents, or whether any groups with particular vulnerabilities may be disproportionately unlikely to have them. In addition, Meta’s lack of transparency over how it will process the documents, combined with its historically poor reputation for respecting its users’ privacy, may mean that users who could afford verification, and who have the appropriate documents, are nonetheless reluctant to share those documents with Meta.
It’s perfectly possible to develop robust, secure, privacy-respecting and cheap or zero-cost verification processes which include options making use of existing paper identity documents, or which rely on prior authentication with other organisations such as banks. But it’s not clear how secure or privacy-respecting Meta Verified is. If a UK version were to require users to provide Meta with a copy of their passport or driving licence, and not offer alternative routes (including those which would ensure accessibility for users not in possession of such documents, such as “vouching” by another user, or verification based on a bank account), then the cost barriers to take-up would likely be compounded by barriers associated with not having the right documents or not trusting Meta enough to want to share them with the platform.
How should UK legislators and regulators respond to Meta’s “Meta Verified” and Twitter’s “Verified by Blue”?
Two major social media companies, owning three social media platforms with a large UK user base, have now introduced new subscription services which they describe as “verification”. There are some important differences in how the two subscription services operate. And there have been significant stylistic differences between the chaotic launch and erratic tweeted updates from Elon Musk, and the phased roll-out and scheduled announcements from Mark Zuckerberg. But notwithstanding these differences, both sets of features fall short of what is needed to reduce the harms associated with anonymous accounts, to empower users to protect themselves from abuse or fraud, or to start to improve levels of trust online.
In Twitter’s case, Verified by Blue conflates verifying an account’s authenticity with collecting a monthly fee. In Meta’s case, affordability and accessibility barriers will make Meta Verified a luxury add-on for a small number of affluent users. Both seek to monetise the blue tick as part of a premium subscription offer rather than democratising it as a universally available safety feature. Neither offers all UK users a genuine choice of verifying their identity in a way that reasonably meets the needs and preferences of the user, and neither will have the levels of take-up necessary to make the filter on non-verified accounts a helpful safety feature.
None of this should come as a surprise. If we could rely on the platforms to operate in good faith, strike an appropriate balance between profit-seeking and the interests of UK citizens, and to deliver effective, universally available safety features voluntarily, we would not need the Online Safety Bill. Twitter’s Verified by Blue and Meta’s “Meta Verified” should therefore be treated as a helpful strength test of the current wording of the Bill. If we are unable to say, with high confidence, that the current wording gives Ofcom all the clarity and powers it needs to probe and assess the new measures and insist on improvements, then the Bill needs amending.
We have suggested four modest amendments which we expect to be discussed as the Bill proceeds through the House of Lords, and which together would ensure that users experience the intended benefits of the user verification and user empowerment duties in a timely way. Our amendment to clause 58, for example, would set out more clearly what Ofcom’s guidance on verification must cover: the principles Ofcom must consider and the entities it must consult when drafting that guidance. This would mean Ofcom could confidently set minimum standards on, for example, accessibility and affordability - and assess whether the platforms are meeting those standards. Ofcom would have to consider issues like user choice and interoperability - and so consider whether it was reasonable for a UK user with several different social media accounts to be forced to go through (and pay for?) several separate in-house processes.
The timing of Meta and Twitter’s launches of so-called verification processes, just as the Online Safety Bill proceeds through parliament, may or may not be a total coincidence. But either way, it is helpful in a couple of key respects. Firstly, it puts to bed the debate about whether or not extending verification options to all users is practical or desirable - platforms are (after a fashion) doing it already. Secondly, it reinforces the importance of independent regulation - the platforms have shown that with respect to verification, as with so many other platform features, if they are left to their own devices users will be offered a feature that prioritises income generation and data harvesting over safety.
It’s almost exactly a year since the government first accepted the principle that users should have an option to verify themselves. We’ve now seen two real-world examples of how a verification system can fall short. The wording of the verification and filtering duties, drafted twelve months ago, is patently inadequate in the face of recent developments. It’s therefore essential that as the Lords debate and vote on the Online Safety Bill over the next few months, they take the opportunity to do some necessary fine-tuning.