Anthropic is rolling out a new identity verification layer on Claude. Rather than a broad-brush edict applied to every user, the ID filter will apply to what Anthropic describes as a “few use cases”, although the scope of those cases has been left somewhat vague in places.
Anthropic confirmed the development on Tuesday in a Claude Support blog post. The organization says that identity verification helps prevent abuse, enforce usage policies, and comply with legal obligations.
Users have been told that they “might see a verification prompt” when accessing certain capabilities, and that this is part of the Claude team’s “routine platform integrity checks” and other safety and compliance measures.
How will Anthropic verify users?
Anthropic has selected Persona Identities, a San Francisco-based identity verification platform, as its technology partner for this service. Users asked to confirm their identity will need a valid government-issued photo ID; they will need to show the physical document in hand, along with a live selfie, to pass the verification process.
“We are not using your identity data to train our models… We are not collecting more than we need… We are not sharing your identity data with anyone else.” – Anthropic Claude team.
Accepted documents include passports, national identity cards, driver’s licenses, and state/provincial ID cards. Verification typically takes under five minutes. The Claude team confirms that non-government IDs, such as student IDs, employee badges, library cards, and bank cards, are not accepted.
Which users is Anthropic filtering?
Initially, the Claude ID filter appears designed to corral four main user types: repeat usage policy offenders, users in unsupported locations, terms of service violators, and under-18 users.
The organization wants to clamp down on those who repeatedly violate the Anthropic usage policy. That policy is a fluid mandate built to match the evolving nature of AI, with its own occasional blog-cum-news update stream where users are informed of changes designed to curb cyber misuse, restrict political content, and prevent unwanted agentic usage.
Rather than laying down a definitive list of banned nations, Anthropic publishes a list of supported countries where commercial API access and Claude.ai are available. Unsupported locations include the usual suspects: mainland China, Russia, Iran, North Korea, and Belarus.
Some African nations are also absent from the list of supported nations. Ukraine is supported, but the occupied regions of Crimea, Donetsk, Kherson, Luhansk, and Zaporizhzhia are excluded.
Under-18 usage
Although some users in Zaporizhzhia will be disappointed, they’ve obviously got bigger problems right now. More likely is a growing tide of disappointment among the under-18 user group, especially those using Claude for computer science projects and study.
According to user llm_nerd, writing this week on Hacker Noon, his 15-year-old son had his Claude account suspended with a demand for ID to prove he is 18 or older.
“He had his own Claude Max subscription (he out-earns me fairly frequently in his circle of gaming programmers), and was unaware Anthropic had a must-be-18 rule, as was I,” writes the commenter.
Anthropic’s email said, “Our team found signals that your account was used by a child. This breaks our rules, so we paused your access to Claude.”
Happily, the user notes that Anthropic refunded the young programmer for the entire month in progress, even though he was nearing the end of it.
OpenAI’s terms of use confirm that Codex and ChatGPT are open to users aged 13 and older. Google’s Gemini Apps responsible use and age requirement pages likewise confirm that its services are open to users aged 13 and older.
What Anthropic isn’t doing
Anthropic says it acts as “the data controller” for verification data. That means it sets the rules for how it’s used and how long it’s kept. User IDs and selfies are collected and held by Persona, not on Anthropic’s systems. Anthropic can access verification records through Persona’s platform when needed — for example, to review an appeal — but it does not copy or store those images itself.
“We are not using your identity data to train our models. Verification data is used solely to confirm who you are and to meet our legal and safety obligations. We are not collecting more than we need. We ask for the minimum information required to verify your identity. We are not sharing your identity data with anyone else,” reads the Anthropic statement.
This development is very much of the moment: Australia has legislated a social media ban for under-16s, and the UK government is considering a similar measure.
With conflicts playing out around the world and new streams of cyber vulnerabilities surfacing through the agentic services layer, user restrictions of this kind will be viewed as appropriate by some, but as commercial-sector nanny-state interventionism by others.
