Worth noting the irony cycle: Discord's October 2025 breach leaked ~70,000 government IDs from their support vendor 5CA, which pushed them toward "privacy-preserving" on-device face estimation via k-ID. But the privacy-preserving design (run the model locally, only send metadata) is exactly what makes it trivially spoofable. The encryption is solid (AES-GCM with HKDF-derived keys) but it protects transport integrity, not input authenticity.
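To make the transport-vs-input distinction concrete, here's a stdlib-only sketch: an RFC 5869 HKDF implementation plus an HMAC tag standing in for AES-GCM's authentication (all names and the metadata shape are illustrative, not Discord's actual protocol):

```python
import hashlib, hmac, json, os

# Minimal HKDF-SHA256 per RFC 5869 (extract-then-expand), stdlib only.
def hkdf_sha256(ikm, info, length=32):
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()             # extract
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()  # expand
        okm += t
        i += 1
    return okm[:length]

key = hkdf_sha256(os.urandom(32), b"age-metadata")

# The client authenticates whatever metadata it chooses to send:
spoofed = json.dumps({"estimated_age": 25}).encode()
tag = hmac.new(key, spoofed, hashlib.sha256).digest()

# Server-side verification succeeds -- the transport is intact --
# but nothing proves the age estimate came from a real face.
assert hmac.compare_digest(tag, hmac.new(key, spoofed, hashlib.sha256).digest())
```

The crypto does its job perfectly; the problem is that the plaintext was attacker-chosen before it ever got encrypted.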
So they moved away from collecting IDs because collecting IDs is a liability, and moved toward a system that's bypassable because it doesn't collect enough to verify. This isn't solvable without hardware attestation (App Attest, Play Integrity), which kills the browser flow and still doesn't prevent pointing the camera at a screen.
Age verification as a concept requires either trusting the client (spoofable), collecting sensitive data (breach liability), or binding to attested hardware (excludes platforms and users). Pick your poison. Every vendor in this space is just choosing which failure mode they prefer.
You forgot one (the sane one, which is coming soon anyway):
Using a government-issued eID system. The EU is going to roll out eID in a way that a site can just ask "is this person over age X?". The answer is cryptographically secure in the sense that this person really is that age, but no other information about you has to be known by the site owner.
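A toy sketch of that selective-disclosure idea: the issuer attests only to the boolean predicate, never to the birthdate. An HMAC stands in for a real public-key signature here, and all names are made up:

```python
import hashlib, hmac, json

ISSUER_KEY = b"demo-issuer-key"  # hypothetical; a real eID issuer signs with a private key

def issue_age_attestation(birth_year, threshold, year_now=2025):
    # Issuer computes the predicate and signs only the boolean result.
    claim = json.dumps({"over": threshold,
                        "result": year_now - birth_year >= threshold}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()
    return claim, sig

def site_verify(claim, sig):
    # A real scheme verifies with the issuer's *public* key; HMAC is a stand-in.
    if not hmac.compare_digest(sig, hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()):
        raise ValueError("bad attestation")
    return json.loads(claim)

claim, sig = issue_age_attestation(birth_year=1990, threshold=18)
assert site_verify(claim, sig)["result"] is True  # site learns only the boolean
```

The site ends up holding a verified "over 18", with no birthdate, name, or ID number ever crossing the wire.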
Which is the actual correct way to do it.
I don’t understand why all the sites go crazy with flawed age verification schemes right now, instead of waiting until the eID rollout is done.
EDIT:
I forgot to mention that it’s only the correct way if the implementation doesn’t reveal to your government which sites you browse…
Which I believe is correctly done in the upcoming EU eID but I could be wrong about it.
What I don't understand about this approach: if it's truly completely privacy-preserving, what stops me from making a service where anyone can use my ID to verify? If the site owner really learns nothing about me except my age, then they can't tell that it's the same ID being used for every account. And if the government truly knows nothing about the sites I verify on, they can't tell that I'm misusing the ID either. So someone must know more than you are letting on.
One possible solution idea I just had is having the option of registered providers (such as Discord). They would have a public key, and the user has a private key associated to their eID. They could be combined in such a way to create a unique identifier, which would be stored by the provider (and ofc the scheme would be such that the provider can verify that the combined identifier was created from a valid private key and their public key).
This would in total make sure that only one account can be created with the private key, while exposing no information about the private key aka user to the provider. I am fairly certain that should work with our cryptographic tools. It would ofc put the trust on the user not to share their eID private key, but that is needed anyway. Either you manage it or it gets managed (and you lose some degree of privacy).
The hole is closed with per-site pseudonyms. Your wallet generates a unique cryptographic key pair for each site, so same person + same site = same pseudonym; same person + different sites = different, unlinkable pseudonyms.
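A minimal sketch of that derivation. A real EUDI wallet derives per-site key pairs; an HMAC of the site name under a wallet-held secret stands in for that here:

```python
import hashlib, hmac

# Hypothetical per-site pseudonym: deterministic for (secret, site),
# unlinkable across different sites without the secret.
def pseudonym(wallet_secret, site):
    return hmac.new(wallet_secret, site.encode(), hashlib.sha256).hexdigest()

alice = b"alice-wallet-secret"

# Same person + same site -> same pseudonym (site can spot repeat signups)
assert pseudonym(alice, "discord.com") == pseudonym(alice, "discord.com")

# Same person + different sites -> unlinkable pseudonyms
assert pseudonym(alice, "discord.com") != pseudonym(alice, "example.org")
```

The site can enforce one-account-per-person without ever learning who the person is.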
"The actual correct way" is an overstatement that misses jfaganel99's point. There are always tradeoffs. EUDI is no exception. It sacrifices full anonymity to prevent credential sharing so the site can't learn your identity, but it can recognize you across visits and build a behavioral profile under your pseudonym.
I fail to see how that solves the problem. That's what I'm saying my service would provide. Unless the eID has some kind of client-side rate limiting built in, I can generate as many of them as I want. And assuming they are completely privacy-preserving, no one can tell they were all generated by the same ID.
> Since Proof of Age Attestations are designed for single use, the system must support the issuance of attestations in batches. It is recommended that each batch consist of thirty (30) attestations.
It sounds like the application would request a batch of time-limited proofs from a government server. Proofs get burned after a single use. Whether or not you've used any, the app just requests another batch at a set interval (e.g. 30 once a month). So you're rate-limited on the backend.
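The mechanism as described can be sketched like this (class and method names are made up; the real protocol issues signed attestations, not random tokens):

```python
import secrets

class Wallet:
    def __init__(self):
        self.proofs = []

    def refill(self, batch_size=30):
        # Issuer side: one batch per interval is the effective rate limit.
        self.proofs = [secrets.token_hex(16) for _ in range(batch_size)]

    def present(self):
        if not self.proofs:
            raise RuntimeError("batch exhausted until next issuance window")
        return self.proofs.pop()  # burned after a single use

w = Wallet()
w.refill()
used = {w.present() for _ in range(30)}
assert len(used) == 30  # 30 distinct single-use proofs, then the wallet runs dry
```

A "verify anyone" service would burn through its batch almost instantly and then be stuck waiting for the next issuance window.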
Edit: seems like issuing proofs is not limited to the government; e.g. banks you're a client of can also supply you with proofs (if they want to participate for some reason). I guess that would multiply the number of proofs available to you.
Ok, I have been convinced this is a technically feasible solution that could preserve privacy while reasonably limiting misuse. That said, I'm worried that the document you linked does not require that relying parties implement the zero-knowledge-proof approach. It only requires that they implement the attestation bearer-token approach, which is much weaker and allows the government to unmask an account simply by asking the relying party which attestation token was submitted to verify the account.
> Relying Party SHALL implement the protocols specified in Annex A for Proof of Age attestation presentation.
> A Relying Party SHOULD implement the Zero-Knowledge Proof verification mechanism specified in Annex A
The solution to this seems to be to issue multiple "IDs". So essentially the government mints you a batch of like 30 "IDs" and you can use each of those once per service to verify an account (30 verified accounts per service). That allows for the use case of needing to verify multiple accounts without allowing you to verify unlimited accounts (and therefore run into the large-scale misuse issue I pointed out).
If you need to verify even more accounts the government can have some annoying process for you to request another batch of IDs.
Sites need to deal with Australia, which punted all responsibility to the platforms and provided no real assistance (like, say, the government half of the eID system that manages all the keys and metadata).
All publicly-listed ad delivery systems like Meta do in fact need to deal with high-income countries.
They can't afford to, and will never, strike off 100m Brits and Aussies, and that number will only rise as more high-income countries enact regulation.
There are also alternatives that can be good enough, such as the Swedish BankID system, which is managed by a private company owned by many banks. It provides authentication and a chain of trust for the great majority of the population on virtually all websites (government, healthcare, banking and other commercial services), and it is also used to validate online payments (3-D Secure launches the BankID app).
While it's not without faults (services do not always support alternative authentication methods that would work for foreigners with the right to live in the country), it has been quite reliable for many years.
So just to say: you can have successful alternatives to a government-controlled system when many actors decide it is valuable to develop and maintain such a system because it aligns with their interests, and it can then become a de-facto standard.
It's like it's evolving in front of our eyes! Eventually they might land somewhere that meets all the requirements: natural selection governed by lawsuits.
Author here. We built a security scanner called Kolega that does semantic analysis instead of pattern matching. To see if it actually worked, we ran it against 45 open source projects and reported what it found through responsible disclosure.
225 vulnerabilities. 41 reviewed by maintainers so far, 37 accepted, 4 rejected. 90% acceptance rate.
The bugs weren't exotic. They were things like:
if not user_id is not None - a double negative in Phase that means the permission check never runs. Nine auth bypasses total.
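For anyone puzzling over the precedence: `not user_id is not None` parses as `not (user_id is not None)`, which is just `user_id is None`, i.e. the inverse of the likely intent (variable name assumed):

```python
# The reported bug pattern: "not x is not None" == "not (x is not None)"
# == "x is None" -- the opposite of the presumably intended check.
user_id = "u_123"  # any real (non-None) user

assert (not user_id is not None) == (user_id is None)

if not user_id is not None:
    raise PermissionError("never reached when user_id is set")

# What the author presumably meant:
assert user_id is not None
```

So the permission check is skipped for every real user: valid Python, silently wrong semantics.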
torch.load() without weights_only=True in vLLM - RCE via pickle deserialization in one of the most popular inference frameworks.
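The underlying mechanism is plain pickle behavior, not torch-specific. A stdlib-only sketch of why unpickling untrusted bytes is code execution (the payload here is deliberately benign):

```python
import pickle

class Exploit:
    # pickle calls __reduce__ when serializing; on deserialization it invokes
    # the returned callable with the given arguments -- arbitrary code runs.
    def __reduce__(self):
        return (eval, ("2 + 2",))  # benign stand-in for an attacker payload

payload = pickle.dumps(Exploit())
result = pickle.loads(payload)   # executes eval("2 + 2")
assert result == 4
```

`torch.load(path, weights_only=True)` opts into a restricted unpickler that refuses arbitrary callables, which is why omitting it matters.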
RestrictedPython sandbox in Agenta where __import__ was explicitly added to safe_builtins. Four different escape routes to arbitrary code execution.
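A pure-Python sketch of that hole (RestrictedPython itself is not imported here; the dict below just mimics its safe-builtins idea):

```python
# A toy sandbox: exec with a restricted builtins dict.
safe_builtins = {"len": len, "range": range}

# The reported bug pattern: handing __import__ back to sandboxed code.
safe_builtins["__import__"] = __import__

ns = {"__builtins__": safe_builtins}
exec("os = __import__('os'); result = os.getcwd()", ns)

# Sandboxed code now holds the real os module -- the sandbox is moot.
assert isinstance(ns["result"], str)
```

With `__import__` available, any "restriction" on names is cosmetic: the sandboxed code can pull in os, subprocess, or anything else on demand.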
SQL injection in NocoDB's Oracle client - Semgrep scanned the same codebase and found 222 issues, 208 of which were false positives, and missed this one entirely.
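A generic illustration of that bug class using sqlite3 (the actual finding was in NocoDB's Oracle client; the table and data here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "x' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the WHERE clause.
leaked = conn.execute(f"SELECT * FROM users WHERE name = '{evil}'").fetchall()

# Safe: a bound parameter is treated as data, never as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

assert len(leaked) == 1 and safe == []
```

The interpolated query matches every row; the parameterized one correctly matches none.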
The interesting part to me wasn't that we found bugs. It's that these are all syntactically correct - the code compiles, runs, looks fine in review. The problems are semantic. No pattern matcher catches not X is not None because it's valid Python. You have to understand what the developer intended.
135 findings are still waiting on maintainer response. 4 were rejected - some we thought were exploitable, maintainers disagreed. We document those too.
Happy to discuss specifics on any of the projects or argue about methodology.
Something felt off about your comments, so I checked your account. You signed up almost six years ago, and in all that time made zero submissions and your only comments are these two on this thread? I’ve been seeing this more and more on HN. What exactly is going on here?
Yep. The two latest comments are full of LLM tells, plus an LLM-generated Show HN.
As usual with modern Claudes and GPT-5s, the output repeats and overemphasizes jargon from the input tokens without clarifying or switching up the wording.
This perfectly describes my former experience with Reddit: I used to browse quite frequently without being logged in. If I wanted to post a reply badly enough to bother with logging in, I would then start commenting on other threads as well; the next day I'd likely be logged out again and not be willing to bother with signing in again for a few more months. Though this did change when I started using mobile apps more.