
The OpenID Foundation doesn't make dramatic statements. They're the people who built the protocols that let you click "Sign in with Google" on thousands of websites. They've spent two decades on authentication, authorization, and multi-factor security. Boring infrastructure work that quietly keeps the internet running.
So when they publish a whitepaper titled "The Unfinished Digital Estate" and call digital death verification "identity's biggest unsolved problem," it's worth paying attention.
Their March 2026 report doesn't mince words: identity infrastructure has no mechanism for verifying death, delegating access posthumously, or handling incapacitation. None. Zero. The internet assumes everyone lives forever.
The problems they've identified
Death verification is a mess. There's no global standard for proving someone has died. Death certificates differ between countries, take 10-12 days to issue, and can be forged. When your grandmother passes and you need to access her accounts, you're navigating a patchwork of platform policies, legal requirements, and technical dead-ends.
Platforms treat death as an edge case. Some offer "legacy contact" features. Others have no formal process at all. Many quietly suggest families log in using the deceased person's credentials, which is technically against their own terms of service and potentially illegal.
Strong authentication dies with you. Remember those passkeys everyone said would replace passwords? They're tied to your devices and biometrics. When you die, they become useless. Yes, you can store them in 1Password, iCloud (with legacy contacts), or use physical YubiKeys that can be handed over. But these workarounds require planning, technical knowledge, and aren't how most people use passkeys. The authentication method designed to make your accounts more secure makes them harder to inherit by default.
AI can resurrect you without permission. This is the one that stuck with me. Generative AI can now create realistic voice clones and digital avatars from a person's data. Companies are already offering "AI afterlife" services that let families talk to simulations of dead relatives.
But here's the question nobody's asking: who consented to this?
The consent problem nobody's solving
When someone dies, they can't consent to anything anymore. That's obvious. But the implications get uncomfortable fast.
Keeping a Facebook profile active as a memorial is one thing. That's preserving what someone actually posted, in their own words, from their own choices. It's passive preservation.
Training an AI to simulate conversations in their voice is something else entirely. That's generative. It creates things they never said, in a voice they can't control, representing a person who never agreed to be represented.
Can heirs give that consent? Not really, because they aren't consenting to something done to themselves. They're consenting to the simulation of someone else's identity, someone else's voice, someone else's personality. The deceased person never signed off on being turned into a chatbot. The heirs are making that decision for them.
Families keep social profiles alive for years. LinkedIn accounts of people who died in 2015 still show up in connection suggestions. Is that what those people wanted? We don't know. Nobody asked them while they could answer.
This isn't a hypothetical future problem. It's happening now. And the OpenID Foundation is right: we have no governance framework for posthumous identity rights.
Why this is happening now
The timing isn't coincidental.
Digital assets in the U.S. surpassed $2.3 trillion in 2025. Not all of that is crypto. That's subscription services, digital purchases, online accounts, cloud storage, domain names. Real value locked behind passwords that die with their owners.
Every internet user eventually becomes a deceased internet user. The first generation of people who lived their entire adult lives online is starting to reach that phase. Their digital footprint isn't a footnote to their estate. For many, it's a significant portion of it.
Platforms are starting to realize their liability exposure. Mishandling a deceased user's data, letting unauthorized people access accounts, or refusing legitimate heirs creates legal risk. The EU's Digital Services Act is adding compliance requirements. Lawsuits are coming.
Governments are finally paying attention. Several U.S. states have passed digital asset laws. The EU is working on cross-border frameworks. But standards development is slow, and the problems are immediate.
Solutions taking shape
The OpenID Foundation isn't just identifying problems. They're also proposing a path forward.
They're calling on governments to legally recognize digital assets in inheritance law, clarify privacy rights after death, and establish cross-border frameworks. They want platforms to build proper delegation models instead of asking families to share passwords. They're pushing standards organizations to develop death verification protocols and fiduciary credential systems.
Some of this infrastructure already exists. In February 2026, OpenID launched self-certification for Verifiable Credentials specifications. These could enable legitimate executor verification, letting someone prove they have legal authority to access accounts without sharing passwords or forging documents.
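To make that concrete, here's a rough sketch of what an executor credential could look like, modeled loosely on the W3C Verifiable Credentials data model. Everything specific here is hypothetical: the `ExecutorCredential` type, the issuer and subject identifiers, and the field names are invented for illustration, not taken from any published schema. A real format would come out of the standards work the Foundation is calling for.

```python
import json

# Hypothetical sketch of a verifiable credential asserting that a named
# person has been appointed executor of an estate. The credential type,
# issuer, and field names are illustrative, not a real published schema.
executor_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "ExecutorCredential"],  # hypothetical type
    "issuer": "did:example:probate-court",        # e.g. a court's identifier
    "issuanceDate": "2026-03-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:executor-jane",        # the executor's identifier
        "role": "estate-executor",
        "estateOf": "did:example:deceased-john",  # the deceased's identifier
    },
    # In practice the issuer cryptographically signs the credential; this
    # proof block is what a platform would check before granting access.
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:probate-court#key-1",
        "proofValue": "<signature omitted in this sketch>",
    },
}

print(json.dumps(executor_credential, indent=2))
```

The point of the structure: a platform verifying the issuer's signature can confirm legal authority directly, instead of asking a grieving family to upload a scanned death certificate or share the deceased's password.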
The pieces are there. Someone needs to assemble them.
What this means for you
Waiting for platforms to solve this is a gamble. Some will build proper tools. Many won't. And the timeline for standards adoption is years, not months.
The families dealing with digital estates right now can't wait for the OpenID Foundation to finish their working group discussions. They need solutions that work today, not specifications that might ship in 2028.
This is the gap we built Trustbourne to fill. Comprehensive digital estate planning that works across platforms, respects legal frameworks, and puts you in control while you can still make decisions.
Because here's the thing about consent: you can only give it while you're alive. Once you're gone, other people make choices about your digital identity whether you planned for it or not.
The question is whether those choices reflect what you actually wanted.