You know what's wild? I just asked a room full of tech professionals how many of them have an estate plan. Eight hands went up. Out of dozens of people. That's it.
This is a symptom of a massive problem that's about to hit the financial world like a tidal wave. And I'm here to tell you why AI isn't just helpful for solving it; it's absolutely essential.
When $84 trillion meets outdated systems
Let me paint you a picture that should make every financial professional sit up straight. The baby boomer generation currently holds 72% of all US household assets. We're talking about $84 trillion that's set to be passed down to heirs by 2045.
Eighty-four. Trillion. Dollars.
Here's where it gets even more interesting: two out of three Americans don't even have a will. Think about that for a second. We're facing the largest wealth transfer in human history, and most people aren't prepared for it.
But wait, there's more to this story.
Why estate planning feels like reading ancient hieroglyphics
Estate planning is fundamentally about planning how your wealth gets distributed when you die or become incapacitated. Simple concept, right? Not quite.
At Wealth.com, we started by trying to help everyday people create trusts and wills. What we discovered was fascinating. The mass-affluent market needed help, sure, but there was an even bigger opportunity in financial institutions serving ultra-high-net-worth individuals.
Here's the thing about wealthy clients: they already have estate plans. The problem? These documents are often 10, 20, or even 30 years old. They're scanned PDFs spanning hundreds of pages, filled with handwritten notes, amendments, and edits that would make your head spin.
Imagine being a financial advisor reviewing these documents. You're looking for gaps in tax optimization strategies, trying to figure out where updates are needed, and deciphering complex entity relationships. It's like trying to solve a puzzle where half the pieces are written in different languages.
The traditional approach? Bring in an outside trust and estate attorney. Have them review everything. Work together to amend or restate documents. The process is costly, time-consuming, and frankly, painful for everyone involved.
The AI solution that almost works (but doesn't)
You might be thinking, "This sounds perfect for AI!" And you'd be right, sort of.
The challenge isn't just throwing an LLM at the problem and calling it a day. Trust me, we've seen what happens when people try that approach. Let me share a quick experiment I ran with Google's NotebookLM. I uploaded a Form 709 (a gift tax return) and asked it to extract simple yes/no answers from questions 18 through 21.
The result? 40% accuracy. On yes/no questions.
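
The scoring side of a spot check like this is trivial; the hard part is the extraction. Here's a minimal, hypothetical harness for that kind of test (the question IDs and answers are illustrative, not the real form values):

```python
# Hypothetical scoring harness: compare a model's extracted yes/no answers
# against hand-labeled ground truth. Values below are illustrative only.
def yes_no_accuracy(truth: dict, predicted: dict) -> float:
    """Fraction of questions where the model's answer matches ground truth."""
    matches = sum(
        predicted.get(q, "").strip().lower() == answer.strip().lower()
        for q, answer in truth.items()
    )
    return matches / len(truth)

truth = {"q18": "yes", "q19": "no", "q20": "no", "q21": "yes"}
model_answers = {"q18": "yes", "q19": "yes", "q20": "no", "q21": "no"}
print(f"accuracy: {yes_no_accuracy(truth, model_answers):.0%}")  # -> accuracy: 50%
```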
The point isn't to pick on one tool. It's that reading estate planning documents is genuinely hard:
- Document variety is insane: Revocable trusts, irrevocable trusts, pour-over wills, last wills and testaments, financial powers of attorney, advance health directives, Form 709s; the list goes on.
- Age and condition matter: These aren't fresh digital documents. They're decades-old, scanned, and often handwritten papers.
- Length is overwhelming: We're talking hundreds, sometimes thousands of pages per client.
- Time is precious: On average, it takes a trust and estate attorney three hours to properly review a 60-page document.
AI promises to reduce those hours to minutes, maybe even seconds. That's where Esther, our estate planning AI copilot, comes in.

Building AI that lawyers can actually trust
Here's what Esther does: users upload their PDF estate planning documents, and our AI extracts key provisions, disposition details, and complex entity relationships. It then transforms this maze of legal language into clear, visual reports that advisors can present to clients.
Sounds straightforward, right? It's not.
Current LLMs struggle with several fundamental tasks:
- They're surprisingly bad at extracting text from images (OCR tasks)
- They fumble with Q&A tasks on government forms
- They lack deep domain knowledge in estate planning, legal, and financial spaces
- And yes, they still hallucinate
When AI hallucinations have real-world consequences
Let me tell you about some recent headlines that should concern anyone working with AI in regulated industries.
Two weeks ago, literally two weeks ago (as of June 2025), Anthropic's own lawyers used Claude to help draft a legal brief in the company's copyright dispute with Universal Music Group, and Claude hallucinated a legal citation. In another case, a lawyer used Gemini and got similar results. Yet another used ChatGPT and referenced non-existent cases.
These aren't edge cases from the early days of AI. This is happening right now, with the most advanced models available.
In fields like finance and legal, where a single mistake can have devastating consequences, 80% accuracy isn't good enough. We need 95%, 99%, maybe higher.
The non-negotiables for AI in regulated industries
Working in estate planning has taught me what really matters when building AI for high-stakes environments.
Here's what you absolutely cannot compromise on:
1. Precision and recall must be exceptional
Off-the-shelf LLMs with one-shot prompting won't cut it. You need systems designed from the ground up for accuracy.
This means layering safeguards like the following (a minimal code sketch follows the list):
- LLM-as-judge architectures
- Chain of verification processes
- Multiple validation layers
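
Here's that sketch, assuming a generic `call_llm` client (a placeholder, not any real API). It shows one verification layer: a second model pass acts as the judge and must find support in the source text before anything is returned. This is a pattern illustration, not our production pipeline.

```python
# Minimal sketch of an extract-then-verify loop (LLM-as-judge). `call_llm`
# stands in for whatever model client you use; the prompts are simplified.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model provider here")

def extract_provision(document_text: str, question: str) -> str:
    return call_llm(
        f"From this estate planning document:\n{document_text}\n\n"
        f"Answer concisely: {question}"
    )

def judge(document_text: str, question: str, answer: str) -> bool:
    """Second model pass: demand a supporting quote or reject the answer."""
    verdict = call_llm(
        "You are a verifier. Quote the exact passage supporting this answer, "
        f"or reply UNSUPPORTED.\nQuestion: {question}\nAnswer: {answer}\n"
        f"Document:\n{document_text}"
    )
    return "UNSUPPORTED" not in verdict

def extract_with_verification(document_text: str, question: str, retries: int = 2):
    for _ in range(retries + 1):
        answer = extract_provision(document_text, question)
        if judge(document_text, question, answer):
            return answer          # passed the verification layer
    return None                    # escalate to human review instead of guessing
```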
2. Human-in-the-loop isn't optional
We've built fact-checking mechanisms directly into our workflow. Users can easily verify and fact-check AI-generated information. But we go further: we proactively run evaluation checks ourselves.
This leads to our continuous red teaming approach. We maintain a team of domain experts who know estate planning inside and out. They can spot when AI-generated content is off, even slightly.
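
Mechanically, the sampling is the easy part; an illustrative sketch is below. The value is entirely in the experts who read the review queue.

```python
import random

# Illustrative sketch of red-team sampling: route a random slice of recent
# AI outputs into a queue for expert review.
def sample_for_review(recent_outputs: list, rate: float = 0.05) -> list:
    """Pick roughly `rate` of recent outputs (at least one) for expert review."""
    if not recent_outputs:
        return []
    k = max(1, int(len(recent_outputs) * rate))
    return random.sample(recent_outputs, k)
```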
Without this expertise, how can you evaluate whether your AI system is performing accurately?
3. Data security and privacy are paramount
In regulated fields, you need:
- End-to-end encryption
- Written guarantees that LLM inputs/outputs won't be used for training
- PII scrubbing before any fine-tuning (a simplified sketch follows this list)
- Consideration of self-hosted models to avoid shared infrastructure risks
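
On the PII-scrubbing point, here's that deliberately simplified sketch. Real pipelines layer NER models and human review on top of pattern matching like this:

```python
import re

# Deliberately simplified PII scrubber: masks SSNs, emails, and phone numbers
# before any text is considered for fine-tuning. Production systems go far
# beyond regex patterns like these.
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

print(scrub_pii("Call John at (555) 123-4567 or john@example.com, SSN 123-45-6789."))
# -> Call John at [PHONE] or [EMAIL], SSN [SSN].
```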
4. Avoiding unauthorized practice of law
This is subtle but crucial. Financial advisors worry about AI being "too smart," generating recommendations that could constitute the unauthorized practice of law if presented directly to clients.
The solution is smart agent design. Different users get access to different capabilities. Attorneys might access document-drafting agents, while financial advisors get analysis tools. It's about understanding who's using your system and designing appropriate guardrails.
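
As a minimal sketch of the gating idea (the role and tool names are hypothetical, not our actual configuration):

```python
from enum import Enum

# Sketch of role-based tool gating: which agent capabilities each user type
# can reach. Role and tool names are illustrative.
class Role(Enum):
    ATTORNEY = "attorney"
    FINANCIAL_ADVISOR = "financial_advisor"

ALLOWED_TOOLS = {
    Role.ATTORNEY: {"draft_amendment", "analyze_document", "summarize_plan"},
    Role.FINANCIAL_ADVISOR: {"analyze_document", "summarize_plan"},  # no drafting
}

def authorize(role: Role, tool: str) -> bool:
    """Gate every tool call on the user's role before the agent may run it."""
    return tool in ALLOWED_TOOLS.get(role, set())

assert authorize(Role.ATTORNEY, "draft_amendment")
assert not authorize(Role.FINANCIAL_ADVISOR, "draft_amendment")  # UPL guardrail
```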
How we actually make this work
Let me walk you through our human-in-the-loop architecture, which coordinates humans and AI agents at every step.
Users interact with a supervisor agent that routes queries to expert agents: financial advisor agents, trust and estate attorney agents, and CPA agents. Each has specific tools and permissions based on user type.
A judge agent verifies accuracy before anything reaches the user. When results are presented, we collect feedback on accuracy. If users consent to training, we feed this back into the system, but only after anonymizing all PII.
Our red team samples AI-generated content continuously, evaluating agent performance. This creates a feedback loop where the system genuinely improves over time through fine-tuning and refined prompting strategies.
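
In code, the routing skeleton looks roughly like this. Every name is illustrative, and in production the routing decision is itself a model call:

```python
# High-level sketch of the routing flow described above. Each real agent
# wraps its own prompts, tools, and permissions.
from typing import Callable, Dict

ExpertAgent = Callable[[str], str]

def advisor_agent(q: str) -> str:  return f"[financial advisor analysis of: {q}]"
def attorney_agent(q: str) -> str: return f"[T&E attorney analysis of: {q}]"
def cpa_agent(q: str) -> str:      return f"[CPA analysis of: {q}]"

EXPERTS: Dict[str, ExpertAgent] = {
    "investment": advisor_agent,
    "trust": attorney_agent,
    "tax": cpa_agent,
}

def judge_approves(query: str, draft: str) -> bool:
    return True  # placeholder for the verification layer sketched earlier

def supervisor(query: str) -> str:
    # Keyword routing keeps this sketch self-contained; a real supervisor
    # classifies the query with a model.
    topic = next((t for t in EXPERTS if t in query.lower()), "trust")
    draft = EXPERTS[topic](query)
    if not judge_approves(query, draft):   # judge agent gates every answer
        return "Escalated to human review."
    return draft

print(supervisor("Does this trust minimize estate tax exposure?"))
```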

The UX challenge nobody talks about
Here's a hard truth: humans are inherently lazy. We want AI to be an autopilot, not a copilot. That's how you end up with lawyers submitting AI-hallucinated legal briefs without fact-checking.
Good AI products need to enforce verification through design, not policy. At Wealth.com, we've built a UX that makes fact-checking feel natural, even easy.
For example:
When AI generates an executive summary from a 100-page document, each bullet point includes citations. Click a citation, and you're not just taken to the right page; we overlay a bounding box on the exact text the AI referenced. No hunting, no guessing.
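
Under the hood, each citation only needs a small payload. Here's an illustrative shape (the field names are assumptions, not our actual schema):

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative payload behind each cited bullet point: enough for the UI to
# jump to the page and draw a box over the referenced text.
@dataclass
class Citation:
    document_id: str
    page: int                                # 1-indexed page in the source PDF
    bbox: Tuple[float, float, float, float]  # (x0, y0, x1, y1) in page coordinates
    quoted_text: str                         # the exact text the model relied on

summary_point = {
    "text": "The trust names a successor trustee upon incapacity.",
    "citation": Citation(
        document_id="smith-trust-2009.pdf",
        page=42,
        bbox=(72.0, 310.5, 520.0, 348.0),
        quoted_text="Upon the incapacity of the Trustor...",
    ),
}
```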
Users who lack domain expertise can request manual review by our in-house legal team. It's about making the right thing the easy thing.
Key lessons from the trenches
Building AI for estate planning has taught me several crucial lessons:
Domain expertise equals engineering excellence
All the fancy retrieval methods, few-shot prompting techniques, and fine-tuning approaches mean nothing without domain experts who can verify accuracy. You need in-house human resources who understand the field deeply enough to curate training data that actually reflects reality.
High-quality data is everything
Models will get smarter and cheaper over time. What won't change is your need for ground-truth labeled datasets. You need consistent evaluation sets that work across model versions. Just because Gemini 2.5 is "better" than 2.0 doesn't mean your prompts will work the same way.
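
One way to make that concrete: freeze the labeled eval set and re-score every candidate model against it. A minimal sketch, with `run_model` as a placeholder for your client and illustrative eval cases:

```python
# Sketch of a model-version regression check: the labeled eval set stays
# frozen while the model under test changes.
def run_model(model_name: str, prompt: str) -> str:
    raise NotImplementedError("wire up your model provider here")

EVAL_SET = [  # ground-truth pairs curated by domain experts
    {"prompt": "Who is the successor trustee?", "expected": "Jane Doe"},
    # ...hundreds more labeled examples
]

def accuracy(model_name: str) -> float:
    correct = sum(
        case["expected"].lower() in run_model(model_name, case["prompt"]).lower()
        for case in EVAL_SET
    )
    return correct / len(EVAL_SET)

# Gate upgrades on the fixed eval set, not on vibes: if the new model scores
# worse, keep the old model (or rework your prompts) until it doesn't.
```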
AI should empower, not replace
The goal is to amplify human expertise. But this requires thoughtful product design that acknowledges human nature. Build systems that make verification smooth and natural, not a friction point users will skip.
Adaptability keeps you future-proof
Sam Altman once said you can build AI startups in two ways: (1) around a specific model, or (2) by banking on models getting better. The "OpenAI killed my startup" meme exists for a reason.
Build model-agnostic systems. Gather user feedback continuously. Let complaints guide improvements.
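
"Model-agnostic" can be as simple as one interface boundary. A sketch (class and method names are illustrative):

```python
from abc import ABC, abstractmethod

# Sketch of a model-agnostic boundary: application code targets one interface,
# and providers plug in behind it.
class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        ...  # call a hosted API here
        return ""

class SelfHostedProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        ...  # call an internally hosted model here
        return ""

def summarize(doc: str, llm: LLMProvider) -> str:
    # The app never imports a vendor SDK directly, so swapping providers
    # (or running a bake-off between them) is a configuration change.
    return llm.complete(f"Summarize the key provisions:\n{doc}")
```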
The future of agentic estate planning
Looking ahead, I see estate planning becoming truly agentic. Not just AI helping financial advisors, but entire teams of expert agents collaborating: trust and estate lawyers, CPAs, compliance experts, each represented by specialized AI agents working together seamlessly.
Imagine a world where updating your estate plan doesn't require months of meetings and thousands of dollars in legal fees, and where tax optimization strategies are continuously evaluated. Where complex multi-generational wealth transfers are modeled and remodeled as laws change.
That's the future we're building at Wealth.com. And honestly? Given that $84 trillion wealth transfer is heading our way, we can't make it fast enough.
The bottom line
Estate planning represents everything challenging about applying AI to regulated industries. The documents are complex, the stakes are high, and mistakes have real consequences. But it also represents the incredible opportunity AI offers.
By building systems with the right safeguards, exceptional accuracy, human oversight, robust security, and thoughtful UX, we can transform an industry that desperately needs it, and make estate planning accessible, efficient, and effective for the millions currently going without.
The question isn't whether AI will transform estate planning. It's whether we'll build it right. And from where I'm standing, with the right approach to accuracy, human collaboration, and continuous improvement, the answer is absolutely yes.
Because when $84 trillion is on the line, "good enough" isn't good enough. We need AI that financial advisors, lawyers, and clients can truly trust. And that's exactly what we're building.

