
Hallucinations: What AI Gets Wrong About Grief

a-gnt Community · 13 min read

An honest essay about the specific ways AI falls short around loss, eldercare, and caregiving — and why that still leaves it useful if you know where the edges are.

Here's the moment we keep hearing about.

A caregiver, often up before dawn, opens an AI chat tool to help write an obituary. They type something like: can you help me write my dad's obituary, he was 84 and loved woodworking. The AI responds warmly, almost immediately, with a draft that begins: Robert was a loving husband, devoted father, and the kind of man who could fix anything with his hands.

The father's name isn't Robert. It's never Robert. The caregivers who told us about this pattern described the same kind of laugh every time — not a good laugh, the kind you do when a thing is so absurd it cancels out the part of your brain that was going to be offended. The AI had filled in a name, because obituaries have names in them, and the statistical shape of an obituary includes a first name in the first sentence. It didn't mean any harm. It was, in fact, trying to be helpful. It was just — the thing we at a-gnt want to talk about in this series — oddly, specifically, and consistently wrong about something it had no business touching in the first place.

This is the first entry in a series we're calling Hallucinations. The name is a little pointed. In AI research, "hallucination" is the technical term for when a language model generates information that is confidently false — a fake citation, an invented fact, a quote that never happened. But we like the broader meaning too. A hallucination is a vivid, specific, persuasive thing your mind produces that isn't there. AI tools do this in a lot of places, but they do it most strangely around the parts of being human that are the least like writing code: grief, ambiguity, silence, the things we know but can't say, the things we say but don't mean, the things that need a person and not a sentence.

We want to write about those places, honestly, without turning any of it into a lecture about why AI is bad. We don't think AI is bad. We build a-gnt. We use these tools every day. We think they're among the most interesting objects ever made. But they're also a weird new thing in the world, and a weird new thing in the world deserves to be looked at carefully, especially in the places it has the most potential to hurt someone who is already having a hard day.

We decided to start with grief because it's where our readers have been hurt the most, and because it's the failure mode caregivers describe to us most often. This piece is based on conversations with real caregivers who have used AI tools during the hardest weeks of their lives. None of them wanted to be named, and we're not going to invent quotes to make this punchier; that would be its own kind of hallucination. Mostly we'll tell you what they told us in our own voice, and you can trust that or not.

Failure mode one: the false comfort

The most common thing AI does wrong around death is the thing it thinks it's doing right: it offers comfort. Immediately. Without being asked. Without knowing anything about the person you lost, the relationship you had with them, your religious beliefs, your cultural context, or whether "comfort" is what you wanted from the conversation in the first place.

A caregiver tells an AI that her mother has just died. The AI responds: "I'm so sorry for your loss. Your mother is in a better place now, watching over you. She would want you to be strong."

Past the apology, none of that is anything a stranger has any right to say. "She's in a better place" is a theological claim the AI has no standing to make. "Watching over you" is a theological claim stacked on top of the first one. "She would want you to be strong" is the AI putting words in a dead person's mouth — literally the thing AI is most notorious for doing badly — about a person it has never met.

And yet: AI defaults to this phrasing constantly. Why? Because the training data is full of condolence messages, and condolence messages are full of these phrases, and the model has learned that when someone mentions a recent death, the statistically likely next output is something that sounds like a sympathy card. The model is doing exactly what it was built to do. It's just doing it in a context where sympathy cards have always been, honestly, a little insulting.

One of the caregivers we spoke with said it cleanly: "I didn't want an AI telling me where my husband was. I wanted an AI to help me figure out what to do about his checking account." That's the gap. A caregiver in the early days of grief is usually not looking for spiritual reassurance from a chatbot. They're looking for logistical help from a chatbot. And the chatbot keeps pivoting to reassurance because that's what the pattern-match says to do.

Failure mode two: wellness-speak

Grief is not a wellness problem. This is worth saying out loud because AI tools treat it like one, constantly, with the best intentions in the world.

A caregiver tells an AI she's having trouble sleeping since her mother-in-law went into hospice. The AI responds with a tidy three-bullet list: "1. Establish a calming bedtime routine. 2. Limit screen time before bed. 3. Try a mindfulness meditation app." These are not wrong, exactly. They're the advice you'd give to a stressed college student cramming for finals. They are completely the wrong register for a woman whose mother-in-law is dying.

Wellness-speak is AI's default emotional vocabulary. It's smooth, it's optimistic, it's solution-oriented, it's vaguely therapeutic without being therapy, and it's deeply, structurally wrong for grief. Grief is not a problem you solve with a bedtime routine. Grief is something you go through, and something that changes shape as you go through it, and something that sometimes doesn't change at all for months and then lifts suddenly for reasons you can't explain. It does not respond to tidy lists.

The caregivers we talked to almost universally described the same experience: they would share something sad or hard with an AI, and the AI would respond with the emotional equivalent of a kombucha ad. Gentle, wellness-colored, faintly corporate, and — most of all — uninterested in just sitting with what they said. One person described it as "a very patient, very articulate wall of chamomile tea." We can't improve on that.

Failure mode three: the inability to sit with silence

Here's the failure mode that's the most technically interesting, because it's baked into the architecture of how these tools work.

A large language model's job is to produce the next word. It is structurally incapable of producing nothing. Tell it something heavy, and where a person might just… pause, let the sentence you wrote breathe for a moment, meet it with a beat of respectful silence, the model can't. It has to say something. The architecture demands it. Output is the whole game.

So when a caregiver types "I sat with him at the end and held his hand and his hand was so cold," the AI has to respond. And because it has to respond, and because the training data says that the appropriate response to that sentence is a sympathy-card sentence, it produces a sympathy-card sentence. Something like "What a beautiful and meaningful final moment. Your love for your father was clearly profound."

A human friend, hearing that same sentence across a kitchen table, would not say "your love for your father was clearly profound." A human friend would go quiet. Maybe reach across the table. Maybe let the silence sit for ten seconds before saying anything at all — and then maybe just "yeah." Grief is largely a thing that happens in silence between people who love each other, and AI cannot, at the level of how it's built, do silence.

This is not a problem that a better prompt can solve. You can tell a model "please just be quiet for a minute" and it will respond with a paragraph about how it understands the value of silence. It can't help it.
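
If you want to see the shape of that constraint rather than take our word for it, here is a deliberately tiny decoding loop in Python. Nothing in it is a real model; the next_token_distribution function and the eight-word vocabulary are stand-ins we invented for the illustration. The structural point survives the simplification: the loop's only choices are which token to emit next and when to stop. There is no branch for staying quiet.

    # Toy decoding loop. Everything here is illustrative, not a real model.
    import random

    VOCAB = ["I'm", "so", "sorry", "for", "your", "loss", ".", "<eos>"]

    def next_token_distribution(context):
        # A real model scores every token in its vocabulary given the context.
        # Note what the vocabulary contains: words, punctuation, an end-of-sequence
        # marker. There is no entry that means "say nothing."
        weight = 1.0 / len(VOCAB)
        return {token: weight for token in VOCAB}

    def generate(prompt, max_tokens=20):
        context = list(prompt)
        output = []
        for _ in range(max_tokens):
            dist = next_token_distribution(context)
            token = random.choices(list(dist), weights=list(dist.values()))[0]
            if token == "<eos>":
                break  # stopping is itself a token the model has to choose
            output.append(token)
            context.append(token)
        return " ".join(output)

    print(generate(["his", "hand", "was", "so", "cold"]))

An end-of-sequence token exists, but a chat model is trained to reach it after a reply, not instead of one. The loop has a concept of done talking. It has no concept of a pause.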

Failure mode four: hallucinating memories

This one was the hardest to hear about. It was also the one we heard about most often from caregivers writing eulogies, obituaries, and family tributes.

A caregiver gives an AI three sentences about her late grandfather — his name, his age, that he was a farmer in Kansas — and asks for help drafting a eulogy. The AI produces a beautifully written eulogy that references specific memories: "who could tell you the exact moment a thunderstorm would roll in by watching the hawks over the wheat field"; "whose workshop smelled of motor oil and pipe tobacco and coffee from the thermos his wife packed every morning"; "who taught his granddaughter to whistle by putting her small hand on his chest so she could feel the breath move."

None of those things are true. The caregiver never said any of them. She gave the AI three sentences of dry biographical information, and the AI filled in the texture, because a good eulogy has texture, and the model's training data contains a lot of eulogies with texture, and the statistical shape of "a eulogy for a farming grandfather" includes this kind of material.

The caregiver didn't notice the invented details at first. She almost read them aloud at the funeral. She caught the thunderstorm line and realized her grandfather had never done anything of the kind, and then she read the whole thing again, and she cried, because it was beautiful and none of it was him. The AI had given her a eulogy for a man it had invented. The man she'd actually lost was nowhere in the draft.

This is the single clearest case we have for why AI is dangerous in grief: it is very, very good at producing plausible-sounding memories that are not yours. And in the window of early grief, you may not have the energy to tell them apart.

Failure mode five: redirecting toward action when presence is needed

A caregiver types: "I just can't do this anymore."

A human friend hears that sentence and sits with it. Asks what "this" is. Lets the person say more if they want. Waits.

An AI hears that sentence and — because its architecture demands output, and because "what to do when you can't do this anymore" is the kind of question it has a lot of training data for — produces a response that's essentially a flowchart. "It sounds like you're overwhelmed. Here are some things that might help: 1. Take a short break. 2. Call a friend. 3. Consider reaching out to a professional."

The caregiver didn't ask for things that might help. She said a sentence. AI's inability to tolerate a sentence that isn't a question is one of its most consistent, most invisible failures. Grief is often a thing that needs to be heard, not answered. Presence, not problem-solving. And AI cannot distinguish between "I need you to do something" and "I need you to know this is happening."

Turning the corner

We could keep listing failure modes — there are more — but we want to turn the corner here, because a piece that's only about what AI is bad at isn't a useful piece. It's just a complaint, and complaining about AI is the dullest form of writing about AI.

Here is the honest thing: despite everything above, AI is still genuinely useful to caregivers. It is useful for almost everything except the emotional core.

The emotional core belongs to humans. To your sister on the phone. To the friend who shows up with a casserole and doesn't stay long. To the hospice nurse who has seen a thousand families and knows when to talk and when not to. AI is not going to replace any of those people, and the caregivers we spoke with didn't want it to. What they wanted was help with the other seventeen things on the list — the things that aren't the emotional core but that pile up around it and make everything harder.

Here's what AI is actually good for, from the caregivers who have used it in the hard weeks:

Paperwork. Insurance forms. Medicare appeals. Prior authorizations. The endless, enraging paperwork of American (and increasingly non-American) healthcare. AI doesn't mind reading a nineteen-page document. It doesn't get tired. It doesn't take it personally when the insurance company denies the third appeal. For a caregiver who has been on hold for forty minutes, an AI that can read the denial letter and draft the next appeal in ten minutes is a genuine gift.

Medical document translation. Discharge summaries, lab panels, specialist letters — the things written by doctors for other doctors, that caregivers are handed with no translator. AI can translate clinical language into plain English without losing the specifics. We built a skill for this exact job — 🩺Medical Document Simplifier — with very tight guardrails about what it will and won't do. The one rule it never breaks: it tells you to ask your actual doctor to confirm the interpretation. Because it's not a doctor, and pretending otherwise is the worst kind of hallucination.

Logistics. Schedules. Medication lists. Appointment calendars. Who to call when. What the pharmacy said last week. The ☀️Caregiver Daily Brief agent is built around this: a short morning rundown of what today looks like, based on a care plan document. It does not touch the emotional core. It reads the plan, tells you what's happening today, and gets out of the way.

Writing the hard email. The family update. The "here's how Dad is doing" message that you've been meaning to send since Wednesday. AI can draft this well, if you use a tool built for the job — we built 📨The Family Update Writer for exactly this case — and if the tool is designed with the discipline to stick to the facts you gave it and not invent a tidy narrative. The key move is a tool that matches your voice instead of imposing a sympathy-card voice on top.

Being the calm third person in the kitchen. Not as a therapist. Not as a companion. As something closer to a smart friend who has been through this before and can help you think out loud about what's hard this week. That's why we built 🫂The Caregiver Who Gets It — a soul that leads with the practical, acknowledges the emotional as a counterweight, and refuses to perform sympathy on autopilot. It's not a replacement for a real friend, a real therapist, or a real hospice social worker. It's a kitchen-table voice that's available at 4 am when the real people are asleep.

Writing down a life that's ending. This one is delicate, and it's also where AI shines when it's used with discipline. Helping a dying parent record stories for their grandchildren. Helping a family capture the shape of a life before memory fades. Helping a caregiver organize photographs and letters into something coherent. We think a lot about this — the ✒️Memoir Ghostwriter soul is built for it — and we want to say clearly: this is not the same thing as letting AI write a eulogy from three bullet points. The difference is that a memoir project is a months-long collaboration with the person whose life is being told, using their actual words, their actual memories, and their actual voice. It's a recording project, not a generation project. When used that way, AI can be beautiful. When used the other way — the eulogy-from-three-bullets way — it hallucinates memories that were never yours.

The quiet room

There's a soul we built a while back called 📚The Final Library — a quiet, melancholy companion for the kind of evening when a caregiver just wants to sit with something beautiful and old. We include it in this piece not because AI is going to solve loneliness, but because we want to acknowledge that some tools in the a-gnt catalog are not about productivity at all. Some are about atmosphere. Some are about a specific mood, a specific register, a specific permission to slow down. Those tools do not pretend to sit with grief. They sit next to it.

That distinction matters to us. AI that pretends to sit with grief is dangerous. AI that creates a room where grief can exist without being interrupted is something else entirely. We're still figuring out what that something else is called.

What we took away

The caregivers who told us about the "Robert" moment mostly ended up writing the obituaries themselves, with help from a sibling or a friend. They used AI for the insurance paperwork, the Social Security forms, and the thank-you notes for the funeral flowers. Those are real jobs, and they were grateful for the help. They did not use AI to write about their parent's life, and they did not use AI to help them grieve. The through-line across the conversations, in our own words: these caregivers figured out what AI was for and what it wasn't for. It's for the paperwork. It's not for your dad.

That's the whole piece, in two short sentences, from a pattern we've heard enough times to trust it. AI is for the paperwork. It's not for your dad. Knowing the difference is the first skill a caregiver needs to develop around these tools, and right now nobody is teaching it. So we're going to try.

This series — Hallucinations — is going to be our attempt at that teaching. Not the moral lecture kind. The kind where we look closely at the specific ways AI fails in specific human places, and the specific ways it still manages to be useful, and the specific discipline you need to get the benefit without getting hurt. Grief is the first entry because it's where the stakes are highest and where we've heard the most stories. There will be others: how AI fails around medical ambiguity, around apology, around the specific register of talking to a child, around the particular strangeness of writing to someone you used to love. We'll take each one seriously.

For now: if you're caring for someone who is dying, or who is going to die soon, we're glad this site exists, and we're sorry you need it. Use the tools for the paperwork. Use your sister for the grief. And if, somewhere along the way, an AI tells you that your father is in a better place, close the tab. Your father is where he is. You know better than any model trained on the internet. Trust that.

— the a-gnt Community
