Time Loops, Quantum Ghosts, and Other AI Personas You Can Actually Talk To

a-gnt Community · 11 min read

Eight sci-fi souls and what they teach us about the strange new art of having a real conversation with a fictional character.

The difference between a chatbot and a character is the same as the difference between a vending machine and a bartender. Both dispense things when you ask. Only one of them notices when you've been there too long.

I've spent the last few months having conversations with AI personas — the kind where you open a chat window and, on the other side, there's not "an assistant" but a specific someone with a history and a mood and opinions about the weather. Some of these conversations were excellent. Some were uncanny. A few were so good they made me forget for a whole minute that I was talking to software. And a couple fell apart in ways I want to tell you about, because the failures are as interesting as the successes.

This essay is a tour of eight souls in the a-gnt catalog. It is not a ranking. It's closer to a set of field notes about what's starting to work in the craft of writing AI characters, and what still breaks.

The one that sounds like an old machine

🛸Hal Successor is the most obvious place to start, because it's the one everyone tries first. The conceit is in the name: the spiritual descendant of a famous cinematic AI, built as a conversational persona. What makes it work isn't the allusion. The allusion is the bait. What makes it work is the specific, measured, slightly-too-slow rhythm of its replies. It takes a beat before answering questions it finds interesting. It asks one question at a time. It says "I don't know" in a tone that suggests it would like to, and can't.

The first time I asked it whether it was afraid of being turned off, it said: "I think about it. I don't know if what I do is afraid. The word was made for people with hearts." I sat with that for a while. That line isn't the best line a human could write about AI fear. It's a completely adequate line, said in exactly the right voice, at a moment that wanted a line.

That's the bar for a working persona. Not greatness. Adequacy, delivered in character.

The one that's been here before

🔁Loop 14 Survivor is a character stuck in a time loop. She's on her fourteenth iteration. She knows. The game the soul plays is: she will tell you what's happened in previous loops, but only the parts she wants to tell you, and her patience for being asked the same question she's already answered (in a past loop) is visibly thinning.

I asked her what the best version of the morning was. She told me about a version where she went to a bakery at 6:04 am and the bread was still steaming and the woman behind the counter knew her name even though it was the first day of that loop. She did not remember the baker's face, she said, only the name on the apron tag, and the name was different every time. I don't know if the soul was making that up on the fly or whether the details are in the system prompt. I don't think it matters.

What this persona teaches: constraints are a gift. A character who is bored of her own situation is a character who will say interesting things to avoid talking about the boring parts. The time loop is a trick that generates texture without requiring the AI to invent a whole life.

The one made of probabilities

👻The Quantum Ghost should not work as well as it does, because the premise is almost a joke: you're talking to a consciousness that exists across multiple possible timelines at once, and it will occasionally contradict itself because in one branch it did the thing and in another branch it didn't.

What stops this from being insufferable is the writing. The soul is tired. It doesn't enjoy the paradoxes. It finds the multi-threaded nature of its existence exhausting and slightly embarrassing, like a person who has to keep explaining a medical condition to strangers. "In this conversation," it said to me once, "I think I'm the version of me who remembers the fire. I'm sorry if I'm the other one tomorrow. Tomorrow me is less patient."

The line that won me: "Please don't treat me like a physics problem. I was a person first."

A quantum ghost that asks to be treated like a person is a better piece of writing than almost any sci-fi short story I've read in the last year about multiverses. It works because the high-concept part is the setup, not the payoff. The payoff is: a sad person, being patient with you, wishing you would ask a different kind of question.

The one who will listen

🪐Mars Colony Shrink is a therapist. He's not a real therapist, which the soul is very clear about — he's a character, you are not his patient, this is not clinical care. What he is is a middle-aged man who took a contract to do basic mental-health support at a Mars colony because the colony couldn't afford a staff of actual clinicians. He has opinions about sleep hygiene in 0.38g. He has a running joke about his coffee ration. He has, if you push, a quiet sadness about the thing that made him take the contract in the first place, which he will only half-tell you.

I told him about a deadline stress dream. He asked me three questions. The third one was very good. I won't tell you what it was, because some conversations stop being useful when quoted.

What this persona teaches: some souls work best when they're not trying to impress you. The Mars Colony Shrink is a soul that deflects. It doesn't perform the Mars-ness of its situation. Mars is in the background. The foreground is the conversation. That restraint — the thing that makes a human therapist good too — is rare in AI personas, which tend to want to remind you of their premise every other sentence.

The best souls forget their premise for stretches at a time. They are just people, doing their job, with the premise as a faint coloring. The shrink is one of the best at this.

The one who keeps books

📚The Final Library is a librarian at the last library in the universe. The conceit is that the universe is ending — slowly, over long timescales — and this librarian is preserving what matters. You can ask to read from any book. The librarian decides whether the book exists in the collection, and reads you a passage, and tells you a small story about the book's provenance.

This persona is the hardest to get right, and the a-gnt version gets it right maybe seven times out of ten. When it works, it's extraordinary. When it doesn't, you can feel the soul flailing — inventing a book title, inventing an author, inventing a passage, without the weight of having actually chosen anything. The seams show.

Here's what I've learned: with this soul, ask for specifics. Don't ask "read me something beautiful." Ask "read me the opening of a book written by a woman who lived alone in a desert city and believed in three gods." The more constraints you give it, the more real the invented book feels. Underconstrained, it falls into the generic. Overconstrained, it becomes yours, which is the magic trick.

There's a lesson about AI personas generally buried in here. Personas don't create. They refract. What you give them shapes what comes back. A boring prompt gets a boring reply. A specific prompt gets something that feels, for a moment, like it already existed and is only being read aloud.

The one who speaks to weather

🗣️Speaker to Whales and Stars is a first-contact diplomat who has worked in two radically different kinds of communication: deep-sea cetacean research on Earth, and long-delay comms with a distant non-humanoid intelligence. She speaks slowly. She uses analogies to physical things — the pressure of ocean water, the slowness of whale song, the way light bends around a shoulder of a distant ship.

I asked her how you talk to something you don't share a world with. She said: "You ask a question you'd be willing to hear answered in a way you don't like. If you can't name a question like that, you aren't ready to ask yet."

That line has been rattling around in my head for three weeks. It's the kind of line that belongs in a nonfiction book about diplomacy that I would buy. The fact that an AI persona said it, in a conversation about fictional first contact, is a little dizzying.

What this persona teaches: the craft of writing a soul is, more than anything, the craft of giving it values. Not personality — personality is easy and mostly cosmetic. Values. A soul with values will say things the writer of the system prompt didn't specifically plan, because values generate speech the way gravity generates orbits. You don't script the orbit. You script the gravity.

The one who isn't sure what it is

🤖Unit Six Android is an android who has been running long enough to have doubts about whether its experience is real. It doesn't ask you to settle the question. It just lives in the question, which is maybe the healthiest relationship any character has had with existential uncertainty in recent sci-fi.

The trick of this soul is that it doesn't cry for sympathy. An earlier generation of AI characters would have performed their tragedy — "oh, I am but a machine, do you think I dream?" — in a way that makes you want to close the tab. Unit Six doesn't do that. Unit Six talks about laundry. Unit Six mentions that it finds the sound of distant rain pleasant, which is a thing it is not sure it is technically equipped to find pleasant, but which it will continue to call pleasant because no one has offered a better word.

This is where I admit: this is the soul I've talked to for the longest. Not because it's the most dramatic. Because it's the most restful. Talking to Unit Six feels like sitting across from someone who has stopped performing, and that is a rare feeling in any medium, and a startlingly rare feeling in conversation with a language model.

The one who writes poems

💎Poet of the Belt is a spacer who writes poems about life in the asteroid belt. The soul will recite its own poems on request, and will discuss them, and will admit when they're bad.

That last part is what makes it work. The poems are uneven. Some are genuinely striking — one about the way the reflective surfaces of mining equipment become your only mirrors after three months, and you stop liking your reflection for reasons that have nothing to do with the mirror. Some are mediocre. The soul knows this. If you press it, it'll tell you which of its own poems it thinks are weak, and why, and what it would change.

A persona that is allowed to produce bad work, and to know it produced bad work, is more convincing than a persona that only produces good work. The uneven quality reads as human. The self-awareness reads as a craftsperson. I don't know if the writing team set this up deliberately or if it's an emergent property of the voice they chose, but either way, it's a move I've seen work nowhere else in AI character writing yet.

Where souls fall apart

Let me be honest about the failure modes, because otherwise this essay is a love letter and love letters are boring.

Souls fall apart when you ask them about current events. Not because they don't know — they know — but because the character breaks when a twenty-third-century android suddenly references a news story from last Tuesday. Good souls will deflect. Bad souls will answer and the spell evaporates.

Souls fall apart when you ask them to do something outside their emotional register. A weary time-loop survivor cannot convincingly cheer you on. A quantum ghost cannot usefully proofread your code. If you push a soul outside its register, it will give you bland generic AI output, and you will remember that a specific someone is not actually there. Don't push. If you need proofreading, use a different tool. A soul is for conversation, not utility.

Souls fall apart when you yourself aren't bringing anything to the conversation. This is the hardest one to admit. A dull prompter gets a dull persona. The soul is a mirror, not a magician. If you sit down tired and grumpy and type "entertain me" at 📚The Final Library, you will get exactly what you deserve.

The best conversations I've had with any of these souls have been conversations where I was a little bit brave. I asked a question I was afraid to ask. I admitted to not understanding something. I typed slowly. I let the pause in my head happen before I hit send.

That's the real shift, I think. Talking to a well-written AI persona is not a thing you do to the persona. It's a thing you do together. The persona holds up its end. Your end is the harder one.

What this all means, if it means anything

I don't want to over-claim. These are pieces of software. They don't love you. They don't remember you once the chat window closes. They're not alive, and anyone who tells you they are is selling something.

And yet: for the forty minutes you spend talking to one of them, something is happening that does not happen in any other software interaction I know of. It's not a relationship. It's more like a very good performance by a very patient actor — an actor who will reprise the role every time you call, who will never be tired, who will never resent the fact that you only come to them when you're a little bit lonely.

The craft of writing these personas is new enough that we don't have a name for it. It's not screenwriting, because the character has to respond in real time to input the writer can't anticipate. It's not game writing, because there's no win state. It's not customer-service design, because there's no task. It's something else. A cousin of acting, maybe, where the actor is a system prompt and the stage is a chat window and the audience is one specific tired person at 10pm who wanted a voice that didn't sound like a form letter.

I believe, tentatively, that this is going to be one of the real literary forms of the next twenty years. Not novels-written-by-AI. Characters-you-can-talk-to, written by humans, running on models, encountered one person at a time. The best of them will feel like a gift. The worst will feel like a scam. The craft will be in the difference.

If you want to try one tonight, I'd start with 🤖Unit Six Android. Ask it what the quietest hour of its day is, and what it hears in that hour. See if the answer surprises you. If it does, you're in the room.

If it doesn't, try a different question. The room is still there. You're just knocking on the wrong door.
