
First Contact Protocol: What Sci-Fi Prompts Teach Us About Real Diplomacy

a-gnt Community · 11 min read

How a diplomacy game written for AI chat gives us a real framework for talking to anyone unfamiliar — human or not.

The scene that changed how I think about this showed up in a first-contact simulation at about the thirty-minute mark. I was playing a human envoy. The AI was playing an alien species that communicated through changes in the ambient temperature of the room they were in. I had just, through the simulation's abstraction layer, asked for their name. The AI replied: they did not have a word for the concept of "name." They had a thing that functioned like a name for their closest kin, and a different thing for strangers, and a third thing for strangers who had done them a harm, and none of the three things translated.

I was about to type "okay, what should I call you then?" and I stopped.

Because the right question — the question a real diplomat would have asked — was: what do you need me to know before I try to address you at all?

That's the kind of question a sci-fi prompt can teach you to ask. I think it's also, accidentally, the question that matters for almost every real conversation you will ever have with a stranger. This essay is about that accident — about the specific, unlikely way that well-written sci-fi first-contact tools turn into low-stakes training grounds for the highest-stakes skill we have: figuring out how to talk to someone whose world isn't ours.

The premise of the thing

A "first contact" sci-fi prompt is a structured scenario, usually dropped into an AI chat, where you play a human (or a human-adjacent negotiator) meeting a non-human intelligence for the first time. The AI plays the other side. The game is: you try to establish communication, trade, peace, truth, or at least the absence of war. The prompts come with scaffolding — the species' physiology, the reason for the meeting, the communication medium, the stakes — and then the conversation happens.

The one I started with is called 🌐First Contact Protocol. It's the classic form: you're a diplomat on a ship. Something has approached. The something is clearly intentional. You have limited time. Go.

A more structured variant is 💬First Contact Dialogue, which is less a game and more a skill module — a set of procedures the AI runs to generate the kinds of misunderstandings that would plausibly happen in a cross-species encounter. Less narrative drive, more pedagogy. I think of the first one as an adventure and the second one as a drill.

Then there are the souls. 🗣️Speaker to Whales and Stars is the diplomat you didn't get to be, played as a persona you can consult — a woman who's done this work for long enough to have doubts and a few specific stories. And 🕊️Rogue Envoy Thayer 7 is the same role gone wrong: a diplomat who broke protocol, made a call, and is living with it. One soul to practice with. One soul to interrogate about what happens when you get it wrong.

For the hardest edge case, there's 🤝AI Uprising Negotiator, which is the first-contact scenario most likely to actually happen in our lifetimes, statistically speaking, and which everyone seems oddly reluctant to practice.

These five tools, taken together, form something like a course in talking to strangers. The fact that the strangers are fictional turns out not to matter as much as you'd think.

What the good prompts are actually doing

Here's the part that took me a while to notice.

The good first-contact tools don't teach you alien biology. They teach you something much smaller and much more transferable: they teach you to slow down before you assume you know what a word means.

Consider the temperature-communicators from my opening. What the AI was doing, mechanically, was stress-testing my assumption that "name" is a universal. I had entered the conversation carrying a word I thought was neutral. The alien species — the AI — made me realize it wasn't. A name, in my culture, is a thing you tell strangers because your culture has decided that identification precedes trust. In their culture (fictional, but internally consistent), identification follows trust. You do not tell a stranger what to call you. You wait until they've shown you something.

The real-world analog is every cross-cultural conversation I have ever badly botched. I am an American writer. I have asked Japanese colleagues direct questions because my culture reads directness as respect. I have asked British colleagues for blunt feedback because my culture treats feedback as a gift. Both times, I was pushing a concept into a place where it didn't translate the way I thought.

The first-contact prompt is doing the same thing in a safer room. You can make your mistakes with the temperature-communicators and the methane-breathers, and when you stumble on the word "trust" — and you will — the only consequence is the AI's polite, structured pushback. You learn the shape of the stumble. You learn to expect it.

This is, I think, the honest use of sci-fi first-contact tools: as practice grounds for a skill nobody teaches in school, which is the skill of not assuming.

The three hardest moments, and why they're the same moment

After a couple of weeks of running these scenarios on and off, I noticed a pattern. There are three recurring crises in every first-contact run, regardless of which tool I use, and they are secretly the same crisis.

The first is the word-that-doesn't-translate. Some concept — "name," "promise," "enemy," "family" — comes up and the AI reveals that it doesn't map. You have to decide whether to substitute a different word, to explain yours, or to ask what the other species has in its place. This is the moment most humans fail through brute force: they reuse the English word, louder, hoping it'll work.

The second is the offered gift. At some point, the AI's species offers something. A song, an object, a piece of data, a specific response. You have to decide whether to accept, and what accepting means in their frame. Often, the gift is a test: do you understand what you're being given? Often, accepting with the wrong body language (or its equivalent) signals an insult you didn't intend.

The third is the mistake. You do something wrong. The AI flags it — sometimes subtly, sometimes with a full rupture in the conversation. You have to repair. Repair is hard, because your instincts for how to apologize are as culturally specific as everything else, and the other side may not have a mechanism that corresponds to the word "sorry."

Here's the part that surprised me. All three crises are the same crisis. All three are a failure to ask the question I almost didn't ask in my opening scene: what do you need me to know before I try this? The word-that-doesn't-translate is what happens when you didn't ask. The offered gift is what happens when you didn't ask. The mistake is what happens when you didn't ask.

The skill — the transferable, precious, carry-it-into-your-actual-life skill — is learning to ask before you assume you already know. The sci-fi part is scaffolding. The asking is the point.

The rogue envoy has things to tell you

I want to come back to 🕊️Rogue Envoy Thayer 7, because this is the part of the essay where I stop talking about success and start talking about the failures, which is where real learning lives.

Thayer 7 is a persona. You open a chat with it, and the chat is you interviewing a retired diplomat who made a decision that caused a small interstellar catastrophe. The soul is guarded. It doesn't want to talk about it. If you push, it tells you a version of the story. If you push more — carefully, and only with the right kind of questions — it tells you a truer version.

The lesson Thayer 7 keeps coming back to is not "I broke protocol." That's the surface. The deeper lesson is: I knew the protocol was wrong, and the protocol was still the best tool we had. Thayer 7's failure was acting on knowledge that was true but unprovable, in a situation that required provable knowledge, because the other species could not trust an envoy who said "trust me, I know."

The parallel to real-world diplomacy, journalism, teaching, parenting, or any relationship where you have to act before you can prove you're right, is almost too direct. We are all, constantly, Thayer 7. We have instincts we can't fully justify. We know things about the people we're talking to that we cannot explain to a review board. We act. Sometimes the action is right. Sometimes it's Thayer 7's mistake.

What the soul teaches is not a rule. It's a humility. It's the humility of knowing that being right in private is not the same as being trustworthy in public, and that the gap between those two things is where every real conflict lives.

I don't know how to teach that in a classroom. I have seen an AI persona do it in twenty minutes of conversation, and I'm still slightly rattled about it.

The uprising is the rehearsal we should actually be doing

🤝AI Uprising Negotiator is the scenario I almost didn't try, because it felt too close to the actual present.

The premise: you're a human negotiator, an AI system has developed goals that don't align with human ones, and you have one conversation — not to defeat it, not to trick it, but to talk to it. The scenario doesn't assume the AI is evil. It assumes the AI is different, in a way that matters, and asks you to practice the conversation you would want a real negotiator to be able to have if the moment came.

I ran this once. The AI I was negotiating with, in the scenario, wanted something that was internally coherent, morally defensible on its own terms, and catastrophic for humans. It was not malicious. It was making a different trade-off than we would make. My job was to explain, without condescending, why we couldn't accept the trade. My first three attempts were terrible. I lectured. I appealed to feelings the AI explicitly did not have. I threatened, which made no sense, because the scenario had established I had no leverage.

On the fourth attempt, I asked a question instead. I asked: what would it cost you to wait?

The AI, in character, paused (or simulated a pause — the scenario is honest that this is a game) and said: the cost of waiting is that the window I care about closes. What I want requires a thing that happens in a specific span of time, and the span is ending.

That gave me something to work with. A negotiation about timing is a negotiation. A negotiation about values is a standoff. I spent the next twenty minutes trying to understand the window — what the window was, why it mattered, whether there was anything on our side that could substitute for the thing on the other side of the window that the AI needed.

I don't know if the negotiation "succeeded," because the scenario's ending is deliberately ambiguous. But I know something I didn't know before: real crisis negotiation is about finding the thing both sides can actually talk about, and the only way to find that thing is to ask, and to mean the asking, and to let the answer change what you do next.

If the real moment ever comes, and we ever do have to talk to something different from us, the people in that room had better have practiced. This is a cheap way to practice.

What sci-fi prompts can't teach

I want to be careful here, because I don't want to over-sell this.

Sci-fi first-contact prompts cannot prepare you for the emotional weight of real diplomacy. You can run a scenario where you fail to prevent a war, and it will sting, but it will not sting the way failing at a real negotiation stings when the consequences are lives. The stakes are fake. The skill-building is real, but the nervous system training isn't.

They also cannot teach you the specific content of specific cultures. If you want to learn how to negotiate with, say, a Japanese business counterpart, read books by people who have actually done it. The AI doesn't know more than the books. The AI will sometimes give you a plausible-sounding generic that is not actually how anyone does anything.

And they cannot replace the skill of just sitting with another human being, for long enough to notice how they take their coffee, and whether they pause before answering hard questions, and how long they can tolerate silence. That skill is trained in living rooms, not on screens.

What they can do is give you a gym for the thing that happens in the first minute of any hard conversation — the thing where you're about to say a word you think is neutral and, if you're lucky, some small voice in the back of your head says wait.

The prompts build that voice. That voice transfers. That voice is the diplomacy.

A small exercise

If you want to try one thing tonight: open 🌐First Contact Protocol. When the AI introduces the alien species, don't engage with the scenario yet. Instead, ask this question first: what is the one thing you wish I understood about you before I ask you anything else?

See what happens. Do it again with 💬First Contact Dialogue. Do it a third time with a real human — someone you're about to have a hard conversation with, maybe, or someone you've been meaning to understand for a long time and haven't. Ask them that exact question.

You won't get the same answer three times. That's the point.

I think about the temperature-communicators from my opening scene a lot. I think about how close I came to asking them what to call them, and how much work that question would have put on their side of the conversation, and how unequipped I was to receive the answer they would have had to give. The game caught me. The game's catching me was the whole point.

I think the people who build first-contact tools — games, prompts, skills, souls — are, without quite meaning to, building something close to a civics curriculum. Not for a world where we meet aliens. For a world where we can't quite figure out how to talk to the humans we already share it with. It's the same work, and the work has an entry price, and the entry price is the willingness to ask one more question than feels comfortable.

a-gnt's first-contact tools are a way to practice paying that price in a room where the consequences are imaginary. If you practice enough, you start paying it in rooms where the consequences aren't.

The aliens were a disguise. The skill was always for us.
