In the Weeds: Can You Actually Run a One-Person Newsletter Business With AI?
A week-by-week account of trying. Where AI earned its keep. Where it was wrong. What broke. What the writer had to do anyway. With receipts.
The pitch is everywhere. You've seen it. Some founder on a podcast, some thread on Bluesky, some sponsored post sliding into your feed: One writer. Two thousand subscribers. Six figures. And AI does most of the work now.
It is not quite a lie. It is not quite the truth either.
I wanted to know what was actually on the other side of that pitch — not the dream version, not the cautionary tale, but the honest ledger. Where does AI earn its keep in a real one-person newsletter business, and where does it fall on its face hard enough that the writer has to pick it up and finish the sentence herself? So I ran the experiment. Not on a real person — more on that in a second — but on a composite I built carefully enough that its seams would be the same seams a real operator would see.
A note on the composite. Everything that follows is a thought experiment. "Dana" is not a real newsletter operator. She is a composite I built out of five conversations with real writers who run paid newsletters somewhere in the 1,500–4,000 subscriber band — the awkward middle where you're past the hobbyist phase but not yet at the "I have an assistant" phase. I assembled her subscriber count, her niche, her workflow, and her problems from those conversations. Any similarity to an actual newsletter you read is either coincidence or because the problems in this space are so consistent that a composite starts to sound like everyone.
Dana writes a newsletter about independent bookstores. 2,400 paid subscribers at $6/month. One essay a week, one interview every two weeks, one "what I'm reading" roundup every two weeks. She has been at it for three years. Her open rates are healthy. Her unsubscribe rate is low. She is slightly behind on everything, all the time.
Her question to me, reframed as the question of this piece: Can I use AI to claw back fifteen hours a week, and if I do, where does the work actually fall apart?
I gave her a week. I gave her a specific stack of tools. I watched what happened.
The stack
Before we get to the week, here's what she was working with. Not because the stack is the point — it isn't — but because when someone tells you "I automated my newsletter with AI," the first honest question is with what, exactly. Vague answers to that question are the tell.
- Claude, as the general-purpose writing assistant and editor.
- ✍️The Plain-Spoken Copy Editor — a persona I'd recommended to her a month earlier, tuned to catch the specific failure modes of newsletter prose (passive voice, buried ledes, the "three adjectives in a row" tic).
- 📬Newsletter Subject-Line Brutalist — a prompt I'd built for exactly this kind of operator. Not a subject-line generator. A subject-line editor that takes her drafts and tells her what's wrong with them, in the tone of a tired magazine editor who has seen this mistake before.
- Her existing ESP (she's on Beehiiv, but the specifics don't matter).
- A spreadsheet. Yes, still a spreadsheet. The best newsletter operations I've seen all have a spreadsheet somewhere in them.
That's it. No agent frameworks. No "AI pipelines." No six-figure automation course. A writer, a browser, and two tools she could describe in one sentence each.
Day 1 — Monday. The interview transcript.
Dana had a ninety-minute interview recorded on Friday with the owner of a bookstore in Portland, Maine, that specializes in used cookbooks. Her normal process for turning this into a publishable piece takes her about eight hours spread over two days: transcription (auto), cleanup (manual), pulling quotes, writing the framing, drafting, editing.
The AI cut this roughly in half.
Here is what it was actually good at: taking the raw auto-transcript — the one full of "um" and "like" and the five-second gap where the bookstore owner's cat knocked over a tin of loose tea — and turning it into a readable document. Not a publishable document. A readable one. The kind where you can scan the left margin and find the four moments that matter.
Here is what it was not good at: knowing which four moments mattered.
Claude, left to its own devices, thought the moments that mattered were the ones where the bookstore owner said something quotable about "the future of physical retail." Dana knew the moment that mattered was when the owner got quiet and said, "I bought this place the week my mother died, and I have never told anyone that until right now." Claude had flagged that passage as "sensitive — consider omitting." Dana had flagged it as the lede.
This is the thing about AI and interviews. The model is very good at surface. It is bad at weight. A quote that's quotable to a language model is often a quote that's been said a thousand times by a thousand people. A quote that actually earns its place in a magazine piece is often the one the model hesitates on, because it doesn't fit the pattern.
Dana's fix was to stop asking the model to pull the best quotes and start asking it to pull all quotes longer than two sentences that contained a first-person verb. That gave her a list she could scan. The judgment stayed with her. The scanning didn't.
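The filter she landed on is mechanical enough that it barely needs a model at all. Here is a minimal sketch of the same rule in Python, assuming each speaker turn in the transcript is one line ("LILA: ..."), and using first-person pronouns as a crude proxy for first-person verbs; the speaker tag and transcript format are assumptions, not her actual setup.

```python
import re

# A crude approximation of Dana's filter, assuming each speaker turn in the
# transcript is one line like "LILA: I bought this place the week...".
# First-person pronouns stand in as a proxy for first-person verbs.
FIRST_PERSON = re.compile(r"\b(?:I|I'm|I've|I'd|we|we're|my|our)\b")

def candidate_quotes(transcript: str, speaker: str = "LILA") -> list[str]:
    quotes = []
    for line in transcript.splitlines():
        if not line.startswith(speaker + ":"):
            continue
        text = line.split(":", 1)[1].strip()
        # "Longer than two sentences": a rough count on terminal punctuation.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        if len(sentences) > 2 and FIRST_PERSON.search(text):
            quotes.append(text)
    return quotes
```

The point of writing it out is the point of the fix itself: the scannable list is deterministic. The judgment never enters the pipeline.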
Monday's saving: about three hours. Not bad.
Day 2 — Tuesday. The draft.
This is the one everybody wants to hear about. Can the AI write the essay?
The honest answer is: it can write an essay. It cannot write her essay.
Dana handed Claude the cleaned-up transcript, a bullet-point outline, and a tight prompt: "Write a draft in my voice. Here are three previous pieces for reference. Aim for 1,400 words." She did not say "sound like me" — she gave the model examples and let the examples do the work. That is, for the record, the correct way to do this. "Sound like me" is a wish. Three examples is an instruction.
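For the curious, here is roughly what that looks like as an API call rather than a chat window. This is a sketch, not Dana's actual setup: the file names are placeholders, and the model string is whichever current Claude model you would use via the Anthropic Python SDK.

```python
import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# "Three examples is an instruction": the voice lives in the samples, not in
# an adjective. File names are placeholders; swap in the model you actually use.
samples = [pathlib.Path(p).read_text() for p in ("piece1.md", "piece2.md", "piece3.md")]
transcript = pathlib.Path("portland_cleaned.md").read_text()
outline = pathlib.Path("outline.md").read_text()

prompt = (
    "Here are three previous pieces from this newsletter:\n\n"
    + "\n\n---\n\n".join(samples)
    + "\n\nUsing the transcript and outline below, write a 1,400-word draft"
    " that matches the register of those pieces.\n\n"
    f"TRANSCRIPT:\n{transcript}\n\nOUTLINE:\n{outline}"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4000,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```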
The draft she got back was competent. A reader who had never read her newsletter before would have finished it and thought, that's a pretty good piece about a bookstore. A reader who had been with her for two years would have finished it and thought, Dana must have been tired this week.
The seams were subtle. The model matched her sentence length. It matched her vocabulary. It even got the thing she does where she drops a one-sentence paragraph right before the turn in the piece. What it did not match was specificity of attention — the way she notices, in a real piece, that the bookstore's phone is an old beige landline and the owner's hand rests on it like it's a cat. The model wrote "the bookstore had a warm, lived-in quality." Dana wrote "there was a phone that had survived three presidents, and Lila's hand kept finding it like muscle memory."
You cannot prompt your way to Lila's hand. It is not in the transcript. It is in Dana's memory of the room.
She ended up using the AI draft as scaffolding — the structure, the transitions, the general arc — and rewriting about 70% of the actual prose. Time saved versus a blank page: about two hours. Time lost versus her normal process: roughly zero. She would have been faster writing from scratch, some weeks. Other weeks the scaffolding would be a real gift. The answer is: it depends.
Here is the thing nobody says out loud: on a week when the writer is tired, the AI scaffold is a trap. Because when you're tired, you use more of it. And when you use more of it, the seams show. And when the seams show, your most observant subscribers — the ones who forward your emails — can feel it, even if they can't articulate what they're feeling. They just open the next one with a little less enthusiasm.
Dana told me she has started a rule for herself: if she's tired, she writes from scratch or she doesn't send. The AI only gets to help on good-energy days. That is a strange and interesting rule and I think it is correct.
Day 3 — Wednesday. The subject line.
This is where the AI earns its keep in a way that's almost embarrassing.
Dana's normal subject-line process was: stare at the draft, type six candidates, pick one, hate it, send anyway. Her open rate on the essay emails was around 42% — fine, not great.
I had her run every subject line through the 📬Newsletter Subject-Line Brutalist prompt. Not to generate lines — to critique the ones she'd already written. The prompt's job is to tell you what's wrong, not what's right. It will tell you that your line uses the word "exploring" (banned). It will tell you the line has a colon and the right half is doing all the work (cut the left half). It will tell you the line is trying to be clever instead of being specific, and that specific wins.
Dana's six candidates for the Portland piece went in. The brutalist came back with, effectively: five of these are bad for reasons I am about to list. The sixth is fine but you buried the proper noun. Move "Portland" to the front.
She did. Open rate on that email: 51%.
One data point is not a trend. But the pattern repeated three more times over the week: on the interview, on the reading roundup, and on a short Sunday follow-up. Average lift: about 8 percentage points on opens versus her trailing averages. That is an enormous amount of additional reach for something that took her roughly forty-five seconds per email.
This is the thing AI is actually great at. Not writing. Editing. Specifically, editing in domains where the failure modes are predictable, repeatable, and you already know them. Subject lines are that domain. The model doesn't need taste. It needs a checklist. And the checklist is in the prompt.
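To make "checklist, not taste" concrete, here is a minimal sketch of the deterministic half of such a rubric. The rules are paraphrased from the ones described above and the banned-word list is illustrative; the real version lives in the prompt and catches far more, but nothing in it requires taste.

```python
import re

# A sketch of the deterministic half of a subject-line rubric.
# Rules paraphrased from the ones above; the banned words are illustrative.
BANNED = {"exploring", "unpacking", "delving", "musings"}

def critique(line: str) -> list[str]:
    notes = []
    words = set(re.findall(r"[A-Za-z']+", line.lower()))
    for w in BANNED & words:
        notes.append(f'uses the word "{w}" (banned)')
    if ":" in line:
        left, right = line.split(":", 1)
        # If the right half is longer, it is probably doing all the work.
        if len(right.strip()) > len(left.strip()):
            notes.append("colon with the right half doing all the work; cut the left half")
    # Proper nouns buried past the midpoint read weaker than fronted ones.
    proper = [m.start() for m in re.finditer(r"\b[A-Z][a-z]+", line[1:])]
    if proper and min(proper) > len(line) / 2:
        notes.append("proper noun buried; move it toward the front")
    return notes or ["fine"]

for note in critique("A quiet visit to the used-cookbook shelves: Portland, Maine"):
    print("-", note)
```

Run on that sample line, it flags the buried proper noun, which is exactly the note the brutalist gave Dana.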
Day 4 — Thursday. The 180 back-issues.
Dana has about 180 published essays in her archive. Most of them have images. None of the images have alt text. This is bad for accessibility, bad for SEO, bad for the screen-reader subscribers she knows she has and has felt guilty about for two years.
She had been telling herself she'd get to it "next quarter" since 2023.
We fed the archive to Claude in batches of 20. Image description, alt-text generation, light editorial pass. Cost: about three hours of her time across the whole archive, mostly spent on quality-checking. The model got about 85% of them right on the first pass. The other 15% needed her to rewrite because they were too generic ("a woman in a bookstore" when the actual caption needed "Rebecca Makkai signing at Bookends & Beginnings on a Tuesday in March") or too fanciful (the model inventing details that were not in the image).
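The batch pass itself is a short loop. A sketch, assuming the archive images sit in a local folder; the folder name, prompt wording, and model string are illustrative, and every line it produces still gets the human quality-check described above.

```python
import base64
import pathlib
import anthropic

client = anthropic.Anthropic()

# A sketch of the batch alt-text pass. File layout, prompt, and model name
# are illustrative; every generated line still gets a human quality-check.
PROMPT = (
    "Write one sentence of alt text for this image from a newsletter about "
    "independent bookstores. Be concrete and literal; name people, places, "
    "and signage only if they are legible in the image. Do not invent details."
)

def alt_text(path: pathlib.Path) -> str:
    data = base64.standard_b64encode(path.read_bytes()).decode()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=200,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg", "data": data}},
                {"type": "text", "text": PROMPT},
            ],
        }],
    )
    return message.content[0].text.strip()

for img in sorted(pathlib.Path("archive_images").glob("*.jpg")):
    print(img.name, "->", alt_text(img))
```

Note the "do not invent details" line in the prompt. It does not eliminate the fanciful 15%, but it shrinks it.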
A hundred and eighty back-issues of alt text, done in an afternoon, for something she had been procrastinating on for two years.
This is the second thing AI is actually great at. Work she would have done if she had infinite time, but never, ever would have done otherwise. It is the unglamorous janitorial labor of running a publication, and the model is perfectly, joyfully willing to do it, and the quality floor is high enough that with a light human pass, the result is actually good. Not "good enough." Good.
She told me, at the end of this task, that she had cried a little. Not because of the AI. Because of the two years of guilt she'd been carrying about the back-issues, and the fact that the guilt was now gone, and she had not realized how heavy it was until she set it down.
File this under "things the pitch decks don't mention." The emotional weight of the backlog is real, and clearing it is the most underrated feature of AI assistance for small creators.
Day 5 — Friday. The segmentation.
Dana's ESP gives her segment data — opens by segment, clicks by segment, churn by segment — and she had essentially never looked at it in any structured way. It was a CSV. It was 12,000 rows. It was on her to-do list somewhere under "do yoga more."
I wrote a prompt with her that handed the CSV to Claude with a simple ask: "Tell me three things I don't know about my audience that I probably should. Be specific. Cite the rows you're basing each claim on."
The model came back with:
- Her Sunday roundups had a 12% higher open rate among subscribers who had been on the list for more than 18 months, and a 19% lower open rate among subscribers who had joined in the last 90 days. Translation: the roundups were a reward for old-timers and a turn-off for new ones.
- Subscribers who had opened three or more of the interview emails in a row churned at roughly a third of the rate of the list overall. Translation: the interviews were her retention engine, more than the essays.
- A specific cluster of about 140 subscribers had opened every email for six months, clicked nothing, and paid $6/month anyway. Translation: she had a core of roughly 140 people who considered her newsletter a tip jar, and she should probably, at some point, say thank you to them in a way that made them feel seen.
Those are not AI insights. They are accountant insights. The AI did the accountant part. Dana did the "so what do I do about it" part, which took her the rest of the afternoon and produced three concrete changes to her workflow.
Again — this is the pattern. The model is not a creative partner. It is a patient, tireless, slightly-literal analyst that will look at the boring data you've been avoiding and tell you what's in there. If you ask well, and if you cross-check.
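The cross-check matters enough to show. Here is a sketch of verifying the first claim yourself with pandas; the column names are assumptions about the export, not Beehiiv's actual schema.

```python
import pandas as pd

# Verify the model's first claim against the raw export. Column names
# (subscribed_at, sent_at, email_type, opened) are assumptions about the
# CSV, not Beehiiv's actual schema; "opened" is assumed to be 0/1.
df = pd.read_csv("segments.csv", parse_dates=["subscribed_at", "sent_at"])

roundups = df[df["email_type"] == "roundup"].copy()
tenure_days = (roundups["sent_at"] - roundups["subscribed_at"]).dt.days

print("roundup opens, 18+ months on the list:",
      round(roundups.loc[tenure_days > 540, "opened"].mean(), 3))
print("roundup opens, first 90 days:",
      round(roundups.loc[tenure_days < 90, "opened"].mean(), 3))
```

If the numbers match what the model told you, good. If they don't, you just learned something more important than the original question.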
Day 6 — Saturday. The reply templates.
Dana gets about 30 replies a week. Some of them need a real answer. Some of them are "thank you, this piece meant a lot to me" and deserve a real answer that she cannot possibly write 30 of every week without losing her mind.
We built her a small set of reply scaffolds — not templates, because templates sound like form letters. Scaffolds. Three-sentence starts that she could finish in her own voice, tuned to the kind of reply she was getting. "Thank you + the piece I'm referencing + one specific question back." "Thank you + acknowledgment of their specific detail + a line about where I'm heading next." That kind of thing.
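What the scaffolds look like in practice, sketched as plain data. The wording here is illustrative, not Dana's; the bracketed blanks are deliberate, because nothing fills them except her.

```python
# A sketch of the scaffold set as she might keep it: three-sentence starts
# with deliberate blanks. Nothing fills the blanks automatically; the blanks
# are the point. Wording is illustrative, not Dana's actual language.
SCAFFOLDS = {
    "meant-a-lot": (
        "Thank you for this, truly. [Name the piece and why their note landed.] "
        "One question back for you: [ask something only they can answer.]"
    ),
    "specific-detail": (
        "Thank you, and I loved that you noticed [their detail]. "
        "[One sentence about how that detail got into the piece.] "
        "Here's where I'm heading next: [one line on the upcoming issue.]"
    ),
}

for key, start in SCAFFOLDS.items():
    print(f"== {key} ==\n{start}\n")
```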
She ran through her week's inbox in about forty minutes instead of her usual ninety. The replies were, she told me, warmer than her normal ones — because she wasn't tired by reply number 15.
Here is the counterintuitive thing about this. The AI did not make the replies less personal. The AI made them more personal, because it took the part of the work that was making her impatient and tired, and absorbed that part, and left her with more energy for the part that only she could do.
That is a pattern worth naming. The AI takes the parts of the work that dull you, and leaves you sharper for the parts that only you can do. Or at least, it does when you're using it well. When you're using it poorly, it takes all the parts, and you are no longer sharp for anything.
Day 7 — Sunday. The thing that broke.
I am required, by the honesty clause that governs this whole series, to tell you about the thing that broke.
On Sunday, Dana sat down to write her "what I'm reading" roundup and decided, because she was tired and it was late and the whole week had gone pretty well, to let Claude take a bigger swing at it. She handed over four book titles, a sentence or two about each, and said: "Write the roundup. Match my voice. 600 words."
The draft she got back was, on the surface, fine. It had her sentence rhythm. It had her turns of phrase. It had the one-sentence paragraphs. It also, in the middle of the third book description, cited a detail about the author's biography that was not true. The author had not, in fact, worked as a journalist in Beirut in the 1980s. He had worked as a journalist in Cairo in the 1990s. The model had confidently invented the Beirut detail because it sounded right.
Dana almost sent it. She was tired. It was late. She did a spot-check on the book she knew best, not the one Claude had invented the detail about, and the spot-check passed. She hit preview. Then, because something in her brain itched, she clicked the author's Wikipedia page, and saw it.
If she had sent that email, it would have been the kind of mistake that does not cost you subscribers individually but costs you a specific kind of trust that you cannot quite rebuild. The kind of reader who notices a fabricated biographical detail is also the kind of reader who forwards your emails.
She caught it. The process worked, because the last gate in the process was her, and she was paying attention.
But this is the thing: the more you lean on the AI, the more the "last gate" becomes load-bearing, and the more tired you are when you're standing at it. There is a version of Dana's workflow where she's offloaded 80% of the writing and the only remaining human task is fact-checking. In that version of the workflow, she has to be 100% sharp on the fact-check, or the whole thing fails. And no human is 100% sharp every week. So the workflow has a silent risk baked into it that only shows up on the week you happen to be tired.
The fix Dana landed on, by the end of Sunday, was specific: any factual claim about a real person, place, or event gets sourced by her, not by the model. The model can draft around those facts, but it cannot introduce them. That rule alone prevents 90% of the failure mode.
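You can even make the rule, which she took to calling the Cairo rule, partially mechanical: scan the draft for fact-shaped spans and turn them into a sourcing checklist. A sketch, with deliberately crude patterns; the gate exists to slow the tired writer down, not to verify anything itself.

```python
import re

# A sketch of the "Cairo rule" as a mechanical gate: scan an AI-drafted
# passage for fact-shaped spans (proper nouns, years, place claims) and
# emit a checklist for human sourcing. The patterns are crude on purpose;
# this slows the tired writer down, it does not verify anything itself.
FACT_SHAPES = [
    (r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)+\b", "multi-word proper noun"),
    (r"\b(?:1[89]|20)\d{2}s?\b", "year or decade"),
    (r"\bin [A-Z][a-z]+\b", "place claim"),
]

def fact_checklist(draft: str) -> list[str]:
    items = []
    for pattern, label in FACT_SHAPES:
        for match in re.finditer(pattern, draft):
            items.append(f"[ ] verify ({label}): {match.group(0)!r}")
    return items

draft = "He worked as a journalist in Beirut in the 1980s."
print("\n".join(fact_checklist(draft)))
```

Run on the sentence that nearly went out, it produces two checkboxes: the place and the decade. Both were wrong.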
The other 10% is what fact-checkers are for, and a one-person newsletter business does not have a fact-checker, and that is a real limitation, and anyone who tells you otherwise is selling you something.
What the week saved her
I promised receipts, so here are receipts. Dana estimates the week saved her, net, about eleven hours versus her baseline. Not fifteen. Eleven. And those eleven hours were not distributed evenly — most of them came from three specific tasks (alt text, reply scaffolds, subject-line editing) that are low-creativity, high-volume, and perfectly suited to AI assistance. The essay writing itself saved her maybe an hour, and on a bad-energy week would have cost her more than it saved.
Eleven hours a week is not nothing. Eleven hours a week is a real dinner on Wednesday, a real walk on Friday, a real morning off on Saturday. Eleven hours a week is the difference between a newsletter that is sustainable and a newsletter that is eating her life. But eleven hours a week is also not the "AI runs my business" pitch. It is closer to: AI has taken over the most boring third of my job, and left me with the most interesting two-thirds, and I am doing the most interesting two-thirds better because I am less tired.
That is the honest answer. It is less sexy than the pitch. It is more useful than the pitch.
What the AI was great at
Pulling it out, plainly, so the takeaway survives the piece:
- Editorial cleanup of auto-transcripts. Fast, cheap, and the human still picks the moments.
- Subject-line critique against a rubric. This is where the open-rate math lives.
- Bulk janitorial work on the archive. Alt text, back-issue metadata, link-rot audits, image captions. This is the workflow I'd push first on any newsletter operator who has a backlog of shame.
- Segment and data analysis. Hand the model a CSV and a good question, get back accountant-grade insights.
- Reply scaffolding for a high-volume, warm inbox. Not templates. Scaffolds. The difference matters.
What the AI was wrong about
Also plainly:
- Picking which moment in an interview matters. This is judgment. The model does not have it.
- Writing the essay itself, in a way that holds up against a sharp reader. It can get you to 70%. The last 30% is the whole point.
- Factual claims about real people. The model will invent, confidently, and the invention will sound right. Do not let the model introduce facts.
- Anything that depends on you having been in the room. The phone like a cat. The hand on the phone. The silence before the sentence. Not in the transcript, not in the model, not recoverable.
What broke
The Beirut/Cairo fact, almost sent. The near-miss is in the piece because the near-miss is the real story. If you run a newsletter using AI assistance for more than a month, you will have one of these. The question is whether your workflow has a gate that catches it. Dana's did, barely, because she was suspicious. Build suspicion into the workflow. Do not assume you will be sharp on the week it matters.
What she had to do anyway
All of the things that made her newsletter worth paying for. The voice. The specific attention. The ethical call about which quote to lead with. The tenderness in the reply to the subscriber whose mother had just died. The decision about what to publish and what to kill. The editorial judgment that the AI could assist with but never replace, because editorial judgment is not a task, it is the accumulated taste of a specific person who has been paying attention for a long time.
Dana's newsletter is worth $6 a month because Dana has been paying attention for a long time. The AI cannot pay attention for her. The AI can, on a good week, give her back eleven hours so she has more attention left to pay.
The honest verdict
Can you run a one-person newsletter business with AI? Yes — if you mean "can AI remove the boring parts and give you back the hours you need to stay sustainable."
Can you run a one-person newsletter business with AI doing the actual newsletter? No. And when the pitch decks pretend otherwise, they are selling you the dream version of a job that, in the real version, is still a job that requires a person who has been paying attention.
The real question — the one the pitch never asks — is not how much of this can I automate. It is which parts of this work am I doing because they need me, and which parts am I doing because nobody has taken them off my plate yet. Get clear on that, and the AI is a gift. Get unclear on it, and the AI is a trap with a really nice-looking draft in the middle of it.
Dana is still writing her newsletter. She still writes from scratch on tired weeks. She still has the Cairo rule. Her open rate is up. Her inbox is warmer. Her archive has alt text. She has Saturday mornings back.
The phone is still beside the register in Portland, like a cat. You can't automate the phone. You can automate almost everything around it.
If you're running a newsletter in the awkward middle and you want to try this for yourself: pick one of the five tasks above — subject-line critique is the lowest-risk, highest-lift starting point — and run it for a month. Don't try to transform your whole workflow in a week. Don't let the model introduce facts. Keep the last gate load-bearing, and keep yourself sharp enough to stand at it. That's the whole practice.
Good writing on a-gnt is the writing that earns its length. So is a good newsletter. AI can help you earn it faster. It cannot earn it for you.