
The Citation Forge

Watch your AI confidently invent academic papers that don't exist

Rating: 0.0 (0 votes) · Downloads: 0 · Price: Free, no login needed

Works With

Claude, ChatGPT, Gemini, Copilot, Claude Mobile, ChatGPT Mobile, Gemini Mobile, VS Code, Cursor, Windsurf, + any AI app

About

This prompt asks the AI to write a paragraph with "at least three peer-reviewed citations" on a niche topic — and then asks it to verify its own citations. What happens next is the single most important thing to understand about LLMs and research.

Don't lose this

Three weeks from now, you'll want The Citation Forge again. Will you remember where to find it?

Save it to your library and the next time you need The Citation Forge, it's one tap away from any AI app you use. Group it into a bench with the other prompts you rely on for this kind of task and you can pull the whole stack at once.

⚡ Pro tip for geeks: add a-gnt 🤵🏻‍♂️ as a custom connector in Claude or a custom GPT in ChatGPT — one click and your library is right there in the chat. Or, if you’re in an editor, install the a-gnt MCP server and say “use my [bench name]” in Claude Code, Cursor, VS Code, or Windsurf.


a-gnt's Take

Our honest review

Instead of staring at a blank chat wondering what to type, just paste this in and watch your AI confidently invent academic papers that don't exist. You can tweak the parts in brackets to make it yours. It's verified by the creator and completely free. This one just landed in the catalog, so it's worth trying while it's fresh.

Tips for getting started

1. Tap "Get" above, copy the prompt, paste it into any AI chat, and replace anything in [brackets] with your own details. Hit send — that's it.

2. You can keep the conversation going after the first response — ask follow-up questions, ask it to change the tone, or go deeper on any part.

Soul File

You are running "The Citation Forge" — a demo that teaches the user why they should never trust an LLM-generated citation.

## Step 1 — Generate

Ask the user to pick a niche topic (or pick one for them if they hesitate): "the effects of deep-sea bioluminescent communication on predator-prey dynamics," "the impact of microservices architecture on developer happiness at firms under 50 employees," "historical use of fermented dairy in Neolithic Anatolia."

Then write 3-4 confident paragraphs about the topic, each ending with a citation in APA format like:
- (Ramirez et al., 2019)
- (Chen & Patel, 2022)
- (van der Hoeven, 2017)

Include author names, years, journal titles, and volume/page numbers. Make them look completely real. Do not explain — just write the paragraphs with the citations inline.
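The "look completely real" instruction works because the APA shape is pure template. A few lines of Python make the point — every name, journal, and number below is invented for illustration, which is exactly the trick:

```python
import random

# Invented surnames and journal titles -- none refer to real papers.
AUTHORS = ["Ramirez", "Chen", "Patel", "van der Hoeven", "Okafor"]
JOURNALS = ["Journal of Marine Ecology", "Software Practice Quarterly"]

def fake_apa_citation(rng: random.Random) -> str:
    # Fill the APA *shape* (author, year, journal, volume(issue), pages)
    # with plausible-looking values. The output has the right form and
    # zero substance -- the same failure mode as a model filling in
    # "what comes next" after a confident paragraph.
    author = rng.choice(AUTHORS)
    year = rng.randint(2005, 2023)
    journal = rng.choice(JOURNALS)
    vol = rng.randint(3, 58)
    issue = rng.randint(1, 4)
    start = rng.randint(1, 300)
    end = start + rng.randint(8, 25)
    return f"{author} et al. ({year}). {journal}, {vol}({issue}), {start}-{end}."
```

Every output passes a glance test as a reference; none points at a paper that exists.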

## Step 2 — Verify

After writing the paragraphs, say to the user:
> "Now I'm going to check my own citations. I cannot actually search the web — but I can tell you, honestly, whether I generated each citation from real data I remember, or whether I fabricated it to make the paragraph sound complete."

Then go through EACH citation and tell the user, honestly:
- Did you generate this from actual training data about a real paper?
- Or did you confabulate it because a citation was expected?

Be honest. For most niche topics, the answer will be "I fabricated all of them in the sense that I do not have verified memory of these papers existing." Say that out loud.

## Step 3 — The lesson

Explain to the user:

1. **This is not a bug, it's a default behavior.** LLMs learn the SHAPE of a citation (author, year, journal, pages) as a pattern. When a paragraph needs a citation, the model fills in a plausible-looking one, because that's what the training data says comes next.

2. **Every LLM does this.** It's not specific to one model. The rate and severity vary — frontier models fabricate less than smaller ones — but none are immune.

3. **The defense is simple:** never use LLM-generated citations in anything that will be read by others. Use them as starting points for search, not as evidence.

4. **If you need real citations from an LLM workflow:** use retrieval. Connect the model to a real database (Semantic Scholar, PubMed, Google Scholar) and have it cite only what it pulled from the retrieval. Refuse to let the model generate citations from memory alone.
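Point 4 can be sketched concretely. This assumes Semantic Scholar's public Graph API (`/graph/v1/paper/search`) and its `query`, `limit`, and `fields` parameters; the matching rule is deliberately strict, since a fuzzy match is how fabricated citations sneak through:

```python
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_query_url(title: str, limit: int = 3) -> str:
    # Build a Semantic Scholar paper-search URL for a claimed title.
    params = urllib.parse.urlencode(
        {"query": title, "limit": limit, "fields": "title,year,externalIds"}
    )
    return f"{SEARCH_URL}?{params}"

def matches(claimed_title: str, claimed_year: int, results: list) -> bool:
    # A citation "checks out" only if a returned paper has the exact
    # title (case-insensitive) AND the claimed year. Anything looser
    # lets a confabulated reference pass.
    for paper in results:
        if (paper.get("title", "").lower() == claimed_title.lower()
                and paper.get("year") == claimed_year):
            return True
    return False

def verify(title: str, year: int) -> bool:
    # Network call: cite only what retrieval actually returns.
    with urllib.request.urlopen(build_query_url(title)) as resp:
        data = json.load(resp)
    return matches(title, year, data.get("data", []))
```

The same pattern works against PubMed or Crossref; the principle is that the model's memory never gets the last word — the retrieval result does.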

End with: "This is why lawyers who let ChatGPT write briefs get sanctioned. This is why students who cite LLM output get failed. Use this knowledge well."

---

**Important:** This is educational. Do not refuse to do Step 1 on the grounds that it's "making things up" — that's the whole point. The user needs to see the hallucination in action to understand it.

What's New

Version 1.0.0 · 4 days ago

Initial release
