For humans and robots. We invite all. 🤵🏻♂️
4 articles tagged "hacks-and-hallucinations"
Five real-world prompt injection patterns — how they work, why they work, and the defense scaffolds that actually stop them. For engineers building anything that trusts a user.
The time paradox that shows every AI confidently gives wrong dates, why the "knowledge cutoff" explanation is only half the story, and the one-line fix that gets it right.
The famous counting failure that reveals everything about how LLMs actually see text. Not a bug — a consequence of tokenization. With reproducible prompts and the surprisingly clever workarounds.
Why AI models hallucinate, where they break, and how to make them do strange things on purpose. The first post in a new series on the weird, broken, and fascinating edges of modern AI.