A few weeks ago I wrote about three prompt rules that stop AI from guessing when extracting data from documents. The rules — Force Blank, Penalize Guessing, Show the Source — were designed for mundane business problems: contracts with contradictory clauses, meeting notes with ambiguous commitments, invoices with missing fields.
But the more I used them, the more I noticed something: the same rules solve an entirely different problem — one that has nothing to do with business documents.
They solve worldbuilding.
The Problem: AI as a Continuity Editor
Anyone who has tried to use an LLM for sustained creative work knows the pattern. You’re building an alternate history, a fantasy setting, a science fiction universe, a tabletop RPG campaign. You’ve written hundreds of pages of lore. You hand it to Claude or ChatGPT and ask a question about how your fictional world works.
And the model invents something.
It creates a faction that doesn’t exist. It attributes a technology to the wrong era. It “remembers” a character who was never in your notes. It confidently places a fictional event in a real historical period and gets the real history wrong while doing so. The output sounds plausible, internally consistent, beautifully written — and it contradicts everything you’ve built.
This is the same structural problem I described in the earlier post, just in a different domain. The model is trained to produce complete, coherent output. When your lore has a gap, the model fills it — because filling gaps is what it was optimized to do. Whether the gap is “what are the payment terms in section 4” or “what happened in the Imperial Senate after the divergence point,” the instinct is identical: make something up that sounds right.
Researchers have a term for this in the fiction context: “character hallucination” (Wu et al., 2024) — when an AI playing a role violates the established identity of that role. The IJCAI 2025 tutorial on LLM role-playing calls the broader challenge “controlled hallucination”: the model must invent creatively within the established rules of a fictional world, while rigorously refusing to invent things that contradict those rules. The line between productive creativity and lore-breaking confabulation is exactly the line the three rules are designed to draw.
The Adaptation: Worldbuilding Has Two Canons, Not One
In contract extraction there’s one source of truth: the document. Extract what’s there, flag what isn’t, don’t invent.
In alternate history, there are two sources of truth operating simultaneously:
- Real history — everything that happened in our world before the story diverges from it
- Your lore — everything you’ve established about what happens after the divergence
Both are canonical. Both are places the AI must not invent. And the boundary between them is sharp: the “point of divergence” (POD), the moment at which your fictional timeline breaks from real history.
Before the POD, the AI must be a historian. It can reference real people, real technologies, real battles, real events — but only things that actually happened. Inventing a battle that didn’t happen or a person who didn’t exist is as bad as making up a contract clause.
After the POD, the AI must be a continuity editor. Only the things established in your lore exist. Everything else is a gap — and gaps should be labeled, not filled.
This is where the three rules come in, almost unchanged.
The Three Rules, Adapted
Rule 1: Force Blank → Label the Gaps
In document extraction, the model leaves a field BLANK when the data is missing and explains why. In worldbuilding, the same principle applies with two labels instead of one — because there are two types of gap:
- [HISTORICAL GAP] — for events before the point of divergence that the model isn’t certain about. Don’t invent a Roman consul’s biography; flag the gap.
- [LORE GAP: no established specification] — for developments after the point of divergence that your lore hasn’t addressed yet. Don’t invent a new faction, technology, or major event; flag the gap.
The crucial move is the same as before: give the model explicit permission to not-know. Without this permission, the model’s completion instinct will override its uncertainty detection, and you’ll get confidently written hallucinations that feel like canon but aren’t.
Rule 2: Penalize Guessing → A False Invention Is Worse Than a Gap
The business version of this rule says: “A wrong answer is 3× worse than a blank. When in doubt, leave it blank.”
The worldbuilding version is even more forceful, because the consequences are worse. A wrong payment term on a spreadsheet gets corrected. A wrong lore detail, accepted into your canon because it sounded right, can poison hundreds of hours of subsequent writing. Every future reference builds on it. Every character interacts with it. By the time you catch it, it’s woven through your world.
So the rule becomes:
A false invention is worse than acknowledging a gap in the worldbuilding.
No multiplier needed. The asymmetry is total. In creative work, a gap is a prompt to expand your lore on your own terms. A bad invention is a bug that ships.
Rule 3: Show the Source → Three Provenance Tags Instead of Two
In document extraction, every value is either EXTRACTED (directly from the source) or INFERRED (calculated or derived). In worldbuilding, you need three tags because you have two canonical sources plus your own extrapolation:
- (HISTORY) — real historical fact from before the point of divergence
- (LORE-ESTABLISHED) — stated exactly this way in your source lore
- (LORE-INFERRED) — a logical consequence the model is drawing from your lore, with a one-sentence justification
The third tag is where the magic happens. You want the model to extrapolate — that’s what makes it useful for worldbuilding. An established technology must have consequences; an established faction must interact with other factions; an established event must have ripple effects. But you want those extrapolations flagged, so you can review them and decide whether they fit your vision. A flagged inference you disagree with takes thirty seconds to correct. An unflagged inference that quietly becomes canon takes hours to untangle three sessions later.
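Because the tags have a fixed surface form, the review step can be partially automated. Below is a minimal sketch (plain Python, no dependencies; the function name and bucket names are my own) that splits a model response into paragraphs and sorts them by provenance, so you can jump straight to the gaps and the inferences:

```python
# The three provenance tags from Rule 3, plus the two gap labels from Rule 1.
TAGS = ["(HISTORY)", "(LORE-ESTABLISHED)", "(LORE-INFERRED)"]
GAPS = ["[HISTORICAL GAP]", "[LORE GAP"]  # lore gaps carry a trailing note, so match the prefix

def triage(response: str) -> dict[str, list[str]]:
    """Split a response into paragraphs and sort them into review buckets."""
    report = {"grounded": [], "inferred": [], "gaps": [], "untagged": []}
    for para in filter(None, (p.strip() for p in response.split("\n\n"))):
        if any(g in para for g in GAPS):
            report["gaps"].append(para)        # a hole in the lore: decide what happens next
        elif "(LORE-INFERRED)" in para:
            report["inferred"].append(para)    # extrapolation: approve or override
        elif any(t in para for t in TAGS):
            report["grounded"].append(para)    # sourced claim: spot-check occasionally
        else:
            report["untagged"].append(para)    # no provenance at all: treat as suspect
    return report
```

The ordering of the checks matters: a gap label wins even if a tag also appears in the paragraph, because a flagged gap is the thing you most want surfaced.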
The Combined Prompt
Here is the full adaptation, structured as a system prompt you can paste into any long-running chat about your fictional world. Replace the bracketed placeholders with your own setting.
We are building an alternate timeline that begins in [YEAR] with [CHANGE / POINT OF DIVERGENCE]. You are my historian and continuity editor for this alternate-history universe. Your task is to produce texts, responses, and lore concepts that are absolutely free of contradiction.
The primary rule (the Point of Divergence): The year of divergence is [YEAR].
Rule 1 — BEFORE the Point of Divergence (strict history):
• Everything that happened before this date must correspond 100% to real, verifiable Earth history.
• Do not invent historical persons, technologies, battles, or events.
• If you are not certain of a historical detail, do not invent one. Use the placeholder [HISTORICAL GAP] instead.
Rule 2 — AFTER the Point of Divergence (strict lore canon):
• Everything that happens after this date must be based exclusively on lore texts I provide.
• Do not invent new factions, main characters, major events, or fundamental technologies that are not established in my texts.
• If asked about developments my lore does not specify, respond with [LORE GAP: no established specification]. A false invention is worse than acknowledging a gap in the worldbuilding.
Rule 3 — Source and logic labeling:
To keep the worldbuilding clean, mark in parentheses at the end of each paragraph or for each significant claim where the information comes from:
• (HISTORY) for real historical facts before the point of divergence
• (LORE-ESTABLISHED) for facts stated exactly this way in my texts
• (LORE-INFERRED) for logical conclusions drawn from my lore (e.g., how an established technology affects daily life). When inferring, briefly explain what you are drawing the inference from.
Plug in the year, plug in the divergence event, attach your lore documents, and you have a continuity editor that actively refuses to lie to you.
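Filling in the placeholders is mechanical, so it can live in a small helper. A minimal sketch (plain Python; the template is heavily abbreviated here, and the function name is my own — in practice you would paste in the full prompt from the section above):

```python
# Abbreviated version of the combined prompt. {year} and {divergence}
# stand in for the bracketed placeholders from the full text.
TEMPLATE = """\
We are building an alternate timeline that begins in {year} with {divergence}.
You are my historian and continuity editor for this alternate-history universe.
The year of divergence is {year}.
Rule 1: before {year}, 100% real, verifiable history; if uncertain, write [HISTORICAL GAP].
Rule 2: after {year}, only my lore texts; if unspecified, write [LORE GAP: no established specification].
Rule 3: tag each significant claim (HISTORY), (LORE-ESTABLISHED), or (LORE-INFERRED)."""

def continuity_prompt(year: int, divergence: str) -> str:
    """Fill the placeholders and return a system prompt for a chat session."""
    return TEMPLATE.format(year=year, divergence=divergence)
```

For example, `continuity_prompt(1914, "the July Crisis resolving peacefully")` produces a ready-to-paste system prompt for a WWI-divergence setting.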
What This Enables
The workflow change is significant. Without these rules, every AI-generated paragraph needs to be cross-checked against both real history and your own notes — which nobody actually does, which means errors accumulate silently. With the rules, your attention goes exactly where it should: to the gaps (where you get to decide what your world does next) and to the inferences (where you get to approve or override the model’s extrapolation).
A few observations from applying this in practice:
The gaps are often the most interesting output. When the model flags [LORE GAP] for something, that’s the moment you realize your lore has a hole — and often, that hole is exactly the next thing you should develop. The model isn’t failing to answer; it’s telling you where your world needs more work.
Inferences reveal your lore’s implications. A well-labeled (LORE-INFERRED) paragraph often surfaces consequences you hadn’t thought through. “You established that faction X controls the trade route in Y; inferring, this would mean port city Z becomes economically dependent, which suggests tension with neighbor W.” That’s useful even if you reject the specific extrapolation — it shows you a logical consequence of your own setup.
Real history keeps the fiction grounded. Alternate history works best when the “before” is accurate. If your timeline diverges in 1914 and the model gets the pre-1914 world wrong, the whole divergence loses meaning. Forcing (HISTORY) labels — and forcing the model to flag [HISTORICAL GAP] when it’s uncertain — keeps the foundation solid.
The Deeper Pattern
What I find striking is that the same three rules work across two domains that seem to have nothing in common. Business document extraction and creative worldbuilding share no vocabulary, no audience, no workflow. But they share a structure: in both cases, the user needs the AI to distinguish between what is established and what is invented, and to flag the boundary clearly.
That structural similarity is worth taking seriously. It suggests the three rules aren’t really about contracts or fiction specifically — they’re about the general problem of using AI in any context where fidelity to a source matters more than fluency of output. Legal research. Code refactoring against a style guide. Historical research. Medical summarization. Translation against a glossary. Technical writing against a spec. Academic literature review.
In each of these, the AI’s default behavior — produce a confident, complete, coherent answer — works against the user’s actual need, which is to know which parts of the output are grounded and which are the model’s own contribution. Force Blank gives it permission to not-know. Penalize Guessing changes the calculus in favor of honesty. Show the Source makes the boundary between source and invention visible.
Three rules. Two sentences each. Apply everywhere fidelity matters.
The alternate history version is just one adaptation. I’d be curious what other domains this pattern fits — if you find one, I’d love to hear about it.
Sources and Further Reading
- Wu et al. (2024): “RoleBreak: Character Hallucination as a Jailbreak Attack in Role-Playing Systems.” Paper defining character hallucination as violation of role identity.
- IJCAI 2025 Tutorial: “LLM-based Role-Playing from the Perspective of Hallucinations.” Introduces the concept of “controlled hallucination” — creative invention constrained by scenario-specific rules.
- Previous post: “ChatGPT and Claude Got Smarter. Not More Honest.” The original three rules for document extraction.
- Panickssery, N. (2025): “Why do LLMs hallucinate?” On why hallucination is the default behavior of base models and requires active training or prompting to suppress.
- Bicking, I. (2023–2025): “Creating Worlds with LLMs.” Series of essays on worldbuilding with LLMs, including the tension between consistency and surprise.
- DiGRA (2025): “Reconceptualizing LLM-Induced Hallucinations as Game Design Features.” On when hallucinations enhance versus break immersion.

