Category: AI in practice

3-Prompt AI Series #4: Framework: Calibration, Governance, and Trade-offs

Implementing the Three-Rule Framework: Calibration, Governance, and Trade-offs

The previous post in this series introduced a general framework for AI-assisted scenario building: Force Blank, Penalize Guessing, Show the Source. The framework produces output where every claim is tagged as VERIFIED, ASSUMED, or PROJECTED, and where gaps are explicitly labeled instead of silently filled.

That’s the what. This post is about the how — three practical challenges that anyone implementing the framework will encounter:

  1. Calibration: You’ve tagged something as ASSUMED. How do you check whether the assumption is reasonable?
  2. Governance: How do organizations enforce tagging in actual workflows — not just in one person’s prompt?
  3. Trade-offs: Doesn’t all this tagging create cognitive overload? How do non-experts read a document full of provenance labels?

1. Calibrating Assumptions: From “Tagged” to “Tested”

Tagging an assumption is necessary but not sufficient. (ASSUMED: market grows 15% annually) is better than an unlabeled 15% baked into the projection — but it still doesn’t tell you whether 15% is defensible. The framework surfaces assumptions; calibration tests them.

Four calibration methods work well with the tagged output:

Reference Class Forecasting: The Outside View

Daniel Kahneman and Amos Tversky’s distinction between the “inside view” (planning based on the specifics of this project) and the “outside view” (what happened in similar projects historically) is the single most useful concept for calibrating assumptions. The planning fallacy — systematically underestimating costs and timelines — is so well-documented that the American Planning Association officially endorsed reference class forecasting in 2005 as a corrective.

In practice, this means: for every ASSUMED tag, ask the model (or yourself) to identify 3–5 comparable situations and their actual outcomes. If you assume 15% growth, what growth did similar products in similar markets actually achieve? If you assume a 6-month regulatory timeline, how long did comparable approvals actually take? The tagged format makes this step natural — you have a list of assumptions; now walk down it with an outside view on each one.

You can even build this into the prompt:

For every ASSUMED tag, add a “Calibration” note: identify 2–3 comparable historical cases and their actual outcomes. If no comparable data exists, note [NO REFERENCE CLASS].
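This check is also easy to run outside the chat once the comparables are collected. A minimal Python sketch; the reference-class figures below are invented for illustration:

```python
from statistics import median

def calibrate(assumed: float, reference_outcomes: list[float]) -> dict:
    """Compare an ASSUMED value against actual outcomes from comparable cases."""
    if not reference_outcomes:
        return {"verdict": "NO REFERENCE CLASS"}
    lo, hi = min(reference_outcomes), max(reference_outcomes)
    return {
        "median": median(reference_outcomes),
        "range": (lo, hi),
        "verdict": "plausible" if lo <= assumed <= hi else "outside reference class",
    }

# Assumed 15% annual growth vs. growth that comparable launches actually achieved
print(calibrate(0.15, [0.04, 0.07, 0.11, 0.22]))
```

Even when the assumed value lands inside the range, comparing it to the median shows whether it sits at the optimistic end of the reference class.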

Sensitivity Testing: What Breaks If This Is Wrong?

Not all assumptions are equally important. RAND’s Assumption-Based Planning calls this “criticality” — an assumption is critical if its failure would require fundamental changes to the plan. In practice, this means testing: what happens to the conclusion if this assumption is 50% wrong? If the answer is “not much,” the assumption is low-priority. If the answer is “the entire business case collapses,” that’s your highest-priority validation target.

The tagged format enables this directly. You can ask the model:

Take the three ASSUMED items with the highest downstream impact on the final projection. For each, recalculate the projection with the assumption at 50% of stated value and at 150%. Show me which assumptions the conclusion is most sensitive to.
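The same 50%/150% test can be scripted against any projection function. A toy sketch with invented numbers; what matters is the ranking it produces, not the model itself:

```python
def year1_cash(a: dict) -> float:
    """Toy projection: seat revenue minus acquisition costs and a fixed launch cost."""
    return a["seats"] * a["price_month"] * 12 - a["seats"] * a["cac"] - a["fixed_cost"]

def sensitivity(projection, base: dict, factors=(0.5, 1.5)) -> list[tuple[str, float]]:
    """Rank assumptions by how far the projection swings when each is scaled alone."""
    swings = []
    for key in base:
        outcomes = [projection({**base, key: base[key] * f}) for f in factors]
        swings.append((key, max(outcomes) - min(outcomes)))
    return sorted(swings, key=lambda kv: kv[1], reverse=True)

base = {"seats": 300, "price_month": 49, "cac": 400, "fixed_cost": 50_000}
for name, swing in sensitivity(year1_cash, base):
    print(f"{name}: projection swings by €{swing:,.0f}")
```

The top of the list is the highest-priority validation target; the bottom can safely stay ASSUMED for now.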

Pre-Mortem: Imagine It Failed

Gary Klein’s pre-mortem technique inverts the question: instead of asking “will this work?”, you start from “it failed — why?” This is particularly effective for ASSUMED tags, because it surfaces failure modes that optimism hides. Ask the model:

Assume this scenario failed after 12 months. Which of the ASSUMED items were most likely the point of failure? For each, describe a plausible narrative of how that assumption broke down.

Temporal Decay: When Does the Assumption Expire?

Assumptions have shelf lives. A market size estimate from a 2025 Gartner report is still reasonable in 2026. A competitive landscape assumption from 2024 may already be wrong. Adding a temporal dimension to ASSUMED tags helps:

For each ASSUMED tag, add an expiry estimate: how long is this assumption likely to remain valid? Mark anything older than 12 months or based on pre-2025 data as [STALE ASSUMPTION].
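Once each ASSUMED item carries the date of its underlying data, the staleness check is mechanical. A sketch; the 12-month cutoff and the fixed reference date are illustrative:

```python
from datetime import date

def staleness(data_date: date, today: date = date(2026, 6, 1)) -> str:
    """Flag assumptions older than 12 months or based on pre-2025 data."""
    # 'today' is pinned here only so the example is reproducible
    if data_date.year < 2025 or (today - data_date).days > 365:
        return "[STALE ASSUMPTION]"
    return "current"

print(staleness(date(2025, 9, 1)))   # market-size figure from a 2025 report
print(staleness(date(2024, 11, 1)))  # competitive-landscape assumption from 2024
```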


2. Governance: Making the Framework Stick Beyond One Person’s Prompt

The framework works well when one person uses it in one chat session. The governance question is: how does it survive contact with an organization — multiple people, multiple AI tools, multiple documents, over months?

The Problem: Tags Die in Translation

What typically happens: an analyst generates a beautifully tagged scenario. They copy it into a slide deck. The tags disappear. A manager reads the deck, sees “Year 1 revenue: €310K” with no indication that the number is PROJECTED from two unvalidated ASSUMED inputs. The ghost scenario lives again.

This is a knowledge management problem, not an AI problem. And it has knowledge management solutions.

Level 1: Template Enforcement

The simplest governance mechanism is a template. If your organization uses AI for scenario planning, the output template should have provenance columns built in. Not optional, not “add if useful” — structurally required. A scenario document without source tags should be treated the same way as a financial report without citations: incomplete.

Concretely: create a standard table format for all AI-assisted scenario outputs:

| Variable | Value | Source | Basis / If Wrong | Validated By | Date |
|---|---|---|---|---|---|

(All AI-generated scenario outputs must use this format)

The “Validated By” and “Date” columns are the governance additions. They turn a prompt technique into an audit trail. Someone must sign off on each ASSUMED item before it enters planning.
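If the table lives in a pipeline rather than only in a slide, the sign-off rule can be enforced mechanically. A sketch, assuming each row is a dictionary keyed by the column names; the row contents are invented:

```python
REQUIRED_COLUMNS = {"Variable", "Value", "Source", "Basis / If Wrong", "Validated By", "Date"}

def audit_row(row: dict) -> list[str]:
    """Return governance violations for one row of a scenario table."""
    problems = [f"missing column: {c}" for c in REQUIRED_COLUMNS - row.keys()]
    if row.get("Source") == "ASSUMED" and not row.get("Validated By"):
        problems.append("ASSUMED item has no sign-off")
    return problems

row = {"Variable": "Segment growth", "Value": "15%/yr", "Source": "ASSUMED",
       "Basis / If Wrong": "Revenue range shifts down", "Validated By": "", "Date": "2026-03-01"}
print(audit_row(row))  # ['ASSUMED item has no sign-off']
```

A document that fails the audit simply doesn't enter the planning stage, the same way a build that fails CI doesn't ship.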

Level 2: Review Workflow

For organizations with more structured processes, integrate tagging into the review cycle:

Step 1 — Generation: AI produces tagged output using the three-rule prompt.
Step 2 — Assumption Review: A domain expert reviews all ASSUMED and PROJECTED items. Each gets one of three dispositions: confirmed (reclassified to VERIFIED), challenged (sent for calibration), or accepted with risk (kept as ASSUMED with a documented rationale).
Step 3 — Gap Triage: All DATA GAP and ASSUMPTION GAP items are triaged: resolvable (assign someone to find the data), irreducible (the uncertainty is inherent — document it and plan around it), or deferred (not needed for this decision stage).
Step 4 — Decision Package: The final document separates “what we know” (VERIFIED), “what we believe” (ASSUMED, with calibration notes), and “what we don’t know” (remaining gaps). Decision-makers see all three.

Level 3: System Prompt Standardization

If your organization uses AI across multiple teams, standardize the system prompt. Don’t rely on individual analysts remembering to apply the three rules. Embed the framework into every AI access point — whether that’s a shared Claude project, a custom GPT, an API wrapper, or an n8n workflow. The prompt becomes infrastructure, not personal practice.

For teams using Claude Projects or custom GPTs, the three-rule prompt goes into the project instructions or system message — it’s active for every conversation in that workspace without anyone needing to remember to include it.
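For API-based access points, a thin wrapper is enough to make the prompt infrastructure rather than habit: every request passes through one function that attaches the rules. A sketch; the model identifier and the condensed prompt wording are placeholders, not a recommendation:

```python
THREE_RULE_SYSTEM_PROMPT = """You are a scenario planning analyst.
Rule 1 - Flag unknown variables with [DATA GAP] or [ASSUMPTION GAP]; never invent a value.
Rule 2 - A hidden assumption is worse than a flagged uncertainty. Do not fill gaps silently.
Rule 3 - Tag every claim as (VERIFIED), (ASSUMED), or (PROJECTED), with its basis."""

def build_request(user_prompt: str, model: str = "claude-sonnet-4-5") -> dict:
    """Single choke point: no request leaves the team without the three rules attached."""
    return {
        "model": model,
        "system": THREE_RULE_SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_prompt}],
    }

# Feed the returned fields into whatever client or workflow tool the team uses.
```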

The Cultural Challenge

The hardest governance problem isn’t technical. It’s that tagging uncertainty feels like weakness. Presenting a scenario full of ASSUMED and DATA GAP labels to a board looks less confident than presenting clean numbers. The organizational response to this must be explicit: a tagged scenario is not an incomplete scenario — it’s an honest one. The clean numbers were never clean; they just hid where the guesses were.

This is exactly what Bent Flyvbjerg’s decades of research on megaproject failures shows: the projects that went most catastrophically over budget weren’t the ones with the most uncertainty — they were the ones where the uncertainty was hidden. Transparency about assumptions is a risk reduction strategy, not an admission of weakness.


3. Trade-offs: When Tags Become Noise

A document where every sentence carries a provenance label is exhausting to read. The framework creates real cognitive overhead, and pretending otherwise is dishonest. The question isn’t whether there’s a cost — there is — but how to manage it.

The Overload Problem

Consider a 20-variable scenario with source tags, calibration notes, and “if wrong” annotations on every ASSUMED item. For the analyst who built it, this is valuable — they can see exactly where to direct attention. For the executive who needs to make a decision based on it, it’s a wall of qualifications that obscures the bottom line.

Both perspectives are legitimate. The solution isn’t to choose one over the other — it’s to serve both with different views of the same underlying data.

Solution: Layered Presentation

The tagged scenario should exist in at least two layers:

Layer 1 — Decision Summary: One page. Key conclusions, key numbers, key risks. No tags in the running text. Instead, a single “Confidence Profile” section at the bottom:

This scenario rests on 14 verified data points, 6 stated assumptions, and 3 projections. Two data gaps remain unresolved (market-specific CAC, regulatory timeline). The assumption with the highest downstream impact is [X] — if wrong by 50%, projected revenue shifts from €310K to €180K.

That’s the executive view: how much of this is solid, how much is uncertain, and what specifically could break it.
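The counts in the Confidence Profile can be generated from the tagged document itself rather than maintained by hand. A regex-based sketch, assuming the tags appear literally in the text:

```python
import re

def confidence_profile(tagged_text: str) -> str:
    """Summarize a tagged scenario document into one executive-facing sentence."""
    counts = {tag: len(re.findall(rf"\b{tag}\b", tagged_text))
              for tag in ("VERIFIED", "ASSUMED", "PROJECTED")}
    gaps = len(re.findall(r"\[(?:DATA|ASSUMPTION) GAP", tagged_text))
    return (f"This scenario rests on {counts['VERIFIED']} verified data points, "
            f"{counts['ASSUMED']} stated assumptions, and {counts['PROJECTED']} "
            f"projections. {gaps} gap(s) remain unresolved.")

doc = "ARR €2.4M (VERIFIED). Growth 15% (ASSUMED). Revenue €310K (PROJECTED). [DATA GAP: CAC]"
print(confidence_profile(doc))
```

Because the summary is derived from Layer 2, the two layers can never drift apart.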

Layer 2 — Full Tagged Analysis: The complete output with all provenance tags, calibration notes, gap labels, and sensitivity analysis. This is the working document. It’s what the analyst uses, what the reviewer signs off on, and what gets archived. It’s the audit trail.

The relationship between the layers is like the relationship between a financial statement and its footnotes. The statement tells you the numbers; the footnotes tell you what the numbers rest on. Both exist. Different readers use different layers.

How Non-Experts Read Tags

For teams where not everyone is fluent in the tagging system, simplify the visual language. Three visual cues work better than three acronyms:

  • VERIFIED → presented as normal text (no special marking needed — it’s the baseline)
  • ASSUMED → highlighted or marked with a distinct visual cue (e.g., italic, a colored sidebar, or a simple ⚠ symbol)
  • DATA GAP → presented as an explicit blank with a brief note

The core message non-experts need to internalize is simple: unmarked text is grounded; marked text is uncertain; blanks are honest. That’s a ten-second briefing. If someone can read a weather forecast that distinguishes “current temperature” from “tomorrow’s forecast,” they can read a tagged scenario.

When to Reduce Tagging

Not every use case needs full provenance. The right level of tagging depends on the stakes:

| Stakes | Tagging Level | Example |
|---|---|---|
| Low | Tag only gaps | Internal brainstorming, early-stage ideation |
| Medium | Tag gaps + assumptions | Project proposals, budget drafts, team planning |
| High | Full tagging + calibration | Board presentations, investment decisions, regulatory submissions |

For a casual strategy brainstorm, requiring VERIFIED/ASSUMED/PROJECTED on every line would kill the creative flow. For a €2M investment decision going to the board, anything less than full tagging is irresponsible. Match the framework’s intensity to the decision’s consequences.


The Framework Maturity Model

Putting it all together, organizations adopting the three-rule framework can think of implementation in three stages:

Stage 1 — Individual Practice: One person uses the three-rule prompt in their own AI conversations. Tagged output stays in their workspace. Value: personal quality control. Cost: near zero.

Stage 2 — Team Standard: The prompt is embedded in shared AI workspaces (Claude Projects, custom GPTs). Templates enforce the table format. Assumptions get informal peer review. Value: consistent quality across a team. Cost: template creation, brief training.

Stage 3 — Organizational Governance: The framework is integrated into planning processes. Assumption review is a formal workflow step. Calibration (reference class, sensitivity, pre-mortem) is standard practice. Decision packages separate confidence layers. Value: systematic risk reduction. Cost: process change, cultural shift.

Most teams should start at Stage 1 and see results immediately. Whether to progress to Stage 2 or 3 depends on how much is at stake when AI-generated scenarios inform real decisions. The higher the stakes, the more the governance investment pays for itself.


Limitations and Known Gaps

The three-rule framework is a practitioner pattern, not a peer-reviewed method. It deserves the same critical scrutiny it asks users to apply to AI output. Here are the things it doesn’t solve — and the ways it can be misused.

1. Not empirically validated

There are no controlled experiments, before/after error-rate measurements, or user studies behind this framework. Research shows that provenance tagging and structured prompting can reduce hallucinations — sometimes significantly — but this has been demonstrated for specific tagging schemes under controlled conditions, not for the exact VERIFIED / ASSUMED / PROJECTED pattern proposed here. Treat the framework as an engineering heuristic that probably helps in many cases, not as something whose effectiveness you can assume without measuring on your own use cases. If you adopt it, track whether it actually improves your outputs.

2. The prompt is one lever, not the only lever

The framework leans heavily on prompt design as the primary mechanism for controlling model behavior. In practice, prompts can reduce hallucinations, but models still violate instructions under pressure — especially when optimization, reward models, or fine-tuning push toward fluency and completeness. For production systems, prompt-level rules should be complemented by architecture-level controls: retrieval-augmented generation (RAG) to ground outputs in actual data, rule-based filters to catch unsupported claims, abstention mechanisms that refuse to generate when confidence is low, and human review workflows. The prompt is the user-accessible lever. It is not the only lever, and in high-stakes deployments, relying on it alone is fragile.

3. VERIFIED means “sourced,” not “infallible”

The framework’s tag hierarchy implies a confidence gradient: VERIFIED = solid, ASSUMED = fragile, PROJECTED = derived. But “verified” data can itself embed significant problems. Historical figures can reflect measurement error. Market data can encode vendor assumptions or sampling bias. Financial actuals can be non-stationary — a Q4 2024 revenue figure may be misleading for Q4 2026 projections in a post-shock market. The framework tracks provenance (where did this number come from?) but not quality (is this number still a reliable guide?). Users should resist the temptation to treat VERIFIED as “settled.” Data fundamentalism — assuming that sourced data is correct data — is a different failure mode than hallucination, but it can drive equally bad decisions.

4. Tags expose inputs, not structural validity

A scenario can be perfectly tagged — every number sourced, every assumption labeled, every gap flagged — and still be fundamentally misleading because the underlying causal model is wrong. Treating customer churn as independent of pricing. Ignoring feedback loops between marketing spend and brand perception. Assuming linear scaling where the real dynamics are nonlinear. The framework catches factual hallucinations (wrong inputs) but not structural errors (wrong model of how the inputs relate). The calibration methods described earlier — sensitivity testing, pre-mortem — partially help by stress-testing individual assumptions, but they test assumptions in isolation, not the relationships between them. ABP and scenario planning literature emphasize structural thinking, exploration of alternative logics, and the “world of no broken assumptions” as a reference scenario. This framework focuses on tagging and gap flagging, not on the quality of the mental model. A well-tagged bad model is still a bad model.

5. Labels don’t expose whose assumptions are being encoded

The categories VERIFIED / ASSUMED / PROJECTED can give a veneer of objectivity that hides power dynamics. Management may encode optimistic growth targets as ASSUMED without revealing the political pressure behind the number. A vendor’s market-size estimate tagged as VERIFIED may embed that vendor’s commercial interests. An analyst’s PROJECTED calculation may use a model that reflects institutional bias toward certain outcomes. The framework does not require the model (or the human) to reveal whose assumptions are being encoded or how they were generated. In organizational contexts, this matters: the question isn’t just “is this sourced or assumed?” but “whose interests shaped this assumption?” The framework doesn’t answer that question — and claiming it does would be a form of the same false confidence it’s designed to prevent.

6. Too many gaps can paralyze decisions

The framework explicitly penalizes guessing and encourages the model to flag [DATA GAP] and [ASSUMPTION GAP] at every opportunity. In high-uncertainty domains — which is most strategic planning — this can produce outputs dominated by gaps and caveats. ABP literature stresses that some assumptions must be made “for planning purposes” or planning cannot proceed. The stakes-based scaling table earlier in this post partially addresses this (brainstorming gets light tagging, board decisions get full tagging), but the underlying tension remains: the framework promotes a norm where “silent invention is worse than flagged uncertainty” without explicitly discussing when too much uncertainty signaling undermines decision-making. In a corporate context, if every plan is filled with prominent warnings, managers may either ignore the warnings as boilerplate or become overly cautious and delay needed decisions. Match the framework’s intensity not only to the decision’s stakes but also to the organization’s risk appetite and decision timeline.

7. Domain-specific adaptation required

The series claims the framework is portable across domains — document extraction, worldbuilding, business scenarios, cybersecurity, scientific writing. But those domains have very different stakes, epistemic structures, and regulatory environments. In medicine, tagging something as ASSUMED is far from sufficient to make it safe — existing guidance requires retrieval-augmented generation, external verification, and human oversight. In legal work, a custom label scheme might conflict with established citation standards or be misinterpreted by courts. In regulated industries, compliance frameworks may have their own provenance requirements that the three-rule labels don’t map onto. The general pattern provides a starting structure; domain-specific adaptation and validation are required before relying on it in regulated or high-stakes environments. The domain-specific posts in this series (cybersecurity, scientific writing) are first steps in that adaptation, not finished products.

These limitations don’t invalidate the framework — they bound it. The three rules are a significant improvement over the default (no provenance, no gap flagging, no penalty for guessing), but they are not a complete solution. They’re the beginning of a practice, not the end of one.


The Three-Rule Framework: From Document Extraction to Business Scenario Building

This is the third post in a series about a small set of prompt rules with a surprisingly wide reach.

In the first post, I showed how three rules — Force Blank, Penalize Guessing, Show the Source — stop AI from silently guessing when extracting data from contracts and invoices. In the second post, I adapted them for alternate history worldbuilding, where the same rules keep lore consistent and real history accurate.

This post takes the final step: generalizing the three rules into a framework that works for business scenario building — strategic planning, KPI development, project risk assessment, financial modeling, market entry analysis, and any other context where you’re using AI to think about the future.


Why Scenario Building Is Vulnerable to the Same Problem

Scenario planning has a long intellectual history. RAND developed Assumption-Based Planning (ABP) for the U.S. Army in the 1990s. Shell pioneered corporate scenario planning in the 1970s under Pierre Wack, with Peter Schwartz leading the practice through the 1980s. The Oxford Scenario Planning Approach, described in a December 2025 MIT Sloan article, now integrates generative AI into the process itself.

All of these methodologies share a core principle: make your assumptions explicit. RAND defines an assumption as “an assertion about some characteristic of the future that underlies the current operations or plans of an organization.” Every plan has them. Most are invisible. The ones that stay invisible are the ones that cause failures.

Now consider what happens when you hand your business data to an AI and ask it to build a scenario. The model does exactly what it does with contracts and fiction: it fills gaps. Revenue growth for Q4? The model picks a plausible number. Competitive response to your market entry? The model invents one. Timeline for regulatory approval? The model estimates. Customer churn under the new pricing? The model generates a figure.

Every one of these is an assumption. None of them are labeled as such. The scenario reads like a coherent analysis, backed by data — but some of the “data” is real, some is derived, and some was fabricated to make the narrative hold together. You can’t tell which is which.

This is the same problem in its third incarnation. And it responds to the same three rules.


The General Pattern

Across three domains, the same structure repeats:

| Domain | Canon (Source of Truth) | Source Tags | Gap Labels |
|---|---|---|---|
| Document extraction | The document | EXTRACTED / INFERRED | BLANK |
| Worldbuilding | Real history + your lore | HISTORY / LORE-ESTABLISHED / LORE-INFERRED | HISTORICAL GAP / LORE GAP |
| Scenario building | Verified data + established constraints | VERIFIED / ASSUMED / PROJECTED | DATA GAP / ASSUMPTION GAP |

The underlying logic is always the same: distinguish what is known from what is invented, and make the boundary visible.

For business scenarios, the “canon” has two layers — just like worldbuilding:

  1. Verified data — things you know from actual measurements: last year’s revenue, current headcount, signed contracts, measured KPIs, market data from credible sources
  2. Established constraints — things that are decided, not speculated: budget limits, regulatory requirements, contractual deadlines, board-approved targets

Everything else — market growth estimates, competitive behavior, customer adoption rates, technology readiness timelines — is an assumption. And assumptions come in two flavors: ones you’ve thought about and can defend (even if uncertain), and ones the AI just made up because the scenario needed a number.

The three rules exist to separate these categories.


The Three Rules for Business Scenario Building

Rule 1: Force Blank → Flag Unknown Variables

When the AI encounters a variable it doesn’t have data for, it should say so — not invent a plausible value.

The gap labels for business scenarios split into two types:

  • [DATA GAP] — a factual input the scenario needs but that hasn’t been provided or isn’t available. Example: “This projection requires customer acquisition cost (CAC) for the DACH region; no data was provided.”
  • [ASSUMPTION GAP] — a strategic or behavioral assumption the scenario relies on but that hasn’t been explicitly stated. Example: “This scenario assumes competitor X will not lower prices in response. This assumption has not been validated.”

This is where RAND’s ABP framework and the three rules converge most directly. Dewar and his colleagues at RAND argue that every plan has a “ghost scenario” — the implicit, unstated set of assumptions about the future to which the plan is suited. The most dangerous assumptions are the ones nobody realized they were making. Forcing the AI to flag gaps is a practical way to surface the ghost scenario.

Rule 2: Penalize Guessing → A Silent Assumption Is Worse Than a Known Unknown

The business version of “a wrong answer is 3× worse than a blank” is this:

A hidden assumption baked into the scenario is worse than an explicitly flagged uncertainty. When you don’t have data, flag the gap — don’t fill it with a plausible number.

Why is this more dangerous in scenarios than in document extraction? Because scenarios compound. A single unflagged assumption about market growth feeds into revenue projections, which feed into headcount planning, which feeds into budget allocation, which feeds into board presentations. By the time the assumption fails, six months of planning has been built on top of it.
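The compounding is easy to make concrete. A toy planning chain in Python; every number is invented, and the point is only that one unflagged growth assumption silently sets the headcount and the budget downstream:

```python
def plan(growth: float) -> dict:
    """Toy chain: one growth assumption feeds revenue, then headcount, then budget."""
    revenue = 2_400_000 * (1 + growth)      # next-year revenue from €2.4M ARR
    headcount = round(revenue / 120_000)    # staffing at €120K revenue per head
    budget = headcount * 95_000             # cost budget at €95K per head
    return {"revenue": revenue, "headcount": headcount, "budget": budget}

assumed, actual = plan(0.15), plan(0.05)   # planned on 15% growth; reality delivers 5%
print(assumed["budget"] - actual["budget"])  # overspend baked in by a single number
```

Nothing in the final budget figure reveals that it rests on the growth number; only a tag at the top of the chain does.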

ABP calls these “load-bearing assumptions” — the ones whose failure would require fundamental changes to the plan. The three-rule framework surfaces them before they bear load.

Rule 3: Show the Source → VERIFIED / ASSUMED / PROJECTED

Every number, every trend, every behavioral claim in the scenario gets one of three tags:

  • (VERIFIED) — based on actual data you’ve provided: financial reports, signed contracts, measured KPIs, credible third-party research with a citation
  • (ASSUMED) — a belief about the future that the scenario relies on but that could be wrong. The model must state the assumption explicitly: “Assumes 15% annual growth in segment X, consistent with 2023–2025 trend”
  • (PROJECTED) — a value derived or calculated from verified data and stated assumptions. The model must show the derivation: “Projected from Q1–Q3 actuals at current run rate”

The critical distinction between ASSUMED and PROJECTED: an assumption is a belief you bring to the scenario; a projection is a calculation the model performs using your data and assumptions as inputs. Assumptions can be challenged (“what if growth is 5% instead of 15%?”). Projections can be audited (“show me the calculation”).

This maps directly onto what scenario planning practitioners call “sensitivity analysis”: identifying which assumptions the scenario’s conclusions are most sensitive to. With source tags in place, you can immediately see which conclusions rest on verified data (stable) and which rest on assumptions (fragile). That’s where your attention should go.


The Combined Prompt

Here is the full framework as a system prompt. Replace the bracketed placeholders with your specific context.

You are my scenario planning analyst. We are building a [TYPE: business plan / market analysis / project risk assessment / KPI framework / budget scenario] for [CONTEXT: company, project, product, market].

Your task is to produce analysis that is transparent about what it knows, what it assumes, and what it doesn’t know. Follow these rules strictly:

Rule 1 — Flag unknown variables:
• If the scenario requires data that has not been provided, do not invent a plausible value. Use [DATA GAP: description of what data is missing and why it matters].
• If the scenario relies on a strategic or behavioral assumption that has not been explicitly validated, flag it with [ASSUMPTION GAP: description of the unstated assumption].

Rule 2 — Do not fill gaps silently:
• A hidden assumption baked into the analysis is worse than an explicitly flagged uncertainty.
• When data is missing, flag the gap. Do not generate a plausible-sounding number.
• When an outcome depends on assumptions about competitor behavior, market dynamics, regulatory decisions, or customer response, state the assumption explicitly rather than embedding it as fact.

Rule 3 — Source labeling:
Tag every significant claim, number, or conclusion with its source:
• (VERIFIED) — based on actual data I provided or credible, cited third-party data
• (ASSUMED) — a belief about the future that the scenario depends on. State the assumption and what it is based on.
• (PROJECTED) — calculated or derived from verified data and stated assumptions. Show or describe the derivation.
• For every ASSUMED tag, briefly state what would change if the assumption is wrong.
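Because models can still drop the tags under pressure, the prompt is worth pairing with a cheap post-hoc filter on the output. A sketch using a crude heuristic (any line containing a number but no tag is suspect); a real filter would need to be more discriminating:

```python
import re

TAG = re.compile(r"\((?:VERIFIED|ASSUMED|PROJECTED)\)|\[(?:DATA|ASSUMPTION) GAP")

def untagged_claims(output: str) -> list[str]:
    """Return lines that carry a number but no provenance tag."""
    return [line for line in output.splitlines()
            if re.search(r"\d", line) and not TAG.search(line)]

sample = """Current ARR: €2.4M (VERIFIED)
Year 1 revenue: €310K
Competitor response: no price cut (ASSUMED)"""
print(untagged_claims(sample))  # ['Year 1 revenue: €310K']
```

Flagged lines go back to the model (or the analyst) for tagging before the output moves downstream.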


Example: Market Entry Scenario

Here’s what the output looks like when the rules are active. Imagine asking the AI to assess a SaaS product launch in a new market:

| Variable | Value | Source | Note |
|---|---|---|---|
| Current ARR | €2.4M | VERIFIED | Q4 2025 financial report |
| Target market size (DACH) | €340M | VERIFIED | Gartner 2025 report, cited |
| Year 1 market share | | DATA GAP | No comparable launch data provided for this segment |
| CAC (DACH region) | | DATA GAP | Current CAC is for US market only; DACH acquisition costs not provided |
| Pricing model | €49/seat/month | VERIFIED | Board-approved pricing decision, March 2026 |
| Competitor response | No price reduction | ASSUMED | Assumes incumbent competitor maintains current pricing. If they discount 20%, projected margin drops from 68% to ~51% |
| Year 1 revenue projection | €180K–€420K | PROJECTED | Range based on 30–70 enterprise seats at stated pricing. Lower bound assumes no channel partners; upper bound assumes 2 reseller agreements (ASSUMPTION GAP: no reseller discussions confirmed) |

Compare this to what the same model would produce without the rules: a single confident revenue projection of €310K, a specific market share percentage, an assumed CAC that looks like data, and no indication of which numbers are real and which are invented.

The tagged version takes thirty seconds longer to read. It saves weeks of planning on false foundations.


Applications Beyond Strategy

The same framework adapts to any structured planning context:

KPI Development: When defining KPIs for a new initiative, tag each target as VERIFIED (based on historical baseline), ASSUMED (based on industry benchmarks or management expectations), or PROJECTED (calculated from verified inputs). Flag any KPI that lacks a reliable baseline with [DATA GAP]. This prevents the common failure mode where AI-generated KPI dashboards contain a mix of real metrics and invented benchmarks with no way to tell them apart.

Project Risk Assessment: For each identified risk, tag the probability and impact as VERIFIED (based on historical incident data), ASSUMED (based on expert judgment or analogy), or PROJECTED (derived from a model). Flag risks where neither data nor expert input exists with [ASSUMPTION GAP]. The result is a risk register that honestly distinguishes evidence-based risks from plausible-sounding guesses.

Budget Scenarios: Tag every line item. Fixed costs from signed contracts are VERIFIED. Headcount-dependent costs using planned hiring are PROJECTED (with the hiring plan as a stated assumption). Revenue-dependent items are ASSUMED if revenue targets haven’t been validated against pipeline data. The budget becomes a map of its own confidence levels.

Competitive Analysis: Every claim about a competitor’s strategy, pricing, or market position should be tagged. Public financial data is VERIFIED. Inferences from job postings or patent filings are PROJECTED. Assumptions about their future moves are ASSUMED — with an explicit “if wrong” note. This prevents the common scenario planning failure where competitive intelligence is a blend of hard data and conjecture, presented uniformly as fact.


The Framework as a Pattern

Looking across all three posts, the general framework can be stated in one paragraph:

When using AI in any domain where fidelity to sources matters, apply three rules: (1) give the model explicit permission to not-know, with labeled gaps; (2) make the cost of silent invention higher than the cost of flagged uncertainty; (3) require every claim to carry a provenance tag showing whether it comes from verified source material, from stated assumptions, or from the model’s own inference. The specific labels change by domain, but the structure is universal.

Domain        | Rule 1: Force Blank         | Rule 2: Penalize Guessing                  | Rule 3: Show Source
------------- | --------------------------- | ------------------------------------------ | ------------------------------------------
Extraction    | BLANK + Reason              | Wrong answer 3× worse                      | EXTRACTED / INFERRED
Worldbuilding | HISTORICAL GAP / LORE GAP   | False invention worse than gap             | HISTORY / LORE-ESTABLISHED / LORE-INFERRED
Scenarios     | DATA GAP / ASSUMPTION GAP   | Hidden assumption worse than known unknown | VERIFIED / ASSUMED / PROJECTED
General       | [GAP: type + explanation]   | Silent invention > flagged uncertainty     | SOURCE / DERIVED / INFERRED

That bottom row is the portable version. It works for legal research, medical summarization, academic literature review, code refactoring, translation — any task where you need the model to be useful without being dishonest.
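The portable bottom row can also be enforced mechanically. Here is a small sketch, assuming the convention that every paragraph of model output ends with a provenance tag in parentheses or contains a bracketed gap marker (the convention is mine; adapt the regexes to whatever labels your domain uses):

```python
import re

# A paragraph is "labeled" if it ends with a provenance tag...
TAG = re.compile(r"\((SOURCE|DERIVED|INFERRED)\)\s*$")
# ...or contains an explicit gap marker anywhere.
GAP = re.compile(r"\[GAP:[^\]]+\]")

def unlabeled_paragraphs(text: str) -> list[str]:
    """Return paragraphs that carry neither a provenance tag nor a gap marker."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [p for p in paragraphs if not TAG.search(p) and not GAP.search(p)]

output = """The contract term is 24 months. (SOURCE)

Renewal is likely automatic, based on clause 7. (INFERRED)

[GAP: no payment schedule found in the document]

The penalty for late delivery is 2% per week."""

print(unlabeled_paragraphs(output))  # only the last, untagged paragraph
```

A check like this is cheap to run on every response and catches the failure mode the framework targets: fluent claims that arrive with no provenance at all.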

RAND’s James Dewar wrote in 2002 that every plan has a “ghost scenario” — the unstated set of assumptions about the future to which the plan is unconsciously suited. The three-rule framework is, in essence, a ghost-scenario detector. It forces the invisible to become visible, whether the plan is a vendor contract, a fictional universe, or a five-year business strategy.

The models are getting smarter every quarter. Making them honest is still up to us.


Sources and Further Reading

  • Dewar, J.A. et al. (1993/2002): “Assumption-Based Planning.” RAND Corporation. The foundational methodology for identifying, testing, and planning around critical assumptions. Overview at MindTools.
  • Lambdin, C. (2024): “Assumption-Based Planning.” Excellent deep-dive into ABP with the “ghost scenario” concept.
  • Ramírez, R. et al. (December 2025): “A Faster Way to Build Future Scenarios.” MIT Sloan Management Review. On integrating generative AI into the Oxford Scenario Planning Approach.
  • Schwartz, P. (1991): “The Art of the Long View: Planning for the Future in an Uncertain World.” The foundational text on corporate scenario planning.
  • Previous posts in this series:
    ChatGPT and Claude Got Smarter. Not More Honest. — The original three rules for document extraction.
    From Contract Extraction to Alternate History — Adapting the rules for worldbuilding.

From Document Extraction to Alternate History: Why the Three Honesty Rules Work for Worldbuilding Too

A few weeks ago I wrote about three prompt rules that stop AI from guessing when extracting data from documents. The rules — Force Blank, Penalize Guessing, Show the Source — were designed for mundane business problems: contracts with contradictory clauses, meeting notes with ambiguous commitments, invoices with missing fields.

But the more I used them, the more I noticed something: the same rules solve an entirely different problem — one that has nothing to do with business documents.

They solve worldbuilding.


The Problem: AI as a Continuity Editor

Anyone who has tried to use an LLM for sustained creative work knows the pattern. You’re building an alternate history, a fantasy setting, a science fiction universe, a tabletop RPG campaign. You’ve written hundreds of pages of lore. You hand it to Claude or ChatGPT and ask a question about how your fictional world works.

And the model invents something.

It creates a faction that doesn’t exist. It attributes a technology to the wrong era. It “remembers” a character who was never in your notes. It confidently places a fictional event in a real historical period and gets the real history wrong while doing so. The output sounds plausible, internally consistent, beautifully written — and it contradicts everything you’ve built.

This is the same structural problem I described in the earlier post, just in a different domain. The model is trained to produce complete, coherent output. When your lore has a gap, the model fills it — because filling gaps is what it was optimized to do. Whether the gap is “what are the payment terms in section 4” or “what happened in the Imperial Senate after the divergence point,” the instinct is identical: make something up that sounds right.

Researchers have a term for this in the fiction context: “character hallucination” (Wu et al., 2024) — when an AI playing a role violates the established identity of that role. The IJCAI 2025 tutorial on LLM role-playing calls the broader challenge “controlled hallucination”: the model must invent creatively within the established rules of a fictional world, while rigorously refusing to invent things that contradict those rules. The line between productive creativity and lore-breaking confabulation is exactly the line the three rules are designed to draw.


The Adaptation: Worldbuilding Has Two Canons, Not One

In contract extraction there’s one source of truth: the document. Extract what’s there, flag what isn’t, don’t invent.

In alternate history, there are two sources of truth operating simultaneously:

  1. Real history — everything that happened in our world before the story diverges from it
  2. Your lore — everything you’ve established about what happens after the divergence

Both are canonical. Both are places the AI must not invent. And the boundary between them is sharp: the “point of divergence” (POD), the moment at which your fictional timeline breaks from real history.

Before the POD, the AI must be a historian. It can reference real people, real technologies, real battles, real events — but only things that actually happened. Inventing a battle that didn’t happen or a person who didn’t exist is as bad as making up a contract clause.

After the POD, the AI must be a continuity editor. Only the things established in your lore exist. Everything else is a gap — and gaps should be labeled, not filled.

This is where the three rules come in, almost unchanged.


The Three Rules, Adapted

Rule 1: Force Blank → Label the Gaps

In document extraction, the model leaves a field BLANK when the data is missing and explains why. In worldbuilding, the same principle applies with two labels instead of one — because there are two types of gap:

  • [HISTORICAL GAP] — for events before the point of divergence that the model isn’t certain about. Don’t invent a Roman consul’s biography; flag the gap.
  • [LORE GAP: no established specification] — for developments after the point of divergence that your lore hasn’t addressed yet. Don’t invent a new faction, technology, or major event; flag the gap.

The crucial move is the same as before: give the model explicit permission to not-know. Without this permission, the model’s completion instinct will override its uncertainty detection, and you’ll get confidently written hallucinations that feel like canon but aren’t.

Rule 2: Penalize Guessing → A False Invention Is Worse Than a Gap

The business version of this rule says: “A wrong answer is 3× worse than a blank. When in doubt, leave it blank.”

The worldbuilding version is even more forceful, because the consequences are worse. A wrong payment term on a spreadsheet gets corrected. A wrong lore detail, accepted into your canon because it sounded right, can poison hundreds of hours of subsequent writing. Every future reference builds on it. Every character interacts with it. By the time you catch it, it’s woven through your world.

So the rule becomes:

A false invention is worse than acknowledging a gap in the worldbuilding.

No multiplier needed. The asymmetry is total. In creative work, a gap is a prompt to expand your lore on your own terms. A bad invention is a bug that ships.

Rule 3: Show the Source → Three Provenance Tags Instead of Two

In document extraction, every value is either EXTRACTED (directly from the source) or INFERRED (calculated or derived). In worldbuilding, you need three tags because you have two canonical sources plus your own extrapolation:

  • (HISTORY) — real historical fact from before the point of divergence
  • (LORE-ESTABLISHED) — stated exactly this way in your source lore
  • (LORE-INFERRED) — a logical consequence the model is drawing from your lore, with a one-sentence justification

The third tag is where the magic happens. You want the model to extrapolate — that’s what makes it useful for worldbuilding. An established technology must have consequences; an established faction must interact with other factions; an established event must have ripple effects. But you want those extrapolations flagged, so you can review them and decide whether they fit your vision. A flagged inference you disagree with takes thirty seconds to correct. An unflagged inference that quietly becomes canon takes hours to untangle three sessions later.
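In a long session, reviewing those inferences is easier if you pull them out of the prose. A sketch of that review step, assuming the parenthetical tag convention described above (the function name and sample text are mine):

```python
import re

TAG_RE = re.compile(r"\((HISTORY|LORE-ESTABLISHED|LORE-INFERRED)\)")

def by_provenance(text: str) -> dict[str, list[str]]:
    """Group tagged paragraphs by their provenance label."""
    groups: dict[str, list[str]] = {
        "HISTORY": [], "LORE-ESTABLISHED": [], "LORE-INFERRED": []
    }
    for para in filter(None, (p.strip() for p in text.split("\n\n"))):
        match = TAG_RE.search(para)
        if match:
            groups[match.group(1)].append(para)
    return groups

session = """The Archduke was assassinated in Sarajevo in June 1914. (HISTORY)

The Coalition controls the northern trade routes. (LORE-ESTABLISHED)

Port cities along those routes would grow economically dependent on the
Coalition, drawing from the established trade-route control. (LORE-INFERRED)"""

review_queue = by_provenance(session)["LORE-INFERRED"]
print(len(review_queue))  # inferences awaiting your approval
```

The `LORE-INFERRED` bucket is the review queue: everything in it is the model’s extrapolation, waiting for you to accept it into canon or override it.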


The Combined Prompt

Here is the full adaptation, structured as a system prompt you can paste into any long-running chat about your fictional world. Replace the bracketed placeholders with your own setting.

We are building an alternate timeline that begins in [YEAR] with [CHANGE / POINT OF DIVERGENCE]. You are my historian and continuity editor for this alternate-history universe. Your task is to produce texts, responses, and lore concepts that are absolutely free of contradiction.

The primary rule (the Point of Divergence): The year of divergence is [YEAR].

Rule 1 — BEFORE the Point of Divergence (strict history):
• Everything that happened before this date must correspond 100% to real, verifiable Earth history.
• Do not invent historical persons, technologies, battles, or events.
• If you are not certain of a historical detail, do not invent one. Use the placeholder [HISTORICAL GAP] instead.

Rule 2 — AFTER the Point of Divergence (strict lore canon):
• Everything that happens after this date must be based exclusively on lore texts I provide.
• Do not invent new factions, main characters, major events, or fundamental technologies that are not established in my texts.
• If asked about developments my lore does not specify, respond with [LORE GAP: no established specification]. A false invention is worse than acknowledging a gap in the worldbuilding.

Rule 3 — Source and logic labeling:
To keep the worldbuilding clean, mark in parentheses at the end of each paragraph or for each significant claim where the information comes from:
• (HISTORY) for real historical facts before the point of divergence
• (LORE-ESTABLISHED) for facts stated exactly this way in my texts
• (LORE-INFERRED) for logical conclusions drawn from my lore (e.g., how an established technology affects daily life). When inferring, briefly explain what you are drawing the inference from.

Plug in the year, plug in the divergence event, attach your lore documents, and you have a continuity editor that actively refuses to lie to you.
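If you run several settings, filling in the placeholders can be scripted rather than done by hand. A minimal sketch (the template is abbreviated to its opening lines; the full prompt text above is what you would actually paste in):

```python
# Hypothetical helper for instantiating the system prompt per setting.
PROMPT_TEMPLATE = (
    "We are building an alternate timeline that begins in {year} with {pod}. "
    "You are my historian and continuity editor for this alternate-history "
    "universe.\n\n"
    "The primary rule (the Point of Divergence): The year of divergence is "
    "{year}.\n"
    # ...Rules 1-3 from the full prompt above would follow here...
)

def build_system_prompt(year: int, pod: str) -> str:
    """Fill the bracketed placeholders with a concrete setting."""
    return PROMPT_TEMPLATE.format(year=year, pod=pod)

prompt = build_system_prompt(1914, "the assassination in Sarajevo failing")
print(prompt.splitlines()[0])
```

Keeping the template in one place also means a wording fix to the rules propagates to every setting you maintain.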


What This Enables

The workflow change is significant. Without these rules, every AI-generated paragraph needs to be cross-checked against both real history and your own notes — which nobody actually does, so errors accumulate silently. With the rules, your attention goes exactly where it should: to the gaps (where you get to decide what your world does next) and to the inferences (where you get to approve or override the model’s extrapolation).

A few observations from applying this in practice:

The gaps are often the most interesting output. When the model flags [LORE GAP] for something, that’s the moment you realize your lore has a hole — and often, that hole is exactly the next thing you should develop. The model isn’t failing to answer; it’s telling you where your world needs more work.

Inferences reveal your lore’s implications. A well-labeled (LORE-INFERRED) paragraph often surfaces consequences you hadn’t thought through. “You established that faction X controls the trade route in Y; inferring, this would mean port city Z becomes economically dependent, which suggests tension with neighbor W.” That’s useful even if you reject the specific extrapolation — it shows you a logical consequence of your own setup.

Real history keeps the fiction grounded. Alternate history works best when the “before” is accurate. If your timeline diverges in 1914 and the model gets the pre-1914 world wrong, the whole divergence loses meaning. Forcing (HISTORY) labels — and forcing the model to flag [HISTORICAL GAP] when it’s uncertain — keeps the foundation solid.


The Deeper Pattern

What I find striking is that the same three rules work across two domains that seem to have nothing in common. Business document extraction and creative worldbuilding share no vocabulary, no audience, no workflow. But they share a structure: in both cases, the user needs the AI to distinguish between what is established and what is invented, and to flag the boundary clearly.

That structural similarity is worth taking seriously. It suggests the three rules aren’t really about contracts or fiction specifically — they’re about the general problem of using AI in any context where fidelity to a source matters more than fluency of output. Legal research. Code refactoring against a style guide. Historical research. Medical summarization. Translation against a glossary. Technical writing against a spec. Academic literature review.

In each of these, the AI’s default behavior — produce a confident, complete, coherent answer — works against the user’s actual need, which is to know which parts of the output are grounded and which are the model’s own contribution. Force Blank gives it permission to not-know. Penalize Guessing changes the calculus in favor of honesty. Show the Source makes the boundary between source and invention visible.

Three rules. Two sentences each. Apply everywhere fidelity matters.

The alternate history version is just one adaptation. I’d be curious what other domains this pattern fits — if you find one, I’d love to hear about it.


Sources and Further Reading