The Three-Rule Framework: From Document Extraction to Business Scenario Building
This is the third post in a series about a small set of prompt rules with a surprisingly wide reach.
In the first post, I showed how three rules — Force Blank, Penalize Guessing, Show the Source — stop AI from silently guessing when extracting data from contracts and invoices. In the second post, I adapted them for alternate history worldbuilding, where the same rules keep lore consistent and real history accurate.
This post takes the final step: generalizing the three rules into a framework that works for business scenario building — strategic planning, KPI development, project risk assessment, financial modeling, market entry analysis, and any other context where you’re using AI to think about the future.
Why Scenario Building Is Vulnerable to the Same Problem
Scenario planning has a long intellectual history. RAND developed Assumption-Based Planning (ABP) for the U.S. Army in the 1990s. Shell pioneered corporate scenario planning in the 1980s under Peter Schwartz. The Oxford Scenario Planning Approach, described in a December 2025 MIT Sloan article, now integrates generative AI into the process itself.
All of these methodologies share a core principle: make your assumptions explicit. RAND defines an assumption as “an assertion about some characteristic of the future that underlies the current operations or plans of an organization.” Every plan has them. Most are invisible. The ones that stay invisible are the ones that cause failures.
Now consider what happens when you hand your business data to an AI and ask it to build a scenario. The model does exactly what it does with contracts and fiction: it fills gaps. Revenue growth for Q4? The model picks a plausible number. Competitive response to your market entry? The model invents one. Timeline for regulatory approval? The model estimates. Customer churn under the new pricing? The model generates a figure.
Every one of these is an assumption. None of them are labeled as such. The scenario reads like a coherent analysis, backed by data — but some of the “data” is real, some is derived, and some was fabricated to make the narrative hold together. You can’t tell which is which.
This is the same problem in its third incarnation. And it responds to the same three rules.
The General Pattern
Across three domains, the same structure repeats:
| Domain | Canon (Source of Truth) | Source Tags | Gap Labels |
|---|---|---|---|
| Document extraction | The document | EXTRACTED / INFERRED | BLANK |
| Worldbuilding | Real history + Your lore | HISTORY / LORE-ESTABLISHED / LORE-INFERRED | HISTORICAL GAP / LORE GAP |
| Scenario building | Verified data + Established constraints | VERIFIED / ASSUMED / PROJECTED | DATA GAP / ASSUMPTION GAP |
The underlying logic is always the same: distinguish what is known from what is invented, and make the boundary visible.
For business scenarios, the “canon” has two layers — just like worldbuilding:
- Verified data — things you know from actual measurements: last year’s revenue, current headcount, signed contracts, measured KPIs, market data from credible sources
- Established constraints — things that are decided, not speculated: budget limits, regulatory requirements, contractual deadlines, board-approved targets
Everything else — market growth estimates, competitive behavior, customer adoption rates, technology readiness timelines — is an assumption. And assumptions come in two flavors: ones you’ve thought about and can defend (even if uncertain), and ones the AI just made up because the scenario needed a number.
The three rules exist to separate these categories.
The Three Rules for Business Scenario Building
Rule 1: Force Blank → Flag Unknown Variables
When the AI encounters a variable it doesn’t have data for, it should say so — not invent a plausible value.
The gap labels for business scenarios split into two types:
- [DATA GAP] — a factual input the scenario needs but that hasn’t been provided or isn’t available. Example: “This projection requires customer acquisition cost (CAC) for the DACH region; no data was provided.”
- [ASSUMPTION GAP] — a strategic or behavioral assumption the scenario relies on but that hasn’t been explicitly stated. Example: “This scenario assumes competitor X will not lower prices in response. This assumption has not been validated.”
This is where RAND’s ABP framework and the three rules converge most directly. Dewar and his colleagues at RAND argue that every plan has a “ghost scenario” — the implicit, unstated set of assumptions about the future to which the plan is suited. The most dangerous assumptions are the ones nobody realized they were making. Forcing the AI to flag gaps is a practical way to surface the ghost scenario.
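Gap flags are also machine-checkable, which makes Rule 1 auditable after the fact. The sketch below (illustrative, assuming the bracketed labels appear verbatim in the model's output) pulls every flagged gap out of a scenario so it can be reviewed as a checklist:

```python
import re

# Matches the two Rule 1 gap markers and captures the label plus its
# explanation, e.g. "[DATA GAP: DACH acquisition costs not provided]".
GAP_PATTERN = re.compile(r"\[(DATA GAP|ASSUMPTION GAP):\s*([^\]]+)\]")

def extract_gaps(scenario_text: str) -> list[tuple[str, str]]:
    """Return (label, explanation) pairs for every flagged gap."""
    return GAP_PATTERN.findall(scenario_text)

# Hypothetical model output containing one gap of each type.
output = (
    "Year 1 revenue depends on CAC "
    "[DATA GAP: DACH acquisition costs not provided] and assumes "
    "[ASSUMPTION GAP: competitor X will not lower prices]."
)

for label, detail in extract_gaps(output):
    print(f"{label}: {detail}")
```

The resulting list is, in effect, a first draft of the scenario's ghost-scenario inventory: every item is something the plan depends on but does not know.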
Rule 2: Penalize Guessing → A Silent Assumption Is Worse Than a Known Unknown
The business version of “a wrong answer is 3× worse than a blank” is this:
A hidden assumption baked into the scenario is worse than an explicitly flagged uncertainty. When you don’t have data, flag the gap — don’t fill it with a plausible number.
Why is this more dangerous in scenarios than in document extraction? Because scenarios compound. A single unflagged assumption about market growth feeds into revenue projections, which feed into headcount planning, which feeds into budget allocation, which feeds into board presentations. By the time the assumption fails, six months of planning has been built on top of it.
ABP calls these “load-bearing assumptions” — the ones whose failure would require fundamental changes to the plan. The three-rule framework surfaces them before they bear load.
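The compounding effect is easy to demonstrate with numbers. The toy model below (all figures illustrative; the revenue-per-head and cost-per-head constants are assumptions of the example, not benchmarks) shows how a single growth assumption propagates through revenue, headcount, and budget:

```python
# Hypothetical planning chain: one growth assumption feeds revenue,
# which feeds headcount, which feeds budget.

def plan(growth_assumption: float) -> dict[str, float]:
    base_revenue = 2_400_000                              # (VERIFIED) current ARR
    revenue = base_revenue * (1 + growth_assumption)      # (PROJECTED)
    headcount = round(revenue / 150_000)                  # (PROJECTED) revenue per head, assumed
    budget = headcount * 95_000                           # (PROJECTED) cost per head, assumed
    return {"revenue": revenue, "headcount": headcount, "budget": budget}

optimistic = plan(0.15)   # (ASSUMED) 15% growth
cautious = plan(0.05)     # the same assumption, challenged

# The spread between the two budgets is what the silent assumption was hiding.
print(optimistic["budget"] - cautious["budget"])
```

With the assumption flagged, the downstream spread is visible before hiring starts; silent, it only shows up when the budget fails.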
Rule 3: Show the Source → VERIFIED / ASSUMED / PROJECTED
Every number, every trend, every behavioral claim in the scenario gets one of three tags:
- (VERIFIED) — based on actual data you’ve provided: financial reports, signed contracts, measured KPIs, credible third-party research with a citation
- (ASSUMED) — a belief about the future that the scenario relies on but that could be wrong. The model must state the assumption explicitly: “Assumes 15% annual growth in segment X, consistent with 2023–2025 trend”
- (PROJECTED) — a value derived or calculated from verified data and stated assumptions. The model must show the derivation: “Projected from Q1–Q3 actuals at current run rate”
The critical distinction between ASSUMED and PROJECTED: an assumption is a belief you bring to the scenario; a projection is a calculation the model performs using your data and assumptions as inputs. Assumptions can be challenged (“what if growth is 5% instead of 15%?”). Projections can be audited (“show me the calculation”).
This maps directly onto what scenario planning practitioners call “sensitivity analysis”: identifying which assumptions the scenario’s conclusions are most sensitive to. With source tags in place, you can immediately see which conclusions rest on verified data (stable) and which rest on assumptions (fragile). That’s where your attention should go.
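Once assumptions are tagged, a crude sensitivity sweep follows naturally: perturb each ASSUMED input over a plausible range and rank them by how far the conclusion moves. A sketch, with an invented revenue formula and illustrative ranges:

```python
# Rank assumptions by how much the Year 1 revenue projection swings
# when each one is varied across a plausible range. All values illustrative.

def year1_revenue(seats: int, price: float, churn: float) -> float:
    """Toy projection: seats * monthly price * 12 months, net of churn."""
    return seats * price * 12 * (1 - churn)

baseline = {"seats": 50, "price": 49.0, "churn": 0.10}

# Low/high values for each (ASSUMED) input.
ranges = {"seats": (30, 70), "price": (39.0, 49.0), "churn": (0.05, 0.20)}

def swing(name: str) -> float:
    """Revenue spread when one assumption moves across its range."""
    lo, hi = ranges[name]
    results = [year1_revenue(**{**baseline, name: v}) for v in (lo, hi)]
    return max(results) - min(results)

ranked = sorted(ranges, key=swing, reverse=True)
print(ranked)  # most fragile assumption first
```

The assumption at the top of the ranking is the one to validate first; the ones at the bottom can stay flagged but unexamined for now.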
The Combined Prompt
Here is the full framework as a system prompt. Replace the bracketed placeholders with your specific context.
You are my scenario planning analyst. We are building a [TYPE: business plan / market analysis / project risk assessment / KPI framework / budget scenario] for [CONTEXT: company, project, product, market].
Your task is to produce analysis that is transparent about what it knows, what it assumes, and what it doesn’t know. Follow these rules strictly:
Rule 1 — Flag unknown variables:
• If the scenario requires data that has not been provided, do not invent a plausible value. Use [DATA GAP: description of what data is missing and why it matters].
• If the scenario relies on a strategic or behavioral assumption that has not been explicitly validated, flag it with [ASSUMPTION GAP: description of the unstated assumption].
Rule 2 — Do not fill gaps silently:
• A hidden assumption baked into the analysis is worse than an explicitly flagged uncertainty.
• When data is missing, flag the gap. Do not generate a plausible-sounding number.
• When an outcome depends on assumptions about competitor behavior, market dynamics, regulatory decisions, or customer response, state the assumption explicitly rather than embedding it as fact.
Rule 3 — Source labeling:
Tag every significant claim, number, or conclusion with its source:
• (VERIFIED) — based on actual data I provided or credible, cited third-party data
• (ASSUMED) — a belief about the future that the scenario depends on. State the assumption and what it is based on.
• (PROJECTED) — calculated or derived from verified data and stated assumptions. Show or describe the derivation.
• For every ASSUMED tag, briefly state what would change if the assumption is wrong.
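If you reuse the prompt across scenario types, the placeholder substitution can be scripted. The sketch below uses a condensed stand-in for the template (for real use, paste the full prompt text above); the config values are hypothetical:

```python
# Fill the bracketed placeholders of the combined prompt from parameters,
# so one template serves plans, market analyses, risk assessments, etc.
# TEMPLATE here is an abbreviated stand-in for the full prompt.

TEMPLATE = """You are my scenario planning analyst. We are building a {scenario_type} for {context}.

Rule 1 — Flag unknown variables with [DATA GAP: ...] or [ASSUMPTION GAP: ...].
Rule 2 — Do not fill gaps silently; flag them instead.
Rule 3 — Tag every claim as (VERIFIED), (ASSUMED), or (PROJECTED)."""

def build_prompt(scenario_type: str, context: str) -> str:
    """Return the system prompt with both placeholders filled in."""
    return TEMPLATE.format(scenario_type=scenario_type, context=context)

print(build_prompt("market analysis", "a SaaS launch in the DACH region"))
```

Keeping the rules in a single template also means every scenario in your organization inherits the same tag vocabulary, which makes outputs comparable.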
Example: Market Entry Scenario
Here’s what the output looks like when the rules are active. Imagine asking the AI to assess a SaaS product launch in a new market:
| Variable | Value | Source | Note |
|---|---|---|---|
| Current ARR | €2.4M | VERIFIED | Q4 2025 financial report |
| Target market size (DACH) | €340M | VERIFIED | Gartner 2025 report, cited |
| Year 1 market share | — | DATA GAP | No comparable launch data provided for this segment |
| CAC (DACH region) | — | DATA GAP | Current CAC is for US market only; DACH acquisition costs not provided |
| Pricing model | €49/seat/month | VERIFIED | Board-approved pricing decision, March 2026 |
| Competitor response | No price reduction | ASSUMED | Assumes incumbent competitor maintains current pricing. If they discount 20%, projected margin drops from 68% to ~51% |
| Year 1 revenue projection | €180K–€420K | PROJECTED | Range based on 30–70 enterprise seats at stated pricing. Lower bound assumes no channel partners; upper bound assumes 2 reseller agreements (ASSUMPTION GAP: no reseller discussions confirmed) |
Compare this to what the same model would produce without the rules: a single confident revenue projection of €310K, a specific market share percentage, an assumed CAC that looks like data, and no indication of which numbers are real and which are invented.
The tagged version takes thirty seconds longer to read. It saves weeks of planning on false foundations.
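Tagged output like the table above can also be audited mechanically: any claim line carrying neither a source tag nor a gap marker has slipped through Rule 3. A minimal sketch, assuming tags appear verbatim and one claim per line (the sample scenario lines are invented):

```python
# Flag claim lines that carry no source tag and no gap marker.

TAGS = ("(VERIFIED)", "(ASSUMED)", "(PROJECTED)")
GAPS = ("[DATA GAP", "[ASSUMPTION GAP")

def untagged_claims(lines: list[str]) -> list[str]:
    """Return non-empty lines with neither a source tag nor a gap marker."""
    return [
        line for line in lines
        if line.strip() and not any(marker in line for marker in TAGS + GAPS)
    ]

scenario = [
    "Current ARR: EUR 2.4M (VERIFIED)",
    "Competitor holds current pricing (ASSUMED)",
    "Year 1 revenue: EUR 310K",          # slipped through untagged
    "[DATA GAP: CAC for DACH region not provided]",
]

print(untagged_claims(scenario))
```

Anything this check surfaces is exactly the material the untagged version of the scenario would have presented as fact.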
Applications Beyond Strategy
The same framework adapts to any structured planning context:
KPI Development: When defining KPIs for a new initiative, tag each target as VERIFIED (based on historical baseline), ASSUMED (based on industry benchmarks or management expectations), or PROJECTED (calculated from verified inputs). Flag any KPI that lacks a reliable baseline with [DATA GAP]. This prevents the common failure mode where AI-generated KPI dashboards contain a mix of real metrics and invented benchmarks with no way to tell them apart.
Project Risk Assessment: For each identified risk, tag the probability and impact as VERIFIED (based on historical incident data), ASSUMED (based on expert judgment or analogy), or PROJECTED (derived from a model). Flag risks where neither data nor expert input exists with [ASSUMPTION GAP]. The result is a risk register that honestly distinguishes evidence-based risks from plausible-sounding guesses.
Budget Scenarios: Tag every line item. Fixed costs from signed contracts are VERIFIED. Headcount-dependent costs using planned hiring are PROJECTED (with the hiring plan as a stated assumption). Revenue-dependent items are ASSUMED if revenue targets haven’t been validated against pipeline data. The budget becomes a map of its own confidence levels.
Competitive Analysis: Every claim about a competitor’s strategy, pricing, or market position should be tagged. Public financial data is VERIFIED. Inferences from job postings or patent filings are PROJECTED. Assumptions about their future moves are ASSUMED — with an explicit “if wrong” note. This prevents the common scenario planning failure where competitive intelligence is a blend of hard data and conjecture, presented uniformly as fact.
The Framework as a Pattern
Looking across all three posts, the general framework can be stated in one paragraph:
When using AI in any domain where fidelity to sources matters, apply three rules: (1) give the model explicit permission to not-know, with labeled gaps; (2) make the cost of silent invention higher than the cost of flagged uncertainty; (3) require every claim to carry a provenance tag showing whether it comes from verified source material, from stated assumptions, or from the model’s own inference. The specific labels change by domain, but the structure is universal.
| Domain | Rule 1: Force Blank | Rule 2: Penalize Guessing | Rule 3: Show Source |
|---|---|---|---|
| Extraction | BLANK + Reason | Wrong answer 3× worse | EXTRACTED / INFERRED |
| Worldbuilding | HISTORICAL GAP / LORE GAP | False invention worse than gap | HISTORY / LORE-ESTABLISHED / LORE-INFERRED |
| Scenarios | DATA GAP / ASSUMPTION GAP | Hidden assumption worse than known unknown | VERIFIED / ASSUMED / PROJECTED |
| General | [GAP: type + explanation] | Silent invention worse than flagged uncertainty | SOURCE / DERIVED / INFERRED |
That bottom row is the portable version. It works for legal research, medical summarization, academic literature review, code refactoring, translation — any task where you need the model to be useful without being dishonest.
RAND’s James Dewar wrote in 2002 that every plan has a “ghost scenario” — the unstated set of assumptions about the future to which the plan is unconsciously suited. The three-rule framework is, in essence, a ghost-scenario detector. It forces the invisible to become visible, whether the plan is a vendor contract, a fictional universe, or a five-year business strategy.
The models are getting smarter every quarter. Making them honest is still up to us.
Sources and Further Reading
- Dewar, J.A. et al. (1993/2002): “Assumption-Based Planning.” RAND Corporation. The foundational methodology for identifying, testing, and planning around critical assumptions. Overview at MindTools.
- Lambdin, C. (2024): “Assumption-Based Planning.” Excellent deep-dive into ABP with the “ghost scenario” concept.
- Ramírez, R. et al. (December 2025): “A Faster Way to Build Future Scenarios.” MIT Sloan Management Review. On integrating generative AI into the Oxford Scenario Planning Approach.
- Schwartz, P. (1991): “The Art of the Long View: Planning for the Future in an Uncertain World.” The foundational text on corporate scenario planning.
- Previous posts in this series:
ChatGPT and Claude Got Smarter. Not More Honest. — The original three rules for document extraction.
From Contract Extraction to Alternate History — Adapting the rules for worldbuilding.

