Implementing the Three-Rule Framework: Calibration, Governance, and Trade-offs
The previous post in this series introduced a general framework for AI-assisted scenario building: Force Blank, Penalize Guessing, Show the Source. The framework produces output where every claim is tagged as VERIFIED, ASSUMED, or PROJECTED, and where gaps are explicitly labeled instead of silently filled.
That’s the what. This post is about the how — three practical challenges that anyone implementing the framework will encounter:
- Calibration: You’ve tagged something as ASSUMED. How do you check whether the assumption is reasonable?
- Governance: How do organizations enforce tagging in actual workflows — not just in one person’s prompt?
- Trade-offs: Doesn’t all this tagging create cognitive overload? How do non-experts read a document full of provenance labels?
1. Calibrating Assumptions: From “Tagged” to “Tested”
Tagging an assumption is necessary but not sufficient. (ASSUMED: market grows 15% annually) is better than an unlabeled 15% baked into the projection — but it still doesn’t tell you whether 15% is defensible. The framework surfaces assumptions; calibration tests them.
Four calibration methods work well with the tagged output:
Reference Class Forecasting: The Outside View
Daniel Kahneman and Amos Tversky’s distinction between the “inside view” (planning based on the specifics of this project) and the “outside view” (what happened in similar projects historically) is the single most useful concept for calibrating assumptions. The planning fallacy — systematically underestimating costs and timelines — is so well-documented that the American Planning Association officially endorsed reference class forecasting in 2005 as a corrective.
In practice, this means: for every ASSUMED tag, ask the model (or yourself) to identify 3–5 comparable situations and their actual outcomes. If you assume 15% growth, what growth did similar products in similar markets actually achieve? If you assume a 6-month regulatory timeline, how long did comparable approvals actually take? The tagged format makes this step natural — you have a list of assumptions; now walk down it with an outside view on each one.
You can even build this into the prompt:
> For every ASSUMED tag, add a “Calibration” note: identify 2–3 comparable historical cases and their actual outcomes. If no comparable data exists, note [NO REFERENCE CLASS].
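The comparison itself is simple arithmetic once the reference class is assembled. A minimal sketch in Python, using illustrative numbers: check whether the assumed value falls inside the middle half of comparable outcomes.

```python
from statistics import quantiles

# Hypothetical reference class: annual growth actually achieved by
# five comparable products in comparable markets (the outside view).
reference_class = [0.08, 0.11, 0.04, 0.13, 0.07]  # illustrative values

assumed_growth = 0.15  # the ASSUMED value under calibration

q1, q2, q3 = quantiles(reference_class, n=4)  # quartiles of the outside view
print(f"Outside-view median: {q2:.0%}, middle half: {q1:.0%} to {q3:.0%}")

if not (q1 <= assumed_growth <= q3):
    # The assumption sits outside the middle half of comparable outcomes,
    # so it needs an explicit justification, not just a tag.
    print(f"ASSUMED {assumed_growth:.0%} is outside the reference class "
          f"interquartile range: document why this case differs.")
```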
Sensitivity Testing: What Breaks If This Is Wrong?
Not all assumptions are equally important. RAND’s Assumption-Based Planning calls this “criticality” — an assumption is critical if its failure would require fundamental changes to the plan. In practice, this means testing: what happens to the conclusion if this assumption is 50% wrong? If the answer is “not much,” the assumption is low-priority. If the answer is “the entire business case collapses,” that’s your highest-priority validation target.
The tagged format enables this directly. You can ask the model:
> Take the three ASSUMED items with the highest downstream impact on the final projection. For each, recalculate the projection with the assumption at 50% of stated value and at 150%. Show me which assumptions the conclusion is most sensitive to.
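If you would rather run the arithmetic yourself than delegate it to the model, the test is a few lines of code. A minimal sketch over a toy projection (all variable names and numbers hypothetical):

```python
# Toy projection: Year 1 revenue depends on three ASSUMED inputs.
assumptions = {
    "market_size_eur": 10_000_000,
    "annual_growth": 0.15,
    "capture_rate": 0.02,
}

def project_revenue(a: dict) -> float:
    """Year 1 revenue under the stated assumptions."""
    return a["market_size_eur"] * (1 + a["annual_growth"]) * a["capture_rate"]

baseline = project_revenue(assumptions)

# Sensitivity: rerun the projection with each assumption at 50% and 150%
# of its stated value, holding the others fixed.
for name in assumptions:
    swings = []
    for factor in (0.5, 1.5):
        perturbed = {**assumptions, name: assumptions[name] * factor}
        swings.append(project_revenue(perturbed))
    low, high = min(swings), max(swings)
    print(f"{name}: baseline {baseline:,.0f}, range {low:,.0f} to {high:,.0f} "
          f"({(high - low) / baseline:.0%} of baseline)")
```

The assumptions with the widest ranges relative to baseline are the ones worth validating first.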
Pre-Mortem: Imagine It Failed
Gary Klein’s pre-mortem technique inverts the question: instead of asking “will this work?”, you start from “it failed — why?” This is particularly effective for ASSUMED tags, because it surfaces failure modes that optimism hides. Ask the model:
> Assume this scenario failed after 12 months. Which of the ASSUMED items were most likely the point of failure? For each, describe a plausible narrative of how that assumption broke down.
Temporal Decay: When Does the Assumption Expire?
Assumptions have shelf lives. A market size estimate from a 2025 Gartner report is still reasonable in 2026. A competitive landscape assumption from 2024 may already be wrong. Adding a temporal dimension to ASSUMED tags helps:
> For each ASSUMED tag, add an expiry estimate: how long is this assumption likely to remain valid? Mark anything older than 12 months or based on pre-2025 data as [STALE ASSUMPTION].
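The staleness check is mechanical enough to automate outside the prompt. A minimal sketch, assuming each ASSUMED item records the date of its underlying source (field names hypothetical):

```python
from datetime import date, timedelta

# Hypothetical assumption log: each ASSUMED item carries its source date.
assumptions = [
    {"claim": "Market grows 15% annually", "source_date": date(2025, 3, 1)},
    {"claim": "Two major competitors",     "source_date": date(2024, 6, 1)},
]

MAX_AGE = timedelta(days=365)  # expiry threshold from the prompt: 12 months

today = date(2026, 2, 1)  # or date.today() in real use
for a in assumptions:
    if today - a["source_date"] > MAX_AGE:
        print(f"[STALE ASSUMPTION] {a['claim']} "
              f"(source dated {a['source_date'].isoformat()})")
```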
2. Governance: Making the Framework Stick Beyond One Person’s Prompt
The framework works well when one person uses it in one chat session. The governance question is: how does it survive contact with an organization — multiple people, multiple AI tools, multiple documents, over months?
The Problem: Tags Die in Translation
What typically happens: an analyst generates a beautifully tagged scenario. They copy it into a slide deck. The tags disappear. A manager reads the deck, sees “Year 1 revenue: €310K” with no indication that the number is PROJECTED from two unvalidated ASSUMED inputs. The ghost scenario lives again.
This is a knowledge management problem, not an AI problem. And it has knowledge management solutions.
Level 1: Template Enforcement
The simplest governance mechanism is a template. If your organization uses AI for scenario planning, the output template should have provenance columns built in. Not optional, not “add if useful” — structurally required. A scenario document without source tags should be treated the same way as a financial report without citations: incomplete.
Concretely: create a standard table format for all AI-assisted scenario outputs:
| Variable | Value | Source | Basis / If Wrong | Validated By | Date |
|---|---|---|---|---|---|

*All AI-generated scenario outputs must use this format.*
The “Validated By” and “Date” columns are the governance additions. They turn a prompt technique into an audit trail. Someone must sign off on each ASSUMED item before it enters planning.
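Enforcement of the template can itself be automated. A minimal sketch, assuming the table has been parsed into one dictionary per row (column names as in the template above; the rule shown is the sign-off requirement just described):

```python
REQUIRED_COLUMNS = {"Variable", "Value", "Source", "Basis / If Wrong",
                    "Validated By", "Date"}

def audit_scenario_table(rows: list[dict]) -> list[str]:
    """Return governance violations for a parsed scenario table."""
    problems = []
    for i, row in enumerate(rows, start=1):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            problems.append(f"Row {i}: missing columns {sorted(missing)}")
            continue
        # Governance rule: every ASSUMED item needs a sign-off
        # before it enters planning.
        if row["Source"] == "ASSUMED" and not row["Validated By"].strip():
            problems.append(f"Row {i}: ASSUMED value '{row['Variable']}' "
                            f"has no validator sign-off")
    return problems

# Hypothetical usage with one unsigned assumption:
rows = [
    {"Variable": "Annual growth", "Value": "15%", "Source": "ASSUMED",
     "Basis / If Wrong": "Revenue halves at 7%", "Validated By": "",
     "Date": "2026-01-15"},
]
for problem in audit_scenario_table(rows):
    print(problem)
```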
Level 2: Review Workflow
For organizations with more structured processes, integrate tagging into the review cycle:
Step 1 — Generation: AI produces tagged output using the three-rule prompt.
Step 2 — Assumption Review: A domain expert reviews all ASSUMED and PROJECTED items. Each gets one of three dispositions: confirmed (reclassified to VERIFIED), challenged (sent for calibration), or accepted with risk (kept as ASSUMED with a documented rationale).
Step 3 — Gap Triage: All DATA GAP and ASSUMPTION GAP items are triaged: resolvable (assign someone to find the data), irreducible (the uncertainty is inherent — document it and plan around it), or deferred (not needed for this decision stage).
Step 4 — Decision Package: The final document separates “what we know” (VERIFIED), “what we believe” (ASSUMED, with calibration notes), and “what we don’t know” (remaining gaps). Decision-makers see all three.
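Where the workflow feeds a tracking system, the review vocabulary above is small enough to encode directly. A minimal sketch of the dispositions and triage categories as Python types (all names hypothetical):

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    CONFIRMED = "confirmed"           # reclassified to VERIFIED
    CHALLENGED = "challenged"         # sent for calibration
    ACCEPTED_WITH_RISK = "accepted"   # kept as ASSUMED, rationale documented

class GapTriage(Enum):
    RESOLVABLE = "resolvable"    # assign someone to find the data
    IRREDUCIBLE = "irreducible"  # inherent uncertainty; plan around it
    DEFERRED = "deferred"        # not needed at this decision stage

@dataclass
class AssumptionReview:
    claim: str
    disposition: Disposition
    reviewer: str
    rationale: str  # mandatory when disposition is ACCEPTED_WITH_RISK
```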
Level 3: System Prompt Standardization
If your organization uses AI across multiple teams, standardize the system prompt. Don’t rely on individual analysts remembering to apply the three rules. Embed the framework into every AI access point — whether that’s a shared Claude project, a custom GPT, an API wrapper, or an n8n workflow. The prompt becomes infrastructure, not personal practice.
For teams using Claude Projects or custom GPTs, the three-rule prompt goes into the project instructions or system message — it’s active for every conversation in that workspace without anyone needing to remember to include it.
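To make “prompt as infrastructure” concrete, here is a thin API wrapper sketched with the Anthropic Python SDK. The three-rule wording is abbreviated and the model name is a placeholder; treat both as assumptions to adapt, not as the canonical prompt:

```python
import anthropic

# The three-rule framework lives here, in version control and code review,
# not in any individual analyst's chat history.
THREE_RULE_SYSTEM_PROMPT = """\
Rule 1 (Force Blank): if data is missing, write [DATA GAP], never a guess.
Rule 2 (Penalize Guessing): tag every claim VERIFIED, ASSUMED, or PROJECTED.
Rule 3 (Show the Source): every VERIFIED claim cites where it comes from.
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def tagged_scenario(user_request: str) -> str:
    """Every call through this wrapper gets the framework, by construction."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; pin your org's model here
        max_tokens=2048,
        system=THREE_RULE_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": user_request}],
    )
    return response.content[0].text
```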
The Cultural Challenge
The hardest governance problem isn’t technical. It’s that tagging uncertainty feels like weakness. Presenting a scenario full of ASSUMED and DATA GAP labels to a board looks less confident than presenting clean numbers. The organizational response to this must be explicit: a tagged scenario is not an incomplete scenario — it’s an honest one. The clean numbers were never clean; they just hid where the guesses were.
This is exactly what Bent Flyvbjerg’s decades of research on megaproject failures shows: the projects that went most catastrophically over budget weren’t the ones with the most uncertainty — they were the ones where the uncertainty was hidden. Transparency about assumptions is a risk reduction strategy, not an admission of weakness.
3. Trade-offs: When Tags Become Noise
A document where every sentence carries a provenance label is exhausting to read. The framework creates real cognitive overhead, and pretending otherwise is dishonest. The question isn’t whether there’s a cost — there is — but how to manage it.
The Overload Problem
Consider a 20-variable scenario with source tags, calibration notes, and “if wrong” annotations on every ASSUMED item. For the analyst who built it, this is valuable — they can see exactly where to direct attention. For the executive who needs to make a decision based on it, it’s a wall of qualifications that obscures the bottom line.
Both perspectives are legitimate. The solution isn’t to choose one over the other — it’s to serve both with different views of the same underlying data.
Solution: Layered Presentation
The tagged scenario should exist in at least two layers:
Layer 1 — Decision Summary: One page. Key conclusions, key numbers, key risks. No tags in the running text. Instead, a single “Confidence Profile” section at the bottom:
> This scenario rests on 14 verified data points, 6 stated assumptions, and 3 projections. Two data gaps remain unresolved (market-specific CAC, regulatory timeline). The assumption with the highest downstream impact is [X] — if wrong by 50%, projected revenue shifts from €310K to €180K.
That’s the executive view: how much of this is solid, how much is uncertain, and what specifically could break it.
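Because the profile is pure bookkeeping over the tagged items, it can be generated rather than handwritten. A minimal sketch in Python (the item list and its structure are hypothetical):

```python
from collections import Counter

# Hypothetical tagged output: one (tag, claim) pair per scenario item.
items = [
    ("VERIFIED", "2025 market size: €10M (Gartner)"),
    ("ASSUMED", "15% annual growth"),
    ("PROJECTED", "Year 1 revenue: €310K"),
    ("DATA GAP", "market-specific CAC"),
]

counts = Counter(tag for tag, _ in items)
gaps = [claim for tag, claim in items if tag == "DATA GAP"]

print(f"This scenario rests on {counts['VERIFIED']} verified data points, "
      f"{counts['ASSUMED']} stated assumptions, and "
      f"{counts['PROJECTED']} projections.")
if gaps:
    print(f"{len(gaps)} data gap(s) remain unresolved: {', '.join(gaps)}.")
```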
Layer 2 — Full Tagged Analysis: The complete output with all provenance tags, calibration notes, gap labels, and sensitivity analysis. This is the working document. It’s what the analyst uses, what the reviewer signs off on, and what gets archived. It’s the audit trail.
The relationship between the layers is like the relationship between a financial statement and its footnotes. The statement tells you the numbers; the footnotes tell you what the numbers rest on. Both exist. Different readers use different layers.
How Non-Experts Read Tags
For teams where not everyone is fluent in the tagging system, simplify the visual language. Three visual cues work better than three acronyms:
- VERIFIED → presented as normal text (no special marking needed — it’s the baseline)
- ASSUMED → highlighted or marked with a distinct visual cue (e.g., italic, a colored sidebar, or a simple ⚠ symbol)
- DATA GAP → presented as an explicit blank with a brief note
The core message non-experts need to internalize is simple: unmarked text is grounded; marked text is uncertain; blanks are honest. That’s a ten-second briefing. If someone can read a weather forecast that distinguishes “current temperature” from “tomorrow’s forecast,” they can read a tagged scenario.
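Where outputs are rendered programmatically, that briefing reduces to a one-screen mapping. A minimal sketch producing markdown-style cues (the symbols follow the list above; the function itself is hypothetical):

```python
# Render a tagged item for non-expert readers: visual cues instead of
# acronyms. The mapping follows the list above; the syntax is illustrative.
def render(tag: str, text: str) -> str:
    if tag == "VERIFIED":
        return text                       # baseline: plain text, no marking
    if tag in ("ASSUMED", "PROJECTED"):
        return f"⚠ *{text}*"              # visual cue: symbol plus italics
    if tag == "DATA GAP":
        return f"____ (no data: {text})"  # an explicit, honest blank
    raise ValueError(f"unknown tag: {tag}")

print(render("VERIFIED", "2025 market size: €10M"))
print(render("ASSUMED", "15% annual growth"))
print(render("DATA GAP", "market-specific CAC"))
```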
When to Reduce Tagging
Not every use case needs full provenance. The right level of tagging depends on the stakes:
| Stakes | Tagging Level | Example |
|---|---|---|
| Low | Tag only gaps | Internal brainstorming, early-stage ideation |
| Medium | Tag gaps + assumptions | Project proposals, budget drafts, team planning |
| High | Full tagging + calibration | Board presentations, investment decisions, regulatory submissions |
For a casual strategy brainstorm, requiring VERIFIED/ASSUMED/PROJECTED on every line would kill the creative flow. For a €2M investment decision going to the board, anything less than full tagging is irresponsible. Match the framework’s intensity to the decision’s consequences.
The Framework Maturity Model
Putting it all together, organizations adopting the three-rule framework can think of implementation in three stages:
Stage 1 — Individual Practice: One person uses the three-rule prompt in their own AI conversations. Tagged output stays in their workspace. Value: personal quality control. Cost: near zero.
Stage 2 — Team Standard: The prompt is embedded in shared AI workspaces (Claude Projects, custom GPTs). Templates enforce the table format. Assumptions get informal peer review. Value: consistent quality across a team. Cost: template creation, brief training.
Stage 3 — Organizational Governance: The framework is integrated into planning processes. Assumption review is a formal workflow step. Calibration (reference class, sensitivity, pre-mortem) is standard practice. Decision packages separate confidence layers. Value: systematic risk reduction. Cost: process change, cultural shift.
Most teams should start at Stage 1 and see results immediately. Whether to progress to Stage 2 or 3 depends on how much is at stake when AI-generated scenarios inform real decisions. The higher the stakes, the more the governance investment pays for itself.
Limitations and Known Gaps
The three-rule framework is a practitioner pattern, not a peer-reviewed method. It deserves the same critical scrutiny it asks users to apply to AI output. Here are the things it doesn’t solve — and the ways it can be misused.
1. Not empirically validated
There are no controlled experiments, before/after error-rate measurements, or user studies behind this framework. Research shows that provenance tagging and structured prompting can reduce hallucinations — sometimes significantly — but this has been demonstrated for specific tagging schemes under controlled conditions, not for the exact VERIFIED / ASSUMED / PROJECTED pattern proposed here. Treat the framework as an engineering heuristic that probably helps in many cases, not as something whose effectiveness you can assume without measuring on your own use cases. If you adopt it, track whether it actually improves your outputs.
2. The prompt is one lever, not the only lever
The framework leans heavily on prompt design as the primary mechanism for controlling model behavior. In practice, prompts can reduce hallucinations, but models still violate instructions under pressure — especially when optimization, reward models, or fine-tuning push toward fluency and completeness. For production systems, prompt-level rules should be complemented by architecture-level controls: retrieval-augmented generation (RAG) to ground outputs in actual data, rule-based filters to catch unsupported claims, abstention mechanisms that refuse to generate when confidence is low, and human review workflows. The prompt is the user-accessible lever. It is not the only lever, and in high-stakes deployments, relying on it alone is fragile.
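For a flavor of what a rule-based filter can look like, here is a minimal sketch: it flags numeric claims that appear outside any provenance-tagged span. The tag syntax and regexes are illustrative, not production patterns:

```python
import re

# Tags as the framework writes them, e.g. "(ASSUMED: market grows 15% annually)"
TAGGED_SPAN = re.compile(r"\((VERIFIED|ASSUMED|PROJECTED)[^)]*\)")
NUMBER = re.compile(r"\d[\d.,]*\s*(%|€|K|M)?")

def untagged_numbers(text: str) -> list[str]:
    """Numeric claims that appear outside any provenance-tagged span."""
    # Blank out tagged spans, then look for numbers in what remains.
    stripped = TAGGED_SPAN.sub(" ", text)
    return [m.group(0).strip() for m in NUMBER.finditer(stripped)]

output = ("(VERIFIED: 2025 market size €10M, Gartner) "
          "Projected revenue lands at €310K with 40% margins.")
for claim in untagged_numbers(output):
    print(f"Unsupported numeric claim outside any tag: {claim!r}")
```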
3. VERIFIED means “sourced,” not “infallible”
The framework’s tag hierarchy implies a confidence gradient: VERIFIED = solid, ASSUMED = fragile, PROJECTED = derived. But “verified” data can itself embed significant problems. Historical figures can reflect measurement error. Market data can encode vendor assumptions or sampling bias. Financial actuals can be non-stationary — a Q4 2024 revenue figure may be misleading for Q4 2026 projections in a post-shock market. The framework tracks provenance (where did this number come from?) but not quality (is this number still a reliable guide?). Users should resist the temptation to treat VERIFIED as “settled.” Data fundamentalism — assuming that sourced data is correct data — is a different failure mode than hallucination, but it can drive equally bad decisions.
4. Tags expose inputs, not structural validity
A scenario can be perfectly tagged — every number sourced, every assumption labeled, every gap flagged — and still be fundamentally misleading because the underlying causal model is wrong. Treating customer churn as independent of pricing. Ignoring feedback loops between marketing spend and brand perception. Assuming linear scaling where the real dynamics are nonlinear. The framework catches factual hallucinations (wrong inputs) but not structural errors (wrong model of how the inputs relate). The calibration methods described earlier — sensitivity testing, pre-mortem — partially help by stress-testing individual assumptions, but they test assumptions in isolation, not the relationships between them. ABP and scenario planning literature emphasize structural thinking, exploration of alternative logics, and the “world of no broken assumptions” as a reference scenario. This framework focuses on tagging and gap flagging, not on the quality of the mental model. A well-tagged bad model is still a bad model.
5. Labels don’t expose whose assumptions are being encoded
The categories VERIFIED / ASSUMED / PROJECTED can give a veneer of objectivity that hides power dynamics. Management may encode optimistic growth targets as ASSUMED without revealing the political pressure behind the number. A vendor’s market-size estimate tagged as VERIFIED may embed that vendor’s commercial interests. An analyst’s PROJECTED calculation may use a model that reflects institutional bias toward certain outcomes. The framework does not require the model (or the human) to reveal whose assumptions are being encoded or how they were generated. In organizational contexts, this matters: the question isn’t just “is this sourced or assumed?” but “whose interests shaped this assumption?” The framework doesn’t answer that question — and claiming it does would be a form of the same false confidence it’s designed to prevent.
6. Too many gaps can paralyze decisions
The framework explicitly penalizes guessing and encourages the model to flag [DATA GAP] and [ASSUMPTION GAP] at every opportunity. In high-uncertainty domains — which is most strategic planning — this can produce outputs dominated by gaps and caveats. ABP literature stresses that some assumptions must be made “for planning purposes” or planning cannot proceed. The stakes-based scaling table earlier in this post partially addresses this (brainstorming gets light tagging, board decisions get full tagging), but the underlying tension remains: the framework promotes a norm where “silent invention is worse than flagged uncertainty” without explicitly discussing when too much uncertainty signaling undermines decision-making. In a corporate context, if every plan is filled with prominent warnings, managers may either ignore the warnings as boilerplate or become overly cautious and delay needed decisions. Match the framework’s intensity not only to the decision’s stakes but also to the organization’s risk appetite and decision timeline.
7. Domain-specific adaptation required
The series claims the framework is portable across domains — document extraction, worldbuilding, business scenarios, cybersecurity, scientific writing. But those domains have very different stakes, epistemic structures, and regulatory environments. In medicine, tagging something as ASSUMED is far from sufficient to make it safe — existing guidance requires retrieval-augmented generation, external verification, and human oversight. In legal work, a custom label scheme might conflict with established citation standards or be misinterpreted by courts. In regulated industries, compliance frameworks may have their own provenance requirements that the three-rule labels don’t map onto. The general pattern provides a starting structure; domain-specific adaptation and validation are required before relying on it in regulated or high-stakes environments. The domain-specific posts in this series (cybersecurity, scientific writing) are first steps in that adaptation, not finished products.
These limitations don’t invalidate the framework — they bound it. The three rules are a significant improvement over the default (no provenance, no gap flagging, no penalty for guessing), but they are not a complete solution. They’re the beginning of a practice, not the end of one.
Sources and Further Reading
- Kahneman, D. & Tversky, A. (1979): “Intuitive Prediction: Biases and Corrective Procedures.” TIMS Studies in Management Science. The foundational work on the planning fallacy and the inside view vs. outside view distinction.
- Flyvbjerg, B. (2008): “Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice.” The definitive paper on using the outside view to correct planning forecasts.
- Cantarelli, C.C. et al. (November 2025): “Reference Class Forecasting: Promises, Problems, and a Research Agenda Moving Forward.” Systematic review of RCF covering 2001–2025.
- Klein, G. (2007): “Performing a Project Premortem.” Harvard Business Review. The pre-mortem technique for surfacing failure modes before they occur.
- Dewar, J.A. (2002): “Assumption-Based Planning: A Tool for Reducing Avoidable Surprises.” Cambridge University Press / RAND.
- Lambdin, C. (2024): “Assumption-Based Planning.” On the “ghost scenario” and load-bearing assumptions.
- Ramírez, R. et al. (December 2025): “A Faster Way to Build Future Scenarios.” MIT Sloan. On AI-assisted scenario planning and surfacing unexamined assumptions.
- Previous posts in this series:
  - Post 1: AI Honesty
  - Post 2: Worldbuilding
  - Post 3: Scenario Building

