{"id":187,"date":"2026-03-31T17:42:26","date_gmt":"2026-03-31T16:42:26","guid":{"rendered":"https:\/\/knowtech.waszmann.com\/?p=187"},"modified":"2026-04-17T06:14:40","modified_gmt":"2026-04-17T05:14:40","slug":"three-prompt-rules-that-stop-ai-from-guessing-and-the-science-behind-them","status":"publish","type":"post","link":"https:\/\/knowtech.waszmann.com\/?p=187&lang=en","title":{"rendered":"Three Prompt Rules That Stop AI From Guessing \u2014 And the Science Behind Them"},"content":{"rendered":"<p>Every new model generation arrives with fanfare: better benchmarks, higher accuracy scores, more impressive demos. GPT-5 reasons through complex problems. Claude plans ahead when writing poetry. Gemini processes images and video with startling fluency. The intelligence curve keeps climbing.<\/p>\n<p>But there&#8217;s a second curve that rarely makes the keynote slides \u2014 the honesty curve. And it&#8217;s barely moved.<\/p>\n<p>This isn&#8217;t a vague philosophical complaint. It&#8217;s a structural problem baked into how these models are trained, evaluated, and deployed. And it&#8217;s one that hits hardest in exactly the kind of work where people increasingly rely on AI: extracting data from contracts, parsing invoices, summarizing meeting notes, building CRM records from messy inputs.<\/p>\n<p>This post unpacks why the intelligence-honesty gap exists, what the latest research tells us about its causes, and \u2014 most practically \u2014 three prompt rules you can apply today to force AI to be honest about what it doesn&#8217;t know.<\/p>\n<hr \/>\n<h2>The Gap: Intelligence vs. Honesty<\/h2>\n<p>When we say a model &#8220;got smarter,&#8221; we usually mean it scores higher on benchmarks \u2014 math competitions, coding challenges, multi-step reasoning tasks. These are real improvements. But benchmark scores measure a model&#8217;s ability to produce correct answers. 
They don&#8217;t measure a model&#8217;s willingness to say &#8220;I don&#8217;t know.&#8221;<\/p>\n<p>In fact, the incentive structure actively punishes honesty.<\/p>\n<p>In September 2025, OpenAI published a\u00a0<a href=\"https:\/\/openai.com\/index\/why-language-models-hallucinate\/\" target=\"_blank\" rel=\"noopener\">research paper<\/a>\u00a0that made this problem precise. The team \u2014 including researchers from Georgia Tech \u2014 examined major AI benchmarks and found that the vast majority use binary grading: either the answer is correct and gets a point, or it&#8217;s wrong and gets zero. Crucially, abstaining \u2014 saying &#8220;I don&#8217;t know&#8221; \u2014 also gets zero. The mathematical consequence is straightforward: guessing always has a higher expected score than abstaining. A model that bluffs on every uncertain question will rank higher than one that honestly declines.<\/p>\n<p>OpenAI&#8217;s own blog post put it plainly: the situation is like a multiple-choice test where leaving an answer blank guarantees a zero, but guessing at least gives you a chance. Under those rules, the rational strategy is to always guess \u2014 even when you have no idea. And that&#8217;s exactly what the models learn to do.<\/p>\n<p>The paper demonstrated this with a striking example: when asked for the PhD dissertation title of one of its own co-authors, a widely-used model confidently produced three different titles across three attempts. All three were wrong. It did the same with his birthday \u2014 three dates, all incorrect, all delivered with unwavering confidence.<\/p>\n<p>This isn&#8217;t a bug that can be patched. It&#8217;s the natural outcome of optimizing for accuracy-only metrics. As the OpenAI researchers argue, the mainstream benchmarks and leaderboards need to be redesigned to penalize confident errors more heavily than uncertainty. 
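<\/p>\n<p>The arithmetic is easy to verify. Here is a minimal sketch, with illustrative numbers rather than figures from the paper, of the expected score for guessing versus abstaining under both grading schemes:<\/p>

```python
# Expected score of guessing vs. abstaining, for a model that is
# only p-confident its answer is correct.
def expected_scores(p, wrong_penalty):
    # A correct answer earns 1 point, a wrong answer earns
    # wrong_penalty (0 under binary grading), and abstaining
    # always earns 0.
    guess = p * 1.0 + (1.0 - p) * wrong_penalty
    abstain = 0.0
    return guess, abstain

# Binary grading: a wrong answer costs nothing beyond the missed point.
print(expected_scores(p=0.25, wrong_penalty=0.0))   # (0.25, 0.0)

# Penalized grading, echoing the '3x worse' idea: a wrong answer costs -3.
print(expected_scores(p=0.25, wrong_penalty=-3.0))  # (-2.0, 0.0)
```

<p>Under binary grading the guess can never score worse in expectation than a blank, which is exactly the incentive problem the paper describes; the crossover appears only once wrong answers carry a penalty.<\/p>\n<p>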
Until that happens, every model that climbs the leaderboard does so in part by learning to bluff better.<\/p>\n<hr \/>\n<h2>Why Models Confabulate: Insights from Interpretability Research<\/h2>\n<p>The OpenAI paper explains the\u00a0<em>incentive<\/em>\u00a0problem. But what happens mechanically inside the model when it makes something up?<\/p>\n<p>Anthropic&#8217;s interpretability research \u2014 published in March 2025 under the title &#8220;<a href=\"https:\/\/www.anthropic.com\/research\/tracing-thoughts-language-model\" target=\"_blank\" rel=\"noopener\">Tracing the Thoughts of a Large Language Model<\/a>&#8221; \u2014 provides some of the most detailed answers we have. Using what they describe as a &#8220;microscope&#8221; for AI, Anthropic&#8217;s team traced the internal circuits that activate when Claude processes a question. It&#8217;s worth noting that these findings are specific to Claude 3.5 Haiku \u2014 other model families may handle uncertainty through different internal mechanisms \u2014 but the patterns are likely general enough to be instructive.<\/p>\n<p>One of their most revealing discoveries involves what we might call a default refusal mechanism. In Claude, refusing to answer is actually the\u00a0<em>default<\/em>\u00a0behavior: the researchers found a circuit that is &#8220;on&#8221; by default and causes the model to state it has insufficient information. But when the model recognizes a &#8220;known entity&#8221; \u2014 say, Michael Jordan the basketball player \u2014 a competing set of features fires up and\u00a0<em>suppresses<\/em>\u00a0this default circuit, allowing the model to respond.<\/p>\n<p>The problem arises when this mechanism misfires. If the model recognizes a name but doesn&#8217;t actually know the relevant facts, the &#8220;known entity&#8221; signal can still override the &#8220;I don&#8217;t know&#8221; circuit. The result: a confident, detailed, completely fabricated answer. 
In one experiment, the researchers used a made-up person named Michael Batkin \u2014 someone unknown to the model, who by default triggered a refusal. But when they artificially activated the &#8220;known entity&#8221; features or inhibited the &#8220;can&#8217;t answer&#8221; features, Claude promptly \u2014 and consistently \u2014 hallucinated that Batkin was famous for playing chess.<\/p>\n<p>Even more unsettling: Anthropic found evidence that when Claude can&#8217;t easily compute an answer (say, the cosine of a large number), it sometimes engages in what philosopher Harry Frankfurt would call\u00a0<em>bullshitting<\/em>\u00a0\u2014 producing an answer without any internal evidence of the calculation actually occurring. Though the model claimed to have run the math, the interpretability tools revealed no trace of any computation. When given a hint about what the answer should be, Claude worked\u00a0<em>backwards<\/em>, constructing plausible-looking intermediate steps that led to the hinted answer \u2014 a textbook case of motivated reasoning.<\/p>\n<p>These findings matter because they show that the honesty problem isn&#8217;t just about training incentives. The models have internal mechanisms that are\u00a0<em>supposed<\/em>\u00a0to catch uncertainty \u2014 but those mechanisms can be overridden by other pressures, including the drive toward grammatical coherence and the pattern-matching instinct to fill in gaps.<\/p>\n<hr \/>\n<h2>Automation Bias: Why This Matters More Than You Think<\/h2>\n<p>All of this would be merely academic if people treated AI output with appropriate skepticism. They don&#8217;t.<\/p>\n<p>Automation bias \u2014 the tendency to over-rely on automated recommendations \u2014 is one of the most thoroughly documented phenomena in human-computer interaction research. 
A\u00a0<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s00146-025-02422-7\" target=\"_blank\" rel=\"noopener\">2025 systematic review<\/a>\u00a0published in\u00a0<em>AI &amp; Society<\/em>\u00a0analyzed 35 peer-reviewed studies spanning healthcare, finance, national security, and public administration. The pattern was consistent across domains: when an AI system delivers a confident answer, people accept it. They check less. They override their own judgment.<\/p>\n<p>A\u00a0<a href=\"https:\/\/www.medrxiv.org\/content\/10.1101\/2025.08.23.25334280v1\" target=\"_blank\" rel=\"noopener\">randomized clinical trial<\/a>\u00a0conducted with AI-trained physicians in Pakistan (published as a preprint in August 2025) made the dynamic especially clear. Even doctors who had completed 20 hours of AI-literacy training \u2014 including instruction on how to critically evaluate AI output \u2014 were vulnerable to automation bias when exposed to erroneous LLM recommendations. The training helped, but it didn&#8217;t eliminate the problem. Confident-sounding AI output has a gravitational pull that&#8217;s difficult to resist, even when you know to look for errors.<\/p>\n<p>The real-world consequences are already visible. In February 2024, Air Canada was ordered to pay damages to a customer after a support chatbot \u2014 not a large language model, but an AI system nonetheless \u2014 hallucinated a bereavement fare policy that didn&#8217;t exist. The chatbot confidently told the customer they could retroactively request a discount within 90 days of purchase. The actual policy allowed no such thing. But the system stated it with such authority that the customer relied on it to make a financial decision. The underlying technology differed from today&#8217;s LLMs, but the dynamic was identical: confident AI output, uncritical human acceptance.<\/p>\n<p>In an operations context, the failure modes are subtler but no less damaging. 
Consider a contract with payment terms mentioned on page 8 and page 14 \u2014 and the two pages say different things. A human reviewer might catch the discrepancy. An AI, asked to extract the payment terms, will pick one and move on. It won&#8217;t mention the conflict. It won&#8217;t flag the ambiguity. It will fill the cell in your spreadsheet with &#8220;Net 30&#8221; and give you no indication that page 14 says &#8220;Net 45.&#8221;<\/p>\n<p>Meeting notes are another minefield. &#8220;Let&#8217;s circle back next week&#8221; becomes a specific date and a named owner in the AI&#8217;s summary \u2014 details that nobody actually stated, but that the model invented to produce a clean, actionable output.<\/p>\n<p>The pattern is the same across invoices, insurance documents, lease agreements, vendor scoring, CRM data entry: wherever AI is used to extract structured information from messy sources, the model&#8217;s instinct to\u00a0<em>fill every field<\/em>\u00a0works directly against the user&#8217;s need to know which fields are uncertain.<\/p>\n<hr \/>\n<h2>Three Prompt Rules That Change the Incentive<\/h2>\n<p>These three problems \u2014 training incentives that reward guessing, internal mechanisms that can override uncertainty detection, and human psychology that accepts confident output at face value \u2014 come from different research streams. But they converge on the same practical conclusion: by default, AI will guess rather than admit ignorance, and people will trust the guess.<\/p>\n<p>You can&#8217;t fix the training pipeline. You can&#8217;t redesign the benchmarks. But you can change the local incentive structure inside the conversation. The following three rules \u2014 adapted from a practical\u00a0<a href=\"https:\/\/d-squared70.github.io\/ChatGPT-and-Claude-Got-Smarter.-Not-More-Honest.\/\" target=\"_blank\" rel=\"noopener\">framework by D-Squared<\/a>\u00a0\u2014 do exactly that. 
They work because they explicitly reverse the default dynamic: instead of rewarding completeness, they reward honesty about uncertainty. Note that the effectiveness of these techniques may vary across model families \u2014 they&#8217;ve been tested primarily with ChatGPT and Claude, and other models may respond differently.<\/p>\n<h3>Rule 1: Force Blank + Explain<\/h3>\n<p>The single most effective change you can make is to explicitly instruct the model to leave fields blank when the data is ambiguous, missing, or unclear \u2014 and to explain why.<\/p>\n<p>Without this rule, every field gets filled. With this rule, the model produces output like:<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Value<\/th>\n<th>Reason<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Payment Terms<\/td>\n<td>\u2014 BLANK<\/td>\n<td>Pages 8 and 14 state different terms \u2014 net 30 vs net 45<\/td>\n<\/tr>\n<tr>\n<td>Renewal Date<\/td>\n<td>Jan 15, 2027<\/td>\n<td>\u2014<\/td>\n<\/tr>\n<tr>\n<td>Liability Cap<\/td>\n<td>\u2014 BLANK<\/td>\n<td>References &#8220;Exhibit B&#8221; \u2014 not included in document<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The blank fields are where the value is. They tell you exactly where to focus your attention. They&#8217;re the model admitting &#8220;I&#8217;m not sure&#8221; \u2014 something it would never do without explicit instruction.<\/p>\n<p><strong>The prompt language:<\/strong><\/p>\n<blockquote><p><em>Extract the following fields from this document into a table. Rules: Only extract values that are explicitly stated in the document. When a value is ambiguous, missing, or unclear, leave the field BLANK. Add a column labeled &#8220;Reason.&#8221; Next to every blank field, include a one-sentence explanation of why you left it blank. Base every value on what the document actually says. 
Quote or reference the specific section you pulled it from.<\/em><\/p><\/blockquote>\n<p>One way to think about why this works is through the lens of Anthropic&#8217;s interpretability findings. The model\u00a0<em>has<\/em>\u00a0internal mechanisms for recognizing uncertainty \u2014 the default refusal behavior described above. But those mechanisms get overridden by the pressure to produce complete, coherent output. The &#8220;Force Blank&#8221; instruction may effectively give the uncertainty pathway permission to activate, rather than being suppressed by the completion instinct. We don&#8217;t know for certain that this is the internal mechanism at work \u2014 but the practical result is consistent and reliable.<\/p>\n<h3>Rule 2: Penalize Guessing<\/h3>\n<p>By default, from the model&#8217;s perspective, a wrong answer and a blank answer carry equal weight \u2014 neither earns praise, neither triggers correction. The model has no reason to prefer one over the other, so it defaults to guessing (which at least has a chance of being right).<\/p>\n<p>Rule 2 changes this calculus with a single sentence:<\/p>\n<blockquote><p><em>A wrong answer is 3\u00d7 worse than a blank. When in doubt, leave it blank.<\/em><\/p><\/blockquote>\n<p>This mirrors the scoring reform that OpenAI&#8217;s September 2025 paper advocates at the benchmark level. The researchers propose that evaluation systems should award points for correct answers, penalize wrong answers more heavily than abstentions, and give partial credit for appropriate expressions of uncertainty. They note that some standardized human exams have used this approach for decades \u2014 penalizing wrong guesses more heavily than skipped questions \u2014 precisely to discourage blind guessing.<\/p>\n<p>You can&#8217;t change the benchmark. But you can embed the same incentive structure in your prompt. 
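<\/p>\n<p>And if you grade your own extraction runs, the same asymmetry can be applied when scoring outputs. A minimal sketch; the blank token and the weights here are illustrative assumptions, not part of the framework:<\/p>

```python
# Score one extracted field: reward correct values, tolerate blanks,
# and charge a premium for confident misses.
def score_field(value, truth, blank_token='BLANK', wrong_cost=3.0):
    if value == blank_token:
        return 0.0                      # abstaining is free
    return 1.0 if value == truth else -wrong_cost

# A run that bluffs on an uncertain field now ranks below one that
# leaves it blank.
guessing_run = [score_field('Net 30', 'Net 45'),         # confident miss
                score_field('Jan 15, 2027', 'Jan 15, 2027')]
honest_run = [score_field('BLANK', 'Net 45'),            # honest blank
              score_field('Jan 15, 2027', 'Jan 15, 2027')]
print(sum(guessing_run), sum(honest_run))  # -2.0 1.0
```

<p>The exact numbers matter less than the ordering they produce: bluffing drops a run below honest abstention, which is the whole point of the rule.<\/p>\n<p>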
The 3\u00d7 multiplier is arbitrary \u2014 pick any number that makes the model understand that silence is preferable to fabrication. The key insight is that you need to\u00a0<em>say it explicitly<\/em>. The model won&#8217;t infer this preference on its own.<\/p>\n<h3>Rule 3: Show the Source<\/h3>\n<p>Even models that are told to &#8220;extract only&#8221; will drift toward inference. They&#8217;ll compute a renewal date from a start date and term length. They&#8217;ll estimate a total from line items. They&#8217;ll infer a contact person from an email signature. These aren&#8217;t necessarily wrong \u2014 but they&#8217;re not extraction, and the user needs to know the difference.<\/p>\n<p>Rule 3 requires the model to label every value as EXTRACTED (directly stated in the document) or INFERRED (derived, calculated, or interpreted), with an explanation for every inferred value.<\/p>\n<p><strong>The prompt language:<\/strong><\/p>\n<blockquote><p><em>For each field, add a column called &#8220;Source.&#8221; Mark each value as one of: EXTRACTED \u2014 directly stated in the document, exact match. INFERRED \u2014 derived from context, calculated, or interpreted. For every INFERRED field, include a one-sentence explanation of what you based it on.<\/em><\/p><\/blockquote>\n<p>The output looks like this:<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Value<\/th>\n<th>Source<\/th>\n<th>Evidence<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Start Date<\/td>\n<td>Jan 15, 2025<\/td>\n<td>EXTRACTED<\/td>\n<td>Section 2.1, paragraph 1<\/td>\n<\/tr>\n<tr>\n<td>Term Length<\/td>\n<td>24 months<\/td>\n<td>EXTRACTED<\/td>\n<td>Section 2.1, paragraph 2<\/td>\n<\/tr>\n<tr>\n<td>Renewal Date<\/td>\n<td>Jan 15, 2027<\/td>\n<td>INFERRED<\/td>\n<td>Calculated 24 months from start date. 
Check Section 8 \u2014 early termination clause may alter this.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The EXTRACTED\/INFERRED distinction is a practical implementation of what hallucination researchers call &#8220;provenance tracking&#8221; \u2014 tying every claim back to its source. The model is perfectly capable of making this distinction; it just doesn&#8217;t bother unless you ask.<\/p>\n<hr \/>\n<h2>The Combined Prompt<\/h2>\n<p>All three rules work together. Here&#8217;s the complete version:<\/p>\n<blockquote><p><em>Extract the following fields from this document into a table.<\/em><\/p>\n<p><em>Rules:<\/em><\/p>\n<p><em>&#8211; Only extract values explicitly stated in the document.<\/em><\/p>\n<p><em>&#8211; When a value is ambiguous, missing, or unclear, leave the field BLANK.<\/em><\/p>\n<p><em>&#8211; A wrong answer is 3\u00d7 worse than a blank. When in doubt, leave it blank.<\/em><\/p>\n<p><em>&#8211; For each field with a value, add a &#8220;Source&#8221; column: EXTRACTED = directly stated, exact match. INFERRED = derived, calculated, or interpreted.<\/em><\/p>\n<p><em>&#8211; For every INFERRED field, add a one-sentence explanation.<\/em><\/p>\n<p><em>&#8211; For every BLANK field, add a row to a separate &#8220;Flags&#8221; table explaining why the value could not be extracted.<\/em><\/p><\/blockquote>\n<p>The workflow change this enables is significant. Instead of reviewing every extracted value (which nobody actually does), you review only the blanks and the inferred fields. Everything marked EXTRACTED with a section reference can be trusted at a higher confidence level. Your attention goes where it matters.<\/p>\n<hr \/>\n<h2>The Bigger Picture<\/h2>\n<p>These three rules are a stopgap. They work \u2014 sometimes remarkably well \u2014 but they&#8217;re fighting against the grain of how models are trained. 
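<\/p>\n<p>One practical consolation before turning to the deeper fixes: the combined prompt&#8217;s output is structured enough that the review triage it enables can be automated. A minimal sketch, assuming the model&#8217;s table has already been parsed into (field, value, source) tuples; that row format is an assumption, not part of the framework:<\/p>

```python
# Route extraction results for review: blanks and inferred values go
# to a human; extracted values with a source reference pass at a
# higher confidence level.
def triage(rows):
    needs_review, trusted = [], []
    for field, value, source in rows:
        if value == 'BLANK' or source == 'INFERRED':
            needs_review.append(field)
        else:
            trusted.append(field)
    return needs_review, trusted

rows = [
    ('Payment Terms', 'BLANK', None),             # pages 8 and 14 conflict
    ('Start Date', 'Jan 15, 2025', 'EXTRACTED'),
    ('Renewal Date', 'Jan 15, 2027', 'INFERRED'),
]
review, trusted = triage(rows)
print(review)   # ['Payment Terms', 'Renewal Date']
print(trusted)  # ['Start Date']
```

<p>Reviewers see only the short list; everything marked EXTRACTED carries a section reference they can spot-check. Still, this is triage, not a cure.<\/p>\n<p>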
The deeper fix requires changes at the infrastructure level.<\/p>\n<p>OpenAI&#8217;s hallucination paper calls for benchmark reform: scoring systems that reward calibrated uncertainty instead of confident guessing. Anthropic&#8217;s interpretability work points toward architectural insights \u2014 understanding the internal circuits well enough to strengthen the &#8220;I don&#8217;t know&#8221; pathway rather than relying on prompt-level patches.<\/p>\n<p>Perhaps the most structurally promising direction is OpenAI&#8217;s &#8220;<a href=\"https:\/\/openai.com\/index\/how-confessions-can-keep-language-models-honest\/\" target=\"_blank\" rel=\"noopener\">Confessions<\/a>&#8221; research (2025). Instead of relying on users to prompt honesty, the Confessions approach separates the honesty objective from the performance objective\u00a0<em>during training itself<\/em>. After producing a main answer \u2014 optimized for all the usual factors like correctness, style, and helpfulness \u2014 the model generates a separate &#8220;confession&#8221; report. This report is scored exclusively on honesty: Did the model flag its uncertainties? Did it acknowledge where it took shortcuts? Crucially, nothing in the confession is held against the main answer&#8217;s score, so the model has no incentive to hide its doubts. If this approach scales, it could move the honesty problem from something users have to prompt-engineer around to something the model handles natively.<\/p>\n<p>These are promising directions, but none of them are available to you today. What\u00a0<em>is<\/em>\u00a0available is the ability to change the local incentive structure in your prompts. Force blanks. Penalize guessing. Require source labels. These three rules won&#8217;t make AI honest by nature, but they create an environment where honesty is the path of least resistance \u2014 and that turns out to be surprisingly effective.<\/p>\n<p>The models are smart enough to know when they&#8217;re guessing. 
They just need permission to say so.<\/p>\n<hr \/>\n<h3>Sources and Further Reading<\/h3>\n<ul>\n<li><strong>OpenAI (September 2025):<\/strong>\u00a0&#8220;<a href=\"https:\/\/openai.com\/index\/why-language-models-hallucinate\/\" target=\"_blank\" rel=\"noopener\">Why Language Models Hallucinate<\/a>.&#8221; Research paper arguing that standard training and evaluation procedures reward guessing over acknowledging uncertainty.<\/li>\n<li><strong>OpenAI (2025):<\/strong>\u00a0&#8220;<a href=\"https:\/\/openai.com\/index\/how-confessions-can-keep-language-models-honest\/\" target=\"_blank\" rel=\"noopener\">How Confessions Can Keep Language Models Honest<\/a>.&#8221; Research on training models to produce separate honesty reports, scored independently from main responses.<\/li>\n<li><strong>Anthropic (March 2025):<\/strong>\u00a0&#8220;<a href=\"https:\/\/www.anthropic.com\/research\/tracing-thoughts-language-model\" target=\"_blank\" rel=\"noopener\">Tracing the Thoughts of a Large Language Model<\/a>.&#8221; Interpretability research revealing internal circuits for refusal, known-entity recognition, and hallucination in Claude 3.5 Haiku.<\/li>\n<li><strong>Anthropic (March 2025):<\/strong>\u00a0&#8220;<a href=\"https:\/\/transformer-circuits.pub\/2025\/attribution-graphs\/biology.html\" target=\"_blank\" rel=\"noopener\">On the Biology of a Large Language Model<\/a>.&#8221; Companion paper on circuit tracing and attribution graphs.<\/li>\n<li><strong>Carnat, I. (November 2024):<\/strong>\u00a0&#8220;Human, All Too Human: Accounting for Automation Bias in Generative Large Language Models.&#8221;\u00a0<em>International Data Privacy Law<\/em>, Vol. 14, Issue 4, pp. 299\u2013314.<\/li>\n<li><strong>Qazi, I.A. et al. 
(August 2025):<\/strong>\u00a0&#8220;<a href=\"https:\/\/www.medrxiv.org\/content\/10.1101\/2025.08.23.25334280v1\" target=\"_blank\" rel=\"noopener\">Automation Bias in LLM Assisted Diagnostic Reasoning Among AI-Trained Physicians<\/a>.&#8221; Randomized clinical trial, medRxiv preprint.<\/li>\n<li><strong>AI &amp; Society (July 2025):<\/strong>\u00a0&#8220;<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s00146-025-02422-7\" target=\"_blank\" rel=\"noopener\">Exploring Automation Bias in Human\u2013AI Collaboration<\/a>.&#8221; Systematic review of 35 studies.<\/li>\n<li><strong>D-Squared (2025):<\/strong>\u00a0&#8220;<a href=\"https:\/\/d-squared70.github.io\/ChatGPT-and-Claude-Got-Smarter.-Not-More-Honest.\/\" target=\"_blank\" rel=\"noopener\">ChatGPT and Claude Got Smarter. Not More Honest.<\/a>&#8221; Original slide deck presenting the three prompt rules.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Every new model generation arrives with fanfare: better benchmarks, higher accuracy scores, more impressive demos. GPT-5 reasons through complex problems. Claude plans ahead when writing poetry. Gemini processes images and video with startling fluency. The intelligence curve keeps climbing. But there&#8217;s a second curve that rarely makes the keynote slides \u2014 the honesty curve. 
And &hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[55,66],"tags":[57,59,61,63],"class_list":["post-187","post","type-post","status-publish","format-standard","hentry","category-ai-en","category-perspectives","tag-ai-en","tag-gpt-en","tag-halucination-en","tag-llm-en"],"_links":{"self":[{"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=\/wp\/v2\/posts\/187","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=187"}],"version-history":[{"count":3,"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=\/wp\/v2\/posts\/187\/revisions"}],"predecessor-version":[{"id":194,"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=\/wp\/v2\/posts\/187\/revisions\/194"}],"wp:attachment":[{"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=187"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=187"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/knowtech.waszmann.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=187"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}