The $440,000 "Hallucination" and the Singapore Standard


Welcome to The Edge

Marsham Edge's weekly Newsletter

Last year, a $440,000 consulting report from Deloitte for the Australian Government became a global cautionary tale. It was 237 pages of analysis that included fictitious academic papers, non-existent footnotes, and a fabricated quote from a Federal Court judge.

The media called it an AI "hallucination." But it’s time to call it what it actually is.

It’s Not a Hallucination: It’s "Probabilistic Completion"

The word "hallucination" is dangerously misleading. It implies a biological glitch. In truth, LLMs are massive statistical engines performing Probabilistic Completion.

When you ask a model for a legal citation, it isn’t "remembering" a book. It is calculating: "Statistically, what sequence of characters looks most like a valid case citation?" If the model isn’t grounded in a verified database, it will simply invent the most "probable" lie. It’s not a mind playing tricks; it’s a calculator performing the wrong operation at massive scale, while trying its hardest to please you.
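The mechanics can be sketched in a few lines. This is a deliberately toy illustration, not how any real model is implemented: the token strings and probabilities below are invented for the example, and a real LLM scores an entire vocabulary with a neural network. The point is that the selection step contains no truth check of any kind.

```python
# Hypothetical next-token probabilities after the prompt
# "The leading authority on this point is ":
next_token_probs = {
    "Smith":  0.31,   # a plausible-looking surname, not a real case
    "Jones":  0.24,
    "[2019]": 0.12,
    "the":    0.08,
}

def complete(probs: dict) -> str:
    """Pick the statistically most probable continuation.

    Note what is absent: there is no lookup against a register of real
    cases, no notion of "true" or "false" -- only likelihood.
    """
    return max(probs, key=probs.get)

print(complete(next_token_probs))  # -> "Smith"
```

Run repeatedly with different prompts and the model will keep emitting whatever *looks* most like a citation, which is exactly how a fabricated case name ends up formatted perfectly.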

The Rise of "Invented Jurisprudence"

This creates a massive liability for the legal and corporate sectors. We are seeing the rise of Invented Jurisprudence, where AI mirrors the formatting and logic of real law so perfectly that it passes a cursory glance.

In the Ayinde v. London Borough of Haringey 2025 case, the court was forced to strike out entirely fictitious case law generated by an AI. This isn’t just a technical quirk; it’s a professional breach of the duty of diligence.


The Architecture of Trust: A 4-Layer Framework

To align with the Singapore Model AI Governance Framework (2026), we need to move from "checking the work" to "architecting the output." Here is how to build a Human-in-the-Loop (HITL) system that works:

  1. The Data Layer (Grounding): Never let an LLM "guess." Use Retrieval-Augmented Generation (RAG) to force the AI to pull only from your verified "Gold Standard" documents.
  2. The Model Layer (Validation): Set strict instructions. The model must cite sources and, crucially, be programmed to state "I don't know" if the evidence is missing from the provided data.
  3. The Workflow Layer (Routing): Establish confidence thresholds. If the AI’s internal confidence score is below 90%, the output should be automatically routed to a human expert rather than straight into the final report.
  4. The Human Layer (Critical Review): Humans should not "rubber stamp." We must focus judgment on high-stakes, nuanced decisions where statistical probability fails.
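The routing logic in layers 2 and 3 can be expressed as a simple gate. This is a minimal sketch under stated assumptions: the function names, the `sources` list, and the confidence score are all hypothetical placeholders for whatever your RAG pipeline actually returns, not a reference to any specific product or API.

```python
# Sketch of the Model-Layer and Workflow-Layer rules combined.
# Assumes an upstream RAG step that returns (answer, confidence, sources);
# all names here are illustrative, not a real library.

CONFIDENCE_THRESHOLD = 0.90  # the 90% workflow-layer cutoff

def route(answer: str, confidence: float, sources: list) -> str:
    """Decide where a model output goes next."""
    if not sources:
        # Model-layer rule: no grounding evidence in the verified
        # "Gold Standard" documents -> the answer is treated as
        # "I don't know" and never reaches the report.
        return "human_review"
    if confidence < CONFIDENCE_THRESHOLD:
        # Workflow-layer rule: low confidence is routed to an expert.
        return "human_review"
    return "draft_report"

print(route("Per clause 4.2 ...", 0.95, ["gold_doc_12"]))  # -> draft_report
print(route("Smith v Jones holds ...", 0.95, []))          # -> human_review
print(route("Summary ...", 0.72, ["gold_doc_3"]))          # -> human_review
```

The design choice that matters is the default: anything that fails a check falls *toward* the human, never toward the report.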

The Bottom Line

In high-stakes corporate and legal environments, "probabilistic" isn’t good enough. We need verifiable outputs. The Singapore Model shows us that while AI is a powerful co-pilot, the human must always remain the captain.

Do you have a "Human-in-the-Loop" policy in your firm yet?

If you’re ready to discuss, book a call with us here: https://marshamedge.pipedrive.com/scheduler/bELMq7Iv/ai-strategy-exploratory-call

600 1st Ave, Ste 330 PMB 92768, Seattle, WA 98104-2246

Marsham Edge

Strategic AI insights for major project leaders. I share the frameworks and governance models needed to move infrastructure into the digital age, distilled from decades of executive experience in London, Sydney and Singapore.
