The Invisible 77% (and why your AI is stalling)


Welcome to The Edge

Marsham Edge's newsletter

Today, I’ve invited Kate Russell, co-founder of The Square Wave, to share why most AI rollouts hit a wall long before they hit a technical limit.

Kate is a transformation specialist based in Australia who focuses on the "human conditions" that make technology stick.


Why Your AI Rollout Is Stalling (And It’s Not the Technology)

By Kate Russell, Co-Founder, The Square Wave & Founder of Hum[ai]n (https://hum-ai-n.replit.app). Helping leaders and teams build real AI confidence with a focus on people. Ex-lululemon, Australian Fashion Council & leading marketing agencies.

You have approved the tools and you may have run some training. However, when you look at how AI is actually being used across your projects, the picture is inconsistent.

Some people are using it to draft tender responses and cut through compliance documentation in a fraction of the time. Most are not touching it at all. Beneath the surface, a small group of early adopters is moving quickly, while a larger group is quietly unsure whether they are even allowed to use AI on project work.

In the middle, a growing capability gap creates real risk: inconsistent outputs and no shared standard for when a human needs to check the work before it leaves the building.

The Stanford Digital Economy Lab's April 2026 report found that 77% of the work in a successful AI deployment is invisible. It’s not the model; it’s the change management and process design. In high-stakes project environments, that invisible work matters more. The cost of an AI-assisted document that nobody properly reviewed is not just a quality issue. It is a liability.

The organisations getting traction have done the harder work of aligning leadership on how AI should be used and building an environment where people feel safe to flag errors.


The Muriel View: Hallucinations Are a Choice

Kate’s point about the "invisible 77%" is exactly why we founded Marsham Edge. While Kate works on the culture of adoption, I work on the architecture of certainty.

If your team is "quietly unsure," they will use AI in the shadows. And "Shadow AI" is where the most dangerous risks live.

The Financial Times recently highlighted how "probabilistic" errors continue to cause reputational damage in professional services. Whether it's a fabricated legal citation or a distorted technical specification, these errors are not "unfortunate accidents." They are avoidable.

At Marsham Edge, we focus on making AI outputs defensible inside real project environments (see the sketch after this list):

  • Grounded outputs from verified project data
  • Workflows stress-tested against real delivery conditions
  • Clear human accountability before anything leaves the building
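
To make this concrete, here is a minimal sketch, in Python, of what a shared "safe to send" gate can look like. Every name in it (VerifiedSource, DraftOutput, safe_to_send) is illustrative rather than a product API; the point is simply that grounding and sign-off become explicit checks instead of habits.

  from dataclasses import dataclass, field

  @dataclass
  class VerifiedSource:
      """A project document that has been checked into the approved corpus."""
      doc_id: str
      verified: bool = False

  @dataclass
  class DraftOutput:
      """An AI-assisted draft, the sources it cites, and who signed it off."""
      text: str
      cited_sources: list[VerifiedSource] = field(default_factory=list)
      reviewer: str | None = None  # the named human accountable for this draft

  def safe_to_send(draft: DraftOutput) -> tuple[bool, str]:
      """One shared standard, applied before anything leaves the building."""
      if not draft.cited_sources:
          return False, "No grounding: the draft cites no project data at all."
      if not all(source.verified for source in draft.cited_sources):
          return False, "Ungrounded: the draft cites unverified sources."
      if draft.reviewer is None:
          return False, "No accountability: no named human has reviewed it."
      return True, f"Cleared for release, signed off by {draft.reviewer}."

The code is trivial on purpose. If your team cannot state its review standard this plainly, it does not have one.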

Where this becomes a real risk

In most teams, this is already happening:

  • AI outputs being used without a clear review standard
  • Different teams applying different levels of scrutiny
  • No shared threshold for what is “safe to send”

That’s where liability starts to build, quietly.

A practical way to test this

We’ve been working with a small number of teams to run a 1-day diagnostic on exactly this issue.

The focus is simple (see the sketch after this list):

  • Stress-test the prompts your teams are already using
  • Identify where errors or blind spots could occur
  • Put clear guardrails in place using your existing setup
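
As a rough illustration of the first step, the sketch below runs one prompt template across a few edge-case inputs and flags anything that fails a simple check. This is a sketch under stated assumptions, not our diagnostic: call_model is a stand-in for whatever model client your team already uses, and looks_unsafe is a placeholder for your own review rules.

  # Illustrative prompt stress-test harness (all names are placeholders).
  EDGE_CASES = [
      "a tender response with a missing compliance schedule",
      "a specification quoting superseded standards",
      "a contract clause with two conflicting dates",
  ]

  PROMPT_TEMPLATE = "Summarise the key risks in: {case}"

  def call_model(prompt: str) -> str:
      # Stand-in so the sketch runs end to end; swap in the AI tool
      # your team already uses.
      return f"Draft response to: {prompt}"

  def looks_unsafe(output: str) -> bool:
      # Placeholder rule: replace with your real threshold, e.g. "every
      # figure must trace back to a cited document".
      return "source:" not in output.lower()

  def stress_test() -> None:
      # Run every edge case through the same prompt and flag weak outputs.
      for case in EDGE_CASES:
          output = call_model(PROMPT_TEMPLATE.format(case=case))
          status = "FLAG FOR REVIEW" if looks_unsafe(output) else "ok"
          print(f"{status}: {case}")

  if __name__ == "__main__":
      stress_test()

The value is rarely in the harness itself; it is in the argument the team has about what looks_unsafe should actually check.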

No new systems. No long rollout. Just clarity on what’s actually happening.

If this is relevant, just reply with:

DIAGNOSTIC

or

RISK

I’ll share how we’re running these sessions and whether it makes sense in your context.

Muriel Demarcus

CEO & Founder, Marsham Edge

Engineer | Lawyer | Ultra-Runner


Marsham Edge

Strategic AI insights for major project leaders. I share the frameworks and governance models needed to move infrastructure into the digital age, distilled from decades of executive experience in London, Sydney and Singapore.
