Why your AI “Roadmap” is likely a waste of time - Follow-up


Welcome to The Edge

Marsham Edge's Newsletter

Most executives I speak to right now are stuck in what I call the PoC Lobby.

Months are spent evaluating models, running pilots, and building “AI roadmaps”…
Only to end up exactly where they started, with no measurable impact on delivery, cost, or speed.

The assumption is that the technology isn’t ready.

It is.

The problem is execution.


The uncomfortable reality

Based on recent work across infrastructure and project-led organisations, the pattern is consistent:

  • Teams are still rebuilding reports manually every week
  • Planning cycles are slowed by fragmented or unreliable data
  • AI initiatives exist, but sit outside actual delivery workflows

In other words:

AI isn’t failing.
It’s just not being applied where the real friction is.


Three things that actually matter

From 50+ real-world deployments, three truths show up every time:

1. Most of the work is invisible
The challenge isn’t the model, it’s data quality, process design, and adoption inside live teams.

2. Iteration beats planning
Every successful implementation moves quickly from idea → test → adjustment.
Not 6-month strategy cycles.

3. Productivity gets worse before it gets better
There is always a dip while processes are restructured, and most organisations abandon efforts at exactly this point.


Where this breaks down

The biggest gap I see is this:

Organisations treat AI as a strategy exercise,
when it's actually an execution problem.

So you get:

  • 12-month roadmaps
  • Slide decks
  • Pilot use cases

…but no change to how work actually gets done.


What works instead

The teams that move fastest do one thing differently:

They take a single operational bottleneck
…and build something usable around it immediately

Not perfect. Not scaled. Just working.

In practice, that looks like:

  • Turning a manual reporting process into a live model
  • Stress-testing schedules using existing data
  • Matching tender requirements to team structures automatically

Not theory — usable outputs.


Where this becomes real

For most teams, this shows up today as:

  • Reporting cycles rebuilt manually across multiple systems
  • Delays in decision-making due to lack of visibility
  • “AI initiatives” that never move beyond pilot stage

This isn’t a strategy issue.
It’s an execution bottleneck.

What we typically do is take one of these friction points and turn it into a working model in a day — using the data and tools already in place.

If that’s something you’re currently dealing with, just reply with:

REPORTING
or
PLANNING

I’ll share how we’re approaching it in practice with other teams.


Muriel Demarcus
CEO, Marsham Edge

600 1st Ave, Ste 330 PMB 92768, Seattle, WA 98104-2246

Marsham Edge

Strategic AI insights for major project leaders. I share the frameworks and governance models needed to move infrastructure into the digital age, distilled from decades of executive experience in London, Sydney and Singapore.
