
Office Full Of Ghosts

  • Writer: Duncan Welling
  • Jan 26
  • 4 min read

Why enterprise design has to come before the model is trained


AI is an office full of ghosts.


Most AI models are trained on the past and the present. On historical data. On legacy systems. On the observed behaviours of today’s employees and customers.


In other words, they learn how things have been done — not how they should be done, and certainly not how they could be done if we were starting again today.


That’s the paradox.

For a technology we describe as futuristic, AI doesn’t look forward. It doesn’t imagine. It doesn’t make conceptual leaps. Instead, it faithfully absorbs the organisation as it already exists. And that has profound implications for executives.



What actually gets trained into AI systems


When organisations talk about “applying AI to the business”, the conversation often focuses on use cases, tools, vendors, and quick wins. What gets much less attention is what the AI is really learning.


Because when you deploy AI inside an enterprise, you don’t just automate processes; you encode the ghosts of old decisions:

  • The workarounds that grew up around broken or missing systems

  • The informal rules people follow because formal governance doesn’t work

  • The hero behaviours that keep things running while hiding structural flaws

  • The incentives that shape behaviour, even when leadership thinks they don’t


AI doesn’t question these patterns. It doesn’t know which ones are accidental, outdated, or undesirable. It simply learns them, and then scales them. Relentlessly. At machine speed.


This is why AI strategy can’t be a purely technical exercise.



Why “AI on top of today’s operating model” is risky


Applied directly to today’s operating model, AI doesn’t transform the organisation; it hardens it. It encodes behaviours. It makes yesterday’s design durable and scalable.


What used to be flexible, informal, and correctable becomes embedded logic. What used to be managed through judgement becomes managed through systems.


This is where the executive challenge really begins.


Yes, AI will enable headcount reduction. Yes, it will automate tasks and decisions that previously required people. But the ghosts in the office are not as easy to manage (or remove) as employees.



From managing people to managing hauntings


When people don’t behave as expected, leaders have options.

They can coach. They can restructure. They can change incentives. They can replace roles or redesign teams.


When an AI system doesn’t behave as expected, those levers largely disappear.

The executive isn’t managing performance anymore. They’re managing hauntings.

The response becomes technical:

  • You have to bring the technologists back in

  • You have to retrace training data and assumptions

  • You have to unpick embedded decision logic

  • You have to understand which behaviours were learned, and why


And because of the nature of AI systems, it’s rarely a simple matter of turning it off and turning it back on again. AI systems learn continuously. They accumulate history. They carry organisational memory in ways that are hard to unwind.


Once embedded, these ghosts don’t leave quietly. They persist across reorganisations, leadership changes, and strategy resets, long after the people who created the conditions for them have moved on.



This is where enterprise design comes in


The core mistake many organisations make is treating enterprise design as something that follows AI adoption. In reality, it has to precede it.


Enterprise design is the work of intentionally shaping:

  • Decision rights

  • Accountability

  • Incentives

  • Governance

  • Ways of working

  • The boundaries between people, processes, and systems


If you don’t do this work explicitly, AI will do it for you, implicitly, by learning from whatever currently exists. And it will do it without judgement.



What “enterprise design first” actually means in practice


For executives, integrating enterprise design into AI is not about producing perfect future-state org charts. It’s about making a set of deliberate choices before those choices are frozen into code.


In practice, that means asking some uncomfortable questions up front.


1. Which behaviours do we want to scale, and which do we want to eliminate?

AI will amplify behaviour. You need clarity on:

  • Which workarounds exist because people are smart

  • Which exist because the organisation is broken

Only the first category should survive training.


2. Where is judgement genuinely required, and where is it accidental?

Many organisations rely on human judgement not because it’s valuable, but because the system design forces it. AI applied here will simply automate poor design.


3. Who is accountable when decisions are automated?

AI blurs responsibility unless decision rights are explicitly redesigned. “The system decided” is not a governance model.


4. What incentives are currently shaping behaviour (formally and informally)?

AI will learn the real incentives, not the stated ones.


5. What do we want the organisation to become before we teach a machine how it works?

Once the model is trained, that answer matters a lot more.



AI needs enterprise design now — not later


None of this is an argument against AI.


AI can scale judgement, execution, and pattern recognition in extraordinary ways. But it cannot decide what kind of organisation you want to become.


That remains a leadership responsibility.

Which is why AI needs enterprise design as well as technical expertise. Not as a downstream activity. Not as a clean-up exercise. But as a core part of how AI initiatives are conceived, governed, and deployed.


Because once the ghosts are trained into the system, they’re very hard to evict.


And at that point, the question for executives is no longer whether AI works, but whether it’s working for the organisation they actually want to run.

 
 
 



© 2026 by Choir Consulting Ltd.