
Most AI is an IT Project. Real AI never is.

  • Writer: Claas
  • Dec 11, 2025
  • 4 min read

Why AI ends up in IT, and why this creates the wrong conditions


In most enterprises, AI programmes naturally flow into the IT portfolio. The logic is familiar. IT has delivery structures, programme capacity, architectural governance and dedicated teams. Business units have operational responsibility and limited discretionary time. Any initiative that involves models, data or infrastructure is routed to IT by default.

This works for systems. It does not work for work.

Copilot-style capabilities extend existing workflows without altering how the organisation creates value. They behave like any other software enhancement. IT can deliver them effectively.

The moment AI begins to classify, interpret, prioritise, route or resolve, it moves into the domain of operational work. These activities sit between roles and processes, in the space typically filled by human judgment. Once AI performs this type of work, it cannot remain an IT asset. The consequences of that work belong entirely to the business.

Treating all AI as an IT project is a structural mismatch. Most organisations do it anyway, because their operating model leaves them no alternative.

When AI behaves like labor, the operating model breaks


The common assumption that all AI can be delivered through IT structures is one of the main reasons scaling fails. Some AI enhances human productivity. Other AI performs operational tasks. These categories require different governance, ownership and capabilities, yet most organisations manage both through the same delivery logic.

Digital labor exposes gaps that the operating model was never built to handle. When AI performs work, several organisational questions become unavoidable:
  • Who defines the decision logic?
  • Who owns risk tolerance?
  • Who intervenes when the model is uncertain?
  • Who measures the quality of outcomes?
  • Who redesigns roles once tasks move to AI?

These questions cannot be answered by IT because they are not technical. They concern how the organisation produces value. When they remain unanswered, digital labor behaves unpredictably, even if the underlying model is sound.

This is where most AI programmes break. IT delivers a technically correct solution. The business has not defined the operating conditions that allow AI to perform reliably. The result is friction, mistrust and reversion to manual work.

The distinction is simple. Tools fit inside IT. Labor does not. And AI that behaves like labor exposes the limits of an operating model that still treats technology as software rather than work.

What recent MIT findings reveal about organisational readiness


A recent MIT report found that around 95 percent of enterprise generative AI pilots fail to produce measurable business impact. The failures were not primarily technical. They were organisational. AI systems remained isolated from the work itself. Ownership was unclear. Decision logic was missing. Business capacity was insufficient. Workflows were not redesigned for AI involvement.

In controlled pilots, these gaps are hidden. The environment is narrow and predictable. Once AI enters real operations, it encounters ambiguity that only humans know how to resolve. Without a clear operating model, the organisation cannot absorb digital labor.

The report confirms a pattern many leaders recognise. The difficulty is not deploying AI. The difficulty is integrating it into a structure that was never designed to accommodate an actor that performs work.

Why unclear work is the real blocker


Most organisations document processes, not the decisions that make those processes function. Processes describe sequences. Work happens in the gaps. It depends on contextual interpretation, unwritten rules, local heuristics and tacit coordination. Humans fill these gaps with little effort.

AI cannot.

Digital labor requires explicit decision logic. It needs clarity on acceptable outcomes, incomplete information, conflicting signals and escalation. When organisations cannot articulate this logic, AI behaves unpredictably. The model is not the problem. The absence of defined work is the problem.
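
To make the contrast concrete, the following is a minimal sketch, in Python, of what "explicit decision logic" can mean in practice. All field names, categories and thresholds are invented for illustration; the point is that the conditions a human resolves implicitly have to be written down before a model can act on them.

    # Hypothetical sketch: the judgment a human applies implicitly,
    # written out as explicit decision logic for a digital-labor task.
    # All field names and thresholds are illustrative assumptions.

    def decide(case, model_output):
        """Return an action for a case, or hand it back to a human."""
        # Incomplete information: missing mandatory fields mean the
        # model would be guessing, so the case goes to a person.
        if any(case.get(field) is None for field in ("customer_id", "category")):
            return {"action": "escalate", "reason": "incomplete_information"}

        # Conflicting signals: the model's label contradicts a hard
        # business rule, so a human resolves the conflict.
        if case["category"] == "complaint" and model_output["label"] == "routine":
            return {"action": "escalate", "reason": "conflicting_signals"}

        # Acceptable outcome: act autonomously only above a confidence
        # threshold the business, not IT, has agreed to own.
        if model_output["confidence"] >= 0.85:
            return {"action": model_output["label"], "reason": "within_tolerance"}

        # Everything else is ambiguity, and ambiguity is human work.
        return {"action": "escalate", "reason": "low_confidence"}

The code itself is trivial. What matters is that every branch encodes a business decision about risk tolerance and escalation that neither the model nor IT can supply on its own.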

This is why many AI pilots succeed within narrow boundaries but fail to scale. The pilot avoids the complexity of real operations. The organisation has not defined the work deeply enough for a non-human actor to participate.

Work must be understood before it can be delegated.

Why real AI never belongs in IT: a simple scenario


Consider a customer operations team introducing AI for routing and case triage. IT builds the model, integrates it into systems and tests it. The technical delivery is correct. Once deployed, the model follows patterns and probabilities that make sense from a data perspective.

Operationally, it fails.

The business has not defined customer risk segments. It has not set rules for high-touch cases. It has not clarified when escalation is mandatory or when ambiguity must trigger human review. It has not specified how to treat inconsistent data. And it has not redesigned the work of frontline teams to absorb the AI’s decisions.

The model is technically sound. The workflow is not.

Customers are misrouted. Exceptions increase. Teams override decisions manually. Management pauses the initiative. The organisation concludes that AI is unreliable.

In reality, nothing was wrong with the model. The organisation had not defined the work clearly enough for AI to perform it. The failure belongs to the operating model, not the technology.

What must change before AI can scale


Scaling AI requires the organisation to adjust structures that have been stable for years.

  • It needs business capacity dedicated to decision ownership. Side-of-desk involvement is not enough. Decision owners and domain architects must shape how work is executed and measured.
  • It needs decision maps rather than process diagrams. AI depends on clarity at decision points, not only sequences of tasks.
  • It needs a clear separation between platform ownership and outcome ownership. IT governs the technical environment. The business owns quality, risk and performance.
  • It needs redesigned workflows and roles. Human work should not remain unchanged when tasks move to digital labor. Activities need to be removed deliberately rather than informally bypassed.
  • It needs incentives that support value creation. Organisations must reward the adoption of digital labor, not only operational stability.

These conditions are not optional. They determine whether digital labor becomes part of the organisation or remains a sequence of isolated pilots.

Conclusion: AI succeeds when organisations stop treating it as an IT project


Most AI begins in IT because it resembles technology. Real AI fails in IT because it behaves like work.

The business must define decision logic, own outcomes, manage risk and redesign roles. IT cannot perform these responsibilities. The operating model must evolve before the model does.

Organisations do not struggle with AI because the algorithms are insufficient. They struggle because they cannot yet articulate the work they expect AI to perform.

AI scales when the organisation is prepared for digital labor. Until that happens, most AI will remain an IT project. And real AI never will be.