
Beyond Standard - The Repositioning of Enterprise Architecture

  • Writer: Claas
  • Feb 25
  • 6 min read
The debate between standard and custom sometimes appeared settled. Standard software prevailed not because it was superior in every dimension, but because it reduced uncertainty. Custom solutions were expensive, politically fragile and difficult to sustain. Standard systems created predictability in environments where predictability mattered more than elegance.

AI changes parts of that equation. Development is faster, integration is easier and orchestration across systems is technically feasible in ways that were unrealistic a decade ago. Vendors are embedding AI aggressively, and agent models and usage-based pricing are being introduced. Capital markets expect monetisation. Boards expect productivity gains.

Yet in delivery programs, what I observe is not acceleration but hesitation.

This is consistent with something I wrote earlier when reflecting on digital labour and screenless architectures. If AI becomes the operator rather than the assistant, the interface loses dominance. Systems turn into transaction engines. The real question shifts from functionality to orchestration. That shift is structural, but it is not yet settled.

We are not witnessing a collapse of standard, but we are also not witnessing a triumphant return of custom. We are operating in a transition phase in which architectural placement, economic logic and governance maturity are still being negotiated.

Stability was never accidental


Enterprise platforms dominate because they solve structural problems, not because they offer the most elegant feature set. They embed compliance directly into operational flows. They support global scale without constant reinvention, make auditability routine rather than exceptional and create operational resilience in environments where failure is not an option. In many organisations, they also shift substantial delivery and regulatory risk away from internal IT teams toward vendors with global support structures.

That structural value has not weakened. Regulatory complexity is increasing, not declining. Global operating models still require consistency. Boards and audit committees still prioritise predictability over experimentation when it comes to core systems. Stability is not a legacy preference, but a governance requirement.

This is why stock market volatility and valuation corrections in the enterprise software sector should be interpreted carefully. A correction in growth expectations is not evidence of structural irrelevance. Capital markets reassess monetisation trajectories. They do not automatically signal that compliance layers, transaction engines and global process backbones are obsolete.

Standard platforms continue to carry institutional weight because they anchor accountability, not just functionality. The risk today is that while these platforms remain the anchors of accountability, they are increasingly positioning themselves as the sole gateway to an organisation’s own intelligence. The 'Standard' model is shifting from a functional choice to a sovereignty negotiation (see also my blog on sovereignty).

Orchestration is where the real debate moves


The real shift happens in architecture, not in feature sets.

AI capabilities inside applications are visible and easy to compare. The more consequential change is less visible. It concerns the placement of decision logic across systems and the way that logic is governed. As intelligence begins to act across domains, architecture becomes the central question. Selecting an application is no longer the only structural decision. The location of orchestration determines control, dependency and economic exposure. Vendors position embedded intelligence as the natural extension of their platforms. The data model is native. Security is integrated. Context is deep. From a technical standpoint, this approach is coherent.

At the same time, many enterprises invest in central AI layers to coordinate across systems. The objective is cross-domain visibility, consistent governance and reduced dependency on individual platforms. This is not ideological positioning. It reflects the need to manage systemic interactions rather than isolated use cases. In delivery programs, this tension becomes tangible.

In complex call center environments, activating embedded AI often requires broader system upgrades. These landscapes are tightly integrated and operationally sensitive. Organisations hesitate to trigger large upgrade cycles purely to unlock AI features. Instead, they introduce smaller, targeted solutions on top of existing systems to address immediate efficiency needs.

In other environments I have observed that the hesitation is often economic rather than technical. The built-in AI capabilities are available, but pricing and scaling considerations lead clients to experiment with simpler, more controlled AI layers alongside the platform. In ERP and CRM landscapes, central AI initiatives frequently operate in parallel to platform-native capabilities. This isn't just technical redundancy; it is an architectural hedge. Organisations are realising that if the decision logic lives exclusively inside the vendor’s 'Black Box', the enterprise loses the ability to pivot without permission.

Smaller specialised providers enter these spaces with targeted orchestration tools, addressing specific gaps without demanding full platform alignment. The technology functions. The architectural equilibrium has not yet stabilised.

Pricing reveals the structural tension


The monetisation question is where much of the uncertainty becomes visible. Vendors introduce agent-based pricing, usage models and AI tiers. From their perspective this is consistent. Intelligence adds capability. Capability should add value. Value should be monetised.

Inside enterprises, the discussion sounds different. AI is primarily evaluated through efficiency: faster ticket handling, automated documentation, reduced manual processing or better routing. These are operational improvements, and they are measurable. That makes them attractive.

But they also raise practical questions.

  • If automation reduces manual workload, what happens to seat-based pricing?
  • If AI operates across systems, does each platform charge separately?
  • If efficiency reduces user interaction, does revenue scale with usage or decline with automation?
These questions are rarely framed as criticism; they surface in budget planning. In some organisations, embedded AI is technically available but adopted selectively because the pricing model is not yet perceived as proportionate. In others, central AI initiatives are compared against platform-native AI not only on functionality, but on long-term cost exposure.

We are witnessing a fundamental misalignment: Vendors are trying to protect per-seat margins while selling technology designed to eliminate the need for those seats. This 'Automation Tax', where the efficiency gain is immediately recaptured by the vendor via AI surcharges, might be the primary driver of the current adoption plateau.
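A toy calculation, with entirely hypothetical figures, shows how such a recapture can play out in budget terms:

```python
# Toy model of seat-based vs usage-based pricing under automation.
# All figures are hypothetical illustrations, not vendor prices.

def seat_cost(agents, price_per_seat):
    # Classic per-seat licence cost.
    return agents * price_per_seat

def usage_cost(interactions, price_per_interaction):
    # Usage-priced AI tier cost.
    return interactions * price_per_interaction

# Baseline: 200 agents, no AI surcharge.
before = seat_cost(200, 1_200.0)                      # 240,000

# After automation: AI resolves 60% of 1,000,000 yearly interactions,
# the team shrinks to 80 agents, but each AI interaction is metered.
after = seat_cost(80, 1_200.0) + usage_cost(600_000, 0.25)

print(before)   # 240000.0
print(after)    # 246000.0
```

Under these assumed numbers, the headcount saving is almost exactly recaptured by the AI surcharge, which is precisely the dynamic the 'Automation Tax' describes.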

The hesitation is not about whether AI works. It is about whether the monetisation logic is aligned with the efficiency logic. Markets seem to reflect that ambiguity. Growth expectations are adjusted. Valuations move. That does not signal collapse. It signals that the long-term revenue mechanics are still being tested.

Until there is clearer alignment between automation gains and pricing structures, caution is rational.

Custom is not returning, but it is repositioned

Technically, custom development is less intimidating than it used to be. Prototyping is faster. Integration patterns are cleaner. AI-assisted development lowers effort for certain classes of functionality. Building something on top of existing systems no longer feels like a multi-year engineering commitment.


That changes the feasibility discussion. What it does not change is the delivery reality. In large programs, custom initiatives rarely struggle because code cannot be written. They struggle because ownership is unclear, because scope expands, because governance is fragmented. AI does not solve those problems. If anything, it increases the need for clarity about responsibility and decision rights. This is why I do not see a broad return to custom as system replacement. Core platforms continue to carry compliance, transaction integrity and global consistency. Rebuilding that stack from scratch remains economically and organisationally unrealistic for most enterprises.

Custom components now function as the 'connective tissue' or the 'policy brain' that directs traffic between standard systems. This isn't about building a new database; it’s about writing the proprietary code that prevents an organisation from becoming a mere tenant in its own value chain. It complements stable platforms rather than competing with them. It gives enterprises room to shape behaviour without destabilising the backbone. In practice, this leads to layered architectures. Standard systems anchor transactions and compliance. Custom components shape cross-system logic and domain-specific behaviour. AI acts within and across those layers.

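A minimal sketch, with all names and routing rules purely hypothetical, of what such an enterprise-owned 'policy brain' might look like: the decision about which system handles a request lives in code the organisation controls, not inside any single platform.

```python
from dataclasses import dataclass

# Minimal sketch of an enterprise-owned "policy brain".
# All names and routing rules here are hypothetical illustrations.

@dataclass
class Request:
    domain: str   # e.g. "billing", "support"
    risk: str     # e.g. "low", "high"

def standard_platform(req: Request) -> str:
    # Stand-in for the standard system of record (compliance, transactions).
    return f"platform handled {req.domain}"

def custom_component(req: Request) -> str:
    # Stand-in for custom cross-system logic layered on top.
    return f"custom logic handled {req.domain}"

def handle(req: Request) -> str:
    # The routing policy lives in enterprise-owned code, so it can be
    # changed without waiting on a vendor's upgrade or pricing cycle.
    if req.risk == "high" or req.domain == "billing":
        return standard_platform(req)   # compliance-critical: keep in the core
    return custom_component(req)        # differentiation: keep flexible

print(handle(Request("billing", "low")))   # platform handled billing
print(handle(Request("support", "low")))   # custom logic handled support
```

The point of the sketch is the placement, not the logic: the standard platform stays the anchor for compliance-critical work, while the routing rule itself remains something the enterprise can rewrite without permission.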
This is not a comeback story. It is a redistribution of responsibility within the stack. And again, the constraint is not technical feasibility. It is organisational maturity. The ability to decide where differentiation is worth the complexity and where stability should prevail.

Architectural responsibility in an unsettled phase


The debate is no longer about choosing between standard and custom. The relevant issue is control over architecture and economic exposure. Core platforms continue to anchor compliance, global processes and accountability. That layer remains essential. What is changing is the layer above it. As intelligence operates across systems, architectural placement becomes a strategic decision. Embedded AI strengthens platform depth but can increase dependency and cost concentration. Central orchestration layers increase flexibility but require internal maturity and clear governance.
Both approaches are viable. Neither is neutral.

The tension today lies in the fact that automation capability, monetisation models and organisational readiness are evolving at different speeds. Technical feasibility advances quickly. Pricing structures adapt commercially. Governance frameworks take longer to stabilise. This misalignment explains much of the current hesitation.

From what I see in customer programs, the most resilient organisations are not those moving fastest, but those designing with optionality. They keep the core stable, experiment at the orchestration layer and avoid irreversible economic commitments until the long-term interaction between automation and pricing becomes clearer.

There is no dominant model yet. Stability remains necessary. Orchestration is becoming strategic. The critical task is to design architectures that preserve control while absorbing innovation without structural overreaction.

That requires deliberate architectural leadership rather than reactive adoption.


