Direct Answer
AI augmented architecture is the practice of connecting a governed EA repository to AI language models and BI tools so that architecture intelligence is accessible on demand, without requiring architects to manually generate reports, answer stakeholder queries, or translate models into presentations.
In practical terms: an enterprise architect maintains a Sparx EA repository. EA GraphLink connects that repository to Power BI (for live dashboards), to Microsoft Copilot (for natural language queries in Teams and Outlook), and to Kernaro AI Hub (for AI-assisted architecture work directly in Sparx EA). A CIO can ask “Which of our applications are approaching end of life and what capabilities do they support?” and receive a structured, accurate answer drawn from the live repository, in a Teams chat, without opening Sparx EA, without filing a report request, and without waiting.
This is not a future state. In 2026, this is a production-ready architecture available to any organization running Sparx EA with Pro Cloud Server, EA GraphLink, and a governed MDG repository. The technology is mature. The constraint is repository governance quality.
Why This Matters Now
The Inflection Point
Three things converged between 2024 and 2026 to create a genuine inflection point for AI in enterprise architecture:
MCP standardisation: The Model Context Protocol, developed by Anthropic and rapidly adopted as an open standard, provides a standardized way for AI language models to connect to external data sources and tools. Before MCP, integrating an AI tool with a specific data source required bespoke API work for each combination. With MCP, any data source that exposes an MCP server can be queried by any AI client that supports MCP. Sparx Systems’ EA GraphLink MCP Server implementation means the Sparx EA repository is now MCP-native.
Native Sparx EA MCP support: Sparx Systems shipped EA GraphLink with MCP Server support, making Sparx EA the first major EA tool to natively expose repository content via MCP. This is not a third-party workaround; it is a Sparx Systems product decision that positions Sparx EA as an AI-native architecture platform. The implication is significant: any AI system that supports MCP can now query a Sparx EA repository.
LLM maturation: The large language models available in 2025-26 (GPT-4o, Claude 3.5/3.7, Gemini 2.0, and their successors) have crossed a capability threshold where they can meaningfully reason over structured architecture data. Earlier LLM generations produced confident-sounding responses that were structurally plausible but factually unreliable. Current models, given well-structured context from a governed repository, produce responses that are genuinely useful for architecture work.
The combination of MCP standardisation, native Sparx EA MCP support, and capable LLMs means the primary constraint on AI augmented architecture is no longer the technology; it is the quality of the underlying EA repository.
The Productivity Gap AI Augmented Architecture Closes
Enterprise architects are expensive and scarce. They spend a significant proportion of their time on activities that do not require architectural judgment:
- Generating status reports from repository data
- Answering repeating stakeholder questions about the current-state application landscape
- Producing slide decks that translate repository content into executive-readable formats
- Running queries to populate governance artefacts
The outputs are valuable, but producing them is low-value work for the architect. AI augmented architecture automates that production, not perfectly, but well enough to free architect time for the work that actually requires architectural judgment: evaluating options, managing trade-offs, engaging stakeholders on complex decisions, and building the mental models that make good architecture possible.
The organizations that will have the most effective EA practices in 2028 are the ones that invest now in the repository governance that makes AI augmentation possible.
The Technology Stack
Understanding AI augmented architecture requires understanding the components that make it work and how they connect.
Layer 1: The Repository
Sparx EA is the architecture repository. It stores elements (capabilities, applications, systems, technology components, data entities, processes), relationships between elements (realisation, association, composition, dependency, flow), and the metadata on elements (tagged values, notes, diagrams, links to external artefacts).
The repository quality, specifically the MDG (Model Driven Generation) governance that structures element stereotypes and tagged values, determines everything downstream. A repository with governed MDG profiles produces rich, structured data. A repository without governed MDG profiles produces ambiguous, sparse data. AI tools can only reason over what the repository contains and how it is structured.
Layer 2: Pro Cloud Server
Sparx EA Pro Cloud Server (PCS) is the server-side infrastructure for Sparx EA. It manages the repository database (SQL Server, PostgreSQL, MySQL, or Oracle), provides web-based access to the repository, handles authentication, and exposes the API layer that EA GraphLink connects to. Every enterprise Sparx EA deployment should be running PCS; file-based repositories (.eapx) do not support EA GraphLink.
Layer 3: EA GraphLink
EA GraphLink is the connectivity product from Sparx Systems that transforms the repository into a structured data source for external systems. It provides:
Interface A: GraphQL API: Structured graph queries against the repository. Power BI’s connector queries this API to populate dashboards. The GraphQL interface can also be queried by custom applications, reporting tools, and programmatic integrations.
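What a programmatic query against Interface A might look like can be sketched as follows. The actual EA GraphLink GraphQL schema depends on the deployment; the query fields, type names, and tagged-value filter below are assumptions for illustration, not documented EA GraphLink schema elements.

```python
import json

# Hypothetical GraphQL query against EA GraphLink's Interface A.
# Field and argument names are illustrative assumptions -- check the
# actual schema exposed by your EA GraphLink deployment.
APPLICATIONS_QUERY = """
query ApplicationsNearEndOfLife($status: String!) {
  elements(stereotype: "ApplicationComponent",
           taggedValue: {name: "LifecycleStatus", value: $status}) {
    name
    taggedValues { name value }
    relationships(type: "Realisation") {
      target { name stereotype }
    }
  }
}
"""

def build_request(status: str) -> dict:
    """Build the JSON payload a GraphQL client would POST to the API endpoint."""
    return {"query": APPLICATIONS_QUERY,
            "variables": {"status": status}}

payload = build_request("EndOfLife")
print(json.dumps(payload["variables"]))  # {"status": "EndOfLife"}
```

A BI tool or custom integration would POST this payload to the GraphQL endpoint; Power BI's connector performs the equivalent internally.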
Interface B: MCP Server: A Model Context Protocol server that makes repository data available to AI language models. Any MCP-compatible AI client (Microsoft Copilot, Kernaro AI Hub, Claude Desktop, Cursor, Azure OpenAI, and others) can connect to the EA GraphLink MCP Server and query the repository.
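Connecting an MCP client typically means registering the server in the client’s configuration file. The fragment below is an illustrative Claude Desktop-style configuration; the server name, bridge command, and URL are placeholders, not documented EA GraphLink values, so consult your deployment’s documentation for the real endpoint.

```json
{
  "mcpServers": {
    "ea-graphlink": {
      "command": "npx",
      "args": ["mcp-remote", "https://pcs.example.com/graphlink/mcp"]
    }
  }
}
```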
Layer 4: The Consumer Applications
The applications that consume EA data via EA GraphLink:
Power BI (via Interface A): BI dashboards that present EA data to management and executive audiences. Capability heat maps, application portfolio views, technology risk dashboards, project alignment matrices. Refreshed automatically as the repository is updated.
Kernaro AI Hub (via Interface B): An AI assistant purpose-built for EA work, developed by Kernaro. AI Hub connects to EA GraphLink’s MCP Server and provides AI-assisted architecture analysis within the context of the specific EA repository. Use cases include: automated element classification, relationship suggestion, gap detection, impact analysis, and natural language queries against the repository.
Kernaro Assist (via Interface B): Kernaro’s conversational AI assistant, which operates within Sparx EA. Architects can ask questions about the repository (“What applications support the customer onboarding capability?”) and receive structured answers without leaving the EA tool.
Microsoft Copilot (via Interface B): EA intelligence in the flow of Microsoft 365 work. Natural language queries against the EA repository from Teams, Outlook, Word, and other M365 applications.
Microsoft Fabric (via Interface B): EA data ingested into the Fabric data estate for cross-source analytics and agentic data workflows.
Universal AI clients (via Interface B): Any MCP-compatible AI client can connect to EA GraphLink’s MCP Server. Claude Desktop, Cursor (for architecture-as-code workflows), ChatGPT Enterprise (with MCP plugin support), and Azure OpenAI (with MCP integration) are all compatible.
What AI Augmented Architecture Enables
For Architects: Productivity and Governance Automation
Report generation: The most immediate productivity gain. Instead of manually querying the repository, structuring findings, and building slides or reports, an architect asks EA GraphLink-connected AI for a current-state summary. The AI generates a structured, accurate response from the repository that the architect reviews, edits, and publishes in minutes rather than hours.
Impact analysis: “What would be affected if we decommissioned Application X?” In a well-governed repository, this query traverses the dependency graph from Application X through its capabilities, the business processes those capabilities support, the user communities those processes serve, and the technology components Application X depends on. A governed repository with EA GraphLink produces a complete impact analysis in seconds. The same query without EA GraphLink requires an architect to manually traverse the repository.
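Conceptually, this impact analysis is a breadth-first traversal of the repository’s dependency graph. The sketch below uses invented element names and a plain adjacency map to show the idea; a real implementation would fetch edges via EA GraphLink rather than hard-code them.

```python
from collections import deque

# Toy dependency graph: each edge means "decommissioning the source
# affects the target". Element names are invented for illustration.
IMPACTS = {
    "Application X": ["Customer Onboarding Capability", "Payments API"],
    "Customer Onboarding Capability": ["Onboarding Process"],
    "Onboarding Process": ["Branch Staff"],
    "Payments API": [],
}

def impact_of(element: str) -> set:
    """Breadth-first traversal collecting everything reachable from `element`."""
    seen, queue = set(), deque([element])
    while queue:
        current = queue.popleft()
        for affected in IMPACTS.get(current, []):
            if affected not in seen:
                seen.add(affected)
                queue.append(affected)
    return seen

print(sorted(impact_of("Application X")))
# ['Branch Staff', 'Customer Onboarding Capability', 'Onboarding Process', 'Payments API']
```

The traversal crosses layers (application to capability to process to people) exactly because the cross-layer relationships exist as explicit edges; missing edges silently truncate the result.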
Governance automation: Relationship suggestions, missing element detection, and consistency checks. AI tools connected to the repository can identify elements that lack required tagged values, relationships that appear to be missing based on the pattern of the rest of the repository, and elements whose classification appears inconsistent with similar elements. This is not autonomous governance (the architect still reviews and approves), but it reduces the manual inspection work that governance requires.
Architecture health monitoring: Regular automated queries against the repository to track metrics: percentage of applications with lifecycle status populated, percentage of capabilities with maturity scores, number of elements without an owner tagged value. These metrics, surfaced in Power BI, give the EA team visibility into repository quality trends without manual audit work.
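Health metrics like these reduce to simple aggregations over element metadata. A minimal sketch, with invented field names standing in for tagged values (not a Sparx schema):

```python
# Sample repository extract: each dict is one element's tagged values.
# Field names ("lifecycle", "owner") are illustrative assumptions.
applications = [
    {"name": "CRM",       "lifecycle": "Strategic", "owner": "Sales IT"},
    {"name": "Legacy GL", "lifecycle": "EndOfLife", "owner": None},
    {"name": "HR Portal", "lifecycle": None,        "owner": "HR IT"},
    {"name": "Data Lake", "lifecycle": "Invest",    "owner": "Data Office"},
]

def pct_populated(elements: list, field: str) -> float:
    """Percentage of elements with a non-empty value for `field`."""
    populated = sum(1 for e in elements if e.get(field))
    return round(100 * populated / len(elements), 1)

print(pct_populated(applications, "lifecycle"))  # 75.0
print(pct_populated(applications, "owner"))      # 75.0
```

A scheduled query of this shape, surfaced in a Power BI card, is enough to track repository quality trends over time.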
For Stakeholders: On-Demand Architecture Intelligence
Stakeholders (program managers, domain leads, technology owners) have recurring architecture information needs. What applications support a given business domain? Which projects are modifying the customer data architecture? What is the technology risk profile of the retail banking platform?
Without AI augmented architecture, these questions go to the EA team as ad hoc requests. Each request requires an architect to run queries, structure an answer, and respond. With AI augmented architecture:
- Power BI dashboards answer the high-frequency visual questions without any EA team involvement
- Microsoft Copilot answers the conversational queries in Teams without any EA team involvement
- The EA team is freed from report production and can focus on the requests that genuinely require architectural judgment
This is not about replacing EA team engagement. It is about making EA team time available for the questions that actually need them.
For Executives: Live Portfolio Insight
Executive stakeholders need EA insight at the portfolio level: capability coverage, technology risk, project alignment with architecture strategy, and progress against architecture transformation programs. Today, most EA teams produce these as periodic reports (quarterly reviews, annual roadmap presentations) that are already partially out of date when they are delivered.
With EA GraphLink and Power BI, executives access live portfolio dashboards that reflect the current state of the EA repository. The CIO’s architecture dashboard shows the current application lifecycle profile, not the one from last quarter’s review. The Head of Technology sees the current OT/IT integration risk view, updated as the repository is maintained. Portfolio decisions are made with current-state data rather than stale reports.
The MDG Quality Gate
AI augmented architecture works when the repository is well-governed. It fails, or produces misleading results, when it is not. This is the central constraint, and it is a repository governance constraint, not a technology limitation.
The Garbage-In/Garbage-Out Reality
EA GraphLink surfaces what the repository contains. AI language models reason over what EA GraphLink surfaces. If the repository contains poorly typed elements, sparse tagged values, inconsistent relationship modeling, and ambiguous element naming, the AI outputs will reflect that quality.
A query like “Which applications support our customer onboarding capability?” requires:
- Applications to be modelled as elements with a governed «ApplicationComponent» stereotype
- Capabilities to be modelled as elements with a governed «BusinessCapability» stereotype
- A relationship between specific applications and the customer onboarding capability: specifically, a realisation relationship of the type that EA GraphLink recognizes as application-supports-capability
If any of these are missing, the AI query returns incomplete results. If all three are missing, the query returns nothing useful. The AI is not failing; the repository is.
What Good MDG Governance Looks Like
A repository that is ready for AI augmented architecture has:
Consistent stereotypes: Every significant element has a stereotype from a governed MDG profile. Applications are «ApplicationComponent», capabilities are «BusinessCapability», technology components are «TechnologyComponent». No significant element uses a default UML type without a stereotype.
Populated key tagged values: The strategic elements (capabilities, applications, data entities) have their key tagged values populated. Owner, lifecycle status (for applications), maturity level (for capabilities), and investment direction are populated consistently, not sporadically.
Explicit cross-layer relationships: The relationships that enable cross-layer queries (application-supports-capability, technology-hosts-application) are explicitly modelled using the relationship types that EA GraphLink recognizes. These relationships are what enable AI tools to answer “what supports what” questions.
Named and described elements: Elements have names that are meaningful to domain stakeholders, not just to the architect who created them. Brief descriptions on key elements provide the context that AI tools use to answer “what is X?” questions.
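The “what supports what” queries that this governance enables reduce to filtering typed relationship triples. A toy sketch, with invented element and relationship-type names (not EA GraphLink’s actual vocabulary):

```python
# Cross-layer relationships as (source, relationship_type, target) triples.
# Names are illustrative, not a real repository extract.
RELATIONSHIPS = [
    ("CRM",        "supports", "Customer Onboarding"),
    ("HR Portal",  "supports", "Employee Management"),
    ("Kubernetes", "hosts",    "CRM"),
    ("CRM",        "supports", "Customer Data Management"),
]

def applications_supporting(capability: str) -> list:
    """All sources of a 'supports' relationship targeting `capability`."""
    return [src for src, rel, tgt in RELATIONSHIPS
            if rel == "supports" and tgt == capability]

print(applications_supporting("Customer Onboarding"))  # ['CRM']
```

If the `supports` edge was never modelled, the query returns an empty list; the AI client has nothing to reason over, which is the garbage-in/garbage-out constraint in miniature.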
A Sparx Services Deploy engagement establishes this MDG governance foundation. A Discover engagement assesses whether an existing repository meets the quality threshold.
The Ecosystem Paths
EA GraphLink’s MCP Server is open to any MCP-compatible AI client. The ecosystem paths vary by what the AI client does well and what governance environment the organization operates in.
The Microsoft Path
Audience: Organizations running Microsoft 365, Power BI, and Microsoft Fabric.
Products: EA GraphLink → Power BI (Interface A), EA GraphLink → Microsoft Copilot (Interface B via MCP), EA GraphLink → Microsoft Fabric (Interface B via MCP).
What it delivers: Live EA dashboards in Power BI, natural language architecture queries in Teams/Outlook/M365, EA data as a Fabric data source for cross-source analytics.
Key benefit: All processing occurs within the organization’s Microsoft 365 tenant. No EA data leaves the tenant boundary. This is the correct choice for regulated industries with data residency requirements.
The Salesforce Path
Audience: Organizations where Salesforce is the strategic application and analytics platform.
Products: EA GraphLink → Tableau (Interface A for BI), EA GraphLink → MuleSoft (Interface B for integration), EA GraphLink → Salesforce Agentforce (Interface B via MCP for AI).
What it delivers: EA dashboards in Tableau, EA data flows through MuleSoft integration fabric, natural language architecture queries in Salesforce Agentforce agents.
Key benefit: For organizations that have invested heavily in the Salesforce ecosystem, the EA data can flow into existing Salesforce analytics and AI infrastructure without adding new platform dependencies.
The Universal Path
Audience: Organizations that want to use best-in-class AI tools regardless of platform affiliation, or that use multiple AI clients for different teams or use cases.
Products: EA GraphLink MCP Server connected to any combination of: Claude Desktop (Anthropic), ChatGPT Enterprise (OpenAI) with MCP plugin, Gemini (Google) with MCP integration, Azure OpenAI with MCP wrapper, Cursor (for architecture-as-code and IaC workflows), Kernaro AI Hub and Kernaro Assist.
What it delivers: Flexibility to use the best available LLM for each use case. Kernaro products are EA-specialist tools that provide capabilities specifically designed for architecture work. General-purpose LLMs provide broad language capability for communication, documentation, and analysis tasks.
Key benefit: Not locked to any single AI vendor’s capability trajectory. As LLM capabilities evolve, the EA GraphLink MCP Server connection works with whatever the best available model is.
What It Doesn’t Do
Honest limitations are important. AI augmented architecture does not do the following:
Replace Architectural Judgment
AI tools connected to an EA repository can answer questions about what is in the repository. They cannot make architecture decisions, evaluate strategic trade-offs, or exercise the judgment that comes from domain experience and organisational context. An AI tool can tell you which applications are at end of life. It cannot tell you whether the business case for replacing them outweighs the risk of the replacement project; that requires an architect.
AI augmented architecture amplifies architects. It does not replace them. The organizations that treat AI as a replacement for EA expertise will produce worse architecture than those that treat it as a tool that makes architects more effective.
Compensate for Poor Repository Governance
This has been emphasized throughout this guide, but it cannot be overstated. There is no AI tool that can extract useful, reliable architecture intelligence from a poorly governed repository. The garbage-in/garbage-out constraint is absolute. Organizations that invest in EA GraphLink and AI tooling before investing in MDG governance will be disappointed with the results.
The correct sequence is: repository governance first, AI integration second.
Work Well With Stale Repositories
EA GraphLink surfaces the current state of the repository. If the repository is updated infrequently (architects produce models during project phases but do not maintain them as the organization changes), the AI outputs will reflect a state of the organization that no longer exists. Stakeholders receiving answers from a repository that is 18 months out of date will stop trusting the AI and, by extension, stop trusting the EA practice.
AI augmented architecture requires a commitment to repository maintenance. The dashboards, Copilot answers, and AI outputs are only as good as the most recent data in the repository.
Produce Reliable Results From Ambiguous Queries
AI language models are probabilistic reasoners. Given an ambiguous query, even against a well-governed repository, they will produce a confident-sounding response that may be partially wrong. Architects and stakeholders using AI-augmented tools need to treat responses as starting points for verification, not authoritative answers. The more precisely the query is specified (for example, “which applications with lifecycle status ‘Strategic’ support the ‘Customer Data Management’ capability via an ArchiMate Serving relationship?”), the more reliable the response.
How to Get Started
Stage 1: Discover ($25K–$75K)
A Discover engagement is the right starting point for any organization considering AI augmented architecture. Discover produces:
- An assessment of the current Sparx EA repository’s MDG governance quality
- A capability map and application landscape as a baseline architecture artefact set
- A gap analysis identifying what repository governance work is needed before AI integration
- A prioritized roadmap for the Connect and Deploy work that follows
Discover takes 4–8 weeks. It answers the question: “Are we ready for AI augmented architecture, and if not, what does it take to get there?”
Stage 2: Deploy ($30K–$130K)
If the Discover assessment reveals material MDG governance gaps, a Deploy engagement addresses them before AI integration. Deploy establishes:
- Governed MDG profiles with consistent stereotypes and tagged value schemas
- A governed package structure and repository organization
- The minimum required cross-layer relationship modeling (capability → application → technology)
- Baseline metadata population on strategic elements
Deploy does not need to address the entire repository. It establishes governance for the elements and relationships that the AI integration will query most frequently.
Stage 3: Connect ($50K–$185K+)
Connect delivers the EA GraphLink deployment and the consumer integrations:
- EA GraphLink installation and configuration on Pro Cloud Server
- Power BI semantic model and initial dashboard set
- Microsoft Copilot plugin (or Salesforce/Universal path AI integration)
- Microsoft Fabric pipeline configuration (where in scope)
- Kernaro AI Hub and Assist configuration (where in scope)
- Architect training on the integrated platform
Connect is where the AI augmented architecture becomes real for the organization: where architects start using AI tools in their daily work and where stakeholders start accessing architecture intelligence through dashboards and conversational AI.
Stage 4: Amplify ($45K–$160K)
Once the AI integration is live, Amplify engagements maximize the value of the investment:
- Expanding the repository governance to cover additional domains and use cases
- Adding new Power BI dashboards for emerging governance needs
- Deepening the Copilot integration for specific stakeholder workflows
- Training and capability development for the EA team
Amplify is ongoing. As the EA practice matures and the AI tools evolve, there is always value to be added to a well-run AI augmented architecture program.
FAQ
What is AI augmented architecture?
AI augmented architecture is the practice of connecting a governed Sparx EA repository to AI language models and BI tools so that architecture intelligence is accessible on demand: through Power BI dashboards, Microsoft Copilot natural language queries, and AI-assisted analysis tools like Kernaro. The repository stays in Sparx EA; AI tools make its content accessible to wider audiences without requiring direct EA tool access.
What is EA GraphLink and who makes it?
EA GraphLink is a connectivity product made by Sparx Systems (the makers of Sparx EA). It exposes the Sparx EA repository through two interfaces: Interface A is a GraphQL API consumed by Power BI and other BI tools; Interface B is an MCP (Model Context Protocol) Server consumed by AI language model clients including Microsoft Copilot, Kernaro AI Hub, Claude, ChatGPT Enterprise, and others. Sparx Services implements EA GraphLink as part of a Connect engagement.
What is MCP and why does it matter for EA?
MCP (Model Context Protocol) is an open standard, developed by Anthropic and widely adopted, that provides a standardized way for AI language models to connect to external data sources. Before MCP, each AI/data-source integration required bespoke development. With MCP, any data source with an MCP server (including EA GraphLink) is immediately accessible to any AI client that supports MCP. This means the Sparx EA repository can be queried by Microsoft Copilot, Claude, ChatGPT Enterprise, Azure OpenAI, Cursor, and any other MCP-compatible AI client.
Does AI augmented architecture work without good MDG governance?
No: not reliably. EA GraphLink surfaces what the repository contains. If elements lack stereotypes, tagged values are sparsely populated, or cross-layer relationships are not modelled, AI queries produce incomplete or misleading results. MDG governance is the quality gate for every AI integration. Sparx Services recommends a Deploy engagement to establish or validate MDG governance before a Connect engagement deploys AI integration.
What is Kernaro and how does it relate to EA GraphLink?
Kernaro is an independent software company that develops AI tools purpose-built for Sparx EA. Kernaro AI Hub is an AI assistant for EA analysis work: gap detection, automated classification, impact analysis, natural language repository queries. Kernaro Assist is a conversational AI interface within Sparx EA. Both connect to the EA repository via EA GraphLink’s MCP Server. Kernaro products are EA-specialist tools; they complement rather than replace general-purpose AI tools like Microsoft Copilot or Claude.
Can we use AI augmented architecture if we use Azure OpenAI rather than Microsoft Copilot?
Yes. EA GraphLink’s MCP Server is compatible with any MCP-supporting AI client. Azure OpenAI can be configured with an MCP integration to query the EA repository. This gives organizations that prefer Azure OpenAI’s specific model capabilities (or its Azure-native data residency) access to the same EA intelligence as Microsoft Copilot provides. Sparx Services can implement the Azure OpenAI MCP integration as part of a Connect engagement.
What does AI augmented architecture NOT do?
It does not replace architectural judgment: AI tools answer questions about what is in the repository, but cannot evaluate strategic trade-offs or make architecture decisions. It does not compensate for poor repository governance: a poorly governed repository produces unreliable AI outputs regardless of how capable the AI model is. It does not work well with stale repositories: AI outputs are only as current as the repository data. And it does not produce perfectly reliable answers to ambiguous queries: AI responses are starting points for verification, not authoritative outputs.
What is the recommended sequence for getting started?
The recommended sequence is: Discover → Deploy → Connect. Discover assesses your current repository quality and produces an architecture baseline and readiness gap analysis. Deploy establishes the MDG governance that AI integration requires. Connect deploys EA GraphLink and the consumer integrations (Power BI, Copilot, Kernaro). Amplify then maximises the value of the investment through expanded governance and additional dashboards. Organizations with already well-governed repositories can skip the Deploy stage and move directly from Discover to Connect.
How much does AI augmented architecture cost to implement?
The Sparx Services implementation cost depends on the scope. Discover starts at $25K. Deploy starts at $30K. Connect starts at $50K. A typical full path (Discover + Deploy + Connect) is in the range of $120K–$300K depending on repository complexity, scope of Microsoft or other integrations, and number of Power BI dashboards. This does not include Microsoft product licensing (Power BI, Copilot, Fabric), which the client licenses separately from Microsoft, or Sparx Systems product licensing (EA GraphLink). Contact Sparx Services for a scoped estimate.
Is AI augmented architecture suitable for regulated industries?
Yes, and the Microsoft ecosystem path is specifically designed for regulated industries. Microsoft Copilot and Fabric process EA data within the organization’s Microsoft 365 tenant, satisfying data residency requirements for financial services, healthcare, and government programs. For US government and defense programs, Microsoft Government Cloud (GCC and GCC High) tenants provide the appropriate security classifications. Sparx Services has experience implementing AI augmented architecture in financial services, government, and defense contexts.
Next Step: Start With a Connect Engagement
AI augmented architecture is not a research project. It is a production-ready capability available today to organizations running Sparx EA with Pro Cloud Server. The constraint is repository governance quality, not technology availability.
A Connect engagement from Sparx Services delivers a full EA GraphLink deployment with Power BI dashboards and AI tool integration, typically in 8–16 weeks.
Talk to Sparx Services about a Connect engagement
Connect engagements start at $50K. Discover first if you need a repository readiness assessment.