Real questions from real clients, answered directly. No marketing gloss. If the honest answer is “it depends,” we say so, and then tell you what it depends on.
Q: How do we know if we’re ready for EA?
You are ready when you can name at least one decision that went wrong because nobody had a complete picture of how your systems, capabilities, or data fitted together. That is the problem EA solves. You do not need a mature governance framework or a full-time team to start: you need a scoped, practical first step, usually a capability map or an application inventory tied to a strategic objective. Readiness is about having a problem worth solving and an executive willing to sponsor the work, not about having everything figured out in advance.
Q: How long does it take to see value from EA?
Tangible value, in the form of a decision made faster, a redundancy identified, or a planning conversation grounded in evidence, can arrive within 90 days of a well-scoped engagement. A full EA capability that delivers ongoing governance value typically takes 12 to 24 months to establish. The honest answer is that value is proportional to scope and discipline: a focused application portfolio assessment delivers results quickly; a comprehensive enterprise-wide program takes longer but delivers broader impact.
Q: What should we model first?
Start with what matters most to your most important decision-maker. If the CIO needs to rationalize applications, start with the application portfolio. If the CEO is asking which capabilities need investment, start with a capability map linked to strategic objectives. If digital transformation is the driver, start with the current-state application landscape and the target integrations. Modeling everything before doing anything is the fastest way to produce shelfware. Pick the decision, model what informs it.
Q: Do we need a dedicated EA team, or can existing architects do it?
Existing architects can and should contribute to EA: their domain knowledge is essential. But EA requires dedicated coordination: someone to maintain the repository, govern the metamodel, manage the stakeholder program, and ensure consistency across contributions. A single dedicated EA architect can carry a small-to-mid-sized program; a larger enterprise typically needs a small center of excellence. Without at least one dedicated person, EA work gets crowded out by delivery pressure.
Q: We’ve tried EA before and it didn’t stick. What went wrong?
Usually one of three things: the EA practice was disconnected from actual decisions (models produced but not used), the repository became stale within six months (governance not maintained), or there was no executive sponsorship and the program was defunded when priorities shifted. Sometimes all three. A practical diagnosis looks at whether EA was solving a real problem, whether governance was embedded in delivery processes, and whether the right stakeholders were engaged. The fix is usually to restart smaller, with a tighter scope and a more visible connection to a live business decision.
Q: How do we make the business case for EA investment?
The strongest business cases connect EA directly to a cost or risk that leadership already recognizes. Application rationalization (“we think we’re running 12 systems that do the same thing”), M&A integration speed (“we don’t know what the acquired company’s technology estate looks like”), and regulatory compliance (“we cannot map our data flows to our GDPR obligations”) are all compelling to CFOs and CIOs without requiring them to believe in EA as a discipline. Quantify the cost of the problem, not just the value of the solution.
Q: What’s the difference between EA and IT strategy?
IT strategy sets the direction for technology investment and capability. EA provides the structured analysis and artefacts that inform and implement that strategy: the models that show what exists, what is needed, and how to get from one to the other. In practice, good IT strategy and good EA are inseparable: strategy without EA lacks grounding; EA without strategy lacks purpose. The best EA practices are embedded in the strategy cycle, not separate from it.
Q: Should EA sit inside IT or closer to the business?
Ideally, it sits at the intersection: reporting to a CIO or COO who spans both. EA that sits purely inside IT tends to produce technology-centric models that business leaders do not engage with. EA that sits in strategy without IT credibility struggles to govern technical decisions. The structural answer matters less than the relationship pattern: EA architects must have genuine access to and credibility with both business and technology leadership.
Q: How do we scope an EA program when the enterprise is enormous?
Domain-by-domain. Pick the domain with the most active change program, the most visible pain, or the most willing executive sponsor. Build the EA capability there, show value, and expand. A well-scoped domain engagement also produces reusable patterns (metamodel profiles, viewpoint templates, governance processes) that accelerate subsequent domains. Attempting to build an enterprise-wide architecture from scratch, everywhere at once, is a reliable way to produce an expensive, unwieldy model that nobody uses.
Q: What artefacts should an EA practice always maintain?
At minimum: a current-state application inventory (with lifecycle status and business owner), a capability map linked to strategic objectives, a set of architecture principles with rationale, and a log of significant architecture decisions (ADRs). Everything else builds from these four. Organizations that try to maintain dozens of artefact types without the basics tend to produce impressive documentation and poor decisions.
Q: Why Sparx EA over Archi or LeanIX?
Sparx EA is a full modeling environment: it handles UML, ArchiMate, BPMN, SysML, and custom metamodels in a single tool, with a server-based shared repository, scripting, code generation, and document automation. Archi is free and excellent for individual ArchiMate modeling but lacks the repository model, scalability, and extensibility that enterprise teams need. LeanIX is strong for application portfolio management and stakeholder self-service but is not a full modeling tool: it trades depth for accessibility. Sparx EA wins on depth, extensibility, and total cost of ownership at scale. The right answer depends on your use case.
Q: What database should we use for the EA repository?
SQL Server is the most commonly used and best supported, particularly in Microsoft-heavy environments. MySQL and MariaDB are solid choices for organizations on Linux infrastructure or wanting open-source stacks. PostgreSQL is fully supported and increasingly popular. Oracle is supported but adds licensing cost without meaningful benefit for most EA programs. Avoid SQLite (the built-in file-based option) for any multi-user deployment: it is fine for evaluation and individual use, not for shared team repositories.
Q: How many Sparx EA licenses do we need?
Start with the core modeling team: the architects who will actively build and maintain the repository. Casual reviewers and business stakeholders typically do not need full licenses; Pro Cloud Server’s WebEA interface provides read-only access at no additional per-user cost. Kernaro AI Hub extends this further, giving AI-assisted access to EA content without any Sparx licenses. A typical enterprise EA team of 5 to 15 architects needs 5 to 15 licenses; the rest of the organization accesses outputs through other channels.
Q: What is Pro Cloud Server and do we need it?
Pro Cloud Server (PCS) is the server component that sits between the Sparx EA client and the repository database. It provides browser-based access (WebEA), integration APIs, and the connectivity layer for EA GraphLink. For single-user or small team use, you can connect directly to a shared database without PCS. For any team larger than about five people, or for any integration with other tools, PCS is effectively required, and its cost is modest relative to the value it provides.
Q: How do we manage repository structure across a large team?
Package governance is the answer. Define a clear package hierarchy that reflects your EA domains and governance boundaries, assign package ownership (who can create, modify, and baseline each package), and document the conventions in an MDG Technology profile. Without package governance, repositories drift into inconsistency within months as different architects apply different conventions. This is one of the first things Sparx Services addresses in any new client engagement.
Q: Can Sparx EA integrate with Jira, Confluence, or Azure DevOps?
Yes, through several routes. PCS provides integration APIs. Sparx EA’s built-in scripting and automation capabilities can push or pull data. And EA GraphLink’s GraphQL interface can feed data to any tool that can consume an API. The integration approach depends on the direction of data flow (whether you are pushing EA artefacts into delivery tools or pulling delivery artefacts into the EA model) and the volume of data involved.
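As a rough sketch of the GraphQL route, the example below pulls application elements from EA GraphLink and raises a Jira ticket for each one flagged as retiring. The endpoint URL, query shape, and the lifecycle tagged value are illustrative assumptions, not EA GraphLink’s actual schema; the Jira call uses the standard REST issue-creation endpoint.

```python
import os

import requests

GRAPHLINK_URL = "https://graphlink.example.internal/graphql"  # assumed endpoint
JIRA_URL = "https://jira.example.internal/rest/api/2/issue"   # standard Jira REST API

# Illustrative query only; consult your EA GraphLink schema for real field names.
QUERY = """
{
  elements(type: "ArchiMate_ApplicationComponent") {
    name
    taggedValues { name value }
  }
}
"""

def main() -> None:
    resp = requests.post(GRAPHLINK_URL, json={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    for element in resp.json()["data"]["elements"]:
        tags = {tv["name"]: tv["value"] for tv in element["taggedValues"]}
        if tags.get("Lifecycle") == "Retiring":  # assumed tagged value convention
            # Raise a tracking issue in the delivery tool.
            requests.post(
                JIRA_URL,
                auth=("svc-ea", os.environ["JIRA_API_TOKEN"]),
                json={"fields": {
                    "project": {"key": "ARCH"},
                    "summary": f"Plan decommissioning: {element['name']}",
                    "issuetype": {"name": "Task"},
                }},
                timeout=30,
            ).raise_for_status()

if __name__ == "__main__":
    main()
```

A scheduled job along these lines is usually enough to keep delivery tooling aware of architecture decisions without manual re-keying.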
Q: How do we keep the repository from becoming a dumping ground?
Governance, not technology. The technical answer is MDG Technology: validation rules, element type constraints, and required tagged values that enforce quality at the point of entry. The organizational answer is a clear ownership model (every package has an owner responsible for its quality) and regular repository reviews. But the cultural answer is the most important: architects need to believe that quality matters and that the repository is used for real decisions, not just documentation.
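A complementary tactic is periodic auditing. Below is a minimal sketch of a repository quality check using Sparx EA’s COM automation API from Python; the required tagged value name and the repository path are placeholders for whatever your own MDG profile mandates.

```python
# Requires Windows, a local Sparx EA installation, and pywin32 (pip install pywin32).
import win32com.client

REQUIRED_TAG = "BusinessOwner"  # placeholder: whatever your MDG profile mandates

def audit_package(package, findings):
    """Recursively flag elements missing the required tagged value."""
    for element in package.Elements:
        tag_names = {tv.Name for tv in element.TaggedValues}
        if REQUIRED_TAG not in tag_names:
            findings.append(f"{package.Name}/{element.Name} ({element.Type})")
    for child in package.Packages:
        audit_package(child, findings)

repo = win32com.client.Dispatch("EA.Repository")
repo.OpenFile(r"C:\models\enterprise.qea")  # placeholder: file or connection string
findings = []
for model in repo.Models:
    audit_package(model, findings)
print(f"{len(findings)} elements missing the '{REQUIRED_TAG}' tag")
for finding in findings:
    print(" -", finding)
repo.CloseFile()
repo.Exit()
```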
Q: What does a Sparx EA deployment look like at scale?
A mature enterprise deployment typically has a central PCS instance connecting to a primary repository database (SQL Server or PostgreSQL), a baseline strategy for version control, an MDG Technology package distributed to all client installations, a WebEA or Kernaro interface for stakeholder access, and EA GraphLink for BI and AI connectivity. The Sparx Services Discover and Connect offerings are designed to get organizations to this architecture safely and efficiently. Large deployments may also have separate repository instances per domain with federated governance.
Q: How often should the EA repository be updated?
For active elements (applications, capabilities, integrations), quarterly reviews are the minimum; monthly is better. For high-velocity elements in active programs, update continuously, aligned with delivery cadence. The goal is not a perfectly up-to-date model of everything; it is a model that is current enough to inform the decisions that are actually being made. Prioritize currency in the areas of highest decision activity.
Q: Is Sparx EA suitable for regulated industries like financial services or defense?
Yes: it is widely used in both. Sparx EA’s on-premises deployment model means data never leaves your infrastructure. For defense, it supports DoDAF, MODAF, and NAF natively. For financial services, BIAN profiles are available. TOGAF governance patterns are well-supported. The key regulated-industry consideration is data classification: the EA repository may contain information about security architecture, data flows, or system vulnerabilities that requires appropriate access controls. PCS provides the access management layer for this.
Q: What’s the fastest path to AI-readable EA?
Three things in sequence: first, ensure your repository is governed by consistent MDG Technology profiles (so AI tools can interpret element types and relationships reliably); second, deploy Pro Cloud Server if you have not already; third, connect EA GraphLink. With those three in place, tools like Microsoft Copilot, Claude, or Fabric can query your repository via MCP within days. The bottleneck is almost never the technology: it is model quality and MDG governance. A poorly structured repository, even once connected, produces poor AI outputs.
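For a sense of what the connected state looks like from the client side, here is a minimal sketch of an MCP session using the official `mcp` Python SDK. The endpoint URL and the tool name are placeholders; the real tool catalogue comes from your EA GraphLink deployment.

```python
# pip install mcp
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

MCP_URL = "https://graphlink.example.internal/mcp/sse"  # placeholder endpoint

async def main() -> None:
    async with sse_client(MCP_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Discover what the server exposes before calling anything.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])
            # "query_elements" is a hypothetical tool name for illustration.
            result = await session.call_tool(
                "query_elements",
                arguments={"capability": "Customer Onboarding"},
            )
            print(result.content)

asyncio.run(main())
```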
Q: Do we need EA GraphLink, or can AI tools access the repository directly?
You need EA GraphLink, or something equivalent. The Sparx EA repository database is a relational schema not designed for direct external consumption: its table structure is complex and version-sensitive. EA GraphLink provides a stable, semantically rich API layer (GraphQL for BI, MCP for AI) that abstracts the underlying schema and exposes the model in a form that external tools can consume reliably. Direct database access is possible but fragile and unsupported.
Q: Can Microsoft Copilot access our EA repository?
Yes, via EA GraphLink’s Interface B (the MCP Server). Microsoft Copilot supports MCP connections as part of its extensibility framework. Once connected, Copilot can answer questions about your EA content (application landscapes, capability gaps, integration patterns) by querying the live repository. The quality of answers depends on the richness and consistency of the model. This is a Microsoft 365 Copilot capability; it requires appropriate licenses and an EA GraphLink deployment.
Q: What makes MDG governance so important for AI?
AI tools reason over data. If your model has three different ways of representing the same concept, because different architects used different stereotypes or element types for the same thing, the AI sees three different things, not one. MDG Technology enforces consistency at the point of modeling, so that when an AI queries “show me all integration points,” it gets a complete and consistent result rather than a partial or confused one. MDG governance is the quality gate that makes AI-readable EA possible.
Q: What questions can an AI agent actually answer from the EA repository?
Questions with structured answers: “Which applications support the Customer Onboarding capability?”, “What systems will be affected if we decommission the legacy CRM?”, “Which capabilities have no mapped applications?”, “What are the integration dependencies of the payments domain?”. All of these can be answered precisely from a well-structured EA model. Questions that require judgement, such as “Should we migrate to the cloud?”, benefit from EA context but still require an architect’s synthesis. The AI accelerates access to model content; it does not replace architectural reasoning.
Q: How does Retrieval-Augmented Generation (RAG) apply to EA?
When an AI agent answers a question using EA GraphLink, it is effectively doing RAG: retrieving relevant model content (elements, relationships, metadata) at query time and incorporating it into the response context. This is substantially more accurate than asking an LLM to recall EA facts from training data. The implication is that EA model quality matters enormously: garbage in, garbage out applies with full force. Well-governed, MDG-structured models produce accurate AI responses; inconsistent models produce hallucinations.
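Reduced to its skeleton, the loop looks like the sketch below. Both helper functions are hypothetical stand-ins: retrieval would go through EA GraphLink, and generation through whichever LLM API you use.

```python
def retrieve_model_context(question: str) -> str:
    """Hypothetical stand-in: query EA GraphLink for the elements,
    relationships, and metadata relevant to the question."""
    raise NotImplementedError

def generate_answer(prompt: str) -> str:
    """Hypothetical stand-in: call whichever LLM API you use."""
    raise NotImplementedError

def answer_with_rag(question: str) -> str:
    # 1. Retrieve live model content instead of trusting the LLM's training data.
    context = retrieve_model_context(question)
    # 2. Ground the generation in the retrieved context.
    prompt = (
        "Answer using only the EA model content below.\n"
        f"--- MODEL CONTENT ---\n{context}\n--- END ---\n"
        f"Question: {question}"
    )
    return generate_answer(prompt)
```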
Q: Can AI tools update the EA model, or just read it?
Currently, EA GraphLink and MCP connections are primarily read-oriented: AI tools query and reason over the model. Write-back capability is an emerging area. The more productive framing right now is AI-assisted modeling: an architect works in Sparx EA, and tools like Kernaro Assist provide in-context suggestions based on the existing model and governance rules. Full autonomous AI-driven model updates require a level of governance infrastructure (validation, approval workflows, audit trails) that most organizations are still developing.
Q: What is the difference between EA GraphLink Interface A and Interface B?
Interface A is the GraphQL API designed for BI and analytics tools: Power BI, Tableau, and custom dashboards. It exposes model data as structured, queryable data that BI tools can consume and visualize. Interface B is the MCP Server designed for AI tools: Microsoft Copilot, Claude, Agentforce. It exposes model data through the Model Context Protocol, allowing AI agents to query the repository as part of their reasoning process. Both interfaces sit on top of the same underlying EA GraphLink layer and the same repository data.
Q: How does AI change the stakeholder experience of EA?
Dramatically. Traditionally, stakeholders who needed EA information had to request it from an architect, who had to query the model and produce a report or diagram. With EA GraphLink and tools like Kernaro AI Hub, stakeholders can ask questions in natural language and receive accurate, model-grounded answers immediately. This changes EA from a report-production service into a self-service knowledge platform. Architects spend less time generating outputs and more time doing architecture.
Q: Should we wait for AI tooling to mature before connecting the EA repository?
No. The time to connect is now, because the limiting factor is model quality, not tooling maturity. Organizations that spend the next 12 months improving their MDG governance, enriching their model content, and deploying EA GraphLink will be well-positioned to take advantage of AI capabilities as they mature. Organizations that wait will find themselves scrambling to clean up years of model debt when the executive pressure to “do AI with EA” arrives. Start now, even at small scale.
Q: Do we have to use TOGAF?
No. TOGAF is a framework, not a requirement. It is widely adopted because it provides a comprehensive, vendor-neutral starting point, but it is often implemented poorly because organizations try to apply all of it rather than the parts that address their specific situation. If TOGAF’s Architecture Development Method aligns with your program structure, use it. If it does not, take the parts that do and leave the rest. What matters is having a coherent, agreed approach, not which framework label you put on it.
Q: ArchiMate or BPMN: which should we use?
Both, for different purposes. ArchiMate is a cross-layer enterprise architecture language: it is designed to show how strategy, business, applications, and technology relate to each other. BPMN is a process modeling language: it is designed to show how work flows through a process in detail. ArchiMate tells you what exists and why; BPMN tells you how it operates. In Sparx EA, you can use both in the same repository and link BPMN processes to ArchiMate business functions, giving you depth where needed without losing the big picture.
Q: How do we model capabilities versus processes?
Capabilities answer “what can the organization do?”: they are stable, outcome-oriented, and independent of how work is organized. Processes answer “how does the organization do it?”: they are operational, sequential, and organization-specific. Model capabilities in ArchiMate (Capability elements, linked to strategic drivers and application services). Model processes in BPMN or ArchiMate Business Process elements. Link them: a process realizes a capability. This separation lets you maintain a stable capability map even as processes and structures change.
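In Sparx EA terms, that link is an ordinary connector. A minimal sketch via the COM automation API (pywin32); the repository path and element GUIDs are placeholders for your own model, and in a governed repository you would normally create the link through your ArchiMate profile’s toolbox rather than raw automation.

```python
import win32com.client

repo = win32com.client.Dispatch("EA.Repository")
repo.OpenFile(r"C:\models\enterprise.qea")  # placeholder path

# Placeholder GUIDs: look these up in your own repository.
process = repo.GetElementByGuid("{PROCESS-GUID}")
capability = repo.GetElementByGuid("{CAPABILITY-GUID}")

# Create a realization connector from the process to the capability.
connector = process.Connectors.AddNew("", "Realisation")
connector.SupplierID = capability.ElementID
connector.Update()
process.Connectors.Refresh()

repo.CloseFile()
repo.Exit()
```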
Q: How detailed should our EA models be?
As detailed as the decision you are trying to inform requires: no more. Over-modeling is as harmful as under-modeling: over-detailed models are expensive to maintain, hard to read, and tend to become stale. A useful heuristic: model at the level where a wrong assumption would lead to a wrong decision. For strategic planning, high-level capability maps and application portfolios are sufficient. For integration design, you need interface-level detail. For security architecture, you may need component and data flow detail. Let the decision drive the depth.
Q: What is a viewpoint and why does it matter?
A viewpoint defines the rules for constructing a view: which elements to include, which relationships to show, and which stakeholder concern it addresses. Without viewpoints, different architects produce different-looking diagrams that address different concerns, and stakeholders cannot tell which view to trust for which question. In Sparx EA, you can define custom viewpoints as part of your MDG Technology profiles, ensuring that the right elements are available in the right diagram types for the right audiences.
Q: Do we need to use an industry reference model like BIAN or TM Forum ODA?
Not necessarily, but they save significant time if you are in the relevant industry. BIAN provides a service landscape for banking that gives you a head start on capability mapping: rather than building from scratch, you validate and adapt an industry-standard starting point. The same applies to TM Forum ODA for telco. If you are outside these sectors, general frameworks like TOGAF combined with industry-specific metamodel profiles typically serve better than forcing a fit to an irrelevant reference model.
Q: How do we handle multiple modeling languages in the same repository?
Sparx EA handles this natively: UML, ArchiMate, BPMN, SysML, and custom profiles coexist in the same repository. The key is governance: define which language is used for which purpose, at which level of the architecture. A common pattern is ArchiMate for enterprise-level views, UML for application and software design, BPMN for process detail, and SysML for systems engineering where relevant. MDG Technology profiles enforce this by controlling which element types are available in which diagram types.
Q: What is the difference between a model and a diagram?
This is a fundamental distinction that EA architects must internalize. A model is the underlying structured data (elements, properties, relationships, tagged values) stored in the repository. A diagram is a graphical view of some elements from the model, arranged for communication. The same element can appear on many diagrams. Changing an element in the model is reflected everywhere it appears. In contrast, a drawing tool (Visio, draw.io) has no model: diagrams are the data, and they can easily become inconsistent. Sparx EA is a model-first tool.
Q: How do we model cloud architecture in Sparx EA?
Several approaches work. ArchiMate’s technology and physical layers handle cloud infrastructure well: node, system software, artifact, and technology service elements map naturally to cloud constructs. Sparx EA also supports AWS, Azure, and GCP notation through community profiles and commercial add-ins. The key is consistency: agree on which elements represent which cloud constructs, capture it in an MDG Technology profile, and enforce it across the team. Cloud architecture models that are consistent in the repository become queryable by EA GraphLink for governance reporting.
Q: Is it worth modeling “as-is” if the goal is “to-be”?
Yes, always: because you cannot plan a journey without knowing your starting point, and because the current state almost always contains surprises that change the target. The as-is model also provides the baseline for impact assessment: when you propose a change, the as-is shows what will be affected. That said, do not model the current state in more detail than you need to understand the transition. As-is modeling is a means to a planning end, not a documentation exercise.
Q: How do we get executives to actually use EA outputs?
Make the outputs answer questions executives are already asking. If the CFO is asking “which applications can we turn off?”, produce an application portfolio view with cost and duplication data. If the CEO is asking “are we ready to enter the Asian market?”, produce a capability gap analysis against the market entry requirements. The EA artefacts that survive are the ones that sit in front of decisions. Artefacts that answer questions nobody is asking get ignored, regardless of quality.
Q: What is the difference between Kernaro AI Hub and Prolaborate?
Both provide browser-based stakeholder access to Sparx EA repository content without requiring Sparx EA licenses. Prolaborate (from Sparx Systems) focuses on portal-style presentation of model content: dashboards, wiki-style views, and survey tools built on top of the repository. Kernaro AI Hub focuses on AI-assisted access: stakeholders ask questions in natural language and receive model-grounded answers via the MCP-connected AI layer. They serve different engagement patterns: Prolaborate for structured browsing and governance workflows, Kernaro for conversational self-service. Many organizations benefit from both.
Q: How do we present architecture to non-technical stakeholders?
Lead with the business question, not the model. Show capability maps, not component diagrams. Use heat maps, road maps, and portfolio views rather than technical notation. Label everything in business language: if a stakeholder needs to understand ArchiMate notation to read a diagram, the diagram has failed. In Sparx EA, custom diagram types and tagged value reports can produce business-friendly views from the same underlying model that architects use for technical work. The model is shared; the presentation is audience-specific.
Q: How do we manage stakeholder resistance to EA?
Resistance usually comes from one of two sources: architects who feel the EA practice is adding governance overhead without visible benefit to their work, or business stakeholders who feel EA produces documents rather than decisions. The fix for the first is embedding EA in delivery processes: making it helpful, not just compliant. The fix for the second is visibly connecting EA outputs to business decisions: showing that the capability map informed the investment case, that the ADR explained the platform choice. Resistance rarely survives demonstrated usefulness.
Q: What does a good EA communication plan look like?
It maps artefact types to stakeholder groups, with a cadence. Executive leadership gets quarterly portfolio and roadmap views. Program boards get architecture decision summaries and risk flags as needed. Delivery teams get pattern library access, architecture principles, and design reviews. The EA repository is not the communication: it is the source. Communication happens through targeted, audience-appropriate views, presentations, and now AI-assisted self-service. A communication plan makes this intentional rather than ad hoc.
Q: How do we run an Architecture Review for a delivery team without slowing them down?
Make reviews lightweight and early. The most valuable review is a brief conversation at the point where a significant architecture decision is about to be made: before work is committed, not after it is done. A 30-minute checkpoint with an architect who has reviewed the proposed approach against the EA model and principles is far more valuable, and far less disruptive, than a formal board review of completed work. Embed this pattern in your delivery process as a governance touchpoint, not a gate.
Q: How do we use EA artefacts in strategic planning?
Connect the capability map directly to the strategic planning cycle. Before annual planning, produce a capability heat map showing performance, investment, and gap ratings for each capability. During planning, use the application portfolio to identify the systems that support strategic priorities and those that constrain them. After planning, update the architecture roadmap to reflect the approved investments. This pattern (EA informing planning, planning driving the roadmap) is what sustains executive engagement over time.
Q: What is the best format for presenting EA work to a Board?
A single page. Board members need a clear answer to one question: “Are we making the right technology investments to deliver our strategy?” The best Board-level EA output is a capability heat map with a three-to-five item narrative: where we are strong, where we have gaps, what the priority investments are, and what we are retiring. Anything more detailed belongs in the appendix or a subsequent executive briefing. Complexity is an architect’s problem to absorb, not a stakeholder’s problem to navigate.
Q: How do we get project teams to update the EA repository?
Make updating the repository easier than not updating it, and make the benefit visible. This means good tooling (simple, governed templates in Sparx EA), clear ownership (each team knows which packages they are responsible for), and visible use (project teams see their data used in portfolio views and planning). Mandating updates without showing use produces grudging, low-quality updates. Showing that accurate data leads to better decisions produces motivated, high-quality contributions.
Q: How do we measure stakeholder satisfaction with the EA practice?
Three questions: Did they get the information they needed? Did they get it when they needed it? Did it help them make a better decision? These are more useful than maturity scores or artefact counts. Run a brief quarterly survey of your key stakeholder groups. Track which artefacts are accessed, by whom, and via which channel (EA GraphLink telemetry helps here). And track the decisions that EA explicitly informed: a simple log of “EA input contributed to this decision” is a powerful demonstrator of value over time.
Q: How do we set up an Architecture Review Board?
Start small: a chair (typically the Chief Architect or EA Director), two to three senior architects representing business, application, and technology domains, and a rotating seat for delivery program representation. Define the scope: what triggers a review (decision size, system criticality, architectural novelty), what the review produces (approved, conditionally approved, or returned with direction), and what the review does not cover (operational decisions below the threshold). Run reviews on a defined cadence (monthly is typical), with ad hoc sessions for urgent items. Publish decisions in the EA repository.
Q: What does an EA Center of Excellence look like?
At minimum: a Head of Architecture with enterprise scope, one or two domain architects covering business/data and technology, and a repository administrator. At full maturity: a CoE of eight to fifteen people with domain architects embedded in key programs, a standards function, a tools and methods capability, a stakeholder engagement function, and a metrics and reporting function. Most organizations should aim for the minimum viable CoE first and grow as demonstrated value creates appetite for investment. A CoE that is too large before the value is proven becomes easy to cut.
Q: How do we measure EA maturity?
Use a maturity model with practical dimensions rather than abstract levels. Useful dimensions include: governance (is the ARB functioning and respected?), tooling (is the repository current and governed?), business engagement (are executives using EA outputs for decisions?), delivery integration (are delivery teams engaging with EA at the right points?), and AI readiness (is the model structured well enough for AI-assisted access?). Assess each dimension on a one-to-five scale with specific, observable criteria. Rerun the assessment annually. The goal is not to reach level five everywhere; it is to improve the dimensions that matter most for your current context.
Q: How do we govern architecture in a highly distributed organization?
Federated governance: a central CoE sets standards (metamodel profiles, principles, reference patterns, review thresholds) and each domain or business unit applies them with local architects. The central team owns the standards and the repository architecture; domains own their content. Regular cross-domain forums surface conflicts and opportunities for reuse. EA GraphLink enables the central team to maintain visibility across federated repositories without requiring centralized modeling of everything. This pattern scales; fully centralized governance does not.
Q: What is the relationship between EA governance and project governance?
EA governance is a quality layer over project governance: it ensures that project-level technology decisions align with enterprise standards and direction. In practice, this means EA checkpoints at key project stages: during business case development (is the proposed solution aligned with the architecture?), at design approval (does the design conform to standards and patterns?), and at go-live (has the repository been updated to reflect what was built?). Without these integration points, EA governance is advisory at best; with them, it has real teeth.
Q: How do we handle architecture debt: patterns and decisions we know are wrong?
Make it visible first. The EA repository should carry a lifecycle status on every significant element (current, transitioning, retiring, decommissioned) so that the state of the estate is honest, not aspirational. Then prioritize: not all debt needs to be addressed immediately, and trying to address it all at once is disruptive and expensive. An architecture roadmap that sequences debt reduction alongside new capability delivery, with the business case for each item, turns debt management from a complaint into a plan.
Q: What governance mechanisms prevent model drift?
Four mechanisms working together: MDG Technology validation rules (the model cannot be saved with invalid or missing data), regular package reviews (scheduled assessments of model quality by package owners), baseline comparisons (periodic checks of what has changed and whether the changes were sanctioned), and EA GraphLink reporting (dashboards that flag elements with stale or incomplete data). The strongest single mechanism is embedding update obligations into delivery processes, so that the model is updated as part of project closeout, not months later when memories fade.
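Baseline creation itself can be scheduled rather than left to memory. A minimal sketch using the automation API’s Project interface via pywin32; the repository path and package GUID are placeholders.

```python
import datetime

import win32com.client

repo = win32com.client.Dispatch("EA.Repository")
repo.OpenFile(r"C:\models\enterprise.qea")  # placeholder path

project = repo.GetProjectInterface()
package_guid = "{PACKAGE-GUID}"  # placeholder: the governed package to snapshot
version = datetime.date.today().isoformat()

# Create a dated baseline so later comparisons can flag unsanctioned change.
ok = project.CreateBaseline(project.GUIDtoXML(package_guid), version,
                            "Scheduled governance baseline")
print("Baseline created" if ok else "Baseline failed")

repo.CloseFile()
repo.Exit()
```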
Q: How do we handle disagreements in the ARB?
The ARB chair makes the final call after ensuring all perspectives have been heard. Document the disagreement and the reasoning in the Architecture Decision Record: including the dissenting view and why it was not adopted. This matters for two reasons: it creates an honest audit trail, and it provides the foundation for revisiting the decision if circumstances change. An ARB that always reaches easy consensus is not reviewing hard enough; an ARB that frequently deadlocks has a governance structure problem.
Q: What does “good” architecture documentation look like?
Good documentation is current, purposeful, and audience-appropriate. Current means it reflects the actual state of the world, not the planned or aspirational state (unless explicitly labelled as such). Purposeful means every artefact exists to inform a specific decision or serve a specific stakeholder concern. Audience-appropriate means the level of detail and notation matches what the intended reader can and will engage with. A single diagram that is read by twenty decision-makers every month is worth more than a hundred diagrams read by nobody.
Q: How should we think about EA maturity in relation to AI readiness?
AI readiness is a dimension of EA maturity that most traditional maturity models do not address, but it is increasingly important. The key indicators of AI readiness are: consistent MDG governance (element types and relationships are unambiguous), model richness (elements have meaningful properties and tagged values, not just names), relationship completeness (capabilities are linked to applications, applications to technologies, technologies to data), and connectivity infrastructure (PCS and EA GraphLink are deployed). An organization with mature traditional EA governance typically achieves AI readiness quickly; an organization with a populated but ungoverned repository faces more remediation work.
Questions answered by Sparx Services architects based on client engagements across government, financial services, utilities, and complex enterprise programs. For specific questions about your situation, contact the Sparx Services team.
Talk to a Sparx Services architect about where your organization is on the journey and what the next stage looks like.