Every CIO and transformation director we’ve worked with wants to deploy Kernaro and EA GraphLink. That’s the right instinct. But “wants to deploy” and “is ready to deploy” are different conversations.
Before you commit budget to EA AI augmentation, you should be able to answer these seven questions. Not perfectly. But honestly. If you can’t answer three or more of them, an assessment is your first step, not deployment.
1. Is your Sparx EA repository actively maintained and trusted by the team?
Why it matters: AI is multiplicative. A good repository gets better with AI. A neglected repository gets worse — the AI will amplify inconsistencies and stale data. You’re not fixing the data quality problem; you’re automating it.
What a good answer sounds like:
- “We do monthly model reviews; elements get updated when decisions change”
- “Teams check the model before new projects start”
- “We have visibility into what’s current and what’s obsolete”
- “Architects debate whether something belongs in the model, not whether the model is accurate”
If the answer is ‘we don’t know’: Your repository is likely fragmented. Different teams have different standards. Element definitions are inconsistent. Before you deploy Kernaro, you need to know what you’re working with. That’s a Discover-scope assessment.
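One way to turn ‘we don’t know’ into data is to query the repository directly for staleness. Below is a minimal sketch in Python, assuming an EA 16+ .qea repository (SQLite under the hood). The t_object table and its ModifiedDate column are standard Sparx schema, but names, date formats, and backends vary by version, so treat this as a starting point, not a finished audit.

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical repository path; .qea files (EA 16+) are SQLite-backed.
# For MySQL/SQL Server repositories, swap in the matching DB driver.
REPO_PATH = "enterprise_model.qea"
STALE_CUTOFF = (datetime.now() - timedelta(days=365)).strftime("%Y-%m-%d")

conn = sqlite3.connect(REPO_PATH)
rows = conn.execute(
    """
    SELECT Object_Type,
           COUNT(*) AS total,
           SUM(CASE WHEN ModifiedDate < ? THEN 1 ELSE 0 END) AS stale
    FROM t_object
    GROUP BY Object_Type
    ORDER BY stale DESC
    """,
    (STALE_CUTOFF,),
).fetchall()

for obj_type, total, stale in rows:
    print(f"{obj_type}: {stale}/{total} elements untouched in the last year")
conn.close()
```

If the stale count dominates for your core element types, that is the multiplicative risk above made concrete.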
2. Is MDG Technology defined and documented, or implicit and fragmented?
Why it matters: MDG Technology is the contract between your model and AI tools. If your metamodel is implicit — “we understand what a Capability means but it’s never documented” — AI tools will make bad assumptions. If it’s fragmented — different teams define Capability differently — the AI output will be incoherent.
What a good answer sounds like:
- “We have MDG Technology defined; here’s the version and review date”
- “Element types, relationships, and tagged values are documented”
- “We have governance rules (e.g., every Capability must have an owner)”
- “New team members get trained on the metamodel in their first week”
If the answer is ‘our metamodel is in people’s heads’: This is the most common answer. It’s also the blocking issue for AI. You need explicit metamodel definition before Kernaro will produce good output. Discover includes MDG remediation.
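Explicit governance rules have another advantage: they are testable. The sketch below checks the example rule from the list above (every Capability must have an owner) against the same repository. The ‘Capability’ stereotype and the ‘Owner’ tagged value are assumptions standing in for whatever your own MDG Technology defines; t_object and t_objectproperties are standard Sparx tables.

```python
import sqlite3

conn = sqlite3.connect("enterprise_model.qea")  # hypothetical path, as before

# Rule: every Capability must carry a non-empty 'Owner' tagged value.
# Tagged values live in t_objectproperties, keyed by Object_ID.
violations = conn.execute(
    """
    SELECT o.Name
    FROM t_object o
    LEFT JOIN t_objectproperties p
           ON p.Object_ID = o.Object_ID AND p.Property = 'Owner'
    WHERE o.Stereotype = 'Capability'
      AND (p.Value IS NULL OR TRIM(p.Value) = '')
    """
).fetchall()

for (name,) in violations:
    print(f"Capability missing an owner: {name}")
print(f"{len(violations)} violations of the ownership rule")
conn.close()
```

A metamodel you can test this way is one AI tools can rely on. One that lives in people’s heads is not.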
3. Is Pro Cloud Server deployed and current?
Why it matters: EA GraphLink (the connectivity layer that surfaces your data to AI) depends on Pro Cloud Server. If you’re running single-user Sparx EA on individual machines, EA GraphLink won’t work. If Pro Cloud Server is three versions behind, you’re missing security and performance updates that matter.
What a good answer sounds like:
- “Pro Cloud Server is deployed in [environment]; we’re on version X.Y”
- “It’s updated in our normal patch schedule”
- “We have documentation and support for it”
- “Teams access Sparx EA through the cloud server”
If the answer is ‘we’re not sure’ or ‘we haven’t deployed it’: Pro Cloud Server is a hard prerequisite. You need to know your deployment topology. If you’re still running single-user EA across the organization, that’s a migration conversation before you can talk about AI augmentation. Discover assesses your deployment and identifies what needs to change.
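Even a basic reachability check tells you something. In the sketch below, the host and port are placeholders: Pro Cloud Server’s ports and any health or version endpoints depend on your configuration, so consult the Sparx documentation for your deployment before relying on this.

```python
import requests

PCS_URL = "https://pcs.example.com:805"  # placeholder host and port

try:
    resp = requests.get(PCS_URL, timeout=5)
    print(f"Pro Cloud Server responded: HTTP {resp.status_code}")
except requests.exceptions.SSLError:
    # Reachable, but certificate validation failed; often a sign the
    # deployment has drifted out of the normal patch schedule.
    print("Reachable, but TLS validation failed; review certificate upkeep")
except requests.exceptions.ConnectionError:
    print("Not reachable; confirm the deployment topology before "
          "planning any AI augmentation work")
```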
4. What AI tools does your organization use, and which would most benefit from architecture context?
Why it matters: EA GraphLink is a platform, not an application. Its value comes from feeding architecture context into the tools your teams already use. If you don’t know which tools matter, you don’t have a use case yet.
What a good answer sounds like:
- “We use [GenAI platform] for proposal generation; it would benefit from access to our capability definitions”
- “We use [automation platform] for workflow design; it needs to know which systems are available”
- “We’re considering [platform] and we need to understand how it connects to EA”
- “Our technical writers use [content platform]; giving them architecture context would speed up documentation”
If the answer is ‘we just want to use Kernaro’: Kernaro is one endpoint. It’s valuable. But if you’re not connecting EA context to your other tools, you’re not getting full value from the connectivity layer. Think about where architecture context would make the biggest difference in your current workflow.
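Stripped to its essentials, the pattern behind all four answers is the same: pull definitions out of the model and hand them to the generating tool as context. In the sketch below, graphlink_query is a hypothetical stand-in that returns canned data so the example runs; EA GraphLink’s real interface will define its own calls. The shape of the flow is the point.

```python
def graphlink_query(question: str) -> list[dict]:
    """Hypothetical stand-in for an EA GraphLink call.

    Returns canned sample data so the sketch runs end to end; a real
    integration would query your EA GraphLink endpoint instead.
    """
    return [{
        "name": "Order Management",
        "definition": "Capture, validate, and track customer orders",
        "owner": "Commerce Platform team",
    }]

def build_proposal_context(capability: str) -> str:
    """Assemble model-grounded context for a GenAI proposal draft."""
    elements = graphlink_query(f"definition and owner of '{capability}'")
    lines = [f"- {e['name']} (owner: {e['owner']}): {e['definition']}"
             for e in elements]
    return ("Use ONLY the capability definitions below, so the draft "
            "matches the architecture model:\n" + "\n".join(lines))

print(build_proposal_context("Order Management"))
```

The tool on the receiving end changes from bullet to bullet; the architecture context does not.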
5. What are the specific use cases you’re targeting in the first 6 months?
Why it matters: Deployment without use cases is capability-building theater. Use cases force specificity. They help you scope the work. They give you success criteria.
What a good answer sounds like:
- “Stakeholders are spending 20 hours a month manually documenting capability status; Kernaro Copilot will auto-draft those reports”
- “Our architects spend 15% of their time writing element descriptions; Kernaro Assist will handle that”
- “We need to answer ‘what apps support capability X?’ 50 times a year; EA GraphLink queries will answer that in seconds”
- “Our compliance team needs to know which applications are using deprecated technologies; that query is currently a 2-week manual review”
If the answer is ‘we’ll figure it out after deployment’: This is how you spend budget without getting value. Pick 2-3 specific, measurable use cases. Estimate the current effort and cost. That’s your ROI baseline.
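Turning a use case into a baseline is simple arithmetic. Here is a sketch using the first example above; the loaded rate and the expected reduction are assumptions to replace with your own measured figures.

```python
# Use case: 20 hours/month of manual capability-status reporting.
hours_per_month = 20        # measured current effort (from the use case)
loaded_rate = 120           # $/hour, fully loaded (assumed figure)
expected_reduction = 0.70   # Copilot drafts, humans still review (assumed)

annual_baseline = hours_per_month * 12 * loaded_rate
annual_savings = annual_baseline * expected_reduction

print(f"Annual baseline cost:     ${annual_baseline:,.0f}")  # $28,800
print(f"Projected annual savings: ${annual_savings:,.0f}")   # $20,160
```

Repeat this for each of your 2-3 use cases and you have the ROI baseline.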
6. What does success look like, and how will you measure it?
Why it matters: Success looks different to different people. An architect might measure “time saved”; a CIO might measure “capability inventory accuracy”; a business stakeholder might measure “faster decision-making on app investment.” You need to be aligned on what you’re measuring before you deploy.
What a good answer sounds like:
- “We’ll measure architect time on documentation (baseline: 120 hours/month; target: 80 hours/month after 6 months)”
- “We’ll track accuracy of Kernaro-generated descriptions (target: >90% require no edits)”
- “We’ll measure stakeholder self-service query response time (baseline: request to answer = 2 weeks; target: < 2 minutes with EA GraphLink)”
- “We’ll assess compliance coverage (baseline: we know if 60% of apps are aligned with tech standards; target: we know if 100% are aligned)”
If the answer is ‘we’ll just see how it goes’: This is the setup for “we spent money and didn’t get value.” Define your metrics now. They become your deployment criteria.
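Writing the metrics down in a structured form keeps them honest. A minimal sketch mirroring the examples above; the figures come from those bullets, and the structure is the suggestion.

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    name: str
    baseline: str       # measured today, before deployment
    target: str         # agreed goal at the review point
    review_months: int  # when you check it

metrics = [
    SuccessMetric("Architect documentation time", "120 hours/month",
                  "80 hours/month", 6),
    SuccessMetric("Kernaro descriptions needing no edits", "unmeasured",
                  ">90%", 6),
    SuccessMetric("Stakeholder query turnaround", "2 weeks",
                  "< 2 minutes", 6),
    SuccessMetric("Apps with known standards alignment", "60%", "100%", 6),
]

for m in metrics:
    print(f"{m.name}: {m.baseline} -> {m.target} "
          f"(review at {m.review_months} months)")
```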
7. Who owns this initiative internally, and do they have enough authority to unblock it?
Why it matters: This determines whether you’re deploying a tool or changing how architecture gets done. If the owner is a mid-level architect, they don’t have authority to mandate how teams use Kernaro or to require MDG updates. If they’re part-time on this, the initiative won’t be sustained. This is a question about organizational alignment, not technology.
What a good answer sounds like:
- “The CTO owns this; she has authority over architecture standards and team budgets”
- “Reporting to the Chief Architect; they can mandate MDG Technology updates and model governance”
- “Sponsored by the VP of Technology; they can fund both the deployment and the capability-building work”
- “Ownership is shared between Enterprise Architecture and PMO; they coordinate on governance and standards”
If the answer is ‘the EA team will own it’: That’s a red flag. EA teams are usually under-resourced already. This initiative needs cross-organizational sponsorship. If it’s EA-only, it will compete with their day job and lose. If the owner doesn’t have authority over governance and standards, the deployment will hit barriers around MDG updates and model quality.
The assessment path
If you can answer all seven questions clearly, you’re ready for Connect or Amplify. You know your current state, you have specific use cases, and you have organizational buy-in.
If you can answer 5-6, you’re close. You have some visibility gaps. Clarify those before deployment.
If you can answer four or fewer, Discover is your first step. Discover includes:
- Assessment of your Sparx EA repository and deployment
- MDG Technology audit (explicit definition, documentation, governance rules)
- Pro Cloud Server readiness evaluation
- AI tool landscape mapping (what’s in use, what’s planned)
- Use case identification and prioritization
- Success metrics definition
- Organizational readiness assessment
Discover runs 4-8 weeks. It costs $25K-$75K. It answers these questions and produces a roadmap for what comes next.
You could deploy Kernaro without this assessment. You’d install the tool, architects would use it for safe tasks, you’d get incremental value, and you’d wonder why it didn’t transform how you do architecture.
Or you could spend the time upfront, answer these questions honestly, and deploy with clarity about what you’re building toward.
The difference between tool adoption and capability transformation is usually just this: spending the time to understand where you are before you decide where you’re going.