Three months into Kernaro GA, the patterns are clear. Not because of what's working: that was predictable. But because of what's taking longer than expected, what's working better than expected, and what teams are discovering about their own repositories that they didn't see coming.
We’ve worked with a dozen teams across financial services, insurance, healthcare, and tech. Different industries, different starting points, same recurring observations.
What’s landing fastest: stakeholder self-service
Kernaro Copilot (the AI Hub for non-architects) is the fastest-to-value use case.
Teams deploy Kernaro Copilot and suddenly business stakeholders can ask their own questions: “Show me the capability decomposition for customer service.” “Which applications support order management?” “What’s the technology roadmap for our data platform?”
These questions used to consume an architecture team's time. Now stakeholders get answers in minutes instead of weeks. The ROI is visible immediately, to non-technical people, in language they understand.
No team that deployed Kernaro Copilot first has regretted it. This is the use case that creates organizational momentum.
What’s working well, but with friction: Kernaro Assist
Kernaro Assist (the in-EA client for architects) is productive, but adoption is narrower than expected.
What architects are using it for:
- Drafting element descriptions and documentation
- Checking naming conventions against standards
- Generating quick data object or interface specifications
- Creating bulk descriptions for elements in a landscape import
- Reviewing for consistency across a capability map
What they’re not using it for (yet):
- Making architectural decisions
- Redesigning system landscapes
- Challenging existing model structures
- Fundamentally rethinking how they organize their architecture
This is the pattern we expected. Kernaro Assist is a copilot for tactical work, not strategic work. Teams use it for the safe tasks first. That’s normal.
But here’s what’s interesting: the teams that are getting the most value from Kernaro Assist aren’t the ones with the best tools — they’re the ones with governance structures already in place. If your team has standards (naming conventions, element type rules, relationship constraints), Kernaro Assist amplifies them. If you don’t, it generates a lot of text you still have to review.
The surprise: MDG remediation effort
Every team that deployed Kernaro discovered the same thing: their repository looks fine until it doesn’t.
When you query your model through EA GraphLink, suddenly you see inconsistencies that are invisible in the normal EA authoring experience. An ApplicationComponent that should decompose to smaller components but doesn’t. A Capability with no owner (or five owners). DataObjects that are referenced but never actually defined. Technology elements that are used but never appear in the standards register.
These inconsistencies don’t break Sparx EA. Humans navigate them fine. But when you feed the repository to an AI system, these gaps become obvious.
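The kinds of gaps listed above are mechanical to detect once the model is queryable as data. A minimal sketch of such a consistency check, assuming an illustrative element/relationship JSON shape (not the actual EA GraphLink schema):

```python
# Sketch of repository consistency checks of the kind EA GraphLink surfaces.
# The element and relationship shapes are illustrative assumptions, not the
# real GraphLink response format.

def find_inconsistencies(elements, relationships):
    """Return (element_id, issue) pairs for common model-quality gaps."""
    issues = []
    defined = {e["id"] for e in elements}
    referenced = {r["target"] for r in relationships}

    # Capabilities should have exactly one owner (not zero, not five)
    for e in elements:
        owners = e.get("owners", [])
        if e["type"] == "Capability" and len(owners) != 1:
            issues.append((e["id"], f"expected 1 owner, found {len(owners)}"))

    # Elements referenced in relationships but never defined in the model
    # (e.g. DataObjects that exist only as relationship targets)
    for ref in sorted(referenced - defined):
        issues.append((ref, "referenced but never defined"))

    return issues
```

In practice the element and relationship lists would come from a GraphLink query rather than being built by hand, and further checks (decomposition depth, standards-register membership) slot into the same loop.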
Teams expected 1-2 weeks to remediate. The average is 6-8 weeks to get to a point where Kernaro output is good enough for publication without heavy review. Some teams needed longer.
The conversation shifts: “Our Sparx EA looks fine to us” becomes “Our Sparx EA needs structural remediation before AI can amplify it.” That’s not a technology problem. That’s a data quality problem. And it’s fixable, but it takes time.
One team fixed it by treating MDG remediation as a dedicated workstream alongside Kernaro deployment: three architects, six weeks, focused on model quality. By the time Kernaro Assist was fully available, the repository was ready. That team got to value roughly 80% faster than teams that deployed first and remediated reactively as problems surfaced.
What nobody expected: governance questions
“Who signs off on AI-generated content?” This question has no standard answer yet.
Some teams treat Kernaro-generated descriptions as drafts that require human review before publication. Others treat them as acceptable if they pass semantic checks (do they accurately represent the model?). One team created a tiered approach: Kernaro output for internal-only documentation is auto-approved if it passes validation; Kernaro output for external stakeholder reports requires architect review.
None of these are wrong. But they’re all different. And they all require governance decisions that most teams hadn’t codified before Kernaro came along.
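The tiered approach described above can be sketched as a simple approval gate. The tier names and the validation flag are assumptions for illustration, not part of Kernaro's actual API:

```python
def approval_decision(audience, passes_validation):
    """Decide how AI-generated content gets approved, per the tiered model.

    audience: "internal" or "external" (illustrative tiers).
    passes_validation: outcome of semantic checks against the model.
    """
    if not passes_validation:
        return "rejected"          # fails semantic checks outright
    if audience == "internal":
        return "auto-approved"     # internal-only docs: validation suffices
    return "architect-review"      # external reports always get a human pass
```

The value of writing the policy down this explicitly is less the code than the forcing function: each branch is a governance decision someone had to own.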
The governance framework that works in one organization might not work in another. But the fact that this question needs answering — and it’s non-trivial — is worth knowing upfront.
One pattern we’ve seen work: treat Kernaro-generated content the same way you treat code from GitHub Copilot. It’s useful, it’s usually correct, and it still needs review. The review is faster than authoring from scratch, but it’s not zero-effort.
What’s working better than expected: EA GraphLink adoption
EA GraphLink is infrastructure. It’s not sexy. We expected slow adoption.
Teams are using it faster than expected, and not for the reasons we predicted. We thought they’d use it for Kernaro integration. They’re using it for internal queries, for systems integration mapping, for compliance reporting, and for feeding architecture context into their own internal tools.
“Can you give us a JSON API we can call from our infrastructure-as-code system to validate that we’re using approved technologies?” — that query, answered by EA GraphLink, unlocked a conversation about architecture governance that wasn’t happening before.
“Can we generate a current-state application dependency map in our systems engineering tool?” — yes, via EA GraphLink.
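The infrastructure-as-code check above reduces to a small comparison once the approved-technology register is available as JSON. A sketch, assuming a hypothetical register format rather than the actual EA GraphLink response:

```python
# Sketch of the IaC validation described above: compare a declared technology
# stack against the approved-technology register exported from the EA
# repository. The register's JSON shape is an assumption for illustration.

def unapproved_technologies(declared_stack, approved_register):
    """Return technologies in the IaC stack missing from the register."""
    approved = {t["name"].lower() for t in approved_register}
    return sorted(t for t in declared_stack if t.lower() not in approved)
```

A CI pipeline would fetch the register from the GraphLink endpoint, run this check against the declared stack, and fail the build when the returned list is non-empty.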
EA GraphLink is becoming infrastructure for teams to build their own architecture tools and workflows on top of. That’s a more interesting use case than AI integration alone.
The pattern underneath
All of this points to one observation:
The technical deployment is straightforward. The organizational and repository readiness work is what determines speed of value.
Kernaro installs cleanly. Pro Cloud Server works. EA GraphLink queries work. But the work that matters happens before and after:
Before: assessing your MDG Technology, understanding your repository quality, defining governance frameworks, clarifying use cases, aligning on success metrics, securing sponsorship.
After: adapting workflows to use the new tools, training teams, monitoring quality, evolving governance as you learn what works.
The teams that treated this as “let’s deploy the tool and see what happens” took twice as long to get to value as teams that treated it as “let’s assess where we are, plan the transformation, and then deploy the tools as part of that plan.”
What’s next
We’re three months in. The pattern is clear: Kernaro works. EA GraphLink works. The question now is how to structure adoption so that the technical capability translates into organizational capability change.
That’s not about the tools anymore. That’s about how you integrate them into how your teams actually work.
The teams that are ahead are the ones that answered the seven readiness questions upfront. The ones catching up are the ones that deployed first and assessed second.
If you’re thinking about Kernaro, start with clarity about where you are and what you’re trying to change. That assessment is the foundation for everything else.
The technical work is already proven. The organizational work is where the difference is made.