Governance

What good EA governance looks like in an AI-augmented practice

By Ryan Schmierer  ·  March 8, 2026

Traditional EA governance was built for a world where architects authored every element, every relationship, and every description in the repository. The governance processes (reviews, completeness checks, metadata validation) assumed a human author and a human reviewer catching errors in real time.

That assumption breaks when Kernaro Assist starts generating model content. When your analysts can self-serve the architecture data they need. When the volume and velocity of repository growth accelerate dramatically. The governance framework needs to change.

We’ve seen this play out across dozens of Amplify engagements. Teams that try to apply traditional governance to an AI-augmented practice either create bottlenecks that kill the productivity gains or skip governance entirely and end up with degraded data quality. The teams that thrive have redesigned their governance approach around three specific shifts.

When Kernaro Assist generates model content

The first change is authoring accountability. In traditional practice, an architect authors a description and it goes to review. In AI-augmented practice, Kernaro Assist proposes a description and an architect approves it. That sounds like a small difference. It’s not.

Approval is a different cognitive task than authorship. When you’re authoring, you own the thinking. When you’re approving, you need to verify correctness without doing the original thinking work. This requires discipline. We’ve watched teams discover that approving AI-generated descriptions rapidly, without genuine review, is tempting and corrosive.

The teams that get this right treat Assist-generated content as draft, not near-final. They have explicit approval criteria: Does this description accurately reflect the system’s purpose? Does it capture the constraints we care about? Is the language aligned with our naming conventions? Does it reference dependencies correctly? These become active checklist items, not passive scanning.
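
As a concrete sketch of what “active checklist” can mean in tooling: an approval record that can’t be committed until every criterion has been explicitly answered. The DraftReview shape and criterion wording below are illustrative, not a Kernaro interface.

```python
from dataclasses import dataclass, field

# Illustrative criteria list; the wording mirrors the questions above.
APPROVAL_CRITERIA = [
    "accurately reflects the system's purpose",
    "captures the constraints we care about",
    "language follows our naming conventions",
    "references dependencies correctly",
]

@dataclass
class DraftReview:
    element_id: str
    reviewer: str
    answers: dict = field(default_factory=dict)  # criterion -> True / False

    def approve(self) -> bool:
        """An approval is only valid once every criterion was explicitly answered."""
        missing = [c for c in APPROVAL_CRITERIA if c not in self.answers]
        if missing:
            raise ValueError(f"unreviewed criteria: {missing}")
        return all(self.answers[c] for c in APPROVAL_CRITERIA)
```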

The second change is relationship validation. Traditional review catches misnamed relationships, missing cardinality, and incorrect stereotype assignment. Assist can generate these errors at volume. Governance needs to shift from “is the description correct?” to “are the relationships structurally sound?” Your review process becomes more technical and less editorial.

This is where MDG Technology becomes a governance tool, not just a metadata specification. If your MDG profiles define strict cardinality rules, stereotype requirements, and composition constraints, then those rules can be validated at point of generation. Kernaro Assist can be configured to reject proposed relationships that violate your MDG profiles before they’re submitted for approval. This puts governance at the entry point, not as a downstream review step. It’s faster and it prevents bad data from ever entering the review queue.
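
As a sketch of point-of-generation validation, assume the profile’s constraints can be mirrored as a lookup from (source type, relation, target type) to a stereotype and a cardinality limit. The rule encoding below is hypothetical; real MDG profiles live in the modeling platform itself:

```python
# Hypothetical, simplified encoding of MDG-style rules; real profiles are
# defined in the modeling platform, not in application code.
PROFILE_RULES = {
    ("Application", "uses", "Database"): {"stereotype": "DataAccess", "max_targets": 5},
    ("BusinessService", "realizedBy", "Application"): {"stereotype": "Realization", "max_targets": None},
}

def validate_proposal(source_type, relation, target_type, stereotype, current_target_count):
    """Reject a proposed relationship before it enters the review queue."""
    rule = PROFILE_RULES.get((source_type, relation, target_type))
    if rule is None:
        return False, "relationship type not permitted by profile"
    if stereotype != rule["stereotype"]:
        return False, f"expected stereotype {rule['stereotype']!r}, got {stereotype!r}"
    limit = rule["max_targets"]
    if limit is not None and current_target_count >= limit:
        return False, f"cardinality limit of {limit} exceeded"
    return True, "ok"
```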

The third change is version discipline. When humans author slowly, version proliferation isn’t a practical problem. When Assist generates content at 10x the volume, version discipline matters. You need explicit decisions about which versions of elements and relationships stay in the active model, which move to historical status, when snapshots are taken, and how long draft content can remain in review before being committed or rejected.

Teams that skip this end up with repositories that look clean at first glance but have 47 draft versions of a critical application and no clear record of why some were rejected. Effective governance means saying: “Draft content expires in 2 weeks. Either it’s approved and committed, or it’s rejected and archived.” This creates velocity without chaos.
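
The expiry rule is simple enough to automate. A minimal sketch, assuming each draft version carries a creation timestamp:

```python
from datetime import datetime, timedelta, timezone

DRAFT_TTL = timedelta(weeks=2)  # the "draft content expires in 2 weeks" rule

def sweep_drafts(drafts, now=None):
    """Split drafts into still-in-review and expired-for-archival.

    `drafts` is an iterable of (element_id, created_at) pairs; in a real
    repository these would come from the model's version metadata.
    """
    now = now or datetime.now(timezone.utc)
    in_review, expired = [], []
    for element_id, created_at in drafts:
        (expired if now - created_at > DRAFT_TTL else in_review).append(element_id)
    return in_review, expired
```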

When stakeholders can self-serve architecture data

The second governance shift happens when stakeholders move from “I’ll email the architect” to “I’ll ask Kernaro AI Hub.” The architecture data is now a product that non-architects can query directly.

This changes what you govern. You’re no longer primarily governing model authoring. You’re governing access, accuracy, and timeliness.

Query logging gives you an audit trail of what stakeholders are asking. This is remarkably valuable governance information. If you see 40 queries about which systems support the customer onboarding process, you learn that this question matters. You learn that the model might not be representing this clearly. You get data to prioritize your modeling work. Some teams use query patterns to trigger completeness reviews: if a question is asked frequently but answered inconsistently, the model needs attention.
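
One way to operationalize that trigger, assuming queries can be bucketed into topics and answers reduced to comparable hashes (both nontrivial in practice):

```python
from collections import defaultdict

def completeness_triggers(query_log, min_count=20, max_distinct_answers=2):
    """Flag topics that are asked often but answered inconsistently.

    `query_log` is an iterable of (topic, answer_hash) pairs; how topics
    are derived from free-text queries (clustering, tagging) is left open.
    """
    counts = defaultdict(int)
    answers = defaultdict(set)
    for topic, answer_hash in query_log:
        counts[topic] += 1
        answers[topic].add(answer_hash)
    return [
        topic for topic in counts
        if counts[topic] >= min_count and len(answers[topic]) > max_distinct_answers
    ]
```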

Access control becomes more granular. You can restrict certain elements to certain roles: finance stakeholders can see financial systems and data flows, but not security architecture details. Product managers can see application capabilities without seeing the underlying technical decisions. This requires thinking through data sensitivity in ways traditional EA didn’t require. It’s worth doing.
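
A sketch of the idea, with a hypothetical role-to-category mapping standing in for the platform’s actual access-control configuration:

```python
# Illustrative role-to-category mapping; the real mechanism belongs in the
# platform's access-control configuration, not in application code.
VISIBILITY = {
    "finance": {"FinancialSystem", "DataFlow"},
    "product": {"Application", "Capability"},
    "architect": {"*"},  # unrestricted
}

def visible_elements(role, elements):
    """Filter model elements down to what a given role may see."""
    allowed = VISIBILITY.get(role, set())
    if "*" in allowed:
        return list(elements)
    return [e for e in elements if e["category"] in allowed]
```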

Answer accuracy monitoring becomes a governance responsibility. Kernaro AI Hub returns answers grounded in your live model, but the AI interpretation layer can still produce answers that are technically true but meaningfully wrong. A human architect needs to do periodic spot-checks: is the AI accurately representing our architecture? When you find an inaccuracy, is it a model problem or a reasoning problem? This is different work from traditional governance, but it’s essential.
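
Spot-checks work best when the sample is deliberate rather than ad hoc. A minimal sketch, assuming answered queries are logged per topic:

```python
import random

def spot_check_sample(answer_log, per_topic=2, seed=None):
    """Draw a small, reproducible sample of logged answers for human review.

    `answer_log` maps topic -> list of (query, answer) pairs. Sampling per
    topic avoids only ever checking the most popular questions.
    """
    rng = random.Random(seed)
    sample = []
    for topic, pairs in answer_log.items():
        for query, answer in rng.sample(pairs, min(per_topic, len(pairs))):
            sample.append((topic, query, answer))
    return sample
```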

Data freshness governance means ensuring the model is updated before major decisions are made against it. This sounds obvious, but it’s harder to enforce when stakeholders are querying in real time. You need a simple rule: model versioning is tied to decision gates, not calendar cycles. Before a major investment decision, the model is reviewed for completeness and accuracy; the same before budget cycles and before project charters are approved. The governance rhythm follows the business rhythm, not the EA rhythm.

New governance capabilities the platform enables

The good news: the third shift runs in your favor. AI-augmented practice enables new governance capabilities that traditional governance simply couldn’t achieve.

Kernaro Assist’s Broadcast Event agents can be configured to automatically validate completeness. Define a rule: “Every application must have at least one owner, at least one business process that uses it, and documented technology standards.” The agent runs on a schedule, identifies exceptions, and flags them for review. You go from annual completeness audits to continuous validation. Senior architects review only the exceptions, not the entire repository.
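
That rule translates almost directly into code. A sketch, with hypothetical field names standing in for your repository’s actual metadata:

```python
# Hypothetical field names; substitute whatever your repository metadata uses.
REQUIRED = {
    "at least one owner": lambda app: bool(app.get("owners")),
    "at least one consuming business process": lambda app: bool(app.get("used_by_processes")),
    "documented technology standards": lambda app: bool(app.get("tech_standards")),
}

def completeness_exceptions(applications):
    """Return only the exceptions, so reviewers never scan the whole repository."""
    exceptions = []
    for app in applications:
        missing = [label for label, check in REQUIRED.items() if not check(app)]
        if missing:
            exceptions.append((app["name"], missing))
    return exceptions
```

Run on a schedule, its output is exactly the exception list your senior architects review.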

MDG profile enforcement at point of entry means non-conformance is caught immediately, not at review. If your MDG says “all business services must have a ‘Service Level Agreement’ artifact attached,” Kernaro Assist can be configured to require this before accepting a service definition. The governance rule is enforced by the tool, not by reviewer discipline.

Automated pre-review quality gates run simple checks before content reaches human reviewers. Are descriptions complete (above a minimum character count)? Are required fields populated? Do proposed relationships exist on both ends? Are naming conventions followed? These rote checks can be automated. Reviewers see only content that passes basic quality gates. This concentrates senior architect time on genuinely ambiguous decisions, not on catching missing fields.
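
A sketch of such a gate, assuming elements arrive as plain records; the naming check is a crude stand-in for whatever convention your team actually enforces:

```python
MIN_DESCRIPTION_CHARS = 120  # illustrative threshold

def quality_gate(element, known_ids):
    """Run the rote checks; only passing content reaches a human reviewer."""
    failures = []
    if len(element.get("description", "")) < MIN_DESCRIPTION_CHARS:
        failures.append("description below minimum length")
    for field_name in ("name", "owner", "type"):
        if not element.get(field_name):
            failures.append(f"required field missing: {field_name}")
    for rel in element.get("relationships", []):
        if rel["target_id"] not in known_ids:
            failures.append(f"relationship target not in repository: {rel['target_id']}")
    if element.get("name") and not element["name"][0].isupper():
        failures.append("naming convention violation")  # crude stand-in check
    return failures  # an empty list means the content passes to review
```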

The net result is that governance becomes faster and more rigorous at the same time. The rote checking is automated and happens continuously. Senior architect time concentrates on the genuinely difficult questions: Does this architecture decision align with our strategy? Does this design pattern match our standards? Is this system truly critical to our business capability? These are the conversations that matter.

Practical implementation

Start by mapping your current governance processes: what do you review? When? By whom? How long does it take? Then ask, for each step: which parts are rote checking that could be automated? Which parts require genuine judgment?

The rote parts become automation candidates. The judgment parts become your review criteria. You’ll find that traditional governance is 40-50% rote work. That’s the time you gain. That’s also where you’ve been losing accuracy, because reviewers are tired of the rote work and miss the important stuff.

Define your governance rules explicitly. Not “architectures should be good”; that’s unmeasurable. “Applications in the core category must have documented disaster recovery plans, must be owned by a named architect, and must be reviewed at least quarterly.” Explicit rules can be automated.
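
That example rule, for instance, reduces to a few field checks. A sketch with hypothetical field names:

```python
from datetime import datetime, timedelta, timezone

def core_app_violations(app, now=None):
    """Check the explicit core-application rule quoted above for one record."""
    now = now or datetime.now(timezone.utc)
    problems = []
    if not app.get("dr_plan"):
        problems.append("no documented disaster recovery plan")
    if not app.get("owning_architect"):
        problems.append("no named owning architect")
    last_review = app.get("last_review")
    if last_review is None or now - last_review > timedelta(days=92):
        problems.append("no review within the last quarter")
    return problems
```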

Set up a quarterly governance review. Not to review the entire repository; that’s the old way of burning time. Review what changed, what queries are being asked, and what exceptions the automated checks are flagging. This is where you learn whether your governance is working or whether you need to adjust.

The teams we see succeeding in AI-augmented EA practice are the ones willing to rethink governance, not just accelerate it. They’re also the ones that see governance improvements as a primary benefit, not a necessary cost. Better data quality, faster decision cycles, clearer accountability. That’s what good governance in an AI-augmented practice delivers.

Ready to rethink your governance for AI augmentation? The Amplify offering is designed to guide this transformation.

