Governance

Building a governance framework for AI-generated EA content

By Ryan Schmierer  ·  February 18, 2026

A few months ago, a team we work with used Kernaro Assist to generate initial model content from a design document. The AI created forty-three elements in an afternoon. Three months later, they were still untangling what was usable, what needed context added, and what had been genuinely wrong.

That’s not a failure of the AI. It’s a failure of governance.

When you introduce Kernaro Assist into an EA practice, you’re not just adding a tool. You’re introducing a new content source with different reliability profiles across different element types. Element descriptions might be ninety percent solid. Relationship logic needs architect review before it’s trusted. Naming conventions can actually be automated away. But if you don’t know which is which, you end up with a backlog of micro-decisions instead of a workflow.

Here’s what we’ve learned about governing AI-generated content at scale.

What You Can Trust (and What You Can’t)

Start by being precise about risk. Not all generated content carries the same weight.

Descriptions and documentation are the safety zone. If Kernaro Assist generates a description for a “transaction processing API,” you’re starting with something usable. The AI extracts language patterns from existing documentation, applies domain knowledge from your MDG, and produces readable prose. Is it always perfect? No. But it’s rarely wrong enough to create architecture risk. A quick review (does this match what we know about this component?) and you’re done. This is where AI saves real time.

Relationships and dependencies are where you stop. If Kernaro Assist suggests that Service A calls Service B, that’s not something you verify at a glance. Relationship logic embeds assumptions about data flow, coupling, and system behavior. An AI seeing “A” and “B” mentioned in the same sentence doesn’t know whether A depends on B or whether they’re mentioned as independent alternatives. You need an architect who understands the domain to verify these. This isn’t a governance failure; it’s a correct use of human judgment.

Element names and classifications can actually be delegated further. This sounds counterintuitive, but if you’ve done the work to formalize your naming conventions in your MDG (which you should), Kernaro Assist can learn those patterns. A rule like “all APIs ending with ‘Service’ are integration points” or “all databases named ‘DW_*’ are data warehouse tables” can be encoded. The AI applies the pattern. You don’t review every name; you review the pattern once.
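The “review the pattern once” idea can be sketched as a small rule table. The patterns and classification names below are illustrative assumptions, not rules from any real MDG:

```python
import re

# Hypothetical naming-convention rules, encoded once and then applied to
# every generated element. Patterns and labels are illustrative only.
NAMING_RULES = [
    (re.compile(r".*Service$"), "Integration Point"),
    (re.compile(r"^DW_.*"), "Data Warehouse Table"),
]

def classify(element_name: str):
    """Return the classification implied by the name, or None if no rule matches."""
    for pattern, classification in NAMING_RULES:
        if pattern.match(element_name):
            return classification
    return None
```

Once the table exists, reviewing a new convention means reviewing one regex, not hundreds of element names.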

Integrating AI-Generated Content Into Existing Workflows

The mistake most teams make is treating AI-generated content as a separate stream. They have the AI produce artifacts, then manually fold them into the model. That’s extra work. Instead, make AI generation part of your standard workflow.

Here’s what that looks like: when Kernaro Assist generates content, whether from design documents, requirements, or system diagrams, it doesn’t dump raw output into your model. Instead, it creates staged content. Think of it as a quarantine zone with clear acceptance criteria.

Each staged element carries metadata: what document it came from, what confidence the AI assigned, what assumptions it made. Your architects review that metadata alongside the content. They’re not reviewing from scratch; they’re fact-checking against context. That’s faster and more reliable.
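As a sketch, a staged element’s metadata might look like the record below. The field names are assumptions for illustration, not Kernaro Assist’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class StagedElement:
    # Hypothetical shape of a staged element awaiting review.
    name: str
    element_type: str
    source_document: str              # where the AI extracted this from
    confidence: float                 # AI-assigned confidence, 0.0 to 1.0
    assumptions: list = field(default_factory=list)  # assumptions the AI made
    status: str = "staged"            # staged -> approved or flagged
```

Reviewers read the `source_document`, `confidence`, and `assumptions` fields alongside the content itself, which is what turns a from-scratch review into a fact-check.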

Then comes the second part: automated validation before human review. This is where your MDG governance becomes active, not just documentary. Rules like “no element without a responsible party,” “no relationship without a defined data flow,” or “no duplicate element names” can be baked into your governance layer. Kernaro Assist’s Broadcast Event agents can run these checks automatically on staged content. Elements that pass all rules move to an approval queue. Elements that fail get flagged for architect review with the specific rule violation highlighted.
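A minimal sketch of that rule layer, using the rules named above. The element shape (a plain dictionary) and the function names are assumptions, not Kernaro Assist’s API:

```python
def check_responsible_party(element: dict):
    """Rule: no element without a responsible party."""
    if not element.get("responsible_party"):
        return "no element without a responsible party"
    return None

def check_duplicate_names(element: dict, existing_names: set):
    """Rule: no duplicate element names."""
    if element["name"] in existing_names:
        return "no duplicate element names"
    return None

def validate(element: dict, existing_names: set) -> list:
    """Collect all rule violations.
    An empty list means the element moves to the approval queue;
    a non-empty list means it gets flagged with the specific violations."""
    results = (check_responsible_party(element),
               check_duplicate_names(element, existing_names))
    return [msg for msg in results if msg]
```

Elements that return an empty list go straight to the approval queue; everything else is flagged for an architect with the exact rule that failed.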

You’re not adding governance overhead. You’re using technology to make governance faster.

Extending MDG Governance to AI-Generated Elements

Here’s where most teams haven’t caught up: your MDG (the metamodel definition system that governs element types, relationships, and attributes in Sparx EA) is still being used to govern only human-created content. When you add AI generation, extend that same MDG to cover the AI.

This means:

Formalize what the AI is expected to generate. If you want Kernaro Assist to create API elements, be precise: An API element includes name, description, version, responsible team, and a list of exposed endpoints. Those last three are non-negotiable. The AI learns to produce them. Your governance checks whether they’re present.
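One way to make that precision machine-checkable is to encode the expected attributes as data. This is a sketch under the assumption that staged elements arrive as plain dictionaries; the spec format is hypothetical:

```python
# Hypothetical element-type spec: what the AI is expected to generate
# for an API element, per the MDG.
API_ELEMENT_SPEC = {
    "required": ["name", "description", "version",
                 "responsible_team", "endpoints"],
}

def missing_attributes(element: dict, spec: dict) -> list:
    """Return the required attributes the AI failed to produce."""
    return [attr for attr in spec["required"] if not element.get(attr)]
```

The AI learns to fill the spec; governance just checks the returned list is empty.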

Define the review path based on element type. Descriptions go to a junior architect for quick review. Relationships go to a senior architect who knows the domain. Naming is auto-validated. You’re matching review effort to actual risk, not treating all AI output identically.
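The routing itself is just a lookup table keyed by content type. The reviewer roles come from the text above; the dispatch mechanism is an illustrative assumption:

```python
# Matching review effort to actual risk, per content type.
REVIEW_PATHS = {
    "description": "junior_architect",
    "relationship": "senior_architect",
    "naming": "auto_validation",
}

def route_for_review(content_type: str) -> str:
    # Unknown content types default to the most cautious path.
    return REVIEW_PATHS.get(content_type, "senior_architect")
```

The useful property is that the table is visible and arguable: changing who reviews what is a one-line governance decision, not a process rewrite.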

Make the MDG the source of truth for AI training. When you onboard Kernaro Assist to your EA practice, you feed it your MDG. The AI learns your organization’s metamodel, your naming conventions, your attribute requirements. The better your MDG, the better the AI performs. This is the opposite of most software: your governance discipline directly improves AI quality.

Making This Stick in Practice

We’ve seen two patterns that work. The first is role-based review cycles. You designate one architect as the “AI content shepherd” for a fixed period, say a two-week sprint. Their job is to review staged content, spot patterns in what needs fixing, and flag those patterns back to the governance team. Did the AI consistently misunderstand relationships in one domain? That’s a signal to either retrain the AI on better examples or tighten the MDG in that area. This person becomes the clearinghouse for improvement.

The second is measuring what you accept. Once you’ve reviewed and accepted staged content, track what passed review and what didn’t. If eighty-five percent of generated descriptions pass review unmodified, that’s a data point: descriptions are low-effort wins. If only thirty percent of generated relationships pass review, relationships need human-first capture, and Kernaro Assist should assist with validation, not generation. This feedback loop tightens your governance naturally.
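Measuring acceptance is simple enough to sketch directly. Assuming a review log of (content type, passed-unmodified) pairs, the per-type pass rate falls out in a few lines:

```python
from collections import Counter

def acceptance_rates(review_log: list) -> dict:
    """review_log: (content_type, passed_unmodified) pairs.
    Returns the fraction of each content type accepted without modification."""
    totals, passed = Counter(), Counter()
    for content_type, ok in review_log:
        totals[content_type] += 1
        if ok:
            passed[content_type] += 1
    return {t: passed[t] / totals[t] for t in totals}
```

A high rate (like the eighty-five percent for descriptions) says “generate freely, review lightly”; a low rate (like thirty percent for relationships) says “capture by hand, validate with AI.”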

The Mature State

Mature teams use Kernaro Assist not to replace judgment but to free it up. Junior architects spend less time transcribing design documents into the model and more time asking whether those designs are sound. Senior architects spend less time ticking governance checkboxes and more time on exception handling: the anomalies the AI couldn’t understand. The model stays cleaner because AI-generated content doesn’t sit in the model unvetted; it’s reviewed and validated before it’s trusted.

Governance is often seen as friction. With AI-generated content, it’s the opposite. Governance is the mechanism that makes AI safe to use at scale.

The teams doing this well have stopped asking “Can we trust the AI?” and started asking “What level of review makes each type of content trustworthy?” That shift from binary skepticism to calibrated confidence is where governance becomes practical.

Your AI augmentation will only be as good as your ability to govern it. Start there.


Ready to govern AI-generated EA content at scale? Explore the Amplify offering: our service for teams operationalizing AI augmentation in enterprise architecture practice.


Ready to make your EA investment work harder?

Talk to a Sparx Services architect about where your organization is on the journey and what the next stage looks like.