One of the first things Sparx Services does in every Amplify engagement is measure how your architecture team actually spends its time. Not what the job description says. Not what leadership thinks. Not what the team thinks. The actual time allocation across the work that matters.
This is valuable data. It tells you where your productivity problems really are. It tells you what AI augmentation can actually solve. It also makes the conversation with leadership concrete instead of theoretical.
You can run this exercise yourself. It takes a half day, and the insights are worth it.
The four domains
Architecture work falls into four categories. We’ve found this framework applies equally to teams of 2 and teams of 20.
Architecture Modelling is the time spent capturing information into the repository: interviews and workshops where you’re gathering information, creating or editing elements in Sparx EA, diagramming, adding descriptions and metadata, and establishing relationships between elements. The output is content in the repository.
Architecture Analysis is the time spent extracting insights from the model: impact analysis (if I change this application, what breaks?), dependency mapping, compliance reporting, cross-system queries, and extracting data for planning decisions. The output is answers to specific questions.
Architecture Governance is the time spent ensuring the repository is maintained and the team is working to standard: standards checking, review board preparation, completeness validation, MDG conformance, architectural decision logging, and lessons learned capture. The output is compliance and consistency.
Stakeholder Engagement is the time architects spend answering questions and producing briefings for people who aren’t architects: email exchanges about who owns a system, producing current-state diagrams for planning sessions, explaining why architectural decisions were made, and attending meetings primarily to answer questions that the model should answer. The output is alignment and buy-in.
These four categories account for nearly all of your architecture team’s time. (There’s some miscellaneous—admin, training, time that doesn’t fit—but it’s usually under 10%.)
How to structure the exercise
You have two options: retrospective estimate or time diary.
A retrospective estimate asks the team to look back at the previous two weeks and estimate what percentage of their time went into each domain. It’s fast, taking about 30 minutes per person, and the conversation can happen individually or as a group. The trade-off is accuracy: memory is imperfect, and people recall unusual or frustrating work more vividly than routine work.
A time diary asks people to track their time for one week in 30-minute increments and categorize each block into one of the four domains. It’s more accurate, but more burdensome. It takes about 10 minutes per day per person, and requires discipline. Some people find the tracking itself useful. Others find it exhausting.
Both approaches have value. If you’re doing this for the first time, start with a retrospective estimate. It’s lower friction. If you’re building a baseline for before/after comparison with AI augmentation, a time diary is more defensible.
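If you go the diary route, the categorized blocks tally up with a few lines of code. Here is a minimal sketch, assuming one week tracked in 30-minute blocks; the domain labels and the sample week are hypothetical, not output from any Kernaro tool:

```python
from collections import Counter

DOMAINS = ["Modelling", "Analysis", "Governance", "Stakeholder", "Other"]

def diary_to_percentages(blocks):
    """Turn a week of categorized 30-minute blocks into per-domain percentages."""
    counts = Counter(blocks)
    total = len(blocks)
    return {d: round(100 * counts[d] / total, 1) for d in DOMAINS}

# One architect's week: each entry is one 30-minute block (80 blocks = 40 hours).
week = (["Modelling"] * 30 + ["Governance"] * 20 + ["Analysis"] * 15
        + ["Stakeholder"] * 10 + ["Other"] * 5)

print(diary_to_percentages(week))
```

The same function works for any block count, so a partial week or a two-week diary needs no changes.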
Frame the exercise clearly with your team: this is a practice health conversation, not a performance review. You’re not measuring individual productivity. You’re understanding where collective effort goes. The goal is honest numbers, not flattering ones. If someone’s spending 60% of their time answering email instead of doing analysis, that’s data you need. The honesty is more valuable than the impression of efficiency.
How to run the conversation
Here’s what to say, roughly:
“We’re going to map how our architecture team actually spends its time, broken into four categories. This isn’t about individual performance—we’re looking at team patterns. The reason: understanding where we spend effort tells us where AI augmentation can help and what the highest-value improvements are.
Here are the four domains. [Walk through each briefly.] Over the past two weeks, think about where you’ve spent your time. The goal is approximate accuracy, not precision. If you spent about 40% modelling, 20% analysis, 30% governance, 10% stakeholder engagement—that’s useful data.
I’ll go first, to model honesty. [Give your own estimate, and own the parts you’re not proud of.] The conversation is confidential. What you share here doesn’t leave this room and doesn’t affect performance reviews. The goal is understanding our practice, not measuring you.”
Then collect estimates from each team member, individually or in small groups. Listen to the estimates; the conversation itself is valuable. Pay attention to which domains frustrate people, which feel like wasted time, and which feel meaningful.
How to interpret results
Once you have estimates from the team, aggregate them. What’s your team-wide average?
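The aggregation is simple arithmetic. A minimal sketch, where the architect estimates and domain keys are invented for illustration:

```python
def team_average(estimates):
    """Average per-person percentage estimates into a team-wide profile."""
    domains = estimates[0].keys()
    return {d: round(sum(e[d] for e in estimates) / len(estimates), 1)
            for d in domains}

# Hypothetical estimates from three architects (percent of time per domain).
estimates = [
    {"Modelling": 40, "Analysis": 15, "Governance": 30, "Stakeholder": 15},
    {"Modelling": 35, "Analysis": 25, "Governance": 25, "Stakeholder": 15},
    {"Modelling": 45, "Analysis": 10, "Governance": 30, "Stakeholder": 15},
]
print(team_average(estimates))
```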
A typical profile looks like this:
- 35-40% Architecture Modelling (creating and editing repository content)
- 25-30% Architecture Governance (standards, reviews, completeness)
- 15-25% Architecture Analysis (answering questions)
- 5-15% meaningful Stakeholder Engagement (conversations that change minds)
If your profile looks roughly like this, you’re operating normally. You’re doing a lot of rote work and you’re probably frustrated about how little analysis you do.
A healthy profile after AI augmentation is:
- 25-30% Architecture Modelling (AI drafts content, architects review)
- 15-20% Architecture Governance (automated checking, exception handling)
- 30-35% Architecture Analysis (more analysis because the query cost dropped)
- 20-25% meaningful Stakeholder Engagement (fewer “tell me who owns this” requests)
The gap between your current profile and this target is your AI augmentation opportunity. If you’re spending 65% of your time on modelling and governance combined and the target is closer to 45%, that’s a real productivity gain. That’s also what Kernaro Assist and Kernaro AI Hub are designed to recover.
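The gap can be made concrete by subtracting the target from your measured profile. A sketch using the midpoints of the post-augmentation ranges above; the current-profile numbers are a hypothetical team at the heavy end of the typical profile:

```python
# Midpoints of the healthy post-augmentation ranges described above.
TARGET = {"Modelling": 27.5, "Governance": 17.5,
          "Analysis": 32.5, "Stakeholder": 22.5}

def augmentation_gap(current):
    """Percentage points to shift per domain (positive = spend less there)."""
    return {d: round(current[d] - TARGET[d], 1) for d in TARGET}

# A hypothetical team at the heavy end of the typical profile.
current = {"Modelling": 40, "Governance": 30, "Analysis": 20, "Stakeholder": 10}
print(augmentation_gap(current))
```

Positive numbers are effort to recover through automation; negative numbers are where that recovered time should land.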
Some variations are normal. A team heavy on governance might be in a highly regulated environment. A team heavy on stakeholder engagement might support a large, distributed organization. The question to ask is: does this allocation match our priorities? If 50% of effort goes to answering email instead of analysis, and analysis is what drives strategy, you have a problem.
Next steps
Once you have a baseline, repeat the exercise quarterly. Not annually—the changes happen faster than that. Watch what shifts as you implement Kernaro Assist and Kernaro AI Hub. Watch whether automated governance actually returns time. Watch whether stakeholder self-service reduces the engagement domain or just changes its character.
We’ve included a simple template you can download and customize for your team. Print it, work through it together, and keep it. Six months from now, you’ll have data on whether your AI augmentation strategy is actually delivering the improvements you expected.
The teams that succeed with AI augmentation are the ones that measure the baseline first, define the target, and then hold themselves accountable to actually moving the needle. The time allocation exercise is how you start that conversation.
Ready to establish your baseline and plan your AI augmentation roadmap? The Amplify offering walks your team through this exercise as part of the Discover phase.