Most AI adoption advice optimises for the individual power user. But when one person on a squad uses Claude heavily and the rest don't, you don't get a faster team — you get a fractured one.
You know the pattern. You've probably seen it in the last quarter.
There's someone on the team — maybe two or three — who've gone deep on AI. They use Claude or Copilot for everything: drafting specs, generating test scenarios, synthesising research, building slide decks in half the time. Their individual output has genuinely accelerated. They're enthusiastic. They share tips in Slack. They're doing everything right.
And the team is slower than it was six months ago.
Not dramatically slower. Not in ways that show up cleanly in velocity charts or delivery dashboards. But slower in the ways that actually matter: decisions take longer because half the team can't evaluate the AI-generated analysis the other half produced. Sprint planning has become a negotiation between two different operating speeds. Code reviews back up because reviewers don't trust outputs they didn't see get built. And somewhere in the background, two or three people have gone quiet in retros — not because they have nothing to say, but because they've started to feel like they're the problem.
This is the real AI adoption challenge in 2026. Not whether your people will use AI. Not which tools to license. Not even whether leadership is "bought in." The challenge is what happens when adoption is uneven — when a small cluster of enthusiasts races ahead and the rest of the team doesn't, can't, or won't follow at the same pace.
And almost nobody is talking about it honestly.
The Five Power Users Plateau
If you work in or around Danish enterprise teams — particularly in programme delivery, transformation workstreams, or cross-functional squads — there's a pattern so consistent it almost qualifies as a law: AI adoption clusters around five people, then stops.
The number isn't always literally five. Sometimes it's three, sometimes eight. But the dynamic is the same. A small group discovers genuine value in generative AI tools. They integrate those tools into their daily workflows. Their enthusiasm is visible. And then... nothing. The adoption curve flattens. The rest of the organisation watches, nods politely, and continues working exactly as before.
This isn't a Danish peculiarity, but Danish data makes it particularly visible. DI Digital's readiness assessments have consistently shown a bimodal distribution in AI capability across Danish firms — a cluster of advanced practitioners and a much larger group still in exploration or early experimentation. Digitaliseringsstyrelsen's analyses of public and private sector digital maturity tell a similar story: pockets of sophistication surrounded by organisational inertia.
The conventional explanation is that this is a training problem. Get more people skilled up, the thinking goes, and adoption will propagate naturally. But that explanation doesn't survive contact with reality. Most of the non-adopters in these teams aren't unskilled. They're not resistant to technology. Many of them have attended the workshops, watched the demos, even experimented on their own. They've just made a rational calculation — often unconsciously — that the cost of changing their workflow exceeds the visible benefit, especially when the team's processes, review mechanisms, and shared expectations haven't changed to accommodate AI-assisted work.
The power users adopted AI despite the team context. For everyone else, adoption won't make sense until the team context changes. That's not a training gap. That's a systems problem.
Three Failure Modes of Uneven Adoption
When I work with Nordic teams navigating this transition, the friction from uneven AI adoption shows up in three predictable ways. Each one is subtle enough to be misdiagnosed as something else entirely.
1. The Review Bottleneck: Outputs Nobody Can Meaningfully Evaluate
Here's a scenario that's playing out in dozens of Danish delivery teams right now. A senior consultant uses Claude to produce a comprehensive stakeholder analysis — well-structured, thorough, drawing on patterns the consultant guided the model toward through careful prompting. The document is genuinely good. It lands in a shared folder for team review.
And it sits there.
Not because people are lazy. Because the reviewers face an impossible task. They're being asked to evaluate a 15-page analysis that was produced in 90 minutes through a process they didn't witness and can't reconstruct. They don't know which insights came from the consultant's expertise and which were generated by the model. They don't know what prompts shaped the output. They don't know what was discarded along the way. They can check whether the document reads well — and AI-generated text almost always reads well — but they can't assess whether it's right in the ways that matter for the programme.
So they do one of two things: they rubber-stamp it (introducing unexamined risk) or they slow-walk it (introducing delay and frustration). Neither outcome is good. Both are invisible in standard project metrics.
This isn't about AI quality. It's about epistemic asymmetry — a gap in how knowledge was produced that makes normal peer review mechanisms break down. The faster the power user works, the wider this gap becomes.
2. The Velocity Mismatch: When Sprint Planning Becomes Fiction
Agile delivery depends on a shared understanding of what "a sprint's worth of work" looks like. That understanding is calibrated through experience — teams develop an intuitive sense of how long things take based on how long things have taken before.
AI-assisted work breaks this calibration. When two team members can produce deliverables in a third of the time it takes their colleagues, story pointing becomes meaningless. Sprint commitments become either wildly optimistic (benchmarked to the power users' pace) or frustratingly conservative (benchmarked to the team's slowest workflow). Neither reflects reality.
The downstream effects are corrosive. Power users start taking on disproportionate workloads — not because they're asked to, but because the planning process implicitly assumes their pace. Non-users feel pressure to match a velocity they can't achieve without tools they haven't integrated. The scrum master or delivery lead, caught in the middle, starts making invisible accommodations that nobody acknowledges openly.
I've seen this dynamic turn functional squads into resentful ones in a single quarter. The velocity gap isn't just a planning problem — it's a trust problem wearing a process mask.
3. The Psychological Safety Collapse: When Non-Users Feel Implicitly Inadequate
This is the failure mode nobody wants to name, and it's the most damaging.
When a subset of the team visibly accelerates — producing more, producing faster, volunteering for more complex tasks — the implicit message to everyone else is: you're falling behind. It doesn't matter that nobody says this out loud. It doesn't matter that the power users are generous and encouraging. The structural dynamic speaks louder than any individual's good intentions.
Non-users start self-censoring. They contribute less in planning sessions. They avoid tasks where the speed gap would be visible. They stop asking questions about AI because the questions feel elementary and the answers feel like they should be obvious. In the worst cases, they begin to disengage entirely — not from the work, but from the collaborative fabric of the team.
This is particularly acute in Nordic work cultures, where egalitarian norms run deep and visible competence gaps create outsized social discomfort. The Danish concept of fællesskab — the sense of collective belonging and shared capability — is quietly undermined when a team splits into those who "get it" and those who don't.
And here's the bitter irony: the power users often feel this too. They sense the distance growing. They dial back their enthusiasm. They stop sharing what they've learned. The very people who could help the team transition become reluctant to lead, because leading feels like showing off.
Why Top-Down Mandates Make This Worse
The instinctive leadership response to uneven adoption is to mandate it. Roll out a policy. Set usage targets. Make AI proficiency part of performance reviews. Require that all team members complete a certification programme by Q3.
This approach has a near-perfect track record of failure.
Not because mandates are inherently wrong, but because they address the wrong layer of the problem. A mandate says: you must use this tool. It says nothing about: here's how we work together differently now that some of us use it. It optimises for individual compliance while ignoring the team-level dynamics that actually determine whether AI adoption creates value or friction.
Worse, mandates tend to amplify the psychological safety problem. When AI use becomes an obligation rather than a choice, non-adopters don't just feel behind — they feel non-compliant. The anxiety shifts from "I'm slower than my colleague" to "I'm failing to meet an organisational expectation." That's not a motivation boost. That's a recipe for performative adoption — people going through the motions of using AI without actually integrating it into meaningful work.
The EU AI Act's tiered obligations, now actively shaping compliance roadmaps across Danish and Nordic enterprises, add another layer of complexity. Organisations need usage norms and governance frameworks — but if those frameworks are experienced by teams as top-down control rather than shared practice, they'll generate compliance theatre rather than genuine capability.
The alternative isn't no structure. It's structure that emerges from practice rather than being imposed on it.
This is the approach we take in our AI advisory work and in the Nordic meetup and workshop formats we run through Applied Futures. The principle is simple: teams don't adopt AI by being told to. They adopt AI by solving real problems together, with AI, and then reflecting on what worked. Peer-led, practice-first. Start with a shared task, not a shared policy.
In concrete terms, this means workshops where mixed-capability teams tackle actual delivery challenges using AI tools — together, in the same room, with the prompting and iteration process visible to everyone. The power users don't teach. They work alongside. The non-users don't learn from slides. They learn from seeing how a colleague actually uses the tool, including the false starts, the bad outputs, and the judgment calls about what to keep and what to discard.
This approach works because it addresses the real barrier to propagation: not skill, but observability. Most people in non-adopting teams have never seen a competent colleague use AI on a real task in real time. They've seen demos. They've seen outputs. They've never seen the messy, iterative, judgment-heavy process. Making that process visible is what turns individual habit into team norm.
A Lightweight Diagnostic: Four Questions for Your Next Retro
You don't need a consultancy engagement to surface whether uneven AI adoption is creating hidden problems on your team. You need 30 minutes in your next retrospective and four honest questions.
These aren't trick questions. They're not designed to catch anyone out. They're designed to make visible what's currently invisible — the team-level effects of individual AI use patterns.
Question 1: "In the last sprint, did anyone produce a deliverable that others found difficult to review or evaluate — and if so, why?"
You're listening for: mentions of unfamiliar formats, unexpectedly long documents, analysis that "looked right but I couldn't tell how they got there." These are symptoms of the review bottleneck. Don't ask whether AI was involved — let the team surface the friction first.
Question 2: "Are our story points still calibrated? Does a '5' feel like the same amount of work for everyone?"
You're listening for: hedging, awkward silences, someone saying "it depends on who's doing it." If your pointing has silently bifurcated — one scale for AI-assisted work, another for traditional work — your sprint planning is already broken. You just haven't named it yet.
Question 3: "Is there any type of task on this team that you've started avoiding — not because you can't do it, but because someone else does it faster?"
You're listening for: the quiet admission that someone has narrowed their own role. This is the canary in the psychological safety coal mine. When people start self-selecting away from tasks because of a perceived speed gap, the team's bus factor shrinks and its resilience shrinks with it.
Question 4: "If we were going to establish one shared agreement about how we use AI tools on this team, what would be most useful?"
You're listening for: what the team actually needs, which is almost never what leadership assumes. Maybe it's a norm about labelling AI-assisted outputs. Maybe it's a buddy system for prompt review. Maybe it's just permission to say "I don't use AI for this and here's why." The point isn't to implement whatever they suggest immediately — it's to shift the conversation from individual tool use to collective practice.
These four questions won't solve the problem. But they'll tell you whether you have one — and they'll signal to the team that the unevenness is something you're willing to address openly rather than ignore.
From Individual Habit to Team Norm: Minimum Viable Shared Agreements
Once you've surfaced the friction, the temptation is to build a comprehensive AI usage framework. Resist that temptation. What teams need at this stage isn't a framework — it's a small set of shared agreements that make AI-assisted collaboration legible and reviewable.
Based on what we've seen work in Nordic delivery teams through our team empowerment programmes, here are the minimum viable agreements that let a team move together:
Agreement 1: AI-Assisted Tagging
Every deliverable that involved significant AI assistance gets a simple tag — in the document header, the commit message, the Jira ticket, wherever your team tracks work. Not as surveillance. Not as a quality warning. As context for reviewers.
The tag doesn't need to be elaborate. Something like: "AI-assisted: initial draft generated via Claude, then restructured and validated against [source]." The point is to close the epistemic gap — to give reviewers enough process visibility to do their job meaningfully.
This sounds trivial. In practice, it transforms review quality. When a reviewer knows an output was AI-assisted, they read it differently — more critically on factual claims, less critically on prose quality. That's exactly the right calibration.
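What the tag looks like matters far less than the habit of adding it. As an illustrative sketch only (the field names and sources below are placeholders, not a standard), a few lines in a document header, commit message, or ticket description are enough:

```
AI-assisted: yes
Tool: Claude
What the model did: produced the initial draft and the first-pass stakeholder grouping
What the author did: restructured, cut two sections, validated claims against [source]
```

Teams that track work in Jira or Git can put the same information in a custom field or a commit message trailer; the point is that reviewers see it before they start reading.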
Agreement 2: Prompt Transparency for Shared Outputs
When AI-generated work feeds into team decisions or client deliverables, the prompts (or at least a summary of the prompting approach) should be accessible to the team. Not reviewed. Not approved. Just available.
This serves two purposes. First, it makes the power users' process observable — which, as discussed, is the key to propagation. Second, it creates a natural quality check. Bad prompts produce plausible-sounding nonsense. When prompts are visible, the team can collectively develop judgment about what constitutes good AI-assisted practice.
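How much to capture is a judgment call. A full chat transcript is usually too much; a short prompt log linked from the deliverable is often enough. The example below is purely illustrative of the level of detail that tends to be useful:

```
Prompt log – stakeholder analysis, draft 1
1. Asked for a first-pass stakeholder map from anonymised interview notes.
2. Pushed back on grouping regulators with suppliers; regenerated with separate categories.
3. Asked for counter-arguments to the top three recommendations; kept two, discarded one.
Discarded: the model's suggested timeline (the dates were invented).
```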
Agreement 3: Paired AI Work for High-Stakes Outputs
For deliverables that carry significant risk — client-facing analysis, regulatory submissions, architectural decisions — establish a norm of paired work: one person driving the AI interaction, one person observing and challenging in real time.
This is the single most effective practice for closing the adoption gap. The observer learns the process. The driver benefits from immediate challenge. The output is higher quality because it's been stress-tested during production rather than after. And the team builds shared capability without anyone having to attend a training session.
Agreement 4: Explicit Velocity Norms
In sprint planning, acknowledge the velocity difference openly. Establish that AI-assisted estimates and non-assisted estimates are both valid, and that the team plans to the team's capacity, not the fastest individual's. This might mean the power users take on more scope — but it should be an explicit, negotiated decision, not an implicit expectation that accretes over time.
These four agreements aren't a governance framework. They're not comprehensive. They're the minimum set of shared norms that prevent the three failure modes described above: they make review meaningful, they recalibrate planning, and they create the observability that lets non-users build confidence at their own pace.
The Deeper Issue: AI Adoption Is a Team Sport Played Individually
Here's what I keep coming back to in conversations with programme leads and transformation directors across Danish and Nordic enterprises: we've been thinking about AI adoption at the wrong unit of analysis.
Almost all the advice, the tooling, the training, the thought leadership — it's aimed at the individual. How you can be more productive with AI. How you can write better prompts. How you can automate your workflow. The implicit model is that organisational AI adoption is the sum of individual AI adoptions. Get enough individuals using AI, and the organisation will have adopted AI.
This is wrong in the same way that giving every player on a football team a faster pair of boots doesn't make the team play better football. Individual capability is necessary but nowhere near sufficient. What matters is how the capabilities compose — how they interact within the team's actual coordination mechanisms, communication patterns, and trust dynamics.
Gartner's placement of generative AI in the Trough of Disillusionment isn't about the technology failing to deliver value. It's about organisations discovering that individual value doesn't automatically aggregate into collective value. The tools work. The team dynamics around the tools don't.
This is fundamentally a leadership challenge, not a technology challenge. And it requires leaders who are willing to do something uncomfortable: slow down individual adoption to speed up collective adoption. That might mean asking your most enthusiastic AI user to pair with a sceptic rather than racing ahead alone. It might mean establishing review norms that add friction to the power user's workflow in service of team legibility. It might mean having the conversation about psychological safety that nobody wants to have.
None of this is glamorous. None of it makes for exciting LinkedIn posts about 10x productivity gains. But it's the work that determines whether your team's AI adoption creates durable value or quietly tears the team apart.
Where to Start
If you're a team lead reading this and recognising your own squad, start small. Run the four-question diagnostic in your next retro. See what surfaces. You might discover that the unevenness isn't a problem yet — that your team has naturally developed its own accommodation mechanisms. Or you might discover a fault line that's been invisible until someone asked about it directly.
If you're a programme director or transformation lead seeing this pattern across multiple teams, the intervention needs to be at the practice level, not the policy level. Peer-led workshops where mixed-capability teams work on real problems together. Shared agreements that emerge from team retrospectives rather than descending from governance boards. Explicit acknowledgment that AI adoption speed varies and that variation is normal, not a performance issue.
We work with Nordic organisations on exactly this transition — moving from individual AI enthusiasm to team-level AI capability that actually scales. Not through mandates or training programmes, but through the harder, slower work of changing how teams coordinate, review, and build trust around AI-assisted work.
The power users on your team aren't the problem. The isolation of their practice is. Close that gap, and you won't just get better AI adoption — you'll get a better team.
*Jacob Rastad is the founder of Applied Futures, working with Nordic organisations to build team-level AI capability that scales beyond early adopters. If your team is navigating uneven AI adoption, get in touch.*
