Here's a pattern I keep seeing in Danish enterprises this year: The Copilot licenses have been rolled out. The AI policy has been published. A handful of people on each team are using AI daily — rewriting documents, summarising meeting notes, generating first drafts of strategy decks in minutes. And the rest of the team? They opened it once, maybe twice. Now they don't touch it.
Leadership sees the license dashboard and thinks adoption is underway. It isn't. What's underway is a quiet fracturing — a split in how teammates work, make decisions, and communicate — that nobody put on the agenda because it doesn't look like a problem yet.
But it is. And it's getting worse.
The Bimodal Adoption Curve Nobody Talks About
When you look at AI adoption data at the team level rather than the organisation level, a striking pattern emerges. Usage doesn't distribute on a bell curve. It clusters at two extremes: daily power users and almost-never users, with remarkably little in between.
McKinsey's research from late 2025 on the state of AI adoption found that while organisations were rapidly deploying generative AI tools, actual usage remained concentrated among a small percentage of employees — even in organisations with broad license rollouts. Gartner's data tells a similar story: tool access and tool adoption are diverging, not converging.
I see this constantly in the Nordic enterprises I work with. A team of eight has two people who use Claude or Copilot multiple times a day. They've built personal prompt libraries. They've figured out which tasks AI handles well and which it doesn't. They've developed an intuition for when to trust the output and when to push back.
The other six? They have the same license. They attended the same introductory webinar. But they don't use it. Not because they're resistant or incapable — but because nobody showed them what their version of useful looks like. The power users learned by tinkering. The non-users are waiting for a reason to start that feels relevant to their specific work.
This isn't a training gap. It's a practice gap. And that distinction matters enormously.
What Drives the Split
Three forces push teams toward this bimodal distribution:
Curiosity thresholds differ. Some people will experiment with a new tool the day they get access. Others need to see a concrete use case that maps to a task they already do. Neither disposition is wrong, but only one of them leads to self-directed adoption.
Early wins compound. The person who figures out that AI can cut their weekly reporting time in half starts using it for everything. Each successful use case generates the next one. Within weeks, they've built a personal workflow that's dramatically different from their colleagues'. The gap widens not because the non-users fell behind, but because the power users accelerated.
There's no visible middle path. Most organisations offer two things: a broad awareness session ("here's what AI can do") and an advanced technical track ("here's how to build agents"). The messy middle — here's how to use AI for the specific things your team does every Tuesday — is left to individuals to figure out. Most don't.
The Silent Team Debt
Here's where it stops being an individual productivity story and becomes a team collaboration problem.
When two people on a team are working at AI speed and six aren't, you don't get a faster team. You get a team with invisible dependencies and undocumented processes. I call this silent team debt, and it accumulates in three ways:
Undocumented prompts and shortcuts. The power user has a prompt that generates client-ready analysis from raw data in seconds. It lives in their personal notes. Nobody else knows it exists. If that person is out sick, on holiday, or leaves the company, the capability walks out the door with them.
Decisions made at AI speed that colleagues can't interrogate. When someone uses AI to rapidly synthesise research and arrives at a recommendation, the reasoning chain is compressed. Colleagues who didn't participate in that process — and who couldn't replicate it — are left to either trust the conclusion or slow everything down by asking for the workings. Most choose trust. That's not collaboration; it's deference.
Asymmetric contribution in shared work. In a team meeting, the person who pre-processed the agenda with AI shows up with polished talking points and pre-analysed data. Their colleagues show up with... their experience and judgment, which are valuable but now feel less impressive by comparison. Over time, this shifts who gets heard, who gets credit, and who feels like they're contributing.
The non-users don't usually articulate this as "I feel left behind by AI." They say things like "I'm not sure how that conclusion was reached" or "things are moving faster than I can follow" or simply go quieter in meetings. The symptom looks like disengagement. The cause is structural.
The Nordic Enterprise Pattern: Access Without Practice
Denmark and the broader Nordics present a particularly interesting case. Digital infrastructure is excellent. Trust in institutions is high. Most knowledge workers in large enterprises now have access to AI tools — Copilot rolled out through Microsoft agreements, Claude or ChatGPT available through enterprise licenses.
And with the EU AI Act's general-purpose AI provisions in force since August 2025, and most of the high-risk obligations landing in August 2026, Danish enterprises have been prompted to formalise AI governance. Policies have been written. Risk frameworks have been published. Acceptable use guidelines have been circulated.
This is necessary work. But it's created a particular illusion: the illusion that governance is adoption. That publishing a policy means people know how to work differently.
What I observe across the Danish enterprises I advise is a pattern I'd describe as high access, low practice: penetration below manager level is thin. Managers and senior leaders have often experimented with AI, partly because their roles involve more synthesis and communication tasks where AI delivers obvious value, and partly because they feel the strategic pressure to "get AI." But the teams they lead? Still largely in the "almost never" cluster.
The AI policies, while important for compliance, don't help here. Telling people what they're allowed to do with AI doesn't tell them what they should do with it, this Tuesday, for the specific deliverable they're working on.
Why Top-Down Training Doesn't Close the Gap
The default organisational response to an adoption gap is training. Run a workshop. Bring in an expert. Show people the features.
I've watched this play out dozens of times. Here's what typically happens:
The training session is well-attended. People nod along. Some take notes. The facilitator shows impressive demos. Attendees leave thinking "that was interesting" and then return to their desks, open their familiar tools, and continue working exactly as before.
Two weeks later, usage data looks the same as before the training.
This isn't because the training was bad. It's because training addresses awareness, while the gap is in practice. These are fundamentally different problems.
Awareness is knowing that AI can summarise a document. Practice is having summarised enough documents with AI that you know when it hallucinates structure that isn't there, when to feed it the document in chunks versus all at once, and when a human summary would actually be better.
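To make the chunking point concrete: below is a minimal sketch of the map-then-reduce pattern that practitioners tend to converge on, where a long document is split into overlapping chunks, each chunk is summarised, and the partial summaries are merged in a final pass. The `call_llm` function is a hypothetical stand-in for whatever model API your organisation has licensed, and the chunk sizes are illustrative, not tuned recommendations.

```python
# A minimal sketch of chunked summarisation, assuming a hypothetical
# call_llm(prompt) -> str wrapper around your organisation's licensed model.
# Chunk sizes and overlap are illustrative, not tuned recommendations.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this to your organisation's model API")

def chunk_text(text: str, chunk_chars: int = 8000, overlap: int = 400) -> list[str]:
    """Split a long document into overlapping character windows so that
    sentences cut at a boundary still appear intact in one chunk."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

def summarise_long_document(text: str) -> str:
    # Short documents fit in a single pass; chunking only adds noise there.
    if len(text) <= 8000:
        return call_llm(f"Summarise the following document:\n\n{text}")
    # Map step: summarise each chunk independently.
    partials = [
        call_llm(f"Summarise this section of a longer document:\n\n{chunk}")
        for chunk in chunk_text(text)
    ]
    # Reduce step: merge the partial summaries into one coherent summary.
    joined = "\n\n".join(partials)
    return call_llm(f"Combine these section summaries into one summary:\n\n{joined}")
```

The point isn't this particular script. It's that knowing when a single pass suffices versus when to chunk is exactly the kind of judgment that only accumulates through repeated practice.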
You can't train practice. You can only create the conditions for practice to happen.
This is one of the reasons I focus on team empowerment rather than one-off training sessions. The shift happens when teams build shared habits together, not when individuals attend a workshop alone.
What Actually Works: Peer-Led Practice
The approaches I've seen close the bimodal gap share a few characteristics. They're peer-led, not expert-led. They're embedded in existing team rhythms, not bolted on as separate events. And they make AI use visible and shared rather than private and individual.
Here's what this looks like concretely:
Shared Prompt Libraries — Owned by the Team
Instead of each person building their own collection of prompts (or not building one at all), the team maintains a shared library. Not a corporate repository managed by IT — a simple shared document or channel where anyone can add a prompt that worked well for a task the team actually does.
The magic isn't in the prompts themselves. It's in the visibility. When a non-user sees a prompt labelled "Summarise client feedback from Q1 survey — works well with messy data," they can see exactly how a colleague is using AI for a task they also do. The barrier to trying it drops from "figure out AI" to "try this specific thing."
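If your team wants something slightly more structured than a shared doc, a handful of fields per entry goes a long way. Here's a minimal sketch in Python; the field names and the example entry (including the owner "Mia") are illustrative assumptions, not a prescribed schema, and a shared doc with the same columns works just as well.

```python
# A minimal sketch of a shared prompt library as structured entries.
# Field names and the example entry are illustrative, not prescriptive.
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    title: str    # what the prompt is for, in the team's own words
    prompt: str   # the actual prompt text, ready to paste
    caveats: str  # where it fails or needs human review
    owner: str    # who to ask about it
    tags: list[str] = field(default_factory=list)

LIBRARY = [
    PromptEntry(
        title="Summarise client feedback from quarterly survey",
        prompt="Summarise the following survey responses into themes, "
               "quoting one representative comment per theme:\n\n{responses}",
        caveats="Works well with messy data; double-check the theme counts.",
        owner="Mia",
        tags=["survey", "summarisation"],
    ),
]

def find(keyword: str) -> list[PromptEntry]:
    """Return entries whose title or tags mention the keyword."""
    kw = keyword.lower()
    return [e for e in LIBRARY
            if kw in e.title.lower() or any(kw in t for t in e.tags)]
```

The caveats field matters most: it's where the power user's hard-won intuition about failure modes becomes visible to everyone else.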
Internal Meetups — Small, Regular, Practical
Fifteen to thirty minutes, every two weeks. Someone shows one thing they used AI for since the last meetup. Not a polished demo — a real example, including what didn't work. Others ask questions. Maybe someone tries it live.
This format works because it does three things simultaneously: it normalises AI use (it's not a secret superpower, it's a shared practice), it creates social motivation to experiment (you want to have something to show next time), and it surfaces the shortcuts and workarounds that power users have built but never documented.
I've been running a version of this through the Applied Futures meetup community here in Copenhagen, and the practitioners who attend consistently report that these peer exchanges shift their teams more than any formal programme.
Team Retros with AI as an Agenda Item
This is the simplest intervention and often the most powerful. In your regular team retrospective or weekly check-in, add one standing question: "Did anyone use AI for something this week that the rest of the team should know about?"
That's it. One question. It takes three minutes. But it does something crucial: it moves AI use from an individual behaviour to a team conversation. It creates a natural moment for the power users to share what they've learned and for the non-users to ask questions without it feeling like a remedial exercise.
Over weeks, this single question builds a shared understanding of where AI is helping, where it isn't, and what the team's collective capability actually looks like.
These approaches work because they address the real barrier to adoption: not awareness, not access, but the absence of shared practice. This is the kind of work I do with teams through AI advisory engagements — not installing tools, but building the team-level habits that make tools actually useful.
A Lightweight Diagnostic: Three Questions for Your Weekly
If you're a team lead wondering whether AI is compounding your team's capability or quietly fragmenting it, here are three questions you can ask in your next weekly meeting. You don't need a survey. You don't need a consultant. You just need honest answers.
1. "Who used AI for a work task this week, and what was it?"
Listen for the distribution. If the same two or three people answer every week and the rest stay silent, you have a bimodal pattern. Don't judge it — just notice it.
2. "Is there anything someone on this team does with AI that the rest of us would benefit from understanding?"
This surfaces the invisible shortcuts. Often, the power users don't even realise their workflows are opaque to others. This question gives them permission to share without it feeling like showing off, and it gives non-users permission to be curious without it feeling like admitting inadequacy.
3. "Did any decision or deliverable this week depend on AI in a way that wasn't visible to the whole team?"
This is the team debt question. If someone used AI to analyse data that informed a recommendation, and the rest of the team didn't know that, you have a transparency gap. Not a problem to punish — a process to make visible.
These three questions, asked regularly, will tell you more about your team's actual AI adoption than any dashboard or survey. And they cost nothing but five minutes of meeting time.
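If you do want to see the pattern in numbers rather than impressions, jotting down the answers to question one over a month is enough. Here's a minimal sketch of that tally; the names and bucket thresholds are illustrative assumptions, not calibrated benchmarks.

```python
# A minimal sketch: tally who answered "yes, I used AI this week" across
# a few weekly meetings and check whether usage clusters at the extremes.
# Names and thresholds are illustrative, not calibrated benchmarks.
from collections import Counter

# Each inner set: who reported AI use in that week's meeting.
weekly_users = [
    {"Anna", "Lars"},
    {"Anna", "Lars", "Sofie"},
    {"Anna", "Lars"},
    {"Anna", "Lars"},
]
team = {"Anna", "Lars", "Sofie", "Jonas", "Mette", "Per", "Ida", "Noah"}

counts = Counter(name for week in weekly_users for name in week)
rates = {name: counts[name] / len(weekly_users) for name in team}

heavy = [n for n, r in rates.items() if r >= 0.75]   # most weeks
middle = [n for n, r in rates.items() if 0.25 <= r < 0.75]
light = [n for n, r in rates.items() if r < 0.25]    # almost never

print(f"heavy:  {sorted(heavy)}")
print(f"middle: {sorted(middle)}")
print(f"light:  {sorted(light)}")
if len(middle) <= 1 and heavy and light:
    print("Bimodal pattern: usage clusters at the extremes.")
```

Two people in the heavy bucket, one in the middle, five almost never: that's the split from the opening of this piece, made visible in four weeks of three-minute check-ins.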
The Window Is Now
We're in a specific moment in 2026. The EU AI Act is driving governance conversations. Enterprises have invested in licenses. Employees have heard the message that AI matters. But between the policy documents and the actual daily work of teams, there's a gap — and it's widening.
The organisations that close this gap won't be the ones with the best AI strategy decks. They'll be the ones where team leads noticed the bimodal split, named it without blame, and created small, regular spaces for shared practice to develop.
This isn't a technology challenge. It's a team dynamics challenge. And the fix isn't more tools or more training. It's making the invisible visible, and making the individual shared.
The question isn't whether your team has access to AI. It almost certainly does. The question is whether your team is learning together — or whether two people are sprinting ahead while six watch from the sideline, increasingly unsure how to join in.
That's not a gap that closes on its own. But it does close — quickly — when someone decides to put it on the agenda.
*If you're seeing the bimodal pattern in your own team and want to move from individual AI use to shared team capability, explore how AI advisory and team empowerment programmes can help — or join us at an upcoming Applied Futures meetup to see peer-led practice in action.*

About the Author
Jacob Langvad Nilsson
Technology & Innovation Lead
Jacob Langvad Nilsson is a Digital Transformation Leader with 15+ years of experience orchestrating complex change initiatives. He helps organizations bridge strategy, technology, and people to drive meaningful digital change. With expertise in AI implementation, strategic foresight, and innovation methodologies, Jacob guides global organizations and government agencies through their transformation journeys. His approach combines futures research with practical execution, helping leaders navigate emerging technologies while building adaptive, human-centered organizations. Currently focused on AI adoption strategies and digital innovation, he transforms today's challenges into tomorrow's competitive advantages.