Individual AI productivity rises reliably with tool access. Team-level output doesn't follow. The gap isn't a tooling problem or a governance problem — it's a ritual design problem.
Here's a pattern I keep seeing across Danish and Nordic organisations in 2026: the CFO approved the Copilot licences. HR ran the onboarding webinars. A few enthusiastic individuals are producing genuinely impressive work with AI tools. And yet, when leadership looks at team-level output — the dashboards, the velocity metrics, the quarterly reviews — the numbers are stubbornly flat.
The individual gains are real. The team gains aren't materialising.
This isn't a mystery, but it is a problem that most organisations are misdiagnosing. They assume the fix is better tooling, tighter governance, or another round of training. It's none of those things. It's a ritual design problem — and it has a surprisingly concrete solution.
The Productivity Paradox at the Team Level
Let's start with what the data actually shows. McKinsey's 2025 State of AI report documented a pattern that should concern every leader who signed off on enterprise AI licences: individual productivity gains from AI tools are real and measurable in the first 90 days of adoption, but they plateau — and in many cases, begin to diminish — after roughly three months of solo use. The individual hits a ceiling. Not because the tools get worse, but because the individual runs out of new patterns to discover on their own.
DI Digital's Q1 2026 survey of Danish organisations reinforces this at the team level: organisations report high individual tool usage alongside flat team productivity. The licences distributed easily. The capability didn't.
This is the paradox. AI tools are among the easiest enterprise technologies to distribute — a licence, an SSO integration, a 30-minute tutorial. But AI capability — the ability to use these tools in ways that actually change how work gets done — is among the hardest things to distribute. Because the highest-value patterns aren't in the tool. They're in the heads of the people who've figured out how to use the tool well.
Think about what actually makes someone effective with AI in a work context. It's not that they know how to open Claude or Copilot. It's that they've developed:
Prompt strategies that reliably produce useful outputs for their specific domain
Workflow integrations where AI is embedded at the right point in a process, not bolted on as an afterthought
Output review heuristics — the instinct for when to trust AI output, when to verify, and when to discard
Every one of these is tacit knowledge. Every one of them was developed through experimentation, failure, and iteration. And in the vast majority of organisations, every one of them stays locked in the individual's head.
The Tacit Knowledge Trap
Here's the uncomfortable truth: your best AI users are not making your team better. They're making themselves better. And the organisation has given them no reason, no mechanism, and no moment to do anything differently.
This isn't selfishness. It's the absence of infrastructure. Most organisations have extensive infrastructure for sharing explicit knowledge — documentation systems, wikis, standard operating procedures. But they have almost nothing for sharing tacit knowledge about how to work with AI. No shared prompt libraries. No peer observation. No regular moments where someone says, "Here's a weird thing I tried this week, and here's what happened."
The result is predictable: a power-law distribution of AI capability within every team. One or two people are genuinely transforming their work. The rest are using AI as a slightly faster search engine, if they're using it at all. And the gap widens every week, because the people who are good at AI are getting better through practice, while the people who aren't have no way to learn from those who are.
This is the compounding problem. Individual AI gains don't compound at the team level because there's no mechanism for transmission. The knowledge stays where it was generated — in one person's workflow, in one person's prompt history, in one person's head.
What Compounding Actually Looks Like
I want to make this concrete with a contrast I've observed across two comparable teams.
A Danish engineering team of about 15 people — let's call them Team A — has the same AI tool access as a comparable team in the same organisation (Team B). Same licences, same security policies, same initial training. Both teams are 14 months into their AI tool deployment.
Team A runs a weekly AI retrospective. It's 20 minutes, appended to their existing Friday wrap-up. The format is simple: one or two people share an AI use case from the week — what they tried, what worked, what didn't. The team maintains a shared Notion database of prompts and workflow patterns that anyone can contribute to or pull from. And once a month, they do a "shadow session" where someone watches a colleague use AI in their actual workflow for 15 minutes, then they debrief.
Team B has none of this. They had the same onboarding. They have the same tools. Individual team members use AI regularly. But there's no sharing practice, no shared library, no structured moment for transmission.
The differences after 14 months are striking:
Team A reports that AI-assisted tasks have expanded from initial use cases (drafting, summarisation) into workflow areas that weren't part of any training programme — test case generation, specification review, client communication templating. The prompt library has over 200 entries. Team members report that they routinely use prompts or patterns they learned from colleagues, not from training. Morale surveys show the team rates their "confidence in using AI effectively" significantly higher than the organisational average. And critically, the distribution of AI capability across the team is relatively flat — there isn't a massive gap between the most and least capable users.
Team B shows the classic pattern: two or three power users, a middle group doing basic tasks, and several people who've largely stopped using AI tools after the initial novelty wore off. The power users are frustrated because they feel like they're the only ones "getting it." The rest of the team is frustrated because they see the power users producing impressive work but have no visibility into how. Team productivity metrics are essentially unchanged from pre-AI baselines.
Same tools. Same policies. Same training. Radically different outcomes. The only structural difference is the presence or absence of lightweight, repeatable sharing practices.
That's the ritual design gap.
Why Top-Down Approaches Keep Failing
Before I get to the specific rituals, I want to address why the most common organisational responses to this problem don't work.
More training doesn't solve it. Training teaches tool features. The gap isn't about features — it's about contextualised practice. The most valuable thing your best AI user knows isn't something that can be taught in a webinar. It's something they discovered by trying, failing, and iterating in the context of their actual work. Training programmes, by definition, are decontextualised.
AI Centres of Excellence don't solve it. CoEs centralise expertise. But the knowledge that needs to move is inherently distributed — it lives in the specific workflows, domains, and contexts of individual teams. A centralised team can set policy and select tools, but it cannot generate the contextual knowledge that makes AI useful in a particular team's work. By the time a CoE has documented a best practice, the practitioners have already moved on to the next pattern.
Better platforms don't solve it. I've seen organisations respond to the compounding gap by upgrading their AI tools — moving to more capable models, adding more integrations, deploying custom solutions. This is like responding to a communication problem by buying better phones. The issue isn't the channel. It's that nobody is talking.
Denmark's Digitaliseringsstyrelsen 2025–2028 digitalisation strategy explicitly flags collaborative AI capability as a national priority — not just individual tool access, but the organisational ability to develop and share AI competence. The EU AI Act's Article 4 competence obligations require demonstrable AI literacy across roles, not just in specialist functions. These aren't requirements you meet with a training programme and a certificate. They require ongoing, embedded capability development — exactly what rituals provide and courses don't.
This is why peer-led, practice-first formats consistently outperform top-down rollouts for building real AI capability. It's not ideology — it's a structural observation about how tacit knowledge moves. It moves through observation, conversation, and shared practice. Not through documentation, policy, or training slides. Organisations navigating these requirements need approaches that build capability through practice, not just through instruction.
A Diagnostic: Where Is Your Team?
Before installing any new practice, it's worth diagnosing where your team actually sits. Here's a quick assessment:
Score each statement 0 (not true), 1 (partly true), or 2 (fully true):
Team members can name at least three AI use cases their colleagues use regularly
The team has a shared location (document, channel, database) where AI prompts or patterns are collected
In the last month, at least one team meeting included someone demonstrating an AI workflow
New team members can find and use AI patterns developed by existing team members
The team has discussed and agreed on when AI output needs human review before use
0–3: No sharing infrastructure. AI capability is entirely individual. You're likely seeing the power-law distribution described above. Start with Ritual 3, the lowest-commitment entry point.
4–6: Informal sharing exists but isn't structured. Some knowledge moves, but inconsistently. You probably have one or two people who informally share, but it depends on their initiative. Add Ritual 1 to give that knowledge structure and a persistent home.
7–10: Active sharing culture. You have the foundations. Your focus should be on consistency and expansion: making sure the practices survive personnel changes and extend to new use cases. If quality judgement about AI outputs is the remaining gap, add Ritual 2.
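If you want to make the tally mechanical, say to run the same diagnostic across several teams and compare results, a minimal Python sketch follows. The function name and score encoding are illustrative assumptions, not part of any standard instrument:

```python
# Minimal sketch: tally the five diagnostic statements and map the total
# to a starting recommendation. The bands mirror the ones above; the
# function name and encoding are illustrative, not a standard instrument.

def diagnose_team(scores: list[int]) -> str:
    """scores: five values, each 0 (not true), 1 (partly true), or 2 (fully true)."""
    if len(scores) != 5 or any(s not in (0, 1, 2) for s in scores):
        raise ValueError("Expected exactly five scores, each 0, 1, or 2")
    total = sum(scores)
    if total <= 3:
        return f"{total}/10: no sharing infrastructure. Start with Ritual 3."
    if total <= 6:
        return f"{total}/10: informal sharing exists. Add Ritual 1 for structure."
    return f"{total}/10: active sharing culture. Keep it consistent; add Ritual 2 if quality judgement is the gap."

# Example: a team with a shared channel but little structured sharing
print(diagnose_team([1, 2, 0, 0, 1]))  # "4/10: informal sharing exists. ..."
```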
Three Ritual Formats You Can Install This Week
These are deliberately lightweight. The biggest risk with any new team practice is that it feels like overhead. Each of these is designed to fit inside existing team rhythms, require no special tools, and take less than 20 minutes per instance.
Ritual 1: The Shared Prompt Library with a Weekly 'Prompt of the Week'
What it is: A shared, searchable collection of AI prompts and patterns that team members contribute to, combined with a weekly highlight.
How to set it up (time: 30 minutes):
Create a shared document or database in whatever tool your team already uses (Notion, Confluence, a shared Google Doc, even a pinned Slack/Teams channel). Structure it simply: prompt text, what it's for, who contributed it, date added, and an optional "how well does this work" rating (see the example table after these steps).
Seed it with 5–10 prompts from your team's existing power users. Ask them directly: "What are the three prompts or patterns you use most often?" This is often the first time anyone has asked them, and they're usually happy to share.
Add a recurring "Prompt of the Week" slot to an existing team meeting or communication. This is a 2-minute moment where someone shares one prompt or pattern — what it does, when to use it, any caveats. Rotate the responsibility.
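To make the first step concrete, here's the kind of minimal table it describes. The columns follow the fields above; the sample row is purely illustrative:

| Prompt | What it's for | Contributor | Added | Works well? |
| --- | --- | --- | --- | --- |
| "Summarise this meeting transcript into decisions, owners, and open questions" | Meeting follow-ups | Maria | 2026-02-06 | 4/5 |

A flat table like this is deliberately unsophisticated. At the start, low friction to contribute matters more than a clever taxonomy.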
Why it works: It makes tacit knowledge explicit and findable. The weekly highlight creates a regular, low-pressure moment for sharing. The rotation ensures it's not dependent on one person's enthusiasm. And the library becomes a persistent resource that survives personnel changes — which matters for the EU AI Act's competence requirements, where you need to demonstrate ongoing organisational capability, not just individual skill.
Common failure mode: The library becomes a graveyard — lots of entries, no one uses them. Prevent this by keeping the weekly highlight active and by encouraging people to comment on or rate existing entries. A living library needs curation, not just contribution.
Ritual 2: The AI Output Peer-Review Checkpoint
What it is: A lightweight review step, embedded in your existing code review, document review, or approval process, where AI-assisted outputs get a specific kind of peer scrutiny.
How to set it up (time: 15 minutes):
Identify one existing review process your team already runs — code review, document sign-off, design critique, whatever.
Add a single question to that review: "Was AI used in producing this? If so, what was the AI's contribution, and what did the human add, change, or verify?"
That's it. No new meeting, no new tool, no new process. Just one additional question in an existing checkpoint.
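If your review runs through a tool, the question can live directly in the template. Here's a hypothetical sketch for a GitHub-style pull request template; the path and wording are assumptions, so adapt them to whatever checklist your team already uses:

```markdown
<!-- Hypothetical addition to a GitHub pull request template
     (.github/pull_request_template.md); wording is illustrative -->

## AI contribution
- [ ] AI was used in producing this change
- If yes: what was the AI's contribution, and what did you add, change, or verify?
```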
Why it works: It does three things simultaneously. First, it creates visibility — the team starts to see where and how AI is being used, which is prerequisite knowledge for learning from each other. Second, it creates a natural moment for knowledge transfer — when someone explains how they used AI to produce something, the reviewer learns a new pattern. Third, it builds the team's collective judgement about AI output quality, which is the single most important capability for responsible AI use.
Common failure mode: It becomes a compliance checkbox rather than a learning moment. Prevent this by framing the question as genuinely curious, not auditing. The tone should be "oh interesting, how did you use it?" not "did you follow the AI policy?" If it feels like surveillance, people will stop declaring AI use, and you'll lose both the learning and the visibility.
Ritual 3: The 15-Minute Bi-Weekly 'What Surprised Me' Share-Out
What it is: A short, recurring slot — ideally appended to an existing standup or team sync — where one or two people share something unexpected they encountered while using AI.
How to set it up (time: 5 minutes):
Add a 15-minute block to an existing bi-weekly meeting. Label it "AI surprises" or whatever language fits your team's culture.
The format: one or two people share something that surprised them about AI in the last two weeks. This could be a surprisingly good result, a surprisingly bad one, an unexpected use case they discovered, or a workflow change they didn't anticipate. The emphasis is on surprise — not best practice, not polished demos, not success stories.
Rotate who shares. Keep it informal. No slides required.
Why it works: The "surprise" framing is deliberate. It lowers the bar for contribution (you don't need to have a success story), it surfaces the most interesting and novel patterns (surprises are, by definition, things the team didn't already know), and it creates psychological safety around failure (a surprising failure is just as valuable as a surprising success). It also naturally generates the kind of stories that spread through teams — "did you hear what happened when Maria tried using Claude for the test specifications?" — which is how tacit knowledge actually propagates in human groups.
Common failure mode: It fizzles after three or four sessions because people feel they don't have anything to share. Prevent this by keeping the cadence bi-weekly (not weekly — you need enough time between sessions for people to accumulate experiences) and by making the first few sessions explicitly include "negative surprises" and small observations. Not every share needs to be dramatic.
The Compounding Mechanism
What these three rituals have in common is that they create what I'd call a knowledge transmission layer between individual AI use and team-level capability. They don't require anyone to change their individual workflow. They don't require new tools or policies. They require about 30–45 minutes of team time per week, distributed across existing meetings.
But the effect compounds. Here's why:
In week 1, one person shares a prompt. In week 2, three people have tried it and have variations. In week 3, one of those variations gets shared back, and someone combines it with a different pattern. By month 3, the team has a shared vocabulary for AI use, a growing library of tested patterns, and — critically — the habit of learning from each other's AI practice.
This is what compounding looks like at the team level. It's not about any single practice being transformative. It's about the accumulation of small knowledge transfers, repeated consistently, creating a capability that no individual could develop alone.
The Danish engineering team I described earlier didn't start with anything more sophisticated than these three practices. They started with a shared document and a 15-minute Friday slot. Everything else grew from there.
The Compliance Dimension
For Danish and Nordic organisations, there's a practical compliance angle here that's worth naming explicitly. The EU AI Act's Article 4 requires organisations to ensure that staff working with AI systems have a sufficient level of AI literacy. This is not a one-time training requirement — it's an ongoing obligation that scales with the risk level of the AI systems in use.
Most organisations are planning to meet this requirement through training programmes. That's necessary but not sufficient. A training programme gives you a point-in-time certification. It doesn't give you the ongoing, demonstrable capability development that a regulator — or an auditor — would expect to see 12 months later.
Rituals do. A shared prompt library is a living document that demonstrates ongoing capability development. Peer review checkpoints create an auditable trail of AI output scrutiny. Regular share-outs are evidence of continuous learning. These aren't just good practices — they're compliance infrastructure, documented through the natural artefacts of team work rather than through separate compliance processes.
Starting This Week
If you've read this far and you're thinking about which ritual to start with, here's my recommendation:
If your team has never shared AI practices formally: Start with Ritual 3 (the bi-weekly share-out). It's the lowest commitment, requires no setup, and will quickly surface whether your team has enough AI activity to sustain the other two rituals.
If your team has some informal sharing but it's inconsistent: Start with Ritual 1 (the shared prompt library). It gives structure to knowledge that's already flowing and creates a persistent resource that outlasts any individual's memory or enthusiasm.
If your team already shares but lacks quality judgement about AI outputs: Start with Ritual 2 (the peer review checkpoint). It builds the collective discernment that separates teams who use AI from teams who use AI well.
None of these require leadership approval, budget, or a project plan. They require one person to suggest it in the next team meeting, and a team willing to try it for four weeks.
The organisations that are actually compounding AI gains in 2026 aren't the ones with the best tools, the biggest budgets, or the most sophisticated governance frameworks. They're the ones where knowledge moves between people — reliably, repeatedly, and in the context of real work.
That's a ritual design problem. And it's one you can start solving this week.
*Jacob Løvborg Jensen works with Nordic organisations on AI advisory and team empowerment — helping teams build the sharing practices and capability infrastructure that turn individual AI tool access into compounding team-level gains.*
