The state of AI capability development

We surveyed hundreds of L&D professionals globally to understand how AI training is being designed, delivered, and measured. The findings explain why so much investment is failing to translate into real change.

The numbers that matter

70%
don't measure the effectiveness of AI training
11%
fully link AI training to business outcomes
72%
of employees have had no formal AI training
8%
tailor AI training extensively to the individual

The detail behind the headlines

7.7/10

Self-reported AI confidence is high, but capability is uneven

People rate their confidence with AI at 7.7 out of 10 on average. But when you look at what they actually do with it, the picture is much more varied. Confidence and capability are not the same thing.

52%

Want a hybrid model that builds internal capability

Over half of respondents said they want a model that builds internal capability rather than outsourcing training entirely. They want to work with a partner, not just buy a course.

59%

Are still in early exploration or pilot stage

The majority of organisations are still working out what AI training should look like. They're past the point of doing nothing, but haven't yet found an approach that works.

6.5/10

Governance creates friction but isn't blocking progress

Governance scores 6.5 out of 10 as a challenge for AI adoption. While governance is important and necessary, it is sometimes seen as difficult to navigate, particularly in financial services and regulated sectors.

Training is too generic and disconnected from real work

The two most consistent criticisms across all responses: AI training is too generic, and it's not linked to the work people actually do. These are structural problems, not delivery problems.

Only 28% of employees have had any formal AI training

Despite near-universal access to AI tools like Microsoft Copilot, on average only 28% of employees have received any formal training. The gap between access and capability is the central problem.

What this means for your organisation

The research points to a clear set of implications for anyone responsible for AI capability in their organisation.

  • Generic AI training is structurally unlikely to change how people work
  • Without a diagnostic baseline, there is nothing to measure change against
  • Training needs to be built around real work, real tools, and real problems
  • Measurement needs to focus on productivity and performance, not attendance
  • Building internal champions is more sustainable than outsourcing training
  • Most organisations are still early enough to get this right
📊
The State of AI Capability Development
2026 Research Report · Peter Pease Learning Psychology

Download the full report

The full report goes deeper into the findings, with sector-level analysis and practical recommendations for L&D leaders, HR directors, and anyone responsible for AI capability in their organisation.

  • Full analysis of all responses
  • Sector-level breakdowns
  • Confidence vs capability analysis
  • Copilot usage and training alignment data
  • Practical recommendations
Request the full report →

How the research was conducted

1,000+

Survey respondents

L&D professionals surveyed across several research studies, covering financial services, professional services, manufacturing, care, retail, not-for-profit, public sector, and more.

28

Roundtable participants

L&D leaders who contributed qualitative insights through structured discussion sessions.

4,000+

Organisations worked with

Organisations Peter has worked with over his career, informing the practical application and interpretation of findings.

What the data tells us

What percentage of employees have had formal AI training?

Our research found that on average only 28% of employees have received any formal AI training despite having access to AI tools. The gap between tool access and genuine capability is the central problem that most organisations have not yet addressed. Source: Peter Pease Learning Psychology Research, 2026.

Do organisations measure whether AI training actually works?

70% of organisations do not measure the effectiveness of AI training at all. Of those that do measure something, only 11% fully link the results to business outcomes. Most measurement is limited to attendance and satisfaction scores, which tell you nothing about whether people's work has actually improved. Source: Peter Pease Learning Psychology Research, 2026.

Why does most AI training fail to change how people work?

The research identifies several structural failures: training is delivered as a one-off event rather than sustained capability development, it is disconnected from the real work people do, it starts with tools rather than problems, it treats everyone the same regardless of role or context, and there is no diagnostic baseline so there is nothing to measure change against. Only 8% of organisations tailor AI training extensively to the individual.

What do organisations actually want from AI training?

52% of organisations say they want a hybrid model that builds internal capability rather than outsourcing training entirely. They want to work with a partner, not just buy a course. The majority are still in early exploration or pilot stage and have not yet found an approach that works. Source: Peter Pease Learning Psychology Research, 2026.

How can I cite this research?

Please cite as: Peter Pease Learning Psychology (2026). The State of AI Capability Development. Research conducted with 1,000+ L&D professionals globally. For the full report including sector-level analysis, please use the contact form to request a copy.

Want to discuss what this means for your organisation?

The findings are most useful when applied to a specific situation. We're happy to talk through what they mean for your context.

Book a discovery call →
Take the free diagnostic →