We surveyed hundreds of L&D professionals globally to understand how AI training is being designed, delivered, and measured. The findings explain why so much investment is failing to translate into real change.
People rate their confidence with AI at 7.7 out of 10 on average. But when you look at what they actually do with it, the picture is much more varied. Confidence and capability are not the same thing.
Over half of respondents said they want a model that builds internal capability rather than outsourcing training entirely. They want to work with a partner, not just buy a course.
The majority of organisations are still working out what AI training should look like. They're past the point of doing nothing, but haven't yet found an approach that works.
Respondents rated governance 6.5 out of 10 as a challenge to AI adoption. Governance is important and necessary, but it is often seen as difficult to navigate, particularly in financial services and other regulated sectors.
The two most consistent criticisms across all responses: AI training is too generic, and it's not linked to the work people actually do. These are structural problems, not delivery problems.
Despite near-universal access to AI tools like Microsoft Copilot, on average only 28% of employees have received any formal training. The gap between access and capability is the central problem.
The research points to a clear set of implications for anyone responsible for AI capability in their organisation.
The full report goes deeper into the findings, with sector-level analysis and practical recommendations for L&D leaders and HR directors.
L&D professionals surveyed across multiple pieces of research, covering financial services, professional services, manufacturing, care, retail, not-for-profit, public sector, and more.
L&D leaders who contributed qualitative insights through structured discussion sessions.
Organisations Peter has worked with over his career, informing the practical application and interpretation of findings.
Our research found that on average only 28% of employees have received any formal AI training despite having access to AI tools. The gap between tool access and genuine capability is the central problem that most organisations have not yet addressed. Source: Peter Pease Learning Psychology Research, 2026.
70% of organisations do not measure the effectiveness of AI training at all. Of those that do measure something, only 11% fully link the results to business outcomes. Most measurement is limited to attendance and satisfaction scores, which tell you nothing about whether people's work has actually improved. Source: Peter Pease Learning Psychology Research, 2026.
The research identifies several structural failures: training is delivered as a one-off event rather than sustained capability development; it is disconnected from the real work people do; it starts with tools rather than problems; it treats everyone the same regardless of role or context; and there is no diagnostic baseline, so there is nothing to measure change against. Only 8% of organisations tailor AI training extensively to the individual.
52% of organisations say they want a hybrid model that builds internal capability rather than outsourcing training entirely. They want to work with a partner, not just buy a course. The majority are still in early exploration or pilot stage and have not yet found an approach that works. Source: Peter Pease Learning Psychology Research, 2026.
Please cite as: Peter Pease Learning Psychology (2026). The State of AI Capability Development. Research conducted with 1,000+ L&D professionals globally. For the full report including sector-level analysis, please use the contact form to request a copy.