ChatGPT has become as familiar to employees as Amazon’s Alexa, and it has similar capabilities. The rapid advent of artificial intelligence tools, including generative AI, hasn’t been limited to business settings. In fact, consumer use is on the rise, which can be both a good and a not-so-good development.
While organizations continue to grapple with whether and how gen AI tools should be incorporated into their work, many employees have already adopted these tools and are using them not only in their personal lives but in their everyday work.
Risks and rewards of AI adoption
Gen AI tools can be used to create content, review documents, update information and conduct online searches. And employees are using these tools.
But while this bottom-up learning energy is valuable, without systematic support it creates inconsistent practices, knowledge gaps and overlooked opportunities for organizational capability building.
It also creates the potential for risk if employees are uploading proprietary company information or personally identifiable information (PII) for customers, clients or employees themselves.
Organizations are working hard to develop policies and upskill and reskill employees to use these tools appropriately. But in many cases, they’re not working quickly or effectively enough to produce desired and sustainable performance outcomes.
Training alone doesn’t create capability. Sustainable skill development calls for clear growth pathways, chances to practice, manager coaching and recognition structures that reinforce desired behaviors.
Why most AI training doesn’t stick
Training effectiveness depends on organization-wide performance systems that support it. When training is an isolated event, no matter how well-designed, results show weak links to sustained behavior change.
And behavior change is what organizations need to see in employees as they get used to new AI tools.
What typically happens is that employees complete AI training, then return to their jobs, where sometimes their managers can’t coach them because:
- they’re also on a learning journey
- performance measures are still rewarding pre-AI approaches
- time pressure means experimentation with new tools isn’t possible and
- super-user communities to support troubleshooting challenges, as well as share breakthroughs, simply don’t exist.
My research suggests that, without deliberate environmental design, approximately 70% of training content will fail to translate into changed workplace behavior. AI is accelerating this pattern because the tools evolve faster than traditional training cycles can accommodate.
So, what has to happen? Organizations need to concentrate on the design of skill development pathways.
Designing skill development pathways
The most effective AI capability building doesn’t start with training courses; it starts with defining what skilled AI use actually looks like in observable, coachable behaviors.
Instead of vague objectives like “develop AI literacy,” successful L&D teams create behavioral definitions. For instance: “Generates first-draft client email using approved AI tool, then edits for client-specific context and relationship history before sending” or “Uses AI to summarize research articles, then cross-references key claims with original sources before incorporating into analysis.”
These concrete descriptions serve three purposes:
- Learners understand exactly what they’re working toward.
- Managers know what to observe and coach.
- L&D can design practice opportunities that mirror real performance requirements.
Another important element of effective skill development pathways is that the learning environment must be safe and supportive.
The psychology of safe learning environments
One of the most surprising findings from my work in workforce development: Restrictive AI directives often increase risky AI use rather than reducing it.
When employees fear punishment for experimentation, they don’t stop experimenting; they stop talking about it. Shadow AI use goes underground, making it invisible to IT and impossible for L&D to support or improve.
Effective L&D strategies should be designed to create psychological safety. This can be done through:
- transparent boundaries like clearly defining “encouraged experimentation zones” versus “prohibited zones.”
- leadership modeling where executives share their own AI learning journeys, including challenges and failures.
- formal communities of practice where employees share AI techniques and develop pooled expertise.
In organizations that do this, employees report greater confidence in AI use and a greater inclination to experiment appropriately than those who have only received formal training.
Managers also have an important role here, but they must be supported in that role.
A mandate for manager coaching
The most underutilized lever for AI capability development isn’t new courses; it’s manager coaching. Yet, most organizations ask managers to “support AI adoption” without giving them the guidance, frameworks, visibility or skills they need to do so.
Manager feedback has a significant impact on the appropriate adoption and application of AI tools by employees. That feedback is most effective when delivered close to the next performance opportunity, not just after the last one. For AI-assisted work, this means managers should coach right before employees tackle their next similar task.
But effective AI coaching requires specific approaches that managers need to be trained on. For instance, instead of asking “Is this document good?” managers should ask, “Walk me through how you used AI to create this. What prompts did you use? How did you verify the information? What did you edit, and why?” These process conversations surface both learning opportunities and risk indicators.
After coaching sessions, inviting employees to share their takeaways as personal rules (e.g., “I consistently verify AI-generated statistics before including them in client presentations”) creates more enduring habits than externally imposed guidelines because they’re connected to the employee’s own reasoning.
L&D has an important role in equipping managers with coaching frameworks. This can be done through simple conversation guides, peer learning cohorts where managers share coaching challenges, and example scenarios showing effective AI coaching in action.
When L&D invests in manager coaching capability, AI skill development scales far beyond what formal training alone can achieve.
As with the development of any new skills, employees need to be recognized for their successes.
Aligning recognition with learning behaviors
One pattern I see repeatedly in my training effectiveness research is that well-designed learning programs are undermined by recognition systems that reward conflicting behaviors.
For instance:
- Employees are trained to use AI thoughtfully but praised only for volume.
- Training focuses on verifying AI outputs, but employees are recognized for speed over accuracy.
- Employees are encouraged to experiment, but they’re promoted based on error-free execution.
HR and L&D leaders can work together to:

- redesign performance conversations to include questions about AI capability development
- create learning-focused recognition that celebrates employees who share effective AI practices and
- update success metrics to include quality and verification, along with productivity measures.
Successful transfer of training also requires an opportunity for practice.
Embedding practice in workflows
When employees learn new tools and new approaches, they need to have an opportunity for experimentation. They need to have time for experimentation. And they need to be supported in their experimentation.
Too often, though, that doesn’t happen. Employees are busy. They’re under pressure to get their work done. They don’t feel safe or supported in taking time away from their “real jobs” to experiment with new tools.
But they should. My research clearly shows that training transfer requires opportunity for practice.
L&D teams can design for practice by identifying practice-rich roles (customer service representatives drafting responses, analysts creating reports), protecting experimentation time (even 30 minutes weekly makes a significant difference when used intentionally) and creating scaffolded challenges that build from simple to complex AI use.
The goal isn’t to perfect AI use in training before applying it at work. The goal is to create conditions where work itself becomes the learning environment. That’s how organizations can move from training events to learning systems.
From training events to learning systems
The AI skills paradox won’t be resolved through more training courses. It demands a shift from designing training events to designing learning systems where:
- behavioral expectations are concrete and observable
- practice opportunities are embedded in actual work
- manager coaching reinforces skill application
- recognition systems reward learning behaviors and
- communities of practice accelerate collective capability development.
When these systems align, employee initiative becomes organizational capability. Self-directed learning transforms into a strategic benefit. And the energy employees are already investing in AI skill development compounds into sustained performance gains instead of dissipating through lack of support.
The opportunity for organizations and their HR and L&D leaders isn’t to attempt to control AI adoption, but to channel the learning energy and interest that already exists into capabilities that are safe, effective and strategically valuable.