
Learning leaders have lived with the same frustration for years. The business keeps asking for proof that learning and development (L&D) works. L&D teams point to course completions, smile sheets, and learning hours. Everyone leaves the conversation a little unsatisfied.
Now the pressure is rising. Skills are at the center of talent strategies. AI is reshaping work. Budgets are tighter. L&D is expected to act less like a content factory and more like a strategic partner that can move real business needles.
That shift is precisely what Claude Werder, Senior VP and Principal Analyst at Brandon Hall Group, and Steve Lange, Director of Consulting for Metrics That Matter by Explorance, explored in their recent Excellence at Work podcast. Their conversation focuses on a simple but urgent idea: if organizations want a skills-based strategy, they need a skills-based measurement strategy that connects learning, capability, and business impact.
This article pulls out the key insights from that discussion and turns them into a practical roadmap. It covers how to align skills and strategy, move beyond traditional metrics, connect skills to outcomes, use AI and custom frameworks wisely, and bring managers into the center of skills application.
You can watch the interview in its entirety on all major podcast platforms, including YouTube.
Most organizations say they care about skills. Fewer can explain how specific skills ladder up to the outcomes that matter most to the business. That is where strategic skills alignment starts.
Steve frames the context very clearly. L&D is under pressure to be more focused, more selective, and more connected to real business problems.
L&D is becoming, and has to become, much more agile with fewer resources. Therefore, they need to know what's working, what's not working, and why.
Strategic alignment means skills are never an end in themselves. They sit within a chain that runs from business goal to critical job to required skill to learning experience to on-the-job application, and finally to measurable outcomes.
Steve recommends starting from the top, not from a generic skills library.
You have to start with what are the most important jobs that need to be done in the business, and then, what are the skills and expertise that you need for those jobs.
That sounds simple, but it is a powerful filter. Instead of trying to maintain a massive skills taxonomy for every role, he suggests coming back to a few critical questions: Which jobs matter most to the business? Which skills and expertise do those jobs require?
Once those are clear, L&D can look at the learning portfolio through a different lens. The question shifts from “What content do we have?” to “Which experiences build the specific capabilities that matter most?”
Skills growth usually does not happen in a single course. Steve offers a helpful way to think about it: stacked experiences.
First, I can take some microlearning on these three courses, then I can take an advanced course on one of those skills. After that, I can take another course that focuses on all of them again.
That stack should align with business outcomes. It also becomes the basic unit of measurement: the path learners take to build and apply targeted skills, not just the last course they completed.
Most L&D dashboards still orbit around volume and satisfaction: enrollments, completions, time spent, “Did you like the course?” scores. Those metrics describe activity. They do not prove capability.
Steve is clear that the core goal of effectiveness has not changed. What is changing is the level of focus.
We've always wanted to measure the elusive transfer of training or on-the-job application of training. That's part of the holy grail of training effectiveness. With skills, the conversation is now more focused.
Instead of asking whether training in general is effective, the question becomes whether a specific skill has been acquired and applied in a way that matters to the business.
One of the most significant barriers is definitional. Organizations say they want to measure skills, but they often have not agreed on what counts as evidence. Steve describes the challenge:
What does the acquisition of a skill mean? Is it a score on a written test? Are there application exercises during training so someone can get certified …? Is it a checklist that needs to be verified by an SME or a manager a few weeks after training?
The answer, in his view, is that there is no single indicator. Skills acquisition is a process that plays out over time and across different contexts.
It's really tough. It's a process that needs to be defined, tracked, scalable, and connected to an outcome. So it's not just one thing. It's a lot of things over a timeline or a stage of events that need to happen.
Rather than rely on one survey or one test score, Steve advocates a multi-touch approach: a baseline before training, application exercises or assessments during it, verification by a manager or SME a few weeks later, and a follow-up check roughly 60 days out.
That kind of timeline requires more coordination, but it produces much more substantial evidence than a single post-course rating.
The most important part of a skills strategy is connecting skills to business results. Without that link, organizations risk repeating the same old story: enthusiastic learning, impressive registration numbers, no clear business case.
Steve captures this bluntly.
It's got to be linked to a business problem. It can't be just, ‘We need more people skilled in AI.’ Okay, but why? What for? What is that going to do? And what is that going to get us?
The same logic applies beyond AI. Earning a certification, completing a program, or finishing a curriculum only matters if those capabilities translate into outcomes the business values.
Steve uses the example of project management and Six Sigma.
For example, I've earned the Project Management Professional (PMP) certification, or maybe I've earned my Six Sigma Green Belt. These are great skills to have, but what outcome are we trying to achieve for me by having them? Is it more projects on time and under budget? Is it more efficient processes with less waste?
This is where L&D and business leaders need to work together. They need to articulate the result that will count as success for a given skill investment: fewer errors, higher conversion rates, better customer satisfaction, reduced cycle times, higher retention, and so on.
Steve often talks about this as building a chain of evidence.
Do we have enough, I'll say, proof or evidence, or links in the chain that you're building? … Both before and during training, maybe 60 days down the road, you've got certain things, certain levels of rigor showing that you've had skill development in these areas.
That chain allows L&D to present a narrative that feels credible to executives: what skills were targeted, how they were built, how they were applied, and what moved in the business as a result.
Not every program needs a full ROI study. Steve uses Patti Phillips as a reference point on that topic and stresses the importance of focus.
You might not be able to do that with all your skills courses either. You have to start somewhere.
This is where the earlier idea of focusing on the most critical jobs and issues comes back into play. Put your most rigorous measurement around the programs that support those priorities.
It is easy to fall into a narrow view of AI in learning: faster content, easier asset creation, and quick knowledge summaries. Steve warns against seeing AI only as a way to speed up production.
He recalls a conversation with a group of CLOs about this tension.
Everyone's talking about how AI can help L&D provide or produce more content, right? You can do this faster now. You don't need to interview 15 SMEs. You can just interview AI and build all this stuff really quickly.
One CLO in that conversation took a different angle.
Instead of asking how AI can help us build more content, because that's not really what we need, it's about how we can be more efficient at what we're doing … If we get AI to help us do those things, then we can start linking those to business goals and objectives, not just for L&D but for the business overall.
AI is particularly good at spotting trends and connections in complex data sets. That is exactly what skills measurement needs.
AI's really, really good at picking out those patterns, those trends that we might be missing, to let us know that, well, maybe not this direction, maybe that direction, or something along the lines of, ‘No one's going over here, so we need to put people over there.’
Many HR and learning technologies are already building AI into their capabilities.
I know a lot of the HR tools — like LMSs, LXPs, and other tools — are building these taxonomies with AI, so they're getting smarter.
That means AI can help organizations spot patterns and trends they might otherwise miss, keep skills taxonomies current, and flag where people and resources need to shift.
Steve also sees a future in which organizations manage diverse skill portfolios tailored to specific objectives or populations. Each portfolio can have its own measurement framework, tuned to the outcomes that matter most for that group. AI then helps maintain and refine those portfolios over time.
No matter how smart the measurement framework or how advanced the technology, skills only come to life on the job. That is where managers become essential.
Throughout the conversation, Steve emphasizes that acquiring and validating skills requires coaching, follow-up, and accountability beyond the classroom.
You've got to have something built into your process that either documents it or showcases it, or something where you can go back and know that, okay, yes, Steve went to this course. He passed this test, but then he also did this project where he applied all that training, and it was a three-person panel, and they checked him off on 35 pieces of criteria.
Managers, mentors, or subject-matter experts are often in the best position to provide that validation. Near the end of the podcast, he describes this as a true partnership between L&D and the business.
I just delivered a workshop today to a client where we talked about how it's a team sport, right? Especially with something like skills. L&D can develop the most excellent training in the world. But the point is whether they're applying that back on the job, and we don't see what happens. So, we need that help to be able to say skills were acquired, and yes, they are being applied.
This pushes managers into a more active role in development, far beyond approving training requests. They need to coach learners after training, watch for new skills showing up in day-to-day work, and validate that those skills are actually being applied.
That is a mindset shift for many leaders. It also requires L&D to support managers with tools, prompts, and simple processes that fit into the flow of work.
Skills-based transformation will not succeed on content volume alone. The organizations that win will be the ones that can show a clear, credible line from targeted skills to real outcomes.
The conversation between Claude Werder and Steve Lange points to a future where L&D leaders align skills with business strategy, move beyond activity metrics, connect skill building to measurable outcomes, use AI and custom frameworks deliberately, and bring managers into the center of skills application.
As Steve puts it, the core measurement philosophy does not need a complete overhaul.
You don't necessarily need to veer too far from what you're doing, but you still want to focus on how to answer the question the business asked: ‘We need to help with these skills.’
For L&D teams, CHROs, and business leaders, the opportunity is clear. Start small. Focus on a few critical skills tied to high-stakes outcomes. Build a realistic chain of measurement. Pull managers and AI into the process.
Do that consistently, and learning measurement stops being a backward-looking report. It becomes a forward-looking instrument for building the capabilities your organization needs next.
