In Part 1, we explored why most pharmaceutical L&D teams evaluate training only as far as satisfaction scores and completion rates, and why those metrics tell us little about training effectiveness. Now let’s tackle the harder question: What should we measure, and how can pharmaceutical L&D teams realistically do it?
The Metrics That Matter
Kirkpatrick’s model gives us a framework for thinking about what we should be measuring in pharmaceutical training. Satisfaction scores sit at Level 1 (Reaction), the territory Part 1 covered, so here we start at Level 2 and work up.
Learning (Level 2)
How well did they understand it?
This is where knowledge and skill assessments come in. Can your pharmaceutical sales representatives correctly identify approved indications? Can they explain the mechanism of action? Can they handle common objections from healthcare professionals?
What to measure:
- Pre- and post-training knowledge assessments (a simple scoring sketch follows this list)
- Scenario-based skill demonstrations
- Confidence ratings on applying specific techniques
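If your assessment platform can export per-learner scores, even a small script turns them into a learning-gain figure. Here is a minimal sketch in Python, assuming a hypothetical export of pre- and post-training scores out of 100; the normalised gain formula expresses each learner’s improvement as a fraction of the headroom they had available.

```python
from statistics import mean

# Hypothetical export: per-learner pre- and post-training scores (out of 100)
scores = [
    {"learner": "rep_01", "pre": 55, "post": 85},
    {"learner": "rep_02", "pre": 70, "post": 90},
    {"learner": "rep_03", "pre": 40, "post": 75},
]

pre_avg = mean(s["pre"] for s in scores)
post_avg = mean(s["post"] for s in scores)

# Normalised gain: improvement as a share of available headroom, so a
# learner starting at 70 isn't penalised for having less room to grow.
gains = [(s["post"] - s["pre"]) / (100 - s["pre"]) for s in scores if s["pre"] < 100]

print(f"Average pre-score:    {pre_avg:.1f}")
print(f"Average post-score:   {post_avg:.1f}")
print(f"Mean normalised gain: {mean(gains):.2f}")
```

If everyone scores 100% on both sittings, the gain is undefined and the assessment is telling you nothing, which is exactly the challenge described below.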
The pharmaceutical reality: This is entirely feasible for most L&D teams. You can build quality assessments into your eLearning programmes or conduct practical skill evaluations during facilitated sessions.
The challenge: Assessments in pharmaceutical training are often embarrassingly easy. If all (or even most) learners score 100%, you’re not measuring learning. You’re measuring who can read and click buttons. Build assessments that require application, not just recall.
Behaviour (Level 3)
Are they using it in the real world?
Now it gets really interesting. Your representatives might understand the new questioning technique perfectly. They might demonstrate it effectively in role plays. But are they actually using it when meeting with customers?
What to measure:
- Call behaviour metrics: Call duration, frequency of calls to difficult-to-access customers, observation of specific skill use during field coaching (a simple before/after comparison is sketched after this list)
- Customer feedback: Call usefulness, message recall, perceived value
- Process adherence: Observation data, CRM entries
- Manager observations: Field coaching reports, quality of customer interactions
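Most of these signals already live in your CRM. As a starting point, here is a hedged sketch, assuming a hypothetical export of call records with invented field names, that compares average call duration before and after a training go-live date:

```python
from datetime import date
from statistics import mean

TRAINING_DATE = date(2024, 6, 1)  # hypothetical training go-live

# Hypothetical CRM export: one row per customer call
calls = [
    {"rep": "rep_01", "date": date(2024, 4, 12), "duration_min": 6},
    {"rep": "rep_01", "date": date(2024, 7, 3),  "duration_min": 11},
    {"rep": "rep_02", "date": date(2024, 5, 20), "duration_min": 8},
    {"rep": "rep_02", "date": date(2024, 8, 15), "duration_min": 9},
]

before = [c["duration_min"] for c in calls if c["date"] < TRAINING_DATE]
after = [c["duration_min"] for c in calls if c["date"] >= TRAINING_DATE]

print(f"Mean call duration before training: {mean(before):.1f} min")
print(f"Mean call duration after training:  {mean(after):.1f} min")
```

A shift in the numbers shows that something changed, not that training changed it, which brings us to the challenges below.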
The pharmaceutical reality: This is difficult but realistic. Many pharmaceutical organisations already collect call data, customer feedback, and process metrics. The challenge is connecting these to specific training interventions.
The challenges:
Time lag: Behaviour change takes time. Meaningful data might not be available until 3-6 months after training. By then, stakeholders may have shifted focus, and the ‘before-training’ state is often forgotten.
Causality: Did call duration improve because of the questioning skills training? Or because the territory changed? Or because the product gained reimbursement? Isolating the impact of training from other influencing factors is difficult (one pragmatic approach is sketched after this list).
Resourcing: Frequent field visits to observe calls across a geographically dispersed field force are expensive and time-consuming. Most L&D teams don’t have the budget or headcount, but they can mitigate this by providing standardised coaching aids to front-line managers.
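One pragmatic answer to the causality challenge is a difference-in-differences comparison: measure the metric before and after training for the trained group and for an untrained comparison group, then attribute only the excess change in the trained group to the intervention. A minimal sketch, with invented numbers:

```python
# Hypothetical average call durations (minutes), before and after the
# training window, for a trained region and an untrained comparison region.
trained_before, trained_after = 7.2, 9.8
control_before, control_after = 7.0, 8.1  # untrained reps changed too

trained_change = trained_after - trained_before  # +2.6 min
control_change = control_after - control_before  # +1.1 min

# Difference-in-differences: change beyond what the market alone produced.
training_effect = trained_change - control_change
print(f"Estimated training effect: {training_effect:+.1f} min per call")
```

The comparison group absorbs the territory and market effects that hit everyone, so what remains is a more defensible estimate of the training effect, though still not proof.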
Results (Level 4)
How did it impact the business?
Now we enter the promised land of training evaluation. Did sales increase? Did customer satisfaction improve? Did we acquire new accounts?
What to measure:
- Sales metrics: Revenue growth, market share, new account acquisition
- Quality metrics: Procedure compliance, targeting, right-first-time manufacturing, audit findings
- Customer metrics: Customer retention rates, competitive wins
- Safety metrics: Adverse event reporting timeliness and accuracy
The pharmaceutical reality: Evaluation to this level is hard. Really hard. It takes time and effort to collect and analyse the data. Perhaps this is something we can only do well for major training investments, where the effort is justified.
The challenges:
Time lag: Business results might not be seen for 6-12 months after training. Product launches take time. Market dynamics shift. Attempting to draw a direct link between training and revenue after a year is ambitious at best.
Multiple variables: Sales success depends on more than representative knowledge and skill. Product supply chains, pricing, competition, market access, the regulatory environment, territory demographics, and many other factors play a part. Claiming to know that training “caused” a sales increase requires either sophisticated analysis (a flavour of which is sketched after this list) or comfort with dishonesty.
Data access: L&D teams often struggle to get the business data needed for Level 4 evaluation. This data lives in multiple systems managed by different functions. Access to raw data isn’t always easy, and its usefulness in improving our business contribution isn’t always appreciated by those whose help we need to pull it together.
Attribution: Even when we can demonstrate genuine improvement in business results, establishing the contribution of training is challenging. The best we can usually claim is correlation, not causation.
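To give a flavour of what “sophisticated analysis” means in practice, here is a hedged sketch of a multiple regression: per-territory sales growth modelled against a training indicator alongside confounders such as a market access score and competitor activity. Every number here is invented, and with real data you would need far more territories and a statistician’s eye on the assumptions.

```python
import numpy as np

# Hypothetical per-territory data:
# columns = [trained (0/1), market access score, competitor launches]
X = np.array([
    [1, 0.8, 1],
    [1, 0.6, 2],
    [1, 0.9, 0],
    [0, 0.7, 1],
    [0, 0.5, 2],
    [0, 0.8, 0],
], dtype=float)
y = np.array([12.0, 8.5, 14.0, 9.0, 5.5, 11.0])  # % sales growth

# Add an intercept column and fit ordinary least squares
X = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"Estimated training coefficient: {coef[1]:+.2f} pp of growth")
```

Even then, the coefficient is an association conditioned on the variables you happened to include, which is why the honest claim remains correlation, as noted above.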
What’s Next
We’ve covered what to measure at each level of the Kirkpatrick model and the real challenges pharmaceutical L&D teams face when trying to measure training effectiveness.
In Part 3, we’ll tackle the practical question: How do you actually implement meaningful evaluation without unlimited budgets or resources? We’ll consider which training initiatives deserve rigorous evaluation, how to build measurement into the design process from the start, and how to have honest conversations with stakeholders about what the data actually means and what it doesn’t.
Ultimately, evaluation isn’t about proving training works. It’s about understanding how well it works, so you can do more of what makes the most impact.
Pingback: "Measuring Training Impact in Pharma: Beyond Completion Rates"