Measuring Learning Impact in Pharma (Part 1)

You’ve just wrapped up deployment of a sales training program. The LMS dashboard shows 94% completion. Satisfaction scores averaged 4.8 out of 5. Your stakeholders are happy.
But three months later, sales volumes haven’t improved and customer feedback is unchanged. When coaching in-field, you see little evidence of representatives using what they learned.

What went wrong? Nothing, according to your metrics. Everything, according to your observations.
Welcome to the trap of unaligned metrics – measures that don’t predict the outcomes you care about.
As many as half of all organisations use completion rates as their primary measure of training effectiveness. Let that sink in.

Completion rates tell us one thing: people clicked through to the end. They don’t tell us whether those people paid attention. They don’t tell us what they learned. They certainly don’t tell us whether behaviour changed or whether the business benefited.

Yet many L&D teams in Pharma continue reporting completion rates and satisfaction scores to stakeholders as evidence of training “success,” knowing full well these metrics are almost meaningless.
Why do we focus on such incomplete metrics? Mostly because they’re immediate and easy to collect. And they don’t force difficult conversations about causality, attribution, or whether training was even the right solution in the first place.
Easy doesn’t equal useful. Immediate doesn’t equal important.

Measuring Effectiveness

If you’ve worked in L&D for any length of time, you’re familiar with the Kirkpatrick Model’s four levels of evaluation:

  1. Reaction – did they like it?
  2. Learning – did they understand it?
  3. Behaviour – are they applying it?
  4. Results – did it impact the business?

It’s a useful framework, but not as useful as we like to imagine. The problem isn’t the model – it’s how organisations use it and, more importantly, how they don’t.
Research consistently shows that most organisations only measure Level 1 (reaction) and occasionally Level 2 (learning). Very few measure Level 3 (behaviour) and almost nobody measures Level 4 (results).
So, why do pharmaceutical L&D teams so often stop at Level 1 or 2?

  1. It’s fast. Satisfaction scores can be collected immediately after training. There’s no waiting and little analysis required.
  2. It’s cheap. Satisfaction surveys cost nothing except five minutes of learner time. Measuring behaviour change across a field force requires sustained observation and data analysis – resources most L&D teams can’t easily access.
  3. It avoids difficult questions. If you only measure reaction, you never have to answer (or ask) “did this training actually work?” You can point to high satisfaction scores as indicators of success, even if nothing meaningful has changed.
  4. It’s what stakeholders expect. When was the last time senior leadership asked for Level 3 or Level 4 data? Probably never. They’ve been conditioned to accept completion rates and satisfaction scores as evidence of effectiveness.

This raises an awkward question: Is the real reason we stop at Level 1 or 2 that we’re afraid of what Levels 3 and 4 might reveal?

What’s Next

We’ve established that completion rates and satisfaction scores tell us almost nothing about training effectiveness. But that leaves us with an important question: if not those metrics, then what?

In Part 2, we’ll examine what you should actually measure at each level of the Kirkpatrick model – with practical pharmaceutical examples and an honest discussion of the real challenges L&D teams face when trying to move beyond Level 1 evaluation.
