How do you prove the value of training when resources are stretched thin and some leadership questions the need for anything beyond a quick “happy sheet”? Recent articles I’ve read converge on the same theme: evaluation always requires extra time, money, and stakeholder buy-in—yet it’s precisely what can keep training departments alive when budgets tighten.
The Kirkpatrick Model Still Reigns… but Why?
Most experts agree that Kirkpatrick’s Levels 1 and 2 (Reaction and Learning) remain the primary focus for training evaluations. Moller & Mallin (1996) lament that while Levels 3 (Behavior) and 4 (Results) would yield richer insights, many organizations shy away from these deeper levels. They find it puzzling that performance technology is touted as crucial, yet actual behavior changes (and bottom-line results) seldom get measured.
Giangreco et al. (2010), on the other hand, argue that the “industrial-era mindset” behind Kirkpatrick is now challenged by a modern economy that empowers individuals—shaped by new technology and “social emancipation.” They raise a valid point: not every training program is designed to produce all four levels of output. Kirkpatrick’s approach can feel overly rigid or simplistic given the complex, knowledge-based roles that have replaced older industrial setups.
Bringing New Perspectives: Confirmative Evaluation & LTEM
DeVaughn & Stefaniak (2020) discuss “confirmative evaluation,” which extends beyond the traditional formative and summative. It looks at ongoing behavior changes and the memory aspect—something Kirkpatrick glosses over. They argue that if Instructional Designers (IDs) can’t show real ROI through consistent and meaningful evaluation, they’ll always face resistance from key decision-makers. This idea resonates with me personally: I’ve even had job interviewers demand examples of how I used data to prove training effectiveness.
LTEM (Learning-Transfer Evaluation Model) by Thalheimer (2024) digs deeply into long-term retention and contextual tasks rather than only short-term learning gains. Thalheimer sees evaluation as integral to designing instruction with a competitive edge for organizations. The focus on bridging levels (e.g., from knowledge to on-the-job application) ensures training is far more than a cursory “Yes, they liked it, so it’s successful.”
Changing Context, Same Rationale
One constant across the evolution of these frameworks is the principle that IDs need to prove their worth—evaluation is often the difference between a training department’s survival and its being the first cut when finances tighten. Over time, though, the audience, corporate structure, and style of training have all shifted, leaving Kirkpatrick somewhat incomplete or less universal than originally intended. Giangreco et al. (2010) specifically claim the modern “post-industrial” environment has more fluid goals. In many settings, we might not aim for all four levels—some training is strictly short-term, compliance-based, or addresses intangible goals like staff motivation.
Biggest Takeaway for Me
1. Holistic & Actionable Evaluation
Reading about LTEM convinced me that a well-planned evaluation is more than a post-training quiz. It can be baked into the design process itself so that each step of the instruction is validated, bridging immediate learning gains to actual on-the-job impacts.
2. Knowing When All Levels Aren’t Necessary
Sometimes, we just need a straightforward Level 1 or Level 2 evaluation (e.g., for compliance training), or a program that strictly focuses on day-to-day job processes. The key is clarity from the start: What do we truly need to measure?
3. IDs as Trusted Advisors
Far from merely building courses, IDs often serve as advisors who ask hard questions, gather data, and articulate the findings. Evaluation isn’t just test scores; it’s stakeholder collaboration, performance metrics, and memory studies. “Evaluation” can absolutely go beyond that dreaded multiple-choice final test.
4. Tying It All Back to ROI
At the end of the day, we’re dealing with budget decisions and organizational priorities. If we can’t show how our training leads to improvement—whether it’s teacher satisfaction, marketing conversions, or safer trucking operations—key stakeholders won’t see the need for deeper evaluations.
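To make that concrete, here’s a simple, hypothetical back-of-the-envelope ROI calculation (the figures are invented purely for illustration): suppose a safety training program costs $20,000 to design and deliver, and a Level 4, results-style evaluation credits it with $50,000 in reduced incident costs over the following year. Then ROI = (benefit − cost) ÷ cost = ($50,000 − $20,000) ÷ $20,000 = 1.5, or 150%. Even a rough calculation like this gives stakeholders something far more persuasive than an average smile-sheet score.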
Conclusion
All these articles highlight how evaluating training beyond the short term is challenging—but crucial. As an ID, learning to handle advanced evaluation models not only helps refine instruction but also ensures we remain relevant in a world that increasingly questions the cost of training. By selectively applying Kirkpatrick, LTEM, or confirmative evaluation (depending on the project’s goals and constraints), we can deliver actionable data that shows real behavioral change and long-term returns on the organization’s investment.
Moral of the story? Go beyond the “happy sheet.” Show stakeholders why they should care about consistent, well-thought-out evaluation. And if that means referencing LTEM’s bridging of memory, or Kirkpatrick’s emphasis on results—use whichever gets you the buy-in to design better, more meaningful training.
References
- DeVaughn, P., & Stefaniak, J. (2021). An Exploration of the Challenges Instructional Designers Encounter While Conducting Evaluations. Performance Improvement Quarterly, 33(4), 443–470.
- Giangreco, A., Carugati, A., & Sebastiano, A. (2010). Are we doing the right thing?: Food for thought on training evaluation and its context. Personnel Review, 39(2), 162–177.
- Moller, L., & Mallin, P. (1996). Evaluation Practices of Instructional Designers and Organizational Supports and Barriers. Performance Improvement Quarterly, 9(4), 82–92.
- Thalheimer, W. (2024). The Learning-Transfer Evaluation Model: Sending messages and nudging evidence-informed thinking to enable learning effectiveness. Link.