Evaluation in Instructional Design

I'm taking an evaluation course this spring term, and I'm extremely excited to learn about this field! What I've learned so far is that instructional designers (IDs) are agents of change, tasked with driving positive outcomes in their organizations. Without measurable, impactful results from a training team's efforts, the value of its work can quickly be called into question, putting the team at risk of being cut. Oh no! As Kirkpatrick and Kirkpatrick (2016) emphasize, it is critical for trainers and IDs to “provide compelling evidence that training delivers bottom-line results and contributes to mission accomplishment” (p. 4). Otherwise, we risk becoming dispensable when organizational goals aren't met: a precarious position, especially when those goals are ill-defined or when IDs and trainers are reduced to “order takers” by stakeholders.

The value of instructional design goes beyond creating individual training events; it lies in the impact those events have within the performance context, whether that means reducing workplace safety incidents or increasing sales revenue. Demonstrating this value requires more than good design; it calls for systematic evaluation. Fitzpatrick et al. (2004) highlight the distinction between practitioners and evaluators, pointing out the importance of taking on roles such as scientific expert or trusted advisor to stakeholders in order to demonstrate the merit of a training program or process.

Drawing from my experience as a sales manager, I've been in the position of being that “trusted advisor” to company owners, navigating the pressure of proving my training efforts were effective amid declining sales numbers. Conducting ride-alongs with the sales team, I evaluated not only their performance but also my own effectiveness in teaching essential skills, such as building client rapport and closing deals. In that high-stakes environment, demonstrating the tangible results of training, such as improved sales metrics, was critical for validating the training program and, ultimately, preserving my role.

Looking back, I recognize the value of incorporating external evaluation to assess the training program as a whole. While I was confident in my efforts, an objective, bird's-eye view could have strengthened the case for my training's effectiveness. Sales figures may not lie, but they don't tell the full story either. For instance, a comprehensive evaluation could have helped attribute performance improvements to specific training activities, such as conducting X number of ride-alongs or coaching underperforming team members into top performers.

As an instructional designer creating training programs, particularly for sales, I now see that evaluation must be embedded at every stage of the ADDIE model (Analysis, Design, Development, Implementation, Evaluation), not treated as a final step. Starting with “the end in mind” is essential to ensure the training achieves its intended impact and demonstrates its value. Without this approach, training programs risk being undervalued and, ultimately, discarded when organizational goals are not met.

References 

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Allyn and Bacon.

Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training evaluation. Association for Talent Development.