Evaluate Instructional and Non-Instructional Interventions

Challenge 1: Implement formative evaluation plans 

Criteria for successful completion of this challenge: Evidence of implementing a formative evaluation plan to provide information that can be used to make adjustments and improvements in the design. Evidence must show a formative evaluation plan (expert review, one-to-one evaluation, small group, and field trial). Reflection must address:

  • Which phase(s) of formative evaluation did you conduct?
  • Which data did you collect (e.g., clarity and accuracy of instruction, general attitudes, procedural issues)?
  • What were the results of the formative evaluation, and how did they affect your design?

Examples: Evaluation Plan (EDCI 528), Design Documents (EDCI 572), Learning Module (EDCI 575), eLearning Project (EDCI 569), artifacts showing strategies for implementation of an evaluation plan (design, performance, workplace, educational, other).

Reflection

Coming Soon

Artifact

Coming Soon

Challenge 2: Implement summative evaluation plans

Criteria for successful completion of this challenge: Evidence of implementing a summative evaluation plan to evaluate the effectiveness of the instruction and decide whether to continue using it. Evidence must show an evaluation plan (e.g., Kirkpatrick’s Four Levels of Evaluation). Reflection must address:

  • Did the implementation of the summative evaluation meet your expectations?
  • What were the results of the summative evaluation (did you continue with the program/instruction, cancel it, or modify it)?

Examples: The following assignments are applicable if implemented: Evaluation Plan (EDCI 528), Evaluation Plan (EDCI 577), artifacts showing implementation of an evaluation plan (design, performance, workplace, educational, other).

Artifact

Reflection

When I first developed the AI Tools Academy evaluation plan (outlined above), my primary intention was to confirm that the instruction genuinely improved participants’ AI-related knowledge, skill application, and long-term performance. Specifically, I wanted to see whether K–12 teachers, managers, and small-business owners could embrace AI in their real-world contexts after attending a single-day workshop. In crafting the four-level summative evaluation plan (based on Kirkpatrick’s model), I hoped to (1) gather immediate reaction feedback and measure learning gains, (2) see whether on-the-job behavior actually changed, (3) measure organizational results and ROI, and (4) determine whether the workshop should be refined, continued, or discontinued.


Did the Implementation Meet My Expectations?

Overall, yes, the plan met my expectations in these ways:

  1. Comprehensive Data: Because we used different instruments (Level 1 Reaction surveys, Level 2 quizzes and scenario tasks, Level 3 follow-up interviews, and Level 4 ROI metrics), I obtained a more holistic view of how effectively participants were learning to integrate AI into their daily workflows.
  2. Practical Timelines: The short-term (4–6 week) check-ins provided immediate insight into behavior change, while the longer-term (3–6 month) KPI assessments showed whether organizational results had actually improved.
  3. Stakeholder Buy-In: Presenting concise data at each stage helped convince skeptical stakeholders (such as small-business owners and district administrators) that the Academy had measurable value.

One unexpected hurdle was how varied the results were across different audiences—teachers often needed extra prompts and follow-up, whereas business owners tended to adopt AI more quickly if they saw direct ROI. It confirmed that a one-size-fits-all approach to evaluating this workshop would not suffice and that segmented data analysis was crucial.
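
To give a concrete sense of what that segmented analysis involved, here is a minimal sketch in Python. It assumes a simple list of Level 3 follow-up records; the field names, record structure, and threshold handling are illustrative placeholders, not the actual instruments or data from the evaluation.

```python
from collections import defaultdict

# Hypothetical Level 3 follow-up records: one entry per participant,
# noting the audience segment and whether they kept using at least one AI tool.
followups = [
    {"segment": "teacher", "applied_ai_tool": True},
    {"segment": "teacher", "applied_ai_tool": False},
    {"segment": "manager", "applied_ai_tool": True},
    {"segment": "business_owner", "applied_ai_tool": True},
    # ...remaining participants
]

BENCHMARK = 0.60  # lower bound of the 60-65% continuation benchmark

def adoption_by_segment(records):
    """Return the share of participants in each segment who applied an AI tool."""
    totals = defaultdict(int)
    adopters = defaultdict(int)
    for record in records:
        totals[record["segment"]] += 1
        if record["applied_ai_tool"]:
            adopters[record["segment"]] += 1
    return {segment: adopters[segment] / totals[segment] for segment in totals}

for segment, rate in adoption_by_segment(followups).items():
    status = "meets benchmark" if rate >= BENCHMARK else "needs follow-up support"
    print(f"{segment}: {rate:.0%} adoption ({status})")
```

Breaking the adoption rate out by segment, rather than reporting one overall number, is what surfaced the teacher-versus-business-owner difference described above.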


What Were the Results of the Summative Evaluation?

  1. Continuation and Refinement: After analyzing the Level 3 interviews and usage logs, I found that about 70% of participants consistently applied at least one AI tool (e.g., ChatGPT for drafting content, or Copilot for document automation). Because this surpassed our benchmark (60–65%), we chose to continue delivering the workshop.
  2. Workshop Modifications: Level 1 and 2 data suggested some participants (especially teachers) desired more practical “time-saving hacks,” so we added an extra segment on prompt-engineering best practices.
  3. ROI Confirmed: Level 4 metrics for businesses (like a 15% jump in marketing engagement rates) convinced us that the program should scale. While not all businesses hit that threshold, the average improvement signaled that further expansions or additional modules for advanced AI integration might be valuable. (A simplified sketch of this KPI and ROI arithmetic appears after this list.)
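
The sketch below illustrates the kind of arithmetic behind that Level 4 judgment. All figures are placeholders for illustration only; they are not the actual engagement numbers, costs, or monetized benefits collected from the participating businesses.

```python
# Hypothetical Level 4 figures for one participating business (placeholder values).
engagement_before = 1000   # e.g., monthly marketing interactions before the workshop
engagement_after = 1150    # interactions observed during the 3-6 month follow-up window

# Percentage change in the KPI (a 15% jump with these placeholder numbers).
kpi_change = (engagement_after - engagement_before) / engagement_before
print(f"Marketing engagement change: {kpi_change:.0%}")

# A basic training-ROI calculation: net benefit relative to program cost.
program_cost = 2_000       # hypothetical cost of sending staff through the workshop
monetized_benefit = 3_500  # hypothetical dollar value attributed to the KPI gain
roi = (monetized_benefit - program_cost) / program_cost
print(f"Estimated training ROI: {roi:.0%}")
```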

Overall Impact and Future Use

From the collected data and interviews, the AI Tools Academy demonstrated enough positive performance changes and organizational gains to justify continuing with the summative evaluation approach and iterating on the content design. Thanks to the four-level analysis, I could speak confidently to stakeholders about:

  • How satisfied participants were (Level 1),
  • Whether they truly learned the material (Level 2),
  • If they followed through on using AI at work (Level 3),
  • And how the workshop ultimately affected key performance indicators (Level 4).

With this evidence in hand, I continued the workshop with slight modifications, and I plan to re-evaluate it in another six months to keep building on these successes. This experience taught me that summative evaluation not only proves the “worth” of training but also reveals targeted areas for continuous improvement.