Evaluate Instructional and Non-Instructional Interventions

Challenge 1: Implement formative evaluation plans 

Criteria for successful completion of this challenge: Evidence of implementing a formative evaluation plan to provide information that can be used to make adjustments and improvements in the design. Evidence must show a formative evaluation plan (expert review, one-to-one evaluation, small group, and field trial). Reflection must address: Which phase(s) of formative evaluation did you conduct? Which data did you collect (e.g., clarity and accuracy of instruction, general attitudes, procedural issues, etc.)? What were the results of formative evaluation and how did it affect your design? Examples: Evaluation Plan (EDCI 528), Design Documents (EDCI 572), Learning Module (EDCI 575), eLearning Project (EDCI 569), artifacts showing strategies for implementation of an evaluation plan (design, performance, workplace, educational, other).

Reflection

Reflection on Formative Evaluation of VGAL Training Modules 

During the analysis phase of this project, I focused specifically on the formative evaluation of the training design for Volunteer Guardians ad Litem (VGALs) to gain deeper insight into how to improve the curriculum, to “get it right the first time,” and, as my old sales manager used to say, to “work smarter, not harder.” My goal was not to measure the final impact of the program, but rather to collect data before starting the design and delivery process so I could move the ball down the field with confidence. Why attempt to build a solution that nobody has the technical capacity to use? Assuming instead of asking would have risked time and money. By administering Google Forms surveys to active GALs, VGALs, and staff, I gathered insights on learner preferences, accessibility challenges, and unmet support needs. This formative data collection was central to shaping the modules before they were finalized. 
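As a small illustration of how survey responses like these can be examined once exported from Google Forms, the sketch below tallies training-format preferences by respondent role from a CSV export. This is a minimal sketch under stated assumptions: the file name and column headers are hypothetical stand-ins, not the actual survey fields.

```python
# Minimal sketch: tally training-format preferences by role from a
# Google Forms CSV export. The file name and column headers below are
# hypothetical stand-ins, not the actual survey fields.
import csv
from collections import Counter, defaultdict

preferences_by_role = defaultdict(Counter)

with open("vgal_survey_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        role = row["Current role"].strip()  # e.g., "VGAL", "Active GAL", "Staff"
        # Google Forms stores checkbox answers as one comma-separated string.
        for fmt in row["Preferred training formats"].split(","):
            preferences_by_role[role][fmt.strip()] += 1

for role, counts in preferences_by_role.items():
    print(role)
    for fmt, count in counts.most_common():
        print(f"  {fmt}: {count}")
```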

What Data Was Collected 

The data collected included: 

  • Demographics: Age ranges, educational backgrounds, and current role (VGAL, active GAL, or staff). 
  • Training format preferences: Online modules, self-paced learning, blended learning, group sessions/discussions, and reading/research. 
  • Challenges and barriers: Time and scheduling conflicts, emotional stress, technical issues (e.g., file uploads, access, internet outages), and lack of clarity in communication. 
  • Desired supports: Requests for real-life practice, more detailed court process explanations, job aids and glossaries, notes/slides after training, and opportunities to learn from experienced GALs. 
  • Concerns about the role: Emotional load, time commitment, readiness for testimony, and uncertainty about effectiveness or personality fit. 

Results of the Formative Evaluation 

The formative evaluation produced several clear findings: 

  • Format Preferences: Most age groups favored online modules, self-paced learning, and group discussions. Older volunteers (60–79) valued blended formats and in-person training, while participants with advanced degrees leaned toward more in-depth online modules. 
  • Barriers: Time and scheduling stood out as the most significant barrier, followed by emotional fatigue and stress. While most participants did not foresee major technology challenges, several respondents reported access, uploading, or vision-related issues. 
  • Support Needs: Learners wanted practical guidance such as checklists, court-reporting examples, and follow-up notes. Many also requested more opportunities for mentorship or exposure to the lived experience of seasoned GALs. 
  • Concerns: Respondents expressed uncertainty about emotional readiness, time management, and clarity of direction in the role. 

How the Formative Evaluation Affected My Design 

The formative evaluation directly shaped how I redesigned the modules: 

  • Blended Flexibility: Because older volunteers strongly preferred a mix of online and in-person formats, I prioritized a blended structure with flexible scheduling and asynchronous options. 
  • Practical Tools: I integrated new job aids, documentation templates, and glossaries into the modules to address requests for clear expectations and practical court-reporting guidance. 
  • Accessibility Improvements: Based on feedback, I incorporated larger fonts, clearer upload instructions, and simplified navigation for online platforms. 
  • Mentorship Opportunities: I added reflection prompts and optional Q&A sessions with experienced GALs to provide real-world context and reduce role-related uncertainty. 
  • Emotional Support Considerations: I began including resources and discussions around emotional resilience, recognizing the stress and mental load associated with the role. 

Contrast With Summative Evaluation 

While formative evaluation helped me refine the design during development, a summative evaluation will later determine whether the program as a whole truly improves GAL performance. Summative evaluation will look at end results such as: 

  • Learner satisfaction (reaction). 
  • Knowledge and skill gains (learning). 
  • On-the-job performance improvements (behavior). 
  • System-level or court-level impact (results). 

For now, the formative phase ensured that the modules were learner-centered, accessible, and directly responsive to real challenges faced by GALs. 

In conclusion, conducting the formative evaluation highlighted the importance of listening to volunteers early and often. The results influenced immediate adjustments in content depth, delivery format, and support materials. This phase ensured that the training design is not just legally aligned with RCWs, but also practically relevant and responsive to the lived experience of volunteers. As I move forward, a summative evaluation will build on these formative insights to demonstrate the overall impact and effectiveness of the training program, creating proof of lasting value for the organization I have the pleasure of working with and improving, both now and in the future. 

Challenge 2: Implement summative evaluation plans

Criteria for successful completion of this challenge: Evidence of implementing a summative evaluation plan to evaluate the effectiveness of the instruction and decide whether to continue to use instruction. Evidence must show an evaluation plan (e.g., Kirkpatrick’s Four Levels of evaluation). Reflection must address: If the implementation of the summative evaluation met your expectations. What were the results of the summative evaluation (did you continue with program/instruction, did you cancel it, did you modify it)? Examples: The following assignments are applicable if implemented: Evaluation Plan (EDCI 528), Evaluation Plan (EDCI 577), artifacts showing implementation of an evaluation plan (design, performance, workplace, educational, other).

Artifact

Reflection

When I first developed the AI Tools Academy evaluation plan (outlined above), my primary intention was to confirm that the instruction genuinely improved participants’ AI-related knowledge, skill application, and long-term performance. Specifically, I wanted to see whether K–12 teachers, managers, and small-business owners could embrace AI in their real-world contexts after attending a single-day workshop. In crafting the four-level summative evaluation plan (based on Kirkpatrick’s model), I hoped to (1) gather immediate feedback and learning gains, (2) see whether on-the-job behavior actually changed, (3) measure organizational ROI, and (4) determine whether the workshop should be refined, continued, or discontinued.


Did the Implementation Meet My Expectations?

Overall, yes, the plan met my expectations in these ways:

  1. Comprehensive Data: Because we used different instruments (Level 1 Reaction surveys, Level 2 quizzes and scenario tasks, Level 3 follow-up interviews, and Level 4 ROI metrics), I obtained a more holistic view of how effectively participants were learning to integrate AI into their daily workflows.
  2. Practical Timelines: The short-term (4–6 week) check-ins provided immediate insights into behavior change, while the longer (3–6 month) KPI assessments showed whether organizational results had actually improved.
  3. Stakeholder Buy-In: Presenting concise data at each stage helped convince skeptical stakeholders (such as small-business owners and district administrators) that the Academy had measurable value.

One unexpected hurdle was how varied the results were across different audiences—teachers often needed extra prompts and follow-up, whereas business owners tended to adopt AI more quickly if they saw direct ROI. It confirmed that a one-size-fits-all approach to evaluating this workshop would not suffice and that segmented data analysis was crucial.
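To make the idea of segmented analysis concrete, here is a minimal sketch of how Level 3 follow-up results could be summarized per audience and compared against an adoption benchmark. The records, field names, and cut-off are illustrative placeholders, not the Academy's actual data.

```python
# Sketch of segmented Level 3 analysis: the share of participants in each
# audience who applied at least one AI tool, compared to a benchmark.
# The records and cut-off below are illustrative placeholders.
from collections import defaultdict

followup_records = [
    {"audience": "teacher", "applied_ai_tool": True},
    {"audience": "teacher", "applied_ai_tool": False},
    {"audience": "manager", "applied_ai_tool": True},
    {"audience": "business_owner", "applied_ai_tool": True},
    {"audience": "business_owner", "applied_ai_tool": True},
]

BENCHMARK = 0.65  # illustrative cut-off at the upper end of the 60-65% target range

totals = defaultdict(int)
adopters = defaultdict(int)
for record in followup_records:
    totals[record["audience"]] += 1
    if record["applied_ai_tool"]:
        adopters[record["audience"]] += 1

for audience, total in totals.items():
    rate = adopters[audience] / total
    status = "meets benchmark" if rate >= BENCHMARK else "needs follow-up"
    print(f"{audience}: {rate:.0%} adoption ({status})")
```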


What Were the Results of the Summative Evaluation?

  1. Continuation and Refinement: After analyzing the Level 3 interviews and usage logs, I found that about 70% of participants consistently applied at least one AI tool (e.g., ChatGPT for drafting content, or Copilot for document automation). Because this surpassed our benchmark (60–65%), we chose to continue delivering the workshop.
  2. Workshop Modifications: Level 1 and 2 data suggested some participants (especially teachers) desired more practical “time-saving hacks,” so we added an extra segment on prompt-engineering best practices.
  3. ROI Confirmed: Level 4 metrics for businesses (such as a 15% jump in marketing engagement rates) convinced us that the program should scale. While not all businesses hit that threshold, the average improvement signaled that further expansions or additional modules for advanced AI integration might be valuable; a simple sketch of the underlying ROI arithmetic follows this list.
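Below is that arithmetic in its simplest form, hedged heavily: the dollar amounts are made-up placeholders rather than the Academy's actual costs or monetized benefits, and a fuller Level 4 analysis would also document how engagement gains are converted into dollars.

```python
# Illustrative Level 4 ROI arithmetic. All dollar amounts are made-up
# placeholders, not the AI Tools Academy's actual figures.
program_cost = 5_000.00        # delivery, materials, facilitator time
monetized_benefit = 7_500.00   # e.g., estimated dollar value of the lift
                               # in marketing engagement over the period

roi_percent = (monetized_benefit - program_cost) / program_cost * 100
print(f"ROI: {roi_percent:.1f}%")  # prints "ROI: 50.0%"
```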

Overall Impact and Future Use

From the collected data and interviews, the AI Tools Academy demonstrated enough positive performance changes and organizational gains to justify continuing with the summative evaluation approach and iterating on the content design. Thanks to the four-level analysis, I could speak confidently to stakeholders about:

  • How satisfied participants were (Level 1),
  • Whether they truly learned the material (Level 2),
  • If they followed through on using AI at work (Level 3),
  • And how the workshop ultimately affected key performance indicators (Level 4).

With this evidence in hand, the workshop continued—with slight modifications—and I plan to re-evaluate it in another six months to keep building on these successes. This experience taught me that summative evaluation not only proves the “worth” of training but also reveals targeted areas for continuous improvement.