Evaluation and Implementation

Why this supra‑badge matters 

Evaluating instructional and non‑instructional interventions is central to change management. Planning builds the case; evaluation proves the worth—and guides improvement. Effective leaders distinguish when a gap requires training (skills/knowledge) versus when other levers (process, tooling, incentives, policy) drive performance. This supra‑badge demonstrates how I make those calls, collect evidence, and translate findings into decisions and diffusion strategies that take solutions from pilot to practice. 

Sub‑Badge: Evaluation Planning & Differentiating Interventions 

Competencies. Determine whether the problem is instructional or systemic; define success criteria and data sources up front; align measures to objectives and stakeholders’ KPIs. 

Personal achievements. I apply a decision tree early in projects to test for non‑training causes (access, workflow friction, role clarity). This protects scope, prevents “training as a reflex,” and ensures that when we do build training, its outcomes map to real performance measures, not vanity metrics. 

Sub‑Badge: Formative Evaluation (Iterate with Evidence) 

Competencies. Collect early data; analyze learner context; iterate content, modality, and supports before full rollout. 

Personal achievements & artifact. For Volunteer Guardian ad Litem (VGAL) training, I administered Google Forms surveys to active GALs, VGALs, and staff to surface learner preferences, accessibility challenges, and unmet support needs. Listening early and often led to immediate design changes—right‑sizing content depth, adjusting delivery format, and adding targeted job aids. Formative evaluation here functioned as risk reduction: we caught mismatches before they became rework. 

Sub‑Badge: Summative Evaluation (Demonstrate Impact) 

Competencies. Design summative studies; interpret results across multiple levels; translate findings into decisions. 

Personal achievements & artifact. I conducted a summative evaluation of the AI Tools Academy across the four Kirkpatrick levels: 

  • Level 1 – Reaction: participant satisfaction and perceived relevance. 
  • Level 2 – Learning: knowledge/skill gains against stated objectives. 
  • Level 3 – Behavior: adoption of AI at work (frequency, task fit, barriers). 
  • Level 4 – Results: movement in KPIs the workshop aimed to influence. 

This process strengthened my ability to speak with stakeholders about value, not just activity, while pinpointing targeted improvements for the next iteration. 

Sub‑Badge: Dissemination & Diffusion — Vision of Change (Challenge 1) 

Competencies. Design a plan for dissemination and diffusion of instructional and non‑instructional interventions; create a vision of change that aligns learning goals, performance goals, and organizational goals; address dialogue and negotiation needs during planning. 

Personal achievements & artifact (VGAL Microlearning Redesign). 

How I arrived at the redesign (new vision & interventions). This collection highlights my work designing AI‑enhanced microlearning modules for Volunteer Guardians ad Litem (VGALs). The redesign process began with a performance gap analysis and in‑depth SME interviews, which revealed two critical capability areas: cultural awareness and interviewing skills. Through dialogue with stakeholders, we prioritized needs that aligned most directly with both learner development and organizational priorities: ensuring GALs conduct interviews that are culturally sensitive, legally sound, and supportive of the child’s best interests. 

Alignment with learning, performance, and organizational goals. From this foundation, I created detailed storyboards and design documents that evolved into interactive H5P modules. Each artifact blends theory and innovation, drawing on behaviorism (structured practice and immediate feedback), cognitivism (signaling, chunking, worked examples, AI‑powered whiteboards), and Gagné’s Nine Events of Instruction to guide attention, practice, and assessment. I also integrated a custom GPT assistant for scenario‑based coaching, along with NotebookLM‑generated videos and interactive quizzes. These tools directly supported learning goals (build interviewing competence), performance goals (consistent, neutral documentation and effective advocacy), and organizational goals (compliance with relevant RCWs (Revised Code of Washington) and improved outcomes for children in care). 

Dialogue and negotiation in the planning process. The vision of change emerged from intentional dialogue with SMEs, program managers, and active volunteers. We negotiated scope (which skills first), modality (what to deliver as job aids vs. microlearning), and adoption supports (practice prompts, exemplars, checklists). This collaborative alignment ensured interventions were research‑based, feasible in volunteers’ reality, and anchored to statutory responsibilities and the organization’s mission. 

Resulting diffusion plan. The strategy paired early pilots with rapid feedback loops, concise manager/mentor guides, and lightweight analytics to identify where learners needed more support. The result: accessible, engaging, AI‑supported training designed to scale while preserving quality. 

Overall experience: what I gained 

This supra‑badge strengthened my ability to: (1) separate instructional from non‑instructional causes, (2) de‑risk builds with formative evidence, (3) demonstrate impact with summative measures that matter to stakeholders, and (4) craft a vision of change and diffusion plan that secures alignment and adoption—not just launch. 

Applying this in current and future practice 

  • Current practice. Every solution now includes an evaluation brief and a diffusion brief at kickoff: success criteria, data sources, decision thresholds, stakeholder map, message map, and pilot‑to‑scale checkpoints. 
  • Future practice. I’ll deepen behavior‑level measures (Level 3) with lightweight instrumentation and coach prompts, link Level‑4 outcomes to operational dashboards, and continue using AI‑augmented microlearning plus job aids to accelerate capability building while honoring policy and privacy. 

Closing thought 

Evaluation & Implementation made outcomes visible: not just what we launched, but what changed and how it spreads. The artifacts here show how I listen, measure, align, and diffuse; these are habits I’ll keep refining to drive adoption, performance, and meaningful results at scale. 

Artifact challenge statements

  • Challenge 1 (Formative Evaluation): proof that I am able to implement formative evaluation plans.
  • Challenge 2 (Summative Evaluation): further proof that I am able to implement summative evaluation plans.
  • Challenge 1 (Dissemination & Diffusion — Vision of Change): proof of the ability to create a vision of change that aligns learning and performance goals with organizational goals.