Challenge 1: Determine subordinate and prerequisite skills and knowledge 

Criteria for successful completion of this challenge: Evidence of determining subordinate and prerequisite skills and knowledge. Reflection must address: How you determined subordinate and pre-req skills/knowledge for an audience (goal analysis, instructional analysis, etc.).

Examples: Demonstration of identifying all of the steps a learner needs in order to achieve the learner goal, organizing learning objectives in a hierarchical order, identifying the steps needed in order to meet a goal, EDCI 572 Design Documents, EDCI 577 Content/Audience analysis (Jet Blue, Instructional Product Evaluations), artifacts focused on determining pre-req skills and knowledge (design, performance, workplace, educational, other).

Artifact

Reflection

In designing the Evaluation Plan for the AI Tools Academy, I discovered how crucial it is to create a systematic, data-driven approach that captures the true impact of a training program. The workshop itself aims to teach K–12 teachers, college students, and small business owners how to integrate AI tools ethically and effectively; however, building a plan to measure and demonstrate its success required careful decisions at several levels, from selecting instruments to timing data collection.

First, I learned the importance of aligning evaluation instruments (surveys, quizzes, scenario tasks) with the course’s learning objectives. By clarifying exactly what we wanted participants to know (e.g., AI basics, responsible usage) and do (e.g., create lesson plans or business strategies), I could design Level 1 and Level 2 instruments that directly tested those competencies. For instance, scenario-based tasks allowed me to observe how well participants could apply AI to a real-life challenge, whether that was drafting an AI-driven marketing email or developing a short AI-assisted lesson plan. This approach ensured that the data collected would be actionable, helping stakeholders see whether participants truly gained practical skills.

Second, the Kirkpatrick Four-Level Model served as a reliable blueprint for structuring the plan. Level 1 (Reaction) data helped ensure the session was relevant, well paced, and resonated with a broad audience (teachers, business owners, etc.). Level 2 (Learning) confirmed actual skill uptake, while Level 3 (Behavior) relied on follow-up surveys and interviews to see whether participants continued using AI after the training. Finally, Level 4 (Results) involved revisiting organizational metrics over the following months, which taught me how to tie short-term learning outcomes to long-term ROI, such as faster lesson prep or improved sales pipelines.

Lastly, incorporating accessibility features was a major takeaway. The evaluation plan specified steps to create large-print surveys, screen-reader-friendly forms, and “Not Applicable” options for certain Likert items, ensuring everyone—regardless of job role or ability—could offer meaningful feedback. This underscored how thoughtful evaluation planning benefits all participants, not just the ones we initially picture.

Overall, by mapping instruments to learning objectives, timing data collection points to measure immediate and sustained application, and reporting results back to organizational leadership, I saw how a thorough evaluation plan can reveal the deeper impact of a single-day workshop. It helps answer critical questions like “Did this training change day-to-day practices?” and “Are we seeing tangible organizational improvements?”—thus reinforcing the training’s ultimate purpose of shaping how educators and business owners responsibly harness AI in their work.

Challenge 2: Use appropriate techniques to analyze various types and sources to validate content

Criteria for successful completion of this challenge: Evidence of utilizing validation techniques (checking the source, researching the author – education, experience, reputation, how many times cited, etc.). Reflection must address: The specific techniques you used to validate your sources and content.

Examples: Any research paper (EDCI 513 Final Literature Review, EDCI 531 Final Paper), peer reviews focused on checking others’ sources, annotated bibliography (EDCI 660), work-related documentation (design, performance, workplace, educational, other) focused on the use or creation of validation techniques.

Artifact

Reflection

For the competency “Use appropriate techniques to analyze various types and sources to validate content,” I have selected my paper, “Transforming Workforce Training: The Impact of AI on Soft and Traditional Skills Development,” as an artifact. This literature review explores the role of Artificial Intelligence (AI) in both workforce training and adult education, and it required a rigorous process of source validation to ensure that the arguments presented were supported by credible and well-founded research.

To validate my sources and content, I employed several techniques. First, I verified the credibility of the journals and publishers by cross-checking the databases from which I retrieved the articles, such as Springer Nature and the Journal of Ethics in AI. The peer-reviewed status of the publications and their reputations in the fields of education and technology ensured that I was working with high-quality, scholarly materials.

I also researched the backgrounds of the authors to assess their qualifications. For example, I reviewed the education and experience of key authors like Salman Khan and Hannele Niemi to confirm their expertise in AI in education. I looked at their prior publications and checked their institutional affiliations, which included respected universities and think tanks. This verification process helped me understand the depth of their contributions and any biases they might bring to their studies.

Another technique I used was analyzing the number of times each article had been cited by other scholars. Articles such as Hattie and Timperley’s (2007) research on the importance of feedback were highly cited, which indicated their significance and the influence of their findings within the academic community. This gave me confidence that these sources were foundational and widely recognized as valid contributions to the field.

Furthermore, I examined the dates of publication to ensure that my sources were up-to-date, particularly for a fast-evolving topic like AI. For instance, I relied on articles published within the last three years, such as those by Cardon et al. (2024) and Wach et al. (2023), to incorporate the most recent developments and discussions on AI applications in workforce training. Ensuring the timeliness of my sources allowed me to present current trends and potential future implications in a rapidly changing technological landscape.

Finally, I cross-referenced multiple sources to identify consistencies or discrepancies in the findings related to AI’s impact on skill development. This comparative approach enabled me to validate the content by looking for convergent evidence across different studies. For example, both Morandini et al. (2023) and Ostin (2023) discussed AI’s role in upskilling, and the consistency in their conclusions provided strong support for my arguments.

Overall, these validation techniques—checking source credibility, researching author backgrounds, assessing citation counts, verifying publication dates, and cross-referencing studies—helped ensure that my literature review was built on a solid foundation of reliable information. This approach allowed me to confidently support my arguments about the transformative potential of AI in both soft and traditional skills training.

Moving forward, I plan to continue refining my ability to validate sources effectively. Recognizing the value of strong research in instructional design, I am committed to applying these validation techniques rigorously in future projects to ensure that my instructional materials are not only accurate but also based on the most credible and up-to-date evidence available.