Becoming the Agent-Savvy LXD: Preparing for the Rise of Super-AI-Powered Employees

By Jason Boursier


Not long ago, the job of a Learning Experience Designer (LXD) centered on well-scoped eLearning courses, instructor-led training, and well-worn LMS platforms. But something has changed. Or rather, everything has changed. I recently had a revelation: as AI rapidly transforms how people work, learn, and make decisions, a new role must emerge within the field of Learning and Development.

Enter the Agentic Learning Architect — or what I like to call the Agent-Savvy LXD.

This position isn’t theoretical. It’s necessary. And it’s already beginning to take shape.

We are stepping into a future where every employee is supported by an invisible team of AI agents. These agents manage data, write emails, generate insights, and even make suggestions based on real-time business logic. But here’s the catch: just because these tools exist doesn’t mean people know how to use them well.

That’s where we come in.


Why Traditional L&D Isn’t Enough Anymore

As a learning designer, I used to think that if a training had clear objectives and was accessible, it would be effective. But today, we face workplace shifts that traditional L&D can’t keep up with:

| Shift in the Workplace | Why Classic L&D Falls Short | What Agentic LXDs Must Do |
| --- | --- | --- |
| Personal AI agents handle tasks | Training only teaches human SOPs | Design handoff simulations and agent workflows |
| Learning is “in the flow of work” | Courses are too slow and detached | Create just-in-time micro-lessons triggered by data |
| Roles evolve faster than job titles | Curricula lag behind new tech releases | Build flexible, AI-updated learning pipelines |
| Hyper-personalization is expected | One-size-fits-all overwhelms learners | Use AI to customize paths, with human ethical oversight |
| Agents still fail | Over-trust creates new risks | Train learners to recognize bias, hallucination, and overreach |

Core Responsibilities of an Agent-Savvy LXD

To support the rise of hybrid, AI-augmented workers, LXDs like me need to upskill in six core areas:

  1. Workflow Mapping & Role Split
    • Define what the human does, what the AI agent handles, and how they collaborate. Build training around these models (a minimal sketch follows this list).
  2. Prompt-Craft & Agent Orchestration Literacy
    • Teach people how to think like a conductor: framing prompts, delegating tasks, refining outputs.
  3. Data-Rich Adaptive Delivery
    • Leverage tools like xAPI to track not just what learners click, but how they interact with AI tools in real time.
  4. Ethics & Trust Calibration
    • Design experiences that make learners think about bias, transparency, and intellectual property.
  5. Inclusive Enablement
    • Bridge gaps in access, confidence, and representation through scaffolded AI support and inclusive content.
  6. Continuous Content Governance
    • Vet AI-generated content. Be the final filter before it hits learners’ screens.
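
To make the first of these responsibilities concrete, here is a minimal Python sketch of a workflow map with an explicit role split. The boat-reservation scenario, the task names, and the HANDOFF marker are illustrative assumptions of mine, not a standard notation.

```python
from dataclasses import dataclass
from enum import Enum

class Owner(Enum):
    HUMAN = "human"      # the employee keeps this step
    AGENT = "agent"      # an AI agent handles this step
    HANDOFF = "handoff"  # agent output the employee must review before it moves on

@dataclass
class WorkflowStep:
    name: str
    owner: Owner
    notes: str = ""

# Illustrative role split for a boat-reservation workflow (all names invented)
reservation_workflow = [
    WorkflowStep("Gather the customer's request", Owner.HUMAN),
    WorkflowStep("Check availability and draft a confirmation", Owner.AGENT),
    WorkflowStep("Review the draft for wrong dates or prices", Owner.HANDOFF,
                 notes="This is where the simulation puts the learner to work"),
    WorkflowStep("Send the confirmation and log the booking", Owner.HUMAN),
]

# Every HANDOFF step becomes a candidate for a handoff simulation
handoff_points = [step.name for step in reservation_workflow
                  if step.owner is Owner.HANDOFF]
print(handoff_points)
```

Once the role split is written down this explicitly, the training design almost writes itself: every HANDOFF step is a moment where the human needs practice, and every AGENT step is a moment where the human needs to understand what they delegated.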

Learning Theories Still Matter (Maybe More Than Ever)

As AI joins the workforce, human learning won’t disappear — it will evolve. Four timeless theories still guide how we design for this shift:

  • Connectivism: Workers need to learn how to navigate a network of human and AI collaborators.
  • Distributed Cognition: AI tools become part of the learner’s memory and processing system.
  • Self-Determination Theory: Learners must retain autonomy, grow competence, and feel connected, even as AI supports them.
  • Activity Theory: Understanding the tools, tensions, and goals behind learner behavior remains vital.

Tools We’ll Need in Our LXD Toolbelt

The Agent-Savvy LXD will use next-gen tools:

  • Autogen / LangGraph for agent simulations (a library-agnostic sketch follows this list)
  • Microsoft Copilot / Slack bots for in-flow learning
  • Articulate Storyline + API hooks for interactive scenarios
  • ZapWorks for AR-enhanced field tasks
  • Learning Locker / Power BI for adaptive xAPI-driven feedback
  • TrustLayer / Holistic AI for ethical checkpoints
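
Because the APIs of frameworks like Autogen and LangGraph change quickly, the sketch below stays library-agnostic: it shows the shape of an agent-handoff simulation in plain Python, with `draft_email` standing in for a real model or agent call. Everything here is an illustrative assumption rather than framework code.

```python
import random

def draft_email(request: str) -> str:
    """Stand-in for a real agent call; sometimes returns a flawed draft so
    learners get practice catching agent mistakes."""
    drafts = [
        f"Hi! Your booking for '{request}' is confirmed for Saturday at 10:00.",
        f"Hi! Your booking for '{request}' is confirmed for 30 February.",  # deliberate error
    ]
    return random.choice(drafts)

def learner_accepts(draft: str) -> bool:
    """In a real simulation this is the learner's decision in the course UI;
    a simple check stands in for it here."""
    return "30 February" not in draft

def run_handoff_simulation(request: str) -> str:
    draft = draft_email(request)    # the agent does the routine drafting
    if learner_accepts(draft):      # the human stays accountable for what goes out
        return f"SENT: {draft}"
    return "ESCALATED: the learner rejected the draft and rewrote it by hand"

print(run_handoff_simulation("intro sailing lesson"))
```

The point of the exercise is not the code; it is the decision point in the middle, where the learner either trusts, corrects, or escalates the agent's work.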

How I’m Preparing for This Role (And How You Can Too)

This revelation didn’t come from a single article. It came from watching tools evolve, projects transform, and learners struggle with emerging workflows. To prepare:

  • I’m building agent-assisted task simulations for real-world systems (like boat reservations or sailing lessons).
  • I’m integrating xAPI into Articulate Storyline to track not just completion, but interaction with AI prompts (see the example statement after this list).
  • I’m studying responsible AI frameworks and ethics training from MIT.
  • I’m connecting with early adopters in Discord and Slack groups focused on AI in learning.
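
For the second bullet, here is roughly what an xAPI statement recording an interaction with an AI prompt could look like when posted to an LRS; Storyline would send something similar through a JavaScript trigger. The verb and activity IDs, the endpoint, and the credentials are placeholders I invented; only the overall actor/verb/object/result structure and the version header come from the xAPI spec.

```python
import requests

# Placeholder LRS endpoint and credentials; substitute your own
LRS_ENDPOINT = "https://example-lrs.example.com/xapi/statements"
LRS_AUTH = ("lrs_key", "lrs_secret")

# Minimal xAPI statement capturing an interaction with an AI prompt,
# not just a course completion (verb and activity IDs are illustrative)
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Sample Learner"},
    "verb": {
        "id": "https://example.com/xapi/verbs/prompted",
        "display": {"en-US": "prompted"},
    },
    "object": {
        "id": "https://example.com/activities/agent-draft-email",
        "definition": {"name": {"en-US": "Drafted an email with an AI agent"}},
    },
    "result": {
        "response": "Summarize this reservation request for the sailing team",
        "extensions": {
            "https://example.com/xapi/ext/edits-to-ai-output": 3
        },
    },
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
print(response.status_code)
```

A statement like this lands in whatever LRS you use and can then feed the kind of Learning Locker / Power BI dashboards mentioned in the toolbelt above.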

And most importantly, I’m sharing my process. Because we’re not just designing content anymore — we’re designing collaboration between humans and intelligent systems.


What the Future Looks Like

| Timeframe | Title | Value Proposition |
| --- | --- | --- |
| 2025-26 | AI Learning Technologist | Build prompt libraries and smart micro-learning interventions |
| 2027-30 | Agentic Learning Architect | Design orchestration curricula and multi-agent simulations |
| 2030+ | Human-AI Fluency Officer | Lead org-wide AI + L&D strategy across HR, Ops, and Learning |

McKinsey projects that up to 40% of workflows will be AI-augmented by 2030. The need for ethical, creative, instructional leadership will only grow.


Final Thoughts

The question is no longer if hybrid human + AI workplaces are coming. They’re here. The real question is:

Who will help people learn to thrive in them?

I believe the answer is us — the instructional designers, LXDs, and learning architects who are willing to evolve, experiment, and lead.

If you’re reading this and nodding along, then I invite you to build this future with me.

Because the next great learning revolution won’t just happen in classrooms or LMSs. It will happen in the flow of intelligent work.

Let’s design it right.