The Challenge
When internal AI tools became available in 2023, most instructional designers were skeptical—or didn't know where to start. Some worried AI would replace their expertise. Others saw it as a solution looking for a problem.
And yet, we had real challenges AI could solve. Updating course materials was time-intensive. Accessibility reviews were manual and inconsistent. Technical research—understanding macOS APIs, writing shell scripts, validating code samples—consumed hours that could have gone to instructional design.
The question wasn't whether AI could help. It was: how do we adopt these tools without compromising quality or overwhelming the team?
My Approach
I led adoption of internal AI tools by demonstrating, not evangelizing. That is, I showed the team what worked—and what didn't—through real projects.
Leading by Example
I started with the Jamf 300 instructor guide—a technical certification course with a 100+ page instructor manual that needed substantial updates. Rather than write from scratch, I:
- Developed prompt templates for expanding technical explanations, adding troubleshooting scenarios, and clarifying complex concepts
- Created a workflow where AI generated draft content based on course objectives and SME input
- Edited rigorously to ensure technical accuracy, instructional alignment, and appropriate scaffolding
The result: I added over 5,000 words of high-quality instructor guidance—scenario-based examples, common student misconceptions, troubleshooting tips—saving 100+ hours of writing and research time.
Modeling Prompt Engineering
I ran internal workshops demonstrating how to write effective prompts:
- Context-setting prompts: "You are an expert in macOS device management. Explain X to IT administrators who are new to Apple platforms."
- Constraint-based prompts: "Rewrite this explanation for accessibility—use simpler language, define jargon, and add concrete examples."
- Iterative refinement: Showing the team how to improve outputs through follow-up prompts rather than accepting first drafts
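The three patterns above can be sketched as reusable templates. This is an illustrative sketch only; the template wording and helper names are hypothetical stand-ins, not the actual workshop materials.

```python
# Illustrative prompt-template sketch. The template text and function
# names are hypothetical examples, not the real workshop materials.

CONTEXT_TEMPLATE = (
    "You are an expert in macOS device management. "
    "Explain {topic} to IT administrators who are new to Apple platforms."
)

CONSTRAINT_TEMPLATE = (
    "Rewrite this explanation for accessibility: use simpler language, "
    "define jargon, and add concrete examples.\n\n{draft}"
)


def build_context_prompt(topic: str) -> str:
    """Context-setting prompt: fixes the model's persona and audience."""
    return CONTEXT_TEMPLATE.format(topic=topic)


def build_refinement_prompt(draft: str, feedback: str) -> str:
    """Iterative refinement: feed the prior draft back with targeted
    feedback instead of accepting the first output."""
    return f"Here is the current draft:\n\n{draft}\n\nRevise it: {feedback}"
```

The point of the templates is consistency: designers swap in a topic or a draft, and the audience framing and constraints stay fixed across the team.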
This wasn't about learning AI—it was about learning how to partner with AI to produce better training.
The Solution
I built repeatable workflows for common tasks:
Accessibility Reviews
I use AI to conduct accessibility reviews of course content—checking for:
- Unclear jargon without context
- Complex sentence structures that reduce readability
- Missing explanations for visual elements
- Opportunities to add concrete examples for abstract concepts
This doesn't replace human judgment—it surfaces issues I might miss after the 15th revision.
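A minimal sketch of the mechanical side of this pre-check, assuming a fixed jargon list and a word-count readability threshold; both are hypothetical examples here, and the actual AI-assisted review is broader than pattern matching.

```python
import re

# Hypothetical jargon terms that should be defined on first use; the real
# review is AI-assisted and broader than a fixed list like this one.
JARGON = {"MDM", "PKI", "FileVault"}
MAX_SENTENCE_WORDS = 25  # illustrative readability threshold


def accessibility_precheck(text: str) -> list[str]:
    """Surface candidate issues for human review; never auto-fix."""
    issues = []
    # Flag overly long sentences that reduce readability.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > MAX_SENTENCE_WORDS:
            issues.append(f"Long sentence ({len(words)} words): {sentence[:40]}...")
    # Flag jargon that appears without an inline parenthetical definition.
    for term in sorted(JARGON):
        if term in text and f"{term} (" not in text:
            issues.append(f"Jargon without inline definition: {term}")
    return issues
```

The output is a punch list for the designer, which keeps the human-judgment step explicit rather than letting the tool silently rewrite content.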
Technical Research and Scripting
I use AI to develop shell scripts and research technical aspects of macOS:
- Generating starter scripts for common automation tasks (then refining based on testing)
- Explaining macOS APIs and frameworks (e.g., "How does jamf recon interact with inventory management?")
- Validating code samples for accuracy and best practices
This accelerates research—AI provides a starting point, I verify against Apple documentation and real-world testing.
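One way to make that verification step concrete is a lightweight pre-flight check on AI-generated starter scripts before testing them on a device. This is a sketch under assumptions: the conventions checked and the sample script are illustrative, not a complete review process.

```python
def lint_generated_script(script: str) -> list[str]:
    """Flag missing safety conventions in an AI-generated shell script
    before it is tested on a real device. Illustrative checks only."""
    warnings = []
    if not script.lstrip().startswith("#!"):
        warnings.append("Missing shebang line")
    if "set -e" not in script:
        warnings.append("No 'set -e': errors will not stop the script")
    if "rm -rf /" in script:
        warnings.append("Dangerous command: recursive delete of root")
    return warnings


# Hypothetical AI-generated starter script for an inventory update;
# assumes the jamf binary at its usual /usr/local/bin/jamf location.
starter = """#!/bin/zsh
set -euo pipefail
# Trigger a Jamf inventory update
/usr/local/bin/jamf recon
"""
```

A clean lint result is the entry ticket to real-world testing, not a substitute for it; the script still gets run against Apple documentation and a test device before it reaches course content.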
Cross-Departmental Advocacy
I participate in cross-departmental internal AI discussions, helping teams across the organization understand:
- Where AI adds value (drafting, research, accessibility checks)
- Where it doesn't (instructional design decisions, learner empathy, assessment strategy)
- How to frame AI adoption as augmentation, not replacement
Impact
Team Adoption
- Instructional designers who were skeptical are now integrating AI into their workflows
- Prompt engineering has become a shared skill across the team
- Internal AI tool usage has increased as designers see concrete results
Time Savings
- Saved 100+ hours on the Jamf 300 instructor guide update alone
- Accessibility reviews that took hours now take minutes (with human validation)
- Technical research cycles reduced from days to hours
Quality Improvements
- More time for scenario development and hands-on practice design
- Richer instructor guidance (troubleshooting tips, common misconceptions, extension activities)
- More consistent accessibility standards across all content
What I Learned
AI is a force multiplier—not a replacement. The work that requires understanding learners, context, and instructional strategy still needs human expertise. And yet, AI accelerates the mechanical work—research, drafting, accessibility checks—freeing designers to focus on what actually makes training work.
Adoption requires demonstration, not persuasion. I didn't convince the team to use AI by talking about it—I showed them concrete examples. The Jamf 300 instructor guide wasn't just a deliverable; it was proof that AI could save time and raise quality.
Prompt engineering is instructional design. Writing effective prompts requires the same skills as writing learning objectives—clarity, specificity, understanding of audience and context. That is, instructional designers already have the skills to use AI well. They just need to see how.
Stakeholder communication matters as much as the tool. Framing AI as a "partner" rather than a "replacement" helped teams understand it wasn't devaluing their expertise—it was amplifying it. The instructional designer still makes the design decisions. AI just handles the drafting and research legwork.