Key distinction: Skills are not stored prompts — they set autonomous agents loose to take action (read/write files, run code, iterate). Emphasize the comparison table on the tutorial page.
Show the SKILL.md structure (YAML frontmatter + markdown instructions) and the directory tree layout
Walk through the three example skills: /tree (quick inline), /validate-code (forked, autonomous), /ship-it (inline with approval gates)
Explain inline vs forked execution — when you want to watch vs hand off
Show the -h flag pattern: /tree -h prints usage help, like a Unix man page. Recommend adding this to any skill that takes arguments.
Participants download and install the /newskill skill file
Demo: run /newskill reviewpaper to build a paper review skill interactively
Key point: the iterative refinement loop (describe → draft → feedback → revise) is itself a transferable pattern for working with AI
Let participants try building their own skill for a task in their research
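The SKILL.md structure and directory layout above can be sketched live in a terminal. This is an illustrative sketch, not the tutorial's exact files: the name/description frontmatter fields and the /tree skill body are assumptions about a typical minimal skill, and the path follows the ~/.claude/skills convention used later in these notes.

```shell
# Illustrative personal-skill layout (skill content is a made-up example)
SKILL_DIR="$HOME/.claude/skills/tree"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: tree
description: Print a compact directory tree of the current project
---

Print a compact directory tree of the working directory.
Skip hidden files and anything listed in .gitignore.
If called with -h, print a short usage message instead of running.
EOF
cat "$SKILL_DIR/SKILL.md"   # verify the file landed where skills are discovered
```

Showing the file with cat right after writing it doubles as the "check it's in the right place" debugging step from the exercise notes.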
Demo: Pong — Show Claude Code building a complete video game from a single prompt: “Make a version of Pong in a single HTML file, black and white, arrow keys, with a ball speed slider.” Opens in the browser and is immediately playable. Good wow-factor demo.
Demo: /arXiv — Run the /arXiv skill to fetch today's new astro-ph papers and generate an HTML summary table. Papers on dark energy and flux calibration are flagged and promoted to the top. Shows a practical, domain-specific skill that participants can adapt to their own field.
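The flag-and-promote step of the /arXiv demo can be sketched in a few lines of shell, useful if participants ask how keyword promotion might work. The titles and watch list below are made-up examples, not output from the actual skill.

```shell
# Sketch of the flagging step: promote titles matching a watch list to the top
# (titles and keywords are illustrative, not real skill output)
titles='Constraints on dark energy from new SNe
A catalog of M dwarf flares
Improved flux calibration for survey photometry'
watch='dark energy|flux calibration'
printf '%s\n' "$titles" | grep -Ei  "$watch" | sed 's/^/[FLAGGED] /'  # matches first
printf '%s\n' "$titles" | grep -Eiv "$watch"                          # the rest below
```

The same pattern generalizes: participants swap in their own watch list when adapting the skill to their field.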
Wrap-up
Show /cost to see session API usage
Point to "After the Session" section for getting personal API keys
Mention VS Code extension as alternative interface
Framing: Generative AI is an essential technical skill — not optional, not a fad. The field moves fast: today it's Claude Code, tomorrow it will be something else. The specific tools will change; the ability to work with AI agents will not.
Practical advice: Look at the tasks you spend significant time on and think about these tools as accelerators. What takes you a day that could take an hour? Then think bigger — what ambitious project would you attempt if the activation energy were lower?
The non-negotiable: Validation and verification are essential. AI makes it easy to produce results; your job is to make sure those results are correct. Never skip the step of checking the work.
Mac: Some participants may not have Python 3 — check with which python3
Windows: Python not on PATH (forgot to check the box during install)
Windows: PowerShell execution policy blocks .ps1 scripts — need Set-ExecutionPolicy
Both: 403 errors usually mean the API key is wrong or the model is not available
Both: "Auth conflict" warning is expected and harmless — tell participants to ignore it
Key pedagogical moments
Step 5 (sunset markers): This shows Claude Code's ability to break complex problems into sub-tasks, install libraries, and combine results
Step 7 (critical thinking): The Fourier analysis may give unexpected results — the dominant period should be ~1 day (diurnal cycle), but participants should think about whether the data supports this and what other periods appear
Plan Mode: Emphasize that reviewing the plan is critical — don't just rubber-stamp it. This teaches good habits for working with AI agents
Skills exercise
Participants need to install the SKILL.md file to ~/.claude/skills/newskill/SKILL.md — the download button is on the tutorial page
They must restart Claude Code after installing the skill for it to be discovered
If /newskill doesn't appear in autocomplete, check the file is in the right place: cat ~/.claude/skills/newskill/SKILL.md
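A one-liner worth having on a slide for the install check. The if/else wraps the cat from the note above so participants get a clear message instead of a raw "No such file" error; the path is where Claude Code discovers personal skills.

```shell
# Verify the /newskill install (restart Claude Code afterward to discover it)
f=~/.claude/skills/newskill/SKILL.md
if [ -f "$f" ]; then
  cat "$f"    # should print the YAML frontmatter + instructions
else
  echo "SKILL.md not found at $f - re-download it and restart Claude Code"
fi
```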
Start with the "skills vs stored prompts" comparison — this is the conceptual hook that makes skills click
The three example skills (/tree, /validate-code, /ship-it) show the range: simple inline, forked autonomous, and inline with approval gates
The /newskill demo works best with a concrete example like /reviewpaper — participants can relate to it
Encourage participants to think about their own recurring tasks: data processing pipelines, code review checklists, report templates
The -h flag pattern is worth emphasizing — demo /tree -h to show self-documenting skills. /newskill now asks users whether to include a -h flag when building new skills.
Skills with `context: fork` in their frontmatter run in isolated subagents; useful for long-running or expensive tasks
Project skills (.claude/skills/) can be committed to git and shared with collaborators — this is a practical selling point
Cost awareness
The ML exercise (Step 10) is the most expensive step — it involves multiple model calls and iterations
Have participants run /cost periodically to see cumulative usage
The workshop API key has a shared quota — if many participants run the ML exercise simultaneously, watch the quota