This page is for instructors only.
It is listed on the Site Map but not in the main navigation bar.
Session Run-of-Show
Brief recap of Session 1
Quick review: Gemini, NotebookLM, Gems
Any follow-up questions from last week
Two architectures (lecture, ~10 min)
Show the two architecture diagrams on the session page
Architecture 1: Google Colab + Gemini — everything in the cloud, free
Architecture 2: Local JupyterLab + HUIT Bedrock proxy — code and data stay on your machine, API calls billed to PI
Exercise 1: Google Colab with Gemini (3 notebooks)
Ensure everyone can access Colab with their Harvard Google account
Notebook 1 — AI in Colab: Gemini sidebar (chat) vs. magic wand (cell-level AI), pre-installed packages, installing new ones with !pip install
Notebook 2 — File I/O with Google Drive: Generating, writing, reading, and analyzing a CSV file on Google Drive
Notebook 3 — Data Browsing: Load SDSS galaxy photometry from Google Drive, inspect and visualize, then use Gemini chat to generate a 3D interactive plot and convert the notebook to a standalone program
Key point: two Gemini interfaces — the magic wand (cell context) vs. the blue Gemini star at the bottom (general chat, no cell context)
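If participants want to see the Notebook 2 pattern outside Colab, here is a minimal sketch of the generate → write → read → analyze loop. It uses a local temp file and stdlib csv with made-up column names; in Colab you would instead mount Drive (from google.colab import drive; drive.mount('/content/drive')) and use a path under /content/drive/MyDrive/.

```python
# Sketch of the Notebook 2 file I/O loop, using a local temp file in place
# of Google Drive. Column names ("trial", "value") are illustrative only.
import csv
import os
import statistics
import tempfile

def write_and_analyze(path):
    # Generate a small synthetic CSV
    rows = [("trial", "value")] + [(i, 10.0 + 0.5 * i) for i in range(5)]
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    # Read it back and compute a summary statistic
    with open(path, newline="") as f:
        values = [float(r["value"]) for r in csv.DictReader(f)]
    return statistics.mean(values)

with tempfile.TemporaryDirectory() as d:
    print(write_and_analyze(os.path.join(d, "demo.csv")))  # 11.0
```

The only Colab-specific piece is the Drive mount and path; the rest is ordinary Python file I/O.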
Exercise 2: Local JupyterLab with Jupyternaut
Walk through the setup guide (Mac | Windows): venv, pip install, environment variables, Jupyternaut settings
Common issue: forgetting to source ./set_env.sh before launching JupyterLab
Prompt 1: Damped harmonic oscillator notebook with slider controls — tests whether AI can generate a complete interactive notebook from scratch
Prompt 2: "Do some sensible analysis on sdss_photometry.csv" — tests whether AI can figure out what the data is and choose appropriate analyses without being told
Key point: local notebooks give you control over environment, files, and privacy
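For reference when grading Prompt 1 output, here is the core function a correct damped-oscillator notebook should contain, a minimal sketch with illustrative parameter names. In the generated notebook, ipywidgets sliders would control amplitude, gamma, and omega; here we just evaluate the curve.

```python
# x(t) = A * exp(-gamma * t) * cos(omega * t): underdamped oscillation
# with decay rate gamma and angular frequency omega (names are ours,
# not prescribed by the prompt).
import math

def damped_oscillator(t, amplitude=1.0, gamma=0.1, omega=2.0):
    """Displacement at time t for a damped cosine oscillation."""
    return amplitude * math.exp(-gamma * t) * math.cos(omega * t)

print(damped_oscillator(0.0))  # 1.0 (starts at full amplitude)
# The envelope decays: |x(t)| is bounded by amplitude * exp(-gamma * t)
print(abs(damped_oscillator(10.0)) <= math.exp(-1.0))  # True
```

A good AI-generated notebook wraps a plot of this function in ipywidgets.interact so the sliders redraw the curve live.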
Preview of Session 3
Show the Claude Code architecture diagram
Explain the difference: Colab/Jupyter AI works inside a notebook, while Claude Code is an autonomous agent operating at the OS level
No setup required before Session 3 — we'll do it together
Participants need a Harvard-affiliated Google account (g.harvard.edu) for Colab
Each notebook has detailed markdown cells — participants can follow along or work at their own pace
Notebook 3 uses sdss_photometry.csv — make sure participants upload it to their Colab Notebooks folder on Google Drive (same folder as the notebooks)
Notebook 3's Gemini exercises use the chat interface (blue star at bottom), not the magic wand
The 3D scatter plot prompt asks for redshift as radius with RA/Dec as angles on a sphere — expect Plotly output
The standalone program prompt asks Gemini to produce a .py file and an HTML user's guide — good example of AI going from notebook to distributable code
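The geometry behind the 3D scatter prompt is a plain spherical-to-Cartesian transform, sketched below; Gemini typically wraps the resulting x/y/z arrays in a Plotly figure, but the conversion itself needs only the math module.

```python
# Treat redshift z as radius and (RA, Dec) as angles on a sphere,
# as the Notebook 3 prompt asks. Function name is ours, not Gemini's.
import math

def sky_to_cartesian(ra_deg, dec_deg, z):
    """Map RA/Dec in degrees plus redshift-as-radius to (x, y, z)."""
    ra = math.radians(ra_deg)
    dec = math.radians(dec_deg)
    return (z * math.cos(dec) * math.cos(ra),
            z * math.cos(dec) * math.sin(ra),
            z * math.sin(dec))

print(sky_to_cartesian(0.0, 0.0, 0.1))   # (0.1, 0.0, 0.0)
print(sky_to_cartesian(0.0, 90.0, 0.1))  # ~(0.0, 0.0, 0.1): the pole
```

Spot-checking a few points like these is a quick way for participants to verify the AI-generated plot code before trusting the picture.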
Local Jupyter + HUIT
This exercise requires Python 3.13+ and a working HUIT API key
Participants create a virtual environment and a local set_env.sh script (Mac) or PowerShell environment variables (Windows)
The AWS_BEARER_TOKEN_BEDROCK environment variable must be set before launching JupyterLab — this is the most common failure point
Jupyter AI v3 (beta) is required: pip install "jupyter-ai==3.0.0b9" boto3
Jupyternaut settings: Settings → Jupyternaut Settings, set model ID to bedrock/converse/us.anthropic.claude-opus-4-5-20251101-v1:0, add api_base parameter
The damped oscillator prompt tests code generation + interactive widgets; the SDSS prompt tests whether AI can infer domain knowledge from column names
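Since the missing bearer token is the most common failure, a quick pre-flight check can save debugging time. The helper below is a hypothetical convenience for instructors, not part of the workshop materials; it only inspects the environment.

```python
# Verify AWS_BEARER_TOKEN_BEDROCK is set before launching JupyterLab
# (i.e., that participants remembered to source ./set_env.sh on Mac or
# set the PowerShell variables on Windows). Helper name is ours.
import os

def bedrock_token_ok(env=None):
    """Return True if the Bedrock bearer token is set and non-empty."""
    env = os.environ if env is None else env
    return bool(env.get("AWS_BEARER_TOKEN_BEDROCK", "").strip())

if not bedrock_token_ok():
    print("AWS_BEARER_TOKEN_BEDROCK is not set; set it in this shell "
          "before launching JupyterLab.")
```

Running this in the same terminal session that will launch JupyterLab catches the forgotten-source_env case immediately.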
Cost awareness
Remind participants that API calls through HUIT are charged to their PI's billing account
Encourage setting monthly spending limits when registering API keys
The workshop API key has a shared quota — remind participants not to run expensive operations
Discussion Points
How does AI-assisted coding compare to writing code from scratch?
When is AI code generation helpful vs. when does it get in the way?
How do you verify AI-generated code is correct?
What are the risks of relying on AI for debugging — can it introduce new bugs?
How do you decide between Colab (cloud) and local Jupyter (your machine)?
How well did the AI understand the SDSS data without being told what the columns mean?