Cover course goals, what generative AI is, and how it fits into research workflows
Gemini access and demos
Get everyone signed in with their Harvard-affiliated Google account and into Gemini
Have participants look for the Harvard shield icon next to their profile picture to confirm they are in a Harvard-affiliated account (not a personal Google account)
Give a demo/tour of the Gemini interface options:
Model selection (mention environmental and cost impacts of larger models)
Pull-down menu of tools (Deep Research, Canvas, etc.)
Uploading files with the + button
Audio input
Do the menu/meal planning prompt exercise together (Exercise 1 from the exercises page)
Ask participants to share their meal suggestions with the group — different people will get different answers from the same prompt. Use this to demonstrate the variability and non-deterministic nature of AI outputs.
Have participants select Canvas from the Gemini tools menu
Prompt: "Make me an interactive simulator of a damped simple harmonic oscillator. Allow me to set parameters with sliders."
This demonstrates how AI can generate functional interactive applications from a single sentence
Good discussion point: how might this be useful for teaching, exploring parameter spaces, or rapid prototyping?
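For instructors who want to sanity-check the generated simulator, the underlying physics fits in a few lines. A minimal closed-form solution of the underdamped oscillator (the function name, parameter names, and defaults here are illustrative, not anything Gemini will necessarily produce):

```python
import math

def damped_oscillator(t, x0=1.0, v0=0.0, omega0=2.0, gamma=0.3):
    """Underdamped solution of x'' + 2*gamma*x' + omega0**2 * x = 0 (requires gamma < omega0)."""
    omega_d = math.sqrt(omega0**2 - gamma**2)   # damped angular frequency
    c1 = x0                                     # matches initial position x(0) = x0
    c2 = (v0 + gamma * x0) / omega_d            # matches initial velocity x'(0) = v0
    return math.exp(-gamma * t) * (c1 * math.cos(omega_d * t) + c2 * math.sin(omega_d * t))
```

Plotting this against the generated app's sliders (initial position, initial velocity, natural frequency, damping) gives a quick correctness check on what the AI built.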
NotebookLM demo and grant proposal exercise
Walk through Exercise 3 (grant proposal review)
Show source citations and document grounding
Ethics discussion
Key message: we currently lack shared expectations and norms for AI use in research
Disclose, disclose, disclose — when in doubt, be transparent about AI use
Have an explicit conversation with your colleagues, and especially your advisor/supervisor, about expectations for AI use
Pose these questions to the room (show of hands):
How many of you think it's OK to use generative AI to write the first draft of a research paper? (If neighboring participants vote differently, have them discuss with each other and then revote.)
Regardless of how you voted on the last question, should AI entities ever be listed as co-authors?
If you're sent a paper to review, is it OK to ask for AI help? (Show of hands.)
What about drafting a research proposal? Do you know where to look for guidance on AI use in proposals?
How many grad students and postdocs do we have in the room? Have you discussed AI use with your advisor/supervisor? Do you have a shared understanding?
See also the additional discussion questions below
Validation and verification discussion
Key message: you bear final responsibility for any and all generative AI outputs you use. AI is a tool, not an authority.
Discuss the importance of validating and verifying AI-generated content before using it in research, publications, or decision-making
AI can produce plausible-sounding but incorrect results (hallucinations), miss important nuances, or reflect biases in training data
Treat AI output as a first draft that requires expert review, not as a finished product
Literature review exercise (Exercise 4)
Participants upload their own papers or use provided set
Data browsing exercise (Exercise 5)
Upload fake_data.xlsx to Gemini
Give participants a few minutes, then poll the group (show of hands):
Did it make and show you plots?
Did it do some interpretation of the data?
Did it note that it's unlikely to find bees and butterflies at 10 F?
Did it find the correlation of ~0.5 between bees and butterflies?
Use this to discuss variability in AI outputs — different participants may get different results from the same data
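One concrete way to explain the variability: LLMs sample each output token from a probability distribution, so repeated runs of the same prompt can diverge. A toy sketch of temperature-scaled sampling (this is not Gemini's actual sampler; all names and numbers are illustrative):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Repeated sampling from the same distribution yields different choices
words = ["plot", "table", "summary"]
logits = [2.0, 1.5, 1.0]
picks = [words[sample_token(logits)] for _ in range(10)]
```

Running the last line twice will usually give different lists; lowering the temperature concentrates probability on the top option. This is one reason identical prompts still produce different answers across participants.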
Gems exercise (Exercise 7)
Demo the cosmic distance ladder review Gem as an example of a shared Gem for a course — show how it works out of the box with no setup
Explain what Gems are: custom AI assistants with two kinds of persistence — persistent prompts and persistent uploaded context materials
Walk through converting the paper reviewer from Exercise 6 into a Gem
Show how to write the persistent prompt that defines behavior
Demonstrate uploading reference documents to a Gem
Show sharing — Gems can be shared within a Google Workspace, but individual chats cannot be shared
Wrap-up summary
Why use Harvard/HUIT accounts: cybersecurity (no vendor storage; data not used for model training), privacy (data up to Level-3 confidentiality may be uploaded), and accounting (API costs are billed to research grants through HUIT)
Ethics — DISCLOSE! We lack established norms for AI use in research. When in doubt, disclose. Talk to your advisor/supervisor and establish shared expectations.
Gemini: General-purpose AI assistant — text generation, image generation, data analysis, code execution. Good for quick tasks, brainstorming, and exploring ideas.
NotebookLM: Document-grounded research assistant. Upload your sources and get answers with citations back to specific passages. Best for literature review, document comparison, and grant proposal checking.
Gems: Custom AI assistants with persistent prompts and persistent uploaded context. Build reusable, shareable tools for recurring research tasks.
Choosing a model in Gemini: Discuss the available Gemini models (e.g., Flash and Pro) and their cost and climate consequences. More powerful models consume more energy and cost more per query.
Tools menu: Demonstrate the Tools menu options, including Deep Research, Create images, and Canvas for collaborative document editing.
Uploading files with the + sign: Show how to use the + button to upload documents, images, and data files for analysis.
Cloud execution: Emphasize that all data processing and code execution happen on the cloud — your computer is just the input and output device.
Key NotebookLM Features to Demonstrate
RAG encoding of uploaded files: Explain that NotebookLM grounds its responses in the uploaded files, which are encoded for Retrieval-Augmented Generation (RAG) on upload: NotebookLM builds a searchable index of your documents so that answers are drawn from, and cited to, specific passages.
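NotebookLM's actual pipeline uses learned embeddings and is not public, but the retrieve-then-answer idea behind RAG can be shown with a toy bag-of-words retriever (purely illustrative; all names here are made up):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(chunks):
    """'Encode' each document chunk as a bag-of-words vector."""
    return [(chunk, Counter(chunk.lower().split())) for chunk in chunks]

def retrieve(index, query, k=1):
    """Return the k chunks most similar to the query; a real system
    would then hand these to the LLM as grounding context."""
    q = Counter(query.lower().split())
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

The key point for participants: the model answers from the retrieved passages rather than from its training data alone, which is why NotebookLM can cite specific sources.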
Core use cases: NotebookLM is particularly good for interrogating and comparing documents, summarizing information across multiple sources, and even generating audio podcasts from your research materials.
How to share notebooks: Demonstrate the sharing feature by clicking the three-dot menu in the upper right corner of a notebook, then selecting "Share". You can share with specific people via email (within your Google Workspace) or generate a shareable link. Show how collaborators can view, comment, or edit depending on permissions granted.
Show how links to source documents work: When NotebookLM generates a response, demonstrate how to click on the citation numbers (e.g., [1], [2]) that appear inline in the text. These are clickable links that jump directly to the relevant passage in the source document, highlighting the exact section that supports the claim. This source-grounding is what distinguishes NotebookLM from general-purpose LLMs.
Instructor Notes: Data Browsing Exercise (Exercise 5)
The fake_data.xlsx file contains columns for date, temperature (F), number of bees, and number of butterflies. Key things to look for in participant results:
Plot generation: Most participants should get plots, but the style and type will vary (scatter, line, bar). Good opportunity to discuss how the same prompt yields different outputs.
Physical interpretation: Did it flag that insect counts near zero at 10 F make biological sense? This tests whether the AI applies domain knowledge beyond pure statistics.
Correlation: The bees and butterflies columns have a correlation of approximately 0.5. Did Gemini find and report this? Some runs will, some won't — depends on the model and how it interprets the prompt.
Teaching moment: Poll the group with a show of hands for each question above.
The variation in responses illustrates that AI outputs are non-deterministic — the same data and similar prompts can produce meaningfully different analyses.
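To verify the planted correlation yourself before class, a minimal Pearson helper (this sketch deliberately avoids assuming the spreadsheet's exact column headers; load the bee and butterfly columns however you prefer, e.g. with pandas, and pass them in as lists):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

With the actual fake_data.xlsx columns, the result should come out near the ~0.5 noted above; that gives a ground truth against which to judge participants' Gemini runs.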
Instructor Notes: Gems (Exercise 7)
What Gems are: Custom AI assistants with a persistent prompt (system prompt) that defines their behavior. Similar to Custom GPTs in ChatGPT.
Key advantage over regular Gemini: The persistent prompt means you don't have to re-explain the task every time. The Gem remembers its role across conversations.
Creating a Gem: Access through "Gem manager" in the Gemini sidebar. Click "New Gem", write the persistent prompt, name it, optionally upload reference documents.
Sharing: Gems can be shared with others in your Google Workspace — useful for standardizing workflows across a research group (e.g., everyone uses the same paper reviewer Gem).
Example Gem ideas for researchers: Paper reviewer, grant writing assistant, literature synthesizer, methods section writer, data analysis explainer, lab protocol checker.
Uploading documents: Like NotebookLM, you can upload reference documents (journal guidelines, style guides) that the Gem always considers.
General Discussion Questions
For wrapping up or filling time:
What worked well in Gemini's response?
What could be improved?
How might you refine the prompt to get better suggestions?
How does this relate to using AI for research tasks?
Session Preview Notes
Before concluding, briefly preview the upcoming sessions:
Session 2 (Next week): We'll integrate AI directly into Python workflows using Google Colab with Gemini. Participants will learn to debug, document, and generate code with AI assistance.
Session 3 (Week after): We'll explore Claude Code, a powerful command-line tool that can autonomously execute multi-step coding tasks, run tests, and create complete analysis pipelines.