1. What We're Building
A formatted Word document summarizing recent literature, produced reproducibly from a plan file.
The end product is a literature review in Word format covering a specific research topic. For this tutorial, we will use "ERP biomarkers in autism" as our example. The document will include a summary table of the top papers, key findings from each study, and a narrative synthesis that ties them together.
More importantly, the review is reproducible. The plan file records every search term, filter, and instruction you used. If your advisor asks you to expand the date range or add a new keyword, you update one line in the plan and run the loop again. The AI regenerates the document with your changes applied. No copy-pasting, no starting over.
2. Setting Up
Create a project from the IRL Basic Template and install two skills.
Start by creating a new IRL project. Open a terminal and run:
irl init "ERP biomarker lit review" -t irl-basic
This creates a folder named something like 260209-erp-biomarker-lit-review with the standard IRL structure inside. Open this folder in VS Code (or any editor).
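The exact layout depends on your template version, but based on the paths used later in this tutorial, the structure looks roughly like this (a sketch, not a guaranteed listing):

```
260209-erp-biomarker-lit-review/
├── plans/
│   └── main-plan.md     <- instructions and revisions live here
├── 02-data/             <- raw data the loop produces
└── 03-outputs/          <- finished documents
```

Your template may include other numbered folders as well; the two that matter for this tutorial are 02-data and 03-outputs.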
Next, open plans/main-plan.md and find the First Time Setup section. Add these two lines to install the skills your project needs:
## First Time Setup

- Install PubMed Skill: https://github.com/K-Dense-AI/claude-scientific-skills/tree/main/scientific-skills/pubmed-database
- Install Word DOCX Skill: https://github.com/anthropics/skills/tree/main/skills/docx
Skills are small instruction sets that teach your AI assistant how to use specific tools. The PubMed skill teaches it how to query the PubMed database, and the DOCX skill teaches it how to produce properly formatted Word documents. You install them once, and they stay available for every loop.
Here is what your workspace looks like after setup. The plan file is open in the editor, and the project structure is visible in the sidebar.
3. Writing Your First Instructions
Tell the AI what to do, step by step, in plain language.
Now comes the part that matters: writing the Instruction Loop. This is where you tell the AI exactly what work to perform. Scroll down in your plan file and find the Instruction Loop section. Write your instructions as a numbered list:
## Instruction Loop

1. Search PubMed for "ERP biomarkers autism" (last 5 years, English, review articles)
2. Save top 20 results to 02-data/pubmed-results.json
3. Summarize each paper (title, authors, year, key findings) in a table
4. Draft a narrative literature review to 03-outputs/lit-review.docx
That is the entire instruction set. Four lines. Notice that you are writing in plain English, not code. You specify what you want, not how to do it. The AI figures out the how, using the skills you installed in the setup step.
Each line maps to a concrete action: search, save, summarize, draft. The file paths (02-data/pubmed-results.json and 03-outputs/lit-review.docx) tell the AI exactly where to put the results.
Your editor now shows the instructions typed into the plan file. Nothing else has changed yet.
4. Running the Loop
One command. That is all you need.
Open a terminal-based AI assistant (Claude Code, Copilot Chat, or any AI assistant with file access) and type:
Review main-plan.md, check for any revisions, and execute
This is the only command you will ever need for any IRL project. The AI reads your plan file, checks whether there are any revisions since the last run, and then executes every instruction in the loop. You do not need to remember different commands for different tasks. The plan file contains all the context.
While the loop runs, you will see the AI working through each step: connecting to PubMed, downloading results, building the summary table, and drafting the review document. This typically takes one to three minutes depending on the number of results.
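Under the hood, the PubMed skill most likely talks to NCBI's E-utilities API. The sketch below builds the kind of esearch request that would implement step 1 of the loop. The endpoint, search tags, and parameter names are real E-utilities features, but the exact query the skill constructs is an assumption:

```python
# Sketch of the kind of PubMed query the skill likely issues under the hood.
from urllib.parse import urlencode

params = {
    "db": "pubmed",
    # review[pt] and english[la] are standard PubMed search tags
    "term": "ERP biomarkers autism AND review[pt] AND english[la]",
    "reldate": 5 * 365,   # restrict to roughly the last 5 years
    "datetype": "pdat",   # ...by publication date
    "retmax": 20,         # top 20 results, matching step 2 of the loop
    "retmode": "json",
}
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urlencode(params)
print(url)
```

Knowing roughly what the query looks like helps when you revise the plan later: each filter in your instruction line maps to a concrete search parameter.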
The terminal shows the AI executing your instructions. You can watch each step as it happens.
5. Reviewing the Output
Check what the AI produced. Your judgment drives the quality.
After the loop finishes, two new files appear in your project:
02-data/pubmed-results.json contains the raw search results from PubMed. This is your data layer. You can inspect it, share it, or feed it into a different analysis later.
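A quick script can sanity-check the data layer before you read the full document. The field names below (`title`, `year`) are an assumption about how the skill structures pubmed-results.json; open the real file and adjust the keys to match:

```python
import json

def load_results(path):
    """Read the saved search results from the data layer."""
    with open(path) as f:
        return json.load(f)

def headline(results):
    """One line per paper (year and title) for a quick scan."""
    return [f'{p.get("year", "?")} - {p.get("title", "untitled")}' for p in results]

# Stand-in record mimicking the assumed shape of the file; in your project,
# use: headline(load_results("02-data/pubmed-results.json"))
sample = [{"title": "Example review A", "year": 2023}]
print(headline(sample))
```

If the count or the titles look wrong here, fix the search instruction in the plan before bothering with the Word document.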
03-outputs/lit-review.docx is the formatted literature review. Open it and read through it carefully. Check that the citations are accurate, the summaries reflect each paper's actual findings, and the narrative makes sense for your audience.
This is where your expertise matters most. The AI can summarize papers and structure a document, but only you can judge whether the review captures the right themes, whether any important studies are missing, and whether the analysis holds up to scrutiny.
New files appear in the sidebar. The preview shows an excerpt from the generated document.
6. Making Revisions
Update the plan, run again. The loop handles the rest.
After reviewing the document, you notice two things you want to change: the review needs a dedicated section on methodology gaps across the studies, and the citations should use APA format. Instead of manually editing the Word document, you go back to the plan file and add a revision.
## Revisions - Add a section on methodology gaps across the reviewed studies. - Use APA citation format throughout.
Now run the same command again:
Review main-plan.md, check for any revisions, and execute
The AI sees your revisions, reads through the existing data, and regenerates the document with a new methodology gaps section and APA citations applied throughout. The original search results are still in 02-data/, so it does not need to re-query PubMed. It just rebuilds the output.
This is the power of the loop. Each revision is a small, traceable change. You can look at the plan file and see exactly what changed between versions. If the new section is not right, you revise the revision. The cycle continues until the output meets your standards.
The editor shows the revision added to the plan. The terminal re-runs the loop with the updated instructions.
7. The After-Loop
Save a checkpoint. Your plan file is your running record.
After each loop, save a version checkpoint. If you are using Git, this is a commit. If not, simply note the date and what changed in your activity log. The IRL Basic Template includes an activity.md file for exactly this purpose.
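An activity-log entry needs nothing fancy. The format below is one possibility, not a required structure:

```
## 2026-02-09, Loop 2
- Added methodology gaps section
- Switched citations to APA format
```

One dated entry per loop is enough to reconstruct the project's history later.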
The plan file itself becomes your running record of every decision and change you made. Six months from now, when someone asks why you filtered for review articles only, or why you chose a five-year window, the answer is right there in the plan. No digging through chat logs. No trying to remember what you asked the AI to do.
The terminal shows the checkpoint process completing. Your project is versioned and traceable.
8. What You've Learned
One pattern, any project.
Here is what you just did. You wrote a plan describing what you wanted. You ran a single command. You reviewed the output and decided what to change. You updated the plan and ran the command again. That is the entire IRL workflow.
The same pattern works for any project. A data analysis, a grant draft, a conference poster, a course syllabus, a technical report. You change the instructions in the plan file, install whatever skills you need, and run the same loop. The structure stays the same. Only the content changes.
The key insight is that your plan file is the durable artifact, not the chat. Every decision you made, every revision, every search parameter is recorded in a file you own. That is work you can retrace, share, and build on.
Read the full IRL Explainer to learn more about the pattern →