A practical system for redesigning assignments so they measure what students actually know — even when they have access to AI. Built around six design principles and organized into three sections you can adopt independently.
Generative AI can produce a polished essay in minutes. Detection tools are unreliable. Banning AI from homework is unenforceable — students report it's now embedded in everyday tools like Google Search. Teachers need a different approach: assignments where student thinking is the product, visible throughout the process, not hidden behind a final draft.
Process over product.
The most effective response to AI isn't detection or prohibition — it's better assignment design. When you grade the process (checkpoints, drafts, revisions, oral defenses) instead of just the final product, AI stops being a shortcut and starts being a tool. The shift: move from grading only final products (which AI can generate) to grading the thinking that leads to them (which AI cannot fake).
A useful ratio: make process worth 50–70% of the total grade, with the final product making up the rest.
Every strategy in this guide flows from six core principles. You don't need to hit all six on every assignment, but keeping them in mind helps you make stronger design choices.
- **Clarity:** Students know exactly what’s expected. Vague assignments create anxiety and shortcuts.
- **Authenticity:** Work connects to real contexts and personal meaning. Generic prompts get generic AI responses.
- **Visible process:** The journey is visible and valued. When only the final product counts, students skip straight to it.
- **Demonstration:** Learning is demonstrated, not just submitted. A document alone proves nothing.
- **Reflection:** Students articulate their own thinking. AI can’t write about a student’s genuine confusion or breakthrough.
- **Accessibility:** All students can access the work and succeed at it. Any strategy that resists AI must work for students with IEPs, language barriers, anxiety, and limited resources.
If you read nothing else, try one of these on your next assignment:
- Add one checkpoint before the final deadline.
- Anchor one element in local or personal context.
- Include one moment of live evidence.
- Shift your rubric weight toward process (50–70%).
- State your AI expectations clearly.
Start with one. When student thinking is visible throughout the process, the question shifts from “Did they use AI?” to “What did they learn?”
Three sections that work independently but reinforce each other. Start with whichever one fits your most pressing need.
Anchor assignments in local context, state a clear purpose, and label AI expectations so students know what’s expected.
Generic prompts get generic AI answers. The fix: anchor assignments in things AI doesn't know — your classroom discussions, your students' lives, your local community.
| Generic (AI-vulnerable) | Anchored (process-visible) |
|---|---|
| "Explain supply and demand" | "Document the price of one item at three stores in our area. Use supply and demand theory to explain the price differences you found" |
| "Analyze symbolism in The Great Gatsby" | "Choose a symbol from Gatsby that reminds you of something in your own community. Write about why this symbol resonates across time and place" |
| "Write about courage" | "Describe a moment this year when you had to choose between what was easy and what was right. What did you learn about yourself?" |
Students are less likely to use AI when they understand why a skill matters: not just what they'll do, but why it benefits them. For example:
“In college, you'll write 20-page papers without AI. In jobs, your emails represent your intelligence. This is practice thinking through complex ideas in your own voice. AI cannot develop that skill for you.”
Students told us expectations vary across classes and gray areas cause anxiety. Label every assignment with a stoplight color:
- 🔴 **Red (no AI):** Thesis development, key evidence selection, outlines, core analysis
- 🟡 **Yellow (limited AI):** Draft development (after an in-class foundation), research, revision, editing
- 🟢 **Green (AI encouraged):** AI literacy tasks, critique of AI output, evaluation and verification work

When students develop their thesis, select evidence, and map their argument in class, the final draft can happen at home, even with some AI assistance, because the intellectual foundation is already documented and verified.
Scaffolds, checkpoints, and studio time that prevent the panic and confusion that drive students to AI.
Understanding why students turn to AI helps you design around the problem. Most of the reasons below are design problems, not character problems:
20% of students said they use AI mainly because they don't know how to start. Give them a way in:
**In-class brainstorm before assigning homework (20 minutes)**
- 10 min: silent individual brainstorm
- 5 min: share one idea with a partner
- 5 min: whole class generates possible approaches
- Then assign: "Continue developing one of these ideas at home."
Those 20 minutes of class time make it far less likely that a student stares at a blank screen at 11 p.m. and reaches for ChatGPT.
Students are most likely to turn to AI in a last-minute deadline crunch. Breaking assignments into required stages prevents the panic that leads to shortcuts:
| Week | Checkpoint | Weight |
|---|---|---|
| 1 | Topic + 3 potential sources | 5% |
| 2 | Annotated bibliography (5 sources) | 15% |
| 2 | Thesis + outline with topic sentences | 15% |
| 3 | Rough draft | 20% |
| 3 | Peer review session (in class) | 5% |
| 4 | Final draft + AI process statement | 40% |

The checkpoints here total 60% of the grade, with the final draft at 40%, squarely within the 50–70% process weighting recommended above.
Dedicate class time to work sessions where students complete foundational elements with you in the room. When they get stuck, they ask you instead of asking AI.
When students revise, ask them to write a brief memo (3–5 sentences) explaining what they changed and why. This turns revision from invisible polish into visible thinking.
Version history shows how writing actually developed: multiple work sessions, visible revisions, ideas building on each other. Red flags should start conversations, not accusations — some students with learning differences may write elsewhere first, and English learners who got grammar help can trigger the same flags.
Documentation and demonstration strategies that verify what students actually know — through process artifacts and live evidence.
You learn more about what a student actually understands from their checkpoints than from the final product. Suggested weighting: 50–70% for process artifacts, 30–50% for the final product.
For major assignments, require a brief reflection (100–150 words) submitted alongside the final work. Think of it as a Works Cited page for AI: students describe which tools they used, how they used them, and which thinking is their own.
Grade on honesty and specificity, not on whether they used AI. Use process statements on major assignments — not every assignment. Overuse turns them into exactly the kind of busywork that drives students toward AI.
AI can't fake live thinking. Set aside 10–30% of the grade for formats where students show understanding in front of you:
- **Quick checks:** Exit tickets, one-minute essays, in-class annotations, whiteboard problem-solving
- **Oral formats:** Oral defenses, presentations with Q&A, panel discussions, speed rotations
- **Peer formats:** Peer workshops, fishbowl discussions, gallery walks with defenses
- **Creative formats:** Concept mapping, infographics, photography with annotation, podcasts, sketchnoting
You don't need to conference with every student on every assignment; rotating through a few students per assignment keeps live verification manageable.
Every verification method must work for all students — IEPs, English learners, anxiety, limited resources. Offer at least two pathways for any verification. Build accommodations into the design from the start. If you suspect AI over-reliance, talk first — listen, understand, then decide together what happens next.
Before distributing any major assignment, verify that expectations are clear, the task is anchored in real context, checkpoints make the process visible, at least one moment of live evidence is planned, AI expectations are labeled, and every student has an accessible pathway to succeed.
Designing the task provides the pedagogy; supporting the process prevents shortcuts; gathering evidence verifies learning. In practice:
A teacher labels an essay assignment as 🟡 Yellow — Limited AI permitted.
The assignment is designed with staged checkpoints: in-class thesis development, annotated source review, rough draft, and final paper.
The student submits an AI Process Statement alongside their final draft, documenting that they used ChatGPT to brainstorm themes but developed their argument independently.
The teacher grades process artifacts (50–70%) and the final product (30–50%), with an oral defense confirming the student can explain their reasoning.
Everything here is free: print these resources, adapt them, and share them with your department.
This system is grounded in primary research (11 faculty interviews, 13 student interviews, a 328-student survey) conducted at a New England independent school in Fall 2025, combined with secondary research spanning process-focused pedagogy (Emig 1971; Murray 1972; Flower & Hayes 1981), authentic assessment (Wiggins 1990; Newmann et al. 1996), oral assessment (Joughin 1998, 2010), personalization and engagement (Cordova & Lepper 1996), AI assessment frameworks (Perkins et al. 2024, 2025; TeachAI 2025; Bower et al. 2024), and national adoption data (College Board 2025; Alan Turing Institute 2025). Developed as a Brown University Master's in Technology Leadership Critical Challenge Project.
Join the newsletter for future posts, research updates, and tools that extend this system.