Research · Teachers + Leaders
January 24, 2026 · 4 min read

What 328 Students Told Us About AI

Student use of AI is already more nuanced than the public conversation suggests. The important question is not whether students use it, but what kinds of schoolwork make that use shallow or thoughtful.

Public conversation about students and AI often collapses into two caricatures.

One story says students are using AI to avoid thinking. The other says AI is simply the next calculator and schools should stop worrying.

Neither is precise enough to help a secondary school make good decisions.

In research I conducted at Brown with 328 students, the picture that emerged was messier and more instructive: students are already making situational judgments about when AI helps, when it shortcuts their thinking, when it saves time, and when it feels educationally hollow.

That nuance matters.

Finding 1: students are not using AI in only one way

Students described a wide range of use:

  • brainstorming before starting
  • clarifying a difficult concept
  • checking structure or organization
  • generating examples
  • translating or simplifying language
  • producing full drafts when they were stuck or rushed

The policy implication is obvious: blanket language about "AI use" is too vague to guide behavior. Schools need more precise categories of acceptable, limited, and unacceptable use.

Finding 2: time pressure changes behavior fast

When deadlines tightened, students' judgment often narrowed to whatever got the work done.

That does not excuse poor academic choices, but it does tell us something about the environment. If a class structure routinely rewards fast polish over visible process, students will naturally reach for tools that produce polish.

This is one reason I care so much about assignment design. Integrity is not only a character question. It is also a systems question.

Finding 3: students can tell when AI weakens their learning

One of the more interesting patterns was that many students already recognized the tradeoff.

They could describe moments when AI helped them get unstuck, and different moments when it let them bypass the part of the work that actually mattered. In other words, students often know when they are learning and when they are merely completing.

That gives schools an opening.

Instead of treating students only as compliance risks, we can build more explicit reflection into the work:

  • What did AI help you do?
  • What did you reject?
  • Where did your thinking change?
  • What evidence shows the final product is still yours?

Those questions move students toward judgment rather than simple rule-following.

Finding 4: teachers need better evidence, not just better detection

The study reinforced a view I already held in practice: detection alone is the wrong center of gravity.

If teachers rely on final products as the main proof of learning, AI will continue to scramble their confidence. But if classrooms collect evidence of reasoning, revision, explanation, and choice, the assessment picture gets much stronger.

That evidence can take many forms:

  • annotated drafts
  • oral checkpoints
  • process notes
  • source comparison logs
  • reflection on AI use
  • in-class decision points

These are not anti-technology measures. They are better teaching measures.

Finding 5: policy language rarely reaches the classroom intact

Students often live with rules that sound clear in leadership documents but vague in actual coursework.

A school may say "AI may be used for brainstorming but not for final writing," but what counts as brainstorming? Does restructuring count? Does paraphrasing? What about generating counterarguments?

The implementation gap is where confusion grows.

This is why policy-to-practice work matters. Schools need examples, models, and shared language that teachers can actually use in assignments and students can actually interpret.

What secondary schools should do with findings like these

The point of this kind of research is not to produce a dramatic headline. It is to improve design.

For grades 9-12, I think the highest-value responses are:

  1. Write clearer categories of AI use.
  2. Redesign assignments around visible evidence of thinking.
  3. Train teachers with subject-specific examples, not generic AI overviews.
  4. Build student reflection into AI-supported work.
  5. Review where school structures are unintentionally rewarding shortcut behavior.

Those moves are more durable than any one tool policy.

The bigger takeaway

Students are not waiting for schools to become comfortable. They are already building habits.

That means schools need to move faster toward clarity, but not toward panic. The goal is not to eliminate ambiguity from a changing technology. The goal is to help students and teachers develop better judgment inside that ambiguity.

That is the work I want this site to support.

Chris Meehan

Academic Technology Director at Berkshire School, researching AI in grades 9-12 at Brown University. I publish practical frameworks, tools, and articles for secondary-school educators navigating AI.