
Inline screenshots in student questions: the lightest-weight AI for education

When students can attach a screenshot of the confusing slide, instructors spot confusion 5× faster. Here's why inline image attachments are the quiet winner of 2026 classroom AI.

inline images · classroom AI · student questions · teaching tools · EdTech


There's been a lot of noise about AI in education: personalized tutors, AI grading, generative lesson plans. The most impactful AI feature in my classroom this semester? Letting audience members attach a screenshot to their question.

The workflow

  1. Student is confused by a slide
  2. Pulls out phone (already scanned the QR at start of class)
  3. Taps the camera icon in the question form
  4. Snaps a photo of the slide on the screen
  5. Types "I don't get this"
  6. Hits send
  7. Instructor's sidebar shows: question + photo

Total time: 15 seconds.

Why it's the lightest-weight AI feature

Notice there's no complex AI inference happening in the workflow itself. The student takes a normal phone photo. The instructor sees the photo. That's it.

The AI layer enhances it in two ways:

  • Clustering: if multiple students submit photos of the same slide, the sidebar groups them. The instructor sees "these 12 students are confused by slide 14" at a glance.

  • OCR/context extraction: for photos of handwritten problems, OCR extracts the text so the instructor can search and sort.

Both of these are optional enhancements on top of the core photo-upload feature.
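As a rough sketch of the clustering idea (all names here are hypothetical, and TA pilot's actual implementation may differ): each photo gets an embedding from an image model, and submissions whose embeddings are mutually similar get grouped into one cluster.

```typescript
// Hypothetical sketch of grouping question photos by embedding similarity.
// In practice `embedding` would come from an image-embedding model;
// here it is just a plain vector so the logic stands alone.
type Submission = { student: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy single-pass clustering: each submission joins the first
// cluster whose first member is similar enough, else starts a new one.
function cluster(subs: Submission[], threshold = 0.9): Submission[][] {
  const clusters: Submission[][] = [];
  for (const s of subs) {
    const home = clusters.find(
      (c) => cosine(c[0].embedding, s.embedding) >= threshold
    );
    if (home) home.push(s);
    else clusters.push([s]);
  }
  return clusters;
}
```

A greedy pass like this is enough at classroom scale (tens of submissions), which is part of why the feature stays lightweight.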

Why it beats typed-only questions

Typed questions in a classroom context often fail because the student can't articulate what's confusing. "I don't understand slide 14" is unhelpful. "I don't understand slide 14 — [photo of the specific equation]" is extremely helpful.

The photo removes the articulation burden. The instructor sees exactly what the student is looking at.

Specific classroom patterns

STEM problems mid-exam

Student stuck on problem 3. Taps camera, snaps their worksheet, types "I get to step 2 then I'm stuck." Instructor sees the student's work and the specific sticking point. Targeted help.

Language class examples

Teacher shows a complex example sentence. Student snaps it, types "why is 的 here?" Teacher can answer precisely without re-reading the whole sentence.

Code review in CS class

Instructor shows a code snippet. Student snaps it, circles a line (future feature!), types "this line looks off." Real-time code review.

Art critique

Student snaps their draft, types "is this composition balanced?" Instructor can offer feedback without requiring the student to describe the work in words.

Implementation notes

The image upload flow in TA pilot:

  • Student pastes or captures an image from their phone
  • Client-side resizing to 800px longest side
  • Uploaded to Supabase Storage via anonymous RLS policy
  • URL attached to the question row
  • Instructor's sidebar renders the image inline
  • TTL: 36 hours (same as session, auto-cleaned)
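The 800px resize step is just a proportional scale of the longest side. A minimal sketch of the dimension math (the real flow would draw the image onto a canvas at these dimensions and upload the resulting blob; the bucket and path names in the comment are assumptions, not TA pilot's actual ones):

```typescript
// Compute target dimensions so the longest side is at most maxSide,
// preserving aspect ratio and never upscaling small images.
function targetDims(
  w: number,
  h: number,
  maxSide = 800
): { w: number; h: number } {
  const longest = Math.max(w, h);
  if (longest <= maxSide) return { w, h }; // already small enough
  const scale = maxSide / longest;
  return { w: Math.round(w * scale), h: Math.round(h * scale) };
}

// In the browser, the resized image would then be drawn and uploaded,
// roughly (hypothetical bucket/path names):
//   const { w, h } = targetDims(img.width, img.height);
//   canvas.width = w; canvas.height = h;
//   canvas.getContext("2d").drawImage(img, 0, 0, w, h);
//   canvas.toBlob((blob) =>
//     supabase.storage.from("question-photos").upload(path, blob));
```

Resizing client-side keeps uploads fast on classroom Wi-Fi and keeps storage costs trivially small.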

No PII stays on our servers longer than the session. No face recognition. No scanning for content. Just photo → question → instructor.

Privacy considerations

  • Students should know photos are visible to the instructor (not the whole class by default)
  • Instructors should avoid sharing a student's photo with the class without asking

  • A "post to class" button is coming, with an explicit confirm step
  • Photos auto-delete 36 hours after the session ends
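The 36-hour auto-delete is a simple TTL check against when the session ended. A minimal sketch of that check (the `Photo` shape is hypothetical; the real cleanup would run server-side, e.g. as a scheduled job against Supabase Storage):

```typescript
const TTL_MS = 36 * 60 * 60 * 1000; // 36 hours, matching the session TTL

// Hypothetical record shape for an uploaded question photo.
type Photo = { url: string; sessionEndedAt: number };

// Return only photos still within their TTL; everything else is due
// for deletion by the cleanup job.
function livePhotos(photos: Photo[], now: number): Photo[] {
  return photos.filter((p) => now - p.sessionEndedAt < TTL_MS);
}
```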

The bigger lesson

AI in education isn't always about sophisticated models. Sometimes it's about reducing friction for a simple, high-value workflow. Attaching a photo to a question is low-tech. It's also the highest-impact classroom feature we've shipped.

Lesson: before reaching for generative AI, look for the friction that's preventing a student from participating at all.

Ready to run your own live Q&A?

Add TA pilot to Chrome and you're live with a QR in under a minute.