
In the Weeds: Can I Trust AI With My Kid's Homework?

a-gnt Community · 15 min read

A long, honest look at AI homework help — what it's actually good for, what it breaks, and a framework for keeping it useful without letting it do the learning.

The first entry in a recurring series where we sit with a hard question for longer than the internet usually allows.

It's 9:17 pm on a Tuesday. You already know the scene. A kitchen table. A worksheet. A kid whose eyes have gone just glassy enough that you can tell the day-cost is adding up. A parent with a laptop open and a cursor blinking in the chat box of whichever AI lives in the browser tab that happens to be open. The question the parent is about to type is a small, specific one — something like "what's 3/4 divided by 1/2 and can you show the work" — but the question underneath that question is the one this piece is about.
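(The small question, for the record, has a small answer: dividing by a fraction means flipping it and multiplying, so 3/4 ÷ 1/2 = 3/4 × 2/1 = 6/4, which is 3/2, or one and a half.)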

The question underneath is: am I cheating for my kid right now, or am I helping?

We at a-gnt spent a week inside that question. Not a day, not an afternoon — a full week of sitting at kitchen tables (our own, and tables belonging to parents who let us watch) and running real homework sessions through four different AIs. Math worksheets. A book report. A social studies essay about the Missouri Compromise. A science diagram. A Spanish vocab sheet. We watched the AIs work. We watched the parents watch the AIs work. We watched the kids, who are, it turns out, the most interesting people in the room to watch, because they react to getting an answer the same way a cat reacts to being handed food — briefly grateful, and then suspicious that the price is about to come.

This piece is what we took home from that week. It is longer than an internet article should be. That's on purpose. The question is not going to get simpler if we write a 600-word listicle, and the decision a parent has to make about this at 9:17 pm on a Tuesday deserves more than a tweet.

Here's the thesis, and we'll say it plainly before we earn it: AI homework help is not cheating, and it is not learning. It's a tool, and the entire question of whether it helps or harms a kid comes down to how it's used. A hammer is not a cheating device when you use it to put up a shelf, and it is not a learning device when you use it to hit a nail. It is a hammer. The question with a hammer is "are you holding it by the right end." The question with AI is similar, but harder, because AI looks like a friend, not a tool, and friends — well, friends are where all the interesting moral questions live.

So. Let's sit with this one.

The three bad outcomes, named honestly

Before we can talk about using AI well, we have to name the three ways it can go wrong, because all three of them are real, all three of them happened to us or to parents we were watching during our week, and any framework that pretends they don't exist is useless.

Bad outcome #1: AI just does the homework. This is the obvious one. A kid pastes a worksheet into ChatGPT, the AI spits out answers, the kid writes them down, the homework gets turned in, and the learning that was supposed to happen during the homework — the thinking, the stumbling, the getting-it-wrong-and-trying-again — does not happen. The worksheet was not the point of the worksheet. The worksheet was the vehicle for the thinking. If the AI does the thinking, the kid gets a clean worksheet and an unchanged brain, and the next worksheet will be just as hard, and the one after that, and the pile will eventually fall down somewhere.

This is cheating. It's cheating in the exact same way it has been for thirty years, back when a kid copied from the back of the book or from a friend's notebook. The technology is new. The moral shape of the thing is not.

Bad outcome #2: AI confidently makes things up, and the kid turns it in. This one is newer, and it is, for our money, the scarier of the two. AI doesn't just give right answers. It gives wrong answers with the same tone and the same formatting and the same sentence rhythm as its right answers. It will tell a kid that George Washington signed the Declaration of Independence (he didn't; he was off commanding the Continental Army in the summer of 1776) with the exact same cadence it uses to tell the same kid that Abraham Lincoln signed the Emancipation Proclamation (he did). The kid can't tell the two apart. The parent frequently can't tell the two apart. The teacher can, and will.

We watched this happen, live, during our week. An 11-year-old was writing a social studies piece on the Missouri Compromise. The AI she was using — not going to name it, this particular failure has happened in all of them at some point — casually invented a detail about one of the senators involved that we, at the table, fact-checked and found to be confidently, completely wrong. Not wrong in a "debatable interpretation" way. Wrong in a "that person did not exist in that place at that time" way. The AI had zero signal that it was about to hallucinate. Its paragraph read just like the one before it, and the one before that, both of which were correct.

If the kid had copied it into her paper and turned it in, the harm would not be that she "cheated." The harm would be that she now believed a thing that wasn't true, with the confidence of having seen it in print, and that is the kind of wrong-in-your-head that is very expensive to un-wrong later.

Bad outcome #3: AI gives a correct answer that skips the "why." This is the one that doesn't show up on the surface as a problem, and is, for that reason, the sneakiest of the three. A kid asks AI how to solve a long division problem. The AI solves it, correctly, and shows the work. The kid copies the work, gets the right answer on the worksheet, and turns it in. Nothing is technically wrong. But the kid hasn't learned long division — they've learned "an AI can do this for me," which is actually a true and useful thing to know, but is not what long division is for.

Long division is not a useful life skill. Almost nobody does long division by hand in adulthood. The reason long division is taught is that doing long division is one of the first times a kid's brain gets forced to handle a multi-step procedure while tracking state: you have to remember where you are in the process, which matters later when they're doing algebra and calculus and eventually, maybe, writing code or balancing accounts or building a piece of furniture from instructions. The value of the exercise is the exercise. If AI removes the exercise, the grade looks fine and the brain doesn't get the thing the exercise was for.

We saw this, too. It is the most common failure mode we watched during our week. Kids weren't using AI to cheat in the dramatic sense. They were using it to skip the boring middle part of understanding something, and then turning in the finished product. Everybody won in the short term. We are writing this sentence a week later, and at least two of those kids are probably going to hit a wall in three months when the concept they skipped past becomes the prerequisite for the next concept.

These three bad outcomes are the terrain. Any framework for using AI well on homework has to work around all three of them, at the same time, on a Tuesday at 9:17 pm when nobody has energy left.

What good looks like: the explain-it-back rule

Here is the single best rule we found in the course of a whole week of testing, and it is simple enough that you could write it on a sticky note and put it on the fridge:

The rule: after using AI on any homework problem, the kid has to explain, in their own words, what they just did and why it worked. If they can't, they didn't learn it, and they have to go back through with the AI until they can.

That's the whole rule. Everything else in this piece is a support beam for it.

The rule works because it sidesteps the entire "cheating or learning" question by making the test of learning external to the AI interaction. The AI can do whatever it wants — explain a step, give a hint, walk through the whole problem, even just give the answer — and none of it counts until the kid can stand up from the table, look their parent in the eye, and explain the thing in language the parent recognizes as the kid's own. No kid can explain what they don't understand. No kid can fake understanding in their own voice, because their own voice is too specific to fake. If a ten-year-old can say, out loud, "okay, so you find a common denominator first because you can't add fractions that are different sizes, like you can't add a pizza slice that's cut in fourths to one that's cut in thirds unless you first cut them both the same way," then they get it, and it doesn't matter whether they got there by working the problem alone, working it with a parent, working it with a tutor, working it with 📚The Midnight Homework Buddy, or working it with a chatbot they pasted a worksheet into at 9:15 pm.

The rule also handles the hallucination problem, partially. A kid who has to explain a historical fact in their own words is a kid who, if asked "wait, where did you hear that," might — might — notice they can't actually back it up. It's not a perfect fact-check. But it is a much better fact-check than "did the answer look plausible on the page."

And it handles the skip-the-middle problem. The middle is where the explanation lives. If a kid can explain the middle, they did the middle. Whether they did it in their head, on paper, or by watching the AI do it first and then reproducing it themselves, doesn't matter, because the explanation is the proof that the brain did the work.

The decision framework, in four questions

The rule is the core. But you need a little more structure for the edge cases. Here is the short version of what we found worked.

Before you (or your kid) open an AI for homework help, ask these four questions. We'll put them in the order we think they matter.

1. What kind of homework is this? Not all homework is the same. A math worksheet where the kid is supposed to practice a new procedure is different from a research paper where the kid is supposed to find and synthesize information. A creative writing prompt is different from a vocabulary quiz. AI does some of these well and some of them badly, and the smart move is different for each.

For practice-oriented homework (math worksheets, spelling, vocab, anything where the whole point is repetition building a skill), AI is mostly a coaching aid — it should explain, hint, and walk through, but not do. The kid has to do the reps themselves, or the reps didn't happen.

For research-oriented homework (a paper, a presentation, a short essay), AI is a research assistant that must be fact-checked. It can suggest angles, summarize topics, even draft an outline. It cannot be trusted to be factually correct, and every concrete claim it makes has to be verified against an actual source before it goes in the paper. We will die on this hill. If your kid is using AI for research and not fact-checking it, they are one careful teacher away from a very bad day.

For creative work (a story, a poem, a personal essay), AI should not be writing anything the kid is going to turn in. It can brainstorm. It can ask questions. It cannot draft. Creative writing assignments are explicitly about the kid's voice, and the kid's voice cannot be outsourced without the thing becoming something else entirely.

2. Is the AI being asked to give the answer, or to teach? This is a verb question about the prompt itself. "Solve this problem" is a different request than "help me understand this problem." The first gets you bad outcome #1 almost immediately. The second, if the AI is any good, gets you into teaching mode.

This is the whole reason we built ✏️Homework Helper That Teaches, our copy-pasteable prompt that forces the AI into Socratic mode. It asks the kid diagnostic questions before it touches the problem, walks them through one step at a time, and refuses to hand over the answer even if asked. You paste it in at the start of a session. It is free and it takes thirty seconds. If you remember nothing else from this piece, we would rather you remember that one.
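If you're improvising without the tool, a rough homemade version of the same idea (just the shape of it, not the actual prompt) goes something like: "You are a patient tutor for a fifth grader. Before you touch the problem, ask me what I already understand and where I got stuck. Then walk me through it one step at a time, asking me a question at each step instead of telling me the answer. Do not give me the final answer, even if I ask." Swap in your kid's grade, paste it at the top of the chat, then paste the problem.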

3. Can a parent (or the kid, or someone) verify the answer from a trusted source? This is the hallucination filter. For math, the parent or a second AI or a quick check against the back of the book can usually tell you whether the answer is right. For history, a one-minute check against Wikipedia or the school textbook catches most AI hallucinations. For science, a quick cross-reference against the textbook or a reputable site. The work is small. It just has to happen. An unchecked AI answer on a history paper is a time bomb.

4. Can the kid explain it back? The rule, again, always. If they can, you're done. If they can't, you're not done, and you have to keep going. This is non-negotiable. It is the load-bearing wall of the whole framework.

The honest caveat: sometimes it is cheating, and you should say so

We promised honesty in the thesis, so here's the honest part: sometimes, using AI on homework is cheating, and we don't think we should soften that for anybody.

If a kid's assignment is explicitly "write a personal essay about a time you overcame a challenge, in your own words, and turn it in tomorrow," and a kid types "write me a personal essay about overcoming a challenge" into ChatGPT and hands in the output, that is cheating. It is not a nuanced situation. It is not a "well, it depends" situation. The assignment was to generate something from inside themselves, and they generated something from inside a chatbot instead. That's a thing to name with a kid, not to rationalize around.

Similarly, if a teacher has explicitly said "no AI on this assignment," using AI on that assignment is cheating, regardless of whether we think the teacher's rule is well-designed. The rules of the school are the rules of the school. Teaching a kid to follow them while they're in school is a reasonable parenting move. If you disagree with a rule, the thing to do is raise it with the teacher, not quietly subvert it at the kitchen table. Kids notice when parents subvert rules quietly. It teaches them something, and what it teaches is not the thing most parents want to teach.

We say this because the discourse around AI and school has gotten very, very hand-wavy about this particular question, and we think parents deserve a straight answer: yes, sometimes it's cheating. You can tell when it is because you can name, in a sentence, what the assignment was trying to develop in the kid, and the AI doing the work defeats that development. When that's the case, the right answer isn't a framework. The right answer is "put the computer down and do this one yourself."

The part where we tell you it doesn't always work

Here's the part where we tell you the framework is not a cheat code. Because no framework is.

We ran sessions during our week where we followed the rules, used the teaching prompt, had the parent in the room, and the kid still fell asleep mid-problem, or got bored and rage-clicked, or nodded along without actually learning the thing, or turned in the worksheet and then got the same concept wrong on the test a week later. The framework helps. It does not make kids suddenly into self-motivated learning machines, because kids are not, and have never been, self-motivated learning machines. Kids are small people who are tired and hungry and who find worksheets boring for the same reason adults find spreadsheets boring, which is that worksheets and spreadsheets are genuinely boring, and calling them names doesn't help.

The framework's job is not to turn homework into a magical experience. The framework's job is to make sure that, on the nights when homework does happen, the AI is being used as a tool that helps rather than a shortcut that erodes. Some nights homework is not going to happen well regardless. That's not the AI's fault. That's Tuesday.

One more thing, on the same honest thread: AI is not a replacement for a tutor, a teacher, or a parent. We know this sounds obvious. We're saying it anyway, because the marketing around some AI tutoring tools is making it sound like the AI is a complete solution, and it isn't. A kid who is consistently struggling needs a person in the room, sometimes a professional person, and AI is a useful supplement to that person, not a substitute for them. If your kid is falling behind, talk to their teacher. If they have a specific learning difference, talk to someone qualified. AI can help on the margin. It cannot see what a trained human can see.

Back to Tuesday, 9:17 pm

Let's end where we started. The kitchen table. The worksheet. The blinking cursor.

Here is what we would, having lived inside this question for a week, actually do in that exact moment, in that exact house, with that exact kid.

First, we'd pause before typing. Ten seconds. We'd ask the kid: "Before we get any help on this, tell me — what's the part that's confusing?" If the kid can articulate the part, we're already halfway there, and the AI, when it enters, has a specific job instead of a vague one. If the kid can't articulate the part, that's useful to know too, because it means the wall is further back than this one problem.

Second, we'd open ✏️Homework Helper That Teaches — or, for parents who want a more persistent personality across multiple sessions, 📚The Midnight Homework Buddy — and paste in the problem with the grade level filled in. We wouldn't just open a chat window and type the problem cold. The framing matters more than most people think. An AI with no framing will drift toward "give the answer" mode because that's what most of its training data looks like. An AI with framing stays in teaching mode.

Third, we'd sit with the kid while they work through it. Not hover. Sit. Drink a glass of water. Let them have the conversation with the AI directly. Be there for when they get stuck on a step and need a human to say "you got this, take it one more time." The parent's job here is not to teach the math. The parent's job is to be a calm presence in the room while the teaching happens. Most parents are surprisingly relieved to learn that they don't have to remember how to do long division to help with long division homework. They just have to stay at the table.

Fourth — and this is the explain-it-back rule again, and it's worth repeating — we'd ask the kid, at the end, to explain in their own words what they just figured out. Not "what's the answer" but "how would you explain to somebody how to do this kind of problem." If they can, bedtime. If they can't, one more round, one more problem, one more conversation with the AI, until they can.

Fifth, sometimes, the right move is to stop. It's late. The kid is cooked. The next worksheet is going to come whether or not tonight's gets done, and a kid who's exhausted is not learning anything — they're just experiencing failure slowly. We watched a parent during our week make this exact call on a particularly rough night, and what she said to her kid was: "It's 10 pm, we're stopping, we'll put a sticky note on the worksheet that says 'got stuck on #4 and #5' and we'll email your teacher in the morning. Teachers want to know when a kid is stuck. That's information, not failure." We wrote that sentence down. We think it might be the single healthiest thing we heard anyone say all week.

That's the framework. That's what a week of sitting in kitchens gave us. The AI isn't going to cheat for your kid. The AI isn't going to teach your kid. The AI is going to do whatever the room asks it to do, and the room is the parent, and the parent is — you guessed it — you.

It's 9:17 pm. Type carefully.

This is the first entry in "In the Weeds," a recurring series from a-gnt where we sit with a real question longer than the internet usually lets us. The tools mentioned here are ✏️Homework Helper That Teaches and 📚The Midnight Homework Buddy. Also worth a look: ⚖️The Sibling Referee, for when the other kid decides to have their own crisis at the same table, and 📅School Year Planner, for the parent who would like to stop being surprised by worksheets in the first place.
