Hacks: The Phone Photo Prompt That Makes Your Worst Shot Printable
One prompt. One photo. Three edits with exact slider values for Snapseed, Lightroom, and Apple Photos. The photo you almost posted but didn't — this is for that one.
The photo is on your phone. Your kid is mid-laugh, the light is catching their hair, and the background is the kitchen you meant to clean before anyone came over. It's the best photo you've taken in months and it looks like it was shot through a dirty window.
Here's the prompt:
“"Analyze this photo and give me exactly three edits — the three that will make the biggest visual difference. For each edit, give me the specific slider values for Snapseed, Apple Photos, AND Lightroom Mobile. No general advice. Specific numbers."
That's it. Paste it into Claude or ChatGPT (with the photo attached), and you get back something like:
Edit 1: Exposure +0.5 stops. The subject's face is underexposed by half a stop — brightening it separates them from the background and makes the expression readable. Snapseed: Brightness +35. Apple Photos: Exposure slider to +0.5. Lightroom Mobile: Exposure +0.50.
Edit 2: White balance to 5600K. The auto white balance went slightly cool, which makes the kitchen light look fluorescent instead of warm. Shifting it warmer brings the photo back to how the room actually felt. Snapseed: Warmth +15. Apple Photos: Warmth slider 60% right. Lightroom Mobile: Temp +400 (or drag right until the preview matches daylight).
Edit 3: Crop to 4:5, cut above the dish rack. There's visual noise in the upper-right corner that your eye ignores but the camera captured. A tighter crop puts the subject on the left third line and loses the clutter. All editors: Crop tool → 4:5 ratio → drag to taste.
Three edits. Three minutes. The photo goes from "fine" to "I should print that."
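If you'd rather apply those same three edits on a laptop, here's a rough Python/Pillow sketch. The filename, the channel multipliers, and the left-biased crop are my own approximations of the slider values above, not an exact match for any of the three apps.

```python
from PIL import Image, ImageEnhance

img = Image.open("kitchen_laugh.jpg").convert("RGB")  # hypothetical filename

# Edit 1: +0.5 stops of exposure is roughly a 2**0.5 (~1.41x) brightness boost
img = ImageEnhance.Brightness(img).enhance(2 ** 0.5)

# Edit 2: warm the white balance by nudging red up and blue down
# (a crude stand-in for a proper temperature slider)
r, g, b = img.split()
r = r.point(lambda v: min(255, int(v * 1.06)))
b = b.point(lambda v: int(v * 0.94))
img = Image.merge("RGB", (r, g, b))

# Edit 3: crop to 4:5, biased toward the left side of the frame
w, h = img.size
if w / h > 4 / 5:
    img = img.crop((0, 0, int(h * 4 / 5), h))   # keep the left portion
else:
    img = img.crop((0, 0, w, int(w * 5 / 4)))

img.save("kitchen_laugh_edited.jpg")
```

The multipliers are deliberately conservative; treat them as a starting point and eyeball the result the same way you would a slider.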
Why three is the magic number
Phone cameras in 2026 are genuinely good. The sensor, the computational photography pipeline, the auto-HDR — they do 80% of the work. The gap between a phone photo and a "real" photo is almost never about the camera. It's about three things: exposure on the subject, color temperature, and composition. Fix those three and you've closed most of the gap.
More than three edits usually means you're compensating for something the photo can't recover from — motion blur, extreme backlighting, a subject that's too far away. In those cases, no amount of editing saves it. But for the vast majority of phone photos — the ones where the shot is basically there but doesn't pop — three is enough.
The AI is particularly good at this because it can see the histogram, the color channels, and the composition grid simultaneously. A human editor does the same thing, but it takes experience to know where to look first. The prompt outsources the triage.
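If you want to run the prompt from a script rather than a chat window, here's a minimal sketch using the Anthropic Python SDK. The filename and the model name are placeholders; swap in whatever you actually have.

```python
import base64
import anthropic

PROMPT = (
    "Analyze this photo and give me exactly three edits — the three that will "
    "make the biggest visual difference. For each edit, give me the specific "
    "slider values for Snapseed, Apple Photos, AND Lightroom Mobile. "
    "No general advice. Specific numbers."
)

with open("kitchen_laugh.jpg", "rb") as f:  # hypothetical filename
    photo_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use a model you have access to
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/jpeg",
                        "data": photo_b64}},
            {"type": "text", "text": PROMPT},
        ],
    }],
)
print(response.content[0].text)
```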
The genre variations
The base prompt works for everything, but you can sharpen it by adding context:
For portraits: "...and prioritize skin tone accuracy and eye brightness."
For food: "...and make the food look warm and appetizing without oversaturating."
For real estate or product shots: "...and prioritize straight lines, neutral white balance, and even lighting."
For pets: "...and prioritize sharpness on the eyes. If the eyes aren't sharp, tell me honestly."
For landscapes: "...and recover as much sky detail as possible without making the foreground look dark."
Each addition gives the AI a priority framework for choosing which three edits matter most. Without it, the AI defaults to a general-purpose analysis, which is usually fine but sometimes misses the genre-specific thing that would make the biggest difference.
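If you end up scripting this (as in the sketch above), one convenient pattern is to keep the genre additions as a lookup table. The genre labels and the function name here are my own; the suffix text is lifted straight from the list above.

```python
# Genre priorities from the list above, keyed by a label of my choosing.
GENRE_SUFFIXES = {
    "portrait": "and prioritize skin tone accuracy and eye brightness.",
    "food": "and make the food look warm and appetizing without oversaturating.",
    "real_estate": "and prioritize straight lines, neutral white balance, and even lighting.",
    "pets": "and prioritize sharpness on the eyes. If the eyes aren't sharp, tell me honestly.",
    "landscape": "and recover as much sky detail as possible without making the foreground look dark.",
}

def sharpen_prompt(base_prompt: str, genre: str | None = None) -> str:
    """Append the genre priority to the base prompt when the genre is known."""
    suffix = GENRE_SUFFIXES.get(genre or "")
    return f"{base_prompt.rstrip('.')}, {suffix}" if suffix else base_prompt

# e.g. sharpen_prompt(PROMPT, "pets"), with PROMPT from the earlier API sketch
```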
The "edit it for me" extension
If you're using ChatGPT with Images 2.0, add this to the end of the prompt:
“"After giving me the slider values, also generate an edited version of the photo with these changes applied."
The AI will produce a modified version directly. It's not pixel-perfect (it's re-generating, not editing the actual file), but for social media, messaging, or quick sharing, it's often good enough — and it takes zero effort.
For a higher-fidelity result, use the slider values in your editor of choice. The AI's version is a preview; your edit is the final.
What this won't fix
Blur. If the subject moved or your hand shook, no edit saves it. The AI will tell you this honestly if you use the prompt above — it'll say "the primary limitation is motion blur on the subject, which can't be corrected in post." That's useful information: it means stop tweaking and take another shot next time.
Extreme backlighting where the subject is silhouetted. You can recover some, but if the face is a dark blob, there's no detail to bring back.
A bad moment. Editing fixes the technical. It doesn't fix the expression, the timing, or the composition. If you want better photos, the real hack is taking more of them. The editing prompt handles the rest.
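On the blur point: if you're triaging a whole camera roll and want a quick programmatic sanity check before you bother with sliders, the variance-of-Laplacian trick is a common heuristic. It needs OpenCV, and the threshold below is a rough starting point, not a universal constant.

```python
import cv2

def is_probably_blurry(path: str, threshold: float = 100.0) -> bool:
    """Low variance of the Laplacian means few sharp edges, i.e. likely blur."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

if is_probably_blurry("kitchen_laugh.jpg"):  # hypothetical filename
    print("Motion blur: skip the sliders and reshoot next time.")
```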
The 📸Phone Photo Glow-Up on a-gnt
If you want this as a reusable tool — with the genre variations built in, the editor-specific instructions pre-loaded, and the "edit it for me" extension ready to go — grab the 📸Phone Photo Glow-Up from the a-gnt catalog. It's the full version of what this hack demonstrates.
One prompt. One photo. Three edits. The photo you almost posted but didn't — that's the one this is for.
Ratings & Reviews
0.0
out of 5
0 ratings
No reviews yet. Be the first to share your experience.