
How to Detect AI-Generated Images: Visual Artifacts and Tools

A practical approach to spotting synthetic images in 2025, covering both visual inspection and automated verification.

Trần Quang Hùng, Chief Explainer of Things
December 16, 2025 · 9 min read
[Image: Magnifying glass revealing pixel artifacts and computational patterns beneath a portrait photograph, illustrating AI image detection]

QUICK INFO

Difficulty: Beginner to Intermediate
Time Required: 15-30 minutes to learn core techniques; ongoing practice
Prerequisites: Basic image viewing skills; familiarity with major AI generators helpful
Tools Needed: Web browser; optionally Hive Moderation, Illuminarty, or Content Credentials

What You'll Learn:

  • Identify the five categories of visual artifacts AI generators produce
  • Examine hands, text, and backgrounds for telltale errors
  • Use free detection tools to verify suspicious images
  • Understand why detection is getting harder (and what still works)

GUIDE

Detect AI-Generated Images by Examining Visual Artifacts and Using Detection Tools


This guide covers techniques for identifying AI-generated images, focusing on what actually works against current generators like Midjourney, DALL-E, Stable Diffusion, and Flux. It's aimed at anyone who needs to verify image authenticity: journalists, content moderators, educators, or anyone skeptical of what they see online.

The bad news: AI image generators have improved dramatically. The good news: they still make predictable mistakes.

The Core Problem

AI image generators don't understand what they're creating. They've been trained on patterns from billions of images, so they know what things look like, but they have no concept of hands, physics, or how languages work. When you ask for a photo of someone holding a coffee mug, the model isn't thinking "hand grips mug." It's predicting which pixels are statistically likely to appear next based on its training data.

This fundamental limitation produces artifacts. A study from Northwestern (Matt Groh and colleagues) categorized these into five types: anatomical implausibilities, stylistic artifacts, functional implausibilities, violations of physics, and sociocultural implausibilities. I'll work through the ones that are most reliable for detection.

Hands and Fingers

This is where to start. Human hands have 27 bones each, multiple joints, and move in complex ways that AI models struggle to replicate. The training data usually doesn't show hands clearly (they're small, often partially occluded, highly variable), so generators essentially guess.

Look for extra fingers (six or seven on one hand), fingers fused together, missing knuckles, impossible angles, or fingers that seem to grow from the wrong places. Pay particular attention to hands holding objects. The boundary between hand and object confuses generators, and you'll often see elongated coffee mugs, pens that pass through fingers, or objects that seem to float.

One caution: hands have improved significantly. A Midjourney update in 2023 made headlines specifically for better hand rendering. The main subject's hands in a well-composed image may look fine now. Check the background instead: people in crowds, secondary figures, anyone who isn't the focal point. That's where hands still fall apart.

Text and Writing

AI generators produce gibberish text. Not intentionally garbled text, but something that looks like text without actually being readable. Signs in the background say "CAVTIGN" instead of "CAUTION." A college sweatshirt has the school name in the wrong font or misspelled. Street signs use letters that don't quite exist.

This happens because generators don't understand language. They've seen text in images, so they know text-shaped things should appear on signs and clothing, but they're reproducing visual patterns rather than spelling. Your brain is wired to catch even small errors in written language, so this check is quick and reliable.

One caveat: this works best for English and other Latin-alphabet text. I haven't tested systematically against generators trained on other writing systems. The principle should hold (pattern matching without understanding), but I can't verify that it does.

Backgrounds and Architecture

The main subject of an AI image usually looks convincing. The background is where things get strange. Buildings have staircases that lead nowhere, windows at odd heights, and ceilings that slope for no reason. Light fixtures don't match: a row of supposedly identical hanging lamps will have subtle differences. Railings connect to nothing. Chairs have the wrong number of legs.

AI generators blur backgrounds aggressively to hide these problems. A fully blurred background isn't proof of AI generation (photographers use shallow depth of field constantly), but combined with other signs it's worth noting.

Crowds are particularly problematic. Faces blur together, limbs don't connect to bodies, people merge with each other. If an image shows a large group, zoom in on individuals who aren't the focus.
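Zooming in on non-focal figures is easier with a quick script than with a web viewer. This is a minimal sketch using Pillow; the crop coordinates and file paths in the usage comment are placeholders, not values from any real image.

```python
from PIL import Image

def zoom_region(img, box, factor=4):
    """Crop `box` (left, upper, right, lower) and upscale it.

    NEAREST resampling keeps pixels blocky, so generation and
    compression artifacts are not smoothed away by interpolation.
    """
    region = img.crop(box)
    return region.resize(
        (region.width * factor, region.height * factor), Image.NEAREST
    )

# Hypothetical usage: inspect a 200x200 background patch at 4x.
# img = Image.open("suspect.jpg")                       # placeholder path
# zoom_region(img, (400, 300, 600, 500)).save("region_zoom.png")
```

Using NEAREST rather than a smoothing filter is deliberate: bilinear or Lanczos upscaling can blur away exactly the pixel-level oddities you are trying to see.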

Skin and Lighting

AI-generated portraits often have an uncanny quality. Skin looks too smooth, almost waxy. Colors seem oversaturated. There's a sheen that makes people look like video game characters or CGI rather than photographs. Groh's research team describes this as the image looking "airbrushed."

Lighting inconsistencies are harder to spot but more damning. Look for shadows falling in different directions, specular highlights that don't match the apparent light source, or lighting that's too perfectly even across the entire frame.

This is where I should mention that detection is probabilistic. A real photo might have smooth skin from heavy retouching. A skilled prompt engineer can sometimes work around these tells. You're looking for patterns across multiple indicators, not any single definitive proof.

Jewelry, Accessories, and Small Details

Earrings don't match. Necklaces hang at impossible heights. Rings don't wrap around fingers correctly. Shirt buttons float mid-air or stack weirdly. Collars don't fold properly. Watch faces are blank or distorted. Purse straps pass through shoulders.

These details trip up generators because they're small, variable, and require understanding how physical objects interact with bodies. Treat an image like one of those "spot the difference" puzzles. The error is usually in the accessories.

Detection Tools

Visual inspection only gets you so far. Several tools exist to automate detection.

Hive Moderation reports the highest accuracy I've seen claimed: 98-99.9% in their testing. It identifies not just whether an image is AI-generated but which model likely created it (Midjourney, DALL-E, Stable Diffusion, etc.). The API handles billions of requests monthly. Enterprise pricing, though there's a demo.

Illuminarty does something different: it highlights which regions of an image appear AI-generated. This is useful for composites where someone has combined real and synthetic elements. Around 75% accuracy in independent testing, which is lower than Hive, but the localization feature is unique.

AI or Not (also called Optic) focuses on deepfake detection specifically. About 88-89% accuracy in testing, with strength in facial manipulation. Free tier available.

Content Credentials (from C2PA) takes a different approach. It checks metadata and provenance rather than analyzing pixels. Adobe, Microsoft, OpenAI, and others support this standard. If an AI generator that supports C2PA created the image, Content Credentials will identify it. The limitation: metadata can be stripped, and not all generators support the standard. In my testing, images from Midjourney and Stable Diffusion often showed "No Content Credential" even when generated recently. Adobe Firefly and OpenAI's DALL-E worked more reliably.
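If you just want a quick hint about whether a file carries a C2PA manifest before opening a verifier, you can scan the raw bytes. C2PA manifests are stored in JUMBF boxes labeled "c2pa", so their presence usually leaves that string in the file. This is a crude heuristic of my own, not part of the C2PA tooling, and it is not cryptographic verification.

```python
def has_c2pa_marker(path):
    """Crude check: does the file contain a 'c2pa' byte sequence?

    A match suggests an embedded Content Credential may be present;
    absence proves nothing, since credentials are routinely stripped
    when images are re-shared. Use the official Content Credentials
    verifier for an actual trust decision.
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()
```

Treat a positive result only as a prompt to run the real verifier; treat a negative result as no information at all.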

All these tools have limitations. They struggle with images that have been heavily edited, screenshotted, or run through compression, and new generators they haven't trained against may evade detection. False positives happen. Use them as one input alongside visual inspection.
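"One input among several" can be made concrete with a small aggregation step. The sketch below averages AI-likelihood scores from several detectors and flags strong disagreement as a cue to fall back on manual inspection. The tool names, score scale, and thresholds are all assumptions for illustration, not real APIs.

```python
def aggregate_verdicts(scores, threshold=0.5, spread_warn=0.3):
    """Combine per-tool AI-likelihood scores (0.0 = real, 1.0 = AI).

    `scores` maps tool name -> score. Returns the mean score, a
    verdict, and a disagreement flag set when the tools' scores span
    more than `spread_warn` -- a signal that automated detection is
    unreliable for this image and visual inspection should decide.
    """
    values = list(scores.values())
    mean = sum(values) / len(values)
    disagreement = max(values) - min(values) > spread_warn
    verdict = "likely AI" if mean >= threshold else "likely real"
    return {"mean": round(mean, 3), "verdict": verdict,
            "tools_disagree": disagreement}

# Hypothetical scores from three detectors:
print(aggregate_verdicts({"hive": 0.97, "illuminarty": 0.60, "ai_or_not": 0.88}))
# → {'mean': 0.817, 'verdict': 'likely AI', 'tools_disagree': True}
```

In this example the mean points to "likely AI", but the 0.37 spread between tools is itself informative: when detectors disagree this much, the image has probably been edited or compressed in a way that degrades automated analysis.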

What Doesn't Work

A few approaches that seem reasonable but aren't reliable:

Reverse image search (Google Images, TinEye) sometimes helps but isn't detection. It can show whether an image existed before, which is useful for establishing provenance, but it won't identify a brand-new AI-generated image.

Metadata analysis alone is insufficient. EXIF data can be faked, stripped, or never present.
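To see why metadata alone is weak evidence, it helps to look at what's actually there. This sketch reads a few standard EXIF fields with Pillow; the numeric tag IDs come from the EXIF specification (271 = camera make, 272 = model, 306 = datetime).

```python
from PIL import Image

def summarize_exif(path):
    """Return basic camera EXIF fields, or None if the image has none.

    Missing EXIF is common for AI output, but it is equally common for
    screenshots, stripped exports, and social-media re-uploads, so
    treat absence as weak evidence at best. Present EXIF can be faked.
    """
    exif = Image.open(path).getexif()
    if len(exif) == 0:
        return None
    # EXIF tag IDs: 271 = Make, 272 = Model, 306 = DateTime.
    return {"make": exif.get(271), "model": exif.get(272),
            "datetime": exif.get(306)}
```

A `None` result tells you nothing by itself; it only becomes meaningful combined with the visual checks above.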

"It looks too perfect" is too subjective. Professional photography often looks highly polished. Some AI images look deliberately rough or artistic.

Troubleshooting

Problem: Detection tool says "inconclusive" or gives a middling confidence score.
What's happening: The image has been edited, compressed, or the tool genuinely can't tell. Try a different tool and do visual inspection alongside.

Problem: I found something that looks like an artifact, but a photographer friend says real cameras can produce this.
What's happening: Some artifacts overlap with legitimate photography effects: lens flare, bokeh, motion blur, grain. No single indicator is proof. Look for multiple issues.

Problem: Content Credentials shows nothing on an image I'm confident is AI-generated.
What's happening: Many generators don't embed credentials, or they were stripped during sharing. Not finding a credential doesn't mean the image is real.

What's Next

The research paper that prompted this guide (FakeVLM from Shanghai AI Lab and Sun Yat-sen University) represents where detection is heading: multimodal AI models that don't just classify images as real or fake but explain why in natural language. Their FakeClue dataset includes over 100,000 images with human-written descriptions of artifacts. The model outperforms existing tools while providing interpretable explanations.

This matters because the current cat-and-mouse between generators and detectors will continue. Detection models trained only on older generators may fail against new ones. Models that understand why artifacts appear (not just pattern-match against them) should generalize better.

For now, combine visual inspection with multiple detection tools. No single approach is reliable alone.


PRO TIPS

Start with hands on secondary figures rather than the main subject. Background hands haven't improved as much as focal hands have.

Zoom to 100% or higher. Many artifacts disappear at thumbnail size.

Check text anywhere it appears: signs, clothing, books, screens, packaging. Generators fail at text consistently.

If using detection tools, try multiple. Hive and Illuminarty use different approaches and catch different things.


FAQ

Q: Can AI generators that make good hands now evade detection entirely?
A: Better hands don't solve the underlying problem. Generators still fail at text, physics, and small details. And detection tools analyze pixel-level patterns, not just visible artifacts.

Q: Do these techniques work for AI-generated videos?
A: Some carry over (faces, hands, text), but video introduces additional tells like temporal inconsistency (things changing frame-to-frame that shouldn't). Dedicated video deepfake tools like Deepware Scanner exist.

Q: What if someone uses AI to generate an image, then manually fixes the obvious artifacts?
A: Partial AI generation is harder to catch. Illuminarty's region highlighting helps here. Heavy manual editing may also leave its own traces.

Q: How accurate are these detection tools really?
A: Vendor claims are often best-case scenarios. Independent testing typically shows lower numbers. Hive probably leads, but even 98% accuracy means 1 in 50 images is wrong. Always combine with visual inspection.


RESOURCES

Tags: AI detection, deepfake, synthetic images, image forensics, DALL-E, Midjourney, Stable Diffusion, visual artifacts, content verification
Trần Quang Hùng, Chief Explainer of Things

Hùng is the guy his friends text when their Wi-Fi breaks, their code won't compile, or their furniture instructions make no sense. Now he's channeling that energy into guides that help thousands of readers solve problems without the panic.

