Spotting AI-Generated Product Reviews: A Shopper's Field Guide
AI-written reviews are flooding Amazon, eBay, Etsy, and the App Store. Here's how to read a review page like a forensic analyst — vocabulary tells, account-level signals, structural patterns — and when to run a passage through a detector.
The five-star review you're about to trust may never have been written by a human. Marketplaces that depend on review trust — Amazon, eBay, Etsy, the App Store, Google Maps — are all being flooded with AI-generated reviews. Sellers prompt LLMs to produce dozens of plausible reviews at scale; review brokers package them as a service; foreign content farms run the operation as a commodity. Here's how to read your way around it.
Why AI reviews are hard for platforms to fix
Review platforms have spam filters, but the filters were built for the previous era — copy-paste duplicates, account farms, unnaturally fast posting. Modern AI reviews look different:
- Each review is unique (no duplicate detection)
- Each is plausibly grammatical (no obvious red flags)
- Posted on a cadence that mimics organic activity (no rate-limit triggers)
- Often delivered through real-looking accounts purchased on the secondary market
The platforms are catching up. They aren't fully caught up. Until they are, the buyer has to read defensively.
Vocabulary tells — the "delve" pattern in product context
LLMs in 2026 have a recognizable vocabulary in commercial review contexts. Watch for clusters of these words in a single review:
- delve, delving, navigated, leveraged, robust, comprehensive, multifaceted, intricate, pivotal, groundbreaking, seamless, fostered, ever-evolving, myriad, plethora, embark, unparalleled, transformative, holistic
Single occurrences mean nothing. Three or four in one short review is a strong tell. A toothbrush review that calls the product "an unparalleled, multifaceted addition to my morning routine that has transformed my approach to oral hygiene" is almost certainly AI.
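The cluster heuristic is easy to sketch in a few lines of Python. The word list below is drawn from the tells above, but treating "two or more distinct tells in a short review" as suspicious is an illustrative assumption, not a calibrated threshold:

```python
import re

# Tell words from the list above. Counting *distinct* matches avoids
# penalizing a reviewer who happens to repeat one buzzword.
TELL_WORDS = {
    "delve", "delving", "leveraged", "robust", "comprehensive",
    "multifaceted", "intricate", "pivotal", "groundbreaking", "seamless",
    "fostered", "ever-evolving", "myriad", "plethora", "embark",
    "unparalleled", "transformative", "holistic",
}

def tell_word_count(review: str) -> int:
    """Count distinct AI-tell words appearing in a review."""
    tokens = set(re.findall(r"[a-z\-]+", review.lower()))
    return len(TELL_WORDS & tokens)

review = ("An unparalleled, multifaceted addition to my morning routine "
          "that has transformed my approach to oral hygiene.")
print(tell_word_count(review))  # -> 2 (two tells in ~20 words is a lot)
```

Two distinct tells in a twenty-word toothbrush review is exactly the clustering the rule describes; one tell in a three-paragraph review is noise.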
Structural tells — the symmetric review
Real reviews are messy. Real reviewers leave abrupt sentences, run-ons, mid-thought shifts. AI-written reviews are too clean. Watch for:
- Three pros, three cons. AI loves symmetric lists. Real reviewers usually pile pros on or rant about cons; they rarely produce equal-weight pairs.
- Opening with a rhetorical question. "Looking for a better X? Look no further!" — almost always AI in 2026.
- The smooth wrap-up. "All in all, this product has exceeded my expectations and I would highly recommend it to anyone in the market for X." Real reviewers don't write "all in all."
- Excessive em-dashes and semicolons. Real casual writers use neither; AI loves both.
- Missing or generic specifics. "Battery life is great" with no number. "Sound quality is fantastic" with no comparison. AI reviews fail on the details that real users give for free — actual measurements, specific use cases, comparisons to other products they own.
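A few of the structural checks above can be automated as crude flags. The regexes, phrase list, and punctuation cutoff here are illustrative assumptions; a real classifier would need far more nuance:

```python
import re

def structural_flags(review: str) -> list[str]:
    """Toy versions of the structural tells; all cutoffs are assumptions."""
    flags = []
    words = review.split()
    # Heavy em-dash / semicolon use relative to review length
    if (review.count("\u2014") + review.count(";")) / max(len(words), 1) > 0.02:
        flags.append("punctuation-heavy")
    # Rhetorical-question opener: a "?" ends the first sentence
    first = re.split(r"[.!?]", review, maxsplit=1)[0]
    if "?" in review[: len(first) + 1]:
        flags.append("rhetorical-opener")
    # Smooth wrap-up phrasing
    if re.search(r"\b(all in all|would highly recommend)\b", review, re.I):
        flags.append("smooth-wrapup")
    # No digits anywhere: no battery hours, no weeks of use, no prices
    if not re.search(r"\d", review):
        flags.append("no-specifics")
    return flags
```

A review that trips three or four of these at once is far more suspect than one that trips a single flag; real reviewers occasionally write "would highly recommend" too.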
Account-level signals
The review itself is one data point. The account is another:
- Reviewing across unrelated categories at the same hour — fertilizer, headphones, dietary supplements, baby products, all 5-star, all this morning. That's a real-account-rented-to-a-broker pattern.
- All reviews exactly the same length. Real reviewers write short or long depending on whether they have something to say. AI services calibrate to a target word count.
- Geographic mismatch. "Used this hot tub in my Texas backyard" from an account that has only ever shipped to Latvia.
- Photos that look stock or AI-generated themselves. Cross-reference with our image detector. Reviewers who upload "real" pictures of the product often upload AI-generated stock that matches the listing photos too perfectly.
- A burst of glowing reviews concentrated in the first month after listing. New seller, ten reviews on day three, all 4–5 stars, none from verified purchasers. This is the classic boost pattern.
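Two of these account-level signals lend themselves to a quick sketch: uniform review length and the same-hour cross-category burst. The dictionary fields, the minimum-review counts, and the thresholds are all assumptions for illustration:

```python
from datetime import datetime
from statistics import mean, pstdev

def account_flags(reviews):
    """reviews: list of dicts with 'text', 'category', 'posted_at' (datetime)."""
    flags = []
    lengths = [len(r["text"].split()) for r in reviews]
    # Near-identical word counts across many reviews suggests a service
    # calibrated to a target length (assumed cutoff: stdev < 10% of mean)
    if len(reviews) >= 5 and pstdev(lengths) < 0.1 * mean(lengths):
        flags.append("uniform-length")
    # Four or more unrelated categories reviewed within the same hour
    by_hour = {}
    for r in reviews:
        key = r["posted_at"].replace(minute=0, second=0, microsecond=0)
        by_hour.setdefault(key, set()).add(r["category"])
    if any(len(cats) >= 4 for cats in by_hour.values()):
        flags.append("cross-category-burst")
    return flags
```

Neither flag is damning alone; an account that trips both is the rented-to-a-broker pattern described above.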
What 5-star skew really means
It's tempting to trust products with mostly 5-star reviews. But the combination that should alarm you is:
- A long tail of 1-stars complaining about real problems (battery dies, fabric tears, app crashes)
- Plus a sudden recent surge of detailed 5-stars praising the product in vague-yet-glowing terms
That's a seller buying reputation cleanup. The 1-stars are the truth; the 5-stars are the sponsored kind. Filter to verified purchases only, then sort by recent — that often reveals the underlying signal.
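The cleanup pattern can be sketched as a simple check over the rating timeline: an established 1-star tail plus a recent 5-star surge. The 30-day window and the tail/surge fractions are illustrative assumptions:

```python
from datetime import datetime, timedelta

def cleanup_pattern(reviews, now=None):
    """reviews: list of (stars, posted_at) tuples. True if the product shows
    an older 1-star tail *and* a recent surge of 5-stars (assumed cutoffs)."""
    now = now or datetime.now()
    recent = [s for s, t in reviews if now - t <= timedelta(days=30)]
    older = [s for s, t in reviews if now - t > timedelta(days=30)]
    one_star_tail = older and sum(s == 1 for s in older) / len(older) >= 0.3
    five_star_surge = recent and sum(s == 5 for s in recent) / len(recent) >= 0.8
    return bool(one_star_tail and five_star_surge)
```

A uniformly mediocre rating history returns False here; only the bimodal tail-plus-surge shape trips it, which is the shape that says "reputation cleanup".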
Use the detector on the borderline cases
When you're staring at a review you can't quite call, paste it into our text detector. The five forensic signals (burstiness, lexical tics, n-gram repetition, sentence-start variety, punctuation pattern) plus the per-token perplexity heatmap will give you an audit trail rather than a vibe.
A few rules of thumb specific to product reviews:
- Detection is more reliable on longer reviews. A 30-word review can't be reliably classified.
- Aggregate the signal across multiple reviews from the same account. If 10 reviews all score 80%+ AI from the same account, the account is the problem regardless of which specific review you started with.
- A clean detection score ≠ honest review. A real human can be paid to write a fake review. Detection only catches the AI path; it doesn't catch incentivized humans.
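The aggregation rule of thumb might look like this in code. The 0.8 score cutoff mirrors the "80%+ AI" figure above; the minimum-review count and the fraction of high-scoring reviews required are assumptions:

```python
def suspicious_accounts(scores_by_account, cutoff=0.8, min_reviews=5, frac=0.8):
    """scores_by_account: dict mapping account id -> list of per-review
    AI scores in [0, 1]. Flags accounts where most reviews score high."""
    flagged = []
    for account, scores in scores_by_account.items():
        if len(scores) >= min_reviews:
            high = sum(s >= cutoff for s in scores)
            if high / len(scores) >= frac:
                flagged.append(account)
    return flagged
```

The point of aggregating is that any single score is noisy, but ten independent high scores from one account are not.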
Better tools than the review section
Sometimes the right move is to stop reading reviews entirely and verify a different way:
- Manufacturer transparency. Is this a real brand with a real website, real return policy, real customer service phone number you can actually call?
- YouTube/TikTok reviews from creators with histories. Video is harder to fake at scale, and creators who have built audiences over years have skin in the game.
- Reddit. Search the product name on /r/[relevant subreddit]. Real users in topic-specific communities are much harder to fake than Amazon accounts.
- Return windows. A product worth buying is one you can send back. Lean on the return policy as your real fallback.
What platforms should do (and aren't yet)
The structural fixes are obvious:
- Adopt C2PA-style provenance for review submission. A signed claim "this account submitted this review at this time from this device" makes review broker accounts much harder to scale.
- Surface "verified purchase" rates more prominently.
- Show review velocity over time. Sudden spikes are the most common pattern of paid reviews.
- Run their own detection across reviews. Most platforms have the data to do this; few publish their methodology.
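The velocity check in particular is easy to sketch: flag any day whose review count far exceeds a trailing baseline. The 7-day window and the 5x spike multiplier are illustrative assumptions, not values any platform has published:

```python
def velocity_spikes(daily_counts, window=7, multiplier=5):
    """daily_counts: reviews per day, oldest first.
    Returns indexes of days whose count exceeds multiplier times the
    trailing-window average (floored at 1 to handle quiet products)."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window : i]) / window
        if daily_counts[i] > multiplier * max(baseline, 1):
            spikes.append(i)
    return spikes
```

A steady two-reviews-a-day product that suddenly posts thirty in one day lights up immediately; organic growth, which ramps gradually, does not.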
Until the platforms harden, the burden falls on shoppers. The good news: the techniques in this post don't take long to internalize. After a few hundred reviews of practice, the vibe is unmistakable.
A note on the asymmetry
Sellers profit from a one-time fake review. Shoppers absorb the cost of every bad purchase. That asymmetry is exactly why the equilibrium tilts toward more fakes over time, not fewer. Defending yourself isn't optional — it's the price of buying anything online in 2026.
For the technical side of how detection works, see our How to spot AI-written text. For the deeper accuracy framing — what an "87% AI" score actually tells you — see What '87%' actually means.