Key Points
- A study of nearly 30,000 Amazon reviews found AI‑generated content across multiple product categories.
- Baby products had the highest AI review rate at about 5.2%; garden items were lowest at 1.7%.
- AI‑written reviews skew heavily positive, with roughly 74% receiving five‑star ratings.
- 93% of suspected AI reviews were marked as verified purchases, enhancing perceived authenticity.
- The FTC treats AI‑generated fake reviews as illegal; UK regulators are also targeting deceptive reviews.
- Experts recommend Amazon and policymakers explicitly include AI‑generated text in fake‑review definitions.
- Consumers are urged to avoid using AI tools to write their own reviews to maintain trust.

Study Findings
A systematic analysis of nearly 30,000 customer reviews for 500 best‑selling Amazon items revealed that AI‑generated text appears across multiple product categories. Baby products showed the highest incidence, with roughly 5.2% of the 3,037 reviews examined containing language generated by large language models. The beauty and wellness & relaxation categories followed, at about 5% and 4.4% respectively. In contrast, garden items exhibited the lowest rate at approximately 1.7%.
The analysis also highlighted a skew toward positive sentiment in AI‑generated reviews. Approximately 74% of the identified AI‑written reviews were five‑star, compared with 56% for human‑written reviews, while only about 10% of AI‑generated reviews were one‑star versus 21% for non‑AI reviews. A striking 93% of the suspected AI reviews were marked as verified purchases, lending them an appearance of authenticity.
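To make the reported statistics concrete, the sketch below shows how per‑category AI rates and the five‑star skew could be tabulated from detector‑labeled review records. It is a minimal, hypothetical illustration: the field names, the stub data, and the assumption of a pre‑run AI‑text detector are ours, not the study's.

```python
from collections import defaultdict

# Hypothetical records: in a study like this, the "is_ai" label would come
# from an AI-text detector run over ~30,000 reviews; here it is stubbed.
reviews = [
    {"category": "baby", "stars": 5, "verified": True, "is_ai": True},
    {"category": "baby", "stars": 4, "verified": True, "is_ai": False},
    {"category": "garden", "stars": 1, "verified": False, "is_ai": False},
    # ... thousands more records in a real dataset
]

def ai_rate_by_category(reviews):
    """Share of reviews flagged as AI-written, per product category."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in reviews:
        totals[r["category"]] += 1
        flagged[r["category"]] += r["is_ai"]  # True counts as 1
    return {c: flagged[c] / totals[c] for c in totals}

def five_star_share(reviews, ai):
    """Share of five-star ratings among AI-flagged or human reviews."""
    group = [r for r in reviews if r["is_ai"] == ai]
    return sum(r["stars"] == 5 for r in group) / len(group)

print(ai_rate_by_category(reviews))
print(f"AI 5-star share: {five_star_share(reviews, True):.0%}")
print(f"Human 5-star share: {five_star_share(reviews, False):.0%}")
```

On the study's figures, the AI group's five‑star share (74%) would come out well above the human group's (56%), which is the skew the researchers flagged.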
Implications for Consumers and Brands
The presence of AI‑crafted reviews raises serious questions about the reliability of Amazon’s rating system. Positive, verified‑purchase reviews that are actually generated by algorithms can artificially inflate product ratings, potentially misleading shoppers who rely on peer feedback to make purchasing decisions. Conversely, the lower prevalence of AI‑generated negative reviews suggests that the technology is being used primarily to boost product perception rather than to provide balanced critique.
For brands, especially those competing in saturated markets like beauty and baby care, the temptation to employ AI tools to generate favorable feedback is evident. However, the practice risks eroding consumer trust if uncovered, and may attract regulatory scrutiny.
Regulatory and Platform Responses
In the United States, the Federal Trade Commission (FTC) treats fake reviews, including those produced by artificial intelligence, as illegal under federal law and can pursue civil penalties against violators. The United Kingdom’s Competition and Markets Authority (CMA) has also signaled intent to clamp down on deceptive reviews, though its guidance does not explicitly reference AI‑generated content.
Experts call for Amazon to strengthen its detection mechanisms and for regulators to ensure that definitions of “fake review” explicitly encompass AI‑generated text. Without such measures, the utility of Amazon’s AI‑generated review highlights—short paragraphs summarizing common themes across thousands of reviews—could be compromised as the proportion of inauthentic content grows.
Calls to Action
Stakeholders urge a two‑step approach: first, acknowledge the emerging threat of AI‑generated reviews; second, expand legal and platform policies to clearly label such content as fake. In the meantime, consumers are advised to be cautious and avoid outsourcing review writing to large language models, preserving the integrity of their own purchasing experiences.
Source: techradar.com