Key Points
- Guardian investigation found misleading AI Overviews for liver test queries.
- Google removed AI Overviews for the exact queries highlighted by the report.
- Variations of the queries initially still showed AI‑generated summaries.
- Google spokesperson said the company does not comment on individual removals.
- Internal clinicians reviewed the queries and found the information largely accurate.
- British Liver Trust welcomed the removal but warned of broader AI health content issues.
- TechCrunch sought further comment on the situation.
Background
The Guardian conducted an investigation into Google’s AI Overviews, a feature that provides concise answers to user queries directly in search results. The investigation found that for health‑related queries about liver test reference ranges, the AI Overviews presented numbers that did not account for factors such as nationality, sex, ethnicity, or age. This omission could lead users to incorrectly assume their test results were normal.
Specific queries examined by the Guardian included “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” The report noted that while the AI Overviews for these exact phrasings were removed, variations like “lft reference range” still produced AI‑generated summaries at the time of testing.
Google’s Response
After the Guardian story was published, Google removed the AI Overviews for the exact queries highlighted. When TechCrunch tested those queries later that morning, none of them displayed AI Overviews, although the option to ask the same query in AI Mode remained available. In several instances, the top search result was the Guardian article itself discussing the removal.
A Google spokesperson declined to comment on individual removals, saying the company does not discuss specific changes but works to make broad improvements. The spokesperson added that an internal team of clinicians had reviewed the queries identified by the Guardian and concluded that, in many cases, the information was not inaccurate and was supported by high‑quality websites.
Reactions and Wider Implications
Vanessa Hebditch, director of communications and policy at the British Liver Trust, described the removal as “excellent news.” However, she emphasized that fixing a single search result does not address the larger concern posed by AI Overviews for health topics overall. The incident highlights ongoing challenges in ensuring that AI‑generated health information is both accurate and appropriately contextualized.
TechCrunch reached out for additional comment, underscoring the broader industry interest in how major platforms handle AI‑driven health content. The episode illustrates the tension between providing quick, AI‑generated answers and the need for nuanced, medically sound information that accounts for individual patient variables.
Source: techcrunch.com