Key Points
- Perplexity’s web crawlers are allegedly skirting restrictions
- The company’s bots appear to be disguising their identity to get around robots.txt files and firewalls
- Perplexity is getting around obstacles by using a generic browser intended to impersonate Google Chrome on macOS
- The company’s undeclared crawler can rotate through IP addresses not listed in Perplexity’s official IP range
- Cloudflare has removed Perplexity’s bots from its list of verified bots
- Cloudflare has implemented a way to identify and block Perplexity’s stealth crawler
A flowchart created by Cloudflare to illustrate the different ways Perplexity's web crawlers try to access the content of a website.
Perplexity’s web crawlers are allegedly skirting restrictions, according to a new report from Cloudflare. The report claims that the company’s bots appear to be “stealth crawling” sites by disguising their identity to get around robots.txt files and firewalls.
Robots.txt is a simple file a website hosts to tell web crawlers whether they may scrape its content. Perplexity’s declared web crawling bots are “PerplexityBot” and “Perplexity-User.” In Cloudflare’s tests, Perplexity was still able to display the content of a new, unindexed website even when those specific bots were blocked by robots.txt.
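A robots.txt set up along these lines would disallow Perplexity’s declared crawlers (the directives are standard robots.txt syntax; the bot names are the ones Cloudflare cites):

```text
# Block Perplexity's declared crawlers from the entire site
User-agent: PerplexityBot
Disallow: /

User-agent: Perplexity-User
Disallow: /
```

Note that robots.txt is purely advisory: a crawler that ignores the file, or identifies itself under a different name, is not blocked by it.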
The behavior extended to websites with specific Web Application Firewall (WAF) rules that restricted web crawlers, as well. Cloudflare believes that Perplexity is getting around those obstacles by using “a generic browser intended to impersonate Google Chrome on macOS” when robots.txt prohibits its normal bots.
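Why a browser-impersonating user agent defeats this kind of rule can be sketched with a toy filter. This is a hypothetical illustration of a naive User-Agent check, not Cloudflare’s actual WAF logic, and the UA strings below are representative examples:

```python
# Hypothetical sketch: a naive User-Agent filter that blocks declared
# crawlers by name. Any client sending a browser-like UA string passes.
BLOCKED_AGENTS = ("PerplexityBot", "Perplexity-User")

def allow_request(user_agent: str) -> bool:
    """Return False if the UA string names a blocked crawler."""
    return not any(bot in user_agent for bot in BLOCKED_AGENTS)

# A bot that declares itself is caught...
declared = "Mozilla/5.0 (compatible; PerplexityBot/1.0)"
# ...but one impersonating Chrome on macOS slips through.
spoofed = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
           "AppleWebKit/537.36 (KHTML, like Gecko) "
           "Chrome/124.0.0.0 Safari/537.36")

print(allow_request(declared))  # False: blocked
print(allow_request(spoofed))   # True: allowed
```

This is why Cloudflare’s detection relies on behavioral signals rather than the self-reported user-agent string alone.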
In Cloudflare’s tests, the company’s undeclared crawler could also rotate through IP addresses not listed in Perplexity’s official IP range to get through firewalls. Cloudflare says that Perplexity appears to be doing the same thing with autonomous system numbers (ASNs) — an identifier for IP addresses operated by the same business — writing that it spotted the crawler switching ASNs “across tens of thousands of domains and millions of requests per day.”
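IP-range checks fail for the same reason. A site that allowlists or identifies a crawler by its published CIDR ranges, as sketched below with Python’s standard `ipaddress` module, can’t match requests coming from rotated, undeclared addresses (the range shown is an RFC 5737 documentation placeholder, not Perplexity’s real range):

```python
import ipaddress

# Placeholder for a crawler's published ranges (RFC 5737 documentation
# block, used here purely for illustration).
OFFICIAL_RANGES = [ipaddress.ip_network("192.0.2.0/24")]

def from_official_range(client_ip: str) -> bool:
    """Return True if the client IP falls inside a published crawler range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in OFFICIAL_RANGES)

print(from_official_range("192.0.2.10"))   # True: inside the declared range
print(from_official_range("203.0.113.5"))  # False: rotated, undeclared IP
```

A crawler rotating through addresses outside its published ranges, and across different ASNs, looks to this check like ordinary unaffiliated traffic.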
Up-to-date information from websites is vital to companies training AI models, especially as services like Perplexity are used as replacements for search engines. Perplexity has been caught circumventing such restrictions before.
Previous Incidents
Multiple websites reported that Perplexity was still accessing their content despite them forbidding it in robots.txt — something the company blamed on the third-party web crawlers it was using at the time. Perplexity later partnered with multiple publishers to share revenue earned from ads displayed alongside their content, seemingly as a make-good for its past behavior.
Stopping companies from scraping content from the web will likely remain a game of whack-a-mole. In the meantime, Cloudflare has removed Perplexity’s bots from its list of verified bots and implemented a way to identify and block Perplexity’s stealth crawler from accessing its customers’ content.
Source: engadget.com