Has the FTC's antitrust investigation created a scraping economy?

In recent weeks Google has begun to crack down on commercial offerings that scrape search results data. Companies like Raven Tools and Ahrefs have chosen to stop providing this data in order to keep their AdWords API access.

If you’re an AdWords API user then, I believe, you’ve consented to Google inspecting your code. It appears that if Google finds scraped data in use there, or perhaps has other reasons to believe you’re engaged in SERPs scraping, then the search engine has finally started to take some hands-on action.

Previously, Google’s main defense against SERPs scraping was a CAPTCHA.

Last week I speculated that Google wasn’t making this extra effort just to protect ranking data. After all, the Google Webmaster Console provides ranking data for your own website (just not that of your competitors). I wondered whether Google was increasingly interested in protecting its Knowledge Graph data too.

There are now reports that the FTC may end its two-year antitrust probe of Google if the search engine makes some voluntary changes. In particular, the FTC wants to see Google stop using restaurant and travel reviews from sites like TripAdvisor and Yelp.

At this stage this is all speculation.

However, if Google is limited in the data it can collect and display by “scraping” certain high-profile sites, then the search engine may become more protective of the data it shows. In this scenario, the difference between scraping and indexing does begin to get a bit blurry.

The maths is fairly simple – the rarer something becomes, the more expensive it gets – and that may be what is happening with carefully ordered data. That might well explain why Google is looking to protect its own presentation of data with more enthusiasm than it has shown previously.