Didn’t Google obey the noindex directive in the past?
StoneTemple, which is now a part of Perficient Digital, published an article back in 2015 noting that Google didn’t obey the robots.txt noindex directive 100% of the time.
Their takeaway at the time was:
“Ultimately, the NoIndex directive in Robots.txt is pretty effective. It worked in 11 out of 12 cases we tested. It might work for your site, and because of how it’s implemented it gives you a path to prevent crawling of a page AND also have it removed from the index. That’s pretty useful in concept. However, our tests didn’t show 100 percent success, so it does not always work.”
Unfortunately for many SEOs, that is no longer the case: Google has made it very clear that it will no longer support the noindex directive in robots.txt at all.
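For context, the now-unsupported directive looked like this in a robots.txt file (the path shown is purely illustrative):

    User-agent: *
    Noindex: /example-private-page/

Pages that must stay out of the index should instead rely on a supported mechanism, such as a robots meta tag in the page's HTML or an X-Robots-Tag HTTP response header, both of which Google continues to honor:

    <meta name="robots" content="noindex">
    X-Robots-Tag: noindex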
Why is Google changing now?
It is well known in the SEO community that Google has been looking to make this change for at least several years, and with the tech giant now pushing to standardize the Robots Exclusion Protocol, it can finally move this part of its agenda forward.
Google said it had “analyzed the usage of robots.txt rules” to help determine this course of action, focusing in particular on implementations unsupported by the internet draft, such as crawl-delay, nofollow, and noindex. “Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low,” Google said. “These mistakes hurt websites’ presence in Google’s search results in ways we don’t think webmasters intended.”