Have you ever wanted to prevent Google from indexing a particular URL on your web site and displaying it in their search engine results pages (SERPs)? If you manage web sites long enough, a day will likely come when you need to know how to do this.
The three methods most commonly used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements that link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
While the differences between the three approaches seem subtle at first glance, the results can vary substantially depending on which method you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.
Including a rel="nofollow" attribute on a link prevents Google's crawler from following it which, in turn, prevents Google from discovering, crawling, and indexing the target page. While this method may work as a short-term fix, it is not a viable long-term solution.
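For illustration, such a link might look like the following sketch (the URL and anchor text are hypothetical):

    <!-- rel="nofollow" asks Google's crawler not to follow this link.
         The href and anchor text below are placeholders. -->
    <a href="https://www.example.com/private-page.html" rel="nofollow">Private page</a>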
The flaw with this approach is that it assumes all inbound links to the URL will carry a rel="nofollow" attribute. The site owner, however, has no way to prevent other web sites from linking to the URL with a followed link. So the chances that the URL will eventually be crawled and indexed under this method are quite high.
Using robots.txt to prevent Google indexing
Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
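For example, a minimal robots.txt entry blocking a single page might read as follows (the path is hypothetical):

    # Ask all crawlers not to fetch this one page.
    # The path below is only an example.
    User-agent: *
    Disallow: /private-page.html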
Sometimes Google will show a URL in their SERPs even though they have never indexed the contents of that page. If enough web sites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links, and as a result they will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, then the most effective approach is a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they must first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
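In practice, the tag sits in the page's head element, along the lines of this sketch (the title is a placeholder):

    <head>
      <!-- Tells crawlers not to index this page; the page itself
           must remain crawlable for the directive to be seen. -->
      <meta name="robots" content="noindex">
      <title>Private page</title>
    </head>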