Accidentally Blocking Link Juice with Robots.txt

Tue, Jun 9, 2009

Link Building

Today we’re going to talk about a mistake people make all too often: blocking pages with robots.txt when those pages link to other pages on your site or elsewhere, thereby killing any link juice the blocked page may have. Of course, there is some debate in SEO circles about whether we should be blocking anything at all (after all, why publish a page in the first place if you are going to block it?), but we’ll leave that discussion for another day.

Many people fail to understand exactly how the search engines process blocking requests, or the different methods available for making them. There are two methods that can be employed to block the search engines from accessing and crawling particular pages:

  • Blocking with robots.txt – Tells the search engine not to visit the URL, but it may still keep the URL in its index of pages. So the search engine knows the page exists, but has no idea what is on it.
  • Blocking with the meta tag noindex – Tells the search engine to go ahead and visit the page, but not to keep it in its index.
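As a concrete illustration, here is what each method looks like in practice (the /private/ path is a hypothetical example):

```
# robots.txt – crawlers should not fetch anything under /private/,
# though the URLs themselves can still end up in the index
User-agent: *
Disallow: /private/
```

And the meta tag, placed in the page's head section:

```html
<!-- crawlers may visit this page, but should drop it from their index -->
<meta name="robots" content="noindex">
```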

When you use robots.txt to block access to a page, that URL will appear in most search engines as just that – a bare link. It will have no title, description, or anything else.
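You can see the crawler's side of this with Python's standard-library robots.txt parser. The rules and URLs below are hypothetical, but the check is the same one a well-behaved crawler performs before fetching a page: a disallowed URL is simply never visited, so the engine never learns its title, description, or outbound links.

```python
from urllib.robotparser import RobotFileParser

# Parse a hypothetical robots.txt; a real crawler would fetch it
# from https://example.com/robots.txt
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# The crawler skips the blocked URL entirely...
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
# ...but is free to crawl everything else.
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
```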

The problem is that your blocked pages have likely accumulated links and “juice,” but they cannot pass it on. So if you have unblocked content downstream that is linked from these blocked pages, guess what: you lose all that link juice. You are, in effect, hurting yourself!

If you insist on hiding pages behind robots.txt as part of your search engine optimization strategy, then there are two important things to remember:

  • If you are going to link to anything from the blocked page, use the nofollow attribute on those links to help conserve link juice.
  • If you know a page being blocked by robots.txt has accumulated link juice, either consider removing it from being “hidden” or use the meta tag “noindex, follow” instead, so it can pass its link juice on to other pages on your site that could benefit from it.
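In markup, those two fixes look like this (the URL is a placeholder):

```html
<!-- noindex, follow: keep the page out of the index,
     but let its links pass juice to the pages they point to -->
<meta name="robots" content="noindex, follow">

<!-- nofollow on an individual link, to conserve juice -->
<a href="/some-page.html" rel="nofollow">Some page</a>
```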

This post was written by:

Cassiano Travareli - who has written 90 posts on SEO Blog | SEO Marketing World.

