Reasons for Google not caching a page: An SEO checklist for Google's cache problem

  1. If a page is password protected, search engines cannot access it. If they find many links pointing to it, they will keep the URL in the search results, but there will not be any cache since the page cannot be fetched by Googlebot. (A rough fetch check covering points 1 to 3 is sketched after this checklist.)
  2. If the page is too large for indexing. As we discussed before on this blog, Googlebot has problems caching large pages; for example, large PPT, PDF, or Doc files are not cached by Google. From our research the indexing limit is about 1.5 MB, and I have personally not seen a page larger than that being cached. Keep in mind that indexing and caching are two different things; I am discussing the indexing limit here.
  3. If a page has errors and does not render properly for Googlebot, it will fail to get indexed. This sometimes happens with dynamic pages: the server may not render the page correctly when Googlebot visits, which results in Googlebot not caching the page.
  4. If the page is suffering from a penalty, caching will be affected.
    Also, if Googlebot has not visited a page for a long time while the page is still in the index, it will lose its credibility in the index and eventually lose its cache.
  5. Accidentally blocking a page in robots.txt or in robots meta tags will stop search engines from caching it. For example, the noarchive robots meta tag tells search engine robots not to keep a cached copy of the web page. (See the robots.txt and meta tag check sketched below.)
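
To make points 1 to 3 concrete, here is a minimal Python sketch that fetches a URL with a crawler-style User-Agent and reports the HTTP status and response size. The example URL and the 1.5 MB threshold (the figure quoted in point 2 above) are illustrative assumptions, not official Google values.

import urllib.error
import urllib.request

SIZE_LIMIT_BYTES = int(1.5 * 1024 * 1024)  # ~1.5 MB, the figure quoted in point 2 (an assumption, not an official limit)
USER_AGENT = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def check_page(url):
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            body = response.read()
    except urllib.error.HTTPError as error:
        if error.code in (401, 403):
            print(f"{url}: HTTP {error.code} - protected page, Googlebot cannot fetch it (point 1)")
        else:
            print(f"{url}: HTTP {error.code} - server error, page may not get indexed (point 3)")
        return
    except urllib.error.URLError as error:
        print(f"{url}: not reachable ({error.reason})")
        return

    size = len(body)
    if size > SIZE_LIMIT_BYTES:
        print(f"{url}: {size} bytes - larger than ~1.5 MB, may not be fully indexed (point 2)")
    else:
        print(f"{url}: {size} bytes, HTTP 200 - fetchable and within the size limit")

if __name__ == "__main__":
    check_page("https://www.example.com/")  # placeholder URL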
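
For point 5, a small sketch along the same lines can confirm that a URL is not accidentally blocked in robots.txt and that its HTML does not carry a noarchive or noindex robots meta tag. The example URL and the user-agent token are placeholders.

import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urljoin

class RobotsMetaParser(HTMLParser):
    # Collects the content of any <meta name="robots"> tags in the page.
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag.lower() == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.directives.append((attrs.get("content") or "").lower())

def check_blocking(url, user_agent="Googlebot"):
    # 1. robots.txt check: is this URL allowed for the given user-agent token?
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(urljoin(url, "/robots.txt"))
    parser.read()
    if not parser.can_fetch(user_agent, url):
        print(f"{url}: blocked by robots.txt for {user_agent}")

    # 2. robots meta tag check: noarchive stops caching, noindex stops indexing.
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    meta = RobotsMetaParser()
    meta.feed(html)
    for content in meta.directives:
        if "noarchive" in content or "noindex" in content:
            print(f"{url}: robots meta tag found -> {content}")

if __name__ == "__main__":
    check_blocking("https://www.example.com/")  # placeholder URL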
