Can I disallow crawling of my CSS and JavaScript files?
A fun question from SEOmofo in Simi Valley. They ask: “If I externalize all CSS style definitions and JavaScript scripts and disallow all user agents from accessing these files (via robots.txt), would this cause problems for Googlebot? Does Googlebot need access to these files?”
I personally would recommend not blocking that. For example, the White House recently rolled out a new robots.txt, and I think they blocked their images directory, or CSS, or JavaScript, or something like that. You really don’t need to do that. In fact, sometimes it can be very helpful if we think something spammy is going on with JavaScript, or if somebody is doing a sneaky redirect or something like that. So my personal advice would be to let Googlebot go ahead and crawl that. It’s not like these files are huge anyway, so it doesn’t consume a lot of bandwidth. My personal advice: just give Googlebot access to all that stuff, and most of the time we won’t ever fetch it. But on the rare occasion when we’re doing a quality check on behalf of someone, or we receive a spam report, we can go ahead and fetch it to make sure your site is clean and doesn’t have any sort of problems.
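For illustration, here is a sketch of the kind of robots.txt rules the question describes, followed by the approach recommended above. The directory paths (`/css/`, `/js/`) are hypothetical, not taken from any real site:

```
# Discouraged: blocking all crawlers from external CSS and JavaScript
# (this is the pattern the answer advises against)
User-agent: *
Disallow: /css/
Disallow: /js/

# Recommended instead: leave these resources crawlable,
# or explicitly allow them for Googlebot
User-agent: Googlebot
Allow: /css/
Allow: /js/
```

Leaving CSS and JavaScript crawlable lets Googlebot verify, when needed, that the rendered page matches what users see.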