Which is more important: content or links?
Here’s a funny question from Jeff in NYC, “As Google’s algo evolves, is it better to have exceptional links and mediocre content, or exceptional content and mediocre links?”
I’ll stop right there rather than finishing the question. Google always has to balance authority and topicality, for lack of a better word. If somebody types in Viagra, which is one of the most spammed terms in the world, you want something that’s about Viagra. You don’t just want something that has a lot of authority, like Newsweek or Time writing an article with a single mention of Viagra in some throwaway phrase. So you do want authority, the sites that are trustworthy and reputable, but you also want topicality; you don’t want something off topic, you want it to be about what the user typed in. So we try to find a good balance there.

I would say have a well-rounded site. Great content has to be the foundation of any good site, because mediocre content tends not to attract exceptional links by itself, and if you try getting exceptional links for really, really crappy content, you are going to be pushing uphill. It’s going to be harder to get those links, and you end up doing stuff that we would consider bad or scuzzy for the web, like paying for them. So it’s better to have great content and to earn those links naturally; then you get both great content and great links. That beats taking something that’s really not that interesting and trying to push and push, bug people, send out spam emails, and ask for links. So you want a well-rounded site, and one of the best ways to get there is to have fantastic, interesting, useful content, great resources, great information. That naturally attracts the links, and then search engines want to reflect the fact that the web thinks you are interesting or important or helpful.
Do ids in heading tags affect search engines?
Alright, we’ve got a question from Spain, specifically Madrid. Dictina asks “Does using a class or an id in a header tag <h1 id="whatever">text</h1> instead of plain headers <h1>text</h1> interfere with the way search engines see and understand headings?”
I believe the answer is no, because you still have, for example, an h1; you just have h1 id="whatever". So I don’t think that interferes in any way. We are pretty good about saying, here’s a hyperlink or an image tag, and here are extra attributes like width or height, and we can parse those and sometimes use them. In image search, for example, you can now search for a specific width and height of images. But we are also very good with noisy documents: documents with extra ids, all sorts of divs, tables that aren’t closed, those sorts of things. We do pretty well at disregarding those. So I would write clean syntax to make it easier for yourself when you develop or upgrade your site in the future, and Google will do a good job with elements like divs where you give them a class or an id name that’s not strictly necessary but is often good form. So don’t worry about it, as far as search engines go.
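As a rough illustration (using Python’s standard library, not Google’s actual parser), a generic HTML parser extracts exactly the same heading text whether or not the h1 carries an id attribute:

```python
# Sketch: an HTML parser sees the id as extra metadata on the tag;
# the visible heading text it extracts is identical either way.
from html.parser import HTMLParser

class HeadingExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True  # attrs (id, class, ...) are simply ignored

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.headings.append(data)

def extract_h1(html):
    parser = HeadingExtractor()
    parser.feed(html)
    return parser.headings

# Both forms yield the same heading text.
print(extract_h1('<h1 id="whatever">text</h1>'))  # ['text']
print(extract_h1('<h1>text</h1>'))                # ['text']
```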
What can I do if a competitor is spamming?
Laura Thieme from Columbus, OH says “A client has a leading competitor who has created 100 or so blog sites with little depth – Google gives them top ranking on every related term to their industry. Why does this tactic win? I thought this was crap hat tactics?”
‘Crap hat’ is a phrase Danny Sullivan used at a recent conference; he’s just annoyed when people do spammy, junky stuff that doesn’t really help anybody on the web. So Laura, file a spam report and give us some specifics, because I’d love to check it out. I think our scoring does pretty well at finding links that we shouldn’t be crediting, but we love to get new data on things we should be doing better, or link exchanges that we don’t appear to be catching when they are really, really excessive. So send a spam report so that we can check it out; that’s the sort of stuff we love to get, and we’ll see where it goes from there. Thanks!
Should I strip file extensions from my URLs?
Tons of questions from the UK! J from London asks “Does stripping file extensions from URLs (site.com/folder/page.html versus site.com/folder/page) have demonstrable benefit in the SERPs?”
I don’t really think it does, and personally I would not do that. People like to know that it’s an HTML page they are hitting. If you have a directory, then sure, have a directory. But if you don’t have .html and your web server is not configured correctly, we are making guesses: is it a PDF, is it a .exe, is it a .cfm, all the different MIME types there are, trying to figure out what type of content it is. So if possible I would just stick with the standard convention and have something like .htm or .html. Users understand that; they don’t get confused, and they won’t be quite as cautious about clicking on a result. It doesn’t make much difference in core ranking, but behaviorally it avoids a rough edge that people could get stuck on or worry about. So I would probably stick with having the extension, the .html or something like that.
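To see why the extension helps, here’s a minimal sketch with Python’s stdlib `mimetypes` module (the URLs are hypothetical): with an extension the content type is an easy guess, while an extensionless URL gives a client nothing to go on except the server’s Content-Type header.

```python
# Sketch: guessing a content type from the URL alone.
import mimetypes

print(mimetypes.guess_type("site.com/folder/page.html")[0])  # text/html
print(mimetypes.guess_type("site.com/folder/page.pdf")[0])   # application/pdf
print(mimetypes.guess_type("site.com/folder/page")[0])       # None -- ambiguous
```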
How do I optimize an e-commerce site without rich content?
A difficult question from Paddy Moogan in the UK: “Hi Matt, what are your opinions on optimizing an Ecommerce website where the main pages/products may not necessarily be rich in content?”
That’s a tough question, alright. Essentially you are saying, here’s a place where you can buy products, and there’s not a lot of content, or maybe the content is duplicated from a bunch of places. My short answer is: put on your user hat. If you type in a product and you get a ton of places to buy things, and there’s no real information on the page and no real value-add, all you get is places to buy, you get pretty annoyed. In fact, whenever we ask users, “What’s the top issue for you these days?”, it’s less about webspam like cloaking or hidden text and more about search quality issues: “I don’t like how many commercial results I see,” or “I get too many products or too many comparison shopping sites,” things like that. That’s the sentiment we’ve heard a lot.

So what you should be asking yourself is: do I want to take that step? Do I want to make an e-commerce site if I don’t have a lot of original content or value-add, if I have a lot of duplicate content or it’s just an affiliate feed and there’s really not that much I’m adding to it? Do you really want to jump in and start optimizing that, or do you want to look into something more original, something more compelling, some other hook that could get you a lot of visitors? My advice is, if possible, think about how you can move toward that high-value-add, unique sort of site, not a site that somebody might view as cookie-cutter, or get annoyed about when they land on a page that looks like 500 other pages they have just seen on the web. Those are the sorts of things to think about.
Two questions about the link: operator
Sven Heb from Wiesbaden, Germany asks “How accurate is Google’s back link check (link:...)? Are all nofollow back links filtered out, or why does Yahoo/MSN show quite a few more back link results?”
The short answer is that, historically, we only had room for a small percentage of backlinks, because web search was the main priority and we didn’t have a ton of servers for link: queries. We have doubled or otherwise increased the number of backlinks that we show for link: over time, but it’s still a sub-sample, a relatively small percentage. I think that’s a pretty good balance, because if you automatically showed a ton of backlinks for any website, then spammers and competitors could use that to try to reverse-engineer someone’s rankings. And you don’t necessarily want someone spying on your rankings and trying to figure out how to compete with you by getting every single link that you’ve got. What we do instead is a nice compromise: if you register your site at google.com/webmasters, the webmaster console, then you can see all of the backlinks that we know about for you. The vast, vast majority of the backlinks we know about are there in Google’s webmaster console. So you can look at the sub-sample for any website or any page on the web, but if you want pretty much the full dump of what we know, you can see it for your own site, not necessarily for your competitors. We think that’s a pretty good compromise, and that’s probably the policy we’ll have going forward.
San Diego Tim from San Diego says “If you have inbound links from reputable sites but those sites do not show up in a link:webname.com search, does this mean you are not getting any ‘credit’ in Google’s eyes for having inbound links?”
No, it doesn’t. link: only shows a sample, a sub-sample of the backlinks that we know about, and it’s a random sample. It’s not like we only show the high-PageRank backlinks; that’s what we used to do, and anyone who had a PageRank of four or below wasn’t able to see their backlinks, because they weren’t getting high-PageRank links. So we made it more fair by randomizing which backlinks we show, and we also doubled the number of backlinks we show at that time. What’s interesting is that if you only showed links that flow PageRank, or that we trust, or that don’t have nofollow, then people could reverse-engineer that and say, “Oh, I’ll try to get the links that are really valuable.” So we show links that carry a lot of credit in our system, and we also show links that we don’t really trust or that don’t carry much credit. It is truly a random sample: stuff that’s nofollowed, stuff that is followed, stuff that we believe in a lot, stuff that we don’t trust that much. Just because you don’t see one particular link in link:, it doesn’t tell you whether that link does or doesn’t flow reputation or PageRank or whatever you want to call it. If it’s your own site, you can sign up for Google’s webmaster console and get a very complete list, basically the vast majority of links that we know about, as a dump that you can even download as a CSV file. So if you want a really good idea of your backlinks, that’s the place to get a pretty exhaustive list according to Google.
Why does Google index blogs faster than other sites?
Lots of questions from the UK! Lee Willis from Cumbria, UK asks “Why does Google crawl/index blogs (specifically sites notified by “WordPress XMLRPC pings”) so much faster than a “normal” site submitting a revised Sitemap? What is the impact of that on the overall “quality” of the index?”
Well, we always try to maximize the quality, relevance, and accuracy of our index. You want to make a distinction between crawling and indexing: Sitemap submission does not guarantee that we will crawl the URLs on that list. It is very helpful for discovering new URLs or making canonicalization decisions, but we don’t guarantee that if you submit a Sitemap we will go ahead and crawl it. Some people have done experiments where they saw that happen, but I’m not going to confirm or deny it, because the policies on exactly how we use Sitemap submissions can always change. Crawling and indexing are different. If you do a ping, a lot of the time Google will come and crawl you, but often it’s Google Blog Search, because the sites sending those WordPress or FeedBurner pings are usually blogs. So blog search can come and crawl you five minutes later, but then you might show up in the blog search corpus, not in our main web index corpus. Just because you get crawled, it doesn’t mean you are getting any sort of indexing boost. We try to decide rationally what the best quality of data is and how to get it. Sometimes that means crawling stuff immediately, like blog search with its very fast, near-real-time results. And sometimes it means taking Sitemaps, which might result in crawling at a different pace, or no boost at all, but we do use that information to help us improve canonicalization and the quality of the index. So I wouldn’t say pinging is the way to automatically get crawled. Make great content, get to be well known, and we will probably crawl you relatively frequently and see updated content any time you make a good change.
Are Google SERPs moving to Ajax?
Here’s a question from Owen in London: “Can you confirm if the Google SERPs are moving to AJAX, http://tinyurl.com/be5shp, if so do you think it’ll affect analytics which rely on the keyword information being in the URL?”
So, Google did roll out a change a few weeks ago, for a very small percentage of users, under 1% right now, doing what you might call JavaScript-enhanced search results. You land on Google’s page, and as you are typing, we can do neat things with JavaScript to make things faster and smoother for users; there’s a lot of really smart stuff you can do. The team really didn’t think about referrers and how that might break analytics packages and other things downstream. So it’s a very small percentage of people this is being trialed on, and people are thinking about whether there are ways to keep referrers, because referrers are so useful. Ten years from now, if referrers don’t work in the conventional browser sense, then maybe browsers could pass along everything after the pound sign. Even though what comes after the # mark isn’t officially part of the URL that gets sent along, if browsers were to pass that on, it would help all sorts of referrer-based analytics packages. So the way I think about it right now is: we have to try experiments to make search results better, faster, and cleaner. The intent is not to break referrers, but we have to keep trying new things, and we do want analytics packages to be able to continue working.
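To illustrate the pound-sign point (with a hypothetical URL, not Google’s actual result URLs): the fragment after `#` stays in the browser and is not part of what gets sent to a server, so a Referer header built from such a URL would not carry the keyword. Python’s stdlib `urllib.parse` shows the split:

```python
# Sketch: the fragment is parsed out separately and is visible
# to page JavaScript, but is not transmitted in HTTP requests.
from urllib.parse import urlparse

url = "https://www.google.com/search#q=flowers"  # hypothetical AJAX-style URL
parts = urlparse(url)
print(parts.path)      # /search
print(parts.query)     # '' -- no keyword before the '#'
print(parts.fragment)  # q=flowers -- stays client-side
```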
More than one H1 on a page: good or bad?
A very short to the point question from Erin, south of Boston. Erin asks “More than one H1 on a page: good or bad?”
Well, if there’s a logical reason to have multiple sections, then it’s not bad to have multiple H1s. I would be careful not to overdo it; if your entire page is an H1, that looks pretty cruddy. So don’t make everything an H1 and then use CSS to make it look like regular text, because competitors complain about that, and if users ever turn off CSS, or CSS doesn’t load, it looks really bad. It’s alright to have a little H1 here, and if there are two sections on a page, maybe a little H1 there. But you should really use it for headers or headings, which is the intent, and not just throw H1 everywhere you can on the page. People have tried to abuse that, so our algorithms take it into account, and it doesn’t really do you that much good. So I would use it where it makes sense, and sparingly, but you can have it multiple times.
Will Google add guest accounts to Webmaster Tools?
Here is a question from Ian M from United Kingdom. “Is Google planning to create read-only “Guest accounts” for Webmaster Tools? Many clients (particularly in heavily regulated industries e.g. banks) are very reluctant to provide access to a third party.”
Great feature suggestion! I have no idea, because the Webmaster Tools team has to plan its resources and what it works on just like any other team. I can see a valid use for this; at the same time, there are other things the tools folks are working on that are really useful. Some people want infrastructure updates so that backlink reports are always rock solid or new data is really, really fresh, and it’s hard to trade that off. So it’s a valid suggestion, and I appreciate it. I don’t know what level of priority they’d give it, because there’s probably relatively limited impact compared to making reports rock solid or overhauling our UIs, things that are going to be useful for every single person, not just a smaller fraction. But it’s something I can imagine us doing in the future, so we’ll definitely take it into account, and we appreciate the suggestion.