Interview with Udi Manber, Google’s search quality scientist

I just read this very interesting interview with Udi Manber, Google’s Vice President of Search Quality:

http://www.popularmechanics.com/blogs/technology_news/4259137.html

According to Wikipedia:

“Udi Manber (Hebrew: אודי מנבר‎) is an Israeli computer scientist. He is one of the authors of agrep and GLIMPSE. As of December 2007, he is employed by Google as one of their vice-presidents of engineering.
He earned his bachelor’s degree in mathematics in 1975 and his master’s degree in 1978, both from the Technion in Israel. At the University of Washington, he earned another master’s degree in 1981 and his Ph.D. in computer science in 1982.
He has won a Presidential Young Investigator Award in 1985, 3 best-paper awards, and the Usenix annual Software Tools User Group Award software award in 1999.
He was a professor at the University of Arizona and authored several articles while there. He wrote Introduction to Algorithms — A Creative Approach (ISBN 0-201-12037-2), a book on algorithms.
He became the chief scientist at Yahoo! in 1998.
In 2002, he joined Amazon.com, where he became “chief algorithms officer” and a vice president. He later was appointed CEO of the Amazon spin-off company A9.com. He filed a patent on behalf of Amazon.[1]
As of February 8, 2006, he has been employed by Google as one of their vice-presidents of engineering. In December 2007, he announced Knol, Google’s new project to create a knowledge repository.[2]”

John Mueller’s example of Google local search ranking

John Mueller, in one of his recent blog posts, explains how easy it is to get into Google local search and how important it is for businesses.

From the time Google’s Local Search was introduced, people have been eager to get into it. Some think it is difficult to get into local search, but in fact it is not. Google uses factors like user reviews from popular sites as ranking signals for local search.

From John’s example here:

http://maps.google.com/maps?hl=en&q=toy+store&near=Kirkland,+WA&fb=1&view=text&latlng=47674330,-122129701,11298439279698530468&dtab=0

the No. 1 ranking site, treetoptoys.com, has nothing but a Flash homepage and some Flash links. If that site were to compete for organic rankings, I am sure it would not rank, but for local search it is doing fine because of the reviews it has on other sites.

John also points out how important it is for SEO companies to find local businesses and help them go online and rank in local search. It is very good for their business, and some simple, traditional SEO is more than enough to rank in local search since the competition is pretty low.

Read John’s view in detail here: http://johnmu.com/untitled-document/#more-117

Matt Cutts Warns Against Buying Links

Regarding buying links: what we do is try to tackle things algorithmically, but we also try things that are scalable and robust, so when I refer to both the algorithm and the user, both actually have to go well. So it is more along the lines of giving people a heads-up: yes, we do consider buying links to be outside our guidelines, and I am just notifying people that we may take stronger action against sites in the future. If people want to do that, they always have the right; if you are the webmaster, it is your site, and you can do whatever you want on your site. I totally support that idea, but we as a search engine think about and decide what we feel is best for returning a high-quality index. If people want to cooperate with Google and users, and try to do things in a way that is good for users, good for them, and good for the search engines, that’s fantastic. And we will try to return the best results we can.

Trust Rank Explained by Matt Cutts – mini transcript

Hello Matt: We were talking about TrustRank, and you said something is going on with the trademark; I couldn’t quite follow, so can you tell us something more about it?

Matt Cutts: Yes, let me talk about that a little bit. What is TrustRank? Everybody is curious about that; it’s kind of a nice thing to ask, because everybody has a vague view of it. It turns out there were some people at Yahoo, Pedersen and some others, who wrote a paper about something called TrustRank. What it does is try to treat reputation like physical mass: how does it move around the web, and what physical properties does trust have? It’s really interesting stuff, but it’s completely separate from Google.

A couple of years ago, at the exact same time, Google was working on an anti-phishing filter, and as part of that we needed to come up with a name, so they filed for a trademark, and I think they used the name TrustRank. It was really a coincidence: Yahoo had TrustRank as a search research project, and we had TrustRank as a trademark. So everybody talks about TrustRank, TrustRank, and if you ask five different people, they will have five different opinions about exactly what TrustRank is.

Matt Cutts Discusses Webmaster Tools – Matt Cutts video transcript


I am up in the Kirkland office today, up here for a little bit of offsite planning, and they said, why don’t we throw together a video in ten minutes or less? So we said, alright, let’s give it a shot. We were thinking about some of the common things you do with the webmaster console, and some topics webmasters want to hear about. People want to go to the webmaster console and check their backlinks.

They also like to know if they have any penalties; there are a lot of really good stats in the webmaster console. One thing I had been hearing questions about is: how do I remove my URLs from Google? Why would you want to do this? Well, suppose you run a school and you accidentally left the Social Security numbers of all your students up on the web, or you run a store and left people’s credit card numbers up. Or you run a forum and suddenly it is spammed full of porn by a Ukrainian forum spammer, which happened to a friend of mine recently. Whatever the reason, you want some URLs out of Google instead of getting URLs into Google. Let’s look at some of the possible approaches, some of the different ways you can do it.

What I’ll do is go through each of these and draw a happy face by the one or two I think are especially good at getting content out of Google, or at keeping it from getting into Google in the first place. The first thing a lot of people say is: OK, I just won’t link to the page; it’s a secret page on my server, so Google won’t ever find it, and I don’t have to worry about it showing up in the search engines. This is not a great approach, and I’ll give you a very simple reason why. We very often see someone surf from a page to another web server, and that causes the browser to send a Referer header in the HTTP request, so the URL of the supposedly secret page shows up in the other web server’s logs. If that other web server publishes its top referrers, maybe as clickable hyperlinks, then Google can crawl that other web server and find a link to your so-called secret page. So it’s very weak to say “you know what, I just won’t link to it, I’ll keep it a secret, and no one will ever find out about it.” For whatever reason, somebody will link to that page or refer from that page, maybe accidentally, and if there is a link on the web to that page, there is a reasonable chance that we might find it. So I don’t recommend this to anyone; it is a relatively weak approach. Another thing you can do is use something called .htaccess.
This is a very simple file that lets you do simple things like redirect from one URL to another; the thing I am specifically talking about is that it can password-protect a subdirectory, or even your entire site. Now, I don’t think we provide a .htaccess tool in the webmaster tools, but that’s OK: there are a lot of them out on the web, and if you do a simple search like “.htaccess tool” or “wizard” or something like that, you will find one that will, say, password-protect a directory. You can tell it the directory, it will generate the file for you, and you can just copy and paste that onto your website.
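As a rough sketch (the file path and realm name here are placeholders, not from the video), the .htaccess a generator produces for password-protecting a directory typically pairs HTTP Basic authentication with a separate .htpasswd file holding the username and password hash:

```apache
# Hypothetical example: password-protect this directory with HTTP Basic auth.
# /home/example/.htpasswd is a placeholder path to the generated password file.
AuthType Basic
AuthName "Restricted area"
AuthUserFile /home/example/.htpasswd
Require valid-user
```

Because the password challenge happens before any content is served, Googlebot (or anyone without credentials) never sees the protected pages at all.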

So why is this strong, and why am I going to draw a happy face here? Well, you’ve got a password on that directory. Googlebot is not going to guess that password, so we are not going to crawl that directory at all, and if we can’t get to it, it will never show up in our index. This is very strong, very robust, and efficient for the search engine, because someone has to know the password to get into that directory. So this is one of the two ways I really, really recommend. It is a preventive measure: if content already got crawled, it was already exposed on your site, but if you plan in advance and you know what the sensitive areas are going to be, just put a password on there and it will work really well.

OK, here is another way, one that a lot of people know about, called robots.txt. This one has been around for over a decade, since at least 1996, and essentially it is like an electronic no-trespassing sign: it says, here are areas of your site that Google or other search engines are not allowed to crawl. We do provide a robots.txt tool in the webmaster console, so you can create one, test out URLs, and see if Googlebot is allowed to get to them. You can test the different variants of Googlebot, like whether Googlebot-Image is allowed to get to a URL, and you can take new robots.txt files for a test drive: “how about I try this for my robots.txt; could you crawl this URL, or could you crawl this URL?” You can just try it out and make sure it works. That’s nice, because otherwise you can shoot yourself in the foot: say you make a robots.txt file with a syntax error that keeps everybody in or keeps everybody out; that is going to cause a problem. So I recommend you take that tool for a test drive, see what you like, and then put it live.

Now, robots.txt is kind of interesting: different search engines have different policies on uncrawled URLs. I’ll give you a very simple example. Way, way back in the day, sites like Ebay.com and Nytimes.com didn’t want anyone to crawl their site, so they had a robots.txt file that said:

User-agent: *
Disallow: /   ( disallow everybody )

This does not allow any search engine to crawl, if it is a well-behaved search engine, and that is kind of problematic: you are a search engine, somebody types in Ebay, and you cannot return Ebay.com; it looks kind of dumb. So what we decided, and what our policy still is, is that we will not crawl the page, but we may show an uncrawled reference, and sometimes we can make it look pretty good. For example, if there is an entry for nytimes.com in the Open Directory Project (ODP), we can show that snippet for nytimes.com as an uncrawled reference, which is good for users, even though we were not allowed to crawl it and in fact did not crawl it. So robots.txt prevents crawling, but it won’t completely prevent that URL from showing up in Google; there are other ways to do that. Let’s move on to the NOINDEX meta tag. What that says, for Google at least, is: don’t show this page at all in the search results. If we find noindex, we completely drop the page from Google’s search results; we will still crawl it, but we won’t show it if somebody does a search query that matches the page. So it’s pretty powerful, works very well, and is very simple to understand. There are a couple of complicating factors, though. Yahoo and Microsoft, even if you use the noindex meta tag, can show a reference to that page; they won’t return the full snippet and such, but you might see a link to it. And we do see some people having problems with it: say you are a webmaster and you put a noindex meta tag on your site while shifting things around in development; you might forget and never take that noindex meta tag down. A very simple example: the Hungarian version of the BMW site, I think, has done this, and there is a musician (a harpist) you have probably heard of, who is pretty popular, whose site still has a noindex meta tag. If you are the webmaster of that site, we would love you to take it down.
Various people at Google have said that maybe we should not show a snippet of the URL, but show a reference to the URL. There is one other corner case with noindex: we can only abide by the meta tag if we have crawled the page. If we haven’t crawled the page, we haven’t seen the meta tag, and we don’t know it’s there. So in theory it is possible that you link to a page, we don’t get a chance to crawl it, we don’t see the noindex, and we don’t drop it out completely. So there are a couple of cases where at least a reference will show up in Google, and pretty much Yahoo and Microsoft will always show a reference to the page even if you use the noindex meta tag.
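For reference, the noindex meta tag Matt is describing is a single line placed in the page’s head; a minimal sketch:

```html
<!-- Keeps this page out of Google's search results.
     The page may still be crawled; the tag is honored once it is seen. -->
<head>
  <meta name="robots" content="noindex">
</head>
```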

Here is another approach you can use: the nofollow attribute, which can be added to individual links. This is another relatively weak approach, because inevitably, say there are 20 links to a page and you put a nofollow on all of them. Maybe it’s a sign-in page; maybe you are expedia.com and you want to add a nofollow on links to “my itineraries,” which makes perfect sense: why would you want Googlebot to crawl into itineraries, since that is a personalized thing? But inevitably somebody else links to that page, or you miss a link, so that not every single link carries the nofollow. Let me draw a very simple example: suppose we have a page A with a nofollow link to page B.
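A nofollowed link is just an ordinary anchor with rel="nofollow" added; the destination URL here is a placeholder, not an actual Expedia URL:

```html
<!-- Hypothetical example: a link Googlebot should not follow -->
<a href="https://www.example.com/my-itineraries" rel="nofollow">My itineraries</a>
```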

We will drop that link out of our link graph completely, so we won’t discover page B because of that link. But now say there is another guy with a page C who wants to link to page B: we might actually follow that link and eventually end up indexing page B. You can try to make sure every link to a page is nofollowed, but sometimes it is hard to verify that every single link is nofollowed correctly. So, like NOINDEX, this has some weird corner cases: a page can very easily get crawled because not every link to it is nofollowed, or, in the noindex case, we can get to the page, end up crawling it, and only later see the noindex tag. So let’s move on to another powerful method, one I tend to use whenever a forum gets hit by a porn spammer: the URL removal tool. .htaccess is great as a preventive measure: you put a password on it, no one can guess it, no search engines are going to get in there, and it won’t get indexed. But if you did let the search engines in before and you want to take content down later, you’ve got the URL removal tool. We have offered the URL removal tool for at least five years, probably more. For a long time it sat on pages like services.google.com, and it is completely self-service, running 24/7, but just recently the webmaster console team integrated the URL removal tool into the webmaster console. It is much, much simpler to use, and the UI is much better. It will remove the URL for six months, and if that was a mistake, say you removed your entire domain when you didn’t need to, you used to have to email Google user support saying “hey, I didn’t mean to remove my entire site, can you revoke that,” and someone at Google had to do it. Now you can do it yourself; it is powerful and accessible in the webmaster console.
Any time, you can go into the webmaster console, say “hey, I didn’t mean to remove my entire site,” remove that request, and the request gets revoked very quickly. To use the webmaster console, it is not that hard to prove that you are the owner of the site: you just make a page at the root of the site, a little signature in a text file, to say “yep, this is my site.” Once you prove the domain is yours, you get a lot more stats, plus this wonderful little URL removal tool. It gives you a very nice level of granularity: you can remove a whole domain, you can remove a subdirectory, and you can even remove individual URLs, and you can see the status of every URL you have requested to be removed. Initially a request will show a status of pending, and later it will show that the URL removal has been processed. You can also revoke a removal: say the credit card numbers, Social Security numbers, or whatever sensitive data you had there has been removed, and you want to revoke the URL removal from Google’s index; in other words, it is safe to crawl and index again. So, among all the ways to remove URLs, or to keep URLs from showing up in Google, there are a lot of different options. Some of them are fairly strong, like robots.txt and noindex, but they have these weird corner cases where we might show a reference to the URL in various situations. The ones I definitely recommend are .htaccess, which will prevent the search engines and people from getting in in the first place, and, for Google, the URL removal tool: if URLs got crawled that you don’t want to show up in Google’s index, you can still get them out, and get them out relatively quickly.

Thanks very much. Hope that was helpful.

Matt Cutts.

Why we prepared this Video transcript?

We know this video is more than a year old, but there are still people who have questions about their site and want to hear from a search engine expert. There are also millions of non-English speakers who want to know what is in this video, and a transcript is something that can easily be translated into other languages. We also know there are people with hearing disabilities who browse our site; this is a friendly version for them, where they can read and understand what is in the video.

This transcript is copyright – Search Engine Genie.

Feel free to translate them, but make sure proper credit is given to Search Engine Genie.


Supplemental Results – Matt Cutts video transcript

OK, we’ve got some supplemental results questions. David writes in and says: “Matt, should I be worried about this? site:table1.com returns 10,000 results; site:table1.com -intitle:by returns 100,000 results, all supplemental.” David, in general I wouldn’t worry about this, and I want to explain the concept of the beaten path.

If there is a problem with a one-word search in Google, that is a big deal; if it’s a 20-word search, that is obviously less of a big deal, because it often has less impact. The supplemental results team takes reports very seriously and acts very quickly on them, but in general, anything in the supplemental results is further off the beaten path than our main web results. And once you start getting into negation, negation with a special operator like -intitle, you are pretty far off the main path, and you are talking about result estimates: not the actual web results, but estimates of the number of results. The good news is there are a couple of things that can help make the result estimates more accurate; I know of at least two things in the infrastructure that can influence this.

One is deliberately trying to make the site: result estimates more accurate.

The other is a change in the infrastructure to improve our raw quality.

But a side benefit is that it makes the estimated number of results more accurate where the supplemental results are involved. So there are at least a couple of changes that might make things more accurate, but in general, once you get really far from the beaten path, with -intitle and all that stuff, especially for supplemental results, I wouldn’t worry that much about the result estimates. Historically we haven’t worried too much, since not many people were interested, but we do hear more people saying “yes, I care about this,” so we need to put more effort into it.

Erin writes in and says: “I have a question on redirects. I have one or more pages that moved across various websites; I use classic ASP,” and he shows how he returns a 301 response. He says these redirects have been set up for quite a while, and if he runs a spider on them, it reads the redirects fine. This is probably an instance where you would see this happen in the supplemental results, so here is how things work: there is a main web results Googlebot and a supplemental results Googlebot, so the next time the supplemental results Googlebot visits that page and sees the 301, it will reindex accordingly and things will refresh. Historically, the supplemental results have largely been spidered data that is not refreshed as often as the normal web results; if you check the cache, anybody can verify that the results and the crawl dates vary. The good news is that the supplemental results are getting fresher and fresher, and effort is being made to keep them quite fresh.
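For context, a 301 redirect in classic ASP (the setup Erin describes) is usually issued along these lines; the target URL is a placeholder, not Erin’s actual site:

```asp
<%
' Hypothetical sketch of a classic ASP permanent redirect
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.example.com/new-page/"
Response.End
%>
```

Any spider, including Googlebot, that requests the old URL receives the 301 status and the new Location, and is expected to update its index accordingly on a later visit.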

For example, Chris writes: “I’d like to know more about the supplemental results. It seems that while I was on vacation my sites got put there; I have one site that had a PageRank of 6, and it has been in the supplemental results since about May.” So, as I mentioned, there is new infrastructure for our supplemental results; I mentioned that in a blog post too, I think in the indexing timeline, and I don’t know how many people noticed it. As we refresh our supplemental results and start to use the new indexing infrastructure there, the net effect is that the data will be a lot fresher. I wouldn’t be surprised if I have some URLs in the supplemental results myself, and I wouldn’t worry about it that much. Over the course of the summer, the supplemental results team will look at all the reports they receive, especially things off the beaten path, like site: queries with operators that return strange estimates, and they will be working on making those return the sort of results everybody naturally expects. So stay tuned on supplemental results: they are already a lot fresher and a lot more comprehensive than they were, and I think that is just going to keep improving.


Optimize for Search Engines or Users? – Matt Cutts video transcript

By the way, I showed my disclaimer to somebody on the Google Video team, and he said, “Matt, you look like you’ve been kidnapped,” so maybe I should have some Rocketboom world map or something like that back there. You guys worry more about the information than about how pretty it looks, I’m guessing. Alright, Todder writes in: “My simple question is this: which do you find more important in developing and maintaining a website, search engine optimization or end-user optimization?”

And he says, “I’ll hang up and listen.” Todder, that’s a great question. Both are very important, and I think if you don’t have both, you won’t do as well as you want to. Without search engine optimization, it is harder to be found; without end-user optimization, you don’t get conversions, you don’t get people to stay and really enjoy your site, post in your forum, buy your product, or anything else. So I think you need both. The trick, in my mind, is to try to see the world such that they are the same thing: you want to build your site so that the user’s interest and the search engine’s interest are as aligned as they can be. If you can do that, you are usually in very good shape, because you have compelling content and people who want to visit your site; it will be very easy for your visitors to get around and for search engines to get around, and you don’t have to do any weird tricks, since anything you do for search engines is also shown to users. So I think you need to balance both of them.

Tedsey writes in with a couple of interesting questions: “Can you point us to some spam detection tools? I want to monitor my site to make sure I come out clean, and show that I am valid among the no-good spamming competitors.” Well, if you want to check for spam, there are some tools you can use.

First off, at Google we have a lot of tools to detect and flag spam, but most of them are not available outside of Google. One thing you can look at is Yahoo Site Explorer, which is good: it actually shows backlinks for specific pages or per domain, and I think that could be very handy. There are also tools that show you everything on one IP address. If you are on a virtual host, you are going to share it with a lot of perfectly normal sites, but sometimes there might be a lot of bad spam sites on that IP address, and you could end up looking bad by association; so be careful that you are not automatically considered part of something wrong. As far as checking your own site is concerned, I would definitely hit Google Sitemaps in the webmaster console; that will tell you about crawl errors and other problems we found.

Tedsey’s second question: “What about cleanliness of code, for example W3C validation? Any chance that accessibility problems will leak into the main algorithm?” People have been asking me this for a long time, and my typical answer is that normal people write code with errors; it just happens all the time. Eric, one of the founders of the HTML standards, said 40% of all HTML pages have syntax errors, and there is no way a search engine can remove 40% of its content from its index just because somebody didn’t validate. There is a lot of content, especially hand-made content, that is very high quality but probably doesn’t validate. So if you had asked me a while ago, I would have said no, we don’t have a signal like that in our algorithms, and probably for good reason. That said, T.V. Raman has done work on accessible search, and I’m sure that in the future somebody could look at it as a possible positive signal. In general, it is a great idea to go and get your site validated, but I wouldn’t put that at the top of your list: I would put making compelling content and a great site at the top, and once you have that, you can go back and check whether you have good accessibility as well. You always want good accessibility, but validating and closing off those last few errors usually doesn’t matter a lot for search engines.


Matt Cutts on Duplicate Content and Paid Search – Video Transcript


Yeah, yeah. It was just a bunch of Q&A; it was fun.

Yep, yeah, yeah. Well, that’s something they have been doing a little more: submit sessions. There is a “penalty box” submit session tomorrow, so it’s going to be nice. That’s what people want: a two-way thing, a lot more than “let’s plan your session with a PowerPoint and then some time for questions.” There is a little bit of PowerPoint, but a lot of it is “what do you want the most, what are the features you really need?”

Yeah, it was funny at the end; that’s when I laughed. “OK, you have seen most of it,” so she gave Buffy examples instead: good Willow, bad Willow.

It worked, yeah, it worked. Hmm, hmm. Yep, yeah.

I think it’s interesting; it’s a deep issue, and it’s kind of a tough one too. It is a really good question, claiming your content, but then, we were talking about that a couple of days ago with a bunch of Googlers, and we always have to worry about how it can be gamed: what if somebody innocent doesn’t claim their content, and then a spammer comes along and claims everybody else’s content?

And when you’ve got your crawl frequency, you have to worry about people taking your content in between the times we crawl your pages; that’s a tricky thing. Now, the nice thing with something like blog search is that we get a ping, we get to crawl it, and we can see it right then, so the time frame in blog search is so much faster, and we get a little bit more on ownership. So I think we are hoping to try a lot of different things, but it is a difficult issue.

Yep, right, yep.


Yeah, you know, one thing we have said, which is a pretty good rule, is: if you do syndicate your content, try and make sure people know you are the master source of it. You can do that with a link from the article or a link from the video. At least that way, if you’ve got two copies of the same content, the canonical one, the real one, is likely to have more links, and we can automatically see not only that, but also that the other one has a lot of paid links and stuff like that, which makes the job a little bit easier. So some good rules have emerged: whatever you syndicate, a link back helps break the tie a little bit.
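One simple way to mark the master source, sketched here with placeholder URLs, is to have every syndicated copy carry a link back to the original article:

```html
<!-- Hypothetical example: appended to each syndicated copy of an article -->
<p>Originally published at
  <a href="https://www.example.com/original-article/">example.com</a>.</p>
```

Since the syndicated copies all point at the original, the original accumulates the most links, which is exactly the tie-breaking signal described above.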

Yeah, absolutely. Some people say, “Oh dear, Google, tell me what to do,” and we are not doing that. You’re the webmaster, it’s your site; you do whatever you want on your site, and that’s your right. But this is our index, and here is what we think are best practices, and if you want to do really well in Google, and I think most webmasters do, here are some things you can do. People will always have the right to take on more risk, but we want to tell them that there is a lot of risk involved, so they should think about it before they do it.

Well, I think it’s interesting, because we have said we don’t like it since as early as 2005, but we had not talked about it recently. So even though it wasn’t going to be incredibly popular with SEOs, because SEOs like to have as many tools in their toolboxes as possible, I said it was time to revisit this topic so we could remind people about it. And even though I knew there would be a lot of comments, it’s important to reiterate the stance, and we might be taking stronger action on it in the future, so it’s sort of giving people the heads-up, giving them a little bit of notice. They can choose what they want to do, but they should also think about the possible consequences of what they choose to do.

I think it would be good to make a lot of that clearer. I talked about it during the Q&A: our guidelines are pretty minimal. We want to give people the general idea of what to do, not enumerate every scheme: triangular links, pentagonal links, hexagonal links and stuff like that.

We were saying, how about four-way links? No, that’s against the Google guidelines. Some were like, five-way links? At some point you just want to give the idea and let people infer from it. But I think it would be nice to have a few more details; we have looked at how we can provide a few more scenarios, taking some of what we said on the blog and incorporating it into the webmaster guidelines.

It's possible; it's more that we have it at the back of our minds. For example, within the past few months we have revised our webmaster help in general to stop saying that everything is 100% automatic and that no human has ever touched it, because you really need to leave room for social search. So it is an ongoing process. We talked about search results and how it's not a good practice to have tons of search results pages that don't add much value, so we do go back and update things. For example, we also added guidance about spyware, Trojans, and that sort of stuff. It is kind of a living document: every few months we go back and ask what we need to add and how we can make it better.

Why we prepared this Video transcript?

We know this video is more than a year old, but there are still people who have questions about their sites and want to hear from a search engine expert. Also, there are millions of non-English speakers who want to know what's in this video, and a transcript can easily be translated into other languages. We also know there are people with hearing disabilities who browse our site; this is a friendly version for them, where they can read and understand what's in the video.

This transcript is copyright Search Engine Genie.

Feel free to translate it, but make sure proper credit is given to Search Engine Genie.


Google ( GOOG ) to Announce First Quarter 2008 Financial Results

MOUNTAIN VIEW, Calif. – April 7, 2008 – Google Inc. (NASDAQ:GOOG) today announced that it will hold its quarterly conference call to discuss first quarter 2008 financial results on Thursday, April 17, 2008 at 1:30 p.m. Pacific Time (4:30 p.m. Eastern Time).
The live webcast of Google’s earnings conference call can be accessed at http://investor.google.com/webcast. The webcast version of the conference call will be available through the same link following the conference call.
About Google Inc. Google’s innovative search technologies connect millions of people around the world with information every day. Founded in 1998 by Stanford Ph.D. students Larry Page and Sergey Brin, Google today is a top web property in all major global markets. Google’s targeted advertising program provides businesses of all sizes with measurable results, while enhancing the overall web experience for users. Google is headquartered in Silicon Valley with offices throughout the Americas, Europe and Asia. For more information, visit www.google.com.

The much-awaited GOOG first quarter financial results are due to be released on April 17th. This is big news given the current climate, in which many companies are reporting declining earnings. Google, one of the largest companies in the world, needs to prove that it is still a very popular and profitable company. Google's share price is already hovering around $450, down from $750. If they report lower-than-expected earnings, their shares are bound to tumble much further.

Let's wait and see how it goes.

PageRank 10 link claim – a fake claim in an AdWords advertisement

I came across a website advertising in Google AdWords, claiming to provide a backlink from a PageRank 10 site. Wow, I just can't believe my eyes. We at Search Engine Genie maintain the most up-to-date, comprehensive list of PageRank 10 sites, refreshed whenever there is a PageRank update. Recently the number of PageRank 10 sites has dropped significantly: at one point there were about 20 of them, but now we hardly see 10 to 15 at any PageRank update.

That said, I am surprised to see a site claiming to offer a backlink from a PageRank 10 website. Check out our list of PR 10 sites here: http://www.searchenginegenie.com/pagerank-10-sites.htm

As you can see, these are the sites that are currently PR 10:

http://www.adobe.com/
http://www.w3.org/
http://www.macromedia.com/
http://www.energy.gov/
http://www.nasa.gov/
http://www.google.com/
http://www.nsf.gov/
http://www.whitehouse.gov/
http://www.real.com/
http://www.usa.gov/index.shtml
http://www.doe.gov/

I don't think any of the above sites are willing to sell text links on their websites. The US government portal, the Department of Energy, the National Science Foundation, the White House, the RealPlayer site, Adobe – none of them need to make money selling text links. So you can figure out for yourself what these advertisers are up to.

Search Engine Genie,

Request a Free SEO Quote