Google

Google value tumbles on US financial crisis

GOOG's share price has been one of the worst hit of all top US brands in the recent financial crisis. Google relies on people spending online for its advertising revenue, so the global turmoil is eating into Google's value: the stock has dropped to almost half of its all-time high of $740.

What Google uses to crawl websites – is it something similar to Chrome?

Is Google using Chrome to crawl websites?

An interesting thread on WebmasterWorld.com:

webmasterworld.com/google/3760236.htm

Thought I would drop in to report some very interesting Google activity I've been observing today.
I have a "who's online" script running and reporting visitors in real time as they browse the site. The script uses JavaScript to report the referrer and the currently viewed page.
One of the visitors today was from 66.249.72.180 [google.com], pulling approximately 200 pages in a 15-20 minute window and invoking the on-page tracking JavaScript, sending referring-page and current-URL information (it had to execute JavaScript just like a browser would to do that). In other words, it was acting very much like a full-on browser.
Maybe this is already old news to some, or maybe I missed previous topics discussing it. Anyway, it is the first time I have seen this in real time. A very interesting crawl development.
Anyone else noticed this?
P.S. Page-to-page browsing was occurring at a very fast rate; it could not be human.
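
For readers curious what such a tracker looks like, here is a minimal sketch of an on-page "who's online" beacon of the kind described in the post. The endpoint path and payload fields are assumptions for illustration, not the poster's actual script; the point is that any client which executes JavaScript – a normal browser, or a crawler behaving like one – fires it on every page view, which is exactly the behaviour reported above.

// Minimal sketch (hypothetical) of an on-page "who's online" beacon.
// It reports the referrer and the currently viewed page, so any client
// that executes JavaScript -- a browser, or a crawler acting like one --
// shows up in the real-time visitor log just as described above.
function reportPageView(endpoint: string): void {
  const params = new URLSearchParams({
    page: window.location.href,      // currently viewed page
    referrer: document.referrer,     // referring page (empty for direct visits)
    t: String(Date.now()),
  });
  // Classic image-beacon technique: the hit is sent as a tiny GET request.
  new Image().src = endpoint + "?" + params.toString();
}

reportPageView("/whos-online/track"); // endpoint path is made up for the example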

Google to Announce Third Quarter 2008 Financial Results

MOUNTAIN VIEW, Calif. (October 6, 2008) — Google Inc. (NASDAQ: GOOG) today announced that it will hold its quarterly conference call to discuss third quarter 2008 financial results on Thursday, October 16, 2008 at 1:30 p.m. Pacific Time (4:30 p.m. Eastern Time).
The live webcast of Google’s earnings conference call can be accessed at investor.google.com/webcast. The webcast version of the conference call will be available through the same link following the conference call.

Google blog search gets a new look

Google Blog Search, a search engine that monitors blogs, now gets a new look.

According to the official Google blog:

“Did you know that millions of bloggers around the world write new posts each week? If you’re like me, you probably read only a tiny fraction of these in Google Reader. What’s everybody else writing about? Our Blog Search team thought this was an interesting enough question to look into. What we found was a massive mix: entertaining items about celebrities, personal perspectives on political figures, cutting-edge (and sometimes unverified) news stories, and a range of niche topics often ignored by the mainstream media.

Today, we’re pleased to launch a new homepage for Google Blog Search so that you too can browse and discover the most interesting stories in the blogosphere. Adapting some of the technology pioneered by Google News, we’re now showing categories on the left side of the website and organizing the blog posts within those categories into clusters, which are groupings of posts about the same story or event. Grouping them in clusters lets you see the best posts on a story or get a variety of perspectives.

When you look within a cluster, you’ll find a collection of the most interesting and recent posts on the topic, along with a timeline graph that shows you how the story is gaining momentum in the blogosphere. In this example, the green “64 blogs” link takes you inside the cluster and shows you all the blog posts for a story.

We’ve had a great time building the new homepage and we hope you enjoy using it.”

Trust and authority – two different things in search engines

A WebmasterWorld thread analyzes the difference between trust and authority in Google. It's a good thread; please join the discussion here: www.webmasterworld.com/google/3753332.htm

“While studying Google’s recently granted Historical Data patent (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=7,346,839.PN.&OS=pn/7,346,839&RS=PN/7,346,839), I noticed that the language helps to separate two concepts that we tend to use casually at times: trust and authority.
…links may be weighted based on how much the documents containing the links are trusted (e.g., government documents can be given high trust). Links may also, or alternatively, be weighted based on how authoritative the documents containing the links are (e.g., authoritative documents may be determined in a manner similar to that described in U.S. Pat. No. 6,285,999: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=6,285,999.PN.&OS=pn/6,285,999&RS=PN/6,285,999).
Clearly, Google has two different metrics going on. As you can see from the reference to Larry Page’s original patent, authority in Google’s terminology comes from backlinks. When lots of other websites link to your website, you become more and more of an authority.
But that isn’t to say you’ve got trust. So what exactly is trust? Here’s an interesting section from the same patent:
…search engine 125 may monitor one or a combination of the following factors: (1) the extent to and rate at which advertisements are presented or updated by a given document over time; (2) the quality of the advertisers (e.g., a document whose advertisements refer/link to documents known to search engine 125 over time to have relatively high traffic and trust, such as amazon.com, may be given relatively more weight than those documents whose advertisements refer to low traffic/untrustworthy documents, such as a pornographic site);
So we’ve got two references here, government documents and high traffic! From other reading, I’m pretty sure that trust calculations work like this – at least in part. Google starts with a hand picked “seed list” of trusted domains. Then trust calculations can be made that flow from those domains through their links.
If a website has a direct link from a trust-seed document, that’s the next best situation to being chosen as a seed document. Lots of trust flows from that link.
If a document is two clicks away from a seed document, that's pretty good and a decent amount of trust flows through – and so on. This is the essence of “trustrank” – a concept described in this paper by Stanford University and three Yahoo researchers: http://dbpubs.stanford.edu:8090/pub/2004-17
This approach to calculating trust has been refined by the original authors to include “negative seeds” – that is, sites that are known to exist for spamming purposes. The measurements are intended to identify artificially inflated PageRank scores. See this PDF document from Stanford, Link Spam Detection: http://dbpubs.stanford.edu:8090/pub/showDoc.Fulltext?lang=en&doc=2005-33&format=pdf&compression=&name=2005-33.pdf
To what degree Google follows this exact approach for calculating trust is unknown, but it’s a good bet that they share the same basic ideas. “
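
To make the seed-based idea above a little more concrete, here is a rough sketch of TrustRank-style propagation: trust starts at a hand-picked seed set and decays as it flows along outbound links, so pages one or two clicks from a seed end up with more trust than pages further away. The link graph, seed list, damping factor and iteration count below are illustrative assumptions, not Google's actual calculation.

// Rough sketch of TrustRank-style propagation (illustrative only).
// Trust starts at hand-picked seed pages and is split across each page's
// outlinks, decaying by a damping factor at every hop.
type LinkGraph = Map<string, string[]>; // page -> pages it links to (every page appears as a key)

function propagateTrust(
  graph: LinkGraph,
  seeds: string[],
  damping = 0.85,
  iterations = 20,
): Map<string, number> {
  const pages = [...graph.keys()];
  const seedScore = 1 / seeds.length;

  // Seed pages start with full trust; everything else starts at zero.
  let trust = new Map<string, number>();
  for (const p of pages) trust.set(p, seeds.includes(p) ? seedScore : 0);

  for (let i = 0; i < iterations; i++) {
    const next = new Map<string, number>();
    for (const p of pages) next.set(p, seeds.includes(p) ? (1 - damping) * seedScore : 0);

    for (const [page, outlinks] of graph) {
      const share = (trust.get(page) ?? 0) / Math.max(outlinks.length, 1);
      for (const target of outlinks) {
        next.set(target, (next.get(target) ?? 0) + damping * share);
      }
    }
    trust = next;
  }
  return trust; // pages closer to the seed set end up with higher trust
}

// Example: a.gov is the seed; b.com (one click away) ends up with more trust than c.com (two clicks away).
const linkGraph: LinkGraph = new Map([
  ["a.gov", ["b.com"]],
  ["b.com", ["c.com"]],
  ["c.com", []],
]);
console.log(propagateTrust(linkGraph, ["a.gov"]));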

Top 10 brand pyramid – 2008 top brands

My designer made a nice pyramid of the 2008 top 10 brands. It's a cool image; I hope you like it.

How Google evaluates search – a Google engineer talks

How Google evaluates search:

Scott of Google has given a good insight into how Google evaluates search results. Take a look at this to get a good idea of how Google handles search evaluation:

“Evaluating search is difficult for several reasons.

  • First, understanding what a user really wants when they type a query — the query’s “intent” — can be very difficult. For highly navigational queries like [ebay] or [orbitz], we can guess that most users want to navigate to the respective sites. But how about [olympics]? Does the user want news, medal counts from the recent Beijing games, the IOC’s homepage, historical information about the games, … ? This same exact question, of course, is faced by our ranking and search UI teams. Evaluation is the other side of that coin.
  • Second, comparing the quality of search engines (whether Google versus our competitors, Google versus Google a month ago, or Google versus Google plus the “letter T” hack) is never black and white. It’s essentially impossible to make a change that is 100% positive in all situations; with any algorithmic change you make to search, many searches will get better and some will get worse.
  • Third, there are several dimensions to “good” results. Traditional search evaluation has focused on the relevance of the results, and of course that is our highest priority as well. But today’s search-engine users expect more than just relevance. Are the results fresh and timely? Are they from authoritative sources? Are they comprehensive? Are they free of spam? Are their titles and snippets descriptive enough? Do they include additional UI elements a user might find helpful for the query (maps, images, query suggestions, etc.)? Our evaluations attempt to cover each of these dimensions where appropriate.
  • Fourth, evaluating Google search quality requires covering an enormous breadth. We cover over a hundred locales (country/language pairs) with in-depth evaluation. Beyond locales, we support search quality teams working on many different kinds of queries and features. For example, we explicitly measure the quality of Google’s spelling suggestions, universal search results, image and video searches, related query suggestions, stock oneboxes, and many, many more.”

Source: Google Blog: http://googleblog.blogspot.com/2008/09/search-evaluation-at-google.html

Google testing, or AdWords hacked?

When I was recently doing some queries related to our site, I saw a weird AdWords result right at the top of the page.

The second result from the top said "test" and showed the URL as aa.com. I haven't seen this before, and I wonder whether Google is testing on live AdWords results or whether the AdWords results had been hacked. When I clicked on the result it actually went to aa.com, so it mostly looks like some sort of testing.

Did anyone else notice it? Take a look at this screenshot:

Duplication problem with Google – how Google handles duplicates

The official Google Webmaster Central blog has an interesting post on how Google handles duplicates.
Susan of the Webmaster Central team states:

“1. When we detect duplicate content, such as through variations caused by URL parameters, we group the duplicate URLs into one cluster.
2. We select what we think is the “best” URL to represent the cluster in search results.
3. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL.
Here’s how this could affect you as a webmaster:

In step 2, Google’s idea of what the “best” URL is might not be the same as your idea. If you want to have control over whether www.example.com/skates.asp?color=black&brand=riedell or www.example.com/skates.asp?brand=riedell&color=black gets shown in our search results, you may want to take action to mitigate your duplication. One way of letting us know which URL you prefer is by including the preferred URL in your Sitemap.
In step 3, if we aren’t able to detect all the duplicates of a particular page, we won’t be able to consolidate all of their properties. This may dilute the strength of that content’s ranking signals by splitting them across multiple URLs.

In most cases Google does a good job of handling this type of duplication. However, you may also want to consider content that’s being duplicated across domains. In particular, deciding to build a site whose purpose inherently involves content duplication is something you should think twice about if your business model is going to rely on search traffic, unless you can add a lot of additional value for users. For example, we sometimes hear from Amazon.com affiliates who are having a hard time ranking for content that originates solely from Amazon. Is this because Google wants to stop them from trying to sell Everyone Poops? No; it’s because how the heck are they going to outrank Amazon if they’re providing the exact same listing? Amazon has a lot of online business authority (most likely more than a typical Amazon affiliate site does), and the average Google search user probably wants the original information on Amazon, unless the affiliate site has added a significant amount of additional value.

Lastly, consider the effect that duplication can have on your site’s bandwidth. Duplicated content can lead to inefficient crawling: when Googlebot discovers ten URLs on your site, it has to crawl each of those URLs before it knows whether they contain the same content (and thus before we can group them as described above). The more time and resources that Googlebot spends crawling duplicate content across multiple URLs, the less time it has to get to the rest of your content.

In summary: Having duplicate content can affect your site in a variety of ways; but unless you’ve been duplicating deliberately, it’s unlikely that one of those ways will be a penalty. This means that:

You typically don’t need to submit a reconsideration request when you’re cleaning up innocently duplicated content.
If you’re a webmaster of beginner-to-intermediate savviness, you probably don’t need to put too much energy into worrying about duplicate content, since most search engines have ways of handling it.
You can help your fellow webmasters by not perpetuating the myth of duplicate content penalties! The remedies for duplicate content are entirely within your control.”
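
As a small illustration of steps 1-3 above, here is a sketch of how URLs that differ only in query-parameter order might be grouped into one cluster and have a signal such as link counts consolidated onto a representative URL. The normalization rule and the link numbers are assumptions made up for the example; Google's real clustering looks at far more than parameter order.

// Sketch: group parameter-order duplicates under one canonical key and
// consolidate link counts onto a representative URL (illustrative only).
function canonicalKey(url: string): string {
  const u = new URL(url);
  u.searchParams.sort(); // ?color=black&brand=riedell and ?brand=riedell&color=black become identical
  return u.origin + u.pathname + "?" + u.searchParams.toString();
}

interface Cluster {
  representative: string; // the "best" URL chosen to stand for the group
  totalLinks: number;     // consolidated link popularity of all duplicates
}

function clusterDuplicates(linkCounts: Map<string, number>): Map<string, Cluster> {
  const clusters = new Map<string, Cluster>();
  for (const [url, links] of linkCounts) {
    const key = canonicalKey(url);
    const existing = clusters.get(key);
    if (!existing) {
      clusters.set(key, { representative: url, totalLinks: links });
    } else {
      existing.totalLinks += links; // step 3: consolidate properties
      // Step 2: here the most-linked variant is treated as the "best" URL.
      if (links > (linkCounts.get(existing.representative) ?? 0)) {
        existing.representative = url;
      }
    }
  }
  return clusters;
}

// The two skate URLs from the post collapse into a single cluster.
console.log(clusterDuplicates(new Map([
  ["http://www.example.com/skates.asp?color=black&brand=riedell", 12],
  ["http://www.example.com/skates.asp?brand=riedell&color=black", 3],
])));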

How To Write Winning Meta Titles: write great titles to keep yours the best-converting

How To Write Winning Meta Titles:

There are many tips for writing good meta titles. A meta title is the title, or name, of your page. The title is shown by the browser, usually at the top of the window, and tells readers what page they are on. Meta titles are "read" by search engine robots and viewed by site visitors.

The meta title is vital for helping the page rank high in search engine results and should be written to cater to search engine robots rather than site visitors. Meta titles should still make sense to the reader, but the wording should be driven by keyword search popularity and by relevance to the rest of the web page, its other meta data, and its content.

The four worst mistakes you can make when creating a meta title for your page are:

Not creating any title at all;
Giving your page the same name as your website;
Giving all your pages the same, or very similar, names; and
Naming the page without tying it to your content and other meta data.

Be sure to use keyword selector tools and keyword density tools to help you write your meta title.
Examples of "Bad" Meta Titles:
The following example meta title is too vague and does not give either robots or site readers enough information:
Flowers
Examples of Good Meta Titles:
· Flowers – How to Plant Flowers
· Population Statistics – 2008 United Kingdom Population Statistics
· Dessert Recipes – Best Pudding Recipes
· Tax Tips – Tips on How to Pay Less Tax
The above title tags accomplish three things:

· They help robots understand what is most important about the content on the page by repeating part of the keyword phrases found in the article titles and content;
· They make sense to people reading them; and
· By using plurals where prudent, they match more possible keyword searches (both singular and plural forms of the major keywords).

How Long Should a Meta Title Be?
Normally, a title should be long enough to be clear but short enough to avoid being truncated. Truncation happens when a title is too long: search engine robots will only read so many characters and then move on. Different search engines read different numbers of characters, but if you keep your titles under 150 characters you will keep the most important search engine robots happy.
Tips on How to Create Meta Titles
When creating Meta titles:
Do repeat your keyword phrases;
Do tie these phrases to your content and other meta data;
Do use plurals where possible;
Do limit the use of punctuation; and
Do use initial caps throughout the title.
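
As a quick way to apply the length guideline and the do's above, here is a small sketch that flags common title problems. The 150-character limit follows the guideline given earlier; the function name and the individual checks are illustrative, not any standard tool.

// Small sketch: flag common meta-title problems per the guidelines above.
// The 150-character limit and the checks themselves are illustrative.
function checkMetaTitle(title: string, keywordPhrase: string, maxChars = 150): string[] {
  const problems: string[] = [];
  const trimmed = title.trim();

  if (trimmed.length === 0) problems.push("no title at all");
  if (trimmed.length > maxChars) problems.push("likely to be truncated (" + trimmed.length + " chars)");
  if (!trimmed.toLowerCase().includes(keywordPhrase.toLowerCase())) {
    problems.push("does not repeat the keyword phrase");
  }
  const punctuationCount = (trimmed.match(/[!?;:,]/g) ?? []).length;
  if (punctuationCount > 2) problems.push("too much punctuation");

  return problems; // an empty array means the title passes these basic checks
}

console.log(checkMetaTitle("Flowers – How to Plant Flowers", "plant flowers")); // []
console.log(checkMetaTitle("Flowers", "plant flowers")); // flags the missing keyword phrase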
