Gmail down - Cannot access Gmail now

This is the second time Gmail has gone down in the past 15 days. I cannot access Gmail right now, and even my friends in India and the USA cannot access it. I am surprised because Gmail is now the No. 1 email provider, and I wonder why Google is not taking this seriously.



It has been down for more than 10 minutes now, which is a big surprise. Considering the millions of users around the world, I wonder how much work and email time they lose. I hope Google fixes this soon. Is this some sort of DDoS attack on Google?

Google

Error

Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.

Labels: ,


Google gets all the bashing but why?

From what I have seen in forums and blogs, Google gets so much bashing for things it does to defend its algorithm. Why do people do that? Don't they realize they wouldn't be doing SEO to rank their sites in Google if Google never existed?

I have been watching Google ever since I started my online business. Over more than 7 years of major Google updates, almost all of them were aimed at protecting the algorithm and getting rid of spam and sites that use aggressive search engine ranking tactics. Today Google has become a high-quality search engine with good results. If they had not targeted the aggressive search engine optimization crowd, they would not be what they are today.



The hottest topic in today's SEO world is Google's ability to detect and penalize paid links. Whether you buy them or sell them, if you get caught by the Google police you are gone. Once, in a reply to Rebecca's SEOmoz post, Matt Cutts talked about natural links being like very strong tires and paid or other artificial links being like weak tubes/tires that can burst at any time. It's actually true, and from what I have seen, every site that got hit over links had some sort of problem with artificial links.

Personal experience

Our own site had a problem with Google rankings when we created a search engine promotion widget and got lots of backlinks without realizing we were abusing it. We were hit with a ranking filter which kept our site out of the top 10. Did we whine? Well, no. Personally, we were not aware that widget links could hurt a site. We were not deliberately abusing the system; we spent money on our widgets, and the only way to get back our investment was through links. We do that for all our tools and Google never complained about it, but when we redirected the links from the widgets to our homepage, Google's algorithm got angry with our site and reduced our rankings.

What did we do?

We never whined. We made all the widget links optionally nofollow, cleaned up some links to the homepage, removed the homepage link and pointed it to the widget directory page instead, checked for any other potential problems with our website, and submitted a reconsideration request. Within a month we were back in the rankings.

So was Google wrong with our website?

Of course not. Even though we thought widget links, when not abused, would not affect rankings, we still shouldn't have linked to the homepage with keyword anchor text. It was our mistake, and Google has every right to make us regret it in its own way. But Google was nice, in fact very nice: after we rectified our mistake and explained it to them, we got our rankings back. So Google definitely wanted us back in their rankings. Over 4,000 people use our SEO tools (https://www.searchenginegenie.com/seo-tools.htm), and out of that almost 2,000 come from search engines. Google knows that; they know our tools get a lot of traffic from them, and they are happy to send people because people like the tools.

We don't come under link buying / selling category

We never bought a single link to our site; almost all of our links point to our tools and widgets, plus some custom-built links from articles, directories, blogging, etc. We don't buy links, yet we were still hit by a link buying/selling detection algorithm. Was Google wrong in doing this? Of course not. Abusing a widget is the same as buying links. Those links were not editorially given; people linked to us in exchange for our widget. They didn't link to our homepage because they liked our site. I understand, and when everyone in our company understands Google's position, we are fine with anything Google decides. But not everyone takes it that way; I see so much Google bashing out there whenever Google does something to protect itself and its algorithm.

Being an SEO is nothing to be proud of.

Some people think SEO is something great and that they are the best in the world. I'll tell you, from Google's point of view most SEOs are very close to spammers. Not everyone, but most, including places like SEOmoz, which is popular among SEOs and discusses so much link buying and selling. Even Rand Fishkin is an active supporter of Text-Link-Ads, and he also supports buying and selling links for ranking. If this industry promotes so much text link buying and selling for ranking purposes, is Google wrong to defend itself? For most SEOs, yes, Google is wrong. I would call that **** ****. Without Google you would never have existed; who are you to give commands to Google? Google's massive improvement in transparency with webmasters has helped webmasters a lot. But still webmasters and SEOs want more and more. They don't want Google to penalize link buying, selling, and other aggressive and abusive link building tactics. I would say let the SEOs run a search engine; then they would know how difficult it is. Even the so-called Google supporters abuse the search engines when they lose rankings. If you lost your rankings, look at the mistake you made. Rectify your mistakes, fix them, and ask Google to reconsider rather than whining that Google is useless.

Confession from an SEO.

I have been in this industry for more than 7 years. Am I proud to be an SEO? No, never; this industry is hated by so many people, including the search quality engineers themselves. I am passionate about search engines. I like them, I like the miracle algorithm that works behind them, I like all the PhDs. I personally wanted to become a scientist, which never happened. I want to be friends with search engineers, not for SEO benefit but to admire and gain knowledge from the wonderful work they do. I sometimes wonder why I came into this SEO industry. The truth is I came into SEO from a programming background only for the money involved. This industry has much more money in it than programming and web design. People will pour in money if they get good business from search engines. I have seen that in practice in some PPC campaigns our company handles; some big clients spend around $100,000 a month on PPC. Though SEO is not on the same scale, the rewards are still high. But I am always looking for alternate ways, because I am not the bad-guy type who only goes after money. I like to earn money in a way everyone appreciates, not in a way that makes everyone glare at you. To all the SEOs out there: realize the type of work you are doing and please give respect to my lovable search engine. If Google never existed I wouldn't be here running a business in SEO. Love Google and appreciate everything they do, whether it's right or wrong. Everyone appreciates it when Google does something right, and everyone bashes them when they do something harsh to protect their algorithm. Love Google and all its efforts.

My suggestion to all the SEOs and newbies (the so-called SEOs out there): Google is a search engine for people; it's not for you to play with.

Labels: , ,


Is Google penetrating our secret lives?

Many of us are not aware of how addicted we get to Google and its products. I personally use Google Search, Gmail, Google Reader for reading my favorite blogs, Google Images, Product Search, Google Maps, and many more Google products. You could say virtually all of my internet experience revolves around the Google search engine. It's not just because I am a Google addict; it's also because SEO is my livelihood. I spend most of my time on the internet and on Google.

But while doing all this, most of us forget that Google is tracking our every activity using cookies and other technologies. So the more we use Google and its products, the more our privacy is invaded. Is this something to worry about? I suspect a big YES. We all know Google needs to collect information from its visitors to keep its search engine running efficiently, and if any of our sites use Google Analytics, they need to track every detail possible for their Analytics users. Similarly, every product has its own catch; Google always has a reason to watch us through each of its products. If someone were spying on you, would you be happy to let it go? Don't you think you would be disturbed knowing that the more you use Google products, the more they watch you? But that is simply how their business works; you cannot blame them, but you need to be aware that Google knows what you are doing.
The most penetrating and efficient Google tracking tool is the PageRank display on the Google Toolbar. If you enable the PageRank display, then for each and every web page you visit the toolbar has to query a Google datacenter to fetch that page's PageRank. So Google knows each and every page you are visiting. Most of us who are aware of this either turn off the PageRank display or turn off the toolbar itself when surfing for personal or sensitive information. It's not just me who is concerned about Google's reach; there is a lot of discussion on blogs and forums about this issue. Most people conclude by saying that if you are worried about privacy, don't use products that invade your privacy. That is why many corporations don't allow webmail like Gmail, Yahoo Mail, Hotmail, etc.; only corporate email is allowed. Some companies even ban search engine usage to protect privacy. I recently came across an interesting theory in a forum about Google's privacy invasion. A user posts:
"Google's Algorithm has changed to the point where it now tracks every user by IP address, so for example if you are searching for a specific search term or browsing a website running AdSense ads, then it will log the sites you visit or terms you search for and display matching ads to you regardless of what website you visit. I think this is one of the reasons why people often see irrelevant ads on their websites. Google is refining their technique and logging all of our activity individually. I also believe Google has a way of rating the relevance of each users visit, for example Google might pay AdSense account holders for clicks based on criteria around the kinds of sites a person visits prior to arriving at their site.

I will provide a better example; A person that visits Gamespot.com and then clicks on a link to Netflix from Gamespot might result in a publisher earning .10-.20 cents for that click, however a person that has visited moviereviews.com and then went to Gamespot and then clicked the Netflix link might result in a publisher earning .20-.30 cents for the click since the chance of an actual sale is increased. "
Even he agrees it's just a theory, but it looks worrying and at least a little bit plausible. You should also take a look at the very interesting Google Flu Trends:
http://www.google.org/flutrends/
Here Google shows the spread of flu in certain parts of the world by monitoring searches for flu-related keywords arising from those places. This is one example of what Google can do with the data it collects, and I am sure we will see a lot more like it in the future.
What I am telling Google users is: be aware that everything you do on Google is being monitored. Even if you don't have cookies enabled, there are still plenty of ways Google can collect your data. What matters is the awareness required when using the internet and Google.
Good Luck.

Labels:


Is Google's crawler more active on one day compared to another?

A few people report that Google indexes more pages on weekdays than on weekends; it also seems Google's traffic is much higher during the first three weekdays than towards Fridays and weekends.
I do agree with the traffic point; it's obvious that weekdays are much more popular than weekends. People tend to use computers more on weekdays, especially from workplaces. We monitor a lot of websites and the pattern remains the same.

But for pages indexed, I don't buy the argument. If you see more pages indexed on certain weekdays, it could just be a coincidence. From what I have seen, when Google indexes pages it keeps them in its index for a long time. So a page indexed on, say, Monday will still be there on Saturday, and the counts should remain virtually the same as on Monday. In my experience, sometimes a lot of crawling happens on weekends and sometimes on weekdays. I don't see much difference; I think it mostly depends on whoever operates the crawlers.

Labels:


Google's influence on Yahoo

Those of us who have been in search engine optimization for many years know that Yahoo results were once completely powered by Google. Google used to have almost 90% market share, excluding only MSN and the search engines it powered. Where are we today, and do we still see any relationship with Google? We recently had a major controversy in which Yahoo struck a deal with Google to display AdWords ads in its search results, but Microsoft was not happy with it.

First, Yahoo made a deal with Google:
"Yahoo said it had agreed to let Google put search ads on its site in what it called an $800 million annual revenue opportunity that would boost cash flow by $250 million to $450 million in the first 12 months.
Yahoo's ads and Google's would be pitted against each other in an auction style process that could make a deal easier to pass regulatory approval.
"Yahoo is being a reseller of Google whenever it makes sense and that is likely to be a lot of the time given how much more effective Google Web search ads have proven to be," Global Crown Capital analyst Martin Pyykkonen said."

http://www.reuters.com/article/topNews/idUSN1247863820080612?feedType=RSS&feedName=topNews



Then Google decided to dump Yahoo and the rift began:
"The U.S. Justice Department said on Wednesday it had told Google it planned to file a lawsuit to block the deal, under which Google would have placed its more lucrative ads on Yahoo searches.
"Had the companies implemented their arrangement, Yahoo's competition likely would have been blunted immediately with respect to the search pages that Yahoo chose to fill with ads sold by Google rather than its own ads," the government said.
Yahoo regretted Google's decision, saying it was "disappointed that Google has elected to withdraw from the agreement rather than defend it in court."

Labels:


Google Suggest, one of Google's best finds I would say, is ever evolving

When Google first introduced it as a beta, everyone liked it, and later Google moved the Suggest option to regular Google.com search. One thing was lacking, though: Google did not show suggestions after you performed a search from the Google.com homepage. Once you navigated from the homepage into the results page, the suggestions stopped. I personally wanted Suggest to work on both the homepage and the results page. Now Google has made the change and it works in both places.

Similarly, Google has now introduced personalized suggestions, saved from your Web History. All the searches you previously made while logged into Google show up first, before the regular suggestions. They also now provide the ability to remove the personalized search keywords, which is cool I would say.



Also, direct links now appear if you are looking for a specific site. Google's algorithm infers that you are trying to find a website from your partial keyword input and shows you the URL of the site you probably want to reach.

Google never missed the chance to commercialize Google Suggest either. They also show suggestions for sponsored links; after all, they need to impress their shareholders, right?

Labels:


Google webmaster tools new features:

Highlights
  • One-stop Dashboard: We redesigned our dashboard to bring together data you view regularly: Links to your site, Top search queries, Sitemaps, and Crawl errors.
  • More top search queries: You now have up to 100 queries to track for impressions and click through! In addition, we've substantially improved data quality in this area.
  • Sitemap tracking for multiple users: In the past, you were unable to monitor Sitemaps submitted by other users or via mechanisms like robots.txt. Now you can track the status of Sitemaps submitted by other users in addition to yourself.
  • Message subscription: To make sure you never miss an important notification, you can subscribe to Message Center notifications via e-mail. Stay up-to-date without having to log in as frequently.


  • Improved menu and navigation: We reorganized our features into a more logical grouping, making them easier to find and access. More details on changes.
  • Smarter help: Every page displays links to relevant Help Center articles and by the way, we've streamlined our Help Center and made it easier to use.
  • Sites must be verified to access detailed functionality: Since we're providing so much more data, going forward your site must be verified before you can access any features in Webmaster Tools, including features such as Sitemaps, Test Robots.txt and Generate Robots.txt which were previously available for unverified sites. If you submit Sitemaps for unverified sites, you can continue to do so using Sitemap pings or by including the Sitemap location in your robots.txt file (see the sketch after this list).
  • Removal of the enhanced Image Search option: We're always iterating and improving on our services, both by adding new product attributes and removing old ones. With this release, the enhanced Image Search option is no longer a component of Webmaster Tools. The Google Image Labeler will continue to select images from sites regardless of this setting.
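
Regarding the Sitemaps point above, here is a minimal sketch of submitting a Sitemap for an unverified site by pinging Google with its URL. The sitemap URL is a placeholder, and the ping endpoint shown is the one Google documented at the time; a "Sitemap:" line in robots.txt works just as well.

# Minimal sketch: ping Google with a Sitemap URL for an unverified site.
# The sitemap URL below is a placeholder.
import urllib.parse
import urllib.request

sitemap_url = "https://www.example.com/sitemap.xml"
ping = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")
urllib.request.urlopen(ping)

# Alternatively, add this single line to the site's robots.txt:
#   Sitemap: https://www.example.com/sitemap.xml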


Webmaster Tools now has many new features. When you sign in you see a new home page listing all of your sites, along with a message center. For each verified site there is a one-stop dashboard that gives you the highlights from your data. You can reach your favorite features more easily, the navigation is improved, and troubleshooting problems is simpler; additionally, you can now see more of the search queries for which your site appears than ever before. The robots.txt and URL removal tools have been available for some time, but now all the tools are finally together under one tab. Google already sends messages about your site to the Webmaster Tools inbox, and now you can forward those messages to people you know. The team clearly enjoyed redesigning Webmaster Tools, and this is just the beginning; stay tuned for more updates.

Labels: ,


Google competitor Wolfram Alpha launching this month, May 2009:

The long-expected Wolfram Alpha search engine is due to be launched this month. We are waiting for it anxiously, as unfortunately we didn't get the opportunity to test it; however, others did and it looks amazing. I will start with the fact that many have called it the Google killer, but in fact Wolfram Alpha is not a conventional search engine. It is more of a computational knowledge engine based on ideas from Stephen Wolfram. Recently Google launched its public data search, and not even that can be compared to Wolfram Alpha.

Don't think of Wolfram Alpha as a Google killer, though, because frankly Google doesn't really have anything like it, except maybe Google's new public data search, which, while impressive, doesn't look nearly as robust as Wolfram Alpha. (Then again, we'll have to wait and see how well Wolfram Alpha works once it gets into the hands of the public.) Either way, Google will still corner the market on most normal search; we're not always looking for the kind of answers Wolfram Alpha provides when we hit up Google. As for how this editor uses Google and Wikipedia, I'd actually imagine that Wolfram Alpha could be more of a Wikipedia competitor than a Google competitor.
The system, Wolfram Alpha, was developed by Stephen Wolfram (49), a British physicist, and showcased at Harvard University in the U.S. last week. "Revolutionary new web software could put giants such as Google in the shade," the daily claimed. Although the system is still new, it has already attracted massive hype among technology pundits, it added.

"Wolfram Alpha will not only give a straight answer to questions such as 'how high is Mount Everest?' but it will also produce a neat page of related information - all properly sourced - such as geographical location and nearby towns, and other mountains, complete with graphs and charts," it said. "Or ask what the weather was like in London on the day John F. Kennedy was assassinated, it will cross-check and provide the answer."

Labels:


Google Patent to rank personalized pages based on bookmarking:

Think about Google crawling social media profiles and the links shared on them: the profiles you link to from your Google profile, maybe even Gmail and Google Chat; of course the sites you join through Google Friend Connect; and what you share through Google and Google Reader. Once they identify your Twitter linking idiosyncrasies, they could then discover those of your followers and rank documents based on what everyone loves... or loves to spam ;)

And what ultimately distinguishes what you and your followers and friends truly love from what they love to spam is the feature measuring your "linger time." Here are the patent's claims:

1. A computer-implemented method, the method comprising: receiving a search query from a user; receiving a request from the user to personalize a search result; responsive to the search query and the request to personalize the search result, generating a personalized search result by searching a personalized search object; responsive to the search query, generating a general search result by searching a general search object; providing the personalized search result and the general search result for display; selecting an advertisement based at least in part upon the personalized search object; and providing the advertisement for display.

2. The method of claim 1, wherein the personalized search object comprises an article associated with a bookmark.

3. The method of claim 2, wherein an index associated with the bookmark is stored on a server remote from a client with which the bookmark is associated.

4. The method of claim 2, wherein an index associated with the bookmark is stored on a client with which the bookmark is associated wherein searching of the personalized search object is performed by a client-side agent.

5. The method of claim 1, wherein the general search object comprises an index of articles.

6. The method of claim 5, wherein the index comprises an index of articles associated with a global computer network.

7. The method of claim 1, wherein the general search object comprises a plurality of global indices.

8. The method of claim 1, wherein the personalized search object comprises a plurality of bookmarks.

9. The method of claim 1, wherein the personalized search object comprises an annotation.

10. The method of claim 1, wherein the personalized search object comprises a rating.

11. The method of claim 1, further comprising identifying a user cluster based at least in part on the personalized search object and providing to the user a suggestion of another user with which to network based on the user cluster.

12. The method of claim 1, further comprising identifying the personalized search object based at least in part on an implicit measure of the user's interest.

13. The method of claim 12, wherein the implicit measure of the user's interest comprises a history of user accesses.

14. The method of claim 12, wherein the history of user accesses comprises at least one of: a period of linger time, a quantity of repeat visits, and a quantity of click-through.

15. A computer storage medium encoded with a computer program, the computer program comprising instructions that when executed cause a computer to perform operations comprising: receiving a search query from a user; receiving a request from the user to personalize the search result; responsive to the search query and the request to personalize the search result, generating a personalized result by searching a personalized search object; responsive to the search query, generating a general result by searching a general search object; providing the personalized search result and the general search result for display; and providing an advertisement for display on a browser based at least in part on one of the personalized search result and the general search result.

16. The computer storage medium of claim 15, wherein the instructions when executed cause the computer to perform operations further comprising identifying a cluster of users based at least in part on the personalized search object.

17. The computer storage medium of claim 15, wherein the instructions when executed cause the computer to perform operations further comprising identifying the personalized search object based at least in part on an implicit measure of the user's interest.
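
Claims 12 through 14 describe an "implicit measure of the user's interest" built from linger time, repeat visits, and click-throughs. As a rough illustration only, here is a minimal sketch of how such a score might be combined; the field names, weights, and caps are my own assumptions, not anything specified in the patent.

# Rough sketch of claims 12-14: an implicit interest score built from a user's
# access history. Field names, weights, and caps are invented for illustration.
def implicit_interest(history):
    # history: dict with linger_seconds, repeat_visits, click_throughs
    linger = min(history.get("linger_seconds", 0) / 300.0, 1.0)    # cap at 5 minutes
    repeats = min(history.get("repeat_visits", 0) / 10.0, 1.0)     # cap at 10 visits
    clicks = min(history.get("click_throughs", 0) / 20.0, 1.0)     # cap at 20 click-throughs
    return 0.5 * linger + 0.3 * repeats + 0.2 * clicks

bookmarked_article = {"linger_seconds": 240, "repeat_visits": 4, "click_throughs": 6}
print(round(implicit_interest(bookmarked_article), 3))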

Labels:


Google moving to Ajax based result pages:

It seems Google is now moving to Ajax-based result pages. As per the Google Analytics blog, "Starting this week, you may start seeing a new referring URL format for visitors coming from Google search result pages." Up to now, the usual referrer for clicks on search results for the term "flowers" used the old format shown below.

The key difference between these two URLs is that instead of "/search?" the URL now contains "/url?". If you run your own analyses, be sure that you do not depend on the "/search?" portion of the URL to determine whether a visit started with an organic search click.

New parameters as per Google blog:
------- old
https://www.google.com/search
hl=en
q=flowers
btnG=Google+Search

------- new
https://www.google.com/url
sa=t
source=web
ct=res
cd=7
url=http%3A%2F%2Fwww.example.com%2Fmypage.htm
ei=0SjdSa-1N5O8M_qW8dQN
rct=j
q=flowers
usg=AFQjCNHJXSUh7Vw7oubPaO3tZOzz-F-u_w
sig2=X8uCFh6IoPtnwmvGMULQfw
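
To make the analytics point concrete, here is a minimal sketch of referrer handling that treats both the old "/search?" and the new "/url?" formats as organic Google clicks and pulls out the q parameter; the simplified hostname check is an assumption on my part.

# Minimal sketch: recognize a Google organic-search referrer in either the old
# "/search?" or the new "/url?" format and extract the query term.
# The hostname check is simplified for illustration.
from urllib.parse import urlparse, parse_qs

def google_organic_query(referrer):
    parts = urlparse(referrer)
    if "google." not in parts.netloc:
        return None
    if parts.path not in ("/search", "/url"):    # do not rely on "/search" alone
        return None
    return parse_qs(parts.query).get("q", [None])[0]

print(google_organic_query("https://www.google.com/search?hl=en&q=flowers"))     # flowers
print(google_organic_query("https://www.google.com/url?sa=t&q=flowers&cd=7"))    # flowers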

According to Matt Cutts, a senior Google employee, this change is meant to make search results come back faster than they usually do. Matt says "The team there only thinks about speed. They want to get the results back to users as quick as humanly possible. JavaScript makes the search results a lot faster. Suppose you do a search for flowers, as you're typing flowers, they can do a query from the back end and fold search results right into the page. You're still in Google.com and they can pull in the results automatically."

Labels:


Stealth links and Googlebot:

WebmasterWorld owner and senior webmaster Brett Tabke posted an interesting thread about what he calls stealth links in Google: links that are not ordinary HREF links but still seem to count in Google. According to him, the following are some prominent kinds of stealth links:

  • another site links to your graphics (img src)
  • a site links to your javascript files
  • a site links to your css files?
  • rss feeds and other xml feeds that people can link to without notice or referrals necc being generated.
  • links in email that some se's can read (yahoo mail, hotmail, Gmail)
  • links marked with noindex
  • links marked with nofollow
  • urls within javascript or js comments
  • raw urls within css or css comments
  • urls within meta data of graphics and video files
  • urls within html comments
  • urls within the head section or meta data of a html page
  • links or pages that maybe surfed while visitor has page rank engaged on the toolbar
  • the target of a constructed, obfuscated, or encrypted js url (hidden until executed)
  • links behind pay walls that Google can spider via webmaster tools
  • Domains that have been 301'd with links.
  • Links in Flash movies (games, quizzes, etc).
  • non href'ed url's. (raw url on page http://www.webmasterworld.com)
  • Links in any documents other than web pages e.g. .doc, .pdf, .txt, etc.
  • blocking a page in robots.txt should make it blocked from bots, but they still spider it.

In my opinion, most links from the above sources are not counted by Google, at least for ranking purposes. Brett came up with the list because of a discussion started in another thread about pages getting PageRank without any external links. I feel most of the people complaining about pages getting PageRank without external links either don't know how to check backlinks or just rely on Yahoo and Google backlink data, which is totally unreliable.
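
To make the idea concrete, here is a minimal sketch (my own illustration, not anything from Brett's thread or from Google) that scans a page's HTML for URL mentions that are not ordinary HREF links; the regexes are deliberately crude and the file name is a placeholder.

import re

# Minimal sketch: find URL mentions in HTML that are not ordinary <a href> links.
# Crude regexes for illustration only; real crawlers parse the markup properly.
html = open("page.html", encoding="utf-8").read()   # placeholder local file

all_urls = set(re.findall(r"""https?://[^\s"'<>()]+""", html))
href_urls = set(re.findall(r"""<a\b[^>]*\bhref=["'](https?://[^"']+)["']""", html, re.I))

stealth_candidates = all_urls - href_urls   # raw URLs, comments, JS/CSS strings, img/script refs, etc.
for url in sorted(stealth_candidates):
    print(url)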

Labels:


Does Protected Whois affect Google rankings?

Many of us prefer to protect our whois data so that spammers and scammers cannot harvest our email addresses for abuse. We protect our privacy on some of our sites, though we don't do it on our company website. Many of us know Google uses whois data in its search engine rankings, primarily to stop spammers from capturing expired domains and exploiting the backlink power those domains have accumulated for their own websites.

A Google patent already describes Google's usage of such data for ranking purposes.
Some excerpts from that Google patent:
1. A geographic information system (GIS) comprising information about a plurality of geospatial entities and configured to prioritize the geospatial entities according to a ranking mechanism.

2. The system of claim 1, wherein the ranking mechanism uses data about a meta attribute of a geospatial entity to determine the geospatial entity's priority.

3. The system of claim 2, wherein the meta attribute comprises one of: quality of information available about the geospatial entity, an attribute of a description of the geospatial entity, and an attribute of a definition of the geospatial entity.

4. The system of claim 2, wherein the meta attribute comprises an indicator of the geospatial entity's popularity.

5. The system of claim 2, wherein the meta attribute comprises one of: an age attribute, a stature attribute, and an importance attribute.

6. The system of claim 2, wherein the meta attribute comprises a relationship of a geospatial entity to its place in a hierarchy of geospatial entities.

7. The system of claim 1, wherein an entity of the plurality of entities comprises a collection of geospatial objects and wherein the priority of the entity is determined responsive to a characteristic of the collection of geospatial objects.

8. The system of claim 1, wherein an entity of the plurality of entities comprises a geospatial entity defined in an on-line forum and wherein the ranking mechanism uses data generated in the on-line forum to determine the rank of the geospatial entity.

9. The system of claim 1, wherein the ranking mechanism uses data harvested from a website on the internet about a geospatial entity to determine the geospatial entity's priority.

10. The system of claim 1, wherein the ranking mechanism determines a geospatial entity's priority from a combination of weighted data from a plurality of meta attributes of the geospatial entity.

11. A computer-implemented method for ranking geospatial entities, the method comprising: receiving geospatial entity data; evaluating attributes of geospatial entities included in the received geospatial entity data; ranking the geospatial entities based on the evaluation; and storing the ranked geospatial entity data.

12. The method of claim 11, wherein the geospatial entity data comprises data generated in a community forum.

13. The method of claim 11, wherein the geospatial entity data comprises data associated with a specific user and further comprising using the ranked geospatial entity data to generate a map for the specific user.

14. The method of claim 11, further comprising selecting geospatial entities for a geographical display based on the rankings of the geospatial entities.

15. The method of claim 11, further comprising providing the ranked geospatial entity data to a map system configured to generate a map that includes ranked geospatial entities and unranked geospatial entities.

16. The method of claim 11, further comprising selecting geospatial entities to include in navigation instructions based on rankings of the geospatial entities.

17. The method of claim 11, further comprising selecting a geospatial entity to associate with an advertising term based on the geospatial entity's ranking.

18. The method of claim 11, further comprising providing the ranked geospatial entity data to an application for generating a search result based on the ranked geospatial entity data.

19. The method of claim 11, wherein evaluating is performed responsive to user instructions for providing personalized geospatial entity rankings.

20. The method of claim 19, wherein the user instructions comprise a weighting to be applied to an attribute of a geospatial entity identified in the geospatial entity data.

21. A system for ranking geospatial entities, the system comprising: an interface for receiving ranking data about a plurality of geospatial entities; an entity ranking module for generating place ranks for geospatial entities according to a ranking mechanism based on the ranking data; and a database for storing ranked entity data generated by the entity ranking module.

22. The system of claim 21, wherein the interface is configured to provide the ranked entity data to a requesting application.

23. The system of claim 21, wherein the entity ranking module is configured to evaluate a plurality of diverse attributes to determine a total score for a geospatial entity.

24. The system of claim 21, wherein the entity ranking module is configured to organize ranked entity data into placemark layers.

25. The system of claim 24, wherein each placemark layer corresponds to at least one of: a level of detail, a density, an altitude, and an entity category.

26. The system of claim 21, wherein the requesting application is a map server system configured to use the ranked entity data to generate a map including entities selected on the basis of place ranks.

27. The system of claim 26, wherein the entity ranking module is hosted on the map server system.

28. An entity ranking module hosted on a client device, the module for generating rankings for a plurality of geospatial entities and the module comprising: an interface for receiving entity data that defines a plurality of geospatial entities and ranking data that describes the plurality of geospatial entities; and a ranking engine for generating rankings for the geospatial entities, wherein the rankings are used to select which of the geospatial items to include in a map to be displayed on the client device.

29. The module of claim 28, further comprising a memory for storing data about a user of the client device and wherein the ranking engine is configured to apply a ranking mechanism responsive to the user data.

30. The module of claim 29, wherein the user data comprises user preferences about the relative weightings of attributes evaluated by the ranking engine.

31. The module of claim 29, wherein the user data comprises a user defined geospatial entity.

32. The module of claim 29, wherein the user data comprises an indication of a user's interest in a geospatial entity and wherein the ranking mechanism assigns a rankings premium to the geospatial entity based on the user's interest.
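
As a rough illustration of the kind of weighted meta-attribute scoring described in claims 2 and 10, here is a minimal sketch; the attribute names, weights, and example entities are my own assumptions, not values from the patent.

# Rough sketch of claim 10: combine weighted meta attributes into a priority score.
# Attribute names, weights, and example data are invented for illustration only.
WEIGHTS = {"popularity": 0.5, "description_quality": 0.3, "age": 0.2}

def place_rank(entity):
    # entity: dict of meta-attribute scores, each normalized to the range 0..1
    return sum(WEIGHTS[attr] * entity.get(attr, 0.0) for attr in WEIGHTS)

entities = {
    "Eiffel Tower": {"popularity": 0.95, "description_quality": 0.9, "age": 0.8},
    "Local kiosk": {"popularity": 0.05, "description_quality": 0.2, "age": 0.1},
}

for name, attrs in sorted(entities.items(), key=lambda kv: place_rank(kv[1]), reverse=True):
    print(name, round(place_rank(attrs), 3))
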
What I am trying to say is that you need to be careful when changing whois info or suddenly hiding your whois data to protect your privacy. If there is a sudden change, you can expect some sort of problem, but if it's done carefully I am sure you won't have any problem with whois privacy protection.

Labels:


Does Googlebot crawl videos?


Recently I have noticed a surge in YouTube videos ranking for competitive phrases. If you look at a YouTube page, all you see is a video without much proper text, a few comments, and the rest is just boilerplate. So here are my questions:

1. How does a video rank at the top of Google results when it has no proper optimization done and just has keywords in the title?

2. How does Google determine the content of a video? Does it have a program that somehow analyzes the video and evaluates its content, even when there are no spoken or written words, as in this case? Or does a Google employee actually watch the video and personally evaluate its content?

After a bit more searching, I feel Google looks at the popularity of the video, real backlinks pointing to the video, the number of times the video has been embedded, and so on.

Google can also read the comments and evaluate the quality of the video. Some videos attract thousands of comments, so I feel Google has the ability to crawl all the comments even though much of that content is hidden.

The quality of the domain itself also matters: youtube.com is a PR 9 domain, which signals a very high level of importance and popularity in Google's eyes. You can expect pages under youtube.com to rank simply because of the strength of the YouTube domain.

Video views can also signal Google that the video is important.

In fact only Google knows the algorithm behind ranking YouTube videos. I will leave it to them.

Labels:


Google introduces broader keyword suggestions and longer snippets:

On March 24th Google released a new technology that can better understand concepts and associations related to the searches done by end users; this advancement helps provide more related keyword suggestions at the top and bottom of the search result pages.
For example, if you search for [principles of physics], Google's algorithms understand that "angular momentum," "special relativity," "big bang" and "quantum mechanics" are related terms that could help you find what you need.

Longer snippets in Google results page:

Google is now offering longer snippets for longer queries. A snippet is the few lines of text shown below the dark blue title, taken from content previously indexed from your website. Here is an example.

Suppose you were looking for information about Earth's rotation around the sun, and specifically wanted to know about its tilt and distance from the sun. So you type all of that into Google: [earth's rotation axis tilt and distance from sun]. A normal-length snippet wouldn't be able to show you the context for all of those words, but with longer snippets you can be sure that the first result covers all those topics. In addition, the extra line of snippets for the third result shows the word "sun" in context, suggesting that the page doesn't talk about Earth's distance from the sun:

Labels:


Unable to get rid of potential penalty?

You know your site is penalized. You cleaned up what you thought was the trigger and filed a reconsideration request. After that you are in the dark. The chances are you never get re-included, because:

- Your site might never have had a problem to begin with. It might have been one of Google's freak collateral-damage issues that landed you in the soup.

- You correctly identified the issue, cleaned it up and filed a request, but you are in a mandatory penalty period, and you don't know how long it is or when it will end.

- You haven't identified the problem, or have only partly addressed it, and Google wants you to do more; you are not aware of this and keep waiting endlessly for the penalty to end.

- It might never have been a penalty by Google's definition, but an algorithmic tweak that has affected a select set of keywords. If the overall traffic hasn't been affected drastically, perhaps a perceived penalty might belong to this category.

- Your site is affected (penalized or algorithmically tweaked), you undertake damage-control efforts and file a request to Google citing what you think the problem was and how you addressed it, which might be news to Google! So they use the stick you handed them to beat you.

It might just be best to clean up issues that you are aware of and leave the rest to destiny.
Google penalties are vague and sometimes affect high-quality sites; I feel the best approach is still to file a reconsideration request.

Labels:


Google India has launched a new campaign, dubbed the Google Bus, that travels to different cities in India to educate people on the importance of the Internet. Currently the Google Bus is touring Tamil Nadu and the campaign is going successfully. The Google Bus's primary motive is to attract more people to using the Internet, which benefits Google later when those people start buying online; Google's primary revenue comes from selling ads in the sponsored links section of its search pages. The bus is scheduled to tour over 15 towns in Tamil Nadu over a span of 45 days, and it has already covered more than 5 towns, including the capital city, Chennai.

Route the bus will take in Tamil Nadu:

Follow our route

03 Feb Chennai
05 Feb Vellore
06 Feb Krishnagiri
07 Feb Salem
12 Feb Pollachi
14 Feb Coimbatore
18 Feb Dindigul
19 Feb Madurai
23 Feb Tirunelveli
25 Feb Nagercoil
27 Feb Tuticorin
02 Mar Pudukkottai
03 Mar Tiruchirappalli
06 Mar Thanjavur
08 Mar Kumbakonam
10 Mar Neyveli
12 Mar Cuddalore
13 Mar Tiruvannamalai



Labels:


Is Google devaluing forums?

Several people have reported that Google is devaluing forums and not indexing them well these days. I noticed something similar in a couple of forums we monitor. Google stopped indexing our phpBB forums. Before, it was indexing them every day; all new posts appeared in Google the same day, sometimes within an hour. Now it has stopped. It doesn't index topics at all, and the last time it cached the main board page (index.php) was January 30.

Rankings for what is already indexed didn't drop; the overall position of the site for generic keywords even improved. And yes, we checked: our board was not hacked, and there are no redirect, .htaccess, or robots.txt hacks. Webmaster Tools doesn't display any error messages and everything looks OK, but the crawl graphs do show that Google crawled less in January-February than in November-December. It seems like Google just ignores the board now, as if it has "better things to crawl," which is a pity, because tons of important info is being posted daily.

We did some research on a few of the other phpBB sites we monitor, and those boards face the same situation. I suspect this could just be a temporary glitch with Google. Hope it gets fixed soon.

Labels:


Google and Brand Authority

A WebmasterWorld member has started a discussion about whether Google is giving brand authority to certain results. Brand authority means that if you are the top brand for a particular search, you will rank for that term.

For example:

Keyword: Laptop
#1 - Apple
#2 - Dell

Keyword: High Speed Internet
#1 - ATT
#2 - Comcast

Keyword: Quit Smoking
#1 - SmokeFree.gov
#2 - CDC.gov

I have seen this in many cases: for some keywords, the top brand for that keyword ranks even if its website is all Flash or just a splash page. Google's Eric Schmidt has already said there needs to be some sort of authority or social buzz for a top brand to rank. It's not just about on-page or off-page factors for top brands; if a particular brand is the leader for a particular search, I am sure it will rank, at least on Google. I wonder how Google does this. They always say their results don't involve manual intervention, but making a top brand rank without real web popularity would seem to require some extraordinary human-guided algorithm. Most top brands have good offline popularity but not web popularity, and yet they still rank well for their phrases.

Google says it's not just about factors on the site alone; social media signals also boost top brands. Especially this year, many users have begun to notice brand authority, so Google has probably figured out a way to make the top brands rank for their corresponding keywords. The real question is whether Google users are happy to see top brands at the top of Google results for major keywords. If yes, I don't think Google needs to worry about how it gets them there. Users come first, and I feel most users are happy to see top brands at the top of results for brand-related keywords.

Labels:


Google webmaster holiday ideas

Google compiled a list of quick and simple tips for websites preparing for the holiday rush. For online and offline retailers, we understand that your website is a big part of your business, especially this time of year. Whether it's to make the sale online or to increase foot traffic to your brick-and-mortar location, your web presence is a critical part of your business plan. The tips below are fast, free, and can make a big difference.

Verify that your site is indexed by Google (and is returned in search results)

Check your snippet content and page titles with the site: command [site:example.com] -- do they look accurate and descriptive for users? Ideally, each title and snippet should be unique in order to reflect that each URL contains unique content. If anything is missing or you want more details, you can also use the Content Analysis tool in Webmaster Tools. There you can see which URLs on your site show duplicate titles or meta descriptions.

Label your images accurately

Don't miss out on potential customers! Because good 'alt' text and descriptive filenames help us better understand images, make sure you change non-descriptive file names [001.jpg] to something more accurate [NintendoWii.jpg]. Image Search is one of our largest search properties, so you should take advantage of it.

Know what Google knows (about your site)

Check for crawl errors and learn the top queries that bring traffic to your site through Webmaster Tools. See our diagnostics checklist.

Have a plan for expiring and temporary pages

Make sure to serve accurate HTTP status codes. If you no longer sell a product, serve a 404. If you have changed a product page to a new URL, serve a 301 to redirect the old page to the new one. Keeping your site up-to-date can help bring more targeted traffic your way.

Increase foot traffic too

If your website directs customers to a brick-and-mortar location, make sure you claim and double check your business listing in Google Local.
Usability 101

Test the usability of your checkout process with various browsers. Ask yourself if a user can get from product page to checkout without assistance. Is your checkout button easy to find?
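
As a rough illustration of the status-code tip above (404 for retired products, 301 for moved product pages), here is a minimal sketch; the framework (Flask), the routes, and the product data are my own assumptions, not anything from Google's post.

# Minimal sketch: serve 404 for retired products and 301 for moved product pages.
# Flask, the routes, and the data below are illustrative assumptions.
from flask import Flask, abort, redirect

app = Flask(__name__)

MOVED = {"old-widget": "/products/new-widget"}   # old slug -> new URL
RETIRED = {"discontinued-gadget"}                # products no longer sold

@app.route("/products/<slug>")
def product(slug):
    if slug in MOVED:
        return redirect(MOVED[slug], code=301)   # permanent redirect to the new page
    if slug in RETIRED:
        abort(404)                               # tell crawlers the page is gone
    return "Product page for " + slug

if __name__ == "__main__":
    app.run()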

Labels: ,


HTC PLANNING TO RELEASE GOOGLE ANDROID PHONE THIS SUMMER

The 14 new members who joined Google's Open Handset Alliance showed their support for development of the Google Android mobile operating system. Among the fresh additions is Sony Ericsson, and it looks like the company is not wasting any time and has hit the ground running.

According to several sources, Sony Ericsson is planning to release its Android handset by summer 2009. A company spokesman says the first model will be on the higher end, while more mass-market devices will be released at a later time. In addition, HTC has been asked to work on a whole portfolio of Android devices and on the Android release program. HTC is the manufacturer of the first Android smartphone, the T-Mobile G1, which was questionable in the hardware department; but now that HTC has acquired One & Co Design Inc. for its handset designs, perhaps we will see a sleeker device? Summer can't get here quickly enough.

Labels:


SEO experiment on keyword-rich links by a WebmasterWorld member

A webmaster world member asks "Hi guys, I am doing an interesting experiment on two of my more throw away domains. The experiment is testing to try and determine more information about how linking to the homepage affects rankings. The testing involves various controls - linking to the root domain from the nav only using 'home', linking from the nav using 'main keyword', linking from nav using 'variations' of keyword, linking from content only (while nav links saying home) to home using 'keywords' etc, etc.

First, I should mention some points about the domain.
4 years old Owned by me Dedicated IP Canonical comdomized HTML only Ranks top 5 in Google.com for main, second and third keyword phrases. Total of 90 pages, all unique content (written by me)
Testing was done over a 3 month period, with grace periods in between testing.
Here is so far what I have found. Might tell us a little about the threshold and re ranking filters
1. Linking home from every page in content using the same keyword caused 6 page drop in rankings.
2. Linking home using keyword in nav on all pages caused the same drop.
3. Link home from every page in content using variations caused a 3 page drop.
4. Linking home from the first 10 pages listed in google.com for site:domain.com/* brought increased ranking (from 5th to 3rd)
What is really interesting is that I gotten this down to the 'by page' factor. When I *slightly* cross the threshold and add links to two extra pages, and then wait until they are cached, I tip the scales and drop, to page 6.
What is further interesting is that linking home from content using variations of keywords WAS quite effective to a point, after which the site plummeted.

As well, this might point to a 'hard line' being crossed in terms of threshold, at one point I had the website going between position 4 and 51-60 for the same keyword every second day (flipping back and forth)
My test will be about trying to -950 the website by being ridiculously deliberate in nav linking, and then seeing if I can reverse the results by removing those (and how long it takes for the trust to be reinstated to the website) "

Labels: ,


Google calendar officially comes to Apple's iCal

On Monday, Google announced full support for the CalDAV protocol along with the release of a small piece of software for Mac computers so that users can easily link up their Google Calendars with iCal applications.

Google had previously launched CalDAV support back in July; however, consumers still had to manually add their calendars to a CalDAV-supporting application like Mozilla Sunbird or Apple's iCal. The newly released Mac utility, named "Calaboration," lets users plug in their Google Calendar username and password to bring their Google calendars over to iCal. It provides two-way synchronization, meaning whatever changes you make on either end appear on both within a few minutes.
Once set up, Calaboration worked without any problem. With the current implementation, we are able to see other people's schedules as well as reply yes, no, or maybe to calendar invitations. The only problem we faced early on was a syncing error that prevented writing data to Google's servers, which was remedied by closing and reopening the program after the initial CalDAV setup.

If you are a Sunbird user, there is a simple provider extension that does the same thing.

Labels:


Losing pages in the search engine index: a concern

According to a member " I have been doing a lot of digging lately because of a site I have that has been losing pages in the index… at least I thought it was losing pages. In the coarse of investigating, I have been finding a lot of discrepancies and have come to the conclusion that though tools and search operators may be helpful, they seem to be far from accurate and do not fully portray what is in the data and returns. What I found fascinating is that while I perceived that I was losing pages in the index, I actually have been increasing position for some relatively hard to get keywords and phrases. In fact, the site in question just went to #6 for widgets. It seems the more I search and investigate, the more glaring the discrepancies.

I was having a lot of problems with the site and duplicate content. It seems there were several ways of getting to the same page (different URLs) and as we know, this can be a bad thing. The site has a forum that has generated 16,000 topics (some of them on multiple pages) so in essence, I am going to estimate that I have around 19,000 pages total on the site. Now at the height of the duplicate problem, when I did a site:mysite.com, I was getting over 80,000 pages returned. WOW! I fixed all the dupe content issues and now each page has one URL and each has a uniquely generated title, description, keywords and of course, the content is different since it is user generated. I used robots.txt to get rid of the duplicated pages and started to watch what would happen. This seemed to have corrected the problem. Pages started going supplemental and dropped, as far as I can tell. But the pendulum seemed to have swung too far! Within the past month, the number of pages returned using site: have been slowly dropping.

Now when I do a site:mysite.com, it only shows 4000 pages. Huh? What's the deal with that? Not only that, when I do a site:mysite.com/*, I only get about 800 pages. So I am confused, of course. But are the missing pages really not there? I conducted about 200 searches for the pages that I thought were missing and found every single one of them, though the searches were fairly specific. So what does this tell me? The site: operator does not work. All of my pages are there, it's just Google doesn't want to count them all with this operator. What does this mean? Not sure, but it is what it is. For every page I find missing, I can find in a search. The tool seems to be broken - like a lot of the tools on G. "

What causes a drop in indexed pages?

Labels:


"Results 1 to 35 of about 4" - what causes it ?

An interesting post by a senior member at WebmasterWorld:

"Two days ago a new CMS went live that replaces an old one that had loads of canonical problems. Some content pages had upwards of 4 or 8 different URLs indexed, and some content pages could be returned for a near infinite number of different URLs.
The old CMS used horrible dynamic parameter-driven URLs and the new one uses short folder-like URL formats, and *all* canonicalization factors have been taken into account.

There are a lots of sites to be moved over to the new CMS, but we started with the smallest -- so small that it doesn't really need a CMS (except that using the CMS has made it very easy for the owner to keep it updated).
The site was already fully indexed, and some content pages show under multiple URLs, because basic non-www to www canonicalisation and so on was only added a few months ago. Many of the really old non-canonical URLs are also still listed.
Now that the new CMS has been installed, and the old content reinstated, most of the old URLs in the SERPs (actually all except domain root) are now 301 redirects (from long and horrible dynamic, to short folder-look alike URLs).
Last night Google reported "1 to 35 of about 8" for a site:domain.com search.
Today it shows "1 to 35 of 4". None of the new URLs are showing up yet (I expect they will in the next 24 to 36 hours).
So, their internal system "knows" that most of the URLs they already had are now redirects, and it seems that those URLs aren't now included as a part of the "site count" (the "of nnn" number).
I would guess that the URLs that now redirect are already moved over to Supplemental.

Hence... "1 to 35" - what we are showing you "of 4" - how many URLs that we think are "real" (200 OK).
Now I understand a bit more, I think.
I would expect the new URLs for content to start appearing tomorrow, or soon after. "
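
If you are doing a similar migration, here is a minimal sketch (my own illustration, not the member's setup) for spot-checking that the old dynamic URLs now return 301s pointing at the new folder-style URLs; the example URLs and the use of the requests library are assumptions.

# Minimal sketch: verify that old CMS URLs now return 301 redirects to the new URLs.
# The example URLs and the use of the requests library are illustrative assumptions.
import requests

old_urls = [
    "http://www.example.com/index.php?page=42&sid=abc",
    "http://www.example.com/index.php?page=43&sid=abc",
]

for url in old_urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    target = resp.headers.get("Location", "-")
    print(resp.status_code, url, "->", target)   # expect 301 and a short folder-style URL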

Labels:


What Google uses to crawl websites - is it something similar to Chrome?

Is Google using Chrome to crawl websites?

An interesting thread on Webmasterworld.com:

webmasterworld.com/google/3760236.htm

Thought i would drop in to report a very interesting Google activity I've been observing today.
I have an "who's online" script running and reporting visitors in real time as they browse the site. The script uses javascript to report referrer and currently viewed page.
One of the visitors today was from 66.249.72.180 [google.com] sucking approx 200 pages in a 15-20 minutes frame and invoking the on page tracking javascript....sending referring page and current URL information (it had to invoke javascript just like a browser would to do that). In other words acting very much like it is a full on browser.
Maybe it is already old news to some, maybe i missed previous topics discussing this. Anyway, it is the first time i see this thing in real time. Very interesting crawl development.

Anyone else noticed this ?
P.S. Page to page browsing was occuring at a very fast rate, could not be human.
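
For context, here is a minimal sketch of the server side of such a "who's online" tracker: a tiny endpoint that the on-page JavaScript could call with the referrer and current URL, logging the caller's IP so that a JavaScript-executing crawler shows up in the log. The framework, route, and parameter names are my own assumptions, not the poster's actual script.

# Minimal sketch: endpoint that on-page JavaScript could call (for example via an
# image or XHR request) with the referrer and current URL. A crawler that executes
# JavaScript would appear here alongside human visitors.
from flask import Flask, request

app = Flask(__name__)

@app.route("/track")
def track():
    print(request.remote_addr,                   # e.g. a 66.249.x.x Googlebot address
          request.args.get("ref", "-"),          # document.referrer sent by the page
          request.args.get("url", "-"))          # location.href sent by the page
    return "", 204                               # empty response; nothing to render

if __name__ == "__main__":
    app.run()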

Labels:


Google to Announce Third Quarter 2008 Financial Results

MOUNTAIN VIEW, Calif (October 6, 2008) -- Google Inc. (NASDAQ: GOOG) today announced that it will hold its quarterly conference call to discuss third quarter 2008 financial results on Thursday, October 16, 2008 at 1:30 p.m. Pacific Time (4:30 p.m. Eastern Time).

The live webcast of Google's earnings conference call can be accessed at investor.google.com/webcast. The webcast version of the conference call will be available through the same link following the conference call.

Labels:


Trust and authority - two different things in search engines

A WebmasterWorld thread analyzes the difference between trust and authority in Google. It's a good thread; join the discussion here: www.webmasterworld.com/google/3753332.htm

"While studying Google's recently granted [url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=7,346,839.PN.&OS=pn/7,346,839&RS=PN/7,346,839]Historical Data patent[/url], I noticed that the language helps to separate two concepts that we tend to use casually at times: trust and authority.
...links may be weighted based on how much the documents containing the links are trusted (e.g., government documents can be given high trust). Links may also, or alternatively, be weighted based on how authoritative the documents containing the links are (e.g., authoritative documents may be determined in a manner similar to that described in [url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=6,285,999.PN.&OS=pn/6,285,999&RS=PN/6,285,999]U.S. Pat. No. 6,285,999)[/url].
Clearly, Google has two different metrics going on. As you can see from the reference to Larry Page's original patent, authority in Google's terminology comes from backlinks. When lots of other websites link to your website, you become more and more of an authority.

But that isn't to say you've got trust. So what exactly is trust? Here's an interesting section from the same patent:
...search engine 125 may monitor one or a combination of the following factors: (1) the extent to and rate at which advertisements are presented or updated by a given document over time; (2) the quality of the advertisers (e.g., a document whose advertisements refer/link to documents known to search engine 125 over time to have relatively high traffic and trust, such as amazon.com, may be given relatively more weight than those documents whose advertisements refer to low traffic/untrustworthy documents, such as a pornographic site);
So we've got two references here, government documents and high traffic! From other reading, I'm pretty sure that trust calculations work like this - at least in part. Google starts with a hand picked "seed list" of trusted domains. Then trust calculations can be made that flow from those domains through their links.
If a website has a direct link from a trust-seed document, that's the next best situation to being chosen as a seed document. Lots of trust flows from that link.
If a document is two clicks away from a seed document, that's pretty good and a decent amount of trust flows through - and so on. This is the essence of "trustrank" - a concept described in [url=http://dbpubs.stanford.edu:8090/pub/2004-17]this paper by Stanford University and three Yahoo researchers[/url].
This approach to calculating trust has been refined by the original authors to include "negative seeds" - that is, sites that are known to exist for spamming purposes. The measurements are intended to identify artifically inflated PageRank scores. See this pdf document from Stanford: [url=http://dbpubs.stanford.edu:8090/pub/showDoc.Fulltext?lang=en&doc=2005-33&format=pdf&compression=&name=2005-33.pdf]Link Spam Detection[/url]
To what degree Google follows this exact approach for calculating trust is unknown, but it's a good bet that they share the same basic ideas. "
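The trust-propagation idea from the Stanford/Yahoo paper can be sketched in a few lines. This is a simplified toy version of the TrustRank concept, not Google's actual formula: trust starts at hand-picked seed pages and decays as it flows out through links.

```python
# A simplified sketch of the TrustRank idea (not Google's actual formula):
# trust starts at hand-picked seeds and decays as it flows through links.
links = {                      # page -> pages it links to (toy graph)
    "gov-seed":  ["site-a", "site-b"],
    "site-a":    ["site-c"],
    "site-b":    ["site-c", "spam-site"],
    "site-c":    [],
    "spam-site": ["spam-site"],
}
seeds = {"gov-seed"}
alpha, iterations = 0.85, 20

trust = {p: (1.0 if p in seeds else 0.0) for p in links}
for _ in range(iterations):
    new = {p: ((1 - alpha) if p in seeds else 0.0) for p in links}
    for page, outlinks in links.items():
        if not outlinks:
            continue
        share = alpha * trust[page] / len(outlinks)   # trust splits across outlinks
        for target in outlinks:
            new[target] += share
    trust = new

for page, score in sorted(trust.items(), key=lambda kv: -kv[1]):
    print("%-10s %.3f" % (page, score))
```

Pages one click from the seed end up with a healthy share of its trust, pages two clicks away with less, and a spam island that no trusted page links to ends up with essentially none - which is the whole point of the technique.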

Labels: ,


Is the TPR penalty lifted for some sites?



Some WebmasterWorld members are noticing that the toolbar PageRank penalty has been lifted for their sites. Google started imposing toolbar PageRank penalties on sites that sell or buy links around January this year; now the penalty seems to have been lifted for some sites. Though it's being reported in the forums, we haven't noticed anything like that across our client sites - probably because, as a matter of policy, we don't sell or buy links for our clients.

The forum discussion is here: http://www.webmasterworld.com/google/3729425.htm

Labels:


How to start a multilingual site: help from Google to make a Google-friendly multilingual site

This post is about how to start a multilingual site and the various pros of having one. A multilingual site is a site that is available in different languages. The first thing you'll want to consider is whether it makes sense to acquire country-specific top-level domains (TLDs) for all the countries you plan to serve. This option is beneficial if you want to target the countries each TLD is associated with, a method known as geotargeting. Geotargeting is different from language targeting: geotargeting is for sites whose main audience is in a particular region of the world, and it allows you to set different geographic targets for different subdirectories or subdomains (e.g., /de/ for Germany). Language targeting, on the other hand, aims to reach all speakers of a particular language around the world, so you probably don't want to limit yourself to a specific geographic location, and in that case you shouldn't use the geographic target tool. Since it's difficult to maintain and update multiple domains, it's often better to buy one non-country-specific domain that hosts all the different versions of your website. In this case, there are two recommended options:

The first option is to place the content for each language in a different subdomain. For our example, you would have en.example.com, de.example.com, and es.example.com.
The second option is to place the content for each language in a different subdirectory. This is easier to handle when updating and maintaining your site. For our example, you would have example.com/en/, example.com/de/, and example.com/es/.
A doubt may arise: when the same content is posted in different languages, does it count as duplicate content? Definitely not, but you should make sure your site is well organized. Always avoid mixing languages on a single page, as this can confuse Googlebot as well as your users; it's always good to keep navigation and content in the same language on each page. You can also find out how many of your pages are recognized in a certain language by performing a language-specific site: search. A multilingual site benefits both the owner and the visitors, since visitors get information in their own language. For example, when someone wants to know about fashion design institutions in London, they may type that query into search along with the language they want the page displayed in, and they are far more comfortable when the information comes back in a language they know and understand.
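As a toy illustration of the subdirectory option (just a sketch using the example.com folders from above, not an official Google recipe), here is how a site might map a visitor's language to the matching content folder, with a sensible fallback:

```python
# A toy sketch of the subdirectory option: one domain, one folder per language
# (example.com/en/, /de/, /es/), with a default fallback folder.
SUPPORTED = {"en": "/en/", "de": "/de/", "es": "/es/"}
DEFAULT = "/en/"

def language_directory(accept_language_header):
    """Map a browser Accept-Language header to the matching content folder."""
    for part in accept_language_header.split(","):
        code = part.split(";")[0].strip().lower()[:2]   # "de-DE;q=0.9" -> "de"
        if code in SUPPORTED:
            return SUPPORTED[code]
    return DEFAULT

print(language_directory("de-DE,de;q=0.9,en;q=0.8"))  # /de/
print(language_directory("fr-FR"))                     # /en/ (fallback)
```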

Official post https://googlewebmastercentral.blogspot.com/2008/08/how-to-start-multilingual-site.html

Labels: ,


Google bans WebPosition Gold position checker



Google has always warned against automated position checkers, which use a lot of its resources. Now Google has taken a stronger hand and blocked the WebPosition Gold software from performing automated ranking requests. Automated rank requests create a lot of junk queries and consume a lot of Google's server resources. Google had been issuing warnings not to use WebPosition Gold, but people continued to use it; now Google has taken action and blocked all WebPosition Gold queries. WebPosition Gold has a distinctive way of sending queries to Google, and it seems Google was able to detect it with their bot-filtering software.

We at Search Engine Genie never use bulk keyword rank checkers. Our rank checkers are search engine friendly and allow only a limited number of queries per day.
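For what a "limited queries per day" policy can look like in practice, here is a minimal sketch with hypothetical limits (not our actual tool's code): each user gets a small daily quota of ranking queries, and anything beyond it is refused.

```python
# A minimal sketch (hypothetical limit, not our actual tool) of capping
# how many ranking queries one user can send per day.
import datetime
from collections import defaultdict

DAILY_LIMIT = 10
_usage = defaultdict(int)   # (user, date) -> queries sent today

def allow_rank_query(user_id):
    """Return True if this user still has quota left for today."""
    key = (user_id, datetime.date.today())
    if _usage[key] >= DAILY_LIMIT:
        return False
    _usage[key] += 1
    return True

for i in range(12):
    print(i + 1, allow_rank_query("user-42"))   # queries 11 and 12 are refused
```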

Labels:


Another company wants a piece of the Google pie - Mediaset

First we had Viacom, then the Belgian newspaper group, and now we have another company suing Google. Mediaset, an Italian media company, is suing Google and YouTube for using copyrighted material on their website.



According to Reuters:

"Mediaset, controlled by Prime Minister Silvio Berlusconi, joins others broadcasters seeking compensation from YouTube, a video-sharing website, for copyright infringement.
Mediaset filed suit in a Rome court, the company said in a statement on Wednesday. A YouTube spokeswoman said it did not see the need for the legal case.
"YouTube respects copyright holders and takes copyright issues very seriously," the spokeswoman said in London. Google bought YouTube in 2006.
"There is no need for legal action ... We prohibit users from uploading infringing material and we cooperate with all copyright holders to identify and promptly remove infringing content as soon as we are officially notified," Google said in a separate statement.
Lawsuits and trials in Italy are often lengthy, and it is difficult to forecast the outcome.
Mediaset said a sample analysis of YouTube at June 10 found "at least 4,643 videos and clips owned by us, equivalent to more than 325 hours of transmission without having rights".
Mediaset said this was equal to the loss of 315,672 days of broadcasting by its three TV channels."



Well, I have always said that a lawsuit against YouTube.com is not the best idea, since YouTube is a public resource and cannot be threatened. We will lose the freedom of the Internet if YouTube loses its way to lawsuits.

Labels: ,


Google knows the web is big - an informative post on the Google blog

Google is one of the biggest websites. We've known for a long time that the web is big: the first Google index in 1998 already had 26 million pages, and by 2000 the Google index reached the one billion mark. Over the last eight years, they've seen a lot of big numbers about how much content is really out there. Recently, even their search engineers stopped in awe at just how big the web is these days, when the systems that process links on the web to find new content hit a milestone: 1 trillion unique URLs on the web at once! So how many unique pages does the web really contain?

No one knows exactly how many pages it contains - in a sense, the number of possible pages out there is infinite. Google doesn't index every one of those trillion pages; many of them are similar to each other or represent auto-generated content. But Google is proud to have the most comprehensive index of any search engine, and their goal has always been to index all the world's data. To keep up with this volume of information, their systems have come a long way since the first set of web data Google processed to answer queries. Back then they did everything in batches: one workstation could compute the PageRank graph on 26 million pages in a couple of hours, and that set of pages would be used as Google's index for a fixed period of time.

Today, Google downloads the web continuously, collecting updated page information and re-processing the entire web-link graph several times per day. This graph of one trillion URLs is similar to a map made up of one trillion intersections. So multiple times every day, they do the computational equivalent of fully exploring every intersection of every road in the United States. Google's distributed infrastructure allows applications to efficiently traverse a link graph with many trillions of connections, or quickly sort petabytes of data, just to prepare to answer the most important question: your next Google search.
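For readers who want to see what "computing PageRank over a link graph" actually means, here is the textbook power-iteration version on a four-page toy graph - obviously nothing like the scale or sophistication of Google's real infrastructure.

```python
# Textbook power-iteration PageRank on a tiny toy graph (illustration only,
# nothing like the scale of Google's real systems).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping, iterations = 0.85, 50
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(iterations):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        for target in outlinks:
            # Each page splits its current rank evenly across its outlinks.
            new[target] += damping * rank[page] / len(outlinks)
    rank = new

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```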

http://googleblog.blogspot.com/2008/07/we-knew-web-was-big.html

Labels: ,


Yahoo and Microsoft gain more market share, but Google still leads way ahead

The major search engines are Google, Yahoo, Microsoft, AOL, and Ask. Among these, Yahoo and Microsoft showed gains, whereas Google slipped slightly. The percentages of US searches handled by the five engines are 61.5%, 20.9%, 9.2%, 4.1%, and 4.3% respectively. Finally, a change - Google slips while Yahoo and Microsoft gain. Does that mean Google is in trouble? Obviously not: in raw number of searches, June 2008 was another record-breaker for Google. But Google dropped from 61.8% in May 2008 to 61.5% in June 2008, the first share drop in the past year since December 2007 (when it went from 58.6% to 58.4%). On the other hand, Microsoft showed its first gain in the past year: after many months of incremental decline, Microsoft rose from an 8.5% share in May 2008 to 9.2% in June 2008.

Microsoft's cashback program is likely a factor in its rise. But Microsoft was hoping the program would generate more than a 0.7% gain in share, and that's all it has gotten so far. Clearly the program isn't the massive game-changer some thought it would be; if cashback is going to be a success, it will be something that happens over time. Let's see if that indeed happens in the coming months. Yahoo, too, is showing a rise: after months of drops with the occasional uptick, Yahoo posts two straight months of gains - 20.4% in April 2008 to 20.6% in May, then 20.9% in June 2008. So there are real gains for both Yahoo and Microsoft. Let's look at the actual number of searches each handled versus market share:
Google: 7.1 billion
Yahoo: 2.4 billion
Microsoft: 1.1 billion
Ask: 501 million
AOL: 471 million

From this we can see that Google still shows a gain in raw volume, going over the 7-billion-searches-served mark. Yahoo, at 2.4 billion searches, and Microsoft, at just over 1 billion searches, didn't break any past records but at least got closer to territory they held a year ago. On the whole, Yahoo and Microsoft gained compared to recent months, while Google declined a bit in share but still set a record in raw searches!
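A quick back-of-envelope check shows the share percentages follow from the raw counts (the raw figures are rounded, so the results are only approximate):

```python
# Back-of-envelope check: do the raw June 2008 search counts roughly
# reproduce the quoted market-share percentages?
searches = {   # core searches, in billions (from the list above)
    "Google": 7.1, "Yahoo": 2.4, "Microsoft": 1.1, "Ask": 0.501, "AOL": 0.471,
}
total = sum(searches.values())
for engine, count in searches.items():
    print("%-9s %.1f%%" % (engine, 100 * count / total))
# Prints roughly 61.4%, 20.7%, 9.5%, 4.3%, 4.1% - in line with the quoted shares.
```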

Labels: , ,


How to submit a reconsideration request - Google's official video transcript



Requesting reconsideration in Google: how to get a banned site back - video transcript


Posted by Mariya Moeva, Search Quality Team



Hi, I am Mariya Moeva from the Google Search Quality Team, and I'd like to talk to you about reconsideration requests. In this video we will go over how to submit a reconsideration request for your site. Let's take a webmaster as an example: Ricky, a hard-working webmaster who works on his ancient-politics blog every day - let's call it example.com. One day he checks and sees that his site no longer appears in Google search results. Let's see how he works out whether he needs to submit a reconsideration request. First, he needs to check whether his site's disappearance from the index may be caused by access issues. You can do that too by logging into your Webmaster Tools account: on the overview page you will be able to see the last time Googlebot successfully accessed your site. Here you can also check whether there are any crawl errors - for example, if your server was busy or unavailable when we tried to access your site, you would get a "URL unreachable" message. Alternatively, there may be URLs on your site blocked by your robots.txt file; you can see these under "URLs restricted by robots.txt".

If these URLs are not what you expected, you can go to Tools and select "Analyze robots.txt". Here you can see whether your robots.txt file is properly formatted and only blocking the parts of your site that you don't want Google to crawl. If Google has no problem accessing your site, check whether there is a message waiting for you in the Message Center of your Webmaster Tools account. This is where Google communicates with you about the sites that you manage. If we see that there is something wrong with your site, we may send you a message there detailing the things you need to fix to bring your site back into compliance with the Google webmaster guidelines. Ricky logs into his Webmaster Tools account and finds that there are no new messages. If you don't find any message in the Message Center, check whether your site is in violation of Google's webmaster guidelines; you can find them in the Help Center under the topic of creating a Google-friendly site and how to make your site perform best in Google. If you are not sure why Google is not including your site, a great place to look for help is our Google Webmaster Help group, where you will find many friendly and knowledgeable webmasters and Googlers who will be happy to look at your site and suggest what you might need to fix.
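Outside of Webmaster Tools, you can run the same robots.txt sanity check yourself. Here is a small sketch using Python's standard robotparser module, with hypothetical URLs standing in for your own pages:

```python
# A small self-check (hypothetical URLs) along the lines the video suggests:
# is my robots.txt accidentally blocking Googlebot from pages I care about?
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()   # fetches and parses the live robots.txt

for url in ["https://www.example.com/",
            "https://www.example.com/blog/ancient-politics/",
            "https://www.example.com/private/draft.html"]:
    print(url, "->", "crawlable" if rp.can_fetch("Googlebot", url) else "blocked")
```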

You can find links to both the Help Center and the Google help group at google.com/webmasters. To get to the bottom of why his site has disappeared from the index, Ricky opens the webmaster guidelines and starts reading. In the quality guidelines we specifically mention avoiding hidden text or hidden links on the page. He remembers that at one point he hired a friend named Liz, who claimed to know something about web design and said she could make the site rank better in Google. He then scans his site completely and finds blocks of hidden text in the footer of all his pages. If your site is in violation of the Google webmaster guidelines, and you think this might have affected the way your site is ranked in Google, now would be a good time to submit a reconsideration request. But before you do that, make changes to your site so that it complies with Google's webmaster guidelines.

Ricky removed all the hidden text from his pages, so now he can go ahead and submit a request for reconsideration. Log in to your Webmaster Tools account, click "Request reconsideration" under Tools, and follow the steps. Make sure you explain what went wrong with your site and what steps you have taken to fix it.

Once you have submitted a request, you will receive a message from us in the Message Center confirming that we have received it. We will then review your site for compliance with the Google webmaster guidelines. So that's an overview of how to submit a reinclusion/reconsideration request. Thanks for watching, and good luck with your webmastering and rankings.

Labels: ,


Interesting post in WebmasterWorld on how punctuation in keywords affects results

Andy of WebmasterWorld made an interesting post on how punctuation in keywords affects search results and search engine rankings.

Read the post

"Various punctuation characters have a noticeable impact on search results - mostly from a searcher perspective. As a webmaster, you may find that your users include punctuation in some keywords, and so it can be of use to know what the effect on the results they see is. And besides, knowing how to search Google is one step towards understanding how Google works. This is a spot check of the current handling of punctuation by Google.
Indexed punctuation
Key_word
Underscores are treated as a letter of the alphabet, which is why you can search for an underscore directly. Use underscores in content if your visitors include an underscore when searching (e.g. if you had a programming site).
Key&word
Ampersands or 'and symbols' have fairly unique handling. They're both indexed and also treated as the equivalent of word "and". If there are no spaces separating the symbol and the adjacent letters, the search results are an approximate equivalent of combining results for ["key and word"] and ["key & word"] (note the phrase matching). Use ampersands in copy as is natural for your target audience.
Explicit search operators
Many punctuation characters are explicit search operators, with a documented effect on results. Search operators are not indexed (or at least, they can't be searched for) and so are usually treated as word separators when found within website copy:
Key¦word
An (unbroken) pipe character is the equivalent of boolean OR: a search for [key OR word]. It can be a handy shortcut when conducting complex queries.
Key"word
A double quote triggers an exact or phrase search for the words that follow it (whether you include a closing double quote or not). So in this instance, it's the equivalent of a search for [key word], since a single word can't be a phrase. ["key word] is the same as searching for ["key word"].
Key*word
An asterisk is a wildcard search for zero or more words: [key ... word]. Putting numbers on both sides will trigger the calculator. Occasionally, Google delivers (strange!) results if you search for an asterisk directly.
Key~word
A tilde triggers Google's related word operator - in this instance, a search for both 'key' and 'word', as well as other words related to 'word' - like 'Microsoft', 'dictionary' and others.

Search operator oddities
Key-word
A hyphen (as is probably consistent with language use) returns a mix of results for the words both used separately, and joined together - somewhere between [key word] and [keyword]. It's the preferred word separator within website URLs, since other punctuation characters that are treated as a word-separator have specific functions within a URL.
Others
A few punctuation characters have a strange impact on results - returning far fewer results than for either separated or concatenated words. They are neither known search operators nor indexed characters. These are . / \ @ = :
As far as I'm aware, all other punctuation characters are treated simply as a space or word separator.
So, do I have too much time on my hands? Probably. But why not confuse whoever looks at Google's search logs by trying a few punctuation searches yourself? ;)
Do you know any punctuation with an effect on results not discussed here, or more about the effect on results of the punctuation above? "

Labels: ,


Google uses search logs effectively to combat web spam

Fighting web spam using effective tracking of logs and click-through data.

Matt Cutts, senior software engineer and head of the web spam team at Google, recently made an interesting post on how Google effectively fights spam using data collection.

Web spam is one of the most annoying parts of the Internet today. With more than 85% of people using search engines to land on a site for the first time, search engine spam needs to be kept out of the user's search experience. Search engines have faced the daunting task of fighting web spam from the day they came into existence. Google is one of the search engines that has used effective anti-webspam methods to combat search engine spam, and that is one reason they have kept their position on top of all search engines.

This is the first time I have seen Google really acknowledge that they use log data in their algorithms to combat spam. According to the official Google blog:

"Data from search logs is one tool we use to fight web spam and return cleaner and more relevant results. Logs data such as IP address and cookie information make it possible to create and use metrics that measure the different aspects of our search quality (such as index size and coverage, results "freshness," and spam). Whenever we create a new metric, it's essential to be able to go over our logs data and compute new spam metrics using previous queries or results. We use our search logs to go "back in time" and see how well Google did on queries from months before. When we create a metric that measures a new type of spam more accurately, we not only start tracking our spam success going forward, but we also use logs data to see how we were doing on that type of spam in previous months and years.

The IP and cookie information is important for helping us apply this method only to searches that are from legitimate users as opposed to those that were generated by bots and other false searches. For example, if a bot sends the same queries to Google over and over again, those queries should really be discarded before we measure how much spam our users see. All of this--log data, IP addresses, and cookie information--makes your search results cleaner and more relevant."

As per Matt Cutts, IP addresses and search logs do play a role in judging the quality of results delivered to users. I personally feel this is a good thing: there are some IPs that spam the search engines far more than regular IPs, and if Google can monitor those IPs well, it can effectively block automated queries and feed that back into the search algorithm. Some keywords will always be spammed more than others, and Google, as they say, can use this kind of tracking to impose stronger filters on those kinds of phrases. As a person with more than 5 years of experience with search engines, I can see that Google imposes stronger filters for certain phrases than for others. For keywords like cancer or mesothelioma, more .gov and .org authority sites rank, while for keywords like auto transport or real estate, commercial sites do a better job. I really enjoy the way the results are displayed, since I don't like to see a commercial site when I search for medical information; most of the time commercial sites provide much less value to users than non-commercial sites. There are areas where we need to see more commercial sites and areas where information sites should be dominant. The only way to get this right is to go through the historical data and search logs and see which keywords are searched more, from where, what the user did after clicking, and so on. User tracking can be done effectively with strong filters and sound methods.
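To make the quoted idea concrete, here is a toy illustration (nothing like Google's real pipeline): throw out bot-like floods of identical queries from a single IP before measuring how much spam real users actually saw.

```python
# A toy illustration (not Google's real pipeline) of the idea in the quote:
# drop bot-like repeats from one IP before measuring users' spam exposure.
from collections import Counter

# (ip, query, user_saw_spam_result) pulled from hypothetical search logs
log = [("198.51.100.7", "cheap pills", True)] * 500 + [
    ("203.0.113.5", "mesothelioma", False),
    ("203.0.113.9", "auto transport", True),
    ("203.0.113.11", "real estate", False),
]

repeats = Counter((ip, q) for ip, q, _ in log)
human_rows = [row for row in log if repeats[(row[0], row[1])] < 100]  # drop bot floods

def spam_rate(rows):
    return sum(1 for _, _, saw_spam in rows if saw_spam) / float(len(rows))

print("raw spam rate:      %.2f" % spam_rate(log))         # skewed by the bot
print("filtered spam rate: %.2f" % spam_rate(human_rows))  # closer to real users
```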

Search engines face problems every day. Apart from web spam and search engine spam, they see DDoS attacks, excessive bot activity, scrapers, and more. To stay on top, they need to keep working on stronger methods to combat spam.

Labels: , ,


How to prevent Google bowling - interesting discussion in WMW

I came across an interesting discussion in WebmasterWorld about how to prevent damage to a site from links on other sites. You can read more about the discussion here.

Tedster, a long-term member and WebmasterWorld administrator, answers it pretty well. I am very impressed with his answer and agree 100%.

Even if you block IP addresses or redirect pages, the links that point to your website are still there, and they're on other sites that you can't control. Those links will have whatever effect they have with Google's algorithm. There is one thing that protects a website against Google bowling - a solid backlink profile of its own. The more your "real" quality backlinks grow, the less anyone else's malicious actions can affect it.


In other words, as Tedster says, a solid backlink profile of your own will outweigh whatever negative effect spammy backlinks pointed at your site might have.

Labels: ,


June ranking drop reported for a lot of sites

WebmasterWorld members report ranking losses for a lot of sites in the first week of June. Some of our client sites were shaken up too, but not to the extent discussed in that thread here.



A member posted the following message, which gives a little insight into what is being discussed:

"Initially, I thought there were some problems with the geo filtering being screwed up with regards to the UK, but I'm not sure about this statement in isolation. Maybe it's a bug and maybe it's not.
Since it's hard to pinpoint, we really need a lot of information to decipher the problem or adjustment that G has put in place. If it is only UK-related sites, then my hunch is that this could be rolled out in other regions with more aggressive filtering.
Some things I'd like to know or qualify, especially from the long-established members here or folks with long-established sites, are things like this:
- is this purely selective?
- is it only high-PR sites, and if so what level of PR?
- is it only UK-related sites, and does that mean the TLD and/or hosting? [chief suspect is geo filtering issues]
- do these sites have an imbalance of linking techniques, e.g. lots of navigation IBLs, footer IBLs?
- do these sites publish frequent content?
My observation so far is that it has been:
- highly selective, in a competitive niche, to a minority of sites [total stability around the sites I watch]
- triggered by a combination of factors [not sure what right now, i.e. one factor in isolation isn't enough to send a site down; more like one event combined with another is. This is because I'm observing others using the same techniques who are unaffected]
- triggered by the introduction of high-PR links, leading to recent upward PR increases [not visible on the toolbar] which have caused a reassessment of the site's "trust" rank
- it only affects UK TLDs [need more info on this]
- a further discounting of low-value pages, bringing the overall PR down
- thin affiliate sites or sites with aggregated content being the only ones affected [not 100% on this - just some sites I'm watching]
- linking stagnation or momentum altered
- lack of fresh, original content

Actually, my feeling is that it's nothing new; it's just more aggressive in its selection of sites.
In the case of one site I'm observing, for any phrase or content [exact match and broad match] the whole site has been tanked to between -40 and -60 on phrases that should rank. Not one single exception. When it initially disappeared from Google it was completely off the index for 3 days.
This might correspond to the recent discussion by Googler John Mu about the -60 penalty, which appears to have been acknowledged and would seem to be based on a recent change at G. Some folks have reported improvements coming back quickly after a site clean-up.
But there aren't enough reports to be sure.
My concern is that the recovery could take indefinitely long and may have affected the trust rank of the affected sites - and of course, communicating with G through WMT leaves webmasters exposed to hand checks, which could expose other inadvertent nasties, and it's one-way communication. "

Labels: ,


Sergey plans his space invasion





Sergey Brin, co-founder of Google, has planned a space mission. "I am a big believer in the exploration and commercial development of the space frontier, and am looking forward to the possibility of going into space," Brin said in a statement. "Space Adventures helped open the space frontier to private citizens and thus pave the way for the personal spaceflight industry. The Orbital Mission Explorers Circle enables me to make an immediate investment while preserving the option to participate in a future spaceflight."




He has signed a multi-million-dollar contract with Space Adventures (spaceadventures.com) for their space-tourism flights.

Labels: , , ,


How Google handles scrapers - nice information from the Google team

It's been a long battle between Google, webmasters, and content thieves who scrape information from a website and display it on their own sites to get traffic. Many webmasters have complained about this problem for a long time. As far as I know, Google already does a good job with content thieves and scraper sites; now they have opened up about how they tackle this problem.

We tackle two types of duplicate content problems: one within a site and the other with external sites. Duplicate content within a site can easily be fixed, and we have full control over it: we can find all the potential areas that might create two pages with the same content, and either prevent one version from being crawled or remove any links to the duplicate pages.

External sites are always a problem, since we don't have any control over them. Google says they now effectively track down potential duplicates, give maximum credit to the original source, and filter out the rest of the duplicates.
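Google doesn't spell out how it decides which copy is the original and which is the scraper. One classic textbook approach to spotting near-duplicate text is comparing word "shingles" with Jaccard similarity; the sketch below is only an illustration of that idea, not Google's method.

```python
# A toy near-duplicate check using word shingles + Jaccard similarity
# (a classic textbook technique, not Google's actual scraper detection).
def shingles(text, k=4):
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    return len(a & b) / float(len(a | b))

original  = "search engine genie offers free seo tools for webmasters and site owners"
scraped   = "search engine genie offers free seo tools for webmasters - grab them here"
unrelated = "ancient politics blog covering the roman senate and early republic"

print("original vs scraped:   %.2f" % jaccard(shingles(original), shingles(scraped)))
print("original vs unrelated: %.2f" % jaccard(shingles(original), shingles(unrelated)))
```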

If you find a site that is ranking above you using your content, Google says:

  1. Check if your content is still accessible to our crawlers. You might unintentionally have blocked access to parts of your content in your robots.txt file.
  2. You can look in your Sitemap file to see if you made changes for the particular content which has been scraped.
  3. Check if your site is in line with our webmaster guidelines.

Labels: ,


.info domains banned in Google? Webmasters report .info domains getting penalized

Some active members of WebmasterWorld are discussing a potential penalty on .info domain names. It seems that for about 2 weeks Google has been experimenting with .info domains, removing them from search results for certain periods to see how much spam that keeps out of the results.

"This has happened to my .info domain since last night and I am really upset at it.
All my 300 keywords have stopped working in Google but my site still appearing in Google with site:www.example.info and www.example.info searches.
Yesterday I received 600 visitors from Google and today only 4.
My .info domain is one and a half years old. To this day I regularly update it with unique content and don't promote it much, as I am already receiving a good number of visitors. I never spammed or used prohibited ways to promote the site.
I do free directory submissions and very seldom link exchanges.
Can any expert please tell me why this has happened to my site?
Is it a permanent problem or a temporary one?
I will be thankful for any help and guidance "


Did Google really ban .info domains? I doubt it - it would be very difficult for Google to ever attempt something like that. If you search for "global registry" you can see afilias.info ranking in the top 3, which is an indication that the .info extension itself is not banned. Maybe because that extension is abused so heavily, Google removed some individual domains.

Labels: ,


Google has more than 200,000 Servers

According to an interview with Google Fellow Jeff Dean, it's estimated that Google has more than 200,000 servers in its various datacenters around the globe.

"Google doesn't reveal exactly how many servers it has, but I'd estimate it's easily in the hundreds of thousands. It puts 40 servers in each rack, Dean said, and by one reckoning, Google has 36 data centers across the globe. With 150 racks per data center, that would mean Google has more than 200,000 servers, and I'd guess it's far beyond that and growing every day. "

Labels: ,


Is the TPR penalty back? Google's toolbar PageRank reductions seem to be visible again

It seems Google's TPR penalty is back. Around February this year, Google introduced a new type of penalty for sites that buy or sell links, called the TPR penalty: Google reduces the toolbar PageRank of sites suspected of buying or selling backlinks. It seems the penalty is back and more sites are affected.

A member has reported a PageRank reduction to his website here: forums.digitalpoint.com/showthread.php?t=862890

If you see TPR affecting your site, please post here. We are doing some research on this and would love to hear feedback from others. We want to see whether the TPR penalty affects only sites that buy and sell links, or whether it also hits innocent sites caught in between. We have seen some sites lose toolbar PR without any link selling or buying, which is why we need to ask.

SEG

Labels: , , ,


Viacom's threat might kill Internet freedom - the YouTube lawsuit

Viacom sued YouTube and its owner Google for allowing copyrighted videos to be posted on the site. Viacom claims there are more than 150,000 copyrighted videos posted on YouTube and is claiming a billion dollars in damages for illegal viewing of its copyrighted videos.

"Viacom claimed YouTube consistently allowed unauthorised copies of popular television programming and movies to be posted on its website and viewed tens of thousands of times.
It said it had identified more than 150,000 such abuses which included clips from shows such as South Park, SpongeBob SquarePants and MTV Unplugged.
The company says the infringement also included the documentary An Inconvenient Truth which had been viewed "an astounding 1.5 billion times". "




According to Google's response, the lawsuit:

"threatens the way hundreds of millions of people legitimately exchange information"

I agree with Google's claim above. The whole Internet is built upon sharing information with each other; just because someone illegally posted a copyrighted video on a popular video-hosting site doesn't mean the video site is responsible for it. If it were, there wouldn't be any forums or blogs or message boards out there, because every site that runs a message board would have to verify each and every comment posted, since some of them might be copyrighted.

Dow Jones: "When we filed this lawsuit, we not only served our own interests, we served the interests of everyone who owns copyrights they want protected."

Well, I am not sure what the interest of other copyright owners is here; I feel the comment reported by Dow Jones really speaks for Viacom only. Google's policy has always been "don't be evil" - they grew up with that motto. Today they are way ahead of all other search engines because users are their No.1 priority. Even though Google's search engine indexes millions of copyrighted pages and stores them in its database, no one complains, since they do what is best for Internet users.

Video-sharing websites are a great way to share legitimate information across the Internet. Just because someone posted some copyrighted videos, it's not fair to blame YouTube entirely.

Well, I can give a number of reasons why Viacom is completely wrong with this lawsuit.

1. If YouTube were supposed to verify the millions of videos posted on its site, it could not run the site at all. There are so many places where legitimate information is shared - forums, blog comments, news sites, etc. If every single piece of information had to be verified first, there wouldn't be any good information sites on the web.

2. The word "copyright" itself can kill the way the Internet works. Today anything and everything is copyrighted, and it becomes impossible to share legitimate information if it all has to be verified. I feel copyright overreach kills Internet freedom, and there should be new laws for Internet information sharing that protect the freedom to share.

3. After they faced the lawsuit last year, Google added an automated copyright detection tool which stops some copyrighted videos from being uploaded. This is a very legitimate move and the best YouTube can do to prevent copyrighted videos. It also shows that Google and YouTube don't want copyrighted videos on their sites.

4. YouTube has a very clear copyright policy on the videos it hosts. Consider the following:

Commercial Content Is Copyrighted

The most common reason we take down videos for copyright infringement is that they are direct copies of copyrighted content and the owners of the copyrighted content have alerted us that their content is being used without their permission. Once we become aware of an unauthorized use, we will remove the video promptly. That is the law.

Some examples of copyrighted content are:
- TV shows, including sitcoms, sports broadcasts, news broadcasts, comedy shows, cartoons, dramas, etc. This includes network and cable TV, pay-per-view and on-demand TV.
- Music videos, such as the ones you might find on music video channels.
- Videos of live concerts, even if you captured the video yourself. Even if you took the video yourself, the performer controls the right to use his/her image in a video, the songwriter owns the rights to the song being performed, and sometimes the venue prohibits filming without permission, so this video is likely to infringe somebody else's rights.
- Movies and movie trailers.
- Commercials.
- Slide shows that include photos or images owned by somebody else.

A Few Guiding Principles
- It doesn't matter how long or short the clip is, or exactly how it got to YouTube. If you taped it off cable, videotaped your TV screen, or downloaded it from some other website, it is still copyrighted, and requires the copyright owner's permission to distribute.
- It doesn't matter whether or not you give credit to the owner/author/songwriter—it is still copyrighted.
- It doesn't matter that you are not selling the video for money—it is still copyrighted.
- It doesn't matter whether or not the video contains a copyright notice—it is still copyrighted.
- It doesn't matter whether other similar videos appear on our site—it is still copyrighted.
- It doesn't matter if you created a video made of short clips of copyrighted content—even though you edited it together, the content is still copyrighted.

You can see they are very clear about copyright and even provide links for copyright complaints.

5. As of March there are over 3 billion videos hosted on YouTube. Viacom claims 150,000 copyrighted videos, which is only a tiny fraction of the overall videos posted on YouTube. How can you expect YouTube to sift through all of that and find the copyrighted videos, when it's actually the uploading user who is supposed to take care of this?

6. Most of the time the videos posted on YouTube are low quality and very short. I don't believe that short, low-quality clips posted on YouTube for viewing will kill a media business like Viacom.

7. Some people suggest a stricter verification system, such as asking for credit card details before allowing uploads. I know plenty of people who are not willing to give their credit card information even for legitimate online purchases - how can you expect them to hand over their card details just to upload videos? That is a bit too much, and anything like it would stop legitimate users from enjoying the freedom of the Internet.

Viacom should withdraw their lawsuit to avoid humiliation in court; I feel there won't be anything legitimate against YouTube or Google that will prove Viacom's claim. If Viacom ever wins this lawsuit - which looks highly unlikely - the way the whole Internet works will change, and everyone will start suing each other, which would be a disaster for the Internet.

Let's all stand behind YouTube and Google and wish them success with their defence. YouTube should come out of this clean, for the welfare of the Internet.

Search Engine Genie

Labels: ,