Some interesting stuff in the Google Co-op network:
As you may know, anyone can create a custom search engine with Google. I just came across an interesting page where someone shares a search engine that can actually find do-follow blogs.
http://www.google.com/coop/cse?cx=010363265520675485990:xjmxcyokkls. . . So what is the use of do-follow blogs? These blogs allow people to comment without a nofollow attribute being added to the comment links. Without nofollow, people can spam the hell out of that blog, but most do-follow blogs have effective spam prevention systems; pre-moderation is enabled in most cases. So I don't think there is any special benefit in finding do-follow blogs, but in an era of nofollow it is good to find some.
When you create a custom search engine, you can:
- include one website, multiple websites, or specific web pages.
- host the search box and results on your own website.
- customize the colors and branding to match your existing web pages.
Despite its claims, this custom search engine doesn't show only do-follow blogs; some of the blogs it returns are actually nofollow. If you want to find do-follow blogs, I recommend doing a search on your own and checking the source code rather than counting on these search engines.
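Checking the source code for nofollow can be scripted. Here is a minimal sketch, assuming you already have the page's HTML; the sample markup and domains below are made up for illustration:

```python
import re

def classify_links(html):
    """Split anchor tags into follow and nofollow lists by their rel attribute."""
    anchors = re.findall(r'<a\s[^>]*>', html, re.IGNORECASE)
    follow, nofollow = [], []
    for a in anchors:
        href = re.search(r'href=["\']([^"\']+)["\']', a)
        if not href:
            continue
        if re.search(r'rel=["\'][^"\']*nofollow[^"\']*["\']', a, re.IGNORECASE):
            nofollow.append(href.group(1))
        else:
            follow.append(href.group(1))
    return follow, nofollow

sample = '<a href="http://a.example/" rel="nofollow">x</a><a href="http://b.example/">y</a>'
follow, nofollow = classify_links(sample)
print(follow)    # → ['http://b.example/']
print(nofollow)  # → ['http://a.example/']
```

A regex pass like this is enough for a quick manual check; a real crawler would use a proper HTML parser.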
Google introduces new features at the Searchology conference:
As people get more sophisticated at search they are coming to us to solve more complex problems. To stay on top of this, we have spent a lot of time looking at how we can better understand the wide range of information that’s on the web and quickly connect people to just the nuggets they need at that moment. We want to help our users find more useful information, and do more useful things with it.
Our first announcement today is a new set of features that we call Search Options, which are a collection of tools that let you slice and dice your results and generate different views to find what you need faster and easier. Search Options helps solve a problem that can be vexing: what query should I ask?
Let’s say you are looking for forum discussions about a specific product, but are most interested in ones that have taken place more recently. That’s not an easy query to formulate, but with Search Options you can search for the product’s name, apply the option to filter out anything but forum sites, and then apply an option to only see results from the past week. Just last week, at our Shareholders’ Meeting, I had a woman ask me why she couldn’t organize her results by time, with the most recent information appearing first. “Come back Tuesday,” I wanted to say!
The Search Options panel also gives you the ability to view your results in new ways. One view gives you more information about each result, including images as well as text, while others let you explore and iterate your search in different ways. We think of the Search Options panel as a tool belt that gives you new ways to interact with Google Search, and we plan to fill it with more innovative and useful features in the future.
Another challenging problem we have worked on is better understanding the information you get back from a search. When you see your results from a Google search, how do you decide which one has the best information for you? Or, how can we help you make the best decision about where to click?
We call the set of information we return with each result a “snippet,” and today we are announcing that some of our snippets are going to get richer. These “rich snippets” extract and show more useful information from web pages than the preview text that you are used to seeing.
Google competitor Wolfram Alpha launching this month (May 2009):
The long-expected Wolfram Alpha search engine is due to be launched this month. We are waiting for it anxiously; unfortunately we didn't get the opportunity to test it, but others did, and it looks amazing. I will start with the fact that many have called it the Google killer, but in fact Wolfram Alpha is not a conventional search engine; it is more a computational knowledge engine based on ideas from Stephen Wolfram. Recently, Google launched its public data search, and not even that can be compared to Wolfram Alpha.
Don't think of Wolfram Alpha as a Google killer, though, because frankly Google doesn't really have anything like it, except maybe Google's new public data search, which, while impressive, doesn't look nearly as robust as Wolfram Alpha. (Then again, we'll have to wait and see how well Wolfram Alpha works when it gets into the hands of the public.) Either way, Google will still corner the market on most normal search. (We're not always looking for the kind of answers Wolfram Alpha provides when we hit up Google.) As for how this editor uses Google and Wikipedia, I'd actually imagine that Wolfram Alpha could be more of a Wikipedia competitor than a Google competitor.
The system, Wolfram Alpha, was developed by Stephen Wolfram (49), a British physicist, and showcased at Harvard University in the U.S. last week. “Revolutionary new web software could put giants such as Google in the shade,” the daily claimed. Although the system is still new, it has already attracted massive hype among technology pundits, it added.
“Wolfram Alpha will not only give a straight answer to questions such as ‘how high is Mount Everest?’ but it will also produce a neat page of related information – all properly sourced – such as geographical location and nearby towns, and other mountains, complete with graphs and charts,” it said. “Or ask what the weather was like in London on the day John F. Kennedy was assassinated, it will cross-check and provide the answer.”
Google Patent to rank personalized pages based on bookmarking:
Imagine Google crawling social media profiles and the links shared on them: the profiles you link to on your Google profile, maybe even Gmail and Gchat; and of course the sites you join on Google Friend Connect and what you share through Google and Google Reader. Once they identify your Twitter hyperlink idiosyncrasies, they could then discover those of your followers and rank documents based on what everyone loves… or loves to spam 😉
And ultimately, the feature that could distinguish what you and your followers and friends truly love from what you love to spam is the one measuring your 'linger time':
1. A computer-implemented method, the method comprising: receiving a search query from a user; receiving a request from the user to personalize a search result; responsive to the search query and the request to personalize the search result, generating a personalized search result by searching a personalized search object; responsive to the search query, generating a general search result by searching a general search object; providing the personalized search result and the general search result for display; selecting an advertisement based at least in part upon the personalized search object; and providing the advertisement for display.
2. The method of claim 1, wherein the personalized search object comprises an article associated with a bookmark.
3. The method of claim 2, wherein an index associated with the bookmark is stored on a server remote from a client with which the bookmark is associated.
4. The method of claim 2, wherein an index associated with the bookmark is stored on a client with which the bookmark is associated wherein searching of the personalized search object is performed by a client-side agent.
5. The method of claim 1, wherein the general search object comprises an index of articles.
6. The method of claim 5, wherein the index comprises an index of articles associated with a global computer network.
7. The method of claim 1, wherein the general search object comprises a plurality of global indices.
8. The method of claim 1, wherein the personalized search object comprises a plurality of bookmarks.
9. The method of claim 1, wherein the personalized search object comprises an annotation.
10. The method of claim 1, wherein the personalized search object comprises a rating.
11. The method of claim 1, further comprising identifying a user cluster based at least in part on the personalized search object and providing to the user a suggestion of another user with which to network based on the user cluster.
12. The method of claim 1, further comprising identifying the personalized search object based at least in part on an implicit measure of the user’s interest.
13. The method of claim 12, wherein the implicit measure of the user’s interest comprises a history of user accesses.
14. The method of claim 12, wherein the history of user accesses comprises at least one of: a period of linger time, a quantity of repeat visits, and a quantity of click-through.
15. A computer storage medium encoded with a computer program, the computer program comprising instructions that when executed cause a computer to perform operations comprising: receiving a search query from a user; receiving a request from the user to personalize the search result; responsive to the search query and the request to personalize the search result, generating a personalized result by searching a personalized search object; responsive to the search query, generating a general result by searching a general search object; providing the personalized search result and the general search result for display; and providing an advertisement for display on a browser based at least in part on one of the personalized search result and the general search result.
16. The computer storage medium of claim 15, wherein the instructions when executed cause the computer to perform operations further comprising identifying a cluster of users based at least in part on the personalized search object.
17. The computer storage medium of claim 15, wherein the instructions when executed cause the computer to perform operations further comprising identifying the personalized search object based at least in part on an implicit measure of the user’s interest.
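Claim 14 names the implicit signals (linger time, repeat visits, click-throughs) but, as patents do, gives no formula. Here is a purely hypothetical sketch of how such signals might be combined into an interest score; the weights and caps are invented for illustration, not taken from the patent:

```python
def implicit_interest(linger_seconds, repeat_visits, click_throughs):
    """Hypothetical implicit-interest score built from the access-history
    signals named in claim 14. The weights and caps are invented for
    illustration; the patent does not disclose a formula."""
    return (0.5 * min(linger_seconds / 60.0, 1.0)    # cap linger at one minute
            + 0.3 * min(repeat_visits / 10.0, 1.0)   # cap at ten repeat visits
            + 0.2 * min(click_throughs / 5.0, 1.0))  # cap at five click-throughs

# A bookmark the user lingered on and revisited scores far higher
# than one that was barely opened:
print(implicit_interest(120, 4, 2))
print(implicit_interest(5, 0, 0))
```

Whatever the real formula, the point of the claim is the same: interest is inferred from behavior rather than stated explicitly.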
Google moving to Ajax-based result pages:
It seems Google is now moving to Ajax-based result pages. As per the Google Analytics blog: "Starting this week, you may start seeing a new referring URL format for visitors coming from Google search result pages." Up to now, the usual referrer for clicks on search results for the term "flowers" used the old format shown below.
The key difference between these two URLs is that instead of "/search?" the URL contains "/url?". If you run your own analyses, be sure that you do not depend on the "/search?" portion of the URL to determine whether a visit started with an organic search click.
New parameters, as per the Google blog:
——- old
http://www.google.com/search
hl=en
q=flowers
btnG=Google+Search
——- new
http://www.google.com/url
sa=t
source=web
ct=res
cd=7
url=http%3A%2F%2Fwww.example.com%2Fmypage.htm
ei=0SjdSa-1N5O8M_qW8dQN
rct=j
q=flowers
usg=AFQjCNHJXSUh7Vw7oubPaO3tZOzz-F-u_w
sig2=X8uCFh6IoPtnwmvGMULQfw
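If you run your own referrer parsing, a small sketch that handles both the old "/search?" and new "/url?" formats and extracts the query string; the example URLs are abbreviated from the parameter lists above:

```python
from urllib.parse import urlparse, parse_qs

def google_organic_query(referrer):
    """Return the search query if the referrer is a Google organic click
    (old "/search" format or new "/url" format), else None."""
    parts = urlparse(referrer)
    if not parts.netloc.startswith('www.google.'):
        return None
    if parts.path not in ('/search', '/url'):
        return None
    q = parse_qs(parts.query).get('q')
    return q[0] if q else None

old = 'http://www.google.com/search?hl=en&q=flowers&btnG=Google+Search'
new = 'http://www.google.com/url?sa=t&source=web&cd=7&q=flowers'
print(google_organic_query(old))  # → flowers
print(google_organic_query(new))  # → flowers
```

Matching on the path plus the q parameter, rather than on "/search?" alone, keeps the check working under both formats.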
According to Matt Cutts, a senior Google employee, this change is meant to make search results come back faster than they usually do. Matt says: "The team there only thinks about speed. They want to get the results back to users as quick as humanly possible. JavaScript makes the search results a lot faster. Suppose you do a search for flowers, as you're typing flowers, they can do a query from the back end and fold search results right into the page. You're still in Google.com and they can pull in the results automatically."
Can hot-linking benefit your website?
What is hot linking?
Hot linking is directly embedding or linking to a resource hosted on another server, such as an image or video, so that it appears to be part of the linking website while the bandwidth is served by the original host.
So can hot linking benefit you? Yes, I would say it can, because you can get some quality traffic to your images from other websites that embed them. We get loads of traffic from hot linking, and most of it is valid visitors. People who follow hotlinked images also tend to look around your website for other images or videos, so there is a solid benefit. But there are some drawbacks: imagine your image is added to an adult site and the Google image crawler picks it up. That might damage the reputation of your website, and your image might start appearing only when the SafeSearch filter is off. Most people don't switch off SafeSearch, so it is recommended to keep note of who is hot linking; if you see a bad source, feel free to request that they remove your image.
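Keeping note of who is hot linking can be done from your server access logs. A rough sketch, assuming a common-log-format style line and using placeholder domains:

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Common-log-format-style lines; "mysite.example" stands in for your own domain.
LOG_LINE = re.compile(r'"GET (\S+\.(?:jpg|png|gif)) [^"]*" \d+ \d+ "([^"]*)"')

def hotlink_sources(log_lines, own_domain='mysite.example'):
    """Count external domains whose pages embed (hot-link) our images."""
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if not m:
            continue
        referrer = m.group(2)
        host = urlparse(referrer).netloc
        if host and own_domain not in host:
            counts[host] += 1
    return counts

logs = [
    '1.2.3.4 - - [10/May/2009] "GET /pics/a.jpg HTTP/1.1" 200 512 "http://other.example/post"',
    '1.2.3.5 - - [10/May/2009] "GET /pics/a.jpg HTTP/1.1" 200 512 "http://mysite.example/gallery"',
]
print(hotlink_sources(logs))
```

Running something like this periodically gives you the list of embedding domains, so a bad source stands out quickly.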
Two sites on the same server using the same database:
I saw this question asked in one of the forums where I am a frequent reader: what happens when two product sites use the same database? We had a similar problem with one of our clients. He has his own dedicated server where he hosts his websites. He is in the clothing industry, and it is difficult to move his whole database; on top of that, he is on a dedicated server paying around $200 a month, so if he decides to move one site to a different server he has to shell out a large amount of money. So he decided to use the same database for two sites, and we were supposed to optimize both sites and rank them.
One site started doing well and stayed in a top position for a long time, but the other website never came up. We tried every tactic in SEO but still couldn't get it to rank. One year passed, then two, and we were still unable to rank that website. This strongly suggests Google doesn't like two sites using the same database and hosted on the same IP. We even modified thousands of product descriptions, but we still could not make the site look unique to Google.
So if you want to use the same database for two sites, I recommend not hosting them together. Move the database to different hosting and write unique product descriptions; I am sure you can then make the site rank and gain traffic.
Alexa and sub domain rankings:
Alexa has improved a lot over what it used to be at ranking sites. Previously, Alexa ranked websites based purely on Alexa toolbar users. Now the ranking criteria have changed: they have tied up with other ranking companies and combine their own toolbar data with those companies' data to decide the final rankings. Still, I feel Alexa rankings are skewed and influenced mostly by toolbar-related factors.
So here is the question: does Alexa see a sub domain as a separate entity when ranking? In most cases, no; Alexa rarely treats a sub domain as a different entity. I have seen BlogSpot domains with an Alexa rank of 500. That 500 is not for the BlogSpot sub domain but for the usage of blogspot.com itself.
A word from the official Alexa blog:
Alexa’s traffic rankings are for top level domains only (e.g. domain.com). We do not provide separate rankings for subpages within a domain (e.g. domain.com/subpage.html) or sub domains (e.g. subdomain.domain.com) unless we are able to automatically identify them as personal home pages or blogs, like those hosted on Geocities and Tripod. If a site is identified as a personal home page or blog, its traffic ranking will have an asterisk (*) next to it: Personal Page Avg. Traffic Rank: 3,456*. Personal pages are ranked on the same scale as a regular domain, so a personal page ranked 3,456* is the 3,456th most popular page among Alexa users.
So they don't separate sub domains they can't identify automatically, and their automated detection rarely catches proper sub domains. I feel Alexa needs to improve its algorithm for handling sub domains, because sub domains are effectively different websites.
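The distinction Alexa draws is between the registered domain and everything under it. A naive sketch of extracting the registered domain from a hostname; a real implementation would consult the public suffix list, since this simple version breaks on suffixes like .co.uk:

```python
def registered_domain(hostname):
    """Naive sketch: take the last two labels as the registered domain.
    Breaks on multi-part suffixes like .co.uk; a real implementation
    would consult the public suffix list."""
    parts = hostname.lower().split('.')
    return '.'.join(parts[-2:]) if len(parts) >= 2 else hostname

print(registered_domain('myblog.blogspot.com'))  # → blogspot.com
print(registered_domain('www.domain.com'))       # → domain.com
```

Under a rule like this, every BlogSpot blog collapses into blogspot.com, which is exactly why the toolbar rank reflects the host platform rather than the individual blog.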
Website not getting crawled due to potential history of penalty:
We have had this question from a few of our website visitors: does a site stop getting crawled if Googlebot detects a previous penalty or ban on that website? I would say yes; we have seen this reported by some of our potential clients and by people in forums. I assume Google keeps a list of all sites that were previously blacklisted in its rankings. When Googlebot detects a link to a previously blacklisted website, it first stores that link in a database. Later, Google's algorithm decides whether the link is crawl-worthy, and only then is the crawler sent to crawl and index that website.
The best way to see if your site has any historical spam flags is to check archive.org. The Wayback Machine will have your old pages indexed if they ever existed. If you are sure what the problem with your website is, explain it carefully in a re-inclusion request and I am sure Google will accept your website.
Penalties apply to Google too:
Two months back, Google.co.jp, Google's Japan domain, was penalized for buying pay-per-post promotional links from the leading pay-per-post agency cyberbuzz.com. A visible example of Google Japan's penalty can be seen if you have PageRank enabled in the Google Toolbar: Google Japan now has a PageRank of 5, compared to the PageRank of 9 it previously had.
An extract from their promotional campaign:
“The Google Hot Keywords blog widget [link to Google’s page] can show you what is in fashion now, and what other people are interested in.
It’s appealing that you can view buzzwords from the previous day or the previous week. I am sometimes surprised to see that such words are so popular! Personally, I like the “fortune-teller” feature from the previous week’s ranking. When I click on a keyword, I am quickly taken to Google’s result page and so I enjoy the feature.
I might not have noticed them by myself, but now I understand that these things are what people care about.
I am participating in CyberBuzz's campaign."
After they realized the problem, and with other bloggers reporting on it, here is the apology from their official blog:
"Google Japan is running several promotional activities to let people know more about our products.
It turns out that using blogs on the part of the promotional activities violates Google’s search guidelines, so we have ended the promotion. We would like to apologize to the people concerned and to our users, and are making an effort to make our communications more transparent in order to prevent the recurrence of such an incident.”
Stealth links and Googlebot:
Webmaster World owner and senior webmaster Brett Tabke posted an interesting thread about what he calls stealth links in Google: links that are not ordinary HREF links but still seem to count in Google. According to him, the following are some prominent stealth links:
- another site links to your graphics (img src)
- a site links to your javascript files
- a site links to your css files?
- rss feeds and other xml feeds that people can link to without notice or referrals necessarily being generated.
- links in email that some se’s can read (yahoo mail, hotmail, Gmail)
- links marked with noindex
- links marked with nofollow
- urls within javascript or js comments
- raw urls within css or css comments
- urls within meta data of graphics and video files
- urls within html comments
- urls within the head section or meta data of a html page
- links or pages that may be surfed while the visitor has PageRank engaged on the toolbar
- the target of a constructed, obfuscated, or encrypted js url (hidden until executed)
- links behind pay walls that Google can spider via webmaster tools
- Domains that have been 301’d with links.
- Links in Flash movies (games, quizzes, etc).
- non href’ed url’s. (raw url on page http://www.webmasterworld.com)
- Links in any documents other than web pages e.g. .doc, .pdf, .txt, etc.
- blocking a page in robots.txt should make it blocked from bots, but they still spider it.
In my view, most links from the above sources are not counted by Google, at least for ranking purposes. Brett came up with the list because of a discussion started in another thread about pages getting PageRank without any external links. I feel most of the people complaining about pages getting PageRank without external links either don't know how to check backlinks or rely entirely on Yahoo and Google backlink data, which is totally unreliable.
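Several entries in Brett's list come down to raw URLs that appear in a page's source outside any href attribute. A rough sketch of surfacing such candidates; the regexes are simplifications, and the sample markup is made up:

```python
import re

URL_RE = r'''https?://[^\s"'<>)]+'''
HREF_RE = r'''href=["'](https?://[^"']+)["']'''

def raw_urls(html):
    """Find URLs that appear in a page's source but not inside an href
    attribute, e.g. in comments, scripts, or plain text ("stealth" candidates)."""
    all_urls = set(re.findall(URL_RE, html))
    href_urls = set(re.findall(HREF_RE, html))
    return sorted(all_urls - href_urls)

page = '''<a href="http://linked.example/">a normal link</a>
<!-- see http://comment.example/page -->
<script>var u = "http://script.example/x";</script>'''
print(raw_urls(page))  # → ['http://comment.example/page', 'http://script.example/x']
```

A pass like this over your own pages shows what a crawler could see beyond the anchor tags, whether or not any given engine actually counts it.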