SEO

Do search engines use whois information in search engine algorithms? Search engine optimization tip March 2nd 2005

Do search engines use whois info in search engine algorithms?

This is a difficult question to answer; it can be framed in two ways:

1. Do search engines use whois info?

Yes, they use whois information to detect possible cheaters. Google recently became an ICANN-accredited registrar, which means it has access to the whole whois database. A Google PR spokeswoman said Google has no plans to become a domain registrar but does use whois data to improve the quality of its search results, so beware. Google also has an expired domain penalty: it checks whois to see when a domain expires, and if the domain is not renewed by the original owner immediately, a penalty is imposed and the site is removed from Google's results for a period of one month to two years.

2. Do search engines use whois info in their ranking algorithms?

No, Google does not use it to determine relevancy in search results, but it does use it to detect people who run multiple sites. So if you want to cheat Google, shuffle your whois info across your domains. We don't like that tactic and will never use it, but for people who run multiple sites it is worth knowing about.

SEO Blog Team,

Amazon seeks a search engine optimization manager for its websites

Amazon, one of the web's leading online retail companies, is seeking a search engine optimization manager for its website or group of websites.

Please read the following, posted on Craigslist:

Search Engine Optimization (QSEO)

What’s in it for me?
Amazon.com, earth’s leading online retail technology company, is looking for an experienced manager to lead the search engine optimization program. As the leader for SEO, you will be responsible for SEO strategy and execution for the Amazon.com family of websites. You will use quantitative analysis and experimentation to develop and implement new techniques for search engine optimization. Your team will be cross-functional, including software and web engineers.

Our Ideal Candidate:
Candidates must be highly intelligent and have a passion for quantitative analysis and decision-making. The ideal candidate has a background in computer science and experience developing software. Candidates should have experience prioritizing and managing multiple concurrent cross-functional projects, and should be comfortable working with technical and non-technical staff and maintaining strong working relationships across teams and across companies.

Qualifications:
BS or higher in computer science or related fields. Graduate degrees preferred.
3+ years of search engine optimization and management experience.
3+ years of software or web development experience.
Ability to communicate with technical and non-technical staff.
Ability to prioritize projects and formulate strategy.



http://www.craigslist.org/sfc/sof/58059851.html

Mini-informational sites for inbound links – Search engine optimization tip – March 1st 2005

When you are planning to build small informational sites for backlinks, make sure you don't disturb the search engines and their results too much; search engines don't appreciate this tactic. In particular, make sure you don't cross-link those sites. Cross-links help identify which sites are yours, and if search engines decide to ban them, they will ban your whole network as identified by the cross-linking.

Don't host all your sites under the same IP address with the same host; spread the sites across a couple of hosts.

For those mini sites, get inbound links from directories or unique sites, and make sure each of them gains link popularity on its own.

If you are linking your mini sites to the main site, make it look as natural as possible. Don't link to your main site from every page of your mini sites; it might trigger a penalty.

SEO Blog Team,

Does buying an expired domain still benefit in search engines? seo tip feb 28 monday 2005

Buying expired domains is a long-term business for domain-holding companies and webmasters. Expired domains are mostly bought to send traffic to pay-per-click ads, to pass link popularity to another site, or to send traffic to another site.

These days buying expired domains is not a good idea, especially where Google is concerned. Google has strong whois database verification: if a domain expires and is not renewed within one month, it suffers an automated expired-domain penalty in Google.

This expired-domain penalty is severe and might last anywhere from one month up to two years. So play it safe and check before buying any expired domain; make sure it has not been penalized by the search engines.

Is the robots meta tag needed for frequent and effective crawling – search engine optimization tip feb 27 sunday 2005

By default, robots index all pages of a site, so a robots meta tag is never required for a site to be crawled. Unless you want to block a robot from a specific page, you have no need to use robots meta tags.
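
For example, if you do want to keep one specific page out of the index, a robots meta tag in that page's head section is enough. A minimal sketch (the page title is made up; noindex and nofollow are the standard values crawlers recognize):

<head>
<title>Private thank-you page</title>
<!-- tell all robots: do not index this page, do not follow its links -->
<meta name="robots" content="noindex, nofollow">
</head>

Every other page can simply omit the tag; leaving it out is the same as declaring index, follow.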

Read more on this in the attached text file.

SEO Blog Team,

Do server side includes (SSI) cause problems for search engine optimization? – Seo tip feb 26 saturday 2005

Server side includes are widely used these days in web design. Lots of forums, large template-based sites, etc. use server side includes for easier editing of certain pages, and many people doubt whether server side includes, AKA SSI, cause any problem for search engine crawlers. From our extensive experience: no. Server side code is executed on the server, and only what is sent to the browser matters. Search engine crawlers like Googlebot (Google's crawler), Yahoo Slurp (Yahoo's crawler), and MSNBot (MSN's crawler) are sophisticated enough to handle the HTML produced by complex server side code. All these crawlers read the final HTML, and they don't have any problem with server side includes.
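
As a minimal sketch of why crawlers never notice SSI (the file names here are hypothetical), a page might contain an include directive like this:

<!-- inside page.shtml: the server replaces this directive before the page is sent -->
<!--#include virtual="/includes/header.html" -->
<p>Page content goes here.</p>

The browser, and equally the crawler, only ever receives the expanded HTML, with the contents of header.html inlined where the directive stood.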

One very important thing: the SSI code should be properly tested. If it is not tested properly, the server might not understand the directives and fail to produce proper HTML, and sometimes important server side code will be revealed to the browser.

It is very important that only good, clean code is delivered to browsers. Exposing a site's server side code might reveal important SQL code, which could result in an attack on the server.

SSI code is never a problem these days for the top search engine crawlers. For search engine optimization and search engine promotion purposes, SSI is not a hindrance; it helps in editing large sites easily. We recommend using server side includes in your coding since they make it easy to add links, especially on template-based sites like blogs, forums, message boards, and large news sites.

SEO Blog Team,

Log file analysis is the way to better ROI – Analysing log files, seo tip feb 25 friday 2005

Log file analysis is very important to the success of a website.

Log file information provides a baseline of statistics that indicate use levels and support use and/or growth comparisons among parts of a site or over time. Such analysis also provides some technical information regarding server load, unusual activity, or unsuccessful requests, and can assist in marketing and site development and management activities. Server log file analysis is also an important part of search engine optimization: it helps determine which keywords convert best and which keywords are used most.

What’s in a Log File
Every communication between a client’s browser and a Web server results in an entry in the server’s log recording the transaction.

In general, a log file entry contains:
the address of the computer requesting the file
the date and time of the request
the URL of the file requested
the protocol used for the request
the size of the file requested
the referring URL
the browser and operating system used by the requesting computer.
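
As an illustration, here is what a single entry looks like in Apache's widely used combined log format; the IP address, file, referrer, and browser shown are made up, but the fields are exactly the ones listed above:

203.0.113.7 - - [25/Feb/2005:10:34:12 -0500] "GET /seo-blog/index.html HTTP/1.1" 200 14532 "http://www.google.com/search?q=seo+tips" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

Reading left to right: the requesting address, the date and time, the request itself (protocol included), the status code and size of the response, the referring URL, and the browser and operating system.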

What Can You Learn From a Log File?

Data available from a log file can be compiled and combined in various ways, providing statistics or listings such as:

total files and kilobytes successfully served
number of requests by type of file, such as HTML page views
distinct IP addresses served and the number of requests each made
number of requests by domain suffix (derived from IP addresses)
number of requests by HTTP status code (successful, failed, redirected, informational)
totals and averages by specific time periods (hours, days, weeks, months, years)
URLs from which users came to the site
browsers and versions making the requests
number of requests made (“hits”)
number of requests for specific files or directories

How good are log files?

We should be able to differentiate between hits, page views, and unique visitors; not all hits and page views are visitors. For example, one visitor loading a single HTML page that contains three images generates four hits but only one page view. There are good free tools for separating the data this way, such as Funnel Web Analyser, AWStats, and Webalizer.

For more information on tracking and logging, visit the WebmasterWorld Tracking and Logging forum: http://www.webmasterworld.com/forum39/

Is JavaScript a hindrance for search engines – search engine optimization tip 24 feb 2005

JavaScript is a complex language, and it takes a lot of careful coding to make it browser-friendly. Search engine crawlers are not very sophisticated browsers; they don't have anything like the capabilities of Internet Explorer or Firefox when it comes to understanding JavaScript.

So it is best to reduce the use of JavaScript in web pages intended to be search engine friendly. Large blocks of JavaScript also push content down the page, and search engines find it difficult to index content buried below the JavaScript. The best option is to move the JavaScript used by your site into an external .js file; that way the size of the page is reduced.
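
A minimal sketch of the external-file approach (the file name is hypothetical): the head of the page just references the script instead of carrying the code itself:

<head>
<title>Search engine friendly page</title>
<!-- the menu code lives in /js/menu.js, so the HTML the crawler downloads stays small -->
<script type="text/javascript" src="/js/menu.js"></script>
</head>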

Search engines like Google used to crawl only the first 101 KB of a page's HTML, so it is important that the upper part of the page is easily accessible to crawlers. It is also best to avoid JavaScript dropdown menus, rollovers, popups, etc.; search engines find these difficult to crawl. If you have a JavaScript menu, the best approach is to also add text-only links to your inner pages, as shown below, so search engines won't struggle to find your precious inner pages.
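
For example (the page names here are made up), keep the JavaScript menu but repeat the important links as plain HTML anchors that any crawler can follow:

<!-- plain text links, duplicated below the JavaScript menu for crawlers -->
<p>
<a href="/products.html">Products</a> |
<a href="/services.html">Services</a> |
<a href="/contact.html">Contact</a>
</p>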

Googlebot, Google's crawler, has recently been reported to crawl JavaScript. That is a great improvement, but it is still at an early stage, so it is better to avoid JavaScript menus and large amounts of JavaScript code.

Use external .js files to reduce the size of the page.

SEO Blog Team,

Detecting "rel=nofollow" – way to detect the new anti comment spam tag rel=nofollow using FireFox browser — search engine optimization tip 23 2005

There is a new way to detect the rel=nofollow attribute using the Firefox browser. Detecting it helps in catching cheating webmasters who use this attribute in their directories and link exchanges.
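
For reference, a nofollow link is an ordinary anchor carrying the rel attribute (the URL below is just a placeholder):

<a href="http://www.example.com/" rel="nofollow">some link</a>

Search engines that honour the attribute will not count such a link towards the target's link popularity.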

rel=nofollow links can be highlighted in Firefox by adding a simple rule to userContent.css, the user stylesheet that controls the appearance of web pages in the Firefox browser.

We have to find the path of the userContent.css file and add a simple rule so that rel=nofollow links appear in a different colour.

For Windows XP / Windows 2000 users:

C:\Documents and Settings\[User Name]\Application Data\Mozilla\Firefox\Profiles\hmgpxvac.default\chrome

hmgpxvac may be different on your system; the profile folder name is randomly generated.

For PCs using Windows 98 / ME:

C:\WINDOWS\Application Data\Mozilla\Firefox\Profiles\hmgpxvac.default\chrome

Open up your new userContent.css file and add the following line:

a[rel~="nofollow"] { border: thin dashed firebrick !important; background-color: rgb(255, 200, 200) !important; }

Save the file.

With this rule in place, Firefox will automatically highlight nofollow links with a red dashed border and a pink background.

You can highlight nofollow links in Internet Explorer too; refer to the attached file for details:

http://www.searchenginegenie.com/seo-blog/ie-detection-nofollow.txt

SEO Blog Team,
