Webmaster News

WebmasterWorld cloaks its robots.txt file for a good purpose.

Most of you are probably aware of the recent move by webmasterworld.com to make all postings private: threads can now be viewed and read only after logging in.

They also banned all bots in their robots.txt file. This is what it says:

#
# Please, we do NOT allow nonauthorized robots.
#
# http://www.webmasterworld.com/robots
# Actual robots can always be found here for: http://www.webmasterworld.com/robots2
# Old full robots.txt can be found here: http://www.webmasterworld.com/robots3
#
# Any unauthorized bot running will result in IP’s being banned.
# Agent spoofing is considered a bot.
#
# Fair warning to the clueless – honey pots are – and have been – running.
# If you have been banned for bot running – please sticky an admin for a reinclusion request.
#
# http://www.searchengineworld.com/robots/
# This code found here: http://www.webmasterworld.com/robots.txt?view=rawcode

User-agent: *
Disallow: /


The above robots.txt syntax means that no bot, whether a search engine crawler or a spam bot, is allowed to crawl webmasterworld.com. But things got a bit strange when Greg Boser mentioned this in his blog ( http://www.webguerrilla.com/clueless/welcome-back-brett ):

“I was doing some test surfing this morning using a new user agent/header checking tool Dax just built. Just for fun, I loaded up WebmasterWorld with a Slurp UA. Surprisingly, I was able to navigate through the site. I was also able to surf the site as Googlebot and MSNbot.

A quick check of the robots.txt with several different UA’s showed that MSN and Yahoo are now given a robots.txt that allows them to crawl. However, Google is still banned, and humans still must login in order to view content.

Apparently, it’s been this way for awhile because both engines already show a dramatic increase in page counts.

MSN 57,000
Yahoo 160,000”

We were taken completely by surprise. So how does this work? Short of cloaking, there is no other way to do it. We decided to do a bit of research and used a user-agent spoofer to navigate their site. As Greg mentioned, we tried the following user agents:

Yahoo-Slurp
Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)

Googlebot/2.1 (+http://www.google.com/bot.html)

msnbot/1.0 (+http://search.msn.com/msnbot.htm)

With all of the above user agents, we were able to browse webmasterworld.com without any trouble.
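For anyone who wants to repeat the experiment, the check is easy to script. Below is a minimal sketch in Python (our own illustration, not the tool Dax built) that simply requests robots.txt while presenting each of the crawler user agents listed above:

import urllib.request

# User-agent strings from the list above.
USER_AGENTS = [
    "Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "msnbot/1.0 (+http://search.msn.com/msnbot.htm)",
]

URL = "http://www.webmasterworld.com/robots.txt"

for agent in USER_AGENTS:
    # Present the crawler's user agent instead of urllib's default one.
    request = urllib.request.Request(URL, headers={"User-Agent": agent})
    with urllib.request.urlopen(request) as response:
        body = response.read().decode("utf-8", errors="replace")
    print("--- robots.txt as seen by " + agent + " ---")
    print(body)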

Update to Greg's post:

Googlebot is now also allowed to crawl webmasterworld.com via robots.txt cloaking, and Google shows about 250,000 pages. Earlier, when webmasterworld.com did not cloak its robots.txt file and blocked all robots, Google removed every webmasterworld.com page from its index. This most likely happened because the robots.txt URL was submitted directly to Google's automated URL removal system.

Google describes this clearly:

“Note: If you believe your request is urgent and cannot wait until the next time Google crawls your site, use our automatic URL removal system. In order for this automated process to work, the webmaster must first create and place a robots.txt file on the site in question.

Google will continue to exclude your site or directories from successive crawls if the robots.txt file exists in the web server root. If you do not have access to the root level of your server, you may place a robots.txt file at the same level as the files you want to remove. Doing this and submitting via the automatic URL removal system will cause a temporary, 180 day removal of the directories specified in your robots.txt file from the Google index, regardless of whether you remove the robots.txt file after processing your request. (Keeping the robots.txt file at the same level would require you to return to the URL removal system every 180 days to reissue the removal.)”

http://www.google.com/webmasters/remove.html

This is the robots.txt file we saw using the Googlebot user-agent spoofer:

GET Header sent to the bot [Googlebot/2.1 (+http://www.google.com/bot.html)]:
HTTP/1.1 200 OK
Date: Sun, 18 Dec 2005 17:35:10 GMT
Server: Apache/2.0.52
Cache-Control: max-age=0
Pragma: no-cache
X-Powered-By: BestBBS v3.395
Connection: close
Transfer-Encoding: chunked
Content-Type: text/plain

326
#
# Please, we do NOT allow nonauthorized robots.
#
# http://www.webmasterworld.com/robots
# Actual robots can always be found here for: http://www.webmasterworld.com/robots2
# Old full robots.txt can be found here: http://www.webmasterworld.com/robots3
#
# Any unauthorized bot running will result in IP’s being banned.
# Agent spoofing is considered a bot.
#
# Fair warning to the clueless – honey pots are – and have been – running.
# If you have been banned for bot running – please sticky an admin for a reinclusion request.
#
# http://www.searchengineworld.com/robots/
# This code found here: http://www.webmasterworld.com/robots.txt?view=rawcode

User-agent: *
Disallow: /gfx/
Disallow: /cgi-bin/
Disallow: /QuickSand/
Disallow: /pda/
Disallow: /zForumFFFFFF/

This is the header response for a normal browser user agent, followed by the source delivered to Googlebot:

HEAD Header sent to the browser [Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)]:
HTTP/1.1 200 OK
Date: Sun, 18 Dec 2005 17:35:10 GMT
Server: Apache/2.0.52
Cache-Control: max-age=0
Pragma: no-cache
X-Powered-By: BestBBS v3.395
Connection: close
Content-Type: text/plain

URI: www.webmasterworld.com/robots.txt
Source delivered to [Googlebot/2.1 (+http://www.google.com/bot.html)]:


User-agent: *
Disallow: /gfx/
Disallow: /cgi-bin/
Disallow: /QuickSand/
Disallow: /pda/
Disallow: /zForumFFFFFF/

From the above you can see that webmasterworld.com does not ban Googlebot or the other major bots from crawling its pages. This is nothing new for Brett: even before webmasterworld.com went private, Googlebot had access to the paid sections of the site while normal users had to subscribe.
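The cloaking itself is easy to confirm: fetch the same robots.txt twice, once with a browser user agent and once with a crawler user agent, and compare the two responses. Here is a minimal sketch of that comparison (again our own illustration, not WebmasterWorld's code):

import difflib
import urllib.request

URL = "http://www.webmasterworld.com/robots.txt"
BROWSER_UA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
CRAWLER_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"

def fetch(url, user_agent):
    # Fetch a URL while presenting the given User-Agent header.
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8", errors="replace")

as_browser = fetch(URL, BROWSER_UA)
as_crawler = fetch(URL, CRAWLER_UA)

if as_browser == as_crawler:
    print("Same robots.txt for browsers and Googlebot: no UA-based cloaking detected.")
else:
    print("Different robots.txt depending on user agent: cloaking detected.")
    diff = difflib.unified_diff(
        as_browser.splitlines(), as_crawler.splitlines(),
        fromfile="browser", tofile="googlebot", lineterm="",
    )
    print("\n".join(diff))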

Now the question is whether Google endorses cloaking. Cloaking is bad as defined by the search engine guidelines, yet here we can see that selective cloaking for selected sites apparently is not. We don't blame Brett for doing it, because he has good reasons to disallow spam bots and equally good reasons to allow the legitimate ones.

Brett explains why he banned bots:

“Seeing what effect it will have on unauthorized bots. We spend 5-8hrs a week here fighting them. It is the biggest problem we have ever faced.

We have pushed the limits of page delivery, banning, ip based, agent based, and down right cloaking to avoid the rogue bots – but it is becoming an increasingly difficult problem to control.

webmasterworld.com/forum9/9593-2-10.htm

So what is Brett's answer on cloaking?

A webmasterworld.com member asks:

“Brett – do you cloak your robots.txt depending on IP address that requests it? “

Brett’s answer:

“only for hot/honey pot purposes. “

Webmasterworld.com is one of the best places on the internet; great webmasters and SEOs are born there. It feels harsh to complain about them, but the truth cannot stay hidden for long, and if we didn't write about it someone else would have. Greg (WebGuerrilla) has already discussed this issue at length.

SEO Blog team.

Update: Jagger moves into its second phase.

Google has just started a series of updates. WebmasterWorld named this update Jagger, and that is what GoogleGuy Matt Cutts calls it as well. His blog, www.mattcutts.com/blog, is a great source of information and gives valuable facts about the new update. Update 1, which started last week around October 20th, was called Jagger 1; the next update, which started today, is called Jagger 2. GoogleGuy has said the update will move into a third phase around the middle of next week. Let's hope everyone's sites do well with Google.

GoogleGuy Matt Cutts's posting in webmasterworld.com:

“McMohan, good eyes in spotting some changes at 66.102.9.104. I expect Jagger2 to start at 66.102.9.x. It will probably stay at 1-2 data centers for the next several days rather than spreading quickly. But that data center shows the direction that things will be moving in (bear in mind that things are fluxing, and Jagger3 will cause flux as well).

If you’re looking at 66.102.9.x and have new feedback on what you see there (whether it be spam or just indexing related), please use the same mechanism as before, except use the keyword Jagger2. I believe that our webspam team has taken a first pass through the Jagger1 feedback and acted on a majority of the spam reports. The quality team may wait until Jagger3 is visible somewhere before delving into the non-spam index feedback.
If things stay on the same schedule (which I can’t promise, but I’ll keep you posted if I learn more), Jagger3 might be visible at one data center next week. Folks should have several weeks to give us feedback on Jagger3 as it gradually becomes more visible at more data centers. “

Another excellent post by Brett Tabke of webmasterworld.com on duplicate content issues with search engines

A great post in WebmasterWorld by Brett Tabke explains how search engines treat duplicate content. It is worth reading:

What is dupe content?

a) Strip duplicate headers, menus, footers (eg: the template). This is quite easy to do mathematically. You just look for string patterns that match on more than a few pages.

b) Content is what is left after the template is removed. Comparing content is done the same way with pattern matching. The core is the same type of routines that make up compression algos like Lempel-Ziv (LZ). This type of pattern matching is sometimes referred to as a sliding dictionary lookup. You build an index of a page (dictionary) based on (most probably) words. You then start with the lowest denominator and try to match it against other words in other pages.

How close is duplicate content?

A few years ago, an intern (*not* Pugh) who helped work on the dupe content routines (2000?) wrote a paper (now removed). The figure 12% was used. Even after studying, we are left to ask how that 12% is arrived at.

Cause for concern with some sites?

Absolutely. People that should worry: a) repetitive content for language purposes. b) those that do auto generated content with slightly different pages (such as weather sites, news sites, travel sites). c) geo targeted pages on different domains. d) multiple top level domains.

Can I get around it with random text within my template?

Debatable. I have heard some say that if a site of any size (more than 20 pages) does not have a detectable template, you are subject to another quasi penalty.

When is dupe content checked?

I feel it is checked as a background routine. It is a routine that could easily run 24x7 on hundreds of machines if they wanted to crank it up that high. I am almost certain there is a granularity setting to it where they can dial up or dial down how closely they check for dupe content. When you think about it, this is not a routine that would actually have to be run all the time, because once they flag a page as a dupe, that would take care of it for a few months until they came back to check again. So I agree with those that say it isn't a set pattern.

Additionally, we also agree that G's indexing isn't as static as it used to be. We are into the "update all the time" era where the days of GG pressing the button are done because it is pressed all the time. The tweaks are on-the-fly now – it's pot luck.

What does Google do if it detects duplicate content?

Penalizes the second one found (with caveats). (As with almost every Google penalty, there are exceptions we will get to in a minute.)

What generally happens is the first page found is considered to be the original prime page. The second page will get buried deep in the results.

The exception (as always) – we believe – is high PageRank. It is generally believed by some that mid-PR7 is considered the "white list" where penalties are dropped on a page – quite possibly – an entire site. This is why it is confusing to SEOs when someone says they absolutely know the truth about a penalty or algo nuance. The PR7/whitelist exception takes the arguments and washes them.

Who is best at detecting dupe content?

Inktomi used to be the undisputed king, but since G changed their routines (late 2003/Florida?), G has detected everything from the tiny page to the large duplicate page without fail.

On the other hand, I think we have all seen some classic dupe content that has slipped by the filters with no apparent explanation. For example, these two pages:

The original: http://www.webmasterworld.com/forum3/2010.htm
The duplicate: http://www.searchengineworld.com/misc/guide.htm
The 10,000 unauthorized rips (10k is the best count, but probably higher): Successful Site in 12 Months with Google Alone

All in all, I think the dupe content issue is far overrated and easy to avoid with quality original content. If anything, it is a good way to watch a competitor get penalized.
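To make the pattern-matching idea concrete, here is a minimal sketch of near-duplicate detection using word shingles and Jaccard similarity. The shingle size and the 12% threshold are illustrative assumptions only (the threshold just echoes the figure quoted above); this is not Google's actual routine.

def shingles(text, size=5):
    # Return the set of overlapping word n-grams (shingles) for a page's text.
    words = text.lower().split()
    if len(words) < size:
        return {" ".join(words)} if words else set()
    return {" ".join(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(text_a, text_b, size=5):
    # Jaccard similarity of the two pages' shingle sets (0.0 to 1.0).
    a, b = shingles(text_a, size), shingles(text_b, size)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    page1 = "successful site in 12 months with google alone step one prep work"
    page2 = "successful site in twelve months with google alone step one prep work"
    score = similarity(page1, page2)
    print("similarity:", round(score, 2))
    if score > 0.12:  # illustrative threshold only
        print("these pages would be flagged as near-duplicates under this toy rule")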

Do search engines have to act on all spam reports? An extraordinary whining thread in webmasterworld.com

While browsing through webmasterworld.com, the ultimate forum, I found a thread whose starter was worried that his competitor was using hidden text and the site had not been penalized.

Read more in the thread here: http://www.webmasterworld.com/forum30/28211.htm

Read the answers from experts like Brett Tabke, Ciml and others; they see this all the time from newbies and take great pains to correct these so-called SEO experts.

One thing people should understand is that search engines are not run for SEOs, webmasters, or site owners; they are run solely for the end users who perform searches. Webmasters are just another group of Google users. Google does not have to worry about keeping webmasters happy, only its search users. If something is hurting their search users, they will definitely take action on it. SEOs and webmasters need not whine all the time that a competitor's site has not been banned by Google or any other search engine for spam (spam in the SEO's mind, at least).

According to the experts, filing search engine spam reports of this kind is simply a waste of time for so-called search engine optimization experts.

One expert gave a neat expansion of how a whining SEO defines SPAM:

SPAM – Sites Positioned Above Mine.

Don't whine if your site doesn't rank. Keep working on content and backlinks, and stop worrying about what others use to rank their sites.

Another expert put it bluntly: "If you think sites using hidden text or hidden links rank high and you can't outrank them, it simply means you SUCK at search engine optimization."

Yahoo! Search Tips for Webmasters: Saving Bandwidth

Yahoo has helpfully provided some tips on saving bandwidth for your sites. Yahoo Slurp is known to hit dynamic sites hard and consume a lot of bandwidth, and now the Yahoo Search team itself has provided a good solution.

Some of the effective features they mention:

Gzipped files
Crawl-delay
Smart caching

An extract:

If you run a public webserver, you have likely seen our webcrawler, named Slurp, in your logs. Its job is to find, fetch, and archive all of the page content that is fed into the Yahoo! Search engine. We continuously improve our crawler to pick up new pages and changes of your sites, but the flip side is that our crawler will use up some of your bandwidth as we navigate your site. Here are a few features that Yahoo!’s crawler supports that you can use to help save bandwidth while ensuring that we get the latest content from your site:

More info here: http://www.ysearchblog.com/archives/000078.html
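The same ideas apply if you run your own fetcher. Below is a minimal sketch (Python standard library, with a hypothetical example URL) of a request that advertises gzip support and sends a conditional request, so the server can answer 304 Not Modified instead of resending the full page:

import gzip
import urllib.error
import urllib.request

# Hypothetical URL and cached validators, purely for illustration.
URL = "http://www.example.com/page.html"
last_etag = None
last_modified = "Sun, 18 Dec 2005 17:35:10 GMT"

request = urllib.request.Request(URL)
request.add_header("Accept-Encoding", "gzip")        # ask for a compressed body
if last_etag:
    request.add_header("If-None-Match", last_etag)   # validator from the last fetch
if last_modified:
    request.add_header("If-Modified-Since", last_modified)

try:
    with urllib.request.urlopen(request) as response:
        body = response.read()
        if response.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)             # urllib does not decompress for us
        print("fetched", len(body), "bytes")
except urllib.error.HTTPError as err:
    if err.code == 304:
        print("not modified since last fetch - reuse the cached copy")
    else:
        raise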

An interesting checklist of why a site could be dropped from Google's index, posted in webmasterworld.com

An interesting post in webmasterworld.com describes how sites get dropped from Google's index, whether because of a penalty or because of a problem on Google's end. This checklist covers it well:

One of the most common themes of posting here in WW starts something like: "Last night, my site disappeared…"

"Losing" a site can be a painful and frustrating experience. To help ease the pain, perhaps a starting list of potential issues might help. I'll probably miss more than I'm catching with this list, but at least it's a start.

Do a site search at the SE in question to determine if all or some of your pages are gone. Some think that their site has vanished, when in fact an algo update or tweak has occurred, causing their pages to drop. Or individual pages have been filtered or penalized, but not entire sites.

If *all* of your pages are gone (search on URLs to check that), then perhaps:
• your server was down at an inopportune time.
• you have a robots.txt problem.
• you've been removed from the index based on a perception of bad behavior (not good).

If only some pages are gone, or if your pages have simply dropped badly in the SERPs, then perhaps:
• you have some other technical issue not noted above (e.g., badly executed redirects),
• the algo changed,
• you've done something recently that the SE did not like, or
• the algo changed and something that was previously "OK" is now being filtered or penalized.

Here are some specific things to look at:

Start with the basics: Was your server down recently? Server failure is always a good item to check off your list when searching for problems. No need to start remaking your site if all that happened was a temporary problem.

Are you using a robots.txt file, and if so, has it changed? Is the syntax correct? There are a variety of potential problems that can be caused by improper code in robots.txt files, or placement of the robots.txt file in the wrong location. Search WW on this topic if you're not sure what you're doing. Use the WW Server Header Checker. At worst, a robots.txt file can tell a SE to go away, and you really don't want that. 😉

Have you more aggressively optimized recently? Internal changes that can lead to potential problems include:
• More aggressive kw optimization, e.g., changes to Titles, METAs, tags, placement and density of kw's, etc.
• Link structure changes, and especially link text changes. Updates to link text or structure, if done for optimization reasons, can push a site into filter/penalty territory. Look in particular for overuse of kw's.

Have you added redirects? The SEs *can* sometimes become confused by redirects. Assuming that the changes are intended to be permanent, use 301's, not 302's. Be especially careful about large scale changes. If done properly, redirects are important tools. Done without proper knowledge, they can lead to short term pain, often on the order of 1-6 months. webmasterworld.com/forum3/8706.htm

Do you have a significant number of interlinking sites? If ever there was a strategy that might be summed up as "Here today, gone tomorrow…", interlinking is it. You can succeed with this strategy. But if you add too many sites or links to the mini-net you're creating, or interlink too aggressively, it can catch up to you. Penalties can range from soft filters to complete manual removal in rare cases. Even with no recent changes to your sites, the SE algos can change, making something that squeaked by yesterday illegal today. webmasterworld.com/forum3/4618.htm

Are you linking to sites in "bad" neighborhoods? If ever there was a strategy that might be summed up as "Gone today…", linking to "bad" sites is it. If you think that you might be linking to the dark side, lose that link instantly, if not sooner. webmasterworld.com/forum3/8053.htm

Could you be suffering from a duplicate content penalty? Some practices or occurrences that can cause problems in this regard include:
• Use of a single, site-wide template
• Use of one template across multiple sites
• Competitors stealing or mirroring your content
• Redirects from an old domain to a new one
• Over-reliance on robots.txt files to exclude bots from content areas you don't want exposed.
WebmasterWorld thread: webmasterworld.com/forum3/22494.htm

Are you cloaking? Some cloak merely to deliver "accurate" pictures of sites/pages to the SEs. Examples of this are sites with lots of graphics and little text. But if you're a mainly text based site that is delivering one set of content to the SEs while users are seeing something less…umm…optimized…then there's always the risk that you've been caught.

Are you using AdWords? This is pure speculation on the part of some seniors here, but some do seem to firmly believe that if you place highly with an AdWords listing, it might actually hurt your position in the SERPs. Don't shoot me. I'm just the messenger.

If, OTOH, the only issue is that you're not as high in the rankings as you'd like, then a better place to start would be Brett's 26 Steps to 15K a Day.

Source: http://www.webmasterworld.com/forum5/4584.htm
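Two of the items in that checklist (robots.txt behaviour and redirect type) are easy to check yourself. Here is a minimal sketch of a header checker, in the spirit of the server header checker mentioned above, that reports whether a URL answers with a 301 or a 302; the URLs are hypothetical placeholders:

import urllib.error
import urllib.request

# Hypothetical URLs, purely for illustration.
URLS = [
    "http://www.example.com/old-page.html",
    "http://www.example.com/robots.txt",
]

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Stop urllib from following redirects so we can see the raw status code.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoRedirect)

for url in URLS:
    request = urllib.request.Request(url, method="HEAD")
    try:
        response = opener.open(request)
        print(url, "->", response.status, "(no redirect)")
    except urllib.error.HTTPError as err:
        location = err.headers.get("Location", "?")
        if err.code == 301:
            print(url, "-> 301 permanent redirect to", location, "(what you want)")
        elif err.code == 302:
            print(url, "-> 302 temporary redirect to", location, "(consider a 301)")
        else:
            print(url, "-> HTTP", err.code)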

Google as Web King – An interesting thread discussion in webmasterworld

An interesting thread in WebmasterWorld discusses an article written by Charles H. Ferguson at Technology Review.

The article neatly describes the future of Google and how they can improve on their current standing.

Two significant posts in this thread are from two regular users who had very good insight into the article. Source of the article: www.technologyreview.com/articles/05/01/issue/ferguson0105.asp?p=1

Iguana says:

I think I understand what Charles H. Ferguson is saying. He seems to be saying that Google needs to develop commercial APIs to their search – because this is what Microsoft has done before with windows and their applications and made (nearly) all the facilities available to developers.

But I don't regard 'Web Search' as that important a function to require APIs. I still regard it as a way to quickly access the statically delivered content on the web. Obviously Google already have a developers API (with very limited usage) and Adsense websearch. Amazon have their new e-commerce webservice that allows you access to their search results (Google-derived and Alexa enhanced) that is in beta and may be subject to a charge in the future. Both of these seem to be allowing websites to incorporate their own websearch facilities. I don't think they will be taken up in large enough numbers to have a big impact on searches done.

What he is talking about is the next generation of search – the one that includes the 'hidden web', desktop PC file systems, emails, handhelds, and Linux. To provide a cross-platform access to all of this would be nice – but hardly a 'killer app'. I haven't bothered to download Google or MS desktop search – I know where my files are and what they contain and can use windows explorer to check them. I only need a deep search of previous web pages/emails/files about every 2 weeks. If you said I could search and access the text of any book ever written, any software, any album details (cover/real lyrics/track listing/sample), access MP3s of my music and the music collections of any friends (wishing all my vinyl was converted to MP3) – then I would be excited both in my working and home life. But copyright prevents a lot of this and I couldn't afford to actually purchase these as products.

I just fail to make that jump from search being a quick, sometimes frustrating, way to access web content to being the nervous system that unifies my informational world. In the long term (10-20 years?) it will be that. But for the next few years, when the Google/MS competition will take place, it is the web search that will be the battleground. I used to think the real crux could be how you access the search – when the browser disappears from Windows and becomes part of the desktop then Microsoft can make it awkward for people to change the default search from MSN to Google. It didn't work last time with the built-in IE search but maybe will work better this time. Luckily enough Google should have the financial clout to quickly stop Microsoft using any unfair tactics, unlike some other companies in the past who have had to wait 5 years for their multi-million dollar settlements that are just loose change for Mr Gates.

I realise that I am holding up my hand and saying I just don't have the imagination/foresight to see how APIs and extending the search content is the next step. Given that Microsoft won't be able to just leverage control of the major operating system to eliminate Google, I keep on coming back to the thought that for the next 5 years it's the same old, same old thing – quality of the search results. All the pain of the Florida update, the obfuscations that have reduced the power of Pagerank, the 'filters/sandbox/hilltop/anchor text/over-optimisation penalties' – has failed to produce better Google search results. Google needed to move from a keyword-based search with Pagerank to something else (now that Pagerank was understood and spammed rather than natural web linking). I really believed that Google was going to move to the next stage and figure out what a page was about before serving it as a result as opposed to ever more elaborate counts/weighting of keywords in the document. But they have failed. I think that Yahoo and Teoma may now be its equal and that Microsoft may catch up in a year. Google could become a minor player long before the big battle over control of access to digital content is fought – if one of the other players comes up with a search engine that actually understands something about what the user is searching for.

Namaste says:

Web search is a service, and in a service, the quality of service matters. MS has never won a service war, only a product war.

From what I have seen of Google's strategy so far, it seems to be sound:
1. Index deep.
2. Go beyond the web.
3. Earn revenue from increased distribution.
4. Make search convenient: fast, desktop, etc.
5. Build a WebOS.
6. Don't be evil.

These 6 are common sense strategies and if they stick to them, they should have a sound future.

For all that is said, MS also sticks to some common sense strategies that have seen it win many battles:
1. Make everything easy to use.
2. Provide reasonably good quality.
3. Provide it cheap.
4. Push it to the max.
5. Get developers on your side.

It beat Netscape, Apple, Novell, IBM, etc. using just these five strategies. But these strategies are blunt against Google, because Google is already doing the first four, and there isn't much scope for the fifth in search.

The big question is what will happen when MS provides integrated desktop search? The answer is that Google still wins if it follows its own 1 and 2 and stays ahead of MS. People who are searching will go to Google.

Further, we are moving to the high bandwidth era, where we are using more web applications than ever before. If Google can successfully engineer some key applications (such as Gmail) to be equivalent to desktop software (such as Outlook), people will automatically migrate to web apps as they are completely portable.

I am also surprised that the author hasn't spoken about patent acquisition as a strategic advantage. We have seen many tech wars won as a result of patents (Minolta vs Carl Zeiss for example). This important factor could decide the MS vs Google battle. Both players realize the importance of patents and must be amassing them in hordes. Google of course has a head start in this as far as search and WebOS goes.

As far as APIs are concerned, I believe Google will provide full fledged APIs when it can successfully offer a WebOS. Possibly just before Longhorn.

Let us not underestimate the Linux factor in all this. In one or two years, Linux will be as friendly to use as Windows (still some issues with fonts, installations, etc.). When the time comes for people to discard Windows XP, the big question is will they go for Longhorn or the new Linux. In my opinion, it will be the new Linux.

The future: People will "upgrade" from Windows to Linux; and use more web apps as compared to desktop apps.

Has MS considered building a WebOS? No news there so far. If they do, then we are talking serious competition to Google in a few years.

Source of the thread: www.webmasterworld.com/forum3/27178.htm

An unrelated thread turning into a Google Bashing thread in webmasterworld.com

Recently, while browsing webmasterworld.com, I came across a thread where the integration of search into MSN Messenger was being discussed. The thread went completely in the wrong direction and turned into a Google-bashing thread. It is strange to see how many webmasters hate Google these days. Are they really webmasters, or SEOs? I suspect most of them are professional search engine optimizers frustrated at seeing their new sites and new links sandboxed and many of their heavily optimized sites not ranking.

This kind of resentment toward Google seems common among SEOs these days. I personally use Google for all my needs and am happy with what it does for me.

Check out the thread here:

www.webmasterworld.com/forum97/278-2-10.htm

Is it right that SEOs hate Google? Is this what Google wants to achieve with its sandboxing of new sites? Please feel free to comment on this.

The darker side of exchanging links: bad link exchange tactics deployed by some sneaky webmasters

This is a really interesting thread in webmasterworld.com about the bad tactics people use when exchanging links. Check out the thread here:

webmasterworld.com/forum12/2036.htm

Bad webmaster! Bad…

Some of the sneaky tactics mentioned in the thread:
• Delink your links page
• Bury the link to your links page on a single page that takes about five clicks to get to
• Block the links page via robots.txt
• Run your outbounds through your cgi-bin counter script

Brett Tabke, founder of webmasterworld.com – one of the foremost SEO guys

Brett Tabke – an authority in search engine optimization

Brett Tabke was exposed to computers and the online world from his school days. His interest and curiosity grew from childhood, and it did not take much time for him to establish himself in the SEO field. Brett is now the CEO of the famous optimization site WebmasterWorld.com.

When Brett finished college, he started working part time for Epyx, Berkeley Softworks, TwinCities, and others on Commodore software. He authored a book for Western Design and Ahoy magazine. Then Brett started working with bulletin board systems (BBS) and net promotion.

The BBS was the precursor of today's forums. Brett put his first BBS online in 1984. From there he picked up building websites and learned about online traffic, and it did not take him long to become an authority on internet traffic.

About search engine marketing Brett explains, “Search Engine Marketing is just a fancy name for checkbook SEO. That’s not what I deal with – it is optimization and promotion I am concerned with.”

Brett looks forward to expanding the forum's offerings to address the growing needs of the web world. He envisions a full-service site that supports all the needs of webmasters, but he does not like to deviate from his own field of search engine promotion.

Brett’s site: http://www.webmasterworld.com
