Pending Spammers – Aggressive Affiliate Marketers
Google and other search engines have weeded most of the spam out of their results. But one industry keeps spamming the search engines with very innovative methods: the ever-aggressive affiliate marketing industry. Many affiliates still depend on search engines for traffic to spam sites plastered with affiliate links. So how many of us actually like affiliate sites?
Personally, affiliate sites are a no-no for me. When I click on a search engine result and land on an affiliate page, the first thing I do is close the browser and find another result. I would rather buy directly from the dealer than from someone who is merely reselling the product.
Affiliates still spam the search engines using methods the engines have never seen before. It is very difficult to tackle this industry: a lot of money is involved, and many affiliate marketers are unwilling to find better ways to do business online. I don't blame all affiliate sites; there are some rare sites that provide good information with a few affiliate links mixed in, but most affiliate sites add no value for visitors. MFA (Made For AdSense) sites are another disgrace to search engine users. When I click on a result and see a page full of AdSense ads and affiliate links, I don't enjoy the site, and I never want to visit it again.
I hope search engines completely get rid of sites that carry affiliate links without adding any value for users. That is the last kind of death to spam I am looking for in search engines.
Don't Teach Search Engines How to Run Their Business
So what's up with the paid text link debate? I have seen plenty of places where people complain that Google is teaching them how to run their website and business. Is that the joke of 2008? SEOs exist because Google and the other search engines exist. There is no standalone technological industry called SEO; this whole industry is here because of flourishing search engines, so why complain about them?
Text link advertising is not a traditional form of advertising; you do it for search engines. And for everything you do for search engines, be ready to face the consequences. If you want to ride on the back of a search engine, make sure you play by its rules. Search engines have every right to penalize text link publishers and advertisers manually, algorithmically, or by editorial review, any way they want, as long as it improves the quality of their results. Why complain that search engines are teaching you how to run a business when you are actually the one teaching search engines how to run theirs? As the search engine experts have stated, you are free to do anything on your website, and in the same way search engines have every right to do anything with their algorithms as long as it is best for their users. I am part of an SEO company too, and I accept any change Google makes: if there is a ranking change for our sites or our clients' sites, we try to see what mistake was made and find a solution. We never put the blame on any search engine. As long as we are in the search engine optimization industry, let's stay close to the search engines and play by their rules. If we step outside their guidelines, let's face the consequences.
I humbly request search quality engineers like Adam Lasnik and Matt Cutts not to keep defending what you are doing: keep doing what's best for your users, and don't worry when somebody curses Google for something it does with its algorithm. If 1000 SEOs join together and curse Google for penalizing paid links, what does it show? They want Google to stop penalizing paid links so that they can keep buying links and keep manipulating results; it's as simple as that. I cannot find another reason for it, and I am sure it is not the reason that is best for your users. So keep doing what's best for your algorithm and your users, and please don't bother justifying and defending your decisions. People who love search engines will know how to appreciate them.
Search Engine Genie.
Using nofollow on internal pages – not spam
Google has clearly stated that using nofollow to prevent PageRank from flowing to pages like your copyright or TOS page is not considered spam. But in the recent webmaster chat, the Google folks did say it's not worth the effort: PageRank flows naturally across pages, and keeping a little of it away from some of your pages gains you almost nothing.
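As a quick illustration of the technique itself (the page names and link labels here are invented examples, not from any real site), an internal link you don't want to pass PageRank through just needs a rel="nofollow" attribute:

```python
# Hypothetical footer links; the page names are just examples.
FOOTER_LINKS = [("Terms of Service", "/tos.html"), ("Copyright", "/copyright.html")]

def footer_link(text, url, nofollow=True):
    """Build an anchor tag, optionally marked nofollow so it passes no PageRank."""
    rel = ' rel="nofollow"' if nofollow else ""
    return '<a href="%s"%s>%s</a>' % (url, rel, text)

for text, url in FOOTER_LINKS:
    print(footer_link(text, url))
# <a href="/tos.html" rel="nofollow">Terms of Service</a>
# <a href="/copyright.html" rel="nofollow">Copyright</a>
```

Whether generated by a template or typed by hand, the markup is the same; as the Google folks said, it works but usually isn't worth the bother.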
Crawl Date in Google’s Cache: Matt Cutts Video Transcript
OK everybody, we have a new illustration today. Vanessa Fox on the Google Webmaster Central blog talked about this. Some people like to learn visually, some people like screenshots, so I thought I'd make a little movie. This is going to be a multimedia presentation; the two media we are using today are Skittles and peanut butter M&Ms. So let's talk about Googlebot and how it crawls the web. First off, what do the red M&Ms represent? Well, everyone knows red is bad, so these are going to be 404s. Googlebot is crawling around the web, it sees a 404, sucks it down, and then later on it will come back to check that page again.
So what do the purples mean? Well, everybody knows purple means an HTTP status code of 200 OK; that's the only thing it could possibly represent. In other words, Googlebot comes along, it sucks up the page, and we got the page just fine. So we have a 404 and a couple of HTTP 200s, so life is pretty good. Next, let's talk about the cache crawl date and what these represent. It's a bit hard to tell, but this one is purple, then we have two greens, a purple, and the rest greens. So what do you think the green M&Ms represent? Everybody knows the greens are the good ones: green represents a status code of 304. Like a browser, Googlebot can come to a page and say, hey, give me a copy of this page, or just tell me if the page has been modified since I indexed it. If the page has not been modified since that date, the server returns a 304 status saying the page hasn't changed, and all Googlebot has to do is skip re-fetching that page. So this is what Googlebot does, moving forward in time: we crawl a page and get a 200; the next two times Googlebot crawls the page it gets a 304, the If-Modified-Since response saying the page hasn't really changed. Later on, the webmaster actually changes the page, and we see this purple, which means the page has changed since the last crawl, so now we get a 200 and the page is actually fetched.
Going forward, the page doesn't change, so the web server is smart enough to return a 304 status code for each of Googlebot's visits. Now, the interesting thing is that the cached copy shows the date the page was last retrieved. Until recently, even though we checked the page on this date and this date, the cache would still show the very first time we fetched the page; if the page hadn't changed for, say, six months, we would still show that old cache crawl date. The change in policy is this: if we check on this date and on this date to see whether the page has changed, we will now show that date as the cache crawl date. In other words, a page that might have looked pretty stale in the cache gets its date updated whenever we check it, whether or not the page has changed, so the cache crawl date now reflects the fact that we have recently verified the page.
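The If-Modified-Since exchange described in the transcript can be sketched from the server's side. This is a minimal illustration of the 200/304 logic, not Googlebot's or any real server's code; the page date is invented:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# When our hypothetical page was last edited.
LAST_MODIFIED = datetime(2006, 7, 1, tzinfo=timezone.utc)

def handle_request(if_modified_since=None):
    """Return (status, body) the way a well-behaved web server would.

    A crawler sends If-Modified-Since with the date from its last fetch;
    if the page hasn't changed since then, a bare 304 saves the transfer.
    """
    if if_modified_since is not None:
        since = parsedate_to_datetime(if_modified_since)
        if LAST_MODIFIED <= since:
            return 304, ""  # not modified: no body needed
    return 200, "<html>page content</html>"

# First crawl: no conditional header, full fetch (200).
print(handle_request())
# Re-crawl with the date from the last fetch: page unchanged, so 304.
print(handle_request(format_datetime(LAST_MODIFIED)))
```

The purple and green candies in the video map directly onto the two return branches here: purple is the full 200 fetch, green is the cheap 304.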
Lightning Round – Matt Cutts Video Transcript
Alright, this is Matt Cutts coming to you on Monday, July 31st, 2006. This is probably the last one I'll do tonight, so let's see if I can do a kind of lightning round. Alright, Peter writes in and says:
"Is it possible to search just for homepages? I tried doing -inurl:html and -inurl:htm, and so on for php and asp, but that doesn't filter out enough." That's a really good suggestion, Peter; I hadn't thought about that. FAST used to offer something like that; I think all they did was look for a tilde in the URL. I would file that as a feature request and see if people are willing to prioritize it. My guess is it will be relatively low on the priority list, because the syntax you mentioned, subtracting a bunch of extensions, would probably work pretty well.
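Peter's workaround can be generated mechanically. A small sketch of building such a query (the extension list is just an example set, and the operators are the standard -inurl: exclusions the question mentions):

```python
# Common file extensions that rarely appear in homepage URLs.
EXTENSIONS = ["html", "htm", "php", "asp", "aspx"]

def homepage_query(terms):
    """Append -inurl: operators so results with these extensions are excluded."""
    exclusions = " ".join("-inurl:%s" % ext for ext in EXTENSIONS)
    return "%s %s" % (terms, exclusions)

print(homepage_query("search engine genie"))
# search engine genie -inurl:html -inurl:htm -inurl:php -inurl:asp -inurl:aspx
```

As Matt says, this is imperfect (extensionless deep URLs slip through), but it filters out most non-homepage results.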
I've got to clarify something about strong vs. bold and em vs. italic. There was a previous question where somebody asked whether it is better to use bold or strong, because bold is what was used in the olden days when the dinosaurs were roaming the earth, and strong is what the W3C recommends. Last night I said that we very, very slightly prefer bold over strong, and that it isn't something you should worry about much. Then an engineer actually took me into the code and showed me live, and I can see that Google treats bold and strong with exactly the same weight. So thank you for that, Paul, I really appreciate it. I also saw another part of the code where em (emphasis) and italics are treated exactly the same. So there you have it: mark it up the way the W3C wants, do it semantically well, and don't worry about which of these tags you use, because Google treats both versions just the same.
OK, next in the lightning round, Amanda asks, "Will we have more kitty posts in the future?"
I think we will. I tried to bring my cats here with me, but they are afraid of the lights and jumped off. I'll see if I can bring them in the future.
Tom Html asks, "Where are Google SST, Google Guest, Google Weaver, Google Marketplace, Google RS 2.0, and the other services discovered by Tony Ruscoe?"
I think it was very clever of Tony to run a dictionary-style check against our services sign-in, but I am not going to talk about what all those services are.
As a preview, Joseph Hunkins asks what topics the duplicate content session will cover. A little preview of one of the other sessions that will be on video: what I basically want to talk about there is shingling and related techniques.
What I want to say is that Google detects duplicate content all along the way, from the crawl to the point where people see things when searching. We do exact duplicate detection and near-duplicate detection, so we do a pretty good job all along the line of detecting dupes and things like that.
So the best advice I can give is: if pages that look similar really are different content, make them look as different as possible. A lot of people have asked about Word (.doc) versions compared to HTML files; typically there is no need to worry about that. If you have similar content on different domains, maybe a French version and an English version, you really don't need to worry about that either. If you do have the exact same content on, say, a Canadian site and a .com site, we will probably roll the dice, see whichever one looks better to us, and just show that one, but it wouldn't necessarily trigger any sort of penalty. If you want to avoid even that, you can make sure your templates are very, very different; but if the content is similar, it's better to show us whichever version is the most ideal representation and we will guess the best anyway. And Thomas writes in and asks, does Google index and rank blog sites differently than regular sites?
That's a good question: not really. Somebody also asked me whether links from .gov and .edu domains, and links from two-level-deep govs and edus like gov.pl or gov.in, are treated the same as .gov.
The fact is, we really don't have anything that says, hey, this is a link from ODP or .gov or .edu, and so on. There is no special boost; it's just that those sites have higher PageRank because more people tend to link to them, and reputable people link to them. So for blog sites there isn't anything distinct, unless of course you go off to Blog Search, which is totally restricted to blogs. In theory we could rank them differently, but for the most part it's just general search and the ranking falls out the way it falls out.
Alright thanks.
TPR penalty websites still not ready to change their ways
Lots of websites hit by the TPR (Toolbar PageRank Reduction) penalty in the December 2007 update are still not ready to change their ways, though several big names like Search Engine Journal, Forbes.com, and WashingtonPost.com have already got their PageRank back.
Search Engine Journal was slapped with a 3-point reduction down to PR4 and is now back to PR7; Forbes lost 2 points and was dropped to PR5, and after removing its paid text link ads is back to PR7; WashingtonPost.com lost 3 points, was dropped to PR5, and is now back to PR8.
So what is the secret to getting the PageRank back and removing the TPR penalty?
First, if you check Search Engine Journal, they removed all text link ads, and now all ads pass through a redirect script that doesn't pass PageRank.
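A minimal sketch of how such an ad redirect works in general (the script path and ad inventory here are invented, not Search Engine Journal's actual setup): the ad click points at a local URL that issues an HTTP redirect, and that URL is blocked in robots.txt so crawlers never follow it and no PageRank flows to the advertiser.

```python
# Hypothetical ad inventory: ad id -> advertiser landing page.
ADS = {
    "acme": "http://www.example.com/landing",
    "widgets": "http://widgets.example.org/",
}

def redirect_response(ad_id):
    """Return (status, headers) for a request like /adclick?id=<ad_id>.

    Pair this with a robots.txt rule such as `Disallow: /adclick` so
    search engines never fetch the redirect and no link juice is passed.
    """
    target = ADS.get(ad_id)
    if target is None:
        return 404, {}
    return 302, {"Location": target}

print(redirect_response("acme"))
# (302, {'Location': 'http://www.example.com/landing'})
```

The robots.txt exclusion is the important part: without it, a plain redirect can still pass PageRank.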
Forbes completely removed all paid text links, and possibly the PageRank came back by itself, or they might have filed a reconsideration request asking Google to re-check their site for paid links.
WashingtonPost.com now has nofollow on all the text links they sell; if you view the source of their pages, you can see that the text link ads going out of their pages don't pass PageRank anymore, since all of them carry a nofollow.
I know there are some sites that are not ready to make peace with Google when it comes to TPR and are happy to sit at PR4 or PR5 and continue doing what they are recommended not to do. Instead, they use their considerable blogging power to curse Google for doing what it did to protect its algorithm. I recommend those sites correct what they did and file a reconsideration request to get their Toolbar PageRank back, or else just keep the credit of running a website that has lost some trust in Google's algorithm.
Though I would like to stay away from commenting on individual sites, one site I personally admire is SEOchat.com, and they look too silly sitting at a PR of 4. They were hit with a TPR reduction of 3 points from their regular PR7. The obvious reason is their choice to sell text link ads on their site without any nofollow, which is not a great sign, especially for a site that depends so much on Google: about 75% of their forum discussion is about Google, with the rest about general issues and other search engines, and they run many tools that query Google.
For the hundreds of thousands of regular SEOchat users, a PR4 is not a great welcome. PageRank is like a fashion statement for a website: most people out there, especially people who are into search engines, see PageRank as a mark of quality. I hope SEO Chat, a great site for SEOs, will get its PageRank back by nofollowing the text link ads on its pages. They should set an example for all the users who visit their site for SEO advice.
Google Terminology – Matt Cutts, Google Quality Engineer, Transcript
Hello everyone, I am back. OK, why don't we start off with a really interesting question. DazzlingDonna wrote in all the way from Louisiana; she says, "Matt, as I mentioned before, I'd love to see you do a definitions-type post: define terms that you at Google use that we non-Googlers might get confused about, things like data refresh, etc. You may have defined them in various places, but one cheat-sheet-type list would be great."
Very good question. At some point I need to do a blog post about host vs. domain and a bunch of stuff like that. Some people have been asking questions about June 27th and July 27th, so let me talk about that a little more in the context of a data refresh versus an algorithm update versus an index update. I'll use the metaphor of a car. Back in 2003, we were crawling and indexing the web once every month, and when we did that, it was called an index update: algorithms could change, data could change, everything could change in one shot. That was a pretty big deal, and WebmasterWorld would name those updates. Now that we crawl and index some of our data every single day, it's everflux; it's always going on as a process, and the biggest changes people tend to notice are algorithm updates. You don't see index updates any more, because we moved away from the monthly update cycle; the only time you might see one is when we are building an index that is incompatible with the old index. For example, if you change the segmentation of CJK (that's Chinese, Japanese, and Korean), you might have to completely build another index in parallel and switch over to it. So index updates are relatively rare. Algorithm updates are basically when we change our algorithm: maybe in the scoring of a particular page you say, you know, PageRank matters this much less or this much more, something like that. Those can happen pretty much any time; we call them asynchronous, because whenever an algorithm update evaluates positively, improving quality and relevance, we go ahead and push it out. The other, smaller change is called a data refresh: that is essentially changing the input to the algorithm, the data the algorithm works on.
With the car metaphor, an index update would be like changing a large section of the car; an algorithm update is like changing a part of the car, maybe swapping the engine for a different engine or changing another major part; and a data refresh is more like changing the gas in your car: every one, two, or three weeks the new data goes in, and we see how the algorithm works on it. For the most part, data refreshes are the most common; one thing we have to be very careful about is how safely we check them. Some data refreshes happen all the time; for example, we compute PageRank continually and continuously, so there is always a bank of machines refining PageRank based on incoming data, and PageRank goes out any time we make an update to the index, which happens pretty much every day.
By contrast, some algorithms are updated every week or every couple of weeks, so those are data refreshes happening at a slower pace. The particular algorithms people are interested in, the ones behind June 27th and July 27th, have actually been live for over a year and a half, so what you are seeing are data refreshes that change the way sites rank.
In general, if your site has been affected, go back to your site and take a fresh look to see if there is anything that might be exceedingly over-optimized, or ask yourself whether you have been hanging out in SEO forums so long that you need a regular person to come in, take a look at the site, and tell you whether it looks OK. If you have tried all the regular stuff and it still looks OK to you, then I would keep building regular good content and making the site very useful; if the site is useful, Google should fight hard to make sure it ranks where it should be ranking. That's about the most I can give about the June 27th and July 27th data refreshes, because it does go a little bit into our secrets, but hopefully it gives you some idea of the scale and magnitude of the different changes.
Algorithm changes happen a little more rarely, while data refreshes are always happening: sometimes from day to day, sometimes week to week or month to month.
Thanks
Matt Cutts
rel="external nofollow" and rel="nofollow" – the difference
I have seen many people ask in forums what the difference is between rel="external nofollow" and rel="nofollow".
Nofollow is a coordinated effort by Google, Yahoo, and MSN to stop counting links that are considered untrustworthy or spammy. Today many blogs, message boards, forums, and news sites use the nofollow attribute to prevent spam in their comments and other user-contributed areas. The rel="external nofollow" value you often see is simply the nofollow attribute with an extra "external" token: as far as search engines are concerned, it works exactly the same as plain rel="nofollow" and blocks all link juice. The "external" token itself means nothing to search engines and does nothing on its own; many blog themes use it, via a small JavaScript snippet, as a hook to open the link in a new window, much like target="_blank" does.
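A simple way to see why the two are equivalent for link juice: rel is a space-separated list of tokens, so a crawler only has to check whether the nofollow token is present. A sketch of that check (not any real crawler's code):

```python
def passes_link_juice(rel_attribute):
    """A link passes PageRank only if 'nofollow' is absent from its rel tokens."""
    tokens = (rel_attribute or "").lower().split()
    return "nofollow" not in tokens

print(passes_link_juice("nofollow"))           # False
print(passes_link_juice("external nofollow"))  # False: same as plain nofollow
print(passes_link_juice("external"))           # True: 'external' alone means nothing to crawlers
print(passes_link_juice(None))                 # True: no rel attribute at all
```

Any extra tokens alongside nofollow change nothing about how search engines treat the link.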
So if you find a blog that uses this, don't get too excited: links from these blogs don't count. Move on, and better luck next time.
Google (GOOG) Gains More Than 12% After Q1 2008 Results
Google's (GOOG) Q1 financial results have been released, and beating everyone's expectations, Google's revenues grew 42% over the first quarter of 2007 and 7% over the last quarter of 2007. This is much better than expected, especially after the decline in value of many companies' shares.
At the moment, Google's shares are trading around 13% higher, which is a very good sign for Google.
Financial results Summary as per Google’s investor relations:
Q1 Financial Summary
Google's results for the quarter ended March 31, 2008, include the operations of DoubleClick Inc. from the date of acquisition, March 11, 2008, through the end of the quarter, and are compared to pre-acquisition results of prior periods. The overall impact of DoubleClick in the first quarter of 2008 was immaterial to revenue and only slightly dilutive to both GAAP and non-GAAP operating income, net income and earnings per share. Google reported revenues of $5.19 billion for the quarter ended March 31, 2008, an increase of 42% compared to the first quarter of 2007 and an increase of 7% compared to the fourth quarter of 2007. Google reports its revenues, consistent with GAAP, on a gross basis without deducting traffic acquisition costs, or TAC. In the first quarter of 2008, TAC totaled $1.49 billion, or 29% of advertising revenues. Google reports operating income, net income, and earnings per share (EPS) on a GAAP and non-GAAP basis. The non-GAAP measures, as well as free cash flow, an alternative non-GAAP measure of liquidity, are described below and are reconciled to the corresponding GAAP measures in the accompanying financial tables.
GAAP operating income for the first quarter of 2008 was $1.55 billion, or 30% of revenues. This compares to GAAP operating income of $1.44 billion, or 30% of revenues, in the fourth quarter of 2007. Non-GAAP operating income in the first quarter of 2008 was $1.83 billion, or 35% of revenues. This compares to non-GAAP operating income of $1.69 billion, or 35% of revenues, in the fourth quarter of 2007.
GAAP net income for the first quarter of 2008 was $1.31 billion as compared to $1.21 billion in the fourth quarter of 2007. Non-GAAP net income in the first quarter of 2008 was $1.54 billion, compared to $1.41 billion in the fourth quarter of 2007.
GAAP EPS for the first quarter of 2008 was $4.12 on 317 million diluted shares outstanding, compared to $3.79 for the fourth quarter of 2007 on 318 million diluted shares outstanding. Non-GAAP EPS in the first quarter of 2008 was $4.84, compared to $4.43 in the fourth quarter of 2007.
Non-GAAP operating income, non-GAAP operating margin, non-GAAP net income, and non-GAAP EPS are computed net of stock-based compensation (SBC). In the first quarter of 2008, the charge related to SBC was $281 million as compared to $245 million in the fourth quarter of 2007. Tax benefits related to SBC have also been excluded from these non-GAAP measures. The tax benefit related to SBC was $51 million in the first quarter of 2008 and $42 million in the fourth quarter of 2007. Reconciliations of non-GAAP measures to GAAP operating income, operating margin, net income, and EPS are included at the end of this release.
GOOG Financial results.
Labels / categories for FTP Blogger/Blogspot blogs
Blogger Labels code download Free
We wrote a very simple PHP script for adding labels / categories to Blogger FTP blogs. Blogspot-hosted blogs already have the Page Elements option: you just log in to Blogger and enable it. But for many years Blogger has had no proper solution for FTP blogs hosted on your own domain.
We tried searching the internet for a solution, and in fact most of the code out there doesn't work, and the rest is too complicated for end users. So here is a simple solution: it is just 4 or 5 lines of code, and BOOM, you have a nice labels menu in whichever location you include it.
Download this zip file and unzip it on your local system; just follow the instructions in the Word document and it should work fine for you. If you are facing problems, please post in the comments and we will try to fix the bug.
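Our download does this in PHP; as a rough illustration of the general idea (not the actual script), here is the same approach sketched in Python: read the blog's Atom feed and collect the category terms as labels. The feed is inlined as a sample string so the sketch is self-contained; a real script would fetch it from the blog's feed URL.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# Tiny inline sample; a real script would download the blog's Atom feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><category term="SEO"/><category term="Google"/></entry>
  <entry><category term="SEO"/></entry>
</feed>"""

def extract_labels(feed_xml):
    """Return the sorted set of unique category terms (labels) in an Atom feed."""
    root = ET.fromstring(feed_xml)
    labels = {cat.get("term")
              for cat in root.iter(ATOM_NS + "category")
              if cat.get("term")}
    return sorted(labels)

def labels_menu(labels):
    """Render the labels as a simple HTML list for inclusion in a sidebar."""
    items = "".join("<li>%s</li>" % label for label in labels)
    return "<ul>%s</ul>" % items

print(labels_menu(extract_labels(SAMPLE_FEED)))
# <ul><li>Google</li><li>SEO</li></ul>
```

In a real deployment each label would of course link to its label page rather than render as plain text.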
SEO Blog Team,
Blogger Labels code download Free