I’ve uploaded my new book to the interwebs so you can now download and read through it, if the fancy takes you, completely free of charge (SEO Truth – A Bible For The Next Generation Of Search Engine Optimisation).
There may be a few blank pages because it’s been laid out for print; you can get yourself one of these hard copies from this website here.
Let me know what you think, and by all means submit some feedback over on Lulu! Cheers.
Edit: The download link works correctly now, oops.
As usual I’ve gone a horrendously long time without writing anything on my blog – and for that I apologise. However, I have spent some of that time writing an SEO (Search Engine Optimisation) handbook, covering the importance of next generation techniques and practices.
I’m sure there are those of you who are all too familiar with the increasingly backwards approaches used by a few ‘special’ SEO agents and individuals out there, and perhaps for you this will merely reinforce what you already knew to be true. For those of you who don’t know what I’m talking about – please read the book and have a good laugh at yourself for being such a silly.
You can order print copies of the book – just not yet… more details on that coming soon! I’ll be publishing online chapter by chapter (honestly, I have finished writing it, but as an SEO, if I didn’t serialise it then it would look bad).
Enjoy the read and let me know what you think – and if the first edition is terrible, order a copy anyway; of course it’s going to be valuable in 200 years!
First off, I’d like to introduce myself. I’m a Search Engineer, a developer and programmer. I’ve worked with clients throughout the advertising industry at many different companies. My specialty is developing software that works with the search engines of companies like Google, Yahoo and MSN and attempts to influence the rankings of my clients’ websites, as well as report on those ranking changes. I’ve never been to a lecture on computer science or read a book on development methodology, and yet I’m in demand. My skills lie in understanding the technology of a search engine and how to capitalise on their ranking algorithms, web crawlers and content filters, and it’s the ideas I generate in this area which have kept me in gainful employment.
SEO (Search Engine Optimisation) used to be a fairly simple task where you’d make sure every page on your client’s site had meta tags, descriptions and content unique to that page. You might then try to analyse the keyword density of your key terms to keep it somewhere between 4 and 7 percent. More often than not, most SEO companies wouldn’t even attempt that.
What most SEO companies would never tell you, and this is the industry’s best kept secret, is that they’re intrinsically lazy. If you had a good client, with good content and a product of interest, then their SERs (Search Engine Rankings) would climb entirely naturally to the top spots; you’d have nothing to do but sit back and reap the benefits of your lack of work.
This is of course a sad state of affairs which no real SEO company would allow. Part of this book will help you to spot the difference between a professional outfit and rank amateurs, and define the widening gap between the two camps.
As the title suggests, I’m writing about the next generation of SEO. It’s becoming more difficult to increase the rankings of a particular website, and it will only get more difficult to manipulate a website’s ranking without any understanding of how new search engine technology works. Luckily for you, my field is semantics (essentially, how to correlate the relationship between one word and another) and you’re in for a whole chapter on manipulating a semantic index similar to those increasingly used by the major search engine players.
Chapter 1 – The Past
In order to proceed correctly in the future, the most important lesson is to understand what happened historically. There’s no shortage of information on the internet and amongst SEOs and webmasters about how Google’s original PageRank system worked. This is in large part thanks to a paper written by Google’s founders, Larry Page and Sergey Brin, whilst they were still studying for their PhDs at Stanford University. Not long after that they received their first investment from Andy Bechtolsheim, a co-founder of Sun Microsystems, which enabled them to build upon the hardware they had in their university dorm room and create the international phenomenon we know today.
PageRank was essentially a very simple system. It counted each link from one site to another as a vote for the destination site. By voting for another site, the original gave away some of its own PageRank. The idea came from Salton’s Vector Space Model, which is a mathematical principle known to most Computer Science graduates today. This simple method of calculating which websites had the most votes, and therefore deserved higher rankings, is key to all search engine algorithms as it’s extremely fast to calculate. The most important factor in any search engine is its speed in returning and ranking results, especially when you’re dealing with an index of billions of pages.
The Anatomy of a Large-Scale Hypertextual Web Search Engine, based on the work of Larry Page and Sergey Brin whilst at Stanford.
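The vote-counting idea described above can be sketched in a few lines of code. This is a minimal illustration, not Google’s implementation: the link graph is made up, and the damping factor of 0.85 is the value quoted in the original paper.

```python
# Minimal PageRank sketch: each link is a "vote", and a page's score is
# shared out among the pages it links to. Repeating the calculation
# until the scores settle gives each page's final rank.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}       # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)  # the vote, split evenly
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-site web: a links to b and c, b links to c, c links to a.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}
scores = pagerank(links)
# c.com collects the most votes (from both a.com and b.com) and ranks highest
```

Note how cheap each pass is – a handful of additions and divisions per link – which is exactly the speed property the chapter goes on to discuss.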
If you understand that all calculations undertaken by a search engine must be as fast as possible, it allows you to draw logical conclusions:
· Thinking about a page as a machine would (one which struggles to actually understand, rather than just read), rather than as a human would, is key to analysing your website’s content for SEO value.
· Is every single underlined heading, keyword colour, font size, image location, keyword relationship and page title length analysed when a page is crawled? It’s highly doubtful that anything too in-depth is going to be indexed when the crawler has another hundred thousand pages to visit and rank as quickly as possible – use some common sense here. Of course, as processor speeds and bandwidth increase, more in-depth analysis will become possible in a shorter space of time.
· The search engine needs to maximise two things: the speed of its calculations and the relevancy of its results. Occasionally one is going to suffer at the expense of the other – if you had to choose between indexing a page poorly or not at all, which would you do?
SEOs in the past were able to capitalise on this speed issue by concentrating on areas of a page such as the meta tags, description and page title. The content itself gradually became more important as time went on, but was still subject to the speed of indexing. SEOs quickly realised that keyword density (how many times a keyword appears on a page out of the total number of words) was a very quick way to determine some kind of relevancy – and that the search engines were using it too.
Once the search engines got wise, they implemented filters that stopped SEOs from flooding a page with keywords. Arguments followed in the SEO community over exactly what the ideal keyword density for a term was, and this usually settled somewhere between 4 and 7 percent.
Of course the PageRank model meant that agencies were keen to build as many links to their client websites as possible. To make matters worse, they were after links that already had high PageRank values, to gain the maximum ranking as quickly as possible, and this gave rise to a cottage industry of people generating high PageRank links purely to sell on. Google were of course unhappy about this and their anti-spam team began its work. Blacklisting of websites which ‘farmed links’ became fairly common, and this moved on to other aspects of ‘black hat’ SEO behaviour – where an unfair advantage was being gained by some nefarious companies and individuals.
Most SEO agencies at this stage relied heavily on staff who’d be subjected to some extremely tedious and repetitive labour. Going through page after page of a website, adjusting the number of keywords on a page and slightly changing each page title and meta tag was a boring job, and not well paid.
Directors and CEOs didn’t have a whole stack of problems though: if they kept building up link relationships with ranking websites and made sure their meta tags were in place, their job was done. Often enough they’d have clients who already had an interesting product which did most of the work itself, spreading links around the internet as people registered their interest.
This natural traffic increase was what Google was looking for as they wanted sites which progressed on their own merits rather than trying to beat the system.
I’ve said many times before that so-called SEOs out there need to stop believing every word that spills from Google’s overactive pen. Google is a business just like any other and they feed information that’s deliberately misleading to stop people from gaining an unfair advantage with their search rankings.
It now appears that, in fact, Google assigns an estimated worth to each ranking on their pages – visible to members of their AdWords sales team – using the information from your PPC campaigns and analytics package to figure out whether you’re worth it.
I’m sure many of you will draw your own conclusions from this, and in time we may see a Google press release from that department – which, again, knows about as much about how their technology actually works as most SEOs do. Take it with a pinch of salt is my advice, and invest the time to understand how a search engine really works.
This story was broken on the French blog Zorgloob – much credit to them for a brilliant find.
It would appear Valleywag’s Nick Denton is lacking a sense of irony, and unfortunately I seem to have had my commenting privileges revoked there now. Shame. He’s thoughtfully left this little nugget, seemingly ending the argument with a resounding slap to my pride:
“Hey, Phil, I don’t mind being slagged off. Comes with the job. But you didn’t do it very effectively. One could make the point that mentions of Google itself have become more frequent. But sensationalism? I don’t think you proved your point”
What’s that, Nick? You can’t hear my answer from all the way over there because you blocked my account? Never mind. Sensationalist articles, Nick, seeing as you are unaware, are those that are published without any proof behind them. So I put together my own sensationalist article on your sensationalist article, and it appears you lack a sense of humour. Fortunately you’re unable to prove to me that you have one, because that’d mean you’d written something of substance. Unlike you, Nick, I won’t delete or remove negative comments – even though I rate my blog above a tabloid – so feel free to hurl insults from below if you wish.
THE ORIGINAL ARTICLE:
I saw over on Valleywag that they’ve written yet another hack piece on the so-called Fear Of Google, with the standard sensationalism and lack of humour. They’ve even drawn a pretty graph, collated from data in the Nexis newspaper database, showing their spectacular lack of knowledge of current Google events.
Being a bit of a dry and sarcastic git I present to you Fear Of Google: As Seen On Google Timeline! which is a representation of how Google itself sees the phenomenon.
Personally I have no fear of Google (though I am typing this in the stationery cupboard, but that’s because of my love of pens) and instead feel an increasing need to criticise them rather than run in fear. Then again, people react in the same way to governments, and it’s surprising that a company can approach that level.
I get a lot of comments, trackbacks and email lambasting me for daring to suggest that Google is anything but the best and most ingenious search engine ever made. Someone even said:
‘I use Google because it’s the best, end of’
Wow, that’s telling me a few things. Firstly you think Google’s the best. Secondly you think that’s the end of the discussion. Thirdly you’re an idiot.
I can say these things because I’m quite sure he doesn’t read this blog because he’s not open to new ideas. That was the fourth thing.
People seem to think that because Google is the market leader, and because they’ve revolutionised search, it isn’t right to criticise them. You’re wrong: they should be criticised now more than ever, because they are the most visible search engine and should be setting an example in terms of innovation, human rights and design.
What is occurring with how people relate to Google is a fanboy mentality (or fangirl), of a similar vein to the much despised Apple fanboy (at least on this blog). People praise anything with the Google brand on it, regardless of its quality and originality, and that to me is a sad state of affairs because it means Google have less and less of a metric by which to measure whether they actually are producing a worthy product.
I know that in the UK, investors still won’t touch search engine startups with a sh*tty stick. This is because of Google: they (the investors) feel nothing will ever surpass it. This means there are practically no search engine startups in the UK, and it seems only recently that they are even coming to the fore again in the States. I can’t tell you how much this irritates me. I’ve spent years working around search because I love its fundamental simplicity as a human need – that thirst for knowledge and learning – and it’s only since I spent two years of my own time programming a search engine that I was ever able to convince a company in the UK to take on a property that would overlap that of the mighty Google.
So when you read my posts in future, don’t think that I hate Google or hold some spite against them. It’s the opposite: I love Google and want them to innovate as much as possible, because otherwise they’re dead in the water. Ultimately though, I love search, and whoever delivers the most innovation and sheer simple brilliance is going to be my choice.
Capitalism comes after natural selection.
But I expected too much of our relationship, I think – which only started as an accident anyway, if you care to remember (and that’s another thing: you never seem to forget any of my personal details, it’s creepy). Don’t get me wrong, we had some great times, but you’re just not as exciting as you were.
The thing is, I’ve started seeing other search engines occasionally. Not all the time, and no, there’s no one serious… yet. They do things for me you never did, though, and aren’t afraid of making me a little frightened. You should have seen the pictures Snap showed me, the way Hakia listened to me, and well… to be honest I did have some group fun with Clusty.
I take full responsibility for this, I obviously expected too much and that wasn’t fair on you – just know that you’ll always have a special place in my head.
PS. And Google, please do write me back, I’m open to your views here and I know how personable you are. Heck, if you’re passing by in your awesome ride, just leave me a comment even. (Diggnation t-shirt for the best response!).
I have been writing search articles since November 2006, and as a search engineer it helps me to construct my thoughts and focus on my work. I was never really under the impression that anybody in the industry paid a huge amount of attention to my thinking though – maybe they still don’t – but I appear to have predicted many of the features of Google’s new experimental search in quite some elaborate detail.
See results on a timeline or map. With the timeline and map views, Google’s technology extracts key dates and locations from select search results so you can view the information in a different dimension.
Timeline and map views work best for searches related to people, companies, events and places.
You can find the opinion at other levels as well though, and this is where the power comes in, in terms of really targeting what the user is looking for quickly and efficiently. All of the following mean that this is the first true example of social search:
- Find the opinion over a range of dates, good for current events, modern history, changes in trends.
The text in black is from Google’s experimental ‘Timeline and map views’ search; the text in blue is from this article, published February 19th, 2007. I was talking about a semantic implementation used to search through results by timeframe instead of a straight relevance search. You can even narrow down date ranges by clicking on the timeline to select an era.
Spooky eh? I don’t think anyone else predicted that.
I hear you though – that’s just one example and I got lucky – so let’s see… what else did I talk about in the same article?
- When you begin a search, you enter just one or two keywords in the topic you’re interested in.
- Related keywords appear, which you can then select from to target your search and remove any doubts about dual meanings of a word for example.
The image above is from Google’s brand spanking new ‘Left-hand search navigation’. Look at the white box (I’ve faded the rest out for clarity) and there you have it: exactly the feature I was talking about, in all its living glory.
So what about the other features of Google’s new experimental search? Right-hand search navigation… oh hang on, that’s just the left-hand navigation on the… urr… right side. Revolutionary. There are also keyboard shortcuts! Oh hang on, I just did a Ctrl-S.
In fact, all the ‘new’ features of Google’s experimental search are ones I wrote about developing back in February, and this I find slightly irritating for a few reasons:
- It’s very unlikely that they saw my work and copied it. The main worry for me is that they still can’t come up with anything new, even in what they call experimental search, and that’s what bothers me about a company with so many PhDs. Why don’t they just give me a job? And yes, I am bitter 🙂
- They are afraid to push the boundaries. For instance, in my article I write about some ideas far more experimental than these, but they’ve not got anything exciting or new in there. I want to see the things that happen in the Googleplex and then get binned for being too radical or wacky. I want to see binned.google.com, never mind labs.google.com.
Again, Google copying me is hugely unlikely; this I’m sure is a coincidence. However, it’s a coincidence that should never happen. I’m one person developing search technology with no PhD – heck, I never even finished my degree – I just cram programming languages into my brain and think of crazy ideas until they form into something I can use.
They have all that intelligence in one building; you have no idea what I’d give to work in that kind of environment, and the batsh*t insane ideas we’d make a reality.