PSN Seems Fine To Me

[Image: PSN Down]


Vwls r s lst yr

I’ve found a neat little trick for speeding up those Google searches when you need to find the answer asap.

Simply type your query without any vowels. So that’s wtht vwls.

This works for me: it’s pretty quick to figure out which letters not to press, and with practice you’ll become much faster at typing queries this way. Generally I find that most searches bring back the correct results, and this improves as query length increases. All those little stop words tend to get thrown out of the mix anyway.

I wonder, though, were this kind of simple mechanic applied on the server side of a major search engine, whether they’d see a performance improvement and what that would do for search quality. I’m not sure, but I suspect it may also improve searches for misspelled queries. If anyone has seen any research on this – please let me know, I’m intrigued!
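If you want to see just how mechanical the trick is, here’s a rough Python sketch of the idea – purely my own illustration, keeping any word that would vanish entirely:

```python
VOWELS = set("aeiouAEIOU")

def devowel(query: str) -> str:
    """Strip the vowels from each word of a search query."""
    words = []
    for word in query.split():
        stripped = "".join(ch for ch in word if ch not in VOWELS)
        words.append(stripped or word)  # keep words like "a" or "I" intact
    return " ".join(words)

print(devowel("search without vowels"))  # -> "srch wtht vwls"
```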


SEO Truth – Apparently I’ve Written A Book On SEO

Hello!

As usual I’ve been spending a horrendously long time without writing anything on my blog – and for that I apologise. However, I have spent some of my time writing an SEO (Search Engine Optimisation) handbook, covering the importance of next-generation techniques and practices.

I’m sure there are those of you who are all too familiar with the increasingly backwards approaches used by a few ‘special’ SEO agents and individuals out there, and perhaps for you this will merely reinforce what you already knew to be true. For those of you who don’t know what I’m talking about – then please read the book and have a good laugh at yourself for being such a silly.

You can order print copies of the book – just not yet… more details on that coming soonly! I’ll be publishing online chapter by chapter (honestly, I have finished writing it, but as an SEO, if I didn’t serialise it then it would look bad).

Enjoy the read and let me know what you think. If the first edition is terrible and you order anyway – well, of course it’s going to be valuable in 200 years!

Preface

First off, I’d like to introduce myself. I’m a Search Engineer, a developer and a programmer. I’ve worked with clients throughout the advertising industry at many different companies. My specialty is developing software that works with the search engines of companies like Google, Yahoo and MSN and attempts to influence the rankings of my clients’ websites, as well as report on those ranking changes. I’ve never been to a lecture on computer science or read a book on development methodology, and yet I’m in demand. My skills lie in understanding the technology of a search engine and how to capitalise on its ranking algorithms, web crawlers and content filters, and it’s the ideas I generate in this area which have kept me in gainful employment.

SEO (Search Engine Optimisation) used to be a fairly simple task where you’d make sure every page on your client’s site had Meta tags, descriptions and content unique to that page. You might then try to analyse the keyword density of your key terms to keep it somewhere between 4 and 7 percent. More often than not, most SEO companies wouldn’t even attempt that.

What most SEO companies would never tell you – and this is the industry’s best-kept secret – is that they’re intrinsically lazy. If you had a good client, with good content and a product of interest, then their SERs (Search Engine Rankings) would climb entirely naturally to the top spots; you’d have nothing to do but sit back and reap the benefits of your lack of work.

This is of course a sad state of affairs which no real SEO company would allow, and part of this book will help you to spot the difference between a professional outfit and rank amateurs, and to define the widening gap between the two camps.

As the title suggests, I’m writing about the next generation of SEO. It’s becoming more difficult to increase the rankings of a particular website, and it will only get more difficult to manipulate a website’s ranking without any understanding of how new search engine technology works. Lucky for you, my field is semantics (essentially, how to correlate the relationship between one word and another) and you’re in for a whole chapter on manipulating a semantic index similar to those increasingly used by the major search engine players.
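To give a flavour of what correlating one word with another can look like, here’s a toy Python sketch built on sentence-level co-occurrence – my own illustration of the general idea, not any engine’s actual implementation:

```python
from collections import Counter
from itertools import combinations
import math

def cooccurrence_vectors(sentences):
    """Count how often each pair of words shares a sentence."""
    vectors = {}
    for sentence in sentences:
        for a, b in combinations(set(sentence.lower().split()), 2):
            vectors.setdefault(a, Counter())[b] += 1
            vectors.setdefault(b, Counter())[a] += 1
    return vectors

def similarity(vectors, w1, w2):
    """Cosine similarity between two words' co-occurrence vectors."""
    v1, v2 = vectors.get(w1, Counter()), vectors.get(w2, Counter())
    dot = sum(v1[k] * v2[k] for k in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

docs = ["the spider spins a web",
        "the spider catches flies in its web",
        "browse the web with a browser"]
vectors = cooccurrence_vectors(docs)
print(similarity(vectors, "spider", "web"))  # related words score above zero
```

Words that keep turning up in the same contexts end up with similar vectors – that’s the relationship a semantic index captures.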


Chapter 1 – The Past

In order to proceed correctly in the future, the most important lesson is for us to understand what happened historically. There’s no shortage of information on the internet, and amongst SEOs and webmasters, about how Google’s original PageRank system worked. This is in large part thanks to a paper written by Google’s founders, Larry Page and Sergey Brin, whilst they were still studying for their PhDs at Stanford University. Not long after that they received their first investment from Andy Bechtolsheim, a co-founder of Sun Microsystems, which enabled them to build upon the hardware they had in their university dorm room and create the international phenomenon we know today.

PageRank was essentially a very simple system. It counted each link from one site to another as a vote for the destination site, and by voting for another site the original gave away some of its own PageRank. The idea came from Salton’s Vector Space Model, a mathematical principle known to most Computer Science graduates today. This simple method of calculating which websites had the most votes, and therefore deserved higher rankings, is key to all search engine algorithms, as it’s extremely fast to calculate. The most important factor in any search engine is its speed in returning and ranking results, especially when you’re dealing with an index of billions of pages.
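Here’s that vote-counting as a minimal Python sketch – a toy power iteration over a hand-made link graph, not Google’s actual code (it ignores details like dangling pages):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively share each page's rank out along its outbound links."""
    pages = set(links) | {d for outs in links.values() for d in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            for dest in outs:
                # each link is a vote: the source gives away a share of its rank
                new_rank[dest] += damping * rank[page] / len(outs)
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c" collects the most votes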

[Figure: The Anatomy of a Search Engine, based on the work of Larry Page and Sergey Brin whilst at Stanford.]

If you understand that all calculations undertaken by a search engine must be as fast as possible, it allows you to draw logical conclusions:

  • Thinking about a page as a machine would (which struggles to actually understand rather than just read), rather than as a human would, is key to analysing your website’s content for SEO value.

  • Is every single underlined heading, keyword colour, font size, image location, keyword relationship and page title length analysed when a page is crawled? It’s highly doubtful that anything too in-depth is going to be indexed when the crawler has another hundred thousand pages to visit and rank as quickly as possible – use some common sense here. Of course, as processor speeds and bandwidth increase, more in-depth analysis will become possible in a shorter space of time.

  • The search engine needs to maximise two things: the speed of its calculations and its measure of relevancy. Occasionally one is going to suffer at the expense of the other – if you had to choose between indexing a page poorly or not at all, which would you do?

SEOs in the past were able to capitalise on this speed issue by concentrating on areas of a page such as the Meta tags, description and page title. The content itself gradually became more important as time went on, but was still subject to the speed of indexing. SEOs quickly realised that keyword density (how many times a keyword appears on a page out of the total number of words) was a very quick way to determine some kind of relevancy, and that the search engines were using it too.

Once the search engines got wise, they implemented filters that stopped SEOs from flooding a page with keywords. Arguments in the SEO community followed over exactly what the ideal keyword density for a term was, and this usually settled somewhere between 4 and 7 percent.
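The calculation everyone was arguing over is trivially simple – a quick illustrative sketch in Python (my own, not any agency’s actual tool):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of the keyword as a percentage of all words on the page."""
    words = re.findall(r"[\w']+", text.lower())
    if not words:
        return 0.0
    return 100.0 * words.count(keyword.lower()) / len(words)

page = "cheap flights to spain cheap flights deals book cheap flights today"
print(round(keyword_density(page, "cheap"), 1))  # 3 of 11 words -> 27.3
```

A page like that toy example would sail past 7 percent and straight into keyword-stuffing territory – precisely what the filters were built to catch.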

Of course, the PageRank model meant that agencies were keen to build as many links to their client websites as possible. To make matters worse, they were after links that already had high PageRank values, to gain the maximum ranking as quickly as possible, and this spawned a cottage industry of people generating high-PageRank links purely to sell on. Google, of course, were unhappy about this, and their anti-spam team began its work. Blacklisting of websites which ‘farmed links’ became fairly common, and this moved on to other aspects of ‘black hat’ SEO behaviour – where an unfair advantage was being gained by some nefarious companies and individuals.

Most SEO agencies at this stage relied heavily on staff who’d be subjected to some extremely tedious and repetitive labour. Going through page after page of a website, adjusting the number of keywords on each page and slightly changing every page title and Meta tag was a boring job, and not a well-paid one.

Directors and CEOs didn’t have a whole stack of problems though: if they kept building up link relationships with ranking websites and made sure their Meta tags were in place, their job was done. Often enough they’d have clients who already had an interesting product which did most of the work itself, spreading links around the internet as people registered their interest.

This natural traffic increase was exactly what Google was looking for, as they wanted sites which progressed on their own merits rather than trying to beat the system.



I Bought A New Wig Today

Is the name of my new blog on the slightly odd spam that Akismet catches for me.

We need to relate to the spammers in order to understand their needs, and you can do so right here.


Sunbeam Is Your Search Engine

[Image: Sunbeam – The First User Search Engine]

In previous versions (for those of you lucky enough to see the Alpha of the world’s first search engine to run directly from the user’s own desktop), Sunbeam would ask you to input your favourite websites as a starting point for its indexing routines. This was a problem for two reasons:

  1. Nobody ever wants to enter anything they don’t have to, especially when that information exists somewhere on their machine.
  2. It limited the ‘profile’ of the user initially available to Sunbeam and how quickly they’d be able to retrieve information actually relevant to them.

It also meant that the semantic engine in the earliest release was not capable of returning accurate matches until it had cranked up and indexed at least a few hundred pages.

I’d been musing over these problems for a while. I wanted an experience where the user could just install the program and let it do its work, without going through any configuration screens which they might not understand or which might put them off the install completely.

The solution, as it turned out, was fairly simple. Using the browsing history of the user, we can track down the URLs that are visited most frequently and most recently without damaging privacy – after all, these are just starting points to build a profile of interests. Data like this is a goldmine for Sunbeam’s advanced statistical algorithms and will enable it to deliver results that mimic the language used in the websites in your browsing history.
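As a rough illustration of that kind of scoring – assuming the history comes out as (URL, timestamp) pairs, which is my simplification rather than Sunbeam’s real schema – frequency and recency can be folded into one number like this:

```python
from datetime import datetime

def seed_urls(visits, now=None, half_life_days=30.0, top_n=10):
    """Score history entries by frequency, discounting older visits.

    `visits` is a list of (url, visited_at) tuples; a visit half_life_days
    old counts for half as much as a visit made just now.
    """
    now = now or datetime.now()
    scores = {}
    for url, visited_at in visits:
        age_days = (now - visited_at).total_seconds() / 86400.0
        scores[url] = scores.get(url, 0.0) + 0.5 ** (age_days / half_life_days)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

The top of that list becomes the crawler’s starting points: no questions asked, no configuration screens.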

It doesn’t stop there though: also added are routines that scan your Outlook sent messages, tracking the semantics of your own typed words. These, again, are not stored as complete messages anywhere in the system, nor tied to email addresses or even subject lines – privacy here is key. What is most important is that you as a user will never have to go through a slew of irritating questions when you install Sunbeam, inadequately attempting to locate and dissect your interests.

Seeing as I expect privacy to be such an issue here, let’s turn to another reason to use Sunbeam over Google or Yahoo:

  • Your searches are your own.
  • Your data will never be sent anywhere else (there isn’t the server space for it!).
  • If you choose to share your search database with anyone else (as easy as emailing the one file), then that’s completely up to you and not something you have to ‘opt-in’ to.

This software is entirely your own to play with; these are the things I’m really loving about it:

  • You can play with the open source search algorithm.
  • You can swap, share and amalgamate databases with friends or download one from the web.
  • There are no adverts, no pop ups and no interruptions.
  • If you don’t remember the exact word you’re looking for, just put in a similar one, or a descriptive phrase.
  • If you want to use the same database when you get home, just mail it to yourself.
  • If you don’t like the results you’re getting, run separate databases for work and home to match your corporate and downtime moods.
  • If you have to do market research on teenagers, just use the database your nephew compiled.


Fear Of Google: As Seen On Google Timeline!

UPDATE:

It would appear Valleywag’s Nick Denton is lacking a sense of irony, and unfortunately I seem to have had my commenting privileges revoked there now. Shame. He’s thoughtfully left this little nugget, seemingly ending the argument with a resounding slap to my pride:

“Hey, Phil, I don’t mind being slagged off. Comes with the job. But you didn’t do it very effectively. One could make the point that mentions of Google itself have become more frequent. But sensationalism? I don’t think you proved your point”

What’s that, Nick? You can’t hear my answer from all the way over there because you blocked my account? Never mind. Sensationalist articles, Nick, seeing as you are unaware, are those that are published without any proof behind them. So I put together my own sensationalist article on your sensationalist article, and it appears you lack a sense of humour. Fortunately you’re unable to prove to me that you have one, because that’d mean you’d written something of substance. Unlike you, Nick, I won’t delete or remove negative comments, even though I rate my blog above a tabloid, so feel free to hurl insults from below if you wish.

THE ORIGINAL ARTICLE:

I saw over on Valleywag that they’ve written yet another hack piece on the so-called Fear Of Google, with the standard sensationalism and lack of humour. They’ve even drawn a pretty graph, from data they collated out of the Nexis newspaper database, showing their spectacular lack of knowledge of current Google events.

Being a bit of a dry and sarcastic git, I present to you Fear Of Google: As Seen On Google Timeline! – a representation of how Google itself sees the phenomenon.

[Image: Fear Of Google – As Seen On Google Timeline!]

If I were you, I wouldn’t bother reading Valleywag’s article – go and read what Scoble says instead.

Personally I have no fear of Google (though I am typing this in the stationery cupboard, but that’s because of my love of pens) and instead feel an increasing need to criticise them rather than run in fear. Then again, people react in the same way to governments, and it’s surprising that a company can approach that level.


I Think I Just Invented The Real Search 2.0

Ignore the numbers in the title for the moment if you will and focus on these keywords: social, networking, search, community.

Web 2.0, by many definitions, is all about allowing users to network and interact – the read/write web. Search 2.0, in that context, does not yet exist. There are, in some instances, communities that happen to be built around a search engine, such as Yahoo, and there are new semantic search engines that let users tag pages and documents to be found (something I’ve talked about before and pointed out as next to useless). None of these lets the users actually interact with which results are returned. There is no networking or interaction that takes place with the search engine itself, and this is just plain wrong.

Do you know how many people are on the internet at any one time? I sure as hell don’t but it’s a big number 🙂

Who creates all the content that ends up on the internet anyway? It’s not machines; it’s people. Human beings are ultimately responsible for all the content on the internet, and that’s never going to change. So why are we asking machines about content created by fellow humans, when most of those humans are online anyway and know far more about their subject – and where it’s covered on the internet – than any machine is ever likely to?

Enough rhetorical questions. I’m going to tell you what the real Search 2.0 is, and you’re going to shout at me and tell me I should have patented it and that I’m a fool. However, if you don’t hire me to build it for you then you’re a fool, because I get these ideas on a daily basis and I will crush you at some point in my life. Just kidding – I’m a fan of open ideas as well as open source, especially when they’re for the benefit of us all.

Search 2.0

  • An instant messenger application or website with a live AJAX interface forms the centrepiece of the front end.
  • Users create accounts and select their areas of interest by entering specific key phrases for those topics they feel most knowledgeable about.
  • Users can then also select web pages that match those highly specific key phrases, if they choose to.
  • The search box appears as normal; you enter your query and the fun part of Search 2.0 begins.
  • Your query is analysed against users on the system: what occurs at this stage is actually a search for the users whose key areas best match your query (see the sketch after this list).
  • If these users are online they can respond directly to your query, either suggesting a web link or entering a chat with you.
  • If no users are matched online, then the suggested web pages are searched for the best matching content.
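Here’s that user-matching step as a minimal sketch – plain token overlap between the query and each user’s declared key phrases; the scoring is just my back-of-an-envelope version of it:

```python
def match_experts(query, users):
    """Rank users by how much of the query their key phrases cover."""
    terms = set(query.lower().split())
    scores = {}
    for name, phrases in users.items():
        covered = {w for phrase in phrases for w in phrase.lower().split()}
        overlap = len(terms & covered)
        if overlap:
            scores[name] = overlap / len(terms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

experts = {
    "alice": {"semantic search", "information retrieval"},
    "bob": {"guitar repair"},
}
print(match_experts("semantic search engines", experts))  # alice first
```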

That’s the basis of it, but let’s have a look at the immense social power here.

Firstly, you get to rate the responses you receive, meaning that people can gain a reputation score for specific subjects and topics, giving them online credibility for those topics.
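How might that reputation score be tracked? A minimal sketch, assuming a simple running average per user and topic (the bookkeeping is entirely my own guess):

```python
from collections import defaultdict

class Reputation:
    """Track an average rating per (user, topic) pair."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def rate(self, user, topic, rating):
        """Record a rating (say, 1 to 5) for a user's answer on a topic."""
        self.totals[(user, topic)] += rating
        self.counts[(user, topic)] += 1

    def score(self, user, topic):
        n = self.counts[(user, topic)]
        return self.totals[(user, topic)] / n if n else 0.0

rep = Reputation()
rep.rate("alice", "semantic search", 5)
rep.rate("alice", "semantic search", 4)
print(rep.score("alice", "semantic search"))  # 4.5
```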

Sorry we just had an office fly-hunting session. Don’t ask.

Right, where was I? This system is, by its nature, very low-spam. It can’t be manipulated to provide results that are less useful, because if you try to peddle a corporate product that’s crap, your reputation will drop very quickly and you’ll be banned. If the product is good, on the other hand, then who’s going to mind being directed to it when it answers their specific need? That’s better advertising than any money’s going to get you.

This concept is all about the users. No massively complicated algorithms need writing here; it simply uses the very advanced and articulate knowledge of the very people who create the content you’re looking for. And to get the best answer you’ll ever get from a search engine, is it not worth answering a couple of questions every now and again about the subjects you enjoy?

You can also bookmark people, just like in any other IM, and make friends with people holding the same interests – people you’d never meet on any other social network, and certainly would never think to find through a search engine.