SEO – How Search Engines Work
I asked a kid for the definition of the Internet and he replied – “Google is the Internet.” It was an answer I wasn’t expecting at all, but it made me wonder about the importance of search engines and the wonders they have brought into everyone’s life. Search is one of the biggest inventions after the Internet itself. In fact, the Internet and Google are so tightly linked that anyone could easily confuse the two. Google is our default homepage, the first page from which we find our way on the Internet, and the most viewed page on the web. That is why there is a need to learn SEO & How Search Engines Work.
Ever imagined a life without search engines? How would we find the content we want? What would happen to the flow of information? The world would simply shrink. There would be no common place from which you could easily find the information you want; people would have to maintain separate directories for different websites, and it would all become unmanageable and time-consuming. Thankfully, we have search engines to answer our queries.
How Search Engines Work
Two jobs are at the heart of every search engine. The first job of a search engine is to CRAWL and the second is to INDEX. Let’s understand each one by one.
The Internet is a big network consisting of millions of websites. Each website has links connecting it to other pages or websites on the same network.
Imagine you are touring a city. You need some kind of transport to move from one place to another. Consider each stop a unique document (i.e., a webpage, a document, etc.). You need a way to cover the entire city and visit all the stops. The best way to accomplish this is the buses (LINKS) that take you from one stop to the next. This is how search engines work.
Search engines crawl your webpage and every interconnecting link present on that page. They do it much like a human would, only far faster. This is how search engines visit each and every link on your website.
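At its core, that crawling step is just "fetch a page, collect its links, follow them". Here is a minimal sketch of the link-collecting half using only the Python standard library; the page snippet and URLs are made up for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Relative links are resolved against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

# A tiny page standing in for one "stop" on the tour.
page = '<a href="/about">About</a> <a href="https://example.org/">Out</a>'
parser = LinkExtractor("https://example.com/")
parser.feed(page)
print(parser.links)  # the links a crawler would queue up and visit next
```

A real crawler repeats this in a loop: pop a URL from the queue, download it, extract its links, and push any unseen ones back onto the queue.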
Crawling is the process of discovering the different pages on the internet. Once the pages are crawled and their information has been retrieved, that information is saved in a massive database. Indexing is the process of storing this information in a structured way so that it can be retrieved efficiently later. It is similar to indexing library books by author, publication date and other such information, but done on a very large scale. You simply cannot imagine the amount of indexing done each minute. Here are the stats –
- In 1999, it took Google one month to crawl and build an index of about 50 million pages.
- In 2012, the same task was accomplished in less than one minute.
Can you imagine the indexing rate now? Unimaginable.
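The structure behind this kind of indexing is usually an inverted index: instead of mapping pages to words, it maps each word to the pages containing it, so a query never has to scan every document. A toy sketch, with a hypothetical two-document corpus:

```python
from collections import defaultdict

# Hypothetical mini-corpus: document id -> text.
docs = {
    1: "search engines crawl the web",
    2: "engines index pages for fast search",
}

# Inverted index: each word maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def lookup(query):
    """Answer a query by intersecting the posting sets of its words."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*sets) if sets else set()

print(sorted(lookup("search engines")))  # documents matching both words
```

Real indexes also store word positions, frequencies and metadata, but the lookup principle is the same.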
The amount of data produced needs to be structured and stored somewhere so that it can be processed efficiently later. Complex algorithms running on thousands of servers located worldwide process queries to provide relevant search results to the user. If you are curious to know more, then read Google Search Statistics.
Relevance and Popularity
In the early days of search engines, finding a relevant search result meant little more than finding matching keywords. As search engines advanced, they became more intelligent than ever. Engineers devised better algorithms, and the relevance of content is now affected by hundreds of different factors.
In the early stage, relevance was determined by analysing the frequency of keywords used in an article. This was exploited by overusing keywords, a practice known as “Keyword Stuffing”. The abuse was addressed by inventing new ways to rank pages. One such way is to count the number of backlinks to a website: the more referrals a website gets, the more relevant it is thought to be. But this algorithm was also gamed, and the number of links started to grow immensely all around the web, resulting in bad search results.
The algorithm was further modified to weigh each individual link. Suppose your website is referred to by a reputable website that ranks high and is related to the context of your article; that link matters more to the search engine than one from a lesser-known website. Fake links, therefore, no longer work.
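The idea of weighing links by the rank of the page they come from is the intuition behind PageRank-style algorithms: a link passes on a share of the linking page's own score, so a link from a high-ranked page counts for more. A simplified sketch on a made-up four-site link graph (the domain names are invented):

```python
# Toy link graph: page -> pages it links to.
links = {
    "reputed.com":   ["yoursite.com"],
    "spammy.net":    ["yoursite.com", "othersite.com"],
    "yoursite.com":  [],
    "othersite.com": [],
}

def rank(links, damping=0.85, iterations=20):
    """Simplified PageRank-style iteration over the link graph."""
    pages = list(links)
    score = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base score...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and each link donates a share of the linker's own score.
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * score[page] / len(outgoing)
        score = new
    return score

scores = rank(links)
print(sorted(scores, key=scores.get, reverse=True))
```

Because "yoursite.com" receives the full weight of the reputable site's link, it outranks "othersite.com", which only gets half of the spammy site's much smaller score. Google's real ranking uses far more signals, but this is the core of link weighting.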
Nowadays the algorithms are so complex and deal with so many different variables that they are very hard to game. They are kept hidden, as there will always be people trying to reverse-engineer them and use them in unfavourable ways. Google has managed to keep its algorithm secret and shrouded for years.
Bottom line – Make Content for People and not for Search Engines
How do you get some success rolling in your favour? Follow these guidelines from Google Webmaster.
- Primarily make content for users and not for search engines.
- Do not try to deceive search engines by showing them different content than you show users, a technique commonly known as “cloaking”.
- Make a website with a clear hierarchy of links that search engines can easily index. Provide a sitemap to the search engines; it reduces their work and helps them better structure your webpages.
- Make sure you follow the proper guidelines for creating valuable content. Follow this article, which will tell you about making your content SEO friendly: Tips for making your pages SEO Friendly.
- Use keywords to create more descriptive, human-friendly URLs. Avoid using stop words in the URL. Read more about stop words here.
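The last guideline, turning a title into a keyword-rich URL, can be sketched in a few lines. The stop-word list below is a small illustrative sample; real lists are much longer.

```python
import re

# Small illustrative stop-word list (real lists are much longer).
STOP_WORDS = {"a", "an", "the", "of", "for", "to", "in", "and"}

def slugify(title):
    """Turn a page title into a short, keyword-rich URL slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(w for w in words if w not in STOP_WORDS)

print(slugify("How to Make Your Pages SEO Friendly"))
# -> how-make-your-pages-seo-friendly
```

The resulting slug keeps the descriptive keywords and drops the filler, which is exactly what a human-friendly URL should do.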
Watch this video for a visual and audio presentation of the concept.
I hope I have made the concept clear for every budding individual who wants to know how search engines arrive at the most relevant and popular search results. If you want to add anything to the above content, do comment below.
Be My Aficionado 🙂