In the early days of the internet, finding content was a challenge. Even though hundreds of people were creating new websites every day, unless you knew their URLs, there was no way to find them.
In those early days, I used a protocol called Gopher to access remote content. Servers called Gopherholes provided rudimentary search functionality by combining various online resources into a unified system that users could navigate in order to find what they were looking for. There was also a primitive search engine called Veronica that allowed us to search for information across multiple Gopherholes.
While all of this was better than memorizing a string of URLs, it was still very hard to find what you needed. The World Wide Web gave rise to a whole new breed of search engines. Websites like AltaVista, Ask Jeeves, and Yahoo tried to create a better search experience by comprehensively indexing as many websites as they could, using techniques that librarians had used for decades to organize books.
This approach, however, was largely ineffective at finding relevant information—particularly given the massive amount of content being uploaded onto the internet. Then came Google’s PageRank algorithm, which ranked individual webpages based on the number of links they received from other websites. This allowed us to algorithmically determine the ‘importance’ of a page, thus offering us a far more effective way to find exactly what we were looking for.
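To make the idea concrete, here is a minimal sketch of how a PageRank-style score might be computed: each page's score depends on the scores of the pages linking to it, refined over repeated passes (power iteration). The toy link graph, the damping factor, and the function name are all illustrative assumptions, not details from the article or Google's actual implementation.

```python
# Toy PageRank sketch: a page's rank is built up from the ranks of the
# pages that link to it, iterated until the scores settle.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start with equal scores

    for _ in range(iterations):
        # Every page keeps a small baseline score, then receives shares
        # of rank from the pages that link to it.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A page with no outgoing links spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    # Hypothetical four-page web: C is linked to most often, so it
    # ends up with the highest score.
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

Running this on the toy graph shows why the approach works: pages that many other pages point to accumulate more rank, which is exactly the signal Google used to surface the most relevant results.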