Crawling
Google uses software known as web crawlers to discover publicly available webpages. The best-known crawler is called Googlebot. Crawlers visit webpages, follow the links they find there, go from link to link, and bring data about those pages back to Google's servers.

Google then gathers the pages collected during the crawl and builds an index, much like the index in the back of a book. The Google index records words and the locations where they appear. When we search, at the most basic level, Google's algorithms look up our search terms in the index to find the appropriate pages.
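The sketch below shows the crawl-then-index idea in miniature. It uses a tiny hard-coded "web" instead of real HTTP fetches; the URLs, page text, and links are made up purely for illustration and are not how Googlebot actually works.

```python
from collections import defaultdict

# Toy "web": URL -> (page text, outgoing links). A real crawler would
# fetch these over HTTP; here they are hard-coded for illustration.
PAGES = {
    "https://example.com/a": ("google crawls the public web",
                              ["https://example.com/b"]),
    "https://example.com/b": ("crawlers follow links between pages",
                              ["https://example.com/a", "https://example.com/c"]),
    "https://example.com/c": ("the index maps words to page locations", []),
}

def crawl(seed):
    """Follow links breadth-first, starting from a seed URL."""
    seen, queue = set(), [seed]
    while queue:
        url = queue.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        _, links = PAGES[url]
        queue.extend(links)          # go from link to link
    return seen

def build_index(urls):
    """Inverted index: word -> set of pages containing that word."""
    index = defaultdict(set)
    for url in urls:
        text, _ = PAGES[url]
        for word in text.split():
            index[word].add(url)
    return index

index = build_index(crawl("https://example.com/a"))
print(index["links"])   # pages where the word "links" appears
```

Looking a word up in the index is then a cheap dictionary lookup, which is why the crawl-and-index work is done ahead of time rather than at query time.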
Algorithm
Algorithms are the computer processes and formulas that take our queries and turn them into answers drawn from an enormous number of webpages. Google uses the PageRank algorithm, developed by its founders Sergey Brin and Larry Page. Today Google's algorithms rely on more than 200 unique signals, including the terms on websites, the freshness of content, and our region, which make it possible to guess what we might really be looking for.
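As a rough illustration of the PageRank idea (just one of those 200+ signals), here is a small power-iteration sketch over a made-up link graph. The damping factor and the graph are illustrative assumptions, not Google's actual parameters.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank.
    links: page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                # A page passes its rank along its outgoing links.
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))   # "c", linked from both "a" and "b", scores highest
```

The intuition is simply that a link counts as a vote, and votes from highly ranked pages count for more.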
Search algorithms take a query (usually a set of words) and return a set of results related to those words.
In Google's case, that set of results is a list of links to webpages that, hopefully, answer your query or provide relevant information.
You can think of the whole process as a function F that maps a query to ranked results; the magic happens inside that 'F' function.
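One way to picture that 'F' function as code is a toy search routine that looks up query words in an inverted index and ranks the matching pages. The index and rank structures here follow the earlier sketches and are assumptions for illustration, not Google's real data or scoring.

```python
from collections import defaultdict

def search(query, index, rank):
    """A toy stand-in for the 'F' function: look up each query word in an
    inverted index, then order matching pages by how many words matched,
    breaking ties with a PageRank-style score."""
    matches = defaultdict(int)
    for word in query.lower().split():
        for page in index.get(word, set()):
            matches[page] += 1
    return sorted(matches,
                  key=lambda p: (matches[p], rank.get(p, 0.0)),
                  reverse=True)

# Tiny hand-built index and scores, just to show the call shape.
index = {"crawlers": {"/b"}, "follow": {"/b"}, "index": {"/c"}}
rank = {"/b": 0.4, "/c": 0.6}
print(search("crawlers follow links", index, rank))   # -> ['/b']
```

Real ranking combines many more signals than word matches and link scores, but the shape of the function, query in, ordered results out, is the same.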
What information Googlebot collects
- The title of the page
- How recently it has been updated
- How fast it loads
- What words are on the page
- How many and what kind of images are on the page
- What topics the page covers
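One way to picture the kind of per-page record a crawler might assemble from these signals is a simple data structure like the sketch below. The field names and example values are hypothetical, not Google's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PageRecord:
    """Hypothetical record of the signals listed above."""
    url: str
    title: str
    last_updated: str              # e.g. an ISO date such as "2024-05-01"
    load_time_ms: int
    words: List[str] = field(default_factory=list)
    image_count: int = 0
    topics: List[str] = field(default_factory=list)

page = PageRecord(
    url="https://example.com/a",
    title="How crawling works",
    last_updated="2024-05-01",
    load_time_ms=320,
    words=["crawling", "index", "links"],
    image_count=2,
    topics=["search", "crawling"],
)
print(page.title, page.load_time_ms)
```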