The work of a search engine is divided into three stages: crawling, indexing, and retrieval.
Crawling
This is the first step, in which a search engine uses web crawlers to discover webpages on the World Wide Web. A web crawler is a program, used by search engines such as Google, to build an index. It is designed for crawling: the crawler browses the web and stores information about the webpages it visits in the form of an index.
Search engines use these crawlers, or spiders, to perform crawling. A crawler's task is to visit a web page, read it, and follow its links to the other pages of the site. Each time the crawler visits a webpage, it makes a copy of the page and adds its URL to the index. It then revisits the site regularly, for example every month or two, to look for updates or changes.
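The visit-read-follow-links loop described above can be sketched in a few lines of Python. To keep the example self-contained and runnable, it crawls a small hypothetical in-memory "web" (a dict mapping URLs to HTML) instead of fetching over HTTP; a real crawler would download pages over the network and respect robots.txt.

```python
from html.parser import HTMLParser

# Hypothetical in-memory "web" for illustration: URL -> HTML.
PAGES = {
    "http://example.com/":  '<a href="http://example.com/a">A</a>',
    "http://example.com/a": '<a href="http://example.com/b">B</a>',
    "http://example.com/b": '<a href="http://example.com/">home</a>',
}

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(seed):
    """Breadth-first crawl: visit a page, store a copy, queue its links."""
    stored = {}          # URL -> copy of the page (the "index" of copies)
    queue = [seed]
    while queue:
        url = queue.pop(0)
        if url in stored or url not in PAGES:
            continue     # skip already-visited or unknown URLs
        html = PAGES[url]
        stored[url] = html           # make a copy of the page
        parser = LinkExtractor()
        parser.feed(html)
        queue.extend(parser.links)   # follow links to other pages
    return stored

print(sorted(crawl("http://example.com/")))
```

Starting from the seed URL, the crawler reaches all three pages even though each page only links to one other, because every discovered link is queued for a later visit.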
Indexing
After a page is discovered, Google tries to understand what the page is about. This process is called indexing. Google analyzes the content of the page, catalogs images and video files embedded on the page, and otherwise tries to understand the page. This information is stored in the Google index, a huge database stored in many, many (many!) computers.
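The core data structure behind such an index is typically an inverted index: a mapping from each term to the set of pages that contain it, so that a query can be answered without rescanning every page. A minimal sketch, using hypothetical page text for illustration:

```python
import re
from collections import defaultdict

# Hypothetical crawled pages: URL -> extracted text content.
DOCS = {
    "http://example.com/cats": "Cats are small furry animals",
    "http://example.com/dogs": "Dogs are loyal animals",
}

def build_index(docs):
    """Map each lowercase term to the set of URLs whose text contains it."""
    index = defaultdict(set)
    for url, text in docs.items():
        for term in re.findall(r"[a-z]+", text.lower()):
            index[term].add(url)
    return index

index = build_index(DOCS)
print(sorted(index["animals"]))  # term appearing on both pages
print(sorted(index["furry"]))    # term appearing on one page only
```

Looking up a term is then a single dictionary access, which is what makes retrieval over a huge database of pages fast.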
To improve your page indexing:
Create short, meaningful page titles.
Use page headings that convey the subject of the page.
Use text rather than images to convey content. (Google can understand some images and video, but not as well as it can understand text.)