Monday 1 September 2014

What are Search Engines, Crawling and Indexing?

Search Engine Optimization
A Search Engine is a program that searches a database for items matching keywords supplied by the user, most commonly to find specific sites, blogs, forums, and other resources on the WWW (World Wide Web). It is a software system designed and maintained to locate information on the World Wide Web. The results are usually presented as a list, commonly referred to as SERPs, or Search Engine Results Pages. The information returned can be of many types, such as images, written documents, videos, or other kinds of files. Almost all search engines keep their information close to real time by continuously running crawlers and indexing algorithms.

What is Searching?

A Search Query, or Web Search Query, is a term or set of keywords entered into a web search engine to satisfy an individual's information need. It may be plain text or hypertext, and it can range from simple free text to standard query languages that are governed by strict syntax rules, like command languages with defined keywords.
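Before a query can be matched against anything, the engine typically normalizes it into keywords. The sketch below is a minimal illustration of that step; the function name and the tiny stop-word list are my own assumptions, not taken from any particular engine:

```python
def tokenize_query(raw_query):
    """Split a raw query string into normalized keyword tokens."""
    stop_words = {"the", "a", "an", "of", "for", "to"}  # illustrative subset
    tokens = raw_query.lower().split()
    # Strip surrounding punctuation and drop very common words.
    tokens = [t.strip(".,!?\"'") for t in tokens]
    return [t for t in tokens if t and t not in stop_words]

print(tokenize_query("The best car for a family"))  # -> ['best', 'car', 'family']
```

Real engines go much further (stemming, spelling correction, synonym expansion), but the idea is the same: reduce the raw text to comparable keywords.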

What are the categories of Searching?

Most web search queries fall into three categories: transactional, informational, and navigational.

Transactional
Queries where the user intends to perform a specific task, such as buying a car or downloading a screen saver.

Informational
Queries that cover a broad topic for which there may be thousands of relevant results.

Navigational
Queries that seek a single specific website, blog, or web page. For instance, YouTube.
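To make the three categories concrete, here is a toy heuristic classifier. Real engines use far richer signals (click logs, machine-learned models); the cue words below are purely illustrative assumptions:

```python
def classify_query(query):
    """Very rough heuristic mapping a query to one of the three categories."""
    q = query.lower()
    transactional_cues = ("buy", "download", "order", "purchase")
    navigational_cues = ("youtube", "facebook", "login", ".com")
    if any(cue in q for cue in transactional_cues):
        return "transactional"
    if any(cue in q for cue in navigational_cues):
        return "navigational"
    # Broad-topic queries are the default case.
    return "informational"

print(classify_query("buy a car"))          # -> transactional
print(classify_query("youtube"))            # -> navigational
print(classify_query("history of search"))  # -> informational
```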

What is Web Crawling?

A Web Crawler is an Internet bot that systematically browses the WWW (World Wide Web), typically for the purpose of web indexing. A web crawler is sometimes known as a Web Spider, an Ant, an Automatic Indexer, or a Web Scutter, but it is most popularly known as a Web Crawler. All the giant search engines use crawling software to keep their own web content up to date and to index the content and information of other websites. Crawlers can also validate HTML and hyperlinks, and they are used for web scraping as well.
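At its core, a crawler repeats one loop: fetch a page, extract its links, and queue any unseen links to fetch next. The sketch below captures that loop with a pluggable `fetch` function, which is an assumption of this sketch (so the demo can run against a small in-memory "web" rather than live HTTP):

```python
import re
from collections import deque

def crawl(start_url, fetch, max_pages=10):
    """Breadth-first crawl: fetch pages, extract links, queue unseen URLs."""
    seen = {start_url}
    queue = deque([start_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        html = fetch(url)  # fetch() is injected; in practice it would wrap an HTTP client
        visited.append(url)
        # Naive link extraction; a real crawler would use a proper HTML parser.
        for link in re.findall(r'href="([^"]+)"', html):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

# Demo against a tiny in-memory "web".
fake_web = {
    "a.html": '<a href="b.html">B</a> <a href="c.html">C</a>',
    "b.html": '<a href="a.html">A</a>',
    "c.html": "",
}
print(crawl("a.html", fake_web.get))  # -> ['a.html', 'b.html', 'c.html']
```

Production crawlers add politeness rules (robots.txt, rate limits), URL normalization, and persistent frontiers, but the fetch-extract-queue cycle is the same.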

What is Indexing?

A Search Engine Index collects, parses, and stores data so that information can be retrieved quickly and accurately. Index design draws on concepts from several branches of knowledge, such as linguistics, cognitive psychology, informatics, mathematics, physics, and computer science. Put more simply, the process by which a search engine finds and organizes web pages on the internet is called web indexing, or Search Engine Indexing. Almost all popular search engines rely on full-text indexing of online documents written in natural language. The indexed documents can be of several types: text content, videos, audio, and so on.
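The central data structure behind full-text indexing is the inverted index: a map from each word to the documents that contain it. A minimal sketch follows; the document set and function names are illustrative assumptions:

```python
from collections import defaultdict

def build_index(documents):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query (AND search)."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

docs = {
    1: "search engines crawl and index the web",
    2: "a crawler browses the web",
    3: "indexing stores parsed data",
}
index = build_index(docs)
print(search(index, "the web"))  # -> {1, 2}
```

Because the index is built once and then consulted per query, lookups avoid rescanning every document, which is what makes retrieval fast at web scale.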


