Abstract of Working Of Web Search Engine

 

A search engine is an information retrieval system designed to help find information stored on a computer system. Search engines minimize both the time required to find information and the amount of information that must be consulted, akin to other techniques for managing information overload. The most visible form of a search engine is a Web search engine, which searches for information on the World Wide Web.

Engineering a web search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms, and they answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been conducted on them. Furthermore, due to rapid advances in technology and the proliferation of the web, creating a web search engine today is very different from creating one three years ago. There are differences in the ways various search engines work, but they all perform three basic tasks:

•  They search the Internet (or select pieces of it) based on important words.

•  They keep an index of the words they find, and where they find them.

•  They allow users to look for words or combinations of words found in that index, as shown in the sketch after this list.
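The second and third tasks can be made concrete with an inverted index: a table mapping each word to the set of pages on which it occurs. The following is a minimal sketch in Python; the sample pages and URLs are hypothetical, and a real engine would also store word positions and rank the results.

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return the pages containing every word of the query (AND semantics)."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Hypothetical pages, keyed by made-up URLs.
pages = {
    "http://example.com/a": "web search engines index pages",
    "http://example.com/b": "crawlers download web pages",
}
index = build_index(pages)
print(search(index, "web pages"))  # both pages match
print(search(index, "crawlers"))   # only /b matches
```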

The most important measures for a search engine are search performance, the quality of its results, and the ability to crawl and index the web efficiently. The primary goal is to provide high-quality search results over a rapidly growing World Wide Web. Some of the efficient and widely recommended search engines are Google, Yahoo and Teoma, which share some common features and are standardized to some extent.

Web crawlers are an essential component of search engines, and running a web crawler is a challenging task. There are tricky performance and reliability issues and, even more importantly, there are social issues. Crawling is the most fragile application, since it involves interacting with hundreds of thousands of web servers and various name servers, all of which are beyond the control of the system. Web crawling speed is governed not only by the speed of one's own Internet connection, but also by the speed of the sites to be crawled. Especially when crawling sites spread across multiple servers, the total crawling time can be significantly reduced if many downloads are done in parallel. Despite the numerous applications for Web crawlers, at the core they are all fundamentally the same. The following is the process by which Web crawlers work:

•  Download the Web page.

•  Parse through the downloaded page and retrieve all the links.

•  For each link retrieved, repeat the process (see the crawler sketch after this list).
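A minimal sketch of this loop, using only Python's standard library, appears below. The seed URLs, page limit, and worker count are illustrative assumptions; a production crawler would also honor robots.txt, rate-limit requests per host, and persist what it downloads. The thread pool reflects the point above: when pages live on many different servers, downloading in parallel significantly reduces total crawling time.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collect the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def fetch_links(url):
    """Download one page and return the absolute URLs it links to."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
    except Exception:
        return []  # remote servers are beyond our control; skip failures
    parser = LinkParser()
    parser.feed(html)
    return [urljoin(url, link) for link in parser.links]

def crawl(seed_urls, max_pages=100, workers=8):
    """Download pages, parse out their links, and repeat for each link."""
    seen = set(seed_urls)
    frontier = list(seed_urls)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier and len(seen) < max_pages:
            # Download the whole frontier in parallel to cut total time.
            batch, frontier = frontier, []
            for links in pool.map(fetch_links, batch):
                for link in links:
                    if link not in seen and len(seen) < max_pages:
                        seen.add(link)
                        frontier.append(link)
    return seen

# Example with a hypothetical seed: crawl(["http://example.com/"], max_pages=50)
```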
