A web search engine is a software system designed to search for information on the World Wide Web. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.
A search engine spider is a program that a search engine uses to seek out information on the web in an automated manner. A spider is also known as a bot, robot, or crawler. It follows links throughout the Internet, grabbing content from sites and adding it to the search engine's databases.
Crawling is the process by which the crawler, bot, or spider finds documents on the World Wide Web.
Web crawling is how the web spiders find documents on the web. A web spider typically starts crawling with URLs from popular websites such as Yahoo and MSN, extracts their outgoing URLs, and crawls those in turn. The spider crawls through the documents and runs them through a spam and duplicate-content filter before placing them in the web index.
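The crawl loop described above can be sketched in a few lines of Python. This is a minimal illustration over an in-memory link graph (the URLs and the `TOY_WEB` mapping are invented for the example); a real crawler would fetch pages over HTTP, parse links out of the HTML, and apply its spam and duplicate filters before indexing.

```python
from collections import deque

# A toy in-memory "web": each URL maps to the outgoing links on that page.
# (These URLs are invented; a real spider would fetch pages over HTTP.)
TOY_WEB = {
    "http://seed.example/": ["http://a.example/", "http://b.example/"],
    "http://a.example/": ["http://b.example/"],
    "http://b.example/": ["http://seed.example/", "http://c.example/"],
    "http://c.example/": [],
}

def crawl(seed):
    """Breadth-first crawl: visit a page, extract outgoing URLs, queue unseen ones."""
    seen = {seed}
    queue = deque([seed])
    order = []                      # pages in the order they were crawled
    while queue:
        url = queue.popleft()
        order.append(url)           # here a real crawler would filter and index the page
        for link in TOY_WEB.get(url, []):
            if link not in seen:    # skip URLs already queued (duplicate filter)
                seen.add(link)
                queue.append(link)
    return order

print(crawl("http://seed.example/"))
```

Starting from the seed, the loop discovers every reachable page exactly once, which is the essence of how a spider "follows links throughout the Internet."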
Indexing is the process of recording information for easy and quick retrieval upon a search query.
Indexing is the second major step a search engine takes to deliver information to your fingertips. A search engine must keep a copy of all the content it finds during the crawl process and store it in an index for quick retrieval. Without an index, a search engine would have to re-run the crawl process for every search query performed. Exactly how a search engine's web index is designed and maintained is fairly complex, and as an SEO you do not need to know all the technical details.
As SEOs we spend most of our time figuring out the ranking part of the process, but the basics of indexing should be known and understood.
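One of those basics worth knowing is the inverted index: rather than re-scanning every page per query, the engine maps each word to the pages that contain it, so retrieval is a fast lookup. The sketch below is a deliberately simplified illustration with invented sample pages, not how any real engine is implemented.

```python
from collections import defaultdict

def build_index(pages):
    """Build an inverted index: each word maps to the set of page IDs containing it."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

def search(index, query):
    """Return the pages that contain every word of the query."""
    words = query.lower().split()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())   # intersect: keep pages matching all words
    return results

# Invented sample pages for illustration.
pages = {
    "page1": "search engines crawl the web",
    "page2": "the web index enables fast retrieval",
    "page3": "crawl then index then rank",
}
idx = build_index(pages)
print(sorted(search(idx, "the web")))
```

Answering a query only touches the index entries for the query's words, which is why the engine never needs to re-crawl at query time.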
It is the date and time left by the spider when it last crawled a particular page of your website.
What is a Website?
A website is a set of related web pages served from a single web domain, hosted on at least one web server, and accessible via a network such as the Internet through an address known as a Uniform Resource Locator (URL). A web page is a document written with formatting instructions in Hypertext Markup Language (HTML or XHTML).
What is a Keyword?
A keyword is a word or phrase that searchers type into a search engine, and keyword research is considered an important aspect of the search marketing field. Ranking for the "right" keywords is a major factor in your website's success. While analyzing your market's keyword demand, you not only learn which terms and phrases to target with SEO, but also learn more about your customers as a whole.
What is Keyword Density?
Keyword density is the percentage of times a keyword or phrase appears on a web page, relative to the total number of words on the page. In the context of search engine optimization, keyword density can be used as a factor in determining whether a web page is relevant to a specified keyword or keyword phrase. Many SEO experts consider the optimum keyword density to be 1 to 3 percent; using a keyword more often than that can be considered search spam.
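The calculation itself is straightforward. One common way to compute it, sketched below, counts the words belonging to occurrences of the keyword phrase and divides by the total word count; the sample text and numbers are invented for illustration.

```python
import re

def keyword_density(text, keyword):
    """Words belonging to keyword occurrences, as a percentage of total words."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    kw = keyword.lower().split()
    n = len(kw)
    # Count occurrences of the keyword phrase as a run of n consecutive words.
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == kw)
    return 100.0 * hits * n / len(words) if words else 0.0

# Invented sample text: 8 words, the keyword "seo" appears 3 times.
sample = "seo tips and seo tricks for better seo"
print(round(keyword_density(sample, "seo"), 1))  # 3 of 8 words -> 37.5
```

A density of 37.5 percent, as in this tiny sample, is far above the 1 to 3 percent range mentioned above and would likely read as keyword stuffing on a real page.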
Keywords Research and Analysis
Keyword research is considered one of the most important, valuable, and high-return activities in the search marketing field.
Keyword research is the first step of analysis that any SEO practitioner should work on in a search engine optimization (SEO) project. It is an extremely important component of the overall analysis, since it is from keyword research that we determine the optimum keywords to rank for.
You really need to study the website you are optimizing as a whole, along with its business competitors, to get a feel for the industry and a clearer picture once you complete the keyword research.
You need to understand the audience you will be marketing to and the relevance of these keywords to that audience. There are also subtler differences, such as whether your business markets to consumers or to other businesses.
Google Sandbox effect
Whenever you build a new website and make it live, it will not appear in Google Search immediately. It may take two to three weeks, or even a month, to show in search results. Why? Because whenever search engines see a newly registered website, they do not index or save its pages immediately. Search engines check many factors, such as the quality of the website, its links, and its content. Basically, Google takes time to examine your website until it is mature enough to be allowed into the top-positions club. Many SEO experts have observed in practice that new sites, no matter how well optimized, do not rank high on Google, while on MSN and Yahoo they catch on quickly. So it's simple: it is never easy for newcomers to enter a market, and there are obstacles of different kinds. For newcomers to the world of search engines, that obstacle is called the sandbox.
What is PageRank?
PageRank is an algorithm used by Google Search to rank websites in its search engine results. PageRank is a way of measuring the importance of website pages.
According to Google: "PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites."
PageRank is a link analysis algorithm that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight assigned to a given element E is referred to as the PageRank of E, denoted PR(E).
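The computation of PR(E) can be sketched with the standard power-iteration formulation: each page receives a base share (1 − d)/N plus d times the rank flowing in from the pages that link to it, where d is the damping factor. The three-page graph below is invented for illustration, and the sketch ignores complications such as dangling pages with no outgoing links.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration for PR(E): each page gets a base share (1 - d)/N plus
    d times the rank flowing in from the pages that link to it."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}          # start with equal rank everywhere
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each linking page q shares its rank pr[q] evenly across its outlinks.
            incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        pr = new
    return pr

# Invented three-page graph: A links to B and C, B links to C, C links to A.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(links)
print({p: round(r, 3) for p, r in ranks.items()})
```

In this toy graph, C ends up with the highest PageRank because it receives links from both A and B, matching the assumption that pages with more (and better-ranked) inbound links are more important.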