Search Engine Optimization, Part 1: What Is SEO?

Search engines use "spiders" or "crawlers" to build an index of the webpages available, the words on each page, and where on the page those words are located. These automated programs begin on popular webpages and add the important words found on each page to the search engine's index. From there, they follow every link on the page and index the corresponding pages, then use the links on those pages to reach the next set of pages, and so on. This process is known as crawling.
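A crawler of this kind amounts to a breadth-first traversal over links. The sketch below is a minimal illustration under simplifying assumptions (raw regex link extraction, no robots.txt handling, no markup stripping), not a production crawler:

```python
import re
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: index the words on each page, then follow its links."""
    index = {}                        # word -> set of pages the word appears on
    queue, seen = deque([seed_url]), {seed_url}
    visited = 0
    while queue and visited < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url).read().decode("utf-8", errors="ignore")
        except (OSError, ValueError):
            continue                  # unreachable or malformed URL: skip it
        visited += 1
        # A real crawler would strip markup first; this sketch indexes raw text.
        for word in re.findall(r"[a-z]+", html.lower()):
            index.setdefault(word, set()).add(url)
        # Follow every link on the page to reach the next set of pages.
        for link in re.findall(r'href="([^"]+)"', html):
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index
```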

Each search engine uses different rules for determining which words are indexed and which words aren't. Some search engines index every word on the page. Others focus on the most common words, the words in titles and subtitles, and the first few lines of text.
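As a rough illustration of the selective approach, an indexer of the second kind might keep only title words, subtitle words, the most frequent body words, and the opening text. The function below is a toy sketch; its parameters and cutoffs are invented for the example:

```python
from collections import Counter

def select_index_terms(title, headings, body, top_n=100, lead_words=50):
    """Pick index terms the way a selective engine might: words from the
    title and subtitles, the most common body words, and the opening text."""
    terms = set(title.lower().split())
    for heading in headings:
        terms.update(heading.lower().split())
    body_words = body.lower().split()
    terms.update(w for w, _ in Counter(body_words).most_common(top_n))
    terms.update(body_words[:lead_words])   # the first few lines of text
    return terms
```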

Meta tags are keywords that describe a webpage's content but don't appear on the webpage itself. They can be used to clarify the meaning of words used in an article, prevent unwanted traffic, and optimize how search engines respond to a page. On HubPages, hubbers don't control the meta tags; HubPages uses them to control which hubs are searchable and which aren't.
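For illustration, meta keywords live in a page's head section and can be read with Python's standard HTML parser. The sample HTML here is invented:

```python
from html.parser import HTMLParser

class MetaKeywordReader(HTMLParser):
    """Collect the content of <meta name="keywords" ...> tags."""
    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "keywords":
            self.keywords += [k.strip() for k in attrs.get("content", "").split(",")]

page = '<head><meta name="keywords" content="SEO, crawling, indexing"></head>'
reader = MetaKeywordReader()
reader.feed(page)
print(reader.keywords)   # ['SEO', 'crawling', 'indexing']
```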

If you've ever tried searching the same phrase on different search engines, you probably noticed that you got different results. This is because each engine uses different algorithms to weight and index keywords and to determine search result rankings. Ranking algorithms use website popularity, meta tags, the number of backlinks (links to the page), keyword frequency and location, and a wide variety of other factors to rank webpages according to how well they match a viewer's search.
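The real weights are proprietary and differ per engine, but the shape of such a ranking function can be sketched as a weighted sum. Every weight and feature name below is an invented placeholder, not any engine's actual formula:

```python
def rank_score(page, query_terms):
    """Toy ranking function: a weighted sum of the signal types named above.
    All weights are invented placeholders, not any engine's real values."""
    words = page["text"].lower().split()
    freq = sum(words.count(t) for t in query_terms)                  # keyword frequency
    in_title = sum(t in page["title"].lower() for t in query_terms)  # keyword location
    return (2.0 * in_title                # title hits count extra
            + 1.0 * freq
            + 0.5 * page["backlinks"]     # links pointing at the page
            + 0.1 * page["popularity"])   # e.g. a traffic signal

pages = [
    {"title": "SEO basics", "text": "seo tips for seo beginners",
     "backlinks": 12, "popularity": 40},
    {"title": "Gardening", "text": "seo mentioned once in passing",
     "backlinks": 5, "popularity": 10},
]
pages.sort(key=lambda p: rank_score(p, ["seo"]), reverse=True)
print([p["title"] for p in pages])    # ['SEO basics', 'Gardening']
```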

In addition to relevance, website popularity is taken into account in determining search rankings. As more interested users are directed to your page by your SEO techniques, not only will your relevancy scores improve, but the popularity component of the ranking algorithms will work in your favor as well.

(For more concrete information on specific ranking factors and their relative weights, please see https://moz.com/search-ranking-factors)

Once this information is gathered by the spiders, it is encoded and stored for indexing. In order to even out the difference between the time needed to search for a term beginning with a popular letter like 't' and one beginning with a less popular letter like 'q,' a numerical value is assigned to each word. This process is known as hashing. Not only does hashing even out problems related to letter frequency, but it condenses the index: only the numerical value and a link to the actual information are stored. This increases indexing and search speed, especially for more complicated searches that involve multiple words.
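A hashed index of this kind can be sketched in a few lines: each word maps through a hash function to a fixed-range number, and the index stores only that number alongside pointers to the stored page data. The CRC32 checksum below is a stand-in for a real engine's hashing scheme, chosen only because it is deterministic and built into Python:

```python
import zlib

def hash_key(word, table_size=2**20):
    """Map a word to a fixed-range number, so lookup cost no longer depends
    on the word's first letter ('t' vs. 'q')."""
    return zlib.crc32(word.encode("utf-8")) % table_size

index = {}   # numeric key -> list of pointers to the stored page records

def add_entry(word, page_id):
    index.setdefault(hash_key(word), []).append(page_id)

def lookup(word):
    return index.get(hash_key(word), [])

add_entry("the", 1)
add_entry("quixotic", 2)
print(lookup("quixotic"))   # [2]
```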

When a user performs a search, he or she types a query into the search box. Boolean operators can be used to define specific relationships between the terms in a query. Some of the most common operators are listed below; a sketch of how an engine might apply them follows the list:

AND - requires that both terms are on the page
OR - requires that one term or the other is on the page
NOT - excludes pages that include the following term
NEAR - requires that the two terms be near each other on the page
"quotation marks" - requires that the query be treated as a phrase, instead of each significant word in the query being considered an individual keyword
FOLLOWED BY - requires that one term be followed by the other
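As a rough illustration, the simpler operators fall out of set operations over the pages each term appears on. The toy index below is invented; NEAR and FOLLOWED BY would additionally need word positions, which this sketch does not store:

```python
# word -> set of page IDs containing it (a toy version of a hashed index)
index = {
    "seo":     {1, 2, 3},
    "keyword": {2, 3},
    "spider":  {3, 4},
}

def pages(term):
    return index.get(term, set())

print(pages("seo") & pages("keyword"))   # AND: both terms on the page -> {2, 3}
print(pages("seo") | pages("spider"))    # OR: either term -> {1, 2, 3, 4}
print(pages("seo") - pages("spider"))    # NOT: exclude pages with 'spider' -> {1, 2}
```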

These searches are defined as literal searches. Research is currently underway on concept-based searching, which uses statistical analysis of the webpages containing your query terms to recommend pages you might be interested in, and on natural language searching, which allows users to type a question into the search box in the same plain language they would use to ask a friend, instead of using Boolean operators.
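To give a very small taste of the statistical side of concept-based searching: one crude approach is to score candidate pages by word overlap with a page already known to match the query. The Jaccard measure and sample texts below are simplified stand-ins for the real statistical techniques:

```python
def similarity(page_a, page_b):
    """Jaccard overlap of two pages' word sets -- a crude statistical proxy
    for 'these pages are about the same concept'."""
    a, b = set(page_a.lower().split()), set(page_b.lower().split())
    return len(a & b) / len(a | b)

matched = "search engines crawl and index webpages"
candidates = [
    "how spiders crawl and index the web",
    "ten easy weeknight dinner recipes",
]
# Recommend the candidate statistically closest to the page that matched.
print(max(candidates, key=lambda c: similarity(matched, c)))
```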