Search engine optimization can seem complex and opaque. Talk to any SEO company and you will get a breakdown of how they plan to improve your rankings on the major search engines, and Bing in particular. Many factors go into how Bing ranks your site in its search results. Here is a good piece by Search Engine Land on the "Human Search Quality Raters" that Bing uses to judge websites.

A web page that fully satisfies a searcher’s intent is considered “Perfect.” These pages should appear at the top of Bing search results.

On the other hand, spammy pages offer little to no value. Such pages are classified as “Bad.”

That’s a bit of how Bing instructs the people in its Human Relevance System (HRS) project to grade web pages. Bing explains this in a 52-page document called the ‘HRS Judging Guidelines.’

The HRS project is similar to the Quality Rater program that Google uses. Microsoft’s version has been around in some form since shortly after MSN Search began generating its own search results in late 2004. Like Google, Microsoft uses testing services (like Lionbridge and others) to hire human search evaluators and administer the program. (Microsoft often refers to the evaluators as “judges,” and I’ll do the same in this article.)

Very little, if anything, has been written about Microsoft’s HRS project, and the company’s communications team was understandably reluctant to discuss it with Search Engine Land when we contacted them recently. But, when we shared a copy of the guidelines document that was given to us by a former judge, a Bing spokesperson did confirm that it’s the current version of the HRS guidelines. The document is dated March 15, 2012.

What’s inside? How does Bing ask its human search quality judges to grade web pages? Read on for details.

The Rating Matrix

The HRS Judging Guidelines ask judges to rely on a Rating Matrix to grade web documents. The matrix combines A) likely searcher intent with B) how well the document satisfies that intent. A document that strongly satisfies the most likely intent earns an Excellent/Perfect grade, while a document that poorly matches the most likely intent receives a Bad grade.

Rating Options

The matrix gives judges five rating options, which the guidelines explain in greater detail. This is really the heart of the document, the section that reveals what Bing looks for in grading (and likely in ranking) web pages/documents.

Here’s how Bing explains the five possible ratings:

1.) Perfect

"The LP [landing page] is the definitive or official page that strongly satisfies the most likely intent."

The document says that a Perfect landing page “should appear as the top search result.” It also says that only one landing page will typically deserve a Perfect rating, but for some generic queries (such as “loans” or “insurance”) there will not be a Perfect landing page. A Perfect page should address the intent of at least 50 percent of searchers.

2.) Excellent

Bing describes this as a landing page that "strongly satisfies a very likely or most likely intent" and "closely matches the requirements of the query in scope, freshness, authority, market and language." Users finding an Excellent landing page "could end their search here and move on." An Excellent page should address the intent of at least 25 percent of searchers.

An example in the document is that Barnes & Noble’s home page is an “Excellent” result for the search query “buy books.”

3.) Good

A Good landing page “moderately satisfies a very likely or most likely intent, or strongly satisfies a likely intent.” Bing says most searchers wouldn’t be completely satisfied with one of these pages and would continue searching. A Good page should address the intent of at least 10 percent of searchers.

4.) Fair

This rating applies to pages that are only useful to some searchers. A Fair page “weakly satisfies a very likely or most likely intent, moderately satisfies a likely intent, or strongly satisfies an unlikely intent.” A Fair page addresses the intent of at least one percent of searchers.

5.) Bad

In addition to being useful to almost no one and not satisfying user intent, this rating applies to a web page that “uses spam techniques” or “misleadingly provides content from other sites,” as well as to parked domains and pages that attempt to install malware. A Bad page addresses the intent of less than one percent of searchers.
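The five ratings above amount to a lookup: cross the likelihood of the searcher's intent with how strongly the page satisfies it, with spam forcing a Bad grade and "definitive or official" status required for Perfect. The guidelines don't publish an algorithm, so the following Python sketch is only an illustration of that mapping as described in this article; cells the document doesn't cover default to Bad.

```python
# Illustrative sketch of the HRS Rating Matrix described in the guidelines.
# This is NOT Bing's implementation; names and defaults are assumptions.

RATING_MATRIX = {
    # intent likelihood -> {how strongly the page satisfies it -> rating}
    "most likely": {"strongly": "Excellent", "moderately": "Good", "weakly": "Fair"},
    "very likely": {"strongly": "Excellent", "moderately": "Good", "weakly": "Fair"},
    "likely":      {"strongly": "Good",      "moderately": "Fair"},
    "unlikely":    {"strongly": "Fair"},
}

def rate(likelihood, satisfaction, is_definitive=False, is_spam=False):
    """Map an intent/satisfaction pair to one of the five HRS ratings."""
    if is_spam:  # spam techniques, scraped content, parked domains, malware
        return "Bad"
    rating = RATING_MATRIX.get(likelihood, {}).get(satisfaction, "Bad")
    # Only the definitive or official page for the most likely intent
    # is promoted from Excellent to Perfect.
    if rating == "Excellent" and likelihood == "most likely" and is_definitive:
        return "Perfect"
    return rating
```

For example, `rate("very likely", "strongly")` returns "Excellent" (the Barnes & Noble case above), while the same page flagged as spam would drop straight to "Bad" regardless of relevance.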

The document explains additional ratings like “Detrimental,” which applies to web pages that display adult-only content, and “No Judgment,” which covers pages that reviewers cannot access for various reasons.

Freshness

There’s a fairly detailed section on freshness. It explains why judges should take freshness into account when reviewing web documents and suggests situations when fresh content is more valuable and others when it’s not as important. The document explains that there are “essentially” three categories of freshness-related queries — Fresh Not Important, Very Likely Fresh and Most Likely Fresh — and offers a chart with example search queries to distinguish them.

Additional Considerations

There are also sections addressing queries where the search term is a URL, how to judge misspelled queries, and how to judge local queries. For example, the home page of the Arizona Hispanic Chamber of Commerce ranks as “Perfect” for the query hispanic chamber of commerce glendale az because Glendale is a suburb of Phoenix, and no Hispanic Chamber of Commerce office exists in Glendale.

As I said above, very little has ever been written about Microsoft’s Human Relevance System project for rating search results. From reading through the guidelines document, I’d say it’s not all that different from Google’s handbook for its raters.


At Noxster SEO, we simplify and clarify search complexities. Our data-backed SEO strategies align with user intent, helping your business earn top placements on Google and Bing. We don’t simply optimize websites; we build visibility engines that attract, engage, and convert.