Rand Fishkin's slide deck from SMX Advanced Seattle 2011, showing several interesting datapoints from the 2011 Search Engine Ranking Factors survey and correlation data on SEOmoz.
Interesting Data from the 2011 Ranking Factors
1. Interesting Findings from the 2011 Search Ranking Factors The full data is now online at http://bit.ly/rankfactors2011 This deck is available online at http://bit.ly/randsmxdeck Presented at SMX Advanced, Seattle Rand Fishkin, SEOmoz CEO, June 2011
2. Understanding, Interpreting & Using Survey Opinion Data Everybody’s wrong sometimes, but there’s a lot we can learn from the aggregation of opinions
3. #1: Opinions are Not Fact (these are smart people, but they can’t know everything about Google’s rankings) #2: Not Everyone Agrees (standard deviation can help show us the degree of consensus) #3: We Had 132 Contributors (but this group could be biased, as they were editorially selected via a nomination process) http://googleblog.blogspot.com/2010/06/our-new-search-index-caffeine.html Many thanks to all who contributed their time to take the survey!
4. Understanding, Interpreting & Using Correlation Data This is powerful, useful information, but with that power comes responsibility to present it accurately
5. Methodology 10,271 Keywords, pulled from Google AdWords US Suggestions (all SERPs were pulled from Google in March 2011, after the Panda/Farmer update) Top 30 Results Retrieved for Each Keyword (excluding all vertical/non-standard results) Correlations are for Pages/Sites that Appear Higher in the Top 30 (we use the mean of Spearman’s correlation coefficient across all SERPs) Results Where <2 URLs Contain a Given Feature Are Excluded (this also holds true for results where all the URLs contain the same values for a feature) More details, including complete documentation and the raw dataset, are now available at http://www.seomoz.org/article/search-ranking-factors
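The per-SERP procedure described on the methodology slide can be sketched in plain Python. This is an illustrative reconstruction, not SEOmoz's actual pipeline, and the feature values in the usage example are made up:

```python
# Sketch of the methodology: for each keyword's SERP, correlate ranking
# position with a feature value via Spearman's rho, then report the mean
# rho across all usable SERPs.

def rankdata(xs):
    """Rank values from 1..n, giving tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def mean_serp_correlation(serps):
    """serps: one list of feature values per keyword, ordered by
    ranking position (position 1 first). Returns the mean rho."""
    rhos = []
    for features in serps:
        # Exclusion rules from the slide: skip SERPs where fewer than
        # two URLs have the feature, or every URL has the same value.
        if sum(v > 0 for v in features) < 2 or len(set(features)) == 1:
            continue
        positions = list(range(1, len(features) + 1))
        # Negate so a positive rho means "more of the feature goes
        # with ranking higher" (position 1 = best ranking).
        rhos.append(-spearman(positions, features))
    return sum(rhos) / len(rhos)
```

For example, `mean_serp_correlation([[30, 20, 10], [5, 5, 5]])` returns 1.0: the second SERP is excluded because every URL shares the same value, and in the first the feature declines perfectly with ranking position.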
6. Correlation & Dolphins Dolphins who swim at the front of the pod tend to have larger dorsal fins, more muscular tails and more damage on their flippers. The first two might have a causal link, but the damaged flippers are likely a result of swimming at the front (i.e. having damaged flippers doesn’t make a dolphin a better front-of-the-pod swimmer). Likewise, with ranking correlations, there are probably many features that are correlated but are not necessarily the cause of the positive/negative rankings.
7. Correlation IS NOT Causation Earning more linking root domains to a URL may indeed increase that page’s ranking. But will adding more characters to the HTML code of a page increase rankings? Probably not. Just because a feature is correlated, even very highly, doesn’t mean that improving that metric on your site will necessarily improve your rankings.
8. How Confident Can We Be in the Accuracy of these Correlations? Because we have such a large data set, standard error is extremely low. This means that even for small correlations, our estimates of the mean correlation are close to the actual mean correlation across all searches. Standard error won’t be reported in this presentation, but it’s less than 0.0035 for all of the Spearman correlation results (so we can feel quite confident about our numbers)
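As a sanity check on the standard-error claim, here is the usual formula for the standard error of a mean; the 0.35 per-SERP spread used in the note below is an illustrative assumption, not the survey's actual figure:

```python
import math

def standard_error_of_mean(values):
    """SE of the mean = sample standard deviation / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    sample_var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(sample_var) / math.sqrt(n)
```

With roughly 10,000 per-SERP correlations, even a generous per-SERP standard deviation of about 0.35 gives SE ≈ 0.35 / √10271 ≈ 0.0035, consistent with the bound quoted on the slide.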
9. Do Correlations in this Range Have Value/Meaning? A factor w/ 1.0 correlation would explain 100% of Google’s algorithm across 10K+ keywords Most of our data is in this range A rough rule of thumb with linear-fit numbers is that a correlation explains roughly its square of the system’s variance. Thus, a factor with correlation 0.3 would explain ~9% of Google’s algorithm.
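The rule of thumb on this slide is just the coefficient of determination, r²; a one-liner makes the arithmetic concrete:

```python
def variance_explained(r):
    """Rough share of variance explained under the linear-fit
    rule of thumb: the correlation squared."""
    return r ** 2
```

So `variance_explained(0.3)` gives 0.09, i.e. roughly 9% of the variance, matching the slide's example.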
11. The Changing Landscape of Google’s Ranking Algorithm These compare opinion/survey data from 2009 vs. 2011
12. In 2009, link-based factors (page and domain-level) comprised ~65% of voters’ algorithmic assessment
13. In 2011, link-based factors (page and domain-level) have shrunk in the voters’ minds from ~65% to ~45% of algorithmic components. Note: because the question options changed slightly (and more options were added), direct comparison may not be entirely fair.
14. What Do SEOs Believe Will Happen w/ Google’s Use of Ranking Features in the Future? While there was some significant contention about issues like paid links and ads vs. content, the voters nearly all agreed that social signals and perceived user value signals have bright futures.
15. Diversity + Anchor Text: Well Correlated with Higher Rankings These metrics are based on links that point specifically to the ranking page
16. In the rest of this deck, we’ll use linking c-blocks as a reference point, hence the red This data is exactly what an SEO would expect – the more diverse the sources, the greater the correlation with higher rankings. These numbers are relatively similar to our June 2010 correlation data (from http://www.seomoz.org/blog/google-vs-bing-correlation-analysis-of-ranking-elements).
17. Correlations of Page-Level, Anchor Text-Based Link Data No Surprise: Total links (including internal) w/ anchor text are less well-correlated than external links w/ anchor text Partial anchor text matches have greater correlation than exact match. This might be correlation only, or could indicate that the common SEO wisdom to vary anchor text is accurate.
18. Comparing Page + Domain-Level Link Signals These metrics are based on links that point to anywhere on the ranking domain
19. Correlation of Domain-Level Link Data Domain-level link data is surprisingly similar to page-level link data in correlation This suggests page-level + domain-level link signals have relatively similar weighting, just as voters predicted.
20. Have Exact Match Domains Lost their Lustre? These signals are based on keyword-use in the root domain name.
21. Spearman’s Correlation with Google Rankings for Exact Match Domain Names June 2010 vs. March 2011 Exact match domains (.com and all TLDs) have both fallen considerably in the past 10 months This suggests that Google’s statements last year about devaluing exact match domains may not only have been serious, but may already be showing up in the results.
22. Is Google Evil? These metrics come from a variety of places in the dataset, but mostly on-page stuff.
23. Google has said that linking externally is good; slow pages are bad; and using Google services won’t give any special benefit. This data supports those statements! This data suggests that, by and large, there’s not much “evil” in Google’s rankings – at least, none that correlation research will reveal. Good job keeping it honest, Googlers!
24. Social Signals These signals are based on data from users of Twitter, Facebook & Google Buzz via their APIs
25. Most Important Social Media-Based Factors (as voted on by 132 SEOs) Curious: For Twitter, voters felt authority matters more, while for Facebook, it’s raw quantity (could be because Google doesn’t have as much access to Facebook’s graph data). Although we didn’t ask voters for a cutoff on what they believe matters vs. doesn’t, I suspect many/most would have said that Google Buzz and Digg/Reddit/SU aren’t used in the rankings.
26. Correlation of Social Media-Based Factors (data via Topsy API & Google Buzz API) Amazing: Facebook Shares is our single highest-correlated metric with higher Google rankings. Although voters thought Twitter data / tweets to URLs were more influential, Facebook’s metrics are substantially better correlated with rankings. Time to get more FB Shares!
27. Percent of Results (from our 10,200 Keyword Set) in Which the Feature Was Present It amazed me that Facebook Share data was present for 61% of pages in the top 30 results For most link factors, 99%+ of results had data from Linkscape; for social data, this was much lower, but still high enough that standard error is below 0.0025 for each of the metrics.
28. Correlation of Social Metrics, Controlling for Links (i.e. are pages ranking well because of links, with social metrics simply being good predictors of linking activity?) Comparing raw correlations vs. correlations controlling for links: Twitter’s correlation wanes dramatically, but Facebook features, while lower, still appear quite influential. Facebook likely deserves much more SEO attention than it currently receives.
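The slide doesn't spell out how links were controlled for; one standard technique is the first-order partial correlation, which removes the part of the social-vs-ranking relationship that is predictable from links. A hedged sketch, with made-up input correlations:

```python
import math

def partial_correlation(r_xy, r_xz, r_yz):
    """Correlation of x and y after removing the linear influence of z
    (e.g. x = social metric, y = ranking, z = link metric)."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))
```

For instance, if a social metric correlates 0.30 with rankings but both the social metric and rankings correlate 0.50 with links, the partial correlation drops to (0.30 − 0.25) / 0.75 ≈ 0.067: most of the raw signal was carried by links, which is the kind of attenuation the Twitter metrics show on this slide.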
29. IMPORTANT! Don’t Misuse or Misattribute Correlation Data! Think of correlation data as a way of seeing features of sites that rank well, rather than a way of seeing what metrics search engines are actually measuring and counting. A well-correlated metric can often be its own reward, even if it doesn’t directly impact search engine rankings. Virtually all the data in this report reflects the best practices of inbound marketing overall – and using the data to help support these is an excellent application. Be a responsible user of correlation data – thank you! Thanks much! Rand