How to Use Tools to Determine Which Content to Re-Optimize: A Step-by-Step Guide
Posted by Jeff_Baker
Why is everyone and their grandparents writing about content re-optimization?
I can’t speak for the people writing endless streams of blogs on the subject, but in Brafton’s case, it’s been the fastest technique for improving rankings and driving more traffic.
As a matter of fact, in this previous Moz post, we showed that rankings can improve in a matter of minutes after re-indexing.
But why does it work?
It’s probably a combination of factors (our favorite SEO copout!), which may include:
- Age value: In a previous study, we observed a clear relationship between time indexed and keyword/URL performance, independent of links.
- More comprehensive content: Presumably, when re-optimizing content you are adding contextual depth to existing topics and breadth to related topics. It’s pretty clear at this point that Google understands when content has fully nailed a topic cluster.
- It’s a known quantity: You’re only going to be re-optimizing content that has a high potential for return. In this blog post, I’ll explain how to identify content with a high potential for return.
How well does it work?
Brafton’s website is a bit of a playground for our marketing team to try new strategies. And that makes sense, because if something goes horribly wrong, the worst case scenario is that I look like an idiot for wasting resources, rather than losing a high-paying client on an experiment.
You can’t try untested procedures on patients. It’s just dangerous.
So we try new strategies and meticulously track the results on Brafton.com. And by far, re-optimizing content results in the most immediate gains. It’s exactly where I would start with a client who was looking for fast results.
Example: Top Company Newsletters
Example: Best Social Media Campaigns
Re-optimizing content is by no means a “set it and forget it” tactic. We frequently find that this game is an arms race: we will lose rankings on an optimized article and need to re-re-optimize our content to stay competitive.
(You can clearly see this happening in the second example!)
So how do you choose which content to re-optimize? Let’s dig in.
Step 1: Find your threshold keywords
If a piece of content isn’t ranking in the top five positions for its target keyword, or a high-value variant keyword, it’s not providing any value.
We want to find keywords sitting just outside the positions where they could make a real impact, the ones that would benefit most from a boost. So we want keywords that rank worse than position 5, but we also want to set a limit on how poorly they rank.
In other words, we don’t want to re-optimize for a keyword that ranks on page eleven. The keyword needs to be within reach (hence “threshold”).
We have found our threshold keywords to exist between positions 6–29.
Note: you can do this in any major SEO tool. Simply find the list of all keywords you rank for, and filter it to include only positions 6-29. I will jump around a few tools to show you what it looks like in each.
You have now filtered the list of keywords you rank for to include only threshold keywords. Good job!
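If you prefer to work from an exported CSV instead, here’s a minimal sketch in Python/pandas. The file name and column names (Keyword, Position, Search Volume, Difficulty, URL) are assumptions; every tool labels its export a little differently, so adjust them to match yours:

```python
import pandas as pd

# Assumed export from your SEO tool of choice; column names vary by tool.
df = pd.read_csv("keywords.csv")  # assumed columns: Keyword, Position, Search Volume, Difficulty, URL

# Step 1: keep only "threshold" keywords ranking in positions 6-29
threshold = df[(df["Position"] >= 6) & (df["Position"] <= 29)]
print(len(threshold), "threshold keywords found")
```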
Step 2: Filter for search volume
There’s no point in re-optimizing a piece of content for a keyword with little-to-no search volume. You will want to look at only keywords with search volumes that indicate a likelihood of success.
Advice: I set that limit at 100 searches per month. I choose this number because I know that, in the best-case scenario (ranking in position 1), I will drive ~31 visitors per month via that keyword, assuming no featured snippet is present. It costs a lot of money to write blogs; I want to justify that investment.
You’ve now filtered your list to include only threshold keywords with sufficient search volume to justify re-optimizing.
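Continuing that sketch, the search volume cut (and a rough best-case traffic estimate using the ~31% position-1 click-through assumption from above) might look like this:

```python
# Step 2: keep keywords with at least 100 searches per month
threshold = threshold[threshold["Search Volume"] >= 100].copy()

# Rough upside estimate, assuming ~31% CTR in position 1 with no featured snippet
ASSUMED_P1_CTR = 0.31
threshold["Best-case visits/mo"] = (threshold["Search Volume"] * ASSUMED_P1_CTR).round()
```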
Step 3: Filter for difficulty
Generally, I want to optimize the gravy train keywords — those with high search volume and low organic difficulty scores. I am looking for the easiest wins available.
You do not have to do this!
Note: If you want to target a highly competitive keyword in the previous list, you may be able to successfully do so by augmenting your re-optimization plan with some aggressive link building, and/or turning the content into a pillar page.
I don’t want to do this, so I will set up a difficulty filter to find easy wins.
But where do you set the limit?
This is a bit tricky, as each keyword difficulty tool is a bit different, and results may vary based on a whole host of factors related to your domain. But here are some fast-and-loose guidelines I provide to owners of mid-level domains (DA 30–55).
| Tool | KW Difficulty |
|---|---|
| Ahrefs | <10 |
| Moz | <30 |
| SEMrush | <55 |
| KW Finder | <30 |
Here’s how it will look in Moz. Note: Moz has predefined ranges, so we won’t be able to hit the exact thresholds outlined, but we will be close enough.
Now you are left with only threshold keywords with significant search volume and reasonable difficulty scores.
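Continuing the same sketch, those fast-and-loose ceilings could be applied like this (the dictionary simply mirrors the table above; tune the numbers for your own domain):

```python
# Step 3: rough difficulty ceilings for a mid-level domain (DA 30-55), per the table above
DIFFICULTY_CEILING = {"ahrefs": 10, "moz": 30, "semrush": 55, "kw finder": 30}

tool = "moz"  # whichever tool produced the export
easy_wins = threshold[threshold["Difficulty"] < DIFFICULTY_CEILING[tool]]
```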
Step 4: Filter for blog posts (optional)
In our experience, blogs generally improve faster than landing pages. While this process can be done for either type of content, I’m going to focus on the content with the most immediate impact and filter for blogs.
If your site follows a URL hierarchy, all your blogs should live under a ‘/blog’ subfolder. This will make it easy for you to filter and segment.
Each tool will allow you to segment keyword rankings by the corresponding section of your site.
The resulting list will leave you with threshold keywords with significant search volume and reasonable difficulty scores, from blog content only.
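In script form, that optional blog filter is one more line on the same sketch, assuming your posts live under a /blog/ subfolder:

```python
# Step 4 (optional): keep only blog URLs
blog_candidates = easy_wins[easy_wins["URL"].str.contains("/blog/", na=False)]
```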
Step 5: Select for relevance
You now have the confidence to know that the remaining keywords in your list all have high potential to drive more traffic with proper re-optimization.
What you don’t know yet, is whether or not these keywords are relevant to your business. In other words, do you want to rank for these keywords?
Your website is always going to accidentally rank for noise, and you don’t want to invest time optimizing content that won’t provide any commercial value.
I recommend exporting your list into a spreadsheet for easy evaluation.
Go through the entire list and feel out what may be of value, and what is a waste of time.
Now that you have a list of only relevant keywords, you know the following: each threshold keyword has significant search volume, reasonable keyword difficulty, corresponds to a blog post (optional), and is commercially relevant.
On to an extremely important step that most people forget.
Step 6: No cannibals here
What happens when you forget about your best friend and give all your attention to a new, but maybe not-so-awesome friend?
You lose your best friend.
As SEOs, we can forget that a URL generally ranks for multiple keywords. If you don’t evaluate all the keywords a URL ranks for, you may “re-optimize” for a lower-potential keyword and lose the high-value rankings you already have!
Note: Beware, there are some content/SEO tools out there that will make recommendations on the pieces of content you should re-optimize. Take those with a grain of salt! Put in the work and make sure you won’t end up worse off than where you started.
Here’s an example:
This page shows up on our list for an opportunity to improve the keyword “internal newsletters”, with a search volume of 100 and a difficulty score of 6.
Great opportunity, right??
Maybe not. Now you need to plug the URL into one of your tools and determine whether or not you will cause damage by re-optimizing for this keyword.
Sure enough, we rank in position 1 for the keyword “company newsletter,” which has a search volume of 501-850 per month. I’m not messing with this page at all.
On the flip side, this list recommended that I re-optimize for “how long should a blog post be.” Plugging the URL into Moz shows me that this is indeed a great keyword to re-optimize the content for.
Now you have a list of all the blogs that should be re-optimized, and which keywords they should target.
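If you want to automate a first pass of this cannibalization check across your whole list, here’s a rough sketch. It assumes a second export of every keyword your domain ranks for, with the same hypothetical column names as the earlier snippets, and it should only ever supplement a manual review:

```python
def is_safe_to_reoptimize(url, candidate_volume, all_rankings):
    """Return False if the URL already holds a top-5 ranking for a keyword with
    more search volume than the candidate keyword we want to re-optimize for.

    all_rankings: DataFrame of every keyword/position pair the domain ranks for
    (another standard tool export), using the same assumed column names as above.
    """
    existing = all_rankings[all_rankings["URL"] == url]
    top_spots = existing[existing["Position"] <= 5]
    return not (top_spots["Search Volume"] > candidate_volume).any()
```

If the function returns False, leave the page alone, or at the very least make sure the keyword it already wins on stays front and center in the rewrite.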
Step 7: Rewrite and reindex
You stand a better chance of ranking for your target keyword if you increase the depth and breadth of the piece of content it ranks for. There are many tools that can help you with this, and some work better than others.
We have used MarketMuse at Brafton for years. I’ve also had some experience with Ryte’s content optimizer tool, and Clearscope, which has a very writer-friendly interface.
Substep 1: Update the old content in your CMS with the newly-written content.
Substep 2: Keep the URL. I can’t stress this enough. Do not change the URL, or all your work will be wasted.
Substep 3: Update the publish date. This is now new content, and you want Google to know that, as you may reap some of the benefits of QDF (query deserves freshness).
Substep 4: Fetch as Google/request indexing. Jump into Search Console and re-index the page so that you don’t have to wait for the next natural crawl.
Step 8: Track your results!
Be honest, it feels good to outrank your competitors, doesn’t it?
I usually track the performance of my re-optimizations a couple ways:
- Page-level impressions in Search Console. This is the leading indicator of search presence.
- A keyword tracking campaign in a tool. Plug in the keywords you re-optimized for and follow their ranking improvements (hopefully) over time.
- Variant keywords on the URL. There is a good chance, through adding depth to your content, that you will rank for more variant keywords, which will drive more traffic. Plug your URL into your tool of choice and track the number of ranking keywords.
Conclusion
Re-optimizing content can be an extremely powerful tool in your repertoire for increasing traffic, but it’s very easy to do wrong. The hardest part of rewriting content isn’t the actual content creation, but rather, the selection process.
Which keywords? Which pages?
Using the scientific approach above will give you confidence that you are taking every step necessary to ensure you make the right moves.
Happy re-optimizing!
7 Search Ranking Factors Analyzed: A Follow-Up Study
Posted by Jeff_Baker
Grab yourself a cup of coffee (or two) and buckle up, because we’re doing maths today.
Again.
Back it on up…
A quick refresher from last time: I pulled data from 50 keyword-targeted articles written on Brafton’s blog between January and June of 2018.
We wrote these articles using a technique, published earlier on Moz, that generates some seriously awesome results (we’re talking more than doubling our organic traffic in the last six months, but we will get to that in another publication).
We pulled this data again… Only I updated and reran all the data manually, doubling the dataset. No APIs. My brain is Swiss cheese.
We wanted to see how newly written, original content performs over time, and which factors may have impacted that performance.
Why do this the hard way, dude?
“Why not just pull hundreds (or thousands!) of data points from search results to broaden your dataset?”, you might be thinking. It’s been done successfully quite a few times!
Trust me, I was thinking the same thing while weeping tears into my keyboard.
The answer was simple: I wanted to do something different from the massive aggregate studies. I wanted a level of control over as many potentially influential variables as possible.
By using our own data, the study benefited from:
- The same root Domain Authority across all content.
- Similar individual URL link profiles (some laughs on that later).
- Known original publish dates, with no re-optimization efforts or tinkering.
- Known original keyword targets for each blog (rather than guessing).
- Known and consistent content depth/quality scores (MarketMuse).
- Similar content writing techniques for targeting specific keywords for each blog.
You will never eliminate the possibility of misinterpreting correlation as causation. But controlling some of the variables can help.
As Rand once said in a Whiteboard Friday, “Correlation does not imply causation (but it sure is a hint).”
Caveat:
What we gained in control, we lost in sample size. A sample size of 96 is much less useful than ten thousand, or a hundred thousand. So look at the data carefully and use discretion when considering the ranking factors you find most likely to be true.
This resource can help gauge the confidence you should put into each Pearson Correlation value. Generally, the stronger the relationship, the smaller the sample size needed to be confident in the results.
So what exactly have you done here?
We have generated hints at what may influence the organic performance of newly created content. No more, and no less. But they are indeed interesting hints and maybe worth further discussion or research.
What have you not done?
We have not published sweeping generalizations about Google’s algorithm. This post should not be read as a definitive guide to Google’s algorithm, nor should you assume that your site will demonstrate the same correlations.
So what should I do with this data?
The best way to read this article is to note the potential correlations we observed in our data and consider how those correlations may or may not apply to your own content and strategy.
I’m hoping that this study’s new approach to analyzing individual URLs stimulates constructive debate and conversation.
Your constructive criticism is welcome, and hopefully pushes these conversations forward!
The stat sheet
So quit jabbering and show me the goods, you say? Alright, let’s start with our stats sheet, formatted like a baseball card, because why not?:
*Note: Only blogs with complete ranking data were used in the study. We threw out blogs with missing data rather than adding arbitrary numbers.
And as always, here is the original data set if you care to reproduce my results.
So now the part you have been waiting for…
The analysis
To start, you may want a refresher on the Pearson Correlation Coefficient from my last blog post, or from Rand’s.
1. Time and performance
I started with a question: “Do blogs age like a Macallan 18 served up neat on a warm summer Friday afternoon, or like tepid milk on a hot summer Tuesday?”
Does the time indexed play a role in how a piece of content performs?
Correlation 1: Time and target keyword position
First we will map the target keyword ranking positions against the number of days its corresponding blog has been indexed. Visually, if there is any correlation we will see some sort of negative or positive linear relationship.
Visually, there is a clear negative relationship between the two variables, which suggests they may be related. But we need to go beyond visuals and use the PCC.
| Days live vs. target keyword position | |
|---|---|
| PCC | -.343 |
| Relationship | Moderate |
The data shows a moderate relationship between how long a blog has been indexed and the positional ranking of the target keyword.
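If you want to reproduce this calculation on your own export, SciPy computes the PCC and its p-value in one line. The numbers below are made up purely for illustration and are not our dataset:

```python
from scipy.stats import pearsonr

# Hypothetical (days indexed, target keyword position) pairs, one per article
days_live = [34, 61, 95, 120, 150, 210]
target_position = [41, 28, 17, 9, 6, 4]

r, p_value = pearsonr(days_live, target_position)
print(f"PCC = {r:.3f}, p = {p_value:.3f}")  # negative r: older articles tend to rank better
```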
But before getting carried away, we shouldn’t solely trust one statistical method and call it a day. Let’s take a look at things another way: Let’s compare the average age of articles whose target keywords rank in the top ten against the average age of articles whose target keywords rank outside the top ten.
| Average age of articles based on position | |
|---|---|
| Target KW position ≤ 10 | 144.8 days |
| Target KW position > 10 | 84.1 days |
Now a story is starting to become clear: Our newly written content takes a significant amount of time to fully mature.
But for the sake of exhausting this hint, let’s look at the data one final way. We will group the data into buckets of target keyword positions, and days indexed, then apply them to a heatmap.
This should show us a clear visual clustering of how articles perform over time.
This chart, quite literally, paints a picture. According to the data, we shouldn’t expect a new article to realize its full potential until at least 100 days, and likely longer. As a blog post ages, it appears to gain more favorable target keyword positioning.
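For the curious, here’s a sketch of how a heatmap like this could be rebuilt with pandas. The data is hypothetical, but the approach (cut both variables into buckets, then count the articles in each cell) is the same:

```python
import pandas as pd

# Hypothetical per-article data: days indexed and target keyword position
df = pd.DataFrame({
    "days_live": [34, 61, 95, 120, 150, 210],
    "position": [41, 28, 17, 9, 6, 4],
})

# Bucket both dimensions, then count how many articles land in each cell
df["age_bucket"] = pd.cut(df["days_live"], bins=[0, 50, 100, 150, 200, 400])
df["position_bucket"] = pd.cut(df["position"], bins=[0, 10, 20, 30, 50])
print(pd.crosstab(df["position_bucket"], df["age_bucket"]))
```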
Correlation 2: Time and total ranking keywords on URL
You’ll find that when you write an article it will (hopefully) rank for the keyword you target. But oftentimes it will also rank for other keywords. Some of these are variants of the target keyword, some are tangentially related, and some are purely random noise.
Instinct will tell you that you want your articles to rank for as many keywords as possible (ideally variants and tangentially related keywords).
Predictably, we have found that the relationship between the number of keywords an article ranks for and its estimated monthly organic traffic (per SEMrush) is strong (.447).
We want all of our articles to do things like this:
We want lots of variants each with significant search volume. But, does an article increase the total number of keywords it ranks for over time? Let’s take a look.
Visually this graph looks a little murky due to the existence of two clear outliers on the far right. We will first run the analysis with the outliers, and again without. With the outliers, we observe the following:
| Days live vs. total keywords ranking on URL (with outliers) | |
|---|---|
| PCC | .281 |
| Relationship | Weak/borderline moderate |
There appears to be a relationship between the two variables, but it isn’t as strong. Let’s see what happens when we remove those two outliers:
Visually, the relationship looks stronger. Let’s look at the PCC:
| Days live vs. total keywords ranking on URL (without outliers) | |
|---|---|
| PCC | .390 |
| Relationship | Moderate/borderline strong |
The relationship appears to be much stronger with the two outliers removed.
But again, let’s look at things another way.
Let’s look at the average age of the top 25% of articles and compare them to the average age of the bottom 25% of articles:
| Average age of top 25% of articles versus bottom 25% | |
|---|---|
| Top 25% | 148.9 days |
| Bottom 25% | 73.8 days |
This is exactly why we look at data multiple ways! The top 25% of blog posts with the most ranking keywords have been indexed an average of 149 days, while the bottom 25% have been indexed 74 days — roughly half.
To be fully sure, let’s again cluster the data into a heatmap to observe where performance falls on the time continuum:
We see a very similar pattern as in our previous analysis: a clustering of top-performing blogs starting at around 100 days.
Time and performance assumptions
You still with me? Good, because we are saying something BIG here. In our observation, it takes between 3 and 5 months for new content to perform in organic search. Or at the very least, mature.
To look at this one final way, I’ve created a scatterplot of only the top 25% of highest performing blogs and compared them to their time indexed:
There are 48 data points on this chart: the blue points represent the top 25% of articles in terms of strongest target keyword ranking position, and the orange points represent the top 25% of articles with the highest number of keyword rankings on their URL. (These can be, and in some cases are, the same URLs.)
Looking at the data a little more closely, we see the following:
90% of the top 25% of highest-performing content took at least 100 days to mature, and only two articles took less than 75 days.
Time and performance conclusion
For those of you just starting a content marketing program, remember that you may not see the full organic potential for your first piece of content until month 3 at the earliest. And, it takes at least a couple months of content production to make a true impact, so you really should wait a minimum of 6 months to look for any sort of results.
In conclusion, we expect new content to take at least 100 days to fully mature.
2. Links
But wait, some of you may be saying. What about links, buddy? Articles build links over time, too!
It stands to reason that, over time, a blog will gain links (and ranking potential). Links matter, and higher positioned rankings gain links at a faster rate. Thus, we are at risk of misinterpreting correlation for causation if we don’t look at this carefully.
But what none of you know, that I know, is that being the terrible SEO that I am, I had no linking strategy with this campaign.
And I mean zero strategy. The average article generated 1.3 links from .5 linking domains.
Nice.
| Linking domains vs. target keyword position | |
|---|---|
| PCC | -.022 |
| Relationship | None |
| Average linking domains to top 25% of articles | .46 |
| Average linking domains to bottom 25% of articles | .46 |
The one thing consistent across all the articles was a shocking and embarrassing lack of inbound links. This is demonstrated by an insignificant correlation coefficient of -.022. The same goes for the total number of links per URL, with a correlation coefficient of -.029.
These articles appear to have performed primarily on their content rather than inbound links.
(And they certainly would have performed much better with a strong, or any, linking strategy. Nobody is arguing the value of links here.) But mostly…
Shame on me.
Shame. Shame. Shame.
But on a positive note, we were able to generate a more controlled experiment on the effects of time and blog performance. So, don’t fire me just yet?
Note: It would be interesting to pull link quality metrics into the discussion (for the precious few links we did earn) rather than total volume. However, after a cursory look at the data, nothing stood out as being significant.
3. Word count
Content marketers and SEOs love talking about word count. And for good reason. When we collectively agreed that “quality content” was the key to rankings, it would stand to reason that longer content would be more comprehensive, and thus do a better job of satisfying searcher intent. So let’s test that theory.
Correlation 1: Target keyword position versus total word count
Will longer articles increase the likelihood of ranking for the keyword you are targeting?
Not in our case. To be sure, let’s run a similar analysis as before.
| Word count vs. target keyword position | |
|---|---|
| PCC | .111 |
| Relationship | Negligible |
| Average word count of top 25% articles | 1,774 |
| Average word count of bottom 25% articles | 1,919 |
The data shows no impact on rankings based on the length of our articles.
Correlation 2: Total keywords ranking on URL versus word count
One would think that longer content would result in additional ranking keywords, right? Even by accident, you would think that the more related topics you discuss in an article, the more keywords you will rank for. Let’s see if that’s true:
| Total keywords ranking on URL vs. word count | |
|---|---|
| PCC | -.074 |
| Relationship | None |
Not in this case.
Word count, speculative tangent
So how can it be that so many studies demonstrate higher word counts result in more favorable rankings? Some reconciliation is in order, so allow me to speculate on what I think may be happening in these studies.
- Most likely: Measurement techniques. These studies generally look at one factor relative to rankings: average absolute word count based on position. (And, there actually isn’t much of a difference in average word count between position one and ten.)
- Likely: High quality content is longer, by nature. We know that “quality content” is discussed in terms of how well a piece satisfies the intent of the reader. In an ideal scenario, you will create content that fully satisfies everything a searcher would want to know about a given topic. Ideally you own the resource center for the topic, and the searcher does not need to revisit SERPs and weave together answers from multiple sources. By nature, this type of comprehensive content is quite lengthy. Long-form content is arguably a byproduct of creating for quality. Cyrus Shepard does a better job of explaining this likelihood here.
- Less likely: Long-form threshold. The articles we wrote for this study ranged from just under 1,000 words to nearly as high as 4,000 words. One could consider all of these as “long-form content,” and perhaps Google does as well. Perhaps there is a word count threshold that Google uses.
As we are demonstrating in this article, there may be many other factors at play that need to be isolated and tested for correlations in order to get the full picture, such as: time indexed, on-page SEO (to be discussed later), Domain Authority, link profile, and depth/quality of content (also to be discussed later, with MarketMuse as a measure). It’s possible that correlation does not imply causation, and by using word count averages as the single method of measure, we may be painting with too broad a stroke.
This is all speculation. What we can say for certain is that all of our content is 900 words and up, and within that range we saw no incremental benefit from additional length.
Feel free to disagree with any (or all) of my speculations on my interpretation of the discrepancies of results, but I tend to have the same opinion as Brian Dean with the information available.
4. MarketMuse
At this point, most of you are familiar with MarketMuse. They have created a number of AI-powered tools that help with content planning and optimization.
We use the Content Optimizer tool, which evaluates the top 20 results for any keyword and generates an outline of all the major topics being discussed in SERPs. This helps you create content that is more comprehensive than your competitors, which can lead to better performance in search.
Based on the competitive landscape, the tool will generate a recommended content score (their proprietary algorithm) that you should hit in order to compete with the competing pages ranking in SERPs.
But… if you’re a competitive fellow, what happens if you want to blow the recommended score out of the water? Do higher scores have an impact on rankings? Does it make a difference if your competition has a very low average score?
We pulled every article’s content score, along with MarketMuse’s recommended scores and the average competitor scores, to answer these questions.
Correlation 1: Overall MarketMuse content score
Does a higher overall content score result in better rankings? Let’s take a look:
| Absolute MarketMuse score vs. target keyword position | |
|---|---|
| PCC | .000 |
| Relationship | None |
A perfect zero! We weren’t able to beat the system by racking up points. I also checked to see if a higher absolute score would result in a larger number of keywords ranking on the URL — it doesn’t.
Correlation 2: Beating the recommended score
As mentioned, based on the competitive landscape, MarketMuse will generate a recommended content score. What happens if you blow the recommended score out of the water? Do you get bonus points?
In order to calculate this correlation, we pulled the content score percentage attainment and compared it to the target keyword position. For example, if we scored a 30 against a recommended 25, we hit 120% attainment. Let’s see if it matters:
| Percentage content score attainment vs. target keyword position | |
|---|---|
| PCC | .028 |
| Relationship | None |
No bonus points for doing extra credit!
Correlation 3: Beating the average competitors’ scores
Okay, if you beat MarketMuse’s recommendations, you don’t get any added benefit, but what if you completely destroy your competitors’ average content scores?
We will calculate this correlation the same way we previously did, with percentage attainment over the average competitor. For example, if we scored a 30 over the average of 10, we hit 300% attainment. Let’s see if that matters:
| Percentage attainment over average competitor score versus target KW position | |
|---|---|
| PCC | -.043 |
| Relationship | None |
That didn’t work either! Seems that there are no hacks or shortcuts here.
MarketMuse summary
We know that MarketMuse works, but it seems that there are no additional tricks to this tool.
If you regularly hit the recommended score as we did (average 110% attainment, with 81% of blogs hitting 100% attainment or better) and cover the topics prescribed, you should do well. But don’t fixate on competitor scores or blowing the recommended score out of the water. You may just be wasting your time.
Note: It’s worth noting that we probably would have shown stronger correlations had we intentionally bombed a few MarketMuse scores. Perhaps a test for another day.
5. On-page optimization
Ah, old-school technical SEO. This type of work warms the cockles of a seasoned SEO’s heart. But does it still have a place in our constantly evolving world? Has Google advanced to the point where it doesn’t need technical cues from SEOs to understand what a page is about?
To find out, I have pulled Moz’s on-page optimization score for every article and compared them to the target keywords’ positional rankings:
Let’s take a look at the scatterplot for all the keyword targets.
Now looking at the math:
| On-page optimization score vs. target keyword position | |
|---|---|
| PCC | -.384 |
| Relationship | Moderate/strong |
| Average on-page score for top 25% | 91% |
| Average on-page score for bottom 25% | 87% |
If you have a keen eye you may have noticed a few strong outliers on the scatterplot. If we remove three of the largest outliers, the correlation goes up to -.435, a strong relationship.
Before we jump to conclusions, let’s look at this data one final way.
Let’s take a look at the percentage of articles with their target keywords ranking 1–10 that also have a 90% on-page score or better. We will compare that number to the percentage of articles ranking outside the top ten that also have a 90% on-page score or better.
If our assumption is correct, we will see a much higher percentage of keywords ranking 1–10 with an on-page score of 90% or better, and a lower number for articles ranking greater than 10.
| On-page optimization score by rankings | |
|---|---|
| Percentage of KWs ranking 1–10 with ≥ 90% score | 73.5% |
| Percentage of KWs ranking >10 with ≥ 90% score | 53.2% |
This is enough of a hint for me. I’m implementing a 90% minimum on-page score from here on out.
Old school SEOs, rejoice!
6. The competition’s average word count
We won’t put this “word count” argument to bed just yet…
Let’s ask ourselves, “Does it matter how long the average content of the top 20 results is?”
Is there a relationship between the length of your content versus the average competitor?
What if your competitors are writing very short form, and you want to beat them with long-form content?
We will measure this the same way as before, with percentage attainment. For example, if the average word count of the top 20 results for “content marketing agency” is 300, and our piece is 450 words, we hit 150% attainment.
Let’s see if you can “out-verbose” your opponents.
| Percentage word count attainment versus target KW position | |
|---|---|
| PCC | .062 |
| Relationship | None |
Alright, I’ll put word count to bed now, I promise.
7. Keyword density
You’ve made it to the last analysis. Congratulations! How many cups of coffee have you consumed? No judgment; this report was responsible for entire coffee farms being completely decimated by yours truly.
For selfish reasons, I couldn’t resist the temptation to dispel this ancient tactic of “using target keywords” in blog content. You know what I’m talking about: when someone says “This blog doesn’t FEEL optimized… did you use the target keyword enough?”
There are still far too many people that believe that littering target keywords throughout a piece of content will yield results. And misguided SEO agencies, along with certain SEO tools, perpetuate this belief.
Yoast has a tool in WordPress that some digital marketers live and die by. They don’t think that a blog is complete until Yoast shows the magical green light, indicating that the content has satisfied the majority of its SEO recommendations:
Uh oh, keyword density is too low! Let’s see if that ACTUALLY matters.
Not looking so good, my keyword-stuffing friends! Let’s take a look at the PCC:
| Target keyword ranking position vs. Yoast keyword density | |
|---|---|
| PCC | .097 |
| Relationship | None/negligible |
Believers would like to see a negative relationship here: as keyword density increases, the ranking position improves (a lower position number), producing a downward-sloping line.
What we are looking at is a slightly upward-sloping line, which would indicate losing rankings by keyword stuffing — but fortunately not TOO upward sloping, given the low correlation value.
Okay, so PLEASE let that be the end of “keyword density.” This practice has been disproven in past studies, as referenced by Zyppy. Let’s confidently put this to bed, forever. Please.
Oh, and just for kicks, the Flesch Reading Ease score has no bearing on rankings either (-.03 correlation). Write to a third grade level, or a college level, it doesn’t matter.
TL;DR (I don’t blame you)
What we learned from our data
- Time: It took 100 days or more for an article to fully mature and show its true potential. A content marketing program probably shouldn’t be fully scrutinized until month 5 or 6 at the very earliest.
- Links: Links matter, I’m just terrible at generating them. Shame.
- Word count: It’s not about the length of the content, in absolute terms or relative to the competition. It’s about what is written and how resourceful it is.
- MarketMuse: We have proven that MarketMuse works as it prescribes, but there is no added benefit to breaking records.
- On-page SEO: Our data demonstrates that it still matters. We all still have a job.
- Competitor content length: We weren’t successful at blowing our competitors out of the water with longer content.
- Keyword density: Just stop. Join us in modern times. The water is warm.
In conclusion, some reasonable guidance we agree on is:
Wait at least 100 days to evaluate the performance of your content marketing program, write comprehensive content, and make sure your on-page SEO score is 90%+.
Oh, and build links. Unlike me. Shame.
Now go take a nap.
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018 Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of blog content being published.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece…
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
This image links to an example of a content brief Brafton delivers to writers.
Between mid-January and late May, we ended up writing 55 blog posts, each targeting a unique keyword. 50 of those blog posts ended up ranking in the top 100 of Google results.
We then paused and took a snapshot of each URL’s Google ranking position for its target keyword and its corresponding organic difficulty scores from Moz, SEMrush, Ahrefs, SpyFu, and KW Finder. We also took the PPC competition scores from the Keyword Planner Tool.
Our intention was to draw statistical correlations between our keyword rankings and each tool’s organic difficulty score. With this data, we were able to report on how accurately each tool predicted where we would rank.
This study is uniquely scientific, in that each blog had one specific keyword target. We optimized the blog content specifically for that keyword. Therefore every post was created in a similar fashion.
Do keyword research tools actually work?
We use them every day, on faith. But has anyone ever actually asked, or better yet, measured how well keyword research tools report on the organic difficulty of a given keyword?
Today, we are doing just that. So let’s cut through the chit-chat and get to the results…
While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
We start with a visual inspection of the data to determine if there is a linear relationship between the two variables. Ideally for each tool, you would expect to see the X variable (keyword ranking) increase proportionately with the Y variable (organic difficulty). Put simply, if the tool is working, the higher the keyword difficulty, the less likely you will rank in a top position, and vice-versa.
This chart is all fine and dandy, however, it’s not very scientific. This is where the Pearson Correlation Coefficient (PCC) comes into play.
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
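If you want to run the same check against your own keyword set, here’s a minimal sketch; the lists are placeholders standing in for the real rankings and each tool’s difficulty scores:

```python
from scipy.stats import pearsonr

# Placeholder data: observed rankings and each tool's difficulty score for the same keywords
rankings = [3, 8, 15, 22, 41, 67]
difficulty_by_tool = {
    "Tool A": [28, 31, 35, 38, 41, 44],
    "Tool B": [12, 45, 9, 33, 20, 50],
}

for tool, scores in difficulty_by_tool.items():
    r, p = pearsonr(scores, rankings)
    print(f"{tool}: PCC = {r:.3f}, p-value = {p:.3f}")
```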
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
| Coefficient Correlation R Score | Key |
|---|---|
| .70 or higher | Very strong positive relationship |
| .40 to +.69 | Strong positive relationship |
| .30 to +.39 | Moderate positive relationship |
| .20 to +.29 | Weak positive relationship |
| .01 to +.19 | No or negligible relationship |
| 0 | No relationship [zero correlation] |
| -.01 to -.19 | No or negligible relationship |
| -.20 to -.29 | Weak negative relationship |
| -.30 to -.39 | Moderate negative relationship |
| -.40 to -.69 | Strong negative relationship |
| -.70 or higher | Very strong negative relationship |
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
And here are some examples of charts with their correlating PCC scores (r):
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part – you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient
Now that we’ve all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz
Revisiting Moz’s scatter plot, we observe a tight grouping of results relative to the regression line with few moderate outliers.
| Moz Organic Difficulty Predictability | |
|---|---|
| PCC | 0.412 |
| P-val | .003 (P<0.05) |
| Relationship | Strong |
| % Keywords Matched | 100.00% |
Moz came in first with the highest PCC of .412. As an added bonus, Moz grabs data on keyword difficulty in real time, rather than from a fixed database. This means that you can get any keyword difficulty score for any keyword.
In other words, Moz was able to generate keyword difficulty scores for 100% of the 50 keywords studied.
#2: SpyFu
Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.
| SpyFu Organic Difficulty Predictability | |
|---|---|
| PCC | 0.405 |
| P-val | .01 (P<0.05) |
| Relationship | Strong |
| % Keywords Matched | 80.00% |
SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.
#3: SEMrush
SEMrush would certainly benefit from a couple mulligans (a second chance to perform an action). The Correlation Coefficient is very sensitive to outliers, which pushed SEMrush’s score down to third (.364).
| SEMrush Organic Difficulty Predictability | |
|---|---|
| PCC | 0.364 |
| P-val | .01 (P<0.05) |
| Relationship | Moderate |
| % Keywords Matched | 92.00% |
Further complicating the research process, only 46 of 50 keywords had keyword difficulty scores associated with them, and many of those had to be found through SEMrush’s “phrase match” feature individually, rather than through the difficulty tool.
This made the process of digging around for the data more laborious.
#4: KW Finder
KW Finder definitely could have benefitted from more than a few mulligans with numerous strong outliers, coming in right behind SEMrush with a score of .360.
| KW Finder Organic Difficulty Predictability | |
|---|---|
| PCC | 0.360 |
| P-val | .01 (P<0.05) |
| Relationship | Moderate |
| % Keywords Matched | 100.00% |
Fortunately, the KW Finder tool had a 100% match rate without any trouble digging around for the data.
#5: Ahrefs
Ahrefs comes in fifth by a large margin at .316, just barely clearing the “weak relationship” range into “moderate” territory.
| Ahrefs Organic Difficulty Predictability | |
|---|---|
| PCC | 0.316 |
| P-val | .03 (P<0.05) |
| Relationship | Moderate |
| % Keywords Matched | 100% |
On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.
#6: Google Keyword Planner Tool
Before you ask, yes, SEO companies still use the paid competition figures from Google’s Keyword Planner Tool (and other tools) to assess organic ranking potential. As you can see from the scatter plot, there is in fact no linear relationship between the two variables.
| Google Keyword Planner Tool Organic Difficulty Predictability | |
|---|---|
| PCC | 0.045 |
| P-val | Statistically insignificant/no linear relationship |
| Relationship | Negligible/None |
| % Keywords Matched | 88.00% |
SEO agencies still using KPT for organic research (you know who you are!) — let this serve as a warning: You need to evolve.
Test 1 summary
For scoring, we will use a ten-point scale and score every tool relative to the highest-scoring competitor. For example, if the second highest score is 98% of the highest score, the tool will receive a 9.8. As a reminder, here are the results from the PCC test:
And the resulting scores are as follows:
| Tool | PCC Test |
|---|---|
| Moz | 10 |
| SpyFu | 9.8 |
| SEMrush | 8.8 |
| KW Finder | 8.7 |
| Ahrefs | 7.7 |
| KPT | 1.1 |
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient
Let’s call this the “Mulligan Round.” In this round, assuming sometimes things just go haywire and a tool just flat-out misses, we will remove the three most egregious outliers to each tool’s score.
Here are the adjusted results for the handicap round:
| Adjusted Scores (3 outliers removed) | PCC | Difference (+/-) |
|---|---|---|
| SpyFu | 0.527 | 0.122 |
| SEMrush | 0.515 | 0.150 |
| Moz | 0.514 | 0.101 |
| Ahrefs | 0.478 | 0.162 |
| KW Finder | 0.470 | 0.110 |
| Keyword Planner Tool | 0.189 | 0.144 |
As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
| Tool | PCC Test | Adjusted PCC | Total |
|---|---|---|---|
| SpyFu | 9.8 | 10 | 19.8 |
| Moz | 10 | 9.7 | 19.7 |
| SEMrush | 8.8 | 9.8 | 18.6 |
| KW Finder | 8.7 | 8.9 | 17.6 |
| Ahrefs | 7.7 | 9.1 | 16.8 |
| KPT | 1.1 | 3.6 | 4.7 |
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: “What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?”
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article written for a score of 30 ranks higher than the article written on 60? Then, he performed the same test 1,000 times.
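Here’s a rough sketch of how that pairwise check could be re-created. This is my own illustrative re-implementation of the idea, not Russ’s actual code, and the lists are assumed to hold one difficulty score and one observed ranking per keyword:

```python
import random

def resample_accuracy(difficulty, rank, trials=1000, seed=0):
    """Estimate how often a tool's difficulty scores put two randomly chosen keywords
    in the same order as their actual rankings. difficulty[i] and rank[i] describe the
    same keyword; None marks missing data. (A hypothetical re-creation of the test.)
    """
    rng = random.Random(seed)
    pairs = [(d, r) for d, r in zip(difficulty, rank) if d is not None and r is not None]
    correct = valid = 0
    for _ in range(trials):
        (d1, r1), (d2, r2) = rng.sample(pairs, 2)
        if d1 == d2 or r1 == r2:  # ties are thrown out, as described below
            continue
        valid += 1
        # lower difficulty should predict the better (lower) ranking position
        if (d1 < d2) == (r1 < r2):
            correct += 1
    return correct / valid if valid else 0.0
```

Running something like this against each tool’s difficulty column would produce a “percent guessed correctly” figure comparable to the table below.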
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
| Resampling | % Guessed correctly |
|---|---|
| Moz | 62.2% |
| Ahrefs | 61.2% |
| SEMrush | 60.3% |
| KW Finder | 58.9% |
| SpyFu | 54.3% |
| KPT | 45.9% |
As you can see, this test was particularly critical of each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps us make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2% better than flipping a coin, which is 8.2% less than Moz (which scored 12.2% better than flipping a coin), giving Ahrefs a score of 9.2.
The updated scores are as follows:
| Tool | PCC Test | Adjusted PCC | Resampling | Total |
|---|---|---|---|---|
| Moz | 10 | 9.7 | 10 | 29.7 |
| SEMrush | 8.8 | 9.8 | 8.4 | 27 |
| Ahrefs | 7.7 | 9.1 | 9.2 | 26 |
| KW Finder | 8.7 | 8.9 | 7.3 | 24.9 |
| SpyFu | 9.8 | 10 | 3.5 | 23.3 |
| KPT | 1.1 | 3.6 | -4.0 | .7 |
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed exceptionally well in the first two tests (albeit only returning results for 80% of the tested keywords), then fell flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
- You have to use another tool to get the data, which devalues the entire point of using the original tool.
- You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.
One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
The penalties are as follows:
| Tool | Match Rate | Penalty |
|---|---|---|
| KW Finder | 100% | 0 |
| Ahrefs | 100% | 0 |
| Moz | 100% | 0 |
| SEMrush | 92% | -.8 |
| Keyword Planner Tool | 88% | -1.2 |
| SpyFu | 80% | -2 |
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were not found in its keyword difficulty tool, but rather through manually digging through the phrase match tool. We will give them a pass, but with a stern warning!
Usability Adjustment 2: Reliability
I told you we would come back to this! Revisiting the second test in which we threw away the three strongest outliers that negatively impacted each tool’s score, we will now make adjustments.
In real life, there are no mulligans. In real life, each of those three blog posts that were thrown out represented a significant monetary and time investment. Therefore, when a tool has a major blunder, the result can be a total waste of time and resources.
For that reason, we will impose a slight penalty on those tools that benefited the most from their handicap.
We will use the level of PCC improvement to evaluate how much a tool benefitted from removing their outliers. In doing so, we will be rewarding the tools that were the most consistently reliable. As a reminder, the amounts each tool benefitted were as follows:
| Tool | Difference (+/-) |
|---|---|
| Ahrefs | 0.162 |
| SEMrush | 0.150 |
| Keyword Planner Tool | 0.144 |
| SpyFu | 0.122 |
| KW Finder | 0.110 |
| Moz | 0.101 |
In calculating the penalty, we scored each of the tools relative to the top performer, giving the top performer zero penalty and imposing penalties based on how much additional benefit the tools received over the most reliable tool, on a scale of 0–100%, with a maximum deduction of 5 points.
So if a tool received twice the benefit of the top-performing tool, it would have had a 100% benefit, receiving the maximum deduction of 5 points. If another tool received a 20% benefit over the most reliable tool, it would get a 1-point deduction. And so on.
| Tool | % Benefit | Penalty |
|---|---|---|
| Ahrefs | 60% | -3 |
| SEMrush | 48% | -2.4 |
| Keyword Planner Tool | 42% | -2.1 |
| SpyFu | 20% | -1 |
| KW Finder | 8% | -.4 |
| Moz | – | 0 |
Results
All told, our penalties were fairly mild, with a slight shuffling in the middle tier. The final scores are as follows:
| Tool | Total Score | Stars (5 max) |
|---|---|---|
| Moz | 29.7 | 4.95 |
| KW Finder | 24.5 | 4.08 |
| SEMrush | 23.8 | 3.97 |
| Ahrefs | 23.0 | 3.83 |
| SpyFu | 20.3 | 3.38 |
| KPT | -2.6 | 0.00 |
Conclusion
Using any organic keyword difficulty tool will give you an advantage over not doing so. While none of the tools are a crystal ball, providing perfect predictability, they will certainly give you an edge. Further, if you record enough data on your own blogs’ performance, you will get a clearer picture of the keyword difficulty scores you should target in order to rank on the first page.
For example, we know the following about how we should target keywords with each tool:
| Tool | Average KD ranking ≤ 10 | Average KD ranking ≥ 11 |
|---|---|---|
| Moz | 33.3 | 37.0 |
| SpyFu | 47.7 | 50.6 |
| SEMrush | 60.3 | 64.5 |
| KW Finder | 43.3 | 46.5 |
| Ahrefs | 11.9 | 23.6 |
This is pretty powerful information! It’s either first page or bust, so we now know the threshold for each tool that we should set when selecting keywords.
Stay tuned, because we made a lot more correlations between word count, days live, total keywords ranking, and all kinds of other juicy stuff. Tune in again in early September for updates!
We hope you found this test useful, and feel free to reach out with any questions on our math!
Disclaimer: These results are estimates based on 50 ranking keywords from 50 blog posts and keyword research data pulled from a single moment in time. Search is a shifting landscape, and these results have certainly changed since the data was pulled. In other words, this is about as accurate as we can get from analyzing a moving target.
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018 Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of blog content being published.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece…
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
This image links to an example of a content brief Brafton delivers to writers.
Between mid-January and late May, we ended up writing 55 blog posts each targeting 55 unique keywords. 50 of those blog posts ended up ranking in the top 100 of Google results.
We then paused and took a snapshot of each URL’s Google ranking position for its target keyword and its corresponding organic difficulty scores from Moz, SEMrush, Ahrefs, SpyFu, and KW Finder. We also took the PPC competition scores from the Keyword Planner Tool.
Our intention was to draw statistical correlations between between our keyword rankings and each tool’s organic difficulty score. With this data, we were able to report on how accurately each tool predicted where we would rank.
This study is uniquely scientific, in that each blog had one specific keyword target. We optimized the blog content specifically for that keyword. Therefore every post was created in a similar fashion.
Do keyword research tools actually work?
We use them every day, on faith. But has anyone ever actually asked, or better yet, measured how well keyword research tools report on the organic difficulty of a given keyword?
Today, we are doing just that. So let’s cut through the chit-chat and get to the results…
While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
We start with a visual inspection of the data to determine if there is a linear relationship between the two variables. Ideally for each tool, you would expect to see the X variable (keyword ranking) increase proportionately with the Y variable (organic difficulty). Put simply, if the tool is working, the higher the keyword difficulty, the less likely you will rank in a top position, and vice-versa.
This chart is all fine and dandy; however, it’s not very scientific. This is where the Pearson Correlation Coefficient (PCC) comes into play.
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
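If you want to run the same check on your own data, scipy’s `pearsonr` returns both the coefficient and its p-value in one call. This is a minimal sketch; the rankings and difficulty scores below are hypothetical placeholders, not the 50 keywords from this study:

```python
# Minimal sketch of the correlation check described above.
# The numbers are hypothetical placeholders, not the study data.
from scipy.stats import pearsonr

rankings = [3, 7, 12, 25, 48, 61, 80, 95]       # Google position for each target keyword
difficulty = [22, 30, 34, 41, 47, 55, 63, 70]   # the tool's organic difficulty score for that keyword

pcc, p_value = pearsonr(rankings, difficulty)
print(f"PCC = {pcc:.3f}, p-value = {p_value:.3f}")
# A positive PCC means higher difficulty scores coincided with worse rankings,
# which is exactly what a working difficulty metric should show.
```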
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
| Coefficient Correlation R Score | Key |
|---|---|
| .70 or higher | Very strong positive relationship |
| .40 to +.69 | Strong positive relationship |
| .30 to +.39 | Moderate positive relationship |
| .20 to +.29 | Weak positive relationship |
| .01 to +.19 | No or negligible relationship |
| 0 | No relationship [zero correlation] |
| -.01 to -.19 | No or negligible relationship |
| -.20 to -.29 | Weak negative relationship |
| -.30 to -.39 | Moderate negative relationship |
| -.40 to -.69 | Strong negative relationship |
| -.70 or higher | Very strong negative relationship |
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
And here are some examples of charts with their correlating PCC scores (r):
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part – you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient
Now that we’ve all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz
Revisiting Moz’s scatter plot, we observe a tight grouping of results relative to the regression line with few moderate outliers.
| Moz Organic Difficulty Predictability | |
|---|---|
| PCC | 0.412 |
| P-val | .003 (P<0.05) |
| Relationship | Strong |
| % Keywords Matched | 100.00% |
Moz came in first with the highest PCC of .412. As an added bonus, Moz grabs data on keyword difficulty in real time, rather than from a fixed database. This means that you can get any keyword difficulty score for any keyword.
In other words, Moz was able to generate keyword difficulty scores for 100% of the 50 keywords studied.
#2: SpyFu
Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.
| SpyFu Organic Difficulty Predictability | |
|---|---|
| PCC | 0.405 |
| P-val | .01 (P<0.05) |
| Relationship | Strong |
| % Keywords Matched | 80.00% |
SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.
#3: SEMrush
SEMrush would certainly benefit from a couple mulligans (a second chance to perform an action). The Correlation Coefficient is very sensitive to outliers, which pushed SEMrush’s score down to third (.364).
| SEMrush Organic Difficulty Predictability | |
|---|---|
| PCC | 0.364 |
| P-val | .01 (P<0.05) |
| Relationship | Moderate |
| % Keywords Matched | 92.00% |
Further complicating the research process, only 46 of 50 keywords had keyword difficulty scores associated with them, and many of those had to be found individually through SEMrush’s “phrase match” feature rather than through the difficulty tool, which made digging around for the data more laborious.
#4: KW Finder
KW Finder definitely could have benefitted from more than a few mulligans with numerous strong outliers, coming in right behind SEMrush with a score of .360.
| KW Finder Organic Difficulty Predictability | |
|---|---|
| PCC | 0.360 |
| P-val | .01 (P<0.05) |
| Relationship | Moderate |
| % Keywords Matched | 100.00% |
Fortunately, the KW Finder tool had a 100% match rate without any trouble digging around for the data.
#5: Ahrefs
Ahrefs comes in fifth by a large margin at .316, just barely clearing the threshold from a “weak” into a “moderate” relationship.
| Ahrefs Organic Difficulty Predictability | |
|---|---|
| PCC | 0.316 |
| P-val | .03 (P<0.05) |
| Relationship | Moderate |
| % Keywords Matched | 100.00% |
On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.
#6: Google Keyword Planner Tool
Before you ask, yes, SEO companies still use the paid competition figures from Google’s Keyword Planner Tool (and other tools) to assess organic ranking potential. As you can see from the scatter plot, there is in fact no linear relationship between the two variables.
| Google Keyword Planner Tool Organic Difficulty Predictability | |
|---|---|
| PCC | 0.045 |
| P-val | Statistically insignificant/no linear relationship |
| Relationship | Negligible/None |
| % Keywords Matched | 88.00% |
SEO agencies still using KPT for organic research (you know who you are!) — let this serve as a warning: You need to evolve.
Test 1 summary
For scoring, we will use a ten-point scale and score every tool relative to the highest-scoring competitor. For example, if the second highest score is 98% of the highest score, the tool will receive a 9.8. As a reminder, here are the results from the PCC test:
And the resulting scores are as follows:
| Tool | PCC Test |
|---|---|
| Moz | 10 |
| SpyFu | 9.8 |
| SEMrush | 8.8 |
| KW Finder | 8.7 |
| Ahrefs | 7.7 |
| KPT | 1.1 |
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
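If you want to sanity-check that scaling, it’s one line of arithmetic: divide each tool’s PCC by the top PCC and multiply by ten. A minimal sketch using the PCC values reported above:

```python
# Relative scoring for Test 1: the top PCC earns 10 points, everyone else is scaled proportionally.
pcc_scores = {"Moz": 0.412, "SpyFu": 0.405, "SEMrush": 0.364,
              "KW Finder": 0.360, "Ahrefs": 0.316, "KPT": 0.045}

best = max(pcc_scores.values())
round_scores = {tool: round(10 * pcc / best, 1) for tool, pcc in pcc_scores.items()}
print(round_scores)
# {'Moz': 10.0, 'SpyFu': 9.8, 'SEMrush': 8.8, 'KW Finder': 8.7, 'Ahrefs': 7.7, 'KPT': 1.1}
```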
Test 2: Adjusted Pearson Correlation Coefficient
Let’s call this the “Mulligan Round.” In this round, assuming that sometimes things just go haywire and a tool flat-out misses, we will remove each tool’s three most egregious outliers.
Here are the adjusted results for the handicap round:
| Adjusted Scores (3 Outliers removed) | PCC | Difference (+/-) |
|---|---|---|
| SpyFu | 0.527 | 0.122 |
| SEMrush | 0.515 | 0.150 |
| Moz | 0.514 | 0.101 |
| Ahrefs | 0.478 | 0.162 |
| KWFinder | 0.470 | 0.110 |
| Keyword Planner Tool | 0.189 | 0.144 |
As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
| Tool | PCC Test | Adjusted PCC | Total |
|---|---|---|---|
| SpyFu | 9.8 | 10 | 19.8 |
| Moz | 10 | 9.7 | 19.7 |
| SEMrush | 8.8 | 9.8 | 18.6 |
| KW Finder | 8.7 | 8.9 | 17.6 |
| Ahrefs | 7.7 | 9.1 | 16.8 |
| KPT | 1.1 | 3.6 | 4.7 |
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Given that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: “What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?”
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says the difficulties are 30 and 60, respectively. What is the likelihood that the article targeting the keyword scored 30 ranks higher than the article targeting the keyword scored 60? He then repeated the test 1,000 times.
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
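Here is a minimal sketch of that resampling idea in code. The function and variable names are ours, not Russ’s, and `keywords` would be the list of (difficulty, ranking) pairs for a single tool, with missing data already filtered out:

```python
# Sketch of the resampling test: draw two keywords at random and check whether
# the one the tool scored as easier actually ranks better.
import random

def resample_accuracy(keywords, trials=1000):
    """keywords: list of (difficulty_score, google_ranking) pairs for one tool."""
    hits, valid = 0, 0
    for _ in range(trials):
        (d1, r1), (d2, r2) = random.sample(keywords, 2)
        if d1 == d2 or r1 == r2:          # ambiguous pairs are thrown out, as in the study
            continue
        valid += 1
        if (d1 < d2) == (r1 < r2):        # the easier keyword should hold the better (lower) position
            hits += 1
    return hits / valid

# Hypothetical example:
# resample_accuracy([(30, 4), (60, 18), (45, 9), (70, 32)])  # -> share of correct "guesses"
```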
| Resampling | % Guessed correctly |
|---|---|
| Moz | 62.2% |
| Ahrefs | 61.2% |
| SEMrush | 60.3% |
| Keyword Finder | 58.9% |
| SpyFu | 54.3% |
| KPT | 45.9% |
As you can see, this test was particularly critical of each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps us make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2% better than flipping a coin, which is 8.2% less than Moz’s 12.2% margin over a coin flip, giving Ahrefs a score of 9.2.
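A minimal sketch of that scaling, using the percentages from the table above (50% is the coin-flip baseline, worth zero points):

```python
# Resampling scores: a tool's margin over a 50% coin flip, scaled so the best margin is worth 10.
# (KPT is omitted here; it landed below the coin-flip baseline.)
resampling = {"Moz": 62.2, "Ahrefs": 61.2, "SEMrush": 60.3,
              "Keyword Finder": 58.9, "SpyFu": 54.3}

best_margin = max(pct - 50 for pct in resampling.values())   # Moz's 12.2-point margin
scores = {tool: round(10 * (pct - 50) / best_margin, 1) for tool, pct in resampling.items()}
print(scores)
# {'Moz': 10.0, 'Ahrefs': 9.2, 'SEMrush': 8.4, 'Keyword Finder': 7.3, 'SpyFu': 3.5}
```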
The updated scores are as follows:
| Tool | PCC Test | Adjusted PCC | Resampling | Total |
|---|---|---|---|---|
| Moz | 10 | 9.7 | 10 | 29.7 |
| SEMrush | 8.8 | 9.8 | 8.4 | 27 |
| Ahrefs | 7.7 | 9.1 | 9.2 | 26 |
| KW Finder | 8.7 | 8.9 | 7.3 | 24.9 |
| SpyFu | 9.8 | 10 | 3.5 | 23.3 |
| KPT | 1.1 | 3.6 | -4 | .7 |
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which posted outstanding results in the first two tests (albeit only returning scores for 80% of the tested keywords), then fell flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
- You have to use another tool to get the data, which devalues the entire point of using the original tool.
- You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.
One may argue that this penalty is actually too lenient considering the significance of the two less-than-ideal scenarios outlined above.
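The penalty itself is simple arithmetic; a minimal sketch:

```python
# Match-rate penalty: one point per 10% of keywords the tool couldn't score, capped at 5 points.
def match_penalty(match_rate_pct):
    return min(5.0, (100 - match_rate_pct) / 10)

print(match_penalty(92))   # 0.8  -> SEMrush's deduction
print(match_penalty(80))   # 2.0  -> SpyFu's deduction
```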
The penalties are as follows:
| Tool | Match Rate | Penalty |
|---|---|---|
| KW Finder | 100% | 0 |
| Ahrefs | 100% | 0 |
| Moz | 100% | 0 |
| SEMrush | 92% | -.8 |
| Keyword Planner Tool | 88% | -1.2 |
| SpyFu | 80% | -2 |
Please note that we gave SEMrush a lot of leniency: technically, many of the keywords evaluated were not found in its keyword difficulty tool, but rather by manually digging through the phrase match tool. We will give them a pass, but with a stern warning!
Usability Adjustment 2: Reliability
I told you we would come back to this! Revisiting the second test in which we threw away the three strongest outliers that negatively impacted each tool’s score, we will now make adjustments.
In real life, there are no mulligans. In real life, each of those three blog posts that were thrown out represented a significant monetary and time investment. Therefore, when a tool has a major blunder, the result can be a total waste of time and resources.
For that reason, we will impose a slight penalty on those tools that benefited the most from their handicap.
We will use the level of PCC improvement to evaluate how much a tool benefitted from removing their outliers. In doing so, we will be rewarding the tools that were the most consistently reliable. As a reminder, the amounts each tool benefitted were as follows:
| Tool | Difference (+/-) |
|---|---|
| Ahrefs | 0.162 |
| SEMrush | 0.150 |
| Keyword Planner Tool | 0.144 |
| SpyFu | 0.122 |
| KWFinder | 0.110 |
| Moz | 0.101 |
In calculating the penalty, we scored each of the tools relative to the top performer, giving the top performer zero penalty and imposing penalties based on how much additional benefit the tools received over the most reliable tool, on a scale of 0–100%, with a maximum deduction of 5 points.
So if a tool received twice the benefit of the top-performing tool, it would have had a 100% benefit and receive the maximum deduction of 5 points. If another tool received a 20% benefit over the most reliable tool, it would get a 1-point deduction. And so on.
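A minimal sketch of that penalty calculation, using the benefit figures from the table above (Moz’s +0.101 is the baseline):

```python
# Reliability penalty: every extra 20% of outlier benefit over the most reliable tool
# costs one point, capped at 5 points.
benefits = {"Ahrefs": 0.162, "SEMrush": 0.150, "Keyword Planner Tool": 0.144,
            "SpyFu": 0.122, "KWFinder": 0.110, "Moz": 0.101}

baseline = min(benefits.values())                      # Moz, the most reliable tool
penalties = {tool: round(min(5.0, 5 * (b / baseline - 1)), 1) for tool, b in benefits.items()}
print(penalties)
# {'Ahrefs': 3.0, 'SEMrush': 2.4, 'Keyword Planner Tool': 2.1, 'SpyFu': 1.0, 'KWFinder': 0.4, 'Moz': 0.0}
```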
| Tool | % Benefit | Penalty |
|---|---|---|
| Ahrefs | 60% | -3 |
| SEMrush | 48% | -2.4 |
| Keyword Planner Tool | 42% | -2.1 |
| SpyFu | 20% | -1 |
| KW Finder | 8% | -.4 |
| Moz | – | 0 |
Results
All told, our penalties were fairly mild, with a slight shuffling in the middle tier. The final scores are as follows:
| Tool | Total Score | Stars (5 max) |
|---|---|---|
| Moz | 29.7 | 4.95 |
| KW Finder | 24.5 | 4.08 |
| SEMrush | 23.8 | 3.97 |
| Ahrefs | 23.0 | 3.83 |
| SpyFu | 20.3 | 3.38 |
| KPT | -2.6 | 0.00 |
Conclusion
Using any organic keyword difficulty tool will give you an advantage over not doing so. While none of the tools are a crystal ball, providing perfect predictability, they will certainly give you an edge. Further, if you record enough data on your own blogs’ performance, you will get a clearer picture of the keyword difficulty scores you should target in order to rank on the first page.
For example, we know the following about how we should target keywords with each tool:
| Tool | Average KD (ranking ≤ 10) | Average KD (ranking ≥ 11) |
|---|---|---|
| Moz | 33.3 | 37.0 |
| SpyFu | 47.7 | 50.6 |
| SEMrush | 60.3 | 64.5 |
| KWFinder | 43.3 | 46.5 |
| Ahrefs | 11.9 | 23.6 |
This is pretty powerful information! It’s either first page or bust, so we now know the threshold for each tool that we should set when selecting keywords.
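As a simple illustration of how you might apply those thresholds, here is a hypothetical filter that keeps only candidate keywords whose Moz difficulty score falls at or below our page-one average of 33.3. The keyword list is made up, not from the study:

```python
# Hypothetical shortlist filtered against Moz's page-one average difficulty (33.3 from the table above).
candidates = [("best crm software", 41), ("crm for small teams", 29),
              ("what is a crm", 55), ("crm onboarding checklist", 24)]

MOZ_PAGE_ONE_AVG_KD = 33.3
targets = [kw for kw, kd in candidates if kd <= MOZ_PAGE_ONE_AVG_KD]
print(targets)   # ['crm for small teams', 'crm onboarding checklist']
```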
Stay tuned, because we made a lot more correlations between word count, days live, total keywords ranking, and all kinds of other juicy stuff. Tune in again in early September for updates!
We hope you found this test useful, and feel free to reach out with any questions on our math!
Disclaimer: These results are estimates based on 50 ranking keywords from 50 blog posts and keyword research data pulled from a single moment in time. Search is a shifting landscape, and these results have certainly changed since the data was pulled. In other words, this is about as accurate as we can get from analyzing a moving target.