How Google Search Ranking Works – Darwinism in Search via @jasonmbarnard

Are featured snippets driven by a specific ranking algorithm that is separate from the core algorithm?

That’s my theory. For me, the idea holds (a lot of) water.

And I’m not alone. Experts such as Eric Enge, Cindy Krum, and Hannah Thorpe have the same idea.

To try to get confirmation or a rebuttal of that theory, I asked Gary Illyes this question:

Does the Featured Snippet function on a different algorithm than the 10 blue links?

The answer absolutely floored me.

He gave me an overview of what a new search engineer learns when they start working at Google.

Please remember that the system described in this article is confirmed to be true, but that some conclusions I draw are not (in italics), and that all the numbers here are completely invented by me.

The aim of this article is to give an overview of how ranking functions: not what the individual ranking factors are, nor their relative weighting or importance, nor the inner workings of the multi-candidate bidding system. Those remain super-secret (I 100% see why that is the case).

How Ranking Works in Google Search

What Are the Ranking Factors?

There are hundreds/thousands of ranking factors. Google doesn’t tell us what they are in detail (which, by the by, seems to me to be reasonable).

They do tell us that they group them: Topicality, Quality, PageSpeed, RankBrain, Entities, Structured Data, Freshness… and others.

A couple of things to point out here:

  • Those seven are real ranking factors we can count on (in no particular order).
  • Each ranking factor includes multiple signals. For example, Quality is mostly PageRank but also includes other signals, and Structured Data includes not only Schema.org markup but also tables, lists, semantic HTML5, and certainly a few others.

Google calculates a score for a page for each of the ranking factors.

Example of Google Ranking Factors

Something like this.

Remember that throughout this article, all these numbers are completely hypothetical.

How Ranking Factors Contribute to the Bid

Google takes the individual ranking factor scores and combines them to calculate the total score (the term ‘bid’ is used, which makes super good sense to me).

Importantly, the total bid is calculated by multiplying these scores.
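The multiply-don’t-add mechanic can be sketched in a few lines of JavaScript. Every factor name and number below is invented (as are all the numbers in this article); only the multiplication itself comes from the description above.

```javascript
// Hypothetical sketch of a multiplicative "bid". The factor names and
// scores are invented; Google's real factors and scales are not public.
function computeBid(factorScores) {
  // Multiply, not add: the total bid is the product of every factor score.
  return Object.values(factorScores).reduce((bid, score) => bid * score, 1);
}

const page = {
  topicality: 3,
  quality: 4,
  pageSpeed: 2,
  freshness: 1.5,
};

console.log(computeBid(page)); // 3 * 4 * 2 * 1.5 = 36
```

Because the scores multiply, a change to any one factor scales the whole bid, which is what makes the sub-1 case discussed below so dangerous.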

Ranking Score Example Google

Something like this.

The total score has an upper limit of 2 to the power of 64. I’m not 100% sure, but I think that is what Illyes said, so perhaps it is a reference to the Wheat and Chessboard problem, where the numbers on the second half of the chessboard are so phenomenally off-the-scale that the limit is effectively a kind of fail-safe buffer.

That means these individual scores could be single, double, triple, or even quadruple digits and the total would never hit that upper limit.

That very high ceiling also means that Google can continue to throw in more factors and never have a need to “dampen” the existing scores to make space for the new one.

Just up to there, my mind was already swirling. But it gets better.

Watch out – One Single Low Score Can Kill a Bid

And the fact that the total is calculated by multiplication is a phenomenal insight. Why? Because any single score under 1 will seriously handicap that bid, whatever the other scores are.

Low Scoring Result Example Google


Look at how the score tanks as just one factor drops slightly below 1. That is enough to put this page out of contention.

Dropping further below 1 will generally kill it off. It is possible to overcome a sub-1 ranking factor. But the other factors would need to be phenomenally strong.

Looking at the numbers below, one gets an idea of just how strong. Ignoring a weak factor is not a good strategy. Working to get that factor above 1 is a great strategy.

My bet here is that the super impressive ‘up and to the right SEO wins’ examples we (often) see in the SEO industry are examples of when a site *simply* corrects a sub-1 ranking factor.

Very Low Scoring Results Example


This system rewards pages that have good scores across the board. Pages that perform well on some factors, but badly on others will always struggle. A balanced approach wins.

Credit to Brent D Payne for making this great analogy: “Better to be a straight C student than 3 As and an F”.
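Payne’s analogy can be checked against the multiplicative model. The grade-to-score mapping below is invented purely for illustration; the key assumption is that an F scores below 1, so it drags the whole product down.

```javascript
// Invented grade-to-score mapping, purely to illustrate the
// multiplicative model; the F is scored below 1 so it handicaps the
// whole product.
const gradeScore = { A: 4, C: 2, F: 0.2 };

const product = (grades) =>
  grades.reduce((total, grade) => total * gradeScore[grade], 1);

const straightC = product(["C", "C", "C", "C"]);     // 2 * 2 * 2 * 2 = 16
const threeAsAndAnF = product(["A", "A", "A", "F"]); // 4 * 4 * 4 * 0.2 = 12.8
console.log(straightC > threeAsAndAnF); // true: the straight-C student wins
```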

What a Bid-Based Ranking Looks like

Google Bid-Based Ranking Example

This is just an example.

Refining the Bids for a Final Ranking

The top results (let’s say 10) are sent to a second algorithm that is designed to refine the ranking and remove any unacceptable results that slipped through the net.

The factors taken into account here are different and appear to be aimed at specific cases.

This recalculation can raise or lower a bid (or conceivably leave it the same).

My understanding is that it is most likely to push a bid down. I’ll take that further and suggest that this is a filter currently aimed principally at blocking irrelevant, low-quality, and black-hat content that the initial algorithm missed.

So we are looking at a final set of bids that might look something like this.

Google Refined Ranking Example


Note that in this example, one result receives a zero score and is therefore completely removed from consideration (remember: because we are multiplying, any individual zero score guarantees that the overall score is also zero). That is seriously radical, and a very significant fact, however you look at it.
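Under the system described, the refinement pass might be sketched like this. The URLs, numbers, and the multiply-then-filter mechanics are my illustration, not confirmed detail.

```javascript
// Hypothetical sketch of the second-pass refinement: each top result's
// bid is multiplied by an adjustment factor, and a zero eliminates the
// result entirely. All names and numbers are invented.
function refine(results) {
  return results
    .map(({ url, bid, adjustment }) => ({ url, bid: bid * adjustment }))
    .filter((result) => result.bid > 0) // any zero removes the page outright
    .sort((a, b) => b.bid - a.bid);     // re-rank on the refined bids
}

const refined = refine([
  { url: "a.example", bid: 120, adjustment: 1 },   // left as-is
  { url: "b.example", bid: 150, adjustment: 0.5 }, // pushed down to 75
  { url: "c.example", bid: 90,  adjustment: 0 },   // eliminated
]);
// refined: a.example (120) first, b.example (75) second, c.example gone
```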

Such a zero can be generated algorithmically.

My guess is that a zero could additionally serve as a way to implement some manual actions (this is a pretty big jump from what I was told; it is my conclusion and has in no way been confirmed by anyone at Google).

What is sure is that the order changes and we have a final list of results for the web / “10 blue links.”

If that weren’t enough for one day, now it gets really interesting.

Rich Elements Are ‘Candidate Result Sets’ (My Term, Not Google’s)

Candidate Result Sets Compete for a Place on Page 1

Each type of result/rich element is effectively competing for a place on page 1.

News, images, videos, featured snippets, carousels, maps, GMB, etc. – each one provides a list of candidates for Page 1 with their bids.

There is already quite a variety competing to appear on Page 1, and that list keeps on growing.

Rich SERP Result Types

With this system, there is no theoretical limit to the number of rich elements that Google can create to bid for a place.

Candidate Result Ranking Factors

The terms ‘Candidate Result’ and ‘Candidate Result Set’ are from me, not from Google

The combination of factors that affect ranking in these candidate result sets is necessarily specific to each since some factors will be unique to an individual candidate result set and some will not apply.

An example would be alt tags that apply to the Images candidate result set, but not to others, or a news sitemap that would be necessary for the News candidate result set, but have no place in a calculation for the others.

Candidate Result Set Ranking Factor Weightings

The relative weighing of each factor will also necessarily be different for each candidate result set since each one provides a specific type of information in a specific format.

And the aim is to provide the most appropriate elements to the user in terms of:

  • The content itself.
  • The media format.
  • The place on the page.

For example, freshness is going to be a heavily weighted factor in News, and perhaps RankBrain for Featured Snippets.

Candidate Result Set Bid Calculations

The bids provided by each candidate result set are calculated in the same way as the first Web/blue links example (by multiplication and, I assume, with the second refinement algorithm).

Google then has multiple candidates bidding for a place (or several places, depending on the type).

How Google Search Ranking Works – Darwinism in Search


Pulling It All Together for Page 1

Candidate Result Sets Bid Against Each Other

My initial question was about the Featured Snippet, and I am certain that the top bid from that specific candidate result set had to outbid the top result for the Web to “win.”

For the others, that doesn’t make 100% sense. So I am assuming the rules to “win” are different for each candidate result set. 
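As a toy illustration of that bidding, here is the one rule I am fairly confident about (a candidate set must outbid the top web result) applied across several sets. All names and numbers are invented, and the real rules per set are unknown.

```javascript
// Fictional selection rule, as flagged in the article: a candidate
// result set wins a place only if its best bid beats the top classic
// web bid. All names and numbers are invented.
function pickRichElements(topWebBid, candidateSets) {
  return Object.entries(candidateSets)
    .filter(([, bids]) => Math.max(...bids) > topWebBid)
    .map(([name]) => name);
}

const winners = pickRichElements(100, {
  news: [80, 60],             // fails to outbid the #1 web result
  video: [130, 110],          // outbids it, so it wins a place
  featuredSnippet: [140, 90], // outbids it, so it wins "the" answer slot
});
console.log(winners); // ["video", "featuredSnippet"]
```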

Rich Elements Winning Bids Example

The rules I used to make these winning choices are fictional, and not how Google really does this.

Google is looking for any rich result that will provide a “better” solution for the user.

When it does identify a “better” candidate result, that result is given a place (at the expense of one or more classic blue links).

The Final Choice of Rich Elements on Page 1

Each candidate result set is subject to specific limitations – and all are subservient to that traditional web result/classic blue links (for the moment, at least).

  • One result, one possible position (featured snippet, news, for example)
  • Multiple results, multiple possible positions (images, videos, for example)
  • Multiple results, one possible position (news, carousel, for example)

And the winners in my example are (remember that the rules I used to make these choices are fictional, and not how Google really does this)…

  • News: Failed to outbid the #1 Web bid and is therefore not sufficiently relevant and does not win a place.
  • Images: We have one winner. The space allotted is 5 so the other 4 get a free ride.
  • Video: Two are outbidding the top web result so they both get a place.
  • Featured Snippet: We have several winners. But only one is used. Because this is “the” answer.

Final Ranking Example

We have our final page, and it looks something like this.

As places are given to rich elements, the lower-positioned web results drop onto Page 2, which rather hammers home that we really should not take our eyes off the ongoing demise of blue links.

I reiterate: I have no information about how positions are attributed to the videos or images – I attributed positions to them with my own invented simplistic system, not Google’s. 🙂

To End – A Little Theorizing From Me

All of this last chunk is my initial thoughts as I digest all this. Not attributable to Gary or Google.

Darwinism in Search Results

It seems to me that some rich elements will “naturally” grow and win a place on Page 1 increasingly often (featured snippet being an example that we are seeing in action today).

Others will “naturally” shrink (classic blue links on mobile). And some could “naturally” die out entirely. All very Darwinian!

This System Isn’t Going Away Any Time Soon

Google’s “rich element ranking” system has an in-built capacity to expand and adapt to changes in result/answer delivery. Organically!

New devices, new content formats, personalization… Google can simply create a new rich element, add it to the system and let it bid for a place. It will win a place in results when it is a more appropriate option than the classic blue links. Potentially, over time it will naturally dominate in the micro-moment it is most suited to.

Darwinism in Search. Wow!

Don’t know about you, but all in all, my mind is blown.

More Resources:

Image Credits

Featured Image & In-Post Images: Created by Véronique Barnard, May 2019

Link Distance Ranking Algorithms via @martinibuster

There is a kind of link algorithm that isn’t widely discussed, and not nearly enough. This article is meant as an introduction to link and link distance ranking algorithms. It’s something that may play a role in how sites are ranked, and in my opinion it’s important to be aware of it.

Does Google Use This?

While the algorithm under consideration is from a patent that was filed by Google, Google’s official statement about patents and research papers is that they produce many of them, that not all of them are used, and that sometimes they are used in a way that is different from what is described.

That said, the details of this algorithm appear to resemble the contours of what Google has officially said about how it handles links.

Complexity of Calculations

There are two sections of the patent (Producing a Ranking for Pages Using Distances in a Web-link Graph) that state how complex the calculations are:

“Unfortunately, this variation of PageRank requires solving the entire system for each seed separately. Hence, as the number of seed pages increases, the complexity of computation increases linearly, thereby limiting the number of seeds that can be practically used.”

“Hence, what is needed is a method and an apparatus for producing a ranking for pages on the web using a large number of diversified seed pages…”

The above points to the difficulty of making these calculations web-wide because of the large number of data points. It states that by breaking these down into topic niches, the calculations become easier to compute.

What’s interesting about that statement is that the original Penguin algorithm was calculated once a year or longer. Sites that were penalized pretty much stayed penalized until the next seemingly random date that Google recalculated the Penguin score.

At a certain point Google’s infrastructure must have improved. Google is constantly building its own infrastructure but apparently doesn’t announce it. The Caffeine web indexing system is one of the exceptions.

Real-time Penguin rolled out in the fall of 2016.

It is notable that these calculations are difficult. That points to the possibility that Google would do a periodic calculation for the entire web, then assign scores based on the distances from the trusted sites to all the rest of the sites. Thus, one gigantic calculation, done once a year.

So when a SERP is calculated via PageRank, the distance scores are also calculated. This sounds a lot like the process we know as the Penguin Algorithm.

“The system then assigns lengths to the links based on properties of the links and properties of the pages attached to the links. The system next computes shortest distances from the set of seed pages to each page in the set of pages based on the lengths of the links between the pages. Next, the system determines a ranking score for each page in the set of pages based on the computed shortest distances.”

What is the System Doing?

The system creates a score that is based on the shortest distance between a seed set and the proposed ranked pages. The score is used to rank these pages.

So it’s basically an overlay on top of the PageRank score to help weed out manipulated links, based on the theory that manipulated links will naturally have a longer distance of link connections between the spam page and the trusted set.
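The patent’s three steps (assign link lengths, compute shortest distances from the seeds, score on those distances) can be sketched with a small Dijkstra-style traversal. The pages, link lengths, and graph here are invented stand-ins for the properties the patent describes.

```javascript
// Minimal Dijkstra-style traversal from a set of seed pages over a
// weighted link graph. Pages and link lengths are invented examples.
function seedDistances(graph, seeds) {
  const dist = {};
  for (const node of Object.keys(graph)) dist[node] = Infinity;
  for (const seed of seeds) dist[seed] = 0;
  const queue = seeds.map((seed) => [seed, 0]);
  while (queue.length) {
    queue.sort((a, b) => a[1] - b[1]); // process the closest page next
    const [node, d] = queue.shift();
    if (d > dist[node]) continue; // stale queue entry
    for (const [target, length] of Object.entries(graph[node] || {})) {
      const candidate = d + length;
      if (candidate < (dist[target] ?? Infinity)) {
        dist[target] = candidate;
        queue.push([target, candidate]);
      }
    }
  }
  return dist;
}

// Shorter distance from the seeds would mean a higher ranking score.
const graph = {
  "seed.example": { "trusted.example": 1 },
  "trusted.example": { "normal.example": 1 },
  "normal.example": {},
  "spam.example": { "normal.example": 5 }, // links out, but nothing links to it
};
const distances = seedDistances(graph, ["seed.example"]);
// seed.example: 0, trusted.example: 1, normal.example: 2,
// spam.example: Infinity (unreachable from the seeds)
```

Note how a page that cannot be reached from the seed set ends up at infinite distance, which previews the reduced-link-graph idea discussed below.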

Ranking a web page can be said to consist of three processes.

  • Indexing
  • Ranking
  • Ranking Modification (usually related to personalization)

That’s an extreme reduction of the ranking process. There’s a lot more that goes on.

Interestingly, this distance ranking process happens during the ranking part of the process. Under this algorithm there’s no chance of ranking for meaningful phrases unless the page is associated with the seed set.

Here is what it says:

“One possible variation of PageRank that would reduce the effect of these techniques is to select a few “trusted” pages (also referred to as the seed pages) and discovers other pages which are likely to be good by following the links from the trusted pages.”

This is an important distinction: knowing in what part of the ranking process the seed set calculation happens helps us formulate what our ranking strategy is going to be.

This is different from Yahoo’s TrustRank, which was shown to be biased.

Majestic’s Topical TrustFlow can be said to be an improved version, similar to a research paper that demonstrated that using a seed set organized by niche topics is more accurate. Research also showed that organizing a seed set by topic performs several orders of magnitude better than not doing so.

Thus, it makes sense that Google’s distance ranking algorithm also organizes its seed set by niche topic buckets.

As I understand it, this Google patent calculates distances between a seed set and the rest of the web’s pages, and assigns distance scores accordingly.

Reduced Link Graph

“In a variation on this embodiment, the links associated with the computed shortest distances constitute a reduced link-graph.”

What this means is that there’s a map of the Internet commonly known as the link graph, and then there’s a smaller version of the link graph populated by web pages that have had spam pages filtered out. Sites that primarily obtain links outside of the reduced link graph might never get inside. Dirty links thus get no traction.
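That filtering step can be sketched as follows. The link shape, the `rel` feature, and the classifier are hypothetical stand-ins for whatever signals a real system would use.

```javascript
// Sketch of building a reduced link graph: run a (hypothetical)
// classifier over every link and drop the ones flagged as noisy.
function reduceLinkGraph(links, isNoisy) {
  return links.filter((link) => !isNoisy(link));
}

const links = [
  { from: "news.example", to: "site.example", rel: "editorial" },
  { from: "spamfarm.example", to: "site.example", rel: "paid" },
];

// Stand-in classifier; a real one would use many link and page features.
const reduced = reduceLinkGraph(links, (link) => link.rel === "paid");
// reduced now contains only the editorial link
```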

What is a Reduced Link Graph?

I’ll keep this short and sweet. The link to the document follows below.

What you really need to know is this part:

“The early success of link-based ranking algorithms was predicated on the assumption that links imply merit of the target pages. However, today many links exist for purposes other than to confer authority. Such links bring noise into link analysis and harm the quality of retrieval.

In order to provide high quality search results, it is important to detect them and reduce their influence… With the help of a classifier, these noisy links are detected and dropped. After that, link analysis algorithms are performed on the reduced link graph.”

Read this PDF for more information about Reduced Link Graphs.

If you’re obtaining links from sites like news organizations, it may be fair to assume they are on the inside of the reduced link graph. But are they a part of the seed set? Maybe we shouldn’t obsess over that.

Is This Why Google Says Negative SEO Doesn’t Exist?

“…the links associated with the computed shortest distances constitute a reduced link-graph”

A reduced link graph is different from a link graph. A link graph can be said to be a map of the entire Internet organized by the link relationships between sites, pages or even parts of pages.

Then there’s a reduced link graph, which is a map of everything minus certain sites that don’t meet specific criteria.

A reduced link graph can be a map of the web minus spam sites. The sites outside of the reduced link graph will have zero effect on the sites inside the link graph, because they’re on the outside.

That’s probably why a spam site linking to a normal site will not cause a negative effect on a non-spam site. Because the spam site is outside of the reduced link graph, it has no effect whatsoever. The link is ignored.

Could this be why Google is so confident that it’s catching link spam and that negative SEO does not exist?

Distance from Seed Set Equals Less Ranking Power?

I don’t think it’s necessary to try to map out what the seed set is.  What’s more important, in my opinion, is to be aware of topical neighborhoods and how that relates to where you get your links.

At one time Google used to publicly display a PageRank score for every page, so I can remember what kinds of sites tended to have low scores. There is a class of sites that have low PageRank and low Moz DA, but they are closely linked to sites that, in my opinion, are likely a few clicks away from the seed set.

What Moz DA is measuring is an approximation of a site’s authority. It’s a good tool. However, what Moz DA is measuring may not be a distance from a seed set, which cannot be known because it’s a Google secret.

So I’m not putting down the Moz DA tool, keep using it. I’m just suggesting you may want to expand your criteria and definition of what a useful link may be.

What Does it Mean to be Close to a Seed Set?

From a Stanford University classroom document, page 17 asks: what is a good notion of proximity? The answers are:

  • Multiple connections
  • Quality of connection
  • Direct & Indirect connections
  • Length, Degree, Weight

That is an interesting consideration.


There are many people who worry about anchor text ratios and the DA/PA of inbound links, but I think those considerations are somewhat outdated.

The concern with DA/PA is a throwback to the hand-wringing about obtaining links from pages with a PageRank of 4 or more, a practice that began from a randomly chosen PageRank score: the number four.

When we think about links in the context of ranking, it may be useful to consider distance ranking as part of that conversation.

Read the patent here

Images by Shutterstock, Modified by Author

Collaborating to protect nearly anonymous animals

According to WWF, wildlife populations have dwindled by 60 percent in less than five decades. And with nearly 50 species threatened with extinction today, technology has a role to play in preventing endangerment.

With artificial intelligence (AI), advanced analytics, and apps that speed up collaboration, Google is helping organizations like WWF in their work to save our precious planet’s species. Here are some of the ways.

  • Curating wildlife data quickly. A big part of increasing conservation efforts is having access to reliable data about the animals that are threatened. To help, WWF and Google have joined a number of other partners to create the Wildlife Insights platform, a way for people to share wildlife camera trap images. Using AI, the species are automatically identified, so that conservationists can act quicker to help recover global wildlife populations.
  • Predicting wildlife trade trends. Using Google search queries and known web page content, Google can help organizations like WWF predict wildlife trade trends similar to how we can help see flu outbreaks coming. This way, we can help prevent a wildlife trafficking crisis quicker.
  • Collaborating globally with people who can help. Using G Suite, which includes productivity and collaboration apps like Docs and Slides, Google Cloud, WWF and Netflix partnered together to draft materials and share information quickly to help raise awareness for Endangered Species Day (not to mention, cut back on paper).

What you can do to help
Conservation can seem like a big, hairy problem that’s best left to the experts to solve. But there are small changes we can make right now in our everyday lives. When we all collaborate to make these changes, they can make a big difference.

Check out this Slides presentation to find out more about how together, we can help our friends. You can also take direct action to help protect our planet on the “Our Planet” website.

User-Centric Optimization: 3 Ways to Improve Your Website Experience via @rachellcostello

SEO is multifaceted and each optimization factor is dependent on the others.

You can create first-class content that engages users and that is relevant to their search intent, but if your pages load slowly, your users will never get the chance to read this outstanding content you’re creating for your website.

Users are impatient and they will bounce if they have to wait for more than a few seconds.

Load time vs bounce rate graphic by Think With Google

Data from Think With Google

Can you blame them, though? Think about how frustrated you feel when you have to watch a loading wheel spinning for what feels like an eternity.

Three different loading wheel icons


This is the mindset we need to have when we approach any performance optimization work because the most meaningful improvements will happen when you approach things from a place of empathy for your users.

Understanding the Different Browsing Conditions of Users

Empathy for users is a great starting point, but we also need to support that with an understanding of how your users are accessing your website.

For example, what devices and browsers are they using to visit your website? What kind of internet connections are they browsing with?

These differences in browsing conditions can have a bigger impact on performance than you might expect.

This is demonstrated by the results from testing JavaScript processing times for the CNN homepage across different devices from WebPageTest.

JavaScript processing times for CNN graph


The iPhone 8, which is a higher-end device with a better CPU, loaded the CNN homepage in 4 seconds compared to the Moto G4 which loaded in 13 seconds.

However, the results were even more dramatic for the Alcatel 1X which loaded the same page in 36 seconds.

Processing times for three different phones


Performance isn’t a ‘one score fits all’ scenario. It can vary drastically depending on each user’s browsing conditions.

The Audience tab in Google Analytics is a great place to start digging around and doing some research into how your users are accessing your website.

For example, you can see the split of the most commonly used devices under Audience > Mobile > Devices.

Google Analytics mobile devices report


That’s just one report of many, so take a closer look in your analytics account to get a better understanding of your users and the factors that could be impacting their experience on your website.

User-Centric Performance Optimization Is the Future

Considering the varying nature of performance depending on the browsing conditions of each individual user, there’s a lot more that marketers can be doing to improve the way we speed up websites.

The future of site speed should be focused on tailoring performance around the user and their particular browsing environment.

Here are three areas that can be optimized to improve how users experience your website:

  • The user’s device
  • The user’s internet connection
  • The user’s journey

1. Optimizing Performance Based on the User’s Device

The key to ensuring that every user has a positive, fast experience on your website is to implement a baseline level of performance that works for the most basic device you’re optimizing for.

Two web development strategies that work around this concept are:

  • Progressive enhancement
  • Graceful degradation

Progressive Enhancement

Progressive enhancement focuses on making the core content of a page accessible, and then progressively adds more technically advanced features on top, as the capabilities of the user’s device or browser allow.

For example, the website might provide clean, accessible content in the HTML first as a priority.

Then if it is detected that the user’s browsing conditions can handle more complex features, some additional CSS visual alterations can be layered on top, and perhaps some more advanced interactivity via JavaScript.
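That layering decision can be sketched as follows. The capability flags and layer names are simplified stand-ins for real feature checks, not an actual implementation.

```javascript
// Sketch of a progressive-enhancement decision: start from a baseline
// that every device gets, then add layers only when capabilities allow.
function enhancementLayers(capabilities) {
  const layers = ["core-html-content"]; // the accessible baseline, always served
  if (capabilities.css) layers.push("visual-polish");         // CSS alterations
  if (capabilities.js) layers.push("advanced-interactivity"); // JavaScript extras
  return layers;
}

enhancementLayers({ css: false, js: false }); // ["core-html-content"]
enhancementLayers({ css: true, js: true });
// ["core-html-content", "visual-polish", "advanced-interactivity"]
```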

Graceful Degradation

Graceful degradation is basically the opposite of progressive enhancement.

The website will start with the full experience, but will then start falling back to a gradually less complex experience by switching off certain low-importance elements if the user’s device is unable to handle the more advanced features.

These web strategies can be really powerful because if your website loads quickly and performs well even on the most basic device, think about how much faster it will load on higher-end devices.

2. Optimizing Performance Based on the User’s Internet Connection

Internet connection is one of the most inconstant factors of a user’s browsing conditions, especially for those on mobile. As we use our devices on the move, internet connectivity is bound to fluctuate and drop off.

However, it is possible to optimize for different levels of internet connectivity to ensure that users will still have a good experience of your website on a 3G or 2G connection.

Network Information API

The Network Information API provides information on a user’s internet connection status, including the type and strength of their connection.

You can use the Network Information API to detect changes in the user’s internet connection, by using this code example:

Network Information API code example

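A hedged sketch of that idea follows (not the exact code from the screenshot): read the connection’s `effectiveType` and react to `change` events. `navigator.connection` only exists in supporting browsers, so a fallback object keeps the sketch self-contained elsewhere.

```javascript
// Read the Network Information API where available; fall back to a
// stub object (assumed defaults) so the example runs outside a browser.
const connection =
  (typeof navigator !== "undefined" && navigator.connection) || {
    effectiveType: "4g",
    addEventListener() {}, // no-op outside the browser
  };

// Decide which media to serve for a given connection type.
function mediaFor(effectiveType) {
  return effectiveType === "4g" ? "video" : "static-image";
}

let media = mediaFor(connection.effectiveType);
connection.addEventListener("change", () => {
  media = mediaFor(connection.effectiveType); // re-evaluate on change
});
```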

You can also set instructions for what should happen if the internet connection changes, and how the content on a website should adapt.

As demonstrated at Google I/O 2018, if a user’s connection is 4G you can set a video to be loaded as this connection would be able to handle this rich experience.

However, if a user is browsing on a 2G or 3G connection you can set a static image to be loaded in place of the video so you’re not putting too much strain on the user’s already limited connection.

Google I/O example of swapping a video for an image depending on connection type


In this circumstance, the user doesn’t have the expectation of watching a video or animation and doesn’t know what they’re missing. The important thing is that they’re seeing content quickly.

This contributes to the user’s perception of speed as they’re getting a fast experience rather than having to wait a long time for a non-critical video to load.

3. Optimizing Performance Based on the User’s Journey

One way of prioritizing the most important resources to be loaded as quickly as possible is by the user’s journey.

When a user is on a particular page, where are they most likely to click next? Which links and resources will be needed for that next page in the user’s journey?

Again, this is another method of optimizing what is needed as a priority rather than optimizing every page that a user could potentially land on and every resource they could potentially need.

A fast, seamless journey between pages contributes a great deal to a user’s perception of speed.

Resource Hints

Leaving the browser to load every single resource all at once can be an inefficient process which adds more time for the user as they sit and wait for a page to load.

This is where resource hints can help. Resource hints are instructions that you can give to a browser to help it prioritize what is most important to be loaded first.


Preload specifies the highest priority resources that impact the current navigation that should be loaded first.

<link rel="preload" as="script" href="example.js">


Preconnect establishes connections with the server and other origins earlier. Setting up these connections can take a long time for users with poor connectivity, so doing it in advance helps.

<link rel="preconnect" href="">


Prefetch specifies key links and resources that will be needed as part of the future navigation or for the next step in the user’s journey.

<link rel="prefetch" href="example.jpg">


Guess.js takes resource hints to the next level by automating the process of prefetching important resources and prioritizing the ones that are most likely to be needed next in the user’s journey.

It works by using Google Analytics data to analyze how users navigate your website, using metrics like pageviews, previous page paths, and exits.

It then uses machine learning to model predictions for what the next page is most likely to be in a user’s journey from any given page.

It then prefetches the pages that a user is likely to visit in the next step of their journey through your site. This means that the next page will already have been loaded by the time the user goes to click on it, providing a fast, seamless navigational experience.
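The prediction idea can be sketched with a toy model: count historical page-to-page transitions and pick the most frequent next page. The page paths and numbers are invented, and Guess.js’s real machine-learning model is far more sophisticated than this.

```javascript
// Toy version of the idea behind Guess.js: estimate the most likely
// next page from (invented) historical navigation counts.
const transitions = {
  "/home": { "/pricing": 120, "/blog": 45, "/about": 10 },
  "/pricing": { "/signup": 90, "/home": 20 },
};

function likelyNextPage(currentPage) {
  const counts = transitions[currentPage] || {};
  let best = null;
  for (const [page, count] of Object.entries(counts)) {
    if (!best || count > best.count) best = { page, count };
  }
  return best && best.page; // null when we have no data for this page
}

console.log(likelyNextPage("/home")); // "/pricing"
```

The predicted page is what you would then prefetch, so it is already loaded by the time the user clicks.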

How Guess.js works


The optimization methods mentioned in this article will require developer work.

If you liked the look of any of them while reading through, then make sure you sit down with your development agency or engineering team to talk through what will be possible for your website from an implementation perspective.

In Conclusion

We need to stop assuming that everyone is accessing our websites in optimal conditions.

Each user will have their own unique browsing environment. This is why we need to work harder to tailor our performance optimization efforts around our users and the different variables that make up their browsing experience, such as their device and internet connection.

Doing this isn’t easy, however. It certainly isn’t something that an SEO or marketer should try to tackle by themselves.

We need to spend more time talking to developers and learning from them about the latest technologies and methods available for user-centric performance optimization.

More Resources:

Image Credits

Featured Image: Unsplash
All screenshots taken by author, May 2019

Competitor Keyword Analysis: 5 Ways to Fill the Gaps in Your Organic Strategy & Get More Traffic via @Kammie_Jenkins

In the eternal pursuit of market share, businesses need to keep tabs on their competitors. Without insight into their relative strengths and weaknesses, businesses will struggle to stay competitive.

In the digital world, it’s no different.

If you manage the organic search presence of a website, at one point or another, your client or boss is going to ask you for competitive SEO insights. There are many reasons they might want this:

  • Staying competitive: They want to avoid falling behind.
  • Finding new opportunities: They want to know what’s working for others in their niche that they may not have thought of.
  • Diagnosing performance issues: They want to understand why their competitor is outranking them or getting more organic traffic.

But what exactly does this type of analysis entail?

What Should a Competitor Analysis Include?

What a competitor analysis (or “competitive analysis”) includes will depend on your unique goals.

For example, if your focus was growing your backlink profile, you could perform a competitor link analysis.

With this option, you might choose to look at a competitor’s top-linked pages to get ideas for linkable assets or analyze their referring URLs to get a better idea of what link building methods they’re using.

Search Engine Journal has some great posts detailing a few of the many different types and methods of competitor analysis, including:

In this article, we’ll be exploring five different methods for finding keyword ideas you can use in your organic search strategy.

By analyzing what keywords our competitors are targeting, we can come up with ideas for new content or improving existing content with the goal of capturing more organic search traffic.

Before You Start: How to Identify Your Competitors

Before diving into a competitor analysis, it’s important to know who your competitors are.

The most obvious type of competitor is the “direct” or “market” competitor. This is a business that offers the same or a similar product/service as you do. Think Coke vs. Pepsi or Nike vs. Adidas.

When you turn to Google, however, the game changes. You likely have different competitors for every keyword that’s important to your business, many of whom aren’t even your direct competitors.

Let’s take the query “how to write a cover letter” for example. Both Zety and Glassdoor are ranking prominently for the query.

SERP competitors

This makes them digital competitors even though Zety is a resume builder and Glassdoor is a job search engine.

Sometimes your market competitors will also be your digital competitors, but not always. When a tip below calls for a competitor’s domain or page, I’ll note which type of competitor to choose.

Now let’s dive in!

1. Find Keywords Competitors Rank for That You Don’t

One of the most-used methods of SEO competitor analysis is finding keywords that your competitors rank well for that you don’t.

To do this, you’ll want a keyword research tool that lets you not only see what keywords any domain ranks for, but also compare the ranking keywords of two or more domains.

As an example, let’s use market competitors ConvertKit and MailChimp. We’ll pretend ConvertKit is trying to find keywords MailChimp ranks for that they don’t.

You can use a tool like Moz’s Keyword Explorer “Explore by Site” to evaluate the ranking keywords of both domains side-by-side. (Disclosure: I work with Moz! But I also genuinely love their tools and used them long before I started working with them.)

Moz Competitor Keyword Comparison

Then, select MailChimp and set the rank position to show only positions 1-10 to view keywords that MailChimp ranks on page 1 for that ConvertKit does not.

Moz Competitor Overlap

You can also select the areas of overlap in the middle to view keywords both you and a competitor rank for, and then sort by your competitor’s highest rankings. This will reveal areas where you already have content that just isn’t performing as well as your competitor’s.

You’ll be left with a list of keywords that shows you:

  • Existing content you need to improve: Keywords you’re already targeting and just not performing very well for.
  • Content gaps you might want to fill: Keywords you’re not yet targeting and might want to consider creating content for.
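If you prefer to work from raw exports rather than inside a tool’s UI, the same gap analysis is a simple set difference. Here’s a hedged sketch, assuming each export is a list of `{ keyword, position }` rows (the field and function names are my invention):

```javascript
// Keywords a competitor ranks on page 1 for (positions 1-10) that you
// don't rank for at all -- i.e., potential content gaps to fill.
function keywordGap(competitorRankings, yourRankings) {
  const yours = new Set(yourRankings.map((r) => r.keyword));
  return competitorRankings
    .filter((r) => r.position <= 10 && !yours.has(r.keyword))
    .map((r) => r.keyword);
}
```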

2. Find High-Ranking Competitor Content That Doesn’t Match Query Intent Well

Google’s priority is serving up satisfying answers to searchers’ questions. To do this, Google’s algorithm attempts to rank content that best matches the searcher intent behind the query.

I like to think of this as a “desire-content” fit. In other words, there’s a fit between what the searcher likely wanted and the content on the page. It’s exactly what the searcher wanted to find when they typed their question into Google.

For example, if you typed in “how to change a tire” you’d likely be looking for step-by-step instructions, maybe even with pictures, to guide you through that process.

A sales page for a tire store or a long-form article about tires would be much less relevant, and therefore we could say that they don’t match the intent of the query well.

search intent

How is that helpful in the context of competitor content analysis?

Well, sometimes websites rank for queries they don’t really “deserve” to rank for. They may currently be ranking with a page that’s not a great fit for the query.

Because Google prioritizes the searcher experience, if you come along with more relevant, helpful content, there’s a likelihood that you could overtake your competitor for that query.

Again, using a keyword research tool, pop in a competitor’s domain to see what keywords they’re ranking on Page 1 for.

The next part is a bit more manual, but click to open the URLs ranking for each query and ask yourself, “If I had queried this phrase and landed on this page, would I be fully satisfied?”

If your honest answer is “no,” create a page that is.

3. Find Keywords Competitors Are Paying for That You Can Rank For

Using a tool like SEMrush, you can see what keywords your market competitors are paying for that you don’t yet rank organically for.

semrush keyword gap

Why are we looking at paid keywords in a post on organic competitor analysis?

If your competitor is paying for these terms, that often means they’re valuable and generate highly qualified traffic.

Instead of paying to show up for these keywords, the idea is to see if there’s any opportunity to get free traffic by ranking organically for that term.

In many cases, the keywords you find via this route will be predominantly bottom-funnel, commercial-intent keywords. This may mean that the SERP is crowded with paid ad results and extremely powerful domains.

While there could still be some organic opportunity available, you can also look for research-intent queries containing your competitor’s paid keywords.

For example, if your competitor is running PPC for “automated email tool” you could target “how to create an automated email.”

4. Identify Valuable Topics by Looking for High-Investment Content Assets

Do you know how long it takes to produce a white paper?

How about an ebook or a webinar?

While effort varies, we can say with certainty that the barrier to entry for these content formats is typically much higher than for a blog post.

Companies tend not to invest a ton of time and resources on projects that they don’t expect will generate valuable traffic and leads.

If a market competitor has chosen a topic for a big content project, consider that a pre-vetted idea that your desired customers will likely be interested in.

For example, if your competitor is hosting a webinar on “Understanding Your Cash Flow” it might be a good idea to type the phrase “cash flow” into your keyword research tool of choice and see how you could start owning that topic organically.

You can even use Google itself combined with plugins like Keywords Everywhere to find a whole bunch of topically-relevant keywords to target.

cash flow serp

5. Find Content Improvement Ideas by Viewing Competitor Keyword Overlap

Content not ranking as well as you’d like for your target keyword? Use competitor analysis to see what topics your content might be missing!

One of my favorite ways to do this is by using Moz Keyword Explorer to compare keywords by exact page.

Say you’re trying to rank for the keyword phrase “how to run faster.” Perform a Google search for that term and grab two or three of the top-ranked URLs (your digital competitors) and pop them into the tool.

Moz ranking overlap

Then, click on the areas of most overlap and set the filter to view keywords ranking on page 1.

moz competitor overlap report

What you’ll be left with is a list of related keyword ideas that you can use to make your page targeting “how to run faster” more relevant! If Google’s top picks for the best answer to “how to run faster” also touch on these concepts, that’s an indicator that your content should too.

Go Forth & Analyze! (But Proceed With Caution)

Any of the competitor analysis methods mentioned in this article can offer great ideas for capturing more organic traffic, but there are some huge caveats here:

  • If your only strategy is imitation, the best you’ll ever become is the second best version of your competitors.
  • Plagiarism can get you into trouble: with search engines, legally, and it can ruin your reputation. Just don’t do it.
  • Don’t be a lemming. Blindly following your competitors could send you straight over a cliff.

Before conducting a competitor analysis, make sure you’re familiar with the business’s current goals and priorities.

Once you do that, it’ll be much easier to identify which competitor strategies you should use to inform your own and which are best left alone.

Now the next time your client or boss asks you for a competitor analysis, I hope you have a few new tricks up your sleeve.

More Resources:

Image Credits

Featured Image: Pexels
All screenshots taken by author, May 2019

How Your Company Can Prevent ADA Website Accessibility Lawsuits via @kim_cre8pc

Every day, websites and mobile apps prevent people from using them. Ignoring accessibility is no longer a viable option.

How do you prevent your company from being a target for a website accessibility ADA lawsuit?

Guidelines for websites wanting to be accessible to people with disabilities have existed for nearly two decades thanks to the W3C Web Accessibility Initiative.

A close cousin to usability and user experience design, accessibility improves the overall ease of use for webpages and mobile applications by removing barriers and enabling more people to successfully complete tasks.

We know now that disabilities are only one area that accessibility addresses.

Most companies do not understand how people use their website or mobile app, or how they use their mobile or assistive tech devices to complete tasks.

Even riskier is not knowing about updates in accessibility guidelines and new accessibility laws around the world.

Investing in Website Accessibility Is a Wise Marketing Decision

Internet marketers found themselves taking accessibility seriously when their data indicated poor conversions. They discovered that basic accessibility practices implemented directly into content enhanced organic SEO.

Many marketing agencies include website usability and accessibility reviews as part of their online marketing strategy for clients because a working website performs better and generates more revenue.

Adding an accessibility review to marketing service offerings is a step towards avoiding an ADA lawsuit, which of course, is a financial setback that can destroy web traffic and brand loyalty.

Convincing website owners and companies of the business case for accessibility is difficult. One reason is the cost: will they see a return on their investment?

I would rather design an accessible website than pay for defense lawyers and lose revenue during remediation work.

Another concern is the lack of skilled developers trained in accessibility. Do they hire someone or train their staff?

Regardless of whether an accessibility specialist is hired or in-house developers are trained in accessibility, the education never ends.

Specialists are always looking for solutions and researching options that meet the guidelines.

Many companies lack an understanding of what accessibility is and why it is important. They may not know how or where to find help.

Accessibility advocates are everywhere writing articles, presenting webinars, participating in podcasts, and writing newsletters packed with tips and advice.

ADA lawsuits make the news nearly every day in the U.S. because there are no enforceable regulations for website accessibility. This is not the case for government websites.

Federal websites must adhere to Section 508 by law. State and local government websites in the U.S. must check with their own state to see which standards apply.

Most will simply follow Section 508 or WCAG2.1 AAA guidelines.

If your website targets customers from around the world, you may need to know the accessibility laws in other countries. The UK and Canada, for example, are starting to enforce accessibility.

In the U.S., there has been no change in the status of ADA website accessibility laws this year.

Some judges have ruled that the lack of regulation or legal standards for website accessibility does not mean that accessibility should be ignored.

Is Your Website At Risk of an ADA Lawsuit?

Some businesses feel as though they are sitting ducks, and rightly so, since in some states, there are individuals and law firms searching for websites that fail accessibility.

Since the Federal government has put a hold on addressing accessibility standards for websites, several states are taking matters into their own hands.

In California’s Riverside County, the DA’s office is pushing back against a law firm and individuals accused of filing more than 100 ADA lawsuits against website owners and small businesses. According to a report by the Orange County Breeze:

“Abusive ADA lawsuit practices are not new, but the defendants in this case are responsible for a significant volume of the ADA lawsuits that have been filed in Southern California over the last several years. Rutherford has been a party-plaintiff in more than 200 separate ADA lawsuits the defendants have filed against businesses in San Diego, Orange, Los Angeles and San Bernardino counties.”

In New York, which saw 1,564 ADA cases in 2018, two plaintiffs filed over 100 ADA lawsuits against art galleries this year. Artnet News reports:

“‘Technology has changed, that’s why we’re dealing with this,’ says Frank Imburgio, founder and president of the website development firm Desktop Solutions. ‘The state of speech recognition and speech synthesis that’s in everyone’s Alexa? That same piece of software embedded in your browser means blind people can avail themselves of your website, but the websites were not designed with that in mind’ five or ten years ago.”

New York State Senator Diane Savino, D-Staten Island, chairs the Senate internet and technology committee that considers legislation affecting issues related to technological advancements, like artificial intelligence and digital currency.

It was recently announced that the committee is planning legislative action to curb the surge in the number of lawsuits.

Florida is a hotbed of ADA lawsuits. Flagler County paid over $15,000 to settle an ADA lawsuit brought by a blind person who was unable to use their PDFs.

They removed 7,500 informational PDF documents from their website because they were not optimized for screen readers. In doing so, sighted users no longer had access to this information either.

In Robles v. Domino’s, a website and mobile app accessibility lawsuit, Domino’s is taking the case to the Supreme Court to fight back.

Every type of website has been the target of an ADA lawsuit. It doesn’t matter if it is owned by one person, a small business or a major corporation.

Title III covers public-facing websites and mobile apps for travel, hotels, finance, ecommerce, services, healthcare, real estate, and education.

Educational websites and software applications are a growing ADA lawsuit target not only for accessibility to the public, but also for employees and students who use school software.

One recent study found that 95% of U.S. K-12 school websites had errors that made the page difficult for a person with a disability to use. At the state level, schools and universities are facing an avalanche of ADA lawsuits.

What Can Companies Do to Prevent an ADA Lawsuit?

The only way to prevent an ADA lawsuit is to plan, design, and build for web accessibility.

Inclusive design should be a priority and considered the foundation of any website business plan.

Every business with a website, mobile app or internet software application should hire an accessibility specialist who is trained in the application of WCAG guidelines and has knowledge of accessibility laws and guidelines from all countries.

There are only a few companies that specialize in accessibility services, tools or training. They are competitive and busy. You can find alternatives with accessibility consultants focused on just testing or remediation.

If you live outside the USA, you will find accessibility experts and companies who have been doing this work for years and sharing information through podcasts, building new automated testing tools, and stepping forward as accessibility advocates through writings and webinars.

Companies are facing a shortage of accessibility-trained designers and developers. This is a real burden because putting designers on projects who do not know how to build for accessibility is almost as risky as not having anyone at all.

For example, ARIA is commonly applied to HTML5 incorrectly, or image alt attributes are not written properly, especially for infographics or images layered over background images.

The source of most ADA lawsuits is the inability to access webpages or mobile apps with the assistive devices used by sight-impaired users and people who cannot use a mouse pointer.

The fun of web design for designers is the visual presentation.

Elementor, a wildly popular WordPress theme-building and page-design plug-in, makes it easy to incorporate parallax, dynamic content, and animation. It can also be enhanced to increase a website’s accessibility, and it allows the creation of new themes, headers, and footers with more developer control.

What If I Can’t Afford to Hire An Accessibility Specialist?

This question applies to all businesses, but for small and medium-sized businesses, adding an accessibility specialist is out of the question because of budget constraints.

Most small businesses are a team of one person, or the owner has a website person wearing all the hats from SEO to site maintenance, but not accessibility. That is a separate skill.

Find a website company that offers accessibility services. They may provide accessibility testing, accessibility site audits or affordable package deals for their clients such as monthly remediation for PDFs, documents, images, forms, and content spread out over time.

Accessibility reports performed after a company has received a letter of complaint are extremely expensive and, unless performed by skilled accessibility specialists, will not hold up in court.

Should I Just Put Up an Accessibility Statement?

The original purpose of an accessibility statement was to show that a site was tested, what standards it meets, what was not tested, and how to contact the company if there are any accessibility issues.

Some accessibility professionals don’t advise using them at all because the pace of technology creates ongoing adjustments to accessibility guidelines.

Unless your company has gone through formal accessibility testing and remediation, I don’t recommend providing an accessibility statement.

Some companies want to put up one that says they are in the process of testing, but users have no proof and there is no accountability here. It won’t hold up in court either.

As a courtesy, every website or application should make it obvious and easy for users to get in touch by email or phone, not a form (because forms are most often not accessible), and invite them to describe the issue they found.

What Can I Do Now to Improve Accessibility?

Optimally, every website or app should not prevent anyone from using it regardless of any physical, mental or emotional impairment, permanent or temporary.

Understanding how to plan, build, and test for accessibility requires advanced knowledge of accessibility to meet Section 508, WCAG 2.1 A and AA guidelines, and the regulations required by states and countries.

Finding that miracle person who can do all that is unlikely, expensive, and too overwhelming to think about.

Yet so are an ADA lawsuit, floundering conversion rates, search engine rank roller coaster rides, and a negative brand reputation.

You have to start somewhere, so here are some steps to jump in:

  • Do accessibility testing using a free automated accessibility testing tool like WAVE, Axe, or Tenon. You may not understand how to make the repairs, but you will see errors, warnings, and alerts you didn’t know existed.
  • Hire an accessibility specialist to perform formal accessibility testing that goes beyond the limitations of automated tools. Some tools are better than others. Some are not kept up to date on standards. No accessibility expert relies on automated tools. They incorporate manual testing, too.
  • Ask for a quote for a limited accessibility review or site audit. This is where a sampling of pages are tested rather than every single one.
  • Look for agencies that include accessibility design or testing services. They are worth gold for your bottom line.
  • Train your web designers and developers. Invest in them. Your online business may depend on their skills. They not only need to know how to code for accessibility but also how to develop the entire methodology for planning, development, testing, and long term maintenance.
  • Large corporations should hire accessibility companies that specialize in user testing with disabled users. This is the same as user testing, but with the addition of new personas and real users with various impairments.
  • Use your keyboard to navigate your webpage or mobile app. No mouse. If you can’t figure out where you are, where to go or got lost, this is a major issue for accessibility.
  • Turn on any screen reader app, download a trial of JAWS, or use your mobile phone accessibility settings, and go to your website or app. You will quickly learn what the experience is like for blind and sight impaired people or multi-taskers who are adapting to the use of audible alternatives for reading.
  • If you use any third-party software applications or WordPress plugins, require that it meet accessibility compliance by contract.
  • WordPress site owners and designers need to know the basics that can be adjusted from the front-end to improve the accessibility of the site. This includes:
    • Font sizes (use em), font faces (use sans-serif)
    • Proper heading tags in the right schematic order (H1, H2, H3, not H2, H1, H4, H2)
    • Test that colors contrast properly (use any free tool).
    • Avoid using color as the only visual indicator that the state of something changed or is an alert.
    • Make all PDFs accessible (Adobe has a tool).
    • Underline text links. If you don’t want every link underlined, create rules in CSS to choose.
    • Absolutely no centered text unless it is a heading or sub-heading.
    • Describe each image using the alt attribute option. At a minimum, describe what the image is. There are lots of rules for alt text. Start with that one.
    • Add the WordPress Accessibility plug-in (see resources below) and use it to add focus states, skip to content and other courtesies. And send Joe a donation for using it. He keeps it updated.
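The color-contrast check in the list above is one of the few that is easy to automate yourself. Here is a minimal sketch of the WCAG 2.x contrast-ratio formula (the function names are mine; level AA requires at least 4.5:1 for normal-size text):

```javascript
// Convert an 8-bit sRGB channel to its linear value (per the WCAG 2.x formula).
function srgbToLinear(channel) {
  const s = channel / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function relativeLuminance([r, g, b]) {
  return 0.2126 * srgbToLinear(r) + 0.7152 * srgbToLinear(g) + 0.0722 * srgbToLinear(b);
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05),
// ranging from 1:1 (identical colors) to 21:1 (black on white).
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Free tools like WebAIM’s contrast checker implement the same formula, so you only need this if you want the check inside your own build or test pipeline.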

And finally, if you don’t need a CMS, you have more control over the code if you return to a plain HTML website.

You will need someone with HTML5, ARIA, CSS, and JavaScript knowledge to build it for you, but the appeal is having complete control over performance, speed, SEO, and accessibility.

JavaScript can be accessible. Pretty much any image, table, script, and dynamic content can be, but it requires education.

Fortunately, most of the information is available at free or affordable fees. There are accessibility communities, podcasts, webinars, and the WCAG guidelines themselves.

Microsoft, IBM, Google, and Adobe provide detailed how-to advice.

As you apply inclusive design practices, you will see the benefits for SEO, usability and conversions, brand, reputation, referrals, and customer satisfaction.

Accessibility at its most basic level is a human right. Investing in people is worth it.

Accessibility Resources

In addition to the resources listed in Top 36 Web Accessibility Resources for Digital Marketing Companies, check out:

More Resources:

The Forgotten History of Link Ranking Algorithms via @martinibuster

It’s important to understand how the search engines have analyzed links in the past and compare that to how search engines analyze links in the present. Yet the history is not well known.

As a consequence there are misunderstandings and myths about how Google handles links. Some concepts that some SEOs believe are true have been shown to be outdated.

Reading about what actual algorithms did and when they were superseded by better algorithms will make you a better search marketer. It gives you a better idea of what is possible and what is not.

Link Analysis Algorithms

Circa 2004, Google began to employ link analysis algorithms to try to spot unnatural link patterns. This was announced at a PubCon Marketing Conference Meet the Engineers event in 2005. Link analysis consisted of creating statistical graphs of linking patterns: number of inbound links per page, ratio of home page to inner page links, outbound links per page, etc.

When that information was plotted on a graph, the great majority of sites tended to form a cluster. The interesting part was that link spammers tended to cluster at the outside edges of the big cluster.
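As a toy illustration of what statistical link analysis can mean in practice, here is a sketch that flags sites whose inbound-link counts sit far outside the main cluster. The single feature (raw inbound-link count) and the z-score threshold are both invented for the example; a real system would combine many such link-pattern features:

```javascript
// Flag sites whose inbound-link count is more than `threshold` standard
// deviations from the mean -- the "edge of the cluster" idea described above.
function flagOutliers(sites, threshold = 2) {
  const values = sites.map((s) => s.inboundLinks);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance = values.reduce((a, v) => a + (v - mean) ** 2, 0) / values.length;
  const sd = Math.sqrt(variance);
  return sites.filter((s) => sd > 0 && Math.abs(s.inboundLinks - mean) / sd > threshold);
}
```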

By 2010 the link building community had generally become better at avoiding many of the link spam signals. Thus in 2010, Microsoft researchers published this statement about statistical link analysis, admitting that it was no longer working:

“…spam websites have appeared to be more and more similar to normal or even good websites in their link structures, by reforming their spam techniques. As a result, it is very challenging to automatically detect link spams from the Web graph.”

The above paper is called, Let Web Spammers Expose Themselves. This is a data mining/machine learning exercise that crawled URLs in seven SEO forums, discarding navigational URLs and URLs from non-active members, and focusing on the URLs of members who were active.

What they found is that they could uncover link spam networks that would not have been detected through conventional statistical link analysis methods.

This paper is important because it provides evidence that statistical link analysis may have reached its limit by 2010.

The other reason this document is of interest is that it shows that the search engines were developing link spam detection methods above and beyond statistical link analysis.

This means that if we wish to understand the state of the art of link algorithms, we must look at methods that go beyond statistical analysis and give them a proper analysis.

Today’s Algorithm May Go Beyond Statistical Analysis

I believe that the Penguin algorithm is more than statistical analysis. In a previous article I took a deep dive into a new way to analyze links: link distance ranking algorithms, a method that measures distances from a seed set of trusted sites. Those are a type of algorithm that goes beyond statistical link analysis.
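To make the idea concrete, a distance-from-seed computation is essentially a breadth-first search over the link graph. This sketch is my own simplification of how such an algorithm might work, not Google’s actual implementation:

```javascript
// Shortest link distance from any trusted seed page to every reachable page.
// graph: { url: [outlinkUrl, ...] }; seeds: array of trusted URLs.
function linkDistances(graph, seeds) {
  const dist = {};
  const queue = [];
  for (const seed of seeds) {
    dist[seed] = 0;
    queue.push(seed);
  }
  while (queue.length) {
    const page = queue.shift();
    for (const out of graph[page] || []) {
      if (!(out in dist)) {
        dist[out] = dist[page] + 1;
        queue.push(out);
      }
    }
  }
  return dist; // pages missing from `dist` were never reached by trusted links
}
```

Pages far from, or unreachable from, every seed would be scored accordingly, which is what makes link networks that only link to each other stand out.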

The above referenced Microsoft research paper concluded that 14.4% of the link spam discovered belonged to high quality sites, sites judged to be high quality by human quality raters.

That statistic, although it’s somewhat old, is nevertheless important because it indicates that a significant amount of high quality sites may be ranking due to manipulative link methods or, more likely, that those manipulative links are being ignored. Google’s John Mueller has expressed confidence that the vast majority of spam links are being ignored.

Google Ignores Links

Many of us already intuited that Google was ignoring spam links, and post-Penguin, Google has revealed that real-time Penguin is catching spam links at an unprecedented scale. It’s so good that Googlers like Gary Illyes have said that of the hundreds of negative SEO cases he has examined, not a single one was being affected by the spam links.

Real Time Penguin

Several years ago I published the first article to connect the newest link ranking algorithms with what we know about Penguin. If you are a geek about algorithms, this article is for you: What is Google’s Penguin Algorithm, Really? [RESEARCH] 

Penguin is Still Improving

Gary Illyes announced that the real-time Penguin algorithm will be improving. It already does a good job catching spam and at the time of this writing, it’s possible that the new and improved Penguin may already be active.

Gary didn’t say what kinds of improvements but it’s probably not unrealistic to assume that speed of identifying spam links and incorporating that data into the algorithm is a possible area.

Read: Google’s Gary Illyes on Real-Time Penguin, Negative SEO & Disavows

Anchor Text Algorithm Change

A recent development in how Google might handle links is with anchor text. Bill Slawski noted that a patent was updated to include a new way to use the text around the anchor text link to give meaning to the link.

Read: Add to Your Style Guide Annotation Text: A New Anchor Text Approach

I followed up with an article that explored the impact of this algorithm on link building.

Read: Google Patent Update Suggests Change to Anchor Text Signal

Implied Links

There are research papers that mention implied links. A clear explanation appears in a research paper by Ryan Rossi titled, Discovering Latent Graphs with Positive and Negative Links to Eliminate Spam in Adversarial Information Retrieval.

What the researcher found was that detection of spam networks could be improved by creating what he called latent links. Basically, he used the linking patterns between sites to imply a link relationship between sites that had links in common. Adding these virtual links to the link graph (the map of the internet) caused the spam links to become more prominent, making it easier to isolate them from normal, non-spam sites.
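Here is a rough sketch of that latent-link idea (my own simplification of the paper, with invented names and thresholds): if two sites receive links from enough of the same source pages, add an implied edge between them.

```javascript
// inlinks: { site: [sourcePage, ...] }. Returns implied edges between sites
// that share at least `minShared` linking source pages.
function impliedLinks(inlinks, minShared = 2) {
  const sites = Object.keys(inlinks);
  const edges = [];
  for (let i = 0; i < sites.length; i++) {
    const sources = new Set(inlinks[sites[i]]);
    for (let j = i + 1; j < sites.length; j++) {
      const shared = inlinks[sites[j]].filter((p) => sources.has(p)).length;
      if (shared >= minShared) edges.push([sites[i], sites[j], shared]);
    }
  }
  return edges;
}
```

Sites in a spam network tend to share many linking sources, so these implied edges make the network denser and easier to isolate.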

While that algorithm is not from a Googler, the patent described by my article, Google’s Site Quality Algorithm Patent, is by Google, and it contains a reference to implied links.


Google’s original algorithm, the one that started it all, is nicknamed Backrub. The research paper is called, The Anatomy of a Large-Scale Hypertextual Web Search Engine. It’s an interesting research paper, even though it dates back to 1998.

Everyone in search marketing should read it at least once. Any discussion of link algorithms should probably include this if only because there is guaranteed to be that one person complaining that it wasn’t included.

So to that one quibbler, this link is for you.


This is not intended to be a comprehensive review of link-related algorithms. It’s a selected snapshot of where we are at the moment. Perhaps the most important change in links is the link distance ranking algorithms, which I believe may be associated with the Penguin algorithm.

Images by Shutterstock, Modified by Author

10 Basic SEO Tips to Index + Rank New Content Faster – Whiteboard Friday

Posted by Cyrus-Shepard

In SEO, speed is a competitive advantage.

When you publish new content, you want users to find it ranking in search results as fast as possible. Fortunately, there are a number of tips and tricks in the SEO toolbox to help you accomplish this goal. Sit back, turn up your volume, and let Cyrus Shepard show you exactly how in this week’s Whiteboard Friday.

[Note: #4 isn’t covered in the video, but we’ve included it in the post below. Enjoy!]

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans. Welcome to another edition of Whiteboard Friday. I’m Cyrus Shepard, back in front of the whiteboard. So excited to be here today. We’re talking about ten tips to index and rank new content faster.

You publish some new content on your blog, on your website, and you sit around and you wait. You wait for it to be in Google’s index. You wait for it to rank. It’s a frustrating process that can take weeks or months to see those rankings increase. There are a few simple things we can do to help nudge Google along, to help them index it and rank it faster. Some very basic things and some more advanced things too. We’re going to dive right in.


1. URL Inspection / Fetch & Render

So basically, getting content indexed in Google is not that hard. Google provides us with a number of tools. The simplest and fastest is probably the URL Inspection tool. It’s in the new Search Console, and it replaces Fetch and Render. As of this filming, both tools still exist, but Fetch and Render is being deprecated. The new URL Inspection tool allows you to submit a URL and tell Google to crawl it. When you do that, they put it in their priority crawl queue. That simply means Google has a list of URLs to crawl, yours goes into the priority queue, and it’s going to get crawled faster and indexed faster.

2. Sitemaps!

Another common technique is simply using sitemaps. If you’re not using sitemaps, they’re one of the easiest, quickest ways to get your URLs indexed. Once your URLs are in your sitemap, you want to let Google know that they’re actually there. There are a number of different techniques that can optimize this process a little bit more.
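For reference, a minimal sitemap file following the sitemaps.org protocol looks something like this (example.com and the dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/new-post/</loc>
    <lastmod>2019-05-01</lastmod>
  </url>
</urlset>
```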

The first and the most basic one that everybody talks about is simply putting it in your robots.txt file. In your robots.txt, you have a list of directives, and at the end of your robots.txt, you simply say sitemap and you tell Google where your sitemaps are. You can do that for sitemap index files. You can list multiple sitemaps. It’s really easy.
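In practice, the end of a robots.txt file would look something like this (example.com and the sitemap filenames are placeholders):

```text
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
Sitemap: https://www.example.com/sitemap-index.xml
```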

Sitemap in robots.txt

You can also do it using the Search Console Sitemap Report, another report in the new Search Console. You can go in there and you can submit sitemaps. You can remove sitemaps, validate. You can also do this via the Search Console API.

But a really cool way of informing Google of your sitemaps, that a lot of people don’t use, is simply pinging Google. You can do this right in your browser: you type in Google’s ping URL and append your sitemap’s URL to it. You can try this out right now with your current sitemaps. Type it into the browser bar, and Google will instantly queue that sitemap for crawling, and all the URLs in there should get indexed quickly if they meet Google’s quality standard.
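At the time, Google documented a ping endpoint at google.com/ping for exactly this. A minimal sketch of building that ping URL in Python (example.com is a placeholder):

```python
from urllib.parse import urlencode

def sitemap_ping_url(sitemap_url):
    """Build the URL that pings Google with a sitemap location.
    The endpoint is Google's documented google.com/ping endpoint;
    urlencode percent-encodes the sitemap URL for the query string."""
    return "https://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

ping = sitemap_ping_url("https://www.example.com/sitemap.xml")
print(ping)
# → https://www.google.com/ping?sitemap=https%3A%2F%2Fwww.example.com%2Fsitemap.xml
```

Opening that URL in a browser, or fetching it with any HTTP client, is all the “ping” amounts to.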


3. Google Indexing API

(BONUS: This wasn’t in the video, but we wanted to include it because it’s pretty awesome)

Within the past few months, both Google and Bing have introduced new APIs to help speed up and automate the crawling and indexing of URLs.

Both of these solutions allow for the potential of massively speeding up indexing by submitting 100s or 1000s of URLs via an API.

While the Bing API is intended for any new/updated URL, Google states that their API is specifically for “either job posting or livestream structured data.” That said, many SEOs like David Sottimano have experimented with Google’s API and found it to work with a variety of content types.

If you want to use these indexing APIs yourself, you have a number of potential options:

Yoast announced they will soon support live indexing across both Google and Bing within their SEO WordPress plugin
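As a rough sketch of what a call to Google’s Indexing API involves (the endpoint and payload shape come from Google’s documentation; the OAuth access-token step is elided, and example.com is a placeholder):

```python
import json

# Endpoint from Google's Indexing API documentation.
ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

def build_notification(url, update_type="URL_UPDATED"):
    """Build the JSON body for a publish request.
    `update_type` is URL_UPDATED for new/changed pages,
    or URL_DELETED for removed ones."""
    return json.dumps({"url": url, "type": update_type})

body = build_notification("https://www.example.com/new-job-posting")
# POST this body to ENDPOINT with an OAuth 2.0 bearer token
# authorized for the https://www.googleapis.com/auth/indexing scope.
print(body)
```

Batching hundreds of such requests is what makes the API dramatically faster than waiting for a regular crawl.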

Indexing & ranking

That covers indexing. Now there are some other ways you can get your content indexed faster and help it rank a little higher at the same time.

4. Links from important pages

When you publish new content, the most basic step, if you do nothing else, is to make sure you are linking to it from important pages. Important pages may be your homepage, your blog, your resources page. This is a basic step that you want to take. You don’t want to orphan those new pages on your site with no incoming links.

Adding the links tells Google two things. It says we need to crawl this link sometime in the future, and it gets put in the regular crawling queue. But it also makes the link more important. Google can say, “Well, we have important pages linking to this. We have some quality signals to help us determine how to rank it.” So linking from important pages.

5. Update old content 

But a step that people oftentimes forget is to not only link from your important pages, but to also go back to your older content and find relevant places to put those links. A lot of people add a link from their homepage or link out to older articles, but they forget the step of going back to the older articles on the site and adding links to the new content.

Now which pages should you add links from? One of my favorite techniques is to use the site: search operator: you type in the keywords that your content is about, followed by site:yourdomain.com. This allows you to find relevant pages on your site that are about your target keywords, and those make really good pages to add links from in your older content.

6. Share socially

Really obvious step: sharing socially. When you have new content, share it socially; there’s a high correlation between social shares and content ranking. Sharing on content aggregators, like Reddit and Hacker News, is especially useful, because those create actual links for Google to crawl. Google can see those signals and that social activity, and those links do the same thing as links from your own content, except a little better, because they’re external links, external signals.

7. Generate traffic to the URL

This is kind of an advanced technique, which is a little controversial in terms of its effectiveness, but we see it anecdotally working time and time again. That’s simply generating traffic to the new content. 

Now there is some debate whether traffic is a ranking signal. There are some old Google patents that talk about measuring traffic, and Google can certainly measure traffic using Chrome. They can see where those sites are coming from. But as an example, Facebook ads, you launch some new content and you drive a massive amount of traffic to it via Facebook ads. You’re paying for that traffic, but in theory Google can see that traffic because they’re measuring things using the Chrome browser. 

When they see all that traffic going to a page, they can say, “Hey, maybe this is a page that we need to have in our index and maybe we need to rank it appropriately.”


Once we get our content indexed, let’s talk about a few ideas for ranking your content faster.

8. Generate search clicks

Along with generating traffic to the URL, you can actually generate search clicks.

Now what do I mean by that? So imagine you share a URL on Twitter. Instead of sharing directly to the URL, you share to a Google search result. People click the link, and you take them to a Google search result that has the keywords you’re trying to rank for, and people will search and they click on your result.

You see television commercials do this, like in a Super Bowl commercial they’ll say, “Go to Google and search for Toyota cars 2019.” What this does is Google can see that searcher behavior. Instead of going directly to the page, they’re seeing people click on Google and choosing your result.


This does a couple of things. It helps increase your click-through rate, which may or may not be a ranking signal. It also helps you rank for autosuggest queries: when Google sees people search for “best cars 2019 Toyota,” that query might start appearing in the suggest dropdown, which in turn helps you win traffic if you’re ranking for those terms. So generating search clicks instead of linking directly to your URL is one of those advanced techniques that some SEOs use.

9. Target query deserves freshness

When you’re creating the new content, you can help it to rank sooner if you pick terms that Google thinks deserve freshness. It’s best maybe if I just use a couple of examples here.

Consider a user searching for the term “cafes open Christmas 2019.” That’s a result that Google wants to deliver a very fresh result for. You want the freshest news about cafes and restaurants that are going to be open Christmas 2019. Google is going to preference pages that are created more recently. So when you target those queries, you can maybe rank a little faster.

Compare that to a query like “history of the Bible.” If you Google that right now, you’ll probably find a lot of very old pages, Wikipedia pages. Those results don’t update much, and that’s going to be harder for you to crack into those SERPs with newer content.

The way to tell is simply to type in the queries you’re trying to rank for and see how old the most recent results are. That will give you an indication of how much freshness Google thinks the query deserves. Choose queries that deserve a little more freshness and you might be able to get in a little sooner.

10. Leverage URL structure

Finally, last tip, this is something a lot of sites do and a lot of sites don’t do because they’re simply not aware of it. Leverage URL structure. When Google sees a new URL, a new page to index, they don’t have all the signals yet to rank it. They have a lot of algorithms that try to guess where they should rank it. They’ve indicated in the past that they leverage the URL structure to determine some of that.

Consider how The New York Times puts all its book reviews under the same URL structure. Google has a lot of established ranking signals for all of those URLs. When a new URL is published using the same structure, Google can assign it some temporary signals to rank it appropriately.

If you have URLs that are high authority, maybe it’s your blog, maybe it’s your resources on your site, and you’re leveraging an existing URL structure, new content published using the same structure might have a little bit of a ranking advantage, at least in the short run, until Google can figure these things out.

These are only a few of the ways to get your content indexed and ranking quicker. It is by no means a comprehensive list. There are a lot of other ways. We’d love to hear some of your ideas and tips. Please let us know in the comments below. If you like this video, please share it for me. Thanks, everybody.

Video transcription by

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Twitter Updates TweetDeck With Support for GIFs, Threads, and More via @MattGSouthern


Twitter is rolling out several updates to TweetDeck that include support for features it has been lacking.

TweetDeck is finally catching up to the desktop version of Twitter with support for GIFs, polls, emojis, threads, and image tagging.

These new features were added after TweetDeck polled users on which new features they’d most like to see added.

A day later, every feature in the poll was added to TweetDeck and then some.

As mentioned in the tweet, this is just a test for now so it may take some time before everyone gets access to the new features.

TweetDeck giveth and TweetDeck taketh away

TweetDeck has removed the ability to schedule tweets in the updated tweet composer.

You could revert back to the old composer to schedule tweets, but then you would not be able to utilize the new features that were added.

The notice says scheduling tweets is “not yet” available, which indicates it will be available at some point in the future. Most likely after the features have been fully tested.

Although, if the sporadic updates to TweetDeck are anything to go by, it could take some time for the issue with scheduled tweets to be resolved.

Why are Googlers So Confident About Link Spam? via @martinibuster

Google’s John Mueller recently discussed the disavow tool and how some inside Google feel it’s unnecessary. The reason it’s presumed to be unnecessary is that Google’s ability to ignore spam links makes the disavow tool no longer useful to Google or to web publishers. Does Google really catch and ignore all spam links?

Disavow Tool Causes Unnecessary Work

John referred to the disavow tool as a tool that causes people to do unnecessary work. He shared that many within Google feel that it’s an unnecessary tool because, presumably, Google already catches spam links.

“There’s a lot to be said for removing a feature that worries many folks, and suggests they need to do unnecessary work (assuming we can be sure that we handle it well automatically)…

That’s certainly one way to look at it, and it’s a view that some folks here share as well. If we can remove unnecessary complexity from these tools, I’m all for that — there’s enough other work involved with running a good website.”

Disavow Tool is for Relieving Anxiety?

John then acknowledged that the tool helped web publishers deal with the anxiety that Google was attributing spam links to their websites and allowing their rankings to suffer as a consequence.

Here is what Google’s John Mueller said:

“On the other hand, some sites see a lot of weird links, and I can understand that site owners don’t want to have to trust an algorithm to figure out that these links aren’t something they want to be associated with.

I kinda like the angle of “if you’re really worried, then just take care of it yourself” which is possible here.”

Screenshot of Google's Gary Illyes speaking at a marketing conference

Google’s Gary Illyes confirmed that Google is able to catch and ignore adult links, spam links and even negative SEO links.

Gary Illyes Confirms Negative SEO Doesn’t Work

Gary Illyes stated at PubCon Florida 2019 that out of hundreds of negative SEO reports he has examined, none of them were real. The ranking drops those sites experienced were due to other reasons.

Gary affirmed that they’re making real-time Penguin even better, but that publishers should not worry about spam links or perceived negative SEO attacks. He specifically mentioned adult links as nothing to be worried about.
Read more: Google’s Gary Illyes on Real-Time Penguin, Negative SEO & Disavows

Google is Confident About Spam Links

The interesting point in the above statements is that John Mueller expresses confidence that Google is ignoring spam link relationships. This becomes clearer when he shares that Googlers have expressed the opinion that the tool should be removed because it causes “unnecessary work” for publishers.

The implication of the phrase “unnecessary work” is that Googlers have a high confidence that Google is already ignoring spam links, making the tool “unnecessary work” for publishers.

Is the Disavow Tool Necessary?

That level of confidence must come from somewhere. It may be reasonable to assume that the Googlers who oversee the workflow associated with the disavow tool see firsthand that the tool is not useful for discovering spam. This could happen if Googlers are consistently seeing that the links in the disavow reports are already being ignored.

John Mueller has said in the past that the disavow tool is not necessary for the vast majority of sites. The tool itself was originally meant for publishers to be able to disavow spam links that they are responsible for and to help alleviate fears that negative SEO was subverting their rankings (Read: How Negative SEO Shaped Disavow Tool).

Should Google Share Disavow Tool Statistics?

If Google is going to remove the disavow tool, it may be useful for Google to share with the Web Publishing community what percentage of links uploaded via the Disavow are already ignored by Google.

Knowing how well Google catches spam may help relieve publishers of the anxiety that spam links are hurting them. This in turn could convince them to avoid the “unnecessary work” of disavowing links. And that would help publishers focus on creating quality web experiences that satisfy site visitors. That’s a win-win for Google and publishers.

Read Google’s John Mueller’s Reddit comment here.

Images by Shutterstock, Modified by Author
Screenshots by Author, Modified by Author