Google Forgets to Announce a Major Change – SEO Community Disappointed by @martinibuster

Several years ago Google quietly stopped using the Rel=prev/next indexing signal, yet continued to encourage publishers to use it. Years later, Google tweeted an announcement that the signal was no longer in use. The SEO and publishing community responded with disappointment and confusion.

What is the Rel=prev/next Indexing Signal?

Rel=prev/next was an indexing signal that Google advised publishers to use as a hint that a group of pages were part of a series of related pages. This allowed publishers to break up a document into several pages while still having the entire multi-page document considered as one document.

This was useful for long articles as well as for long forum discussions that can span multiple pages.
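
As an illustration, the hint took the form of link elements in the head of each page in the series. The sketch below shows what that markup looked like for a hypothetical second page of a three-page article (the URLs are placeholders):

```html
<!-- Hypothetical page 2 of a three-page article: example.com/article?page=2 -->
<!-- rel="prev" points to the previous page in the series, rel="next" to the following page. -->
<!-- The first page omits rel="prev" and the last page omits rel="next". -->
<link rel="prev" href="https://example.com/article?page=1">
<link rel="next" href="https://example.com/article?page=3">
```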

Was it a Major Change?

From the perspective of web publishers it certainly felt like a major change. The indexing signal gave publishers the ability to help Google figure out complex site navigation.

Did Google Hope Nobody Would Notice?

There was no official announcement. Google simply issued a tweet, years after the fact.

Google removed the webmaster support page entirely and replaced it with a 404 response. No explanation.

Then Google updated the original blog post from 2011 to note that the guidance in the announcement was cancelled.

Publishers are Disappointed with Google

Under the leadership of Matt Cutts, Google endeavored to liaise with publishers and keep them updated on ways they could improve their sites in line with Google’s best practices.

This is why it came as a shock that Google had stopped using an important indexing signal and didn’t bother to tell publishers.

Google Encouraged Publishers to Use a Signal that Didn’t Work

As recently as January 2019, Google’s John Mueller was encouraging publishers to use the indexing signal, even though Google no longer used it.

In a Google Webmaster Hangout from January 2019, a publisher asked Google’s John Mueller what he could do to force Google to show content from the first page of a paginated set of content instead of one of the inner pages.

John Mueller responded by affirming that Google tries to use Rel=prev/next. He didn’t say that Google had already stopped using it.

Mueller affirmed that Google was using the signal, even though Google had in fact stopped using it.

Here is John Mueller’s response:

“That’s something where we try to use rel next/previous to understand that this is a connected set of items.”

It may be that John Mueller did not know that it had been years since Google had used that indexing signal.

Many are Disappointed in Google

The SEO and publishing community responded in two ways. Some accepted the development quietly. But it seemed like most people were upset that Google had continued telling publishers to use something that Google had stopped responding to.

Dustin Woodard tweeted:

“Google stopped using rel=prev & next years ago, but forgot to tell the web. They claim users like single page. That’s like a librarian saying your book is somewhere in this unorganized library. Just look until you find it. “

Should You Take Down Existing Code?

Edward Lewis, a search marketer since 1995, noted that the prev and next link relationships are part of the HTML specification. So while Google may not be using them as a pagination hint, they are still valid HTML and there is no need to take down existing code.

“Link Relationships (next, prev) have been in the HTML Specification long before Google finally read the instructions on how to use them.

So “Google Says” and now everybody is whining about the time they invested to setup their pages properly which should have been done to begin with. I wonder how many will now remove their link relationships just because “Google Says.”

Rel=prev/next Serves a Purpose

Others in the community noted that Rel=prev/next was an important tool for helping Google make sense of complex site architecture.

Alan Bleiweiss observed that some sites are highly complex. He remarked that he did not trust Google to automatically be able to sort out the complexity.

He commented:

“This is insanity. …If I’ve got 50 paginated pages in a single sub-category, on a site with 10 categories, and a total of 10 sub-categories, there’s no way I can trust Google to “figure it all out”.

“Google is a mess of an organization. The lack of consistency, cohesiveness, and reinforcing signals within their own organization ends up being reflected by the lack of consistency, cohesiveness, and reinforcing signals site owners, managers and team members put out when doing the work on individual sites.”

The Pragmatic Response

Cyrus Shepard was non-judgmental. He tweeted a proactive and pragmatic course of action.

We who work online are pretty much living in Google’s world. Google is the hand that feeds many publishers.

Yet it is, as Danny Sullivan called it, an ecosystem. Google thrives when publishers thrive.

Google Made More Linking Practices Less Effective at Manipulating Rankings by @MattGSouthern

In Google’s newly released webspam report, the company reveals how it dealt with link spam in 2018.

Link spam is one of three types of spam discussed in the report. Other standouts include spam on hacked sites and user-generated spam.

Here’s more about how Google fought those types of spam last year.

Link Spam

Link spam refers to any type of link building practice that violates Google’s webmaster guidelines.

Google stresses the value of links as a search ranking factor when explaining why it’s important to fight link spam.

“We continued to protect the value of authoritative and relevant links as an important ranking signal for Search.”

Egregious link spam is dealt with swiftly, Google says. A number of link building practices were even made less effective last year.

Without getting too specific, Google says it “made a number of bad linking practices less effective for manipulating ranking.”

Lastly, Google touts its webmaster and SEO outreach efforts:

“Above all, we continued to engage with webmasters and SEOs to chip away at the many myths that have emerged over the years relating to linking practices.”

The best way to avoid getting penalized for link spam, Google explains, is to avoid building links primarily as an attempt to rank better.

Other ways Google fought webspam in 2018

Here’s a quick recap of other key highlights from the report:

  • Less than 1% of results visited by users are for spammy pages.
  • An 80% reduction in the impact of user-generated spam on search users.
  • In 2018, Google received over 180,000 search spam user reports.
  • Google took action on 64% of the reports it processed.
  • Google sent over 186 million messages to website owners regarding their site’s appearance in search results.
  • Around 2%—or about 4 million—of the messages Google sent were related to manual actions.

Google Explains the Difference Between Neural Matching and RankBrain by @MattGSouthern

Google addressed some questions that have been circulating in the SEO community lately about neural matching and how it’s used in search.

Danny Sullivan, via Google’s Search Liaison account, published a series of tweets explaining the difference between neural matching and RankBrain.

Here is an overview of what was shared by Sullivan.

How Google Uses Neural Matching

Neural matching helps Google better relate words to searches. It’s an AI-based system that has been in use since 2018.

Sullivan describes neural matching as a “super-synonym” system:

“For example, neural matching helps us understand that a search for “why does my TV look strange” is related to the concept of “the soap opera effect.” We can then return pages about the soap opera effect, even if the exact words aren’t used…”

In September 2018, Google stated that neural matching is used in 30% of searches.

It’s not known how widely used neural matching is right now, though it would be reasonable to assume its use has only expanded.

Sullivan’s description of neural matching closely resembles what my colleague Roger Montti wrote about it last year: What is Google’s Neural Matching Algorithm?

What is RankBrain?

RankBrain helps Google relate pages to concepts, even when the pages do not include the exact words used in a query.

It’s also an AI-based system which has been in use since 2016, two years before Google implemented neural matching.

There are theories which suggest RankBrain also takes into account user behavior signals, but those theories have been debunked.

So, to sum up, RankBrain relates pages to concepts and neural matching relates words to searches.

Google Stopped Supporting Rel=prev/next in Search Indexing Years Ago by @MattGSouthern

Google finally decided to tell the search community that rel=”next” and rel=”prev” haven’t been used in years.

John Mueller from Google broke the news on Twitter earlier today.

Shortly after, the Google Webmasters account made an official announcement.

Google has long recommended using rel=prev/next markup when publishing a paginated series of web pages.

The markup would communicate to Google that the individual pages are all part of the same series.

Rel=prev/next markup also sent signals to Google about which page in the series is first, second, third, and so on.

Now, Google doesn’t support the markup at all.

For years (apparently) Google hasn’t been using signals from rel=prev/next when indexing content in search results.

What has Google been doing instead?

No More Rel=prev/next

Google has been indexing content as it’s found by Google’s crawlers, Mueller says.

In other words, web pages in a series are indexed the same as any other piece of single-page content.

As it turns out, publishers are good at sending the appropriate signals to Google without rel=prev/next.

Publishers can send signals to Google in other ways, such as linking to other pages of a series within the body content.

Think about how you would communicate to a searcher that the page they landed on is part 3 out of 5 in a series.

When the pagination is obvious to a reader it should be obvious to Google as well.
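
For example, plain, crawlable pagination links in the body content make the series structure explicit to both readers and crawlers. Here is a minimal sketch of what that might look like on a hypothetical page 3 of a 5-part series (the URLs are placeholders):

```html
<!-- Visible pagination on a hypothetical page 3 of a 5-part series. -->
<!-- Ordinary <a> links make the structure obvious to readers and crawlable for Google. -->
<nav aria-label="Pages in this series">
  <a href="https://example.com/guide?page=2">&laquo; Part 2</a>
  <span>Part 3 of 5</span>
  <a href="https://example.com/guide?page=4">Part 4 &raquo;</a>
</nav>
```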

Another option is to create more single-page content instead of paginated content. Google says users prefer it, although multi-page content is still acceptable for search.

John Mueller Discusses Links that Google Ignores by @martinibuster

In a Webmaster Hangout, Google’s John Mueller answered a question related to link building. John Mueller’s answer gave a peek into a little known part of Google’s algorithm that handles links.

How Does Google Treat Links?

The question was very specific. It asked how Google treated specific link building tactics that are easy to do.

Here is the question asked:

How does Google treat backlinks from website analysis websites or user profiles?

Website analysis websites may be a reference to sites that provide technical information about domains.

User profiles is a reference to joining web forums and creating a profile. Spammers add a link to their profile, which is displayed publicly. Typically a forum links to the membership list, where Google finds and crawls the profile link. Every post on a forum also features a link back to the member’s profile. A spammer will sometimes seek to exploit that by creating a few posts that result in a link to the profile, which then links to the target site.

This is one of the easiest ways to build links. There is even software for automating the process of registering and creating a profile with a link.

John Mueller answered the question but broadened the scope to address a wider range of easy link building techniques.

“I guess user generated content and automatically generated content sites. For the most part we ignore those because like, they link to everything and it’s easy to recognize so that’s something that we essentially ignore.”

Google Ignores Forum Profile Links

John Mueller made it clear that Google ignores forum profile links. He called them user generated content links. What’s interesting is that Mueller said they were easy to recognize. This is important.

That means Google doesn’t need fancy link analysis or anything like that. Google’s algorithms easily recognize them as forum profile links and automatically discount them. The context likely signals that these are forum profile links, so they are removed from the link graph and provide no ranking benefit.

Are Easy Links Ignored?

John Mueller stated that links that are automatically generated have zero ranking benefit. The reason is that the context of the link is not the kind of context Google wants to count for ranking purposes. It’s possible that they are useful for discovery.

Discovery means Google’s ability to find a web page. Discovery of a web page is not useful for ranking.

Good Links are Hard to Create

Cyrus Shepard (@CyrusShepard) created a Twitter poll that asked what is the most difficult part of SEO?

The most popular result was Outreach/PR, which is another way of saying building links. The second most difficult part of SEO was creating link worthy content. The two SEO activities related to links were voted as the most difficult.

Screenshot of a Twitter poll created by Cyrus Shepard

A poll conducted on Twitter showed that link building was the most difficult part of SEO.

It’s not surprising that link building was voted the most difficult. Google devalues a wide range of links. Many web publishers do not respond to link building outreach.

It kind of seems like the old way of building links is broken. And that contributes to the feeling that link building is hard.

  1. What if the reason link building is difficult is because we’re going about it the wrong way?
  2. What if the reason link building is difficult is because it’s time to change how link building is conducted?

Definition of a Link that Helps a Site Rank

A real link is created independently by someone who is so enthusiastic about something that they link to it.

In some ways, we tend to focus on the wrong metrics such as if the site is an “authority” and so on.

What about the enthusiasm factor?

Google encourages publishers to be awesome. That’s excellent advice, but it’s kind of vague.  I’m not criticizing the “be awesome” advice. There’s truth to it. It simply lacks a lot of nuance.

Enthusiasm, Loyalty and Desire

There are many ways of building enthusiasm, loyalty and desire.

  • What made Zappos so popular?
  • What makes you ask for a website by name?
  • Why did you pick your car? Was it because of what it says about you? Or because it’s cheap?

(Cheap can be awesome to some people.)

Those are relatively easy questions to answer. They are difficult to answer when it comes to SEO.

We all know that Google doesn’t respond to anchor text the way it did twelve years ago. So why do so many publishers still worry about anchor text percentages in 2019?

SEO seems to get locked into a way of doing things and even in the face of evidence that Google has changed, the rote methods and tactics tend to stay the same.

In my experience, considering those questions in relation to SEO results in long term strategies.

Link Building Evolves

Maybe it’s time for a shift in the link building paradigm. This wouldn’t be the first time the SEO industry evolved the way it creates links.

That doesn’t mean you have to stop doing what you feel is successful. It simply means exploring long term alternatives that react and bend with how web publishers spontaneously give links and how Google rewards them.

Watch the Webmaster Hangout here

Bing Rolls Out Text-to-Speech for Search Results by @MattGSouthern

Bing introduced a number of new search features this week including text-to-speech, expanded intelligent answers, and enhanced visual search.

Text-to-Speech

Bing’s mobile app can now read text in a voice that’s said to be indistinguishable from a human’s. So Bing can speak search results back to users in natural-sounding language.

Bing says its text-to-speech AI has been developed to clearly articulate words with human-like intonation.

Intelligent Answers

Bing sometimes answers queries with ‘intelligent answers,’ which contain multiple pieces of information pulled from different sources.

As a result of advancements in processing power, Bing is now able to provide intelligent answers for harder questions.

“For example, instead of the relatively simple answer to ‘what is the capital of Bangladesh’, Bing can now provide answers to more complex questions, such as ‘what are different types of lighting for a living room’, quicker than before.”

Visual Search

Bing’s visual search allows users to search using an image. It has been upgraded with the ability to quickly detect multiple objects within an image.

So a user uploads an image, then Bing scans it and automatically detects the objects within the image. Lastly, Bing conducts searches for visual matches.

In the search results, Bing places clickable hotspots over the detected objects, which users can select to learn more about each one.

LinkedIn Now Lets Marketers Target Ads to ‘Lookalike Audiences’ by @MattGSouthern

LinkedIn is upgrading its ad targeting options with the ability to reach more of the right people.

Lookalike audience targeting reaches people who are similar to your ideal customer.

“LinkedIn’s lookalike audiences combine the traits of your ideal customer with our rich member and company data to help you market to new professional audiences similar to your existing customers, website visitors and target accounts.”

LinkedIn emphasizes the following benefits of lookalike audience targeting:

  • Reach high-converting audiences: Discover audiences similar to those who are already interested in your business.
  • Get results at scale: Extend the reach of your campaigns to more qualified prospects.
  • Engage new target accounts: Target your ads to additional companies you may not have previously considered. These companies match a similar company profile to your ideal customer.

LinkedIn says customers in the pilot were able to improve their campaign reach by 5-10x while still reaching the audiences that matter most to their organizations.

Get started with lookalike audiences by creating a Matched Audience in Campaign Manager. A matched audience could be subscribers to an email list, for example.

Other LinkedIn Ads Updates

In addition to lookalike audience targeting, LinkedIn also announced audience templates and expansion of interest targeting.

Interest Targeting

LinkedIn just introduced interest targeting back in January. Now it’s being expanded with more targeting capabilities.

Interest targeting allows marketers to reach LinkedIn users with relevant ads that match their professional interests.

Now it will also allow marketers to target users based on the professional topics and content they engage with through the Bing search engine.

Audience Templates

LinkedIn is introducing a tool for people who are new to advertising on LinkedIn, or existing advertisers looking to reach new audiences.

Audience templates give marketers a selection of over 20 predefined B2B audiences. Templates include audience characteristics, like member skills, job titles, groups, and so on.

Just click on one of the characteristics in the template to target those types of users.

All of these features are rolling out over the next two weeks.

Google Updates the Sitemaps Report in Search Console & Adds Ability to Delete Sitemaps by @MattGSouthern

Google has updated the sitemaps report in Search Console with new features, including the ability to delete a sitemap.

Updates to the sitemaps report will let users perform actions such as:

  • Opening the sitemap content in a new tab
  • Deleting a sitemap
  • Reviewing granular details for sitemaps with errors
  • Presenting RSS and Atom feed sitemaps

Google shared an example screenshot on Twitter.

Industry folks seem most enthused about the ‘remove a sitemap’ feature finally being available in the new Search Console.

Previously, users could only remove sitemaps by using the classic version of Search Console.

Google still hasn’t ported over all features from the classic version to the new version, but the company is clearly still working on it.

A note about deleting sitemap files

Remember, deleting a sitemap file from Search Console will only stop the data from being recorded in Search Console. It will not stop the sitemap from being crawled by Google.

Google will still know where to find the sitemap and will crawl it whether or not it’s in Search Console.

If you want to stop Google from crawling a sitemap you need to remove it from your server, which cannot be done through Search Console.
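
As a hypothetical example, if the sitemap is also referenced in your robots.txt file, Google can keep rediscovering it there, so that reference should be removed along with the file itself (the URL below is a placeholder):

```
# Hypothetical robots.txt: if it contains a Sitemap directive like the one below,
# remove that line in addition to deleting sitemap.xml from the server (so the URL
# returns a 404 or 410); otherwise crawlers can keep rediscovering the sitemap there.
User-agent: *
Disallow:

Sitemap: https://example.com/sitemap.xml
```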

Facebook Removes Targeting Options From Certain Ads to Fight Discrimination by @MattGSouthern

Facebook is taking steps to stop discriminatory advertising practices by removing targeting options that were used to exclude people.

Housing, employment, and credit ads can no longer be targeted by age, race, or gender.

As Facebook COO Sheryl Sandberg explains in a blog post, these changes are the result of settlement agreements with civil rights organizations.

“There is a long history of discrimination in the areas of housing, employment and credit, and this harmful behavior should not happen through Facebook ads.

Last year, one of the US’s top housing civil rights organizations, the National Fair Housing Alliance, as well as the American Civil Liberties Union, the Communication Workers of America, and other private parties, filed litigation against us, saying that we need to build stronger protections against abuse.”

Here are the changes being implemented as part of Facebook’s settlements with the NFHA, ACLU, CWA, and other groups:

  • Housing, employment or credit ads can no longer be targeted by age, gender or zip code.
  • Advertisers offering housing, employment and credit opportunities will have a much smaller set of targeting categories to use in their campaigns overall.
  • Facebook has agreed to build a tool that will allow users to search for and view all current housing ads in the US, regardless of whether the ads were shown to them.

Sandberg concludes her announcement by saying housing, employment, and credit ads are meant to help people, not exclude them.

“Getting this right is deeply important to me and all of us at Facebook because inclusivity is a core value for our company.”

Expect to see further efforts from Facebook with regards to preventing discrimination and promoting inclusion.

John Mueller on Why Google Ranks Sites with Spammy Links by @martinibuster

Google’s John Mueller was asked in a Webmaster Hangout why Google ranked websites that used bad link building. John Mueller explained how Google treats those bad links. Then he shared the ranking factors that caused those sites to rank number one.

Why Reporting Bad Links Does Not Always Work

A web publisher asked why sites with bad links ranked. The publisher added that they had reported the site but the site continued to rank well.

Here is the question:

“I see a disturbing amount of link networks and nefarious link building schemes being used… I reported these as suggested but is there anything else that we can do?

This is really frustrating.”

John Mueller responded that the spam report form works, but not always the way you hope it will:

“Reporting them in the… search console, the spam report form, the link spam report form, that’s kind of a good place to go.

That helps us to better understand that these are pages that we need to review from a manual web spam point of view.”

John Mueller then cautioned against high expectations when reporting spam:

“It’s not really a guarantee that we drop those pages completely.”

Mueller explained why the spam report form does not result in an automatic penalty:

“…when it comes to competitive areas, what we’ll often see is that some sites do some things really well and some sites do some things really bad.

We try to take the overall picture and use that for ranking.”

John is explaining that the bad links are ignored and that the real reason the site is ranking is that they do some things really well.

Ranking Signals that Power Sites with Spammy Links

Now here is where John Mueller alludes to the ranking factors that cause the link spammers to rank:

“For example, it might be that one site uses keyword stuffing in a really terrible way but actually their business is fantastic and people really love going there, they love finding it in search and we have lots of really good signals for that site.

So we might still show them at number one, even though we recognize they’re doing keyword stuffing.”

  1. The business is fantastic.
  2. Data shows users like visiting the site.
  3. Users react enthusiastically to seeing the site ranking at the top of Google.
  4. Google has “lots of really good signals” for the site.

As you can see, many of those signals that influence the rankings have to do with user interactions with the SERPs and user expectations. What Mueller seems to be implying is that users themselves are some of the sources of the ranking signals that power sites with spammy link building.

John Mueller explains that ignoring bad links is something Google does.

“A lot of times what will happen is also that our algorithms will recognize these kind of bad states and try to ignore them.

So we do that specifically with regards to links… where if we can recognize that they’re doing something really weird with links… then we can kind of ignore that and just focus on the good parts where we have reasonable signals that we can use for ranking.”

“…we try to look at the bigger picture when it comes to search, to try to understand the relevance a little bit better.”

Bad Links and Competitor Research

The important takeaway here is that what you see in the backlinks is not necessarily the reason why a site is ranking. Some publishers feel they need to copy the competitor’s link building in order to compete. But that’s not necessarily the case, especially if the links are spammy.

That kind of false evidence is called a red herring. A red herring is a device that authors use to trick readers into believing that one of the characters is guilty. They plant big, obvious clues pointing to an innocent character when in reality the culprit is someone else entirely.

This happens with competitive research. My opinion is that you shouldn’t stop your competitor research when you come across spammy backlinks. Dig deeper and you’ll likely find the real reason why a site is ranking.

Isn’t understanding why a competitor ranks the point of competitor research?

Watch the Webmaster Hangout here.
