Highly Visible and Low Quality (or Unhelpful) – A Most Dangerous SEO Combination


Highly visible content that's low quality is dangerous SEO-wise.

As I’ve been helping companies that were heavily impacted by the “2023 Fall of Insane Algorithm Updates”, I have once again heard from several site owners who had their highest levels of traffic right before getting hammered. I’ll cover more about that soon, but it’s a pattern I have seen for a long time. And now that we’ve read Google’s testimony about Navboost and user interaction signals from the latest antitrust trial, several ideas have been running around my head.

While helping companies impacted by major algorithm updates (like broad core updates), it’s super important to run what I call a “delta report” to identify the top queries and landing pages that dropped during those updates. Delta reports can be powerful, helping site owners determine whether a drop was due to relevancy adjustments, intent shifts, or overall site quality problems. And if quality was the problem, digging into the queries and landing pages that dropped the most can often yield important findings: low-quality content, thin content, low-quality AI content, user experience barriers, aggressive advertising, a deceptive or aggressive affiliate setup, and more.
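If you want to build that delta report yourself, here’s a minimal sketch in Python, assuming you’ve exported query- and page-level performance data from Google Search Console for a window before and after the update (the file names and column names below are placeholders):

```python
import pandas as pd

# Hypothetical Search Console exports for a window before and after the update.
# File names and columns ("query", "page", "clicks") are placeholders.
before = pd.read_csv("gsc_before_update.csv")
after = pd.read_csv("gsc_after_update.csv")

# Aggregate clicks per query/landing page combination in each window.
keys = ["query", "page"]
before_agg = before.groupby(keys, as_index=False)["clicks"].sum()
after_agg = after.groupby(keys, as_index=False)["clicks"].sum()

# Join both windows and compute the change (the "delta") per combination.
delta = before_agg.merge(after_agg, on=keys, how="outer",
                         suffixes=("_before", "_after")).fillna(0)
delta["click_delta"] = delta["clicks_after"] - delta["clicks_before"]

# The biggest losers are the query/landing page combinations to audit first.
top_drops = delta.sort_values("click_delta").head(50)
print(top_drops[["query", "page", "clicks_before", "clicks_after", "click_delta"]])
```

The same approach works with impressions or average position instead of clicks; the point is simply to surface which query and landing page combinations lost the most ground around the update.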

This also had me once again thinking about “quality indexing”, which I have covered many times in my posts and presentations about major algorithm updates. That means making sure your highest-quality content is indexed, while ensuring low-quality or thin content is not. That ratio matters, because we know Google takes every indexed page into account when evaluating quality. Google’s John Mueller has explained that a number of times, and I have shared those video clips over the years.

But are all things created equal from a quality standpoint? In other words, if you find a batch of lower-quality cruft on your site that’s not showing up in the search results often, is that the same as lower-quality content (or unhelpful content) that’s highly visible in the search results?

John Mueller’s comments about removing low-quality content have shifted slightly over the years. I noticed that shift over time, and it always piqued my curiosity. John went from explaining that any indexed content can be evaluated for quality, to explaining that he would focus first on what’s highly visible and low quality. I’ll explain why this makes complete sense (especially with fresh information from the latest Google antitrust trial).

Below, I’ll quickly cover some background about user-engagement signals and SEO, then walk through some interesting comments Google’s John Mueller has made over the years about removing low-quality content, and then tie that together with Google’s recent testimony about leveraging user interaction signals to assist with ranking (i.e., Navboost).

First, let’s take a walk down memory lane and discuss user interaction signals and their SEO impact.

User interaction signals, SEO, and Google’s uncanny ability to understand user happiness:
With Google’s latest antitrust trial, we learned a lot about how Google leverages user interaction signals to impact rankings. For example, Google’s Pandu Nayak spoke about Navboost, how Google collects 13 months of click data (previously 18 months), and how that data can help Google understand user happiness (satisfied search users).

Although learning about Navboost blew many people away (rightfully so), there were always signs that Google was doing something with user engagement data from a rankings perspective. Nobody really knew for sure, but it was hard to ignore the concept while digging into the many sites that dropped during major algorithm updates over the years (especially broad core updates).

For example, I wrote a post back in 2014 about the misleading sinister surge in traffic before a major algorithm hit. Note, Search Engine Watch is having issues with older posts, so I’ve linked to the Wayback Machine version of my post. I wrote that post after having many companies reach out to me about huge Panda hits, explaining they had just experienced their highest levels of traffic ever until… they crashed with the next update.

To me, it seemed like Google was gaining so much intelligence based on how users were engaging with the content from the SERPs that it ended up contributing to the drop. In other words, Google could see unhappy users at scale for those sites. And then boom, they tanked.

Here’s a quote from my SEW article:

In addition, AJ Kohn wrote a piece in 2015 about CTR as a ranking factor. AJ has worked on many sites over the years and had a serious hunch that user interaction signals played a role in rankings. Here’s a screenshot from AJ’s post referencing implicit user feedback.

I have also written heavily about the danger of low dwell time and how implementing something like adjusted bounce rate could help site owners get a better feel for content that isn’t meeting or exceeding user expectations. That post is from 2012. And I expanded on that idea with another post in 2018 about using several tracking mechanisms as a proxy for user happiness (including adjusted bounce rate, scroll depth tracking, and time on page). Note, this was for GA3, not GA4.
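As a rough illustration of that idea, here’s a minimal sketch that flags weak pages from an exported engagement report. The file name, column names, and thresholds are all placeholders; what counts as a low time on page or a shallow scroll depends heavily on the content type:

```python
import pandas as pd

# Hypothetical per-page engagement export (not an actual GA schema):
# page, avg_time_on_page (seconds), avg_scroll_depth (0-1), entrances.
pages = pd.read_csv("page_engagement.csv")

# Flag pages where visitors bail quickly and barely scroll, a rough proxy
# for content that isn't meeting or exceeding user expectations.
low_engagement = pages[
    (pages["avg_time_on_page"] < 15)      # illustrative threshold, tune per content type
    & (pages["avg_scroll_depth"] < 0.25)  # fraction of page scrolled
    & (pages["entrances"] > 100)          # only pages with meaningful traffic
]

print(low_engagement.sort_values("entrances", ascending=False).head(25))
```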

And as a great example of how user engagement problems could impact SEO, there was a case study I wrote in 2020 about the “SEO engagement trap”, where I found users running in circles around a site and, I’m sure, ultimately heading back to the SERPs for better answers. That site got pummeled by a broad core update and recovered down the line after addressing many of those issues. So yep, more negative user interaction signals for Google to evaluate.

Here is a screenshot from that post showing behavior flow reporting for frustrated users:

Then there’s the slide (and motto) I’ve been using for a long time in my presentations about broad core updates: “Hell hath no fury like a user scorned”. This relates to more than just content quality and includes aggressive advertising, user experience barriers, and more. Basically, don’t annoy the heck out of users. You will pay a heavy price. And by the way, what’s a great way for Google to understand annoyed users? Well, by tracking user interaction signals like low dwell time (someone running back to the SERPs quickly after visiting a site from Google).

And on the topic of dwell time, one of my favorite quotes is from years ago, when Bing’s Duane Forrester wrote a blog post explaining how low dwell time could impact search visibility. Duane doesn’t work for Bing anymore, but he was the Matt Cutts of Bing for several years. In that post, which has since been deleted (but can be found via the Wayback Machine), Duane explained the following:

“If your content does not encourage them to remain with you, they will leave. The search engines can get a sense of this by watching the dwell time. The time between when a user clicks on our search result and when they come back from your website tells a potential story. A minute or two is good as it can easily indicate the visitor consumed your content. Less than a couple of seconds can be viewed as a poor result. And while that’s not the only factor we review when helping to determine quality, it’s a signal we watch.”

Then in another post on SEJ, Duane went on to say:

“Dwell time was one of those concepts that made perfect sense once explained, and it was immediately obvious that such a metric could be very important in determining searcher satisfaction.”

And finally, Duane explained that dwell time was considered “in a mix of factors”, which is important to understand. He cautioned that chasing dwell time was a bad idea and recommended focusing on broader improvements that increase user engagement across a site (dwell time may increase as a result of that work). This is also why I believe the “kitchen sink” approach to remediation is a powerful way to go. Tackle as many quality issues as you can, and great things can happen down the line.

Note, don’t focus on the amount of time Duane provided in 2011 for a “short engagement”… I’m sure that’s one of the reasons the post was removed. For some types of content, a shorter period of time is totally fine from a user engagement standpoint. But the main point about low dwell time, especially when seen across many pieces of content from a site, is super important to understand. It was then, and it surely is now.

The main point I’m trying to convey is that user interaction always seemed to be a factor somehow… we just didn’t know exactly how. That is, until Navboost was exposed. I’ll cover more about that soon.

Why “highly visible” is important: Comments from Google’s John Mueller.
OK, now let’s take a closer look at what I call “quality indexing”. Again, that’s making sure only your highest-quality content that can meet or exceed user expectations is indexed, while ensuring low-quality or thin content is not indexed.
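If it helps, here’s a minimal sketch of how you might gauge that ratio of indexable URLs that are thin, using a crawl export where you’ve already flagged which URLs are indexable and which are thin (the file and column names below are placeholders, and the flags are assumed to be booleans):

```python
import pandas as pd

# Hypothetical crawl export: one row per URL with an "indexable" flag and a
# "thin" flag (e.g. based on word count or a manual quality review).
crawl = pd.read_csv("crawl_export.csv")  # columns: url, indexable, thin

indexable = crawl[crawl["indexable"]]
thin_indexable = indexable[indexable["thin"]]

# A rough "quality indexing" check: how much of what can be indexed is
# content you would actually stand behind?
ratio = 1 - len(thin_indexable) / max(len(indexable), 1)
print(f"{len(indexable)} indexable URLs, {len(thin_indexable)} thin "
      f"-> roughly {ratio:.1%} look OK")
print(thin_indexable["url"].head(25).to_string(index=False))
```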

Google has explained over time that all pages indexed are taken into account when evaluating quality. I have covered that many times in my blog posts and presentations about major algorithm updates (especially when covering broad core updates).

Here is John from 2017 explaining that:

But over time, I noticed a change in John’s response when asked about removing lower-quality content. John began to explain that he would focus on content that was “highly visible” and lower-quality. I heard that in several Search Central hangout videos, and he also covered that in an interview with Marie Haynes.

Here are several examples:

First, here is John explaining that all things are not equal content-wise. Google will try to figure out the most important pages and focus there:

Here is John explaining that Google can understand a site’s core content, where visitors are going, and more:

And here is John explaining that if you know people are visiting certain pages, then it would make sense to address those lower-quality pages first (so people stick around):

So why is this important? Well, now that we know more about Google leveraging user interaction signals to impact visibility and rankings via Navboost, that statement from John makes a lot of sense. The more user interaction data Google has for query and landing page combinations, the more it can understand user happiness. If a page is low-quality and isn’t visible, then Google has no user interaction data. Sure, Google can use other signals, natural language processing, etc. to better understand the content, but user interaction signals are amazing for understanding searcher satisfaction.

Note, before moving on, I wanted to be clear that I would still tackle all content quality problems across a site (even those that aren’t “highly visible”), but I would start with the most visible. That’s an important point for site owners dealing with a big drop from broad core updates, helpful content updates, or reviews updates.

Google’s antitrust trial and user-engagement signals influencing rankings.
I have mentioned Navboost several times in this post, and I wanted to touch on it in this section. I won’t go super in-depth since it’s been covered across a number of other industry posts recently, but it’s important to understand.

In Google’s antitrust testimony, Pandu Nayak explained the use of Navboost, a system for leveraging user interaction data to help Google understand user happiness. It basically enables Google to learn from its users over time and adjust rankings based on those learnings. Navboost is not the only factor at play obviously, but it’s an important factor when determining rankings.

Here is Pandu Nayak on the importance of Navboost:

“I mean, it’s hard to give sort of a rank ordering like this. I will say navboost is important, right. So I don’t want to minimize it in any way. But I will also say that there are plenty of other signals that are also important, like the ones I mentioned. And you can’t really turn off some of these things. I don’t know what it would mean to turn off, like, the index to the documents. That is in some ways like the most important thing, the words on the page and so forth, right. So it’s a little hard to judge in that way. So I would say that navboost is one of the important signals that we have.”

So, if pages are highly visible for a query, Google is actively collecting user-engagement signals for those URL and query combinations (now collecting 13 months of click data). And Navboost can use those user interaction signals to help Google’s systems understand user happiness (including understanding dwell time, long clicks, scrolls, hovers, and more).

And on the flip side, if pages are not highly visible for queries, then Google cannot leverage those user interaction signals… or at least not enough of them to be impactful. It doesn’t mean Google can’t understand page quality, but it will lack real user signals for those query and landing page combinations. And that’s incredibly important.
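To make that concept more concrete, here’s a toy illustration in Python. To be clear, this is not how Navboost actually works; it’s just a simplified stand-in showing how aggregated click data per query and URL combination could hint at satisfaction, with short clicks back to the SERPs dragging the estimate down (the queries, URLs, dwell times, and the 30-second threshold are all made up):

```python
from collections import defaultdict

# Toy illustration only, NOT Google's implementation. Each record is a
# hypothetical observed click: (query, url, dwell_seconds).
clicks = [
    ("blue widgets", "https://example.com/widgets", 4),
    ("blue widgets", "https://example.com/widgets", 7),
    ("blue widgets", "https://example.com/widgets-guide", 180),
    ("blue widgets", "https://example.com/widgets-guide", 95),
]

# Aggregate "long clicks" (arbitrary 30-second threshold) per query/URL pair
# as a crude stand-in for satisfied visits.
stats = defaultdict(lambda: {"clicks": 0, "long_clicks": 0})
for query, url, dwell in clicks:
    stats[(query, url)]["clicks"] += 1
    stats[(query, url)]["long_clicks"] += dwell >= 30

for (query, url), s in stats.items():
    satisfaction = s["long_clicks"] / s["clicks"]
    print(f"{query} -> {url}: {s['clicks']} clicks, {satisfaction:.0%} long clicks")
```

The takeaway: without visibility there are no clicks to aggregate, so this kind of signal simply doesn’t exist for pages that rarely appear in the SERPs.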

Note, if you’re interested in learning more about user interaction signals from Google’s antitrust trial, AJ Kohn wrote a great post covering that information based on Pandu Nayak’s testimony. I recommend reading AJ’s post if you haven’t already since it covers more about Navboost, user interaction signals, CTR as a ranking factor, etc.

Here is a slide from a Google presentation that was entered as evidence during the trial. It underscores the importance of user interaction signals for determining searcher satisfaction. Keep in mind this slide is from several years ago. It’s about the “source of Google’s magic”:

And don’t forget about site-level quality algorithms:
Again, I don’t want to bog down this post with too much information from the antitrust trial, but my core point is that “highly visible” content in the SERPs can gain a ton of user interaction signals, which Google can use as a factor for rankings. It’s also possible that Google uses that data in aggregate to influence site-level quality algorithms. And those site-level algorithms can play a huge role in how sites fare during broad core updates (and other major algorithm updates). I have covered that for a long time as well.

What this means for you. My recommendation for site owners impacted by major algorithm updates:
For site owners heavily impacted by major algorithm updates, it’s important to take a step back and analyze a site overall from a quality perspective. And as Google’s John Mueller has explained previously, “quality” is more than just content. It’s also about UX issues, the advertising situation, how things are presented, and more.

And now, with confirmation that Google leverages user interaction signals to impact visibility and rankings via Navboost, I would push “highly visible” URLs up the list when deciding what to address. Yes, you should tackle all quality problems you surface, even cruft that might not be highly visible, but I would prioritize the URLs that were the most visible leading up to a major algorithm update. So yes, running a delta report just got even more important. See the sketch below for one way to combine visibility data with a quality audit.
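Here’s a minimal sketch of that prioritization, assuming you have a page-level Search Console export from the period before the update and a quality audit sheet flagging low-quality URLs (file and column names are placeholders):

```python
import pandas as pd

# Hypothetical inputs: a page-level Search Console export for the period
# before the update, and a quality audit sheet flagging low-quality URLs.
visibility = pd.read_csv("gsc_pages_before_update.csv")  # page, impressions, clicks
audit = pd.read_csv("quality_audit.csv")                 # page, low_quality (bool)

# Keep only flagged URLs and order them by how visible they were leading up
# to the update, so the "highly visible and low quality" combination comes first.
priority = (
    visibility.merge(audit, on="page")
    .query("low_quality")
    .sort_values("impressions", ascending=False)
)

print(priority[["page", "impressions", "clicks"]].head(50))
```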

Summary: Surface all low-quality content, but start with what’s “highly visible”.
Over the years, I have always explained that content needs to meet or exceed user expectations based on the query. Now, with the latest testimony from Google’s antitrust trial, we have hard evidence that Google is indeed leveraging user interaction signals to adjust rankings (and possibly to influence site-level quality algorithms). As a site owner, do not let low-quality, thin, or unhelpful content remain in the SERPs for users to engage with. As I’ve seen over the years with major algorithm updates, that will not end well. Definitely tackle all potential quality problems on a site, but don’t overlook what’s highly visible. That’s a great place to start when improving site quality overall.

GG


