
Why a Sudden Spike and Drop in Indexing in Google Search Console Might Not Be a Big Deal
Recently, many site owners have been concerned about unexpected spikes and subsequent drops in indexing numbers, as shown in Google Search Console’s coverage report. Some worry that this could indicate a major SEO issue, like Google indexing unintended URLs that might affect site quality. However, this isn’t usually a major cause for concern. Let’s explore why this happens and why it’s often nothing to stress about.
The Role of “Indexed, Though Blocked by robots.txt”
One common reason for these sudden fluctuations is the “Indexed, though blocked by robots.txt” section at the bottom of the coverage report. Many site owners overlook this section or question its significance, but it’s key to understanding these swings in indexing numbers.
When this category surges, it affects the total indexing count because “Indexed, though blocked by robots.txt” URLs are still considered indexed, even though they are blocked from crawling. This can create confusion if you’re not aware of this detail.
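To make the category concrete, here’s a hypothetical robots.txt with the kind of rule that produces this state (the /filters/ directory is invented for illustration): crawling of everything under it is disallowed, yet any of those URLs that Google discovers through links can still be counted as indexed.

```
# Hypothetical robots.txt at https://www.example.com/robots.txt
User-agent: *
Disallow: /filters/
```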
For example, if two sites experience similar spikes and drops in indexing, you’ll likely see a corresponding trend in each site’s “Indexed, though blocked by robots.txt” section in Search Console. This behavior is normal and doesn’t signal an SEO problem.
Check the “Indexed, Though Blocked by robots.txt” Section
If you notice a sudden spike and drop in indexing, first check the “Indexed, though blocked by robots.txt” section in Search Console. This section often explains the fluctuation, and when it does, there’s usually no significant SEO issue to address.
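You can also spot-check individual URLs programmatically. Below is a minimal sketch using Google’s URL Inspection API via the google-api-python-client library; it assumes a service account key at a hypothetical path and that the account has been granted access to the Search Console property. The URLs are placeholders.

```python
# Minimal sketch: inspect one URL to see whether it is indexed while
# blocked by robots.txt. Assumes google-api-python-client is installed.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # hypothetical key file path
)
service = build("searchconsole", "v1", credentials=credentials)

response = service.urlInspection().index().inspect(body={
    "siteUrl": "https://www.example.com/",  # your verified property
    "inspectionUrl": "https://www.example.com/filters/blue-shoes",  # placeholder
}).execute()

status = response["inspectionResult"]["indexStatusResult"]
print(status.get("coverageState"))   # e.g. "Indexed, though blocked by robots.txt"
print(status.get("robotsTxtState"))  # "DISALLOWED" when robots.txt blocks crawling
```

The coverageState field mirrors the human-readable status shown in the coverage report, so it’s an easy way to confirm which category a given URL falls into.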
Here are a couple of examples where the spike and drop in indexing align with changes in the “Indexed, though blocked by robots.txt” section:
![Example Image 1]
![Example Image 2]
Why Spikes in Indexing Might Not Be a Big Deal
Google has clarified that robots.txt controls crawling, not indexing: disallowing a URL stops Google from fetching its content, but it doesn’t prevent the URL itself from being indexed. If Google discovers links to a blocked URL, it may still index that URL, even though the content remains inaccessible for crawling.
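The distinction is easy to see with Python’s standard-library robots.txt parser, which answers only the crawling question. This is a minimal sketch; the rules and URL are hypothetical.

```python
import urllib.robotparser

# Parse hypothetical robots.txt rules (normally fetched from the site).
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /filters/",
])

# can_fetch() answers the crawling question only: may this user agent
# fetch this URL under these rules?
print(rp.can_fetch("Googlebot", "https://www.example.com/filters/blue-shoes"))
# -> False: crawling is disallowed. Whether the URL gets indexed is a
# separate decision Google can make from links alone, without the content.
```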
If these URLs show up in search results, you might see a message indicating that a snippet could not be generated, with a link to Google’s documentation explaining this issue. This message often notes that blocking pages with robots.txt could be the reason.
Even though Google indexes these URLs, doing so rarely has a meaningful impact on your site’s SEO, quality, or rankings. And typically, Google’s systems deindex these URLs shortly after indexing them.
Why “Indexed, Though Blocked by robots.txt” Isn’t a Problem
I’ve observed many cases of spikes in the “Indexed, though blocked by robots.txt” category over the years and wondered about their impact on site quality. In a 2021 Search Central Hangout, Google’s John Mueller clarified that URLs blocked by robots.txt don’t affect site quality because Google can’t crawl or view their content. You can find more details in my tweet about this, which includes a link to the video with John.
In another clip, John explained that the “Indexed, though blocked by robots.txt” status serves as a warning for site owners who might want these URLs indexed. If you prefer to keep these URLs blocked and not crawlable, there’s no problem. Since Google can’t access the content, these URLs won’t harm your site.
Summary: Sudden Spikes in Indexing Are Often Not a Concern
If you notice a sudden spike in indexing followed by a drop, review all of the reports in Google Search Console. Often, the fluctuation traces back to URLs categorized as “Indexed, though blocked by robots.txt.” If so, there’s usually no cause for concern: these URLs are indexed but not crawled, so they won’t affect your site’s quality. Take a breath and focus on more critical aspects of SEO.