
What Happens If Googlebot Can’t Crawl Your Website?


In the world of SEO, understanding how search engines like Google crawl and index websites is crucial for ensuring strong visibility and organic traffic. Technical SEO expert Kristina Azarenko recently ran an experiment in which she intentionally prevented Googlebot from crawling her website for a few weeks. The results were surprising and instructive, shedding light on what actually happens when Googlebot’s access is blocked. In this article, we’ll explore the unexpected outcomes of the experiment and how they can affect your website’s performance in search rankings.

The Experiment: Blocking Googlebot’s Access

From October 5th to November 7th, Kristina Azarenko prevented Googlebot from crawling her website. During this period, she observed several notable changes that occurred as a result of blocking Googlebot’s access. Let’s delve into each of these surprising outcomes:
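
Azarenko hasn’t published her exact directives, but blocking Googlebot site-wide is typically done with a rule of this general shape in the site’s robots.txt file (an illustrative sketch, not her actual configuration):

```
# Illustrative robots.txt rule that blocks Googlebot from every URL on the site.
# Other crawlers are unaffected unless they have their own matching rules.
User-agent: Googlebot
Disallow: /
```

Keep in mind that robots.txt controls crawling, not indexing: URLs blocked this way can still appear in Google’s index if other pages link to them, which becomes relevant in outcome 5 below.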

1. Favicon Removed from Google Search Results

One of the unexpected consequences of preventing Googlebot from crawling Azarenko’s site was the removal of the website’s favicon from Google’s search results. The favicon is the small icon displayed next to the website’s URL in search listings. This change highlights the importance of Googlebot’s ability to crawl a website in order to gather necessary information, such as the favicon, for search result display.
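
For context, the favicon Google shows is normally the one declared in the home page’s HTML head, along these lines (a generic illustration, not Azarenko’s actual markup):

```html
<!-- Typical favicon declaration in the <head> of the home page. -->
<!-- Googlebot must be able to fetch both this page and the icon file
     for the favicon to appear next to the site in search results. -->
<link rel="icon" href="/favicon.ico">
```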

2. Decline in Video Search Results

Azarenko noticed a significant decrease in video search results during the experiment, and even after the experiment concluded, the video search rankings did not fully recover. This suggests that when Googlebot is unable to crawl a website, it may have difficulty indexing and ranking video content. Website owners who heavily rely on video content should take note of this potential impact on their search visibility.

3. Stable Positions with Slight Volatility in Canada

Despite Googlebot’s inability to crawl Azarenko’s site, the overall positions of her website’s pages remained relatively stable. However, she did observe slightly more volatility in search rankings specifically in Canada. This finding suggests that Google’s algorithms may handle blocked crawling differently in certain regions, potentially affecting search rankings to a greater extent in some areas.

4. Slight Decrease in Traffic

Interestingly, despite Googlebot’s inability to crawl the website, Azarenko only experienced a slight decrease in traffic during the experiment. This finding suggests that other factors, such as existing search visibility and user behavior, may have a more significant impact on website traffic than Googlebot crawling alone. However, it’s important to note that the experiment was conducted for a relatively short period, and the long-term effects on traffic may vary.

5. Increase in Reported Indexed Pages

One surprising outcome of blocking Googlebot’s access to Azarenko’s website was an increase in the number of indexed pages reported in Google Search Console. Pages carrying “noindex” meta robots tags, which were intended to keep them out of the index, ended up being indexed anyway: because Googlebot could not fetch those pages, it never saw the noindex directives and indexed the URLs based on links pointing to them. This finding underscores the importance of regularly monitoring meta tags, and of remembering that a noindex directive only works if Google is allowed to crawl the page that carries it.
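
As a reminder, a noindex directive lives in the page itself (or in an X-Robots-Tag HTTP header), so Googlebot has to fetch the page to see it. A typical in-page version looks like this generic snippet:

```html
<!-- Keeps the page out of Google's index, but only if Googlebot can crawl the page. -->
<!-- If robots.txt blocks the URL, this tag is never read, and the URL can still be
     indexed from links alone ("Indexed, though blocked by robots.txt"). -->
<meta name="robots" content="noindex">
```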

6. Multiple Alerts in Google Search Console

Throughout the experiment, Azarenko received multiple alerts in Google Search Console indicating issues related to blocked crawling. These alerts included messages such as “Indexed, though blocked by robots.txt” and “Blocked by robots.txt.” This highlights the importance of regularly monitoring the health and accessibility of a website in Google Search Console, especially when intentionally blocking or restricting Googlebot’s access.

Why Do These Findings Matter?

Understanding the implications of blocking Googlebot’s access to a website is crucial for SEO practitioners and website owners. By conducting experiments like Azarenko’s, we gain valuable insights into how search engines like Google react to restricted crawling. These findings help us make informed decisions about website optimization and avoid unintended consequences.

While most companies may not be able to conduct such experiments themselves, the information gathered from these experiments provides valuable knowledge that can be applied to various scenarios. It reinforces the importance of regular testing, monitoring, and optimization to ensure optimal search visibility, rankings, and organic traffic.

Similar Experiment: Impact on Ranking and Featured Snippets

In a similar experiment by Patrick Stox of Ahrefs, two high-ranking pages were blocked from crawling by robots.txt for five months. The impact on ranking was found to be minimal, but both pages lost all their featured snippets. This emphasizes the potential consequences of blocking Googlebot’s access to specific pages and the impact it can have on featured snippets, which are valuable for increasing visibility and click-through rates.

Source: Search Engine Land


FAQ

Q1: What was the purpose of Kristina Azarenko’s experiment?

A1: The purpose of the experiment was to intentionally prevent Googlebot from crawling her website to observe the consequences of blocking Googlebot’s access.

Q2: What were some of the unexpected outcomes of blocking Googlebot’s access during the experiment?

A2: The unexpected outcomes included the removal of the website’s favicon from Google’s search results, a decline in video search results, stable positions with slight volatility in Canada, a slight decrease in traffic, an increase in reported indexed pages with “noindex” meta tags, and multiple alerts in Google Search Console indicating issues related to blocked crawling.

Q3: Why is it important to understand the implications of blocking Googlebot’s access to a website?

A3: Understanding these implications is crucial for SEO practitioners and website owners as it helps make informed decisions about website optimization and avoid unintended consequences. It underscores the importance of regular testing, monitoring, and optimization for optimal search visibility, rankings, and organic traffic.

Q4: What impact can blocking Googlebot’s access have on featured snippets?

A4: Blocking Googlebot’s access to specific pages can result in the loss of featured snippets for those pages. Featured snippets are valuable for increasing visibility and click-through rates.

Q5: What should website owners and SEO practitioners take away from these experiments?

A5: Website owners and SEO practitioners should recognize the importance of regular testing, monitoring, and optimization to ensure optimal search visibility. They should also be aware of the potential consequences of blocking Googlebot’s access to specific pages and be prepared to address any issues that may arise as a result.

Featured Image Credit: Photo by Aideal Hwa; Unsplash – Thank you!
