The Complete Guide to Fixing "Googlebot Cannot Access CSS and JS Files" in WordPress

You've put hours of work into designing your WordPress site and making it fast and user-friendly. But in Google Search Console, you see the dreaded warning: "Googlebot cannot access CSS and JS files".

What does this cryptic message mean exactly? How do you resolve it and prevent it from hurting your organic search traffic? Don't worry – this in-depth guide will walk you through identifying blocked resources, fixing them, and keeping them unblocked.

Table of Contents

  1. What Does "Googlebot Cannot Access CSS and JS Files" Mean?
  2. Why Googlebot Needs to Access These Files
  3. How to Identify Which Files Are Blocked
  4. Allowing Access in robots.txt
  5. Other Causes of Blocked Resources
  6. Risks of Blocking CSS and JS
  7. Preventing Future Blocking
  8. Frequently Asked Questions

What Does "Googlebot Cannot Access CSS and JS Files" Mean? {#meaning}

This warning from Google Search Console indicates that their web crawler, Googlebot, is unable to access certain resource files (specifically CSS and JavaScript) used by your pages.

CSS (Cascading Style Sheets) is code that tells browsers how to display your content – things like fonts, colors, and layout. JavaScript is a programming language used to add interactivity and dynamic behavior to web pages.

When Googlebot says it "cannot access" these files, it means it's being blocked or restricted from crawling and indexing them. This can happen a few different ways, which we'll cover later.

Why Googlebot Needs to Access These Files {#why-access-needed}

To rank web pages, Google considers many factors beyond the words on the page. They use signals like mobile-friendliness, speed, and visual stability to assess the page experience. Google has stated that:

"We may not index all of your site's content if Googlebot doesn't have access to your site's resources like CSS and JavaScript files." – Google Search Central Documentation

A study by Searchmetrics found that the average web page in 2020 was 2MB in size, with CSS and JS accounting for over 60% of that payload. If Google can't access more than half of a page's resources, that's sure to negatively impact how the page is evaluated.

Additionally, any content loaded via JavaScript may not be indexed if the JS files are blocked. If key content is unindexed, the page is less likely to rank for relevant queries.

Practical example: Say you have an ecommerce product page with the price and "Add to Cart" button loaded in via JavaScript. If Googlebot can't access that JS file, it won't see the price or CTA – and the page has much lower chances of showing up in searches for that product.

How to Identify Which Files Are Blocked {#identify-blocked}

There are a couple of ways to check which CSS and JavaScript files Googlebot is having trouble with.

Google Search Console Coverage Report

The Coverage report in Search Console shows stats and details on pages that Google has indexed or tried to index. To see blocked resources:

  1. Open the Coverage report
  2. Under "Details", click the row for "Blocked by robots.txt" (if present)
  3. Click a specific example URL to see more info on blocked resources

URL Inspection Tool

The URL Inspection tool (the successor to the old Fetch as Google tool) lets you submit a URL and see exactly how Googlebot renders it. Here's how to use it:

  1. Enter a page URL and click "Test Live URL"
  2. Once the test completes, click "View Tested Page" to see a side-by-side of the live page vs Googlebot's view
  3. If the Googlebot view has missing content, scroll to the bottom to see a list of blocked resources

For example, a page with blocked CSS and JS will show a telling side-by-side comparison: the live page renders normally, while the Googlebot view appears unstyled or is missing content. Below the comparison, the tool lists the specific blocked CSS and JS files.

Running this test for key pages (homepage, top product/category pages, blog posts, etc.) is a good way to gauge the extent of any blocking issues.

Allowing Access in robots.txt {#allow-robots}

The robots.txt file is a plain text file that tells search engine bots which URLs on your site they are and aren't allowed to access. It's the first place to look if you're seeing "Googlebot cannot access CSS and JS" issues.

Some common ways CSS and JS get disallowed in robots.txt:

  • Disallowing entire directories like /wp-content/ or /wp-includes/
  • Explicitly disallowing file types like .css or .js
  • Overly broad disallow rules like Disallow: /

Here's an example of a robots.txt file that blocks access to CSS and JS in the WordPress themes and plugins directories:

User-agent: *
Disallow: /wp-content/themes/
Disallow: /wp-content/plugins/

To allow access, you can either delete those Disallow lines or add more specific Allow rules for CSS and JS files. Google honors the most specific (longest) matching rule, so a targeted Allow overrides a broader Disallow. For example:

User-agent: *
# Allow CSS and JS in themes directory
Allow: /wp-content/themes/*.css
Allow: /wp-content/themes/*.js

# Allow CSS and JS in plugins directory  
Allow: /wp-content/plugins/*.css
Allow: /wp-content/plugins/*.js
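
To build intuition for why those Allow rules win, here is a rough sketch of Google's matching logic in Python. This is an illustrative simplification, not Google's actual parser: `*` matches any run of characters, `$` anchors to the end of the URL, and the longest matching pattern decides the verdict (ties go to Allow).

```python
import re

def google_path_match(pattern, path):
    """Return True if a robots.txt path pattern matches the URL path,
    treating '*' as a wildcard and a trailing '$' as an end anchor."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, path) is not None

def is_allowed(rules, path):
    """Apply Google-style precedence: the matching rule with the longest
    pattern wins, and ties go to Allow. 'rules' is a list of
    ('allow'|'disallow', pattern) tuples; no match means allowed."""
    best = None  # (pattern_length, is_allow)
    for verdict, pattern in rules:
        if google_path_match(pattern, path):
            cand = (len(pattern), verdict == "allow")
            if best is None or cand[0] > best[0] or (cand[0] == best[0] and cand[1]):
                best = cand
    return True if best is None else best[1]

# The rules from the example robots.txt above
rules = [
    ("disallow", "/wp-content/themes/"),
    ("disallow", "/wp-content/plugins/"),
    ("allow", "/wp-content/themes/*.css"),
    ("allow", "/wp-content/themes/*.js"),
    ("allow", "/wp-content/plugins/*.css"),
    ("allow", "/wp-content/plugins/*.js"),
]

print(is_allowed(rules, "/wp-content/themes/twentytwentyfour/style.css"))  # True
print(is_allowed(rules, "/wp-content/themes/secret.txt"))                  # False
```

The Allow pattern for the stylesheet is longer (more specific) than the directory-wide Disallow, so the CSS file stays crawlable while other files in the directory remain blocked.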

Some SEO best practices for robots.txt:

  • Don't disallow crawling of your entire site (Disallow: /). This will severely limit indexing.
  • Only disallow access to admin or private sections that don't need to be indexed.
  • Use specific disallow rules instead of broad ones. For example, disallow /wp-admin/ rather than everything matching /wp-*/.
  • If disallowing a file type (like PDFs), use a targeted pattern such as Disallow: /*.pdf$ rather than blocking a whole directory.

There are a few ways to edit robots.txt in WordPress:

  • Directly edit the file via FTP/SFTP. Download the file from your site's root directory, make changes, and re-upload.
  • Use the Yoast SEO plugin's robots.txt editor (under Tools → File editor).
  • Add rules to the virtual robots.txt that WordPress generates when no physical file exists.

After saving changes to robots.txt, use the robots.txt tester in Search Console to ensure your rules are working as expected.

Other Causes of Blocked Resources {#other-causes}

If your robots.txt file looks okay but you're still seeing "Googlebot cannot access CSS and JS" warnings, there are a few other potential reasons.

.htaccess Rules

.htaccess is a server configuration file that can control access to your site's files. It's possible it has rules preventing crawling of CSS/JS.

Look for lines like:

RewriteRule ^wp-content/.*\.(css|js)$ - [F,L]

This rule tells Apache to return a 403 Forbidden response ([F]) for any CSS or JS file requested from the wp-content directory, and to stop processing further rewrite rules ([L]). Delete this rule to allow access.
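
To see which requests a rule like that would reject, you can test the same regular expression in Python. Apache's mod_rewrite uses PCRE, but this particular pattern behaves identically under Python's re module (note that mod_rewrite matches the URL path with the leading slash stripped, so the pattern starts at "wp-content/"):

```python
import re

# The same pattern used in the RewriteRule above
pattern = re.compile(r"^wp-content/.*\.(css|js)$")

print(bool(pattern.match("wp-content/themes/mytheme/style.css")))  # True: request blocked
print(bool(pattern.match("wp-content/uploads/photo.jpg")))         # False: request served
```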

Robots Meta Tags

Pages can use a robots meta tag to control how search engines treat that specific page. For example:

<meta name="robots" content="noindex, nofollow">

This tag tells crawlers not to index the page or follow any of its links. Note that a meta tag only affects the HTML page it appears on; for standalone CSS and JS files, the equivalent control is the X-Robots-Tag HTTP response header, which server configuration or plugins can set accidentally.

Check key pages' source code for any restrictive robots meta tags and remove them. The URL Inspection tool is handy for checking how Googlebot sees meta tags.
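
If you would rather check templates in bulk instead of viewing source page by page, a small script can flag restrictive robots meta tags. Here is a minimal sketch using only Python's standard library (the class name and sample markup are illustrative):

```python
from html.parser import HTMLParser

class RobotsMetaScanner(HTMLParser):
    """Collect the content of every <meta name="robots"> tag in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", ""))

html_source = '<html><head><meta name="robots" content="noindex, nofollow"></head><body></body></html>'
scanner = RobotsMetaScanner()
scanner.feed(html_source)
print(scanner.directives)  # ['noindex, nofollow']
```

Feed it the HTML of each key page (e.g. fetched with your HTTP client of choice) and investigate any page where the list contains noindex or nofollow.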

CDNs or Edge Networks

If you use a content delivery network (CDN) or edge network like Cloudflare to speed up your site, it may be blocking access to resources.

Some CDNs require you to explicitly allow crawling of files or directories that are okay for search engines to access. Check your CDN‘s support documentation or ask their support team for help.

WordPress Plugins

Occasionally, WordPress security or performance plugins can unintentionally block access to CSS and JS files. Some plugins to check:

  • Wordfence Security
  • All in One WP Security & Firewall
  • iThemes Security
  • W3 Total Cache
  • WP Super Cache

Look through plugin settings for any options related to blocking crawlers or JS/CSS minification and concatenation. If you're unsure whether a plugin is the cause, you can temporarily deactivate plugins one by one and re-test to isolate the issue.

Risks of Blocking CSS and JS {#risks}

Blocking Googlebot from CSS and JavaScript files on your WordPress site can seriously hurt your SEO performance.

Some of the risks of letting this issue go unfixed:

  • Slower indexing of new content – If Googlebot has to constantly deal with inaccessible resources, it won't crawl your site as quickly or deeply. New pages may take much longer to show up in search results.
  • Lower rankings for mobile and UX – Google has said that sites with blocked CSS/JS "may not be able to take advantage of mobile-friendly ranking because our algorithms wouldn't be able to see that their pages are mobile-friendly." Even for desktop results, pages that aren't fully accessible may be seen as having a poorer experience and be ranked lower.
  • Decreased crawl budget – Googlebot has a limited "crawl budget" for each site – time and resources allotted to crawling your URLs. If this budget is wasted on pages with blocked resources, fewer pages overall will be indexed.

One study by Bartosz Góralewicz found that sites with poor Googlebot access had a 70% lower crawl budget than those with proper access. This translated to indexing delays of up to 2-3 weeks for some sites.

Preventing Future Blocking {#prevent-blocking}

Once you've fixed any current CSS/JS accessibility issues, you'll want to avoid new ones cropping up.

Some tips:

  • Keep WordPress core updated. Each new release tends to add hardening measures that can better protect resource files without fully blocking crawlers.
  • Carefully configure security plugins. Check settings related to firewalls, blocklists, and crawler access. A little protection is good – too aggressive is dangerous.
  • Test site changes with the URL Inspection tool. Any time you push major front-end updates, quickly check a few key pages to ensure Googlebot still sees what it needs to.
  • Consider a robots.txt plugin. If you make manual changes directly to robots.txt, it's easy to make a mistake. Plugins add guardrails and make it harder to accidentally disallow important files/directories.
  • Stay on top of Search Console‘s Coverage report. Look for increases in excluded pages or new warnings related to blocked resources. The earlier you catch issues, the easier they are to resolve.
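
Several of these checks can be partly automated. As a rough illustration (a heuristic sketch, not a full robots.txt parser), a script like this could run after each deployment and warn when a Disallow rule is likely to block CSS or JS:

```python
def lint_robots(text):
    """Warn about Disallow rules that commonly block CSS and JS files.
    A heuristic check covering typical WordPress paths, not a full parser."""
    risky_paths = {"/", "/wp-content/", "/wp-includes/",
                   "/wp-content/themes/", "/wp-content/plugins/"}
    warnings = []
    for lineno, raw in enumerate(text.splitlines(), 1):
        line = raw.split("#", 1)[0].strip()  # drop comments
        if not line.lower().startswith("disallow:"):
            continue
        path = line.split(":", 1)[1].strip()
        if path in risky_paths or path.endswith((".css", ".js")):
            warnings.append(f"line {lineno}: 'Disallow: {path}' may block CSS/JS")
    return warnings

sample = """User-agent: *
Disallow: /wp-admin/
Disallow: /wp-content/plugins/
"""
for warning in lint_robots(sample):
    print(warning)  # flags only the /wp-content/plugins/ rule
```

In practice you would fetch your live robots.txt and pipe its text through a check like this from CI or a cron job, so a risky rule is caught before Search Console reports it.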

Frequently Asked Questions {#faq}

Q: How long does it take Google to remove "Googlebot cannot access CSS and JS" warnings after a fix?
A: It depends on how frequently the affected pages are crawled. High-traffic pages may be re-crawled and warnings removed within a day or two. For less frequently crawled pages, it could take a couple of weeks.

Q: Will fixing blocked CSS/JS lead to better rankings?
A: Properly accessible CSS and JavaScript is important for rankings, but usually not sufficient on its own. Fixing blocked resources removes a barrier to good rankings, but the pages still need quality content, links, and other positive signals. That said, most SEOs see organic traffic improvements within a month or two of making important technical fixes.

Q: Could blocking Googlebot from CSS/JS cause de-indexing of pages?
A: It's possible but uncommon. If Googlebot encounters enough significant errors accessing page resources, it may temporarily remove the page from its index until the issues are resolved. This is usually limited to an extreme case like a misconfigured robots.txt disallowing crawling of an entire directory of resources.

Q: If my WordPress theme/plugins load scripts from external servers, do I need to worry about those being blocked?
A: Yes, Googlebot needs to be able to access scripts loaded from third-party servers as well. You may need to check with the theme/plugin developer or CDN provider to ensure their servers allow crawling.

Wrapping Up

"Googlebot cannot access CSS and JS files" may seem like an intimidating technical issue, but by methodically checking robots.txt, .htaccess, plugins and more, you can usually track down the source of the problem.

The stakes for SEO are too high to ignore these kinds of warnings – your page experience and organic visibility depend on all important resources being accessible.

If you get stuck, don't be afraid to reach out to an SEO professional or developer for help. Googlers like John Mueller are also quite responsive to questions on this topic – you can find him on Twitter @JohnMu.

Have you dealt with Googlebot being blocked from your site's CSS or JavaScript? Where was the culprit hiding? Let me know on Twitter (@yourtwitterhandle) – I'd love to hear your experience.
