Free Noindex Checker
Check if a page can be crawled and indexed by Google. Detects meta noindex, X-Robots-Tag, robots.txt blocks, and more.
How the noindex checker works
Three steps. Five seconds. Zero signup.
Enter a URL to check
Paste the URL of any page you want to check for indexability. Our noindex checker fetches the page and inspects every signal that can block search engine indexation.
We check 7 indexability signals
Meta robots noindex, X-Robots-Tag header, robots.txt, canonical tag, HTTP status, meta refresh redirect, and Googlebot-specific directives. Each one can silently prevent your page from appearing in Google.
Get your indexability verdict
See a clear pass/fail for each check. If your page is blocked, you will know exactly which signal is causing the problem and how to fix it.
Everything to know about noindex and page indexability
A page can be invisible to Google for seven different reasons. Most site owners only check one or two. Here is the full picture.
The 7 signals that block indexation
- 01
Meta robots noindex
The most common way to block indexation. A meta robots tag with a noindex directive tells search engines not to include the page in results. Often added accidentally during development or by CMS plugins.
- 02
X-Robots-Tag HTTP header
Works the same as meta robots but set at the server level. Useful for non-HTML files like PDFs, but can accidentally block entire sections of a site when misconfigured in the web server.
- 03
Robots.txt disallow
Robots.txt tells crawlers not to access certain URLs. If Googlebot cannot crawl a page, it cannot index it. A common mistake is blocking CSS or JavaScript files that Google needs to render the page.
- 04
Canonical tag mismatch
A canonical tag pointing to a different URL tells Google that the current page is a duplicate. Google may choose to index only the canonical version and drop yours from results.
- 05
HTTP status errors
Pages returning 4xx client errors or 5xx server errors cannot be indexed, and a 3xx redirect such as a 301 sends Google to the target URL instead of the original page. A 200 status is required for a page to be indexable.
- 06
Meta refresh redirect
A meta refresh tag in the HTML head acts as a redirect. Google may not index the original page if it detects a meta refresh, treating the destination as the canonical version.
- 07
Googlebot-specific directives
A meta googlebot tag can set different rules than meta robots. Some sites block Googlebot specifically while allowing other search engines, often unintentionally.
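The seven checks above can be sketched in a few lines of Python. This is a minimal illustration run against a page that has already been fetched, not Robot Speed's actual implementation: `check_indexability` and its signature are invented for the example, the regexes assume the common attribute order (`name` before `content`, `rel` before `href`), and the canonical check is a naive exact-string comparison.

```python
import re

# Illustrative patterns for the three HTML-level signals. These assume
# well-formed tags with name/rel attributes appearing before content/href.
META_RE = re.compile(
    r'<meta[^>]+name=["\'](robots|googlebot)["\'][^>]+content=["\']([^"\']*)["\']',
    re.IGNORECASE)
CANONICAL_RE = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']*)["\']',
    re.IGNORECASE)
REFRESH_RE = re.compile(r'<meta[^>]+http-equiv=["\']refresh["\']', re.IGNORECASE)

def check_indexability(url, status, headers, html, robots_disallowed=False):
    """Map each of the seven signals to True (pass) or False (blocks indexing)."""
    metas = {name.lower(): content.lower()
             for name, content in META_RE.findall(html)}
    canonical = CANONICAL_RE.search(html)
    return {
        "meta_robots": "noindex" not in metas.get("robots", ""),
        "x_robots_tag": "noindex" not in headers.get("X-Robots-Tag", "").lower(),
        "robots_txt": not robots_disallowed,
        # Naive comparison: a canonical pointing anywhere but `url` is a fail.
        "canonical": canonical is None or canonical.group(1) == url,
        "http_status": status == 200,
        "meta_refresh": REFRESH_RE.search(html) is None,
        "googlebot": "noindex" not in metas.get("googlebot", ""),
    }
```

A page containing `<meta name="robots" content="noindex, follow">` would fail only the `meta_robots` check while passing the other six, which is exactly the kind of single silent blocker the verdict is meant to surface.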
Frequently asked questions about noindex
Common questions about page indexability, noindex tags, and how to fix indexation issues.
What does noindex mean?
Noindex is a directive that tells search engines not to include a page in their search results. It can be set with a meta robots tag in the HTML or an X-Robots-Tag HTTP header; Google does not support noindex rules in robots.txt.
What is the difference between noindex and robots.txt?
A meta noindex tag tells search engines not to index a page they can still crawl. Robots.txt prevents crawling entirely, which means Google never sees the page's content or its noindex tag; a robots.txt-blocked URL can still appear in results if other sites link to it. To reliably keep a page out of search results, use noindex and leave the page crawlable.
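The crawl-versus-index distinction can be seen with Python's standard-library robots.txt parser. The rules below are a hypothetical file for example.com, used only to show which URLs a crawler may fetch; whether a crawlable URL is then indexed depends on the page's own noindex signals.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks crawling of /private/, nothing else.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot may not fetch this URL at all, so a noindex tag on it is never seen.
print(rp.can_fetch("Googlebot", "https://example.com/private/report"))  # False

# This URL is crawlable; indexing then depends on meta robots / X-Robots-Tag.
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))  # True
```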
Can a canonical tag cause deindexation?
Yes. A canonical tag pointing to a different URL signals to Google that the current page is a duplicate, and Google may index only the canonical version, dropping the non-canonical page from results.
What is the X-Robots-Tag header?
X-Robots-Tag is an HTTP response header that can contain the same directives as a meta robots tag (noindex, nofollow, etc.). It is useful for non-HTML files like PDFs.
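Because the header is set at the server level, a PDF can be kept out of search results without touching the file itself. As a hypothetical nginx fragment (the location pattern is an assumption about your server layout, not a recommended universal rule):

```nginx
# Hypothetical nginx config: send "X-Robots-Tag: noindex" for every PDF,
# keeping the files downloadable but out of search results.
location ~* \.pdf$ {
    add_header X-Robots-Tag "noindex, nofollow";
}
```

A pattern broader than intended here is exactly how the "accidentally block entire sections of a site" misconfiguration happens.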
How often should I check for noindex issues?
Check after any site migration, CMS update, or when you notice pages disappearing from search results. Regular monthly audits are recommended.
Stop checking manually
Robot Speed monitors your entire site for indexability issues, broken links, and SEO problems. Every week. Automatically.
Start free trial