The "Crawled - currently not indexed" status in Google Search Console means that Googlebot successfully crawled your page, but Google has not (yet) added it to its index, so it won't appear in search results. This can happen for several reasons. Here are some possible fixes for the issue:
1. Check for Content Quality Issues
- Thin Content: If the page has very little content, Google may decide not to index it. Ensure that your page has high-quality, unique, and valuable content that addresses user intent.
- Duplicate Content: Google may not index pages with duplicate content. Ensure the content is original and not copied from other pages or sites.
- Low User Engagement: Pages with low engagement (low traffic, high bounce rate) may be less likely to get indexed.
2. Review the Noindex Tag
Meta Tags: Ensure your page doesn't have a `noindex` meta tag that tells Google not to index it.
- To check this, view the page source and look for `<meta name="robots" content="noindex">`.
- If it's present, either remove the tag or change `noindex` to `index`.
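If you want to check this programmatically across many pages, a small sketch using only Python's standard library can scan a page's HTML for a robots meta directive (fetch the HTML however you like, e.g. with `urllib`; only the parsing is shown here):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

def has_noindex(html):
    """True if the page's robots meta tags contain a noindex directive."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(has_noindex(page))  # True
print(has_noindex('<html><head><title>ok</title></head></html>'))  # False
```

This only catches the meta tag; a noindex can also be sent as an `X-Robots-Tag` HTTP header, so check response headers too.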
Robots.txt: Ensure your robots.txt file is not blocking the page from being crawled.
- Check the robots.txt report in Google Search Console (Settings > robots.txt, which replaced the old robots.txt Tester) to confirm the file is fetchable and that no Disallow rule covers the page's path.
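As an illustration, a permissive robots.txt that lets all crawlers reach your content while blocking a hypothetical `/admin/` area might look like this:

```txt
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```

Note that robots.txt controls crawling, not indexing: a page blocked here can still be indexed from links elsewhere, just without its content.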
3. Fix URL Structure
- Avoid URL Parameters: Google may ignore URLs with excessive or unnecessary parameters (e.g., `/page/?id=123`). If you have multiple versions of the same content, use a canonical tag (a `<link rel="canonical" href="...">` element in the `<head>`) to specify the preferred version.
- Redirect Issues: If your page redirects (301 or 302) to another URL, the redirecting URL itself may not get indexed. Check whether your page redirects, and ensure the chain doesn't end in a redirect loop.
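A redirect loop is simply a chain of hops that revisits a URL it has already passed through. As a sketch, given the ordered list of URLs you observe while following 301/302 responses, detecting a loop is a small set-membership check:

```python
def find_redirect_loop(hops):
    """Return the first URL that repeats in a redirect chain, or None.

    `hops` is the ordered list of URLs visited while following
    301/302 responses, starting with the original URL.
    """
    seen = set()
    for url in hops:
        if url in seen:
            return url  # the chain revisits this URL: a redirect loop
        seen.add(url)
    return None

# A healthy chain ends at a final URL with no repeats:
print(find_redirect_loop([
    "http://example.com/page",
    "https://example.com/page",
    "https://www.example.com/page/",
]))  # None

# A chain that bounces back on itself is flagged:
print(find_redirect_loop([
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/a",
]))  # https://example.com/a
```

In practice, browsers and crawlers also give up after a fixed number of hops, so keep chains short even when they don't loop.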
4. Check for Crawl Errors
- Go to Google Search Console > Indexing > Pages (the report formerly called Coverage) to see if there are any crawl errors or issues with the page.
- If errors like Soft 404 or Server Errors appear, they may prevent indexing.
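A Soft 404 is a page that returns HTTP 200 OK but whose content reads like an error page. Google classifies these with its own systems, but a rough heuristic check for your own audits might look like this (the phrase list is an illustrative assumption, not Google's):

```python
# Phrases that often signal an error page; extend for your own site's wording.
NOT_FOUND_PHRASES = ("page not found", "404", "no longer available", "does not exist")

def looks_like_soft_404(status_code, body_text):
    """Flag pages that return 200 OK but read like an error page."""
    if status_code != 200:
        return False  # a real 404/410 is an honest error, not a soft 404
    text = body_text.lower()
    return any(phrase in text for phrase in NOT_FOUND_PHRASES)

print(looks_like_soft_404(200, "<h1>Page not found</h1>"))       # True
print(looks_like_soft_404(404, "<h1>Page not found</h1>"))       # False
print(looks_like_soft_404(200, "<h1>Welcome to our shop</h1>"))  # False
```

The fix for a genuine soft 404 is to return a real 404/410 status, or to give the page substantive content if it should exist.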
5. Use the URL Inspection Tool
- Use the URL Inspection Tool in Google Search Console to check if there are any issues preventing indexing.
- If Googlebot has crawled the page successfully, click on Request Indexing to ask Google to recheck and index the page.
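Inspection (though not the "Request Indexing" action, which is manual in the UI) can also be automated through the Search Console URL Inspection API. As a sketch, the request body looks roughly like this; the endpoint and field names are as publicly documented at the time of writing, so verify them before relying on this, and note the real call also needs an OAuth 2.0 bearer token:

```python
import json

# Search Console URL Inspection API endpoint (verify against current docs).
ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

def build_inspection_request(page_url, property_url):
    """Build the JSON body asking for the index status of `page_url`.

    `property_url` is the Search Console property the page belongs to.
    Authentication (an OAuth token with Search Console scope) is omitted here.
    """
    return json.dumps({
        "inspectionUrl": page_url,
        "siteUrl": property_url,
    })

body = build_inspection_request(
    "https://www.example.com/page/",
    "https://www.example.com/",
)
print(body)
```

The response includes an index-status verdict and the reasons Google gives for the current state, which maps directly onto what the UI shows.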
6. Internal Linking
- Ensure the page is linked from other pages on your website. Googlebot relies on internal links to discover and index pages.
- Link to the page from high-authority pages within your website, such as the homepage or your most-visited posts.
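To verify a page isn't orphaned, you can extract the internal links from a hub page and check that your target URL is among them. A minimal stdlib-only sketch (fetching the HTML is left to you):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

def internal_links(html, base_url):
    """Return absolute same-host links found in `html`."""
    collector = LinkCollector()
    collector.feed(html)
    host = urlparse(base_url).netloc
    absolute = (urljoin(base_url, h) for h in collector.hrefs)
    return [u for u in absolute if urlparse(u).netloc == host]

page = '<a href="/blog/post-1/">Post</a> <a href="https://other.site/">Out</a>'
print(internal_links(page, "https://www.example.com/"))
# ['https://www.example.com/blog/post-1/']
```

Run this over your key pages and flag any important URL that never appears in the results: that page is likely hard for Googlebot to discover.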
7. Speed and Mobile-Friendly Issues
- Google favors pages that load quickly and are mobile-friendly.
- Use Google PageSpeed Insights to test your page’s speed and improve it.
- Ensure your page offers a good mobile experience. Google has retired its standalone Mobile-Friendly Test tool; Lighthouse (in Chrome DevTools or within PageSpeed Insights) now covers mobile usability checks.
8. Address Google’s Algorithm and Guidelines
- Google’s Algorithms: Sometimes, Google may choose not to index a page because it doesn’t meet certain quality standards (e.g., it’s too spammy, contains too many ads, or has poor user experience).
- Make sure your page complies with Google's Search Essentials (formerly the Webmaster Guidelines).
9. Request Re-Crawling
- Once you've fixed the issues, use the URL Inspection Tool in Google Search Console to request Google to crawl and index the page again.
10. Patience
- Sometimes, Google takes a while to index pages. Even after crawling, it may take time to assess whether the page is worthy of being indexed.
- Wait for a few weeks and monitor in Google Search Console.
Common Issues That Can Lead to "Crawled - Not Indexed":
- Low-quality or spammy content.
- Noindex tags or robots.txt blocks.
- Excessive use of URL parameters or redirects.
- Duplicate content or similar content across pages.
- Crawl errors or performance issues.
Next Steps:
After addressing the issues, monitor your page’s status in Google Search Console. If everything is fixed, the page should eventually be indexed. If the issue persists, consider requesting re-crawling or reviewing your content strategy.
Would you like more help with analyzing specific pages or technical fixes for your website?