The priority order (what matters first)
Technical SEO is easier when you follow a strict order. Fixing schema won’t help if your page is blocked, noindexed, or canonicalized away.
- Crawlability (robots, auth, errors) + Indexability (noindex, canonical, duplicates)
- Correct URLs (canonical consistency, redirects, trailing slash rules)
- Discovery (sitemap + internal links)
- Signals & enhancements (structured data, performance, content)
1) Crawlability: can bots fetch your pages?
Crawling means a bot can request your URL and get a useful response. Common blockers:
- robots.txt blocks important routes
- Authentication walls (login required, IP restrictions)
- Server errors (5xx), timeouts, or broken rendering
- Soft 404 pages (a 200 page that looks like an error)
Minimum crawlability rules
- Important pages should return 200 OK.
- robots.txt should not block pages you want indexed.
- Avoid requiring JS for critical text content (SSR/SSG helps).
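The robots.txt rule above can be checked offline with Python's built-in parser. This is a minimal sketch; the robots.txt content and URLs are hypothetical examples, not rules from any real site.

```python
# Sketch: verify robots.txt rules offline with Python's stdlib parser.
# The robots.txt body and the URLs below are hypothetical examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /api/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Pages you want indexed must be fetchable by the crawler:
print(parser.can_fetch("Googlebot", "https://example.com/posts/my-post"))  # True
print(parser.can_fetch("Googlebot", "https://example.com/admin/login"))    # False
```

Running this against your real robots.txt before deploying catches the classic mistake of a broad Disallow rule swallowing pages you care about.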
2) Indexability: can pages appear in search?
Indexing answers the question: can Google include this page in its results? The most common reasons pages fail to index:
- <meta name="robots" content="noindex"> on the page (or via HTTP header).
- Canonical points elsewhere (Google may index the canonical target, not this URL).
- Duplicate or thin pages that don’t add value (Google may ignore them).
- Wrong status codes (404/410) or redirect loops.
Quick indexability checks
- Open the page source → confirm there is no noindex.
- Confirm the canonical URL matches your preferred URL.
- Use Search Console URL Inspection → see “Crawled” and “Indexing allowed”.
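The first two checks can be automated with a small stdlib-only scanner. A sketch, assuming the page's HTML is already in hand; the sample HTML below is a hypothetical example.

```python
# Sketch: scan a page's <head> for a robots noindex directive and the
# canonical URL, using only the stdlib. The sample HTML is hypothetical.
from html.parser import HTMLParser

class HeadAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.noindex = "noindex" in a.get("content", "").lower()
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

html = """<head>
<meta name="robots" content="index, follow">
<link rel="canonical" href="https://blog.olamisan.com/posts/technical-seo-basics">
</head>"""

audit = HeadAudit()
audit.feed(html)
print(audit.noindex)    # False
print(audit.canonical)  # https://blog.olamisan.com/posts/technical-seo-basics
```

Note that noindex can also arrive via the X-Robots-Tag HTTP header, which an HTML scan won't see — check response headers separately.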
3) Canonicals: tell Google the “main” URL
Canonicals prevent duplicates (http vs https, www vs non-www, trailing slash, query params). A wrong canonical is one of the fastest ways to “disappear” from search.
Canonical best practices
- Use absolute URLs in canonical tags.
- Every indexable page should have exactly one canonical.
- Canonical should point to the preferred version (https, correct host, correct slash).
- Don’t canonicalize everything to the homepage (common mistake).
Example
A post page declaring its own preferred URL:
<link rel="canonical" href="https://blog.olamisan.com/posts/technical-seo-basics" />
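Those normalization rules (https, preferred host, trailing slash) can be encoded as one helper so every template emits the same canonical. A sketch: the preferred host and the "no trailing slash" rule are assumptions — adapt them to your own site's conventions.

```python
# Sketch: normalize any URL variant to one preferred canonical form.
# PREFERRED_HOST and the "strip trailing slash" rule are assumptions.
from urllib.parse import urlsplit, urlunsplit

PREFERRED_HOST = "blog.olamisan.com"  # hypothetical preferred host

def canonical_url(url: str) -> str:
    parts = urlsplit(url)
    host = parts.netloc.removeprefix("www.")  # collapse www vs non-www
    path = parts.path.rstrip("/") or "/"      # one trailing-slash rule
    # Drop query strings and fragments; keep params only if they change content.
    return urlunsplit(("https", host, path, "", ""))

print(canonical_url("http://www.blog.olamisan.com/posts/technical-seo-basics/"))
# https://blog.olamisan.com/posts/technical-seo-basics
```

Using a single function like this in both the canonical tag and the sitemap generator is what keeps the two from drifting apart.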
4) Sitemap: help discovery and clarity
A sitemap doesn’t “force” indexing, but it helps search engines discover and prioritize your URLs. It also acts like a contract: “these are my canonical pages”.
Sitemap rules that keep it clean
- Include only canonical, indexable URLs.
- Exclude parameter duplicates and filtered pages unless you intentionally want them indexed.
- Keep it updated when you add/remove posts.
- If you have many URLs, split into multiple sitemaps and use a sitemap index.
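A clean sitemap is simple enough to generate from your canonical URL list. A minimal sketch with the stdlib XML module; the page list is a hypothetical example, and a real file also needs the leading <?xml ...?> declaration.

```python
# Sketch: build a minimal sitemap body containing only canonical,
# indexable URLs. The page list below is a hypothetical example.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    # A real sitemap.xml file should be prefixed with an XML declaration.
    return ET.tostring(urlset, encoding="unicode")

pages = [
    ("https://blog.olamisan.com/posts/technical-seo-basics", "2024-05-01"),
]
print(build_sitemap(pages))
```

Feeding this function only URLs that pass your canonical/indexable checks enforces the "sitemap as contract" idea mechanically.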
5) Internal linking: make important pages easy to find
Internal links are one of the highest-impact SEO levers you control. They help discovery and signal which pages matter.
Internal linking checklist
- Every important page should be reachable within 3 clicks from the homepage.
- Create topic hub pages that link to related posts (and posts link back).
- Use descriptive anchor text (avoid “click here”).
- Add “Related posts” sections inside articles where it makes sense.
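The 3-click rule is easy to audit with a breadth-first search over your internal-link graph. A sketch, assuming you can export the graph as a page-to-links mapping; the graph below is a hypothetical site.

```python
# Sketch: measure click depth from the homepage over an internal-link
# graph with breadth-first search. The graph is a hypothetical example.
from collections import deque

links = {  # page -> pages it links to (hypothetical site)
    "/": ["/blog", "/about"],
    "/blog": ["/posts/a", "/posts/b"],
    "/posts/a": ["/posts/b"],
    "/posts/b": [],
    "/about": [],
    "/orphan": [],  # no inbound links: unreachable from the homepage
}

def click_depths(start="/"):
    depth = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

depths = click_depths()
print(depths["/posts/a"])   # 2 clicks from the homepage
print("/orphan" in depths)  # False -> orphan page, needs internal links
```

Pages deeper than 3, or missing from the result entirely (orphans), are the ones to link from hubs and related-posts sections.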
6) Status codes & redirects: avoid wasting crawl budget
Clean redirects improve crawl efficiency and reduce confusion. Prefer single-hop redirects and consistent URL rules.
Common status mistakes
- Multiple redirect hops (A → B → C)
- Redirect loops
- Returning 200 for “not found” pages (soft 404)
- Redirecting deleted pages to the homepage instead of returning 404/410 (context matters)
Practical redirect rules
- Use 301 for permanent redirects.
- Normalize to one version: https + preferred host + preferred trailing slash rule.
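Chains and loops are easy to detect if you can export your redirect rules as a source-to-target map. A sketch; the redirect map below is a hypothetical example.

```python
# Sketch: follow a redirect map to flag chains (more than one hop)
# and loops. The redirect map below is a hypothetical example.
def resolve(url, redirects, max_hops=10):
    hops, seen = 0, {url}
    while url in redirects:
        url = redirects[url]
        hops += 1
        if url in seen or hops > max_hops:
            return url, hops, True  # loop (or runaway chain) detected
        seen.add(url)
    return url, hops, False

redirects = {
    "http://example.com/a": "https://example.com/a",   # protocol hop
    "https://example.com/a": "https://example.com/b",  # second hop: a chain
}

final, hops, loop = resolve("http://example.com/a", redirects)
print(final, hops, loop)  # https://example.com/b 2 False -> collapse to 1 hop
```

Any source that resolves in more than one hop should be rewritten to point straight at the final URL.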
7) Structured data (schema): qualify for rich results
Schema helps search engines understand your content type. It can also make your pages eligible for enhanced results. Start simple: WebSite, BlogPosting, and FAQ (when you actually have FAQ content on the page).
Good starter schemas
- BlogPosting for articles (headline, description, dates, author).
- BreadcrumbList for navigation context.
- FAQPage for real FAQs (don’t spam).
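A BlogPosting block is mostly a matter of filling in schema.org property names. A sketch that emits the JSON-LD; the field values (author name, dates, description) are hypothetical placeholders, while the property names come from the schema.org BlogPosting type.

```python
# Sketch: emit BlogPosting JSON-LD for an article page. Values are
# hypothetical; property names follow the schema.org BlogPosting type.
import json

def blog_posting_jsonld(headline, description, url, author, published, modified):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "BlogPosting",
        "headline": headline,
        "description": description,
        "url": url,
        "author": {"@type": "Person", "name": author},
        "datePublished": published,
        "dateModified": modified,
    }, indent=2)

script = blog_posting_jsonld(
    "Technical SEO basics",
    "A priority-ordered checklist for technical SEO.",
    "https://blog.olamisan.com/posts/technical-seo-basics",
    "Olamisan",      # hypothetical author name
    "2024-05-01",    # hypothetical dates
    "2024-05-10",
)
print(script)
# The output belongs inside a <script type="application/ld+json"> tag
# in the page's <head>.
```

Validate the rendered page with a rich-results testing tool before shipping, since eligibility checks are stricter than JSON validity.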
Quick checklist (before you ship)
- ✅ Page returns 200 OK and loads reliably
- ✅ No accidental noindex (meta or header)
- ✅ Canonical points to the correct preferred URL
- ✅ robots.txt does not block important routes
- ✅ Sitemap includes canonical/indexable URLs only
- ✅ Internal links exist to the page from relevant hubs/posts
- ✅ One clean redirect rule (no chains/loops)
- ✅ Basic schema added (BlogPosting + Breadcrumb; FAQ only if present)
FAQ
What should I fix first in technical SEO?
Indexability and crawlability: unblock crawling, remove accidental noindex, and fix wrong canonicals and status codes.
Do I need a sitemap if I have internal links?
Yes. A sitemap is a clean list of canonical URLs and helps discovery and maintenance.
robots.txt vs noindex — which one should I use?
robots.txt controls crawling; noindex controls indexing. They interact: a page blocked by robots.txt can still be indexed from external links (with no content shown), and Google cannot see a noindex directive on a page it isn't allowed to crawl. So to keep a page out of search, allow crawling and use noindex; use robots.txt only to manage what gets fetched.