Technical SEO Basics for Developers: What to Fix First (Checklist)

If you’re not getting traffic, start here. This guide covers the technical SEO basics that decide whether your pages can be crawled, indexed, and ranked. It’s written for developers and includes a prioritized checklist you can run before shipping.

By Olamisan · ~10 min read · SEO

The priority order (what matters first)

Technical SEO is easier when you follow a strict order. Fixing schema won’t help if your page is blocked, noindexed, or canonicalized away.

  1. Indexability (noindex, canonical, duplicates) + Crawlability (robots, auth, errors)
  2. Correct URLs (canonical consistency, redirects, trailing slash rules)
  3. Discovery (sitemap + internal links)
  4. Signals & enhancements (structured data, performance, content)

Tip: if a page is not indexed, the fastest diagnostic is usually the URL Inspection tool in Google Search Console.

1) Crawlability: can bots fetch your pages?

Crawling means a bot can request your URL and get a useful response. Common blockers:

  • robots.txt blocks important routes
  • Authentication walls (login required, IP restrictions)
  • Server errors (5xx), timeouts, or broken rendering
  • Soft 404 pages (a 200 page that looks like an error)

Minimum crawlability rules

  • Important pages should return 200 OK.
  • robots.txt should not block pages you want indexed.
  • Avoid requiring JS for critical text content (SSR/SSG helps).
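A quick way to sanity-check the first two rules is a small script that fetches each important URL and reports its status code. This is a minimal sketch assuming Node 18+ (for the global fetch) and an ES module context; the URL list is a placeholder for your own pages.

```ts
// Minimal crawlability check: every important page should answer
// 200 directly, without a redirect hop in between.
// Assumes Node 18+ (global fetch), run as an ES module.
const URLS: string[] = [
  // Placeholder list: swap in your own important pages.
  "https://blog.olamisan.com/posts/technical-seo-basics",
];

for (const url of URLS) {
  const res = await fetch(url, { redirect: "manual" });
  const ok = res.status === 200;
  console.log(`${ok ? "OK  " : "FAIL"} ${res.status} ${url}`);
}
```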

2) Indexability: can pages appear in search?

Indexing answers the question “can Google include this page in its results?”. The most common reasons pages don’t get indexed:

  • <meta name="robots" content="noindex"> on the page (or via HTTP header).
  • Canonical points elsewhere (Google may index the canonical target, not this URL).
  • Duplicate or thin pages that don’t add value (Google may ignore them).
  • Wrong status codes (404/410) or redirect loops.

Quick indexability checks

  1. Open the page source → confirm there is no noindex.
  2. Confirm the canonical URL matches your preferred URL.
  3. Use Search Console URL Inspection → see “Crawled” and “Indexing allowed”.
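Checks 1 and 2 are easy to automate. The sketch below fetches a page and inspects it with regular expressions; that is a shortcut for illustration, and a production check should use a real HTML parser. The preferred URL is passed in as an argument.

```ts
// Checks a page for noindex (meta tag or X-Robots-Tag header) and
// compares its canonical tag against the preferred URL.
// Regex matching is a simplification: attribute order can vary,
// so a real checker should parse the HTML properly.
async function checkIndexability(url: string, preferred: string) {
  const res = await fetch(url);
  const html = await res.text();

  const headerNoindex =
    res.headers.get("x-robots-tag")?.includes("noindex") ?? false;
  const metaNoindex =
    /<meta[^>]+name=["']robots["'][^>]*noindex/i.test(html);
  const canonical =
    html.match(/<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']+)["']/i)?.[1];

  console.log({
    url,
    headerNoindex,
    metaNoindex,
    canonical,
    matchesPreferred: canonical === preferred,
  });
}
```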

3) Canonicals: tell Google the “main” URL

Canonicals prevent duplicates (http vs https, www vs non-www, trailing slash, query params). A wrong canonical is one of the fastest ways to “disappear” from search.

Canonical best practices

  • Use absolute URLs in canonical tags.
  • Every indexable page should have exactly one canonical.
  • Canonical should point to the preferred version (https, correct host, correct slash).
  • Don’t canonicalize everything to the homepage (common mistake).

Example

Preferred URL (canonical):

https://blog.olamisan.com/posts/technical-seo-basics

The http, www, and trailing-slash variants of this page should all declare this exact URL in their canonical tag.
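One way to keep canonicals consistent is to run every URL through a single normalizer before it goes into a canonical tag, sitemap, or internal link. This is a sketch under assumed rules (https, no www, no trailing slash, no query params); your preferred rules may differ, but apply them in exactly one place.

```ts
// One possible canonical normalizer. The specific rules (https,
// strip "www.", drop trailing slash, drop query params) are
// assumptions; pick your own and apply them everywhere.
function canonicalize(input: string): string {
  const url = new URL(input);
  url.protocol = "https:";
  url.hostname = url.hostname.replace(/^www\./, "");
  if (url.pathname.length > 1 && url.pathname.endsWith("/")) {
    url.pathname = url.pathname.slice(0, -1); // keep "/" for the root
  }
  url.search = ""; // tracking params are not part of the canonical URL
  url.hash = "";
  return url.toString();
}

// canonicalize("http://www.blog.olamisan.com/posts/technical-seo-basics/")
// → "https://blog.olamisan.com/posts/technical-seo-basics"
```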

4) Sitemap: help discovery and clarity

A sitemap doesn’t “force” indexing, but it helps search engines discover and prioritize your URLs. It also acts like a contract: “these are my canonical pages”.

Sitemap rules that keep it clean

  • Include only canonical, indexable URLs.
  • Exclude parameter duplicates and filtered pages unless you intentionally want them indexed.
  • Keep it updated when you add/remove posts.
  • If you have many URLs, split into multiple sitemaps and use a sitemap index.
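As a sketch of what “only canonical, indexable URLs” looks like in practice, here is a tiny builder that turns a list of entries into sitemap XML. The entry shape is an assumption for illustration; lastmod is optional but helps crawlers spot fresh content.

```ts
// Tiny sitemap builder: feed it only canonical, indexable URLs.
interface SitemapEntry {
  loc: string;
  lastmod?: string; // ISO date, e.g. "2024-05-01"
}

function buildSitemap(entries: SitemapEntry[]): string {
  const urls = entries
    .map((e) =>
      [
        "  <url>",
        `    <loc>${e.loc}</loc>`,
        e.lastmod ? `    <lastmod>${e.lastmod}</lastmod>` : "",
        "  </url>",
      ]
        .filter(Boolean)
        .join("\n")
    )
    .join("\n");

  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>`;
}
```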

5) Status codes & redirects: avoid wasting crawl budget

Clean redirects improve crawl efficiency and reduce confusion. Prefer single-hop redirects and consistent URL rules; a chain-checking sketch follows the lists below.

Common status mistakes

  • Multiple redirect hops (A → B → C)
  • Redirect loops
  • Returning 200 for “not found” pages (soft 404)
  • Redirecting deleted pages to the homepage instead of returning 404/410 (context matters)

Practical redirect rules

  • Use 301 for permanent redirects.
  • Normalize to one version: https + preferred host + preferred trailing slash rule.
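To catch chains and loops before a crawler does, you can follow redirects by hand and record each hop. A minimal sketch, again assuming Node 18+ fetch:

```ts
// Follows redirects manually and reports the full chain, so you can
// spot multi-hop chains (A → B → C) and loops.
async function redirectChain(url: string, maxHops = 10): Promise<string[]> {
  const chain = [url];
  let current = url;
  for (let i = 0; i < maxHops; i++) {
    const res = await fetch(current, { redirect: "manual" });
    if (res.status < 300 || res.status >= 400) break; // final response
    const location = res.headers.get("location");
    if (!location) break;
    current = new URL(location, current).toString(); // resolve relative Location
    if (chain.includes(current)) {
      chain.push(current);
      console.warn(`Redirect loop detected at ${current}`);
      break;
    }
    chain.push(current);
  }
  return chain;
}
```

A result longer than two entries means more than one hop; tighten the rule so A redirects straight to its final URL.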

6) Structured data (schema): qualify for rich results

Schema helps search engines understand your content type. It can also make your pages eligible for enhanced results. Start simple: WebSite, BlogPosting, and FAQ (when you actually have FAQ content on the page).

Good starter schemas

  • BlogPosting for articles (headline, description, dates, author).
  • BreadcrumbList for navigation context.
  • FAQPage for real FAQs (don’t spam).
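Structured data is usually emitted as JSON-LD inside a <script type="application/ld+json"> tag. The sketch below builds a BlogPosting object for this post; every field value is a placeholder to swap for your real metadata.

```ts
// BlogPosting JSON-LD as a plain object, then serialized into the
// script tag your page template should render in <head> or <body>.
// All field values are placeholders for illustration.
const blogPosting = {
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  headline: "Technical SEO Basics for Developers",
  description: "A prioritized technical SEO checklist for developers.",
  datePublished: "2024-01-01", // placeholder date
  dateModified: "2024-01-01",  // placeholder date
  author: { "@type": "Person", name: "Olamisan" },
  mainEntityOfPage: "https://blog.olamisan.com/posts/technical-seo-basics",
};

const jsonLdTag =
  `<script type="application/ld+json">${JSON.stringify(blogPosting)}</script>`;
```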

Quick checklist (before you ship)

  • ✅ Page returns 200 OK and loads reliably
  • ✅ No accidental noindex (meta or header)
  • ✅ Canonical points to the correct preferred URL
  • ✅ robots.txt does not block important routes
  • ✅ Sitemap includes canonical/indexable URLs only
  • ✅ Internal links exist to the page from relevant hubs/posts
  • ✅ One clean redirect rule (no chains/loops)
  • ✅ Basic schema added (BlogPosting + Breadcrumb; FAQ only if present)

Next steps: SEO Foundations guide · Technical SEO topic hub

FAQ

What should I fix first in technical SEO?

Indexability and crawlability: unblock crawling, remove accidental noindex, and fix wrong canonicals and status codes.

Do I need a sitemap if I have internal links?

Yes. A sitemap is a clean list of canonical URLs and helps discovery and maintenance.

robots.txt vs noindex — which one should I use?

robots.txt controls crawling; noindex controls indexing. They interact: a page blocked in robots.txt can’t be crawled, so Google never sees its noindex tag. To remove a page from search, allow crawling and serve noindex; use robots.txt to keep bots away from routes entirely.
