Next.js makes it easier to build SEO-friendly sites, but it does not remove the need for route discipline. Modern Next.js projects mix static, dynamic, and client-driven experiences, and each layer can introduce indexing or route-integrity problems if the public surface is not clearly defined.
Worked example from the active Next.js frontend
VeriFalcon now uses a Next.js App Router frontend with route-specific metadata and a shared public-content registry for sitemap output.
That combination is a practical Next.js baseline: keep the public route set explicit, keep operational routes out of search, and avoid metadata shortcuts that only work on tiny sites.
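The registry-plus-sitemap pattern can be sketched roughly as follows. The route paths, change frequencies, and the `publicRoutes` / `buildSitemap` names here are illustrative assumptions, not VeriFalcon's actual code; in a real App Router project, `buildSitemap` would back an `app/sitemap.ts` that returns a `MetadataRoute.Sitemap`.

```typescript
// Hypothetical shared registry of truly public routes.
// Operational routes (search, results, auth) are deliberately absent,
// so the sitemap can never drift out of sync with the public surface.
type PublicRoute = {
  path: string;
  changeFrequency: "daily" | "weekly" | "monthly";
};

const publicRoutes: PublicRoute[] = [
  { path: "/", changeFrequency: "daily" },
  { path: "/pricing", changeFrequency: "monthly" },
  { path: "/docs", changeFrequency: "weekly" },
];

// Shape mirrors what an app/sitemap.ts is expected to return.
function buildSitemap(baseUrl: string) {
  const base = baseUrl.replace(/\/$/, ""); // avoid double slashes
  return publicRoutes.map((route) => ({
    url: `${base}${route.path}`,
    changeFrequency: route.changeFrequency,
    lastModified: new Date(),
  }));
}
```

Because every sitemap entry comes from one registry, adding a public page is a one-line change, and a route absent from the registry simply never reaches search engines via the sitemap.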
Next.js SEO fundamentals that still matter
- set metadata per route instead of relying on a generic site-wide fallback
- make sure `NEXT_PUBLIC_SITE_URL` or the equivalent canonical base is correct in production
- generate a sitemap only for truly public pages
- mark search, results, and other operational routes as noindex
- keep redirects and canonical targets consistent between apex and www handling
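A minimal sketch tying the first four points together, assuming a `buildMetadata` helper (hypothetical; real pages would export its result as `metadata` or call it from `generateMetadata`). The `https://example.com` fallback and the route titles are placeholder assumptions:

```typescript
// Canonical base comes from the environment, with a placeholder fallback.
// In production this MUST be the real public origin, or every canonical
// URL the site emits will be wrong.
const SITE_URL = process.env.NEXT_PUBLIC_SITE_URL ?? "https://example.com";

interface RouteMeta {
  title: string;
  description: string;
  alternates: { canonical: string };
  robots?: { index: boolean; follow: boolean };
}

// Per-route metadata: explicit title/description per page, canonical URL
// derived from the shared base, and index: false for operational routes.
function buildMetadata(
  path: string,
  opts: { title: string; description: string; index?: boolean }
): RouteMeta {
  const meta: RouteMeta = {
    title: opts.title,
    description: opts.description,
    alternates: { canonical: new URL(path, SITE_URL).toString() },
  };
  if (opts.index === false) {
    meta.robots = { index: false, follow: false };
  }
  return meta;
}

// In a page file: export const metadata = buildMetadata("/pricing", { ... });
```

The helper makes the noindex decision explicit at the call site, so a review of any page file shows immediately whether that route is meant to be public.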
Where route-heavy Next.js sites go wrong
Teams often focus on rendering strategy (static vs. server vs. client rendering) and forget route hygiene. Broken internal links, stale route segments, soft 404s, and pages that hydrate into a failure state can still degrade the public surface even when metadata is technically present.
That is especially common when a project mixes marketing pages with app-like behavior or authenticated entry points.
Why crawl the site like a user
A route can look correct in code review and still fail after navigation, data loading, or auth transitions. A browser-aware crawl helps validate the public Next.js surface before release, especially when pages are partially dynamic or rely on client-side fetches.
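One building block for such a crawl is same-origin link extraction plus a breadth-first queue; a full run would feed the extractor HTML from a headless browser (e.g. Playwright's `page.content()` after hydration) so client-rendered links are visible. This is a sketch under that assumption, with a regex standing in for real DOM parsing and `fetchPage` left injectable:

```typescript
// Extract same-origin link targets from rendered HTML, normalized to paths.
// In a real crawl, the HTML comes from a headless browser after hydration,
// so links added client-side are included.
function extractInternalLinks(html: string, baseUrl: string): string[] {
  const origin = new URL(baseUrl).origin;
  const seen = new Set<string>();
  for (const match of html.matchAll(/href="([^"]+)"/g)) {
    let url: URL;
    try {
      url = new URL(match[1], baseUrl);
    } catch {
      continue; // skip malformed hrefs
    }
    if (url.origin !== origin) continue; // external link
    seen.add(url.pathname); // pathname drops query fragments and hashes
  }
  return [...seen];
}

// BFS crawl skeleton: fetchPage is injectable, so a test stub or a
// Playwright-backed fetcher can supply the HTML for each visited path.
async function crawl(
  start: string,
  fetchPage: (url: string) => Promise<string>,
  limit = 50
): Promise<string[]> {
  const origin = new URL(start).origin;
  const visited = new Set<string>();
  const queue = [new URL(start).pathname];
  while (queue.length > 0 && visited.size < limit) {
    const path = queue.shift()!;
    if (visited.has(path)) continue;
    visited.add(path);
    const html = await fetchPage(origin + path);
    for (const link of extractInternalLinks(html, origin)) {
      if (!visited.has(link)) queue.push(link);
    }
  }
  return [...visited];
}
```

Diffing the crawl's visited set against the public-route registry then flags both orphaned public pages (in the registry, never reached by links) and leaked operational routes (reachable by links, absent from the registry).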