Most teams run link checks and still miss route-integrity failures that only show up after hydration, navigation, or data fetches. This checklist is designed for that gap: practical checks you can run before release without pretending every issue is a security incident.

Worked example from the current VeriFalcon workflow

The active product already exposes a checklist-compatible reporting model for route-integrity QA.

JavaScript and static crawler modes are available for different route surfaces.
Reports separate broken pages, broken resources, protected routes, soft 404s, and scanner-error classes.
Live and post-scan outputs include grouped links and discovered-but-not-crawled visibility.
The strongest checks are point-in-time quality findings, not vulnerability claims.

That structure is what makes checklist-driven remediation possible: each issue class has a different owner and fix path.

Current Product Surfaces Behind This Checklist

Scan setup for JavaScript routes: the checklist starts from a real scan-entry workflow with route scope and crawler controls.
Categorized route-integrity output: issue classes are separated in the current report model, which enables targeted fixes.
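The pre-scan items below can be captured as one explicit configuration object. This is a hypothetical shape, not the product's real config format; every field name here is an assumption to make the checklist concrete.

```python
# Hypothetical scan configuration encoding the pre-scan checklist.
# Field names are illustrative; adapt them to your scanner's real schema.
scan_config = {
    "mode": "javascript",        # or "static" / "mixed", per the target surface
    "max_pages": 500,            # explicit limit so discovered-vs-crawled gaps stay visible
    "exclude_patterns": [        # keep the crawl strictly read-only
        "/logout",
        "/account/delete",
    ],
    "owners": {                  # decide ownership before the scan, not after
        "broken-page": "web-team",
        "broken-resource": "platform-team",
        "scanner-error": "qa-team",
    },
}
```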

Pre-scan checklist

  • confirm whether the target surface is JavaScript-heavy, static, or mixed
  • set realistic crawl limits so discovered-versus-crawled gaps are visible
  • exclude destructive interaction paths and keep to read-only checks
  • define ownership for broken pages, resources, and runtime errors before starting

During-scan checklist

Watch for issue-class imbalance. If broken resources spike while broken pages remain low, the route graph may be healthy while assets are not. If soft 404s rise on 200 responses, content/state routing likely needs review.
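A soft-404 check on 200 responses can be as simple as a content heuristic. The marker list below is an assumption; tune it to the phrases your own error templates actually render.

```python
def looks_like_soft_404(status: int, body: str) -> bool:
    """Flag 200 responses whose content resembles an error page
    (illustrative heuristic; markers must match your site's templates)."""
    if status != 200:
        return False  # real error codes are handled by other issue classes
    markers = ("page not found", "doesn't exist", "nothing here")
    text = body.lower()
    return any(marker in text for marker in markers)
```

False positives are cheap here: a flagged route goes to content/state review, not to an automated fix.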

For JavaScript-heavy routes, treat API failures and JS errors as first-class outcomes because they often explain user-visible breakage with healthy status codes.
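Treating runtime signals as first-class means the crawl record must carry more than a status code. A minimal sketch, assuming hypothetical field names; the point is that `healthy` depends on JS errors and API failures, not on the status code alone.

```python
from dataclasses import dataclass, field

@dataclass
class RouteResult:
    """One crawled route with runtime signals attached (hypothetical model).
    A 200 can still be broken if the page threw or its API calls failed."""
    url: str
    status: int
    js_errors: list[str] = field(default_factory=list)
    failed_api_calls: list[str] = field(default_factory=list)

    @property
    def healthy(self) -> bool:
        # healthy status code AND clean runtime, not either alone
        return (self.status < 400
                and not self.js_errors
                and not self.failed_api_calls)
```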

Post-scan checklist

  • triage by issue class instead of one flat broken-link queue
  • review discovered-but-not-crawled pages for coverage gaps
  • export and assign fixes by team ownership
  • schedule a follow-up crawl after fixes and compare deltas
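The follow-up step reduces to a per-class delta between two scans. A minimal sketch, assuming each scan export is summarized as an issue-class-to-count mapping (an assumption about the export shape, not a documented format).

```python
def crawl_delta(before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
    """Per-class change between two crawls: negative means fixed,
    positive means a regression or newly surfaced issue."""
    classes = set(before) | set(after)
    return {c: after.get(c, 0) - before.get(c, 0) for c in classes}
```

Reporting deltas rather than raw counts keeps the follow-up review focused on what changed since the fixes shipped.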
