Methodology
How isitready.dev decides what to trust.
The scanner's scoring is weighted, but not opaque. Each check records direct evidence, categorizes the result, and scores only what can be supported by the public web or legitimate external APIs.
Coverage remains visible so teams can see whether a result is exhaustive or sampled.
Priority fixes are weighted to surface the highest-impact failures first.
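The weighted ordering described above can be sketched as follows. This is a minimal illustration, not the scanner's actual implementation: the `Finding` fields and the weight and severity values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    check: str       # name of the failed check (hypothetical examples below)
    weight: float    # category weight in the overall score (assumed values)
    severity: float  # 0.0 (cosmetic) to 1.0 (critical), assumed scale

def priority_fixes(findings):
    """Order failures so the highest-impact ones surface first."""
    return sorted(findings, key=lambda f: f.weight * f.severity, reverse=True)

fixes = priority_fixes([
    Finding("missing-canonical", weight=0.8, severity=0.4),
    Finding("no-https-redirect", weight=1.0, severity=0.9),
    Finding("short-title", weight=0.8, severity=0.2),
])
# The HTTPS failure (1.0 × 0.9) outranks both metadata issues.
```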
Observed
Direct HTTP fetches, headers, bodies, parsed metadata, and validated well-known resources.
Heuristic
Checks that infer readiness from public markup or representative crawl samples without pretending certainty.
Unavailable
Signals that require site ownership, credentials, or data that is not publicly available are reported as unavailable rather than guessed.
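The three evidence tiers above can be modeled as a simple classification step. A minimal sketch, assuming a hypothetical `classify` rule in which a check with no reachable response is unavailable and an inferred result is heuristic:

```python
from enum import Enum
from typing import Optional

class Evidence(Enum):
    OBSERVED = "observed"        # direct fetches and validated resources
    HEURISTIC = "heuristic"      # inferred from markup or crawl samples
    UNAVAILABLE = "unavailable"  # requires ownership, credentials, or missing data

def classify(status: Optional[int], inferred: bool) -> Evidence:
    """Map a check outcome to an evidence tier (illustrative rules only)."""
    if status is None:
        return Evidence.UNAVAILABLE  # nothing was fetched, so nothing is scored
    return Evidence.HEURISTIC if inferred else Evidence.OBSERVED
```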
What we evaluate
The five categories behind every report.
Each category captures a different part of public website readiness, but all of them follow the same evidence-first scoring model.
AI Readiness
Content discoverability, markdown signals, agent protocols, and machine-readable readiness.
Checks probe robots.txt, llms.txt, markdown negotiation, API discovery, MCP or A2A signals, and related metadata.
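Validating a well-known resource such as robots.txt or llms.txt means more than checking for a 200 response. A minimal sketch of that judgment, assuming the fetch has already happened and guarding against the common soft-404 pattern:

```python
def well_known_present(status: int, content_type: str, body: str) -> bool:
    """Decide whether a fetched well-known file (e.g. robots.txt, llms.txt)
    genuinely exists, rather than being a soft 404."""
    if status != 200:
        return False
    if "text/html" in content_type.lower():
        return False  # many sites serve an HTML error page with status 200
    return bool(body.strip())  # an empty file is not positive evidence
```

For example, `well_known_present(200, "text/plain; charset=utf-8", "User-agent: *")` accepts a real robots.txt, while an HTML error page served with status 200 is rejected.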
Technical SEO
Crawlability, canonical integrity, structured metadata, and indexability signals.
Checks use adaptive page sampling to inspect titles, descriptions, canonicals, headings, structured data, and social metadata.
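Inspecting a sampled page for these signals amounts to parsing its head metadata. A minimal sketch using Python's standard-library `html.parser`; the `MetaScan` class is illustrative, not the scanner's actual parser:

```python
from html.parser import HTMLParser

class MetaScan(HTMLParser):
    """Collect the title, meta description, and canonical link from a page."""

    def __init__(self):
        super().__init__()
        self.title = None
        self.description = None
        self.canonical = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name") == "description":
            self.description = a.get("content")
        elif tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

scan = MetaScan()
scan.feed('<head><title>Example</title>'
          '<meta name="description" content="A demo page">'
          '<link rel="canonical" href="https://example.com/"></head>')
```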
Security
HTTPS posture, response headers, cookies, and externally validated transport safeguards.
Checks combine direct header inspection with trusted public security tooling when this deployment has access to it.
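Direct header inspection reduces to comparing a response's headers against an expected set. A minimal sketch; the particular headers chosen here are common recommendations, not necessarily the scanner's exact list:

```python
# Commonly recommended response security headers (illustrative set).
EXPECTED = {
    "strict-transport-security",  # HSTS: enforce HTTPS on return visits
    "content-security-policy",    # restrict script and resource origins
    "x-content-type-options",     # disable MIME sniffing
    "referrer-policy",            # limit referrer leakage
}

def missing_security_headers(headers: dict) -> set:
    """Return the expected security headers absent from a response.
    Header names are compared case-insensitively, per HTTP semantics."""
    present = {name.lower() for name in headers}
    return EXPECTED - present

gaps = missing_security_headers({
    "Strict-Transport-Security": "max-age=63072000",
    "X-Content-Type-Options": "nosniff",
})
```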
Performance
Page speed, loading quality, and public lab or field performance evidence.
Checks use PageSpeed Insights and CrUX-style signals when available, without inventing private metrics.
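Scoring field-style signals without inventing missing data can be sketched against the public Core Web Vitals "good" thresholds (LCP ≤ 2.5 s, CLS ≤ 0.1, INP ≤ 200 ms, per web.dev guidance). The grading labels and metric keys here are illustrative:

```python
# Public Core Web Vitals "good" thresholds.
THRESHOLDS = {"lcp_ms": 2500, "cls": 0.1, "inp_ms": 200}

def grade_vitals(metrics: dict) -> dict:
    """Grade each available metric; absent metrics stay unavailable
    rather than being guessed."""
    graded = {}
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            graded[name] = "unavailable"  # never invent private metrics
        else:
            graded[name] = "good" if value <= limit else "needs-improvement"
    return graded

report = grade_vitals({"lcp_ms": 1800, "cls": 0.25})
# lcp_ms is good, cls needs improvement, inp_ms was not measured.
```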
Production Quality
Reliability, metadata consistency, broken signals, and operational polish on the public web.
Checks focus on production-facing best practices, broken public signals, consistency, and crawl-surface quality.