watchmycompetitor.com — Agent-Adoption Score
25 checks evaluated · score 45 / 100
AI access policies declared; markdown-negotiation surface for agent workflows not yet in place.
To reach L3 Agent-Optimized, pass:
Accept: text/markdown negotiation serves HTML to humans and agent-readable markdown to agents from one URL — no duplicate-URL strategy.
Current: Server ignored Accept: text/markdown — returned HTML instead
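The negotiation step can be sketched as a small helper that picks a representation from the request's Accept header. This is a minimal illustration, not the scanner's implementation; the `negotiate` name is hypothetical and q-values are deliberately ignored for brevity:

```python
def negotiate(accept_header: str) -> str:
    """Pick a representation for one URL based on the Accept header.

    Hypothetical helper: returns "text/markdown" only when the client
    explicitly lists it; browsers asking for text/html (or */*) get HTML.
    Simplification: q-values are ignored.
    """
    offered = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "text/markdown" in offered:
        return "text/markdown"
    return "text/html"
```

A server doing this should also emit `Vary: Accept` so caches keep the HTML and markdown representations separate.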
Where watchmycompetitor.com is most cited — and how those brands compare on agent-readiness
The category where AI agents recommend watchmycompetitor.com most often. Each card shows the most-cited brands in that category and where each sits on the agent-adoption ladder.
- #1 · 37 · L1
- #2 · 48 · L1
- #3 · 41 · L1
- #4 · 48 · L1
- #5 · 42 · L1
- #28 · This scan · 45 · L2
Per-check breakdown
Discoverability · 3 pass · 0 fail · score 100
robots.txt served with 10 User-agent directives
robots.txt is the first file crawlers and agents check for access rules; silence defaults to blanket-allow. Per RFC 9309.
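An agent-side access check against a fetched robots.txt body can be done with the standard library; a sketch (the `can_fetch` wrapper is hypothetical):

```python
from urllib.robotparser import RobotFileParser

def can_fetch(robots_txt: str, user_agent: str, path: str) -> bool:
    # Parse an already-fetched robots.txt body and answer an access query.
    # An empty body yields allow-everything, matching the blanket-allow default.
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, path)
```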
Sitemap served at https://watchmycompetitor.com/sitemap.xml (<urlset> root)
An XML sitemap is the route map agents use to find your pages. Without one they link-walk and miss deep or orphaned content.
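Extracting the route map from a `<urlset>` sitemap takes only a few lines; a sketch with a hypothetical `sitemap_urls` helper:

```python
import xml.etree.ElementTree as ET

# Namespace declared by the sitemap protocol for <urlset> documents.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_body: str) -> list[str]:
    # Pull every <loc> out of a <urlset> sitemap body.
    root = ET.fromstring(xml_body)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", NS)]
```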
Homepage serves a Link: header with agent-useful rel types (1 link with rel values: https://api.w.org/)
Link: response headers expose related resources — API catalogs, service docs, alternates — before an agent parses HTML. Per RFC 8288.
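An agent can read those related resources before touching the HTML. A simplified RFC 8288 parser sketch (it does not handle commas inside quoted parameters; the function name is an assumption):

```python
import re

def parse_link_header(value: str) -> dict[str, str]:
    # Map rel value -> target URL from a Link: response header.
    links = {}
    for part in value.split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links
```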
Access Control · 2 pass · 0 fail · 2 informational · score 100
robots.txt posture is not blanket-allow: 9 per-UA blanket-block groups (Amazonbot, Applebot-Extended, Bytespider, …)
A blanket-allow posture (wildcard User-agent, Allow: /, no cross-bot blocks) declares that every crawler is welcome. Informational — no pass/fail.
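Classifying that posture amounts to asking whether any group disallows anything. A minimal sketch (hypothetical helper; real scanners also weigh Allow/Disallow precedence):

```python
def is_blanket_allow(robots_txt: str) -> bool:
    # True when no group blocks anything: every Disallow line is empty.
    for raw in robots_txt.splitlines():
        line = raw.split("#")[0].strip()          # drop comments
        field, _, value = line.partition(":")
        if field.strip().lower() == "disallow" and value.strip():
            return False
    return True
```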
Content Readability · 1 pass · 2 fail · 2 informational · score 48
No llms.txt found at primary or fallback path
An llms.txt file gives agents a curated entry point into your docs — sitemap-equivalent, but sized for context windows. Per llmstxt.org.
llms-txt-valid cannot evaluate without llms.txt body
A well-formed llms.txt (H1 title, summary blockquote, linked sections) parses cleanly; a malformed one is skipped silently — worse than no file. Per llmstxt.org.
llms-txt-has-optional-section cannot evaluate without llms.txt body
Reports the shape of your llms.txt — Optional section, H2 count, link count — so you can tell at a glance whether agents get a skeleton or a full map.
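For reference, a minimal well-formed llms.txt following the llmstxt.org shape looks like the sketch below (all URLs and link titles here are hypothetical placeholders, not the site's real docs):

```markdown
# WatchMyCompetitor

> One-sentence summary of the product, sized for an agent's first read.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): setup in five minutes
- [API reference](https://example.com/docs/api.md): endpoints and auth

## Optional

- [Changelog](https://example.com/changelog.md): release history, safe to skip
```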
Server-side rendering confirmed
Classifies the site as server-rendered, hydrated, or client-rendered (SPA) — what agents see without running JavaScript. A pure SPA reads as blank.
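A rough version of that classification can run on the raw HTML alone. The sketch below is a heuristic only: it collapses the hydrated case into server-rendered and uses an arbitrary 50-word threshold (both are assumptions, not the scanner's method):

```python
import re

def rendering_mode(html: str) -> str:
    # Heuristic: little visible text plus script tags reads as a pure SPA.
    scripts = re.findall(r"<script\b[^>]*>.*?</script>", html, re.S | re.I)
    stripped = re.sub(r"<script\b[^>]*>.*?</script>", " ", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", stripped)      # strip remaining tags
    if len(text.split()) < 50 and scripts:
        return "client-rendered"
    return "server-rendered"
```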
No sitemap URLs available to sample — cannot evaluate page size
Measures how much markdown each page feeds into an agent's context window. Under 50K fits cleanly; over 100K truncates mid-page — pages have context budgets too.
Server returned HTTP 200 for a non-existent path — soft-404 detected
Soft-404s (HTTP 200 on a missing page) make agents cache garbage as canonical content. An honest 4xx tells agents the URL is dead — drop it.
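The detection logic is simple once a probe request has been made: generate a path that cannot exist, then interpret the status code. A sketch with hypothetical helper names:

```python
import secrets

def probe_path() -> str:
    # Random slug that is vanishingly unlikely to exist on any site.
    return "/" + secrets.token_urlsafe(12)

def soft_404_verdict(status: int) -> str:
    # Interpret the status a server returns for a nonexistent path.
    if 400 <= status < 500:
        return "honest"          # agent drops the dead URL
    if status == 200:
        return "soft-404"        # missing pages get cached as real content
    return "inconclusive"        # e.g. a redirect to the homepage
```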
No sitemap URLs available to sample — cannot evaluate redirect behavior
Same-domain HTTP 3xx redirects work for agents. JavaScript redirects break agents without JS; cross-domain jumps read as tracking.
AGENTS.md not found at /AGENTS.md: the path returned a non-markdown content-type (text/html)
AGENTS.md is a coding-agent convention. ETH Zurich research (2026) found it often hurts those agents; we track presence to test the effect on websites. Informational.
Homepage response carries none of Cache-Control / ETag / Last-Modified
Cache-Control, ETag, and Last-Modified headers let agents re-fetch only what changed — missing headers force full re-downloads. Informational.
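The re-fetch side of that contract is a conditional request: echo the validators from the cached response, and a 304 Not Modified means the cached copy is still good. A minimal sketch (the helper name is an assumption):

```python
def revalidation_headers(cached: dict[str, str]) -> dict[str, str]:
    # Build conditional request headers from a previously cached response.
    headers = {}
    if "ETag" in cached:
        headers["If-None-Match"] = cached["ETag"]
    if "Last-Modified" in cached:
        headers["If-Modified-Since"] = cached["Last-Modified"]
    return headers                # empty dict -> forced full re-download
```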
Agent Endpoints · 0 pass · 3 fail · 3 informational · score 0
OAuth Protected Resource metadata at /.well-known/oauth-protected-resource is not valid JSON
Protected Resource metadata identifies which authorization server protects your API. Paired with oauth-discovery, agents complete auth without reading docs. Per RFC 9728.
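A failure like the one reported usually means the well-known path returned an HTML error page instead of JSON. A minimal validation sketch (field checks are a simplification of RFC 9728, and the function name is hypothetical):

```python
import json

def check_resource_metadata(body: str) -> list[str]:
    # Validate an /.well-known/oauth-protected-resource response body.
    try:
        doc = json.loads(body)
    except json.JSONDecodeError:
        return ["not valid JSON"]          # e.g. an HTML 404 page came back
    if not isinstance(doc, dict):
        return ["top level is not a JSON object"]
    problems = []
    if "resource" not in doc:
        problems.append('missing "resource" identifier')
    if not doc.get("authorization_servers"):
        problems.append('no "authorization_servers" listed')
    return problems                        # empty list -> looks well-formed
```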