freshworks.com — Agent-Adoption Score
25 checks evaluated · score 48 / 100
AI access policies declared; markdown-negotiation surface for agent workflows not yet in place.
To reach L3 Agent-Optimized, pass:
Accept: text/markdown negotiation: a single URL serves HTML to human visitors and agent-readable markdown to agents, with no duplicate-URL strategy.
Current: the server ignored Accept: text/markdown and returned HTML.
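The failing check above is ordinary HTTP content negotiation. A minimal sketch of the server-side decision, assuming a simplified q-value parse (the function name and thresholds are illustrative, not the scanner's implementation):

```python
def prefers_markdown(accept: str) -> bool:
    """True when the request's Accept header ranks text/markdown
    above text/html, so one URL can serve both audiences."""
    prefs: dict[str, float] = {}
    for part in accept.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip().lower()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip().lower() == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        prefs[mtype] = max(q, prefs.get(mtype, 0.0))
    wildcard = prefs.get("*/*", 0.0)
    markdown_q = prefs.get("text/markdown", wildcard)
    html_q = prefs.get("text/html", wildcard)
    return markdown_q > html_q
```

An agent sending `Accept: text/markdown` would then receive markdown from the same URL, while a browser's default header keeps receiving HTML.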
Where freshworks.com is most cited — and how those brands compare on agent-readiness
Top 4 categories where AI agents recommend freshworks.com most. Each card shows the most-cited brands in that category and where each sits on the agent-adoption ladder.
(rank · score · ladder level)
Card 1:
- #1 · 48 · L1
- #2 · 41 · L1
- #3 · 15 · L1
- #4 · This scan · 48 · L2
- #5 · 42 · L1
Card 2:
- #1 · 44 · L1
- #2 · 31 · L1
- #3 · 41 · L1
- #4 · 48 · L1
- #5 · This scan · 48 · L2
Card 3:
- #1 · 41 · L1
- #2 · 48 · L1
- #3 · 48 · L1
- #4 · 44 · L1
- #5 · 43 · L1
- #7 · This scan · 48 · L2
Card 4:
- #1 · 48 · L1
- #2 · 41 · L1
- #3 · 23 · L1
- #4 · 48 · L1
- #5 · 44 · L1
- #7 · This scan · 48 · L2
Per-check breakdown
Discoverability · 1 pass · 1 fail · score 64
robots.txt served with 1 User-agent directive
robots.txt is the first file crawlers and agents check for access rules; silence defaults to blanket-allow. Per RFC 9309.
No valid sitemap found at any probed path
An XML sitemap is the route map agents use to find your pages. Without one they link-walk and miss deep or orphaned content.
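Before probing default paths, a scanner (or agent) can pick up sitemaps advertised via `Sitemap:` lines in robots.txt. A sketch of that extraction step only; the fallback probe of paths like `/sitemap.xml` is left out:

```python
def sitemap_urls(robots_txt: str) -> list[str]:
    """Collect Sitemap: directive values from a robots.txt body."""
    urls = []
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments
        field, _, value = line.partition(":")  # split on first colon only
        if field.strip().lower() == "sitemap" and value.strip():
            urls.append(value.strip())
    return urls
```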
Homepage returned no Link header — v1 does not penalize
Link: response headers expose related resources — API catalogs, service docs, alternates — before an agent parses HTML. Per RFC 8288.
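Those related resources can be read with a small Link-header parser in the spirit of RFC 8288. A deliberately naive sketch: it splits on commas, so URLs that themselves contain commas would break it:

```python
def parse_link_header(value: str) -> list[tuple[str, str]]:
    """Split a Link header into (target, rel) pairs."""
    links = []
    for part in value.split(","):
        segments = part.split(";")
        target = segments[0].strip()
        if not (target.startswith("<") and target.endswith(">")):
            continue  # skip malformed members
        rel = ""
        for param in segments[1:]:
            name, _, val = param.strip().partition("=")
            if name.strip().lower() == "rel":
                rel = val.strip().strip('"')
        links.append((target[1:-1], rel))
    return links
```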
Access Control · 2 pass · 0 fail · 2 informational · score 100
robots.txt posture is not blanket-allow: wildcard group has Disallow rules and no explicit Allow: /
A blanket-allow posture (wildcard User-agent, Allow: /, no cross-bot blocks) declares that every crawler is welcome. Informational — no pass/fail.
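The posture classification can be approximated with a one-pass rule scan. A heuristic sketch, not a full RFC 9309 parser: group merging, per-bot targeting, and path matching are all simplified away:

```python
def is_blanket_allow(robots_txt: str) -> bool:
    """True when no group carries a non-empty Disallow rule,
    i.e. every crawler is effectively welcome everywhere."""
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()
        field, _, value = line.partition(":")
        if field.strip().lower() == "disallow" and value.strip():
            return False  # something, somewhere, is blocked
    return True
```

Note that `Disallow:` with an empty value allows everything, which is why the sketch only counts non-empty rules.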
Content Readability · 3 pass · 1 fail · 2 informational · score 76
llms.txt discovered at https://freshworks.com/llms.txt
An llms.txt file gives agents a curated entry point into your docs — sitemap-equivalent, but sized for context windows. Per llmstxt.org.
llms.txt matches llmstxt.org structure
A well-formed llms.txt (H1 title, summary blockquote, linked sections) parses cleanly; a malformed one is skipped silently — worse than no file. Per llmstxt.org.
llms.txt has 7 H2 sections and 78 markdown links, no Optional section
Reports the shape of your llms.txt — Optional section, H2 count, link count — so you can tell at a glance whether agents get a skeleton or a full map.
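That shape report (H1 title, summary blockquote, H2 count, link count, Optional section) can be computed directly from the raw file. A sketch assuming plain markdown syntax per the llmstxt.org layout:

```python
import re

def llms_txt_shape(text: str) -> dict:
    """Summarize an llms.txt file: H1 title, '>' summary blockquote,
    H2 sections, [label](url) markdown links, Optional section."""
    lines = text.splitlines()
    h2 = [l[3:].strip() for l in lines if l.startswith("## ")]
    return {
        "has_h1": any(l.startswith("# ") for l in lines),
        "has_summary": any(l.startswith("> ") for l in lines),
        "h2_sections": len(h2),
        "links": len(re.findall(r"\[[^\]]*\]\([^)\s]+\)", text)),
        "has_optional": "Optional" in h2,
    }
```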
Server-side rendering confirmed
Classifies the site as server-rendered, hydrated, or client-rendered (SPA) — what agents see without running JavaScript. A pure SPA reads as blank.
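One way to make that three-way call is to compare the visible text extracted from the raw HTML (no JavaScript) against the text of a fully rendered page. A sketch with illustrative thresholds; the scanner's actual cutoffs are not documented here:

```python
def classify_rendering(static_chars: int, rendered_chars: int) -> str:
    """Classify by the share of page text present before JavaScript runs.
    The 0.9 / 0.3 cutoffs are assumptions, not the scanner's values."""
    if rendered_chars == 0:
        return "empty"
    ratio = static_chars / rendered_chars
    if ratio >= 0.9:
        return "server-rendered"
    if ratio >= 0.3:
        return "hydrated"
    return "client-rendered"   # a pure SPA reads as (nearly) blank
```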
No sitemap URLs available to sample — cannot evaluate page size
Measures how much markdown each page feeds into an agent's context window. Under 50K fits cleanly; over 100K truncates mid-page — pages have context budgets too.
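The 50K/100K budget check reduces to a size bucket. A sketch that assumes the thresholds are bytes of UTF-8 markdown (the report does not state the unit), with the middle band given an invented label:

```python
def context_budget_verdict(markdown: str) -> str:
    """Bucket a page's markdown payload against the 50K / 100K
    thresholds quoted in the check description."""
    size = len(markdown.encode("utf-8"))
    if size < 50_000:
        return "fits cleanly"
    if size <= 100_000:
        return "tight"          # invented label for the middle band
    return "truncates mid-page"
```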
Correct HTTP 404 returned for non-existent path
Soft-404s (HTTP 200 on a missing page) make agents cache garbage as canonical content. An honest 4xx tells agents the URL is dead — drop it.
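A soft-404 detector pairs the status code with tell-tale page copy. A heuristic sketch; the phrase list is an illustrative assumption, not the scanner's:

```python
def looks_like_soft_404(status: int, body: str) -> bool:
    """Flag a soft-404: HTTP 200 for a URL that should not exist,
    carrying 'not found'-style copy in the body."""
    if status != 200:
        return False            # an honest 4xx is the correct behavior
    text = body.lower()
    phrases = ("page not found", "doesn't exist", "error 404")
    return any(p in text for p in phrases)
```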
No sitemap URLs available to sample — cannot evaluate redirect behavior
Same-domain HTTP 3xx redirects work for agents. JavaScript redirects break agents without JS; cross-domain jumps read as tracking.
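The same-domain vs cross-domain distinction comes from comparing hostnames of the request URL and the Location header. A sketch for the HTTP side only; JavaScript redirects need a rendered page and are out of scope here:

```python
from urllib.parse import urlparse

def classify_redirect(source_url: str, status: int, location: str) -> str:
    """Label an HTTP redirect as agents experience it."""
    if not 300 <= status < 400:
        return "not a redirect"
    src = urlparse(source_url).hostname
    dst = urlparse(location).hostname
    if dst is None or dst == src:   # relative Location stays on-domain
        return "same-domain"
    return "cross-domain"
```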
AGENTS.md not found at /AGENTS.md — HTTP 404 response
AGENTS.md is a coding-agent convention. ETH Zurich research (2026) found it often hurts those agents; we track presence to test the effect on websites. Informational.
Homepage response carries 2 of the 3 validators Cache-Control / ETag / Last-Modified: cache-control and etag are present; last-modified is absent
Cache-Control, ETag, and Last-Modified headers let agents re-fetch only what changed — missing headers force full re-downloads. Informational.
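An agent turns those validators into conditional requests so the server can answer 304 Not Modified instead of resending the full body. A minimal sketch, assuming the cached response is a dict keyed by lowercase header names:

```python
def revalidation_headers(cached_response: dict) -> dict:
    """Build conditional-request headers from a cached response's
    ETag / Last-Modified validators."""
    headers = {}
    if cached_response.get("etag"):
        headers["If-None-Match"] = cached_response["etag"]
    if cached_response.get("last-modified"):
        headers["If-Modified-Since"] = cached_response["last-modified"]
    return headers
```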
Agent Endpoints · 0 pass · 3 fail · 3 informational · score 0
No OAuth Protected Resource metadata at /.well-known/oauth-protected-resource
Protected Resource metadata identifies which authorization server protects your API. Paired with oauth-discovery, agents complete auth without reading docs. Per RFC 9728.
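RFC 9728 derives the metadata location by inserting the well-known segment between host and path of the resource identifier. A sketch of that URL construction (discovery and fetching are omitted):

```python
from urllib.parse import urlparse

def metadata_url(resource: str) -> str:
    """Derive the RFC 9728 well-known URL for a protected resource:
    /.well-known/oauth-protected-resource goes between host and path."""
    parts = urlparse(resource)
    suffix = "" if parts.path in ("", "/") else parts.path
    return (f"{parts.scheme}://{parts.netloc}"
            f"/.well-known/oauth-protected-resource{suffix}")
```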