
We Optimized Our Website for AI Agents Before Our Competitors Knew It Was a Thing

March 23, 2026 | By Brantley Davidson | Founder, Prometheus Agency
SEO & AI Visibility
Technical SEO
9 min read

Key Takeaways

  • AI answer engines are a third audience — websites need explicit AI-readable infrastructure, not just traditional SEO
  • The three-layer stack (llms.txt, Markdown endpoints, auto-discovery + brand data) makes content extractable and attributable by AI systems
  • After 90 days: 13 human sessions from 4 AI platforms with 54% engaged; bot crawlers consistently hitting deep technical content
  • Mid-size companies have a strategic window — large enterprises are slower to implement, and the AI citation landscape is still forming

Most B2B websites are invisible to AI answer engines. Here's exactly how we made prometheusagency.co readable by ChatGPT, Perplexity, and Google AI Overviews.

How to make your website AI-readable — the three-layer content stack


For most of the history of business websites, there were two audiences: human visitors and search engine crawlers. You optimized for both — great content for readers, clean structure and metadata for Google.

There's now a third audience. AI answer engines — ChatGPT, Perplexity, Claude, Google's AI Overviews — are increasingly where people go with business questions. These systems don't read your website the way Google does. They need clean, structured, machine-readable content to extract and cite your expertise accurately. Most B2B websites are completely invisible to this third audience.

Ours isn't. We built the AI-readable content layer into prometheusagency.co before most agencies had started thinking about it. This post documents what we built, why, and how you can replicate it.

We're publishing this not as a teaser for a consulting service but because this is exactly what we mean when we say Prometheus practices what it preaches. We don't advise clients on AI strategy from the outside. We build and run it ourselves, then bring those capabilities to our clients.

The third audience: why AI crawlers are different from Google

Google's crawlers index your content so humans can find it through search queries. The ranking signal is relevance and authority. The output is a list of links.

AI answer engines work differently. They're trying to extract and synthesize accurate information from your content to answer a user's question directly — often without the user ever clicking to your site. The ranking signal isn't just relevance. It's extractability: can the AI engine reliably pull accurate, well-attributed information from your content?

A website that's technically sophisticated but not structured for extraction will be poorly cited or not cited at all — even if it ranks well in traditional search.

Rand Fishkin, co-founder of SparkToro, reported in his 2025 search behavior analysis that approximately 60% of Google searches now end without a click to any website, up from 50% in 2022. AI answer engines are accelerating this trend. The implication: being cited inside the AI-generated answer is becoming as important as ranking on the SERP.

What we built: the three-layer AI content stack

The implementation on prometheusagency.co involves three layers that work together to make our content accessible, attributable, and well-structured for AI consumption.

Layer 1: llms.txt — the AI content index

We built a dynamically generated llms.txt file served at prometheusagency.co/llms.txt. Think of it as robots.txt for AI — it tells large language models what our site is about, what content is most authoritative, and how to navigate it.

Our implementation is a Netlify serverless function that generates the file on every request. It pulls all published blog posts from our Supabase database, groups them by category, and builds a structured index alongside our main pages, services, and tools. Every link in the file points to the Markdown version of the page (e.g., prometheusagency.co/insights/post-slug.md) so AI systems get clean text rather than rendered HTML.

The file includes our main pages and services, tools and playbooks, and every published blog post — currently over 50 articles, organized by topic. It's updated automatically whenever we publish new content, so the AI content index is always current.
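For reference, a trimmed static version of the pattern looks like this. Our real file is generated per-request, so treat the entries below as illustrative (and `post-slug` as a placeholder, not a real article):

```markdown
# Prometheus Agency

> AI, CRM, and ERP transformation for middle-market manufacturing,
> construction, distribution, and logistics companies.

## Main pages

- [Services](https://prometheusagency.co/services.md): Core service lines and engagement model
- [About](https://prometheusagency.co/about.md): Who we are and how we work

## Insights

- [Post title](https://prometheusagency.co/insights/post-slug.md): One-line summary of the article
```

Note that every link points at the `.md` version of the page, so a crawler following the index never touches rendered HTML.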

The llms.txt standard is emerging — not yet universally adopted — but the major AI platforms are moving toward supporting it. Implementing it now establishes your site as a recognized entity in AI systems before the field gets crowded.

Layer 2: Markdown content endpoints

Most B2B websites serve content as HTML with embedded navigation, ads, sidebars, and other elements that are noise for AI extraction. AI engines prefer clean, structured text.

We built Markdown endpoints for every page on our site. The convention is simple: append .md to any URL. prometheusagency.co/services returns the full HTML page. prometheusagency.co/services.md returns clean Markdown — just the content, structured with headings, lists, and links. No navigation chrome, no JavaScript, no rendering overhead.

The technical implementation uses two components. A Netlify Edge Function intercepts any request ending in .md (or /llms.txt) at the edge — before any routing, redirects, or static file serving happens. It proxies the request to a serverless function that resolves the content. For blog posts, it fetches from our cms_blog_posts table in Supabase and converts HTML content to Markdown on the fly. For landing pages, it pulls from programmatic_landing_pages. For static pages (services, about, etc.), we maintain a route map with pre-written content optimized for AI consumption — including AI summaries, topic tags, and entity type classification.

Each Markdown response includes rich frontmatter: title, description, canonical URL, modification date, author, topics, and entity type. This structured metadata gives AI systems the attribution and context data they need to cite our content accurately.

The system also identifies AI crawler traffic by user agent — tracking requests from GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot, Google-Extended, Amazonbot, Applebot-Extended, and others — and logs that activity for analysis.
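User-agent detection of this kind reduces to a lookup table. The sketch below is illustrative rather than our production tracker, but the crawler tokens are the real ones:

```typescript
// AI crawler user-agent tokens mapped to the platform behind them.
const AI_CRAWLERS: Record<string, string> = {
  GPTBot: "OpenAI",
  ClaudeBot: "Anthropic",
  PerplexityBot: "Perplexity",
  "Google-Extended": "Google",
  Amazonbot: "Amazon",
  "Applebot-Extended": "Apple",
};

// Return the platform behind an AI crawler user agent, or null for
// browsers and non-AI bots.
export function identifyAiCrawler(userAgent: string): string | null {
  for (const [token, platform] of Object.entries(AI_CRAWLERS)) {
    if (userAgent.includes(token)) return platform;
  }
  return null;
}
```

Each match gets logged (path, timestamp, platform) to a separate table rather than to GA4, since GA4 filters bot traffic out entirely.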

Layer 3: Auto-discovery meta tags and machine-readable brand data

Every HTML page on our site includes a <link rel="alternate" type="text/markdown"> tag pointing to its Markdown equivalent. This is the same discovery pattern that RSS feeds use — it tells any system that understands the convention exactly where to find the machine-readable version of the content.

Beyond that, we maintain a machine-readable brand identity file at prometheusagency.co/.well-known/brand-facts.json. This JSON file contains structured data about Prometheus — our legal name, address, contact information, founding year, team members with titles, core services with descriptions and deliverables, technology stack, industries served, certifications, social profiles, and links to all key resources including the llms.txt file.

This is the kind of structured data AI systems use when answering questions like "What does Prometheus Agency do?" or "Who is the CEO of Prometheus Agency?" Without it, AI systems have to infer these facts from unstructured page content — which is how attribution errors and hallucinations happen.
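The shape of the file is plain JSON. A pared-down illustrative sketch (the key names are our own convention, not a formal standard, and the elided descriptions are intentional):

```json
{
  "legal_name": "Prometheus Growth Architects",
  "brand_name": "Prometheus Agency",
  "url": "https://prometheusagency.co",
  "team": [
    { "name": "Brantley Davidson", "title": "Founder" }
  ],
  "services": [
    { "name": "AI transformation", "description": "..." },
    { "name": "CRM transformation", "description": "..." },
    { "name": "ERP transformation", "description": "..." }
  ],
  "industries_served": ["manufacturing", "construction", "distribution", "logistics"],
  "resources": {
    "llms_txt": "https://prometheusagency.co/llms.txt"
  }
}
```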

Gartner's 2025 report on AI-driven search predicted that by 2028, 30% of B2B discovery queries will be resolved through AI answer engines rather than traditional search results. The companies that will be cited accurately in those answers are the ones making their content explicitly machine-readable today.

What happened after we implemented it

We track AI referral traffic through GA4, and we deployed a server-side bot tracker that logs crawler activity from GPTBot, ClaudeBot, PerplexityBot, and 13 other AI user agents to a separate database — because GA4 filters bot traffic out entirely.

Over the first 90 days, we recorded 13 human sessions from four AI platforms: ChatGPT (7), Perplexity (2), Microsoft Copilot (2), and Google Gemini (2). These are small numbers and we're not going to inflate them. But 54% of those visitors were engaged sessions by GA4's definition, and one ChatGPT referral spent nearly nine minutes reading our predictive churn modelling guide. That's not accidental traffic — that's someone who asked an AI a question about their business, got pointed to our content, and stayed to read.

The bot crawler data tells the other side of the story. Our server-side tracker recorded 10 crawl events from three AI platforms within the first day of going live. ChatGPT's browsing crawler hit six unique pages across the site. PerplexityBot crawled our fine-tuning LLMs guide three times. OpenAI's dedicated search bot checked our robots.txt. The crawlers are coming from the US and Poland, which tracks with where the major AI infrastructure is hosted.

The pages getting crawled match the same pattern as the human referral data: deep technical content — our fine-tuning guide, marketing automation platform comparison, AI center of excellence setup guide, and RAG for ROI analysis. Not our homepage. Not our service pages. The extractable expertise. That pattern confirms what the three-layer stack was designed to do: surface structured, extractable content in response to specific business questions — enterprise AI maturity models, CRM automation guides, risk management frameworks.

Here's why the competitive angle matters. We're targeting keywords like "how to make website AI readable," "AI transformation for growing businesses," and "AEO for B2B." These are emerging, high-intent keywords where the SERP is still forming. Most competitors in our space — agencies, consultancies, even the Big Four — haven't built AI-readable infrastructure yet. They're writing about AEO without actually implementing it on their own sites. The companies that build the infrastructure now, while search behavior is shifting, will own the citations when the volume arrives. By the time "how to make your website AI readable" is a 5,000-search-per-month keyword, the AI answer engines will already have their preferred sources established.

What made this possible operationally is the speed of our internal platform. We built the AI-readable content layer — llms.txt, Markdown endpoints, bot tracking, GA4 integration, and a content performance grading system — in a matter of days, not months. Our internal content platform generates, reviews, publishes, and monitors content performance in a single workflow. When we publish a new post, it's automatically indexed in our llms.txt, available as a Markdown endpoint, tracked by our bot crawler logger, and graded by our GA4 analytics pipeline. There's no manual handoff between creation and measurement. That tight loop means we know within days whether a piece of content is being crawled, cited, and engaged with — and we can adjust accordingly.

We're being deliberately transparent that this is early data. The compounding value of AI-readable content infrastructure — like the compounding value of traditional SEO — builds over months and years, not weeks. We'll update this section as our dataset grows.

What surprised me most about building this

The implementation was faster than expected — the technical lift was days, not weeks. What takes longer is building the content that's worth crawling. The AI-readable infrastructure is the plumbing. The content strategy that feeds it is what determines whether AI platforms actually cite you. If you're considering this for your own site, start with the infrastructure (it's the easy part), then invest heavily in the structured, expertise-driven content that gives AI systems something worth extracting.

How to implement this on your own site

None of these steps require deep technical expertise, but all of them require someone comfortable editing website configuration files or working with your development team.

  1. Create your llms.txt file. A plain text file at yourdomain.com/llms.txt. Include your organization description, primary expertise areas, most authoritative content pages, and attribution preferences. Keep it concise — signal clarity, not comprehensiveness.
  2. Audit your existing content for AI extractability. Review your most important pages with extraction in mind. Is the key information in the first paragraph? Are claims in structured formats (lists, headers, definitions) that AI engines can pull cleanly? Are H2s framed as questions?
  3. Implement structured data schema. Add Article, FAQPage, Organization, and Person schema markup to your key pages. This structured data is the most reliable signal to AI engines about who you are and what your content is about.
  4. Build Markdown endpoints for key content (optional but high-value). If your site is on a modern framework (React, Next.js, Vite), adding Markdown endpoints is a one-to-two-day project. On WordPress or HubSpot, plugins can generate clean content exports that approximate the same function.
  5. Add auto-discovery meta tags. <link rel="alternate" type="text/markdown"> on each page, pointing to its Markdown equivalent. Also consider a machine-readable brand identity file at /.well-known/brand-facts.json.
  6. Monitor AI crawler activity. Check server logs for GPTBot, ClaudeBot, PerplexityBot, and Google-Extended. These crawlers identify themselves by user agent. Monitoring tells you which content is being accessed and how frequently.
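For step 3, Organization schema can be as simple as a JSON-LD block in the page head. A minimal illustrative example (extend with `Article`, `FAQPage`, and `Person` markup on the relevant pages):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Prometheus Agency",
  "url": "https://prometheusagency.co",
  "founder": {
    "@type": "Person",
    "name": "Brantley Davidson"
  }
}
</script>
```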

Stanford HAI's 2025 AI Index report documented that the number of queries processed by AI answer engines grew 340% year-over-year in 2025, with B2B discovery queries growing fastest. The infrastructure described in this guide positions your content for that shift.

Why this matters beyond SEO

We built this infrastructure for a reason that goes beyond search ranking. Prometheus advises clients on AI transformation. The credibility of that advice is undermined if our own digital presence isn't AI-native.

This is the broader principle: we build and operate the systems we recommend. We run AI agents internally before deploying them for clients. We dogfood our own methodology. The AI-readable website is one visible example, but it applies to everything we do.

Frequently asked questions

What is llms.txt?

A plain text file at the root of your website domain, similar to robots.txt. Its purpose is to communicate information about your website to large language models — what your organization does, what content is most authoritative, and how to attribute it. The standard is emerging and not yet universally adopted, but implementing it now positions your site ahead of the majority of B2B websites.

How do I know if AI engines can read my website?

Check your server logs for user agent strings from GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot, and Google-Extended. Also manually query ChatGPT, Perplexity, and Claude with questions your target customers would ask and observe whether your site is cited.

Does AEO replace SEO?

No — they're complementary. Traditional SEO drives traffic from keyword searches. AEO drives attribution and citation in AI-generated answers, which increasingly influences discovery before the click. The companies that will win in search over the next five years are building for both audiences simultaneously.

Is this only for large companies?

No. Smaller companies actually have a strategic window. Large enterprises are slower to implement because they have more complexity, more stakeholders, and more legacy content. A focused mid-size B2B company with a clear expertise area can establish AI authority in its niche faster than an enterprise generalist.

Brantley Davidson

Founder, Prometheus Agency

About Prometheus Agency: We are the technology team middle-market operators don’t have — embedded in their business, accountable for their results. AI, CRM, and ERP transformation for manufacturing, construction, distribution, and logistics companies.

Book a 30-minute discovery call


© 2026 Prometheus Growth Architects. All rights reserved.