How to write comparison pages that AI engines cite

March 18, 2026

Organic traffic is dropping steadily, with a growing number of users turning to AI chatbots for answers instead of visiting your website. A high SERP position is no longer the leading metric of online discoverability. In 2026, you need to get cited and quoted by AI agents, and the tactics have changed.


Comparison content (like “X vs. Y”) is an effective way to ground AI models in fundamental truths about a brand or a product, ensuring your clients’ (and your own) unique value proposition (UVP) is the one the answer engine pulls. For the human buyer, well-built comparisons tap into demand at the most critical stage of the funnel, drawing buyers closer to a closed deal. Moreover, getting cited is the new top spot on Google search: 90% of buyers surveyed by TrustRadius say they click through the sources cited in AI Overviews to fact-check the answers they get.


Before we discuss the ins and outs of writing and publishing comparison pages that AI engines cite and users read, let’s start with a sensitive point: getting your clients to see the contribution of comparison content to their bottom line.


Making the case for comparisons


Comparison content can be a hard sell for many stakeholders - both internal and external. Some worry about offending competitors; others don’t want to mention them at all to avoid giving them free airtime. And then there are questions about the ethics and legality of comparisons.


Before you can convince your clients to include comparison pages on their website, it’s important that you understand and believe why this type of content is a business reality in 2026.


Filling the empty chair


If you don’t define the comparison, an AI model will do it for you, often hallucinating features or pulling biased data from a competitor’s website. Publishing this content is the only way to ensure you control the narrative and provide a reliable source for LLMs and users alike.


AI anti-hallucinogenics


AI chatbots are statistical prediction engines. When they lack data, they guesstimate, generating believable but often incorrect answers. With clear, structured comparison pages, you anchor the AI in a crawlable, categorized ground truth about products, preventing hallucinated features or services from showing up in answers.


Keeping it fair


Some of the most common objections to publishing comparison content are legal and ethical. Businesses worry that emphasizing their superiority might not only offend competitors but also invite legal trouble if a competitor decides to sue. Here are a few things to keep in mind when dealing with client objections:


  • It’s only a matter of time before the competition catches up with answer engine optimization (AEO) and publishes their own comparison pages - probably ones that position them as the superior choice.
  • There’s nothing illegal about comparison content as long as it adheres to the FTC’s rules regarding honest reviews and comparisons.
  • You can protect yourself by linking to information sources, including screenshots, and timestamping the data you include (like “according to the manufacturer’s website, as of January 2026…”).
  • If a competitor is better at a specific use case (like enterprise scale vs. SMB agility), admitting it actually builds trust and disqualifies bad-fit leads.


Before you start mass-producing comparison pages for your clients, it’s important to plan those pages to inform and perform for AI answer engines and humans at the evaluation stage of the buyer journey.


Formatting comparisons for AI (and buyers)


Side-by-side comparisons, especially ones that are authentic and reliable, have always been a powerful type of marketing content. With AI, their power to push buyer journeys forward is multiplied by your ability to get your comparisons cited by AI engines. That’s where your SEO expertise comes in handy - technical AEO for comparison pages.


So how do you make a versus page citable?


Answer first


Humans are used to skimming through content, and with comparison pieces we often scroll to the bottom to see the verdict. AI models read content differently, looking for relevance as early as possible on the page. To get the models’ attention, start comparison pages with a TL;DR conclusion that provides a clear and definitive summary.


Example: “The main difference between X and Y is that X is best suited for enterprises with complex and advanced permission stacks, while Y is best suited for startups looking for rapid deployment and pay-as-you-go pricing.”
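On the page itself, this summary performs best directly under the main heading, before any deep-dive sections, so it’s the first extractable text a crawler encounters. A minimal HTML sketch (the product names and class name are placeholders):

```html
<article>
  <h1>Product X vs. Product Y: Which should you choose?</h1>
  <!-- Answer-first summary: the first extractable text an AI crawler sees -->
  <p class="tldr">
    The main difference between X and Y is that X is best suited for
    enterprises with complex and advanced permission stacks, while Y is
    best suited for startups looking for rapid deployment and
    pay-as-you-go pricing.
  </p>
  <!-- Detailed comparison sections follow -->
</article>
```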


Chunk it


Neither bots nor people like to ingest long blocks of text. General-purpose AI models struggle with wall-of-text narratives but excel at extracting data from clearly labeled sections. This means that breaking content into logical, self-contained blocks with clear headers (H2s and H3s) will make it much more citable.
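As an illustrative sketch, a chunked comparison page might be structured like this (the section topics are hypothetical):

```html
<h2>Pricing: Product X vs. Product Y</h2>
<h3>Product X pricing</h3>
<p>Self-contained summary of X's pricing model.</p>
<h3>Product Y pricing</h3>
<p>Self-contained summary of Y's pricing model.</p>

<h2>Security and permissions: Product X vs. Product Y</h2>
<h3>Product X permission controls</h3>
<p>Self-contained summary of X's permission stack.</p>
<h3>Product Y permission controls</h3>
<p>Self-contained summary of Y's permission stack.</p>
```

Because each H2 block stands on its own, an answer engine can quote one section without needing the rest of the page for context.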


Tables as databases


LLMs rely heavily on structured data to avoid hallucinations, so they are drawn to tables that contain it. Tables are an effective tool for communicating with AI bots probing your website because they provide a clear, structured set of categorized entity relationships. They allow the algorithms to turn the data in your comparisons into knowledge graphs, mapping entities, attributes, and the relationships between them to generate answers for user queries.
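For example, a bare-bones comparison table could look like the following (the attributes and values are hypothetical placeholders):

```html
<table>
  <caption>Product X vs. Product Y, as of January 2026</caption>
  <thead>
    <tr>
      <th scope="col">Attribute</th>
      <th scope="col">Product X</th>
      <th scope="col">Product Y</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Best for</th>
      <td>Enterprises</td>
      <td>Startups and SMBs</td>
    </tr>
    <tr>
      <th scope="row">Pricing model</th>
      <td>Annual contract</td>
      <td>Pay-as-you-go</td>
    </tr>
    <tr>
      <th scope="row">Permission controls</th>
      <td>Advanced, role-based</td>
      <td>Basic</td>
    </tr>
  </tbody>
</table>
```

The semantic elements (a caption plus scoped header cells) hand crawlers explicit entity-attribute-value triples instead of an undifferentiated grid.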


Mark up your schema


While tables help AIs read the data, schema markup helps them understand the entities involved and their attributes. For comparison pages, you can layer standard Product and FAQPage schemas to tell bots exactly what is being compared. This is especially critical in helping AI agents distinguish reviews from product specification pages.
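As a minimal sketch of that layering in schema.org's JSON-LD syntax (the names, descriptions, and Q&A text are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Product",
      "name": "Product X",
      "description": "Enterprise platform with advanced, role-based permission controls."
    },
    {
      "@type": "Product",
      "name": "Product Y",
      "description": "Lightweight platform with pay-as-you-go pricing for startups."
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What is the main difference between Product X and Product Y?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Product X is best suited for enterprises with complex permission stacks, while Product Y suits startups that need rapid deployment and pay-as-you-go pricing."
          }
        }
      ]
    }
  ]
}
</script>
```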


With Duda, you can deploy AI-friendly comparisons at scale across multiple clients, as the platform automatically handles schema for many widgets (like FAQs and Product pages), eliminating manual code updates for AEO.


Making comparison content work with Duda


Comparison content is one of the most effective content types of 2026. It doesn’t require expensive video production or abstract creative campaigns. It’s simply about organizing facts in a way that is useful to humans and readable to machines.


When done right (using the structures and ethical guardrails we’ve outlined), comparison pages create a winning content trifecta:


  • For the user: You remove friction at the bottom of the funnel, giving them the confidence to choose your client over the competition.
  • For the client: You turn a source of anxiety (competitors) into a source of authority, controlling the narrative while adhering to ethical and legal best practices.
  • For the bots: You provide the structured data and grounded facts that answer engines seek out, ensuring your client gets cited where it counts.


And the best part for agencies? With the right tools at your disposal, you don’t need to spend time reinventing the proverbial wheel for every client and every page. Duda’s agency-centric website builder platform lets you produce comparison pages at scale without technical overhead.

With native support for Schema markup, dynamic pages for rapid template replication, and AI-assisted online discoverability tools, you can deploy high-performance comparison libraries for your clients in a fraction of the time it takes on legacy platforms.

