Five ways to optimize your clients’ sites for AI search today

October 7, 2025

As AI-driven search reshapes how people find information online, digital marketing agencies need to adapt faster than ever. The shift from traditional keyword-based SEO to generative search experiences means websites must be not only visible to search engines, but also citable by large language models and answer engines such as ChatGPT, Perplexity, Bing Copilot, and Google’s AI Overviews. In this emerging landscape, your goal is no longer just to earn a spot on page one; rather, it’s to have your clients’ content become the trusted material AI systems quote, summarize, build upon, and, hopefully, recommend.


Optimizing for AI search requires a strategic mix of technical accessibility, structured content, credibility, conversational alignment, and continuous iteration. The following five approaches outline how agencies can help clients adapt right now, using tactics that are already proving effective.


1. Make sure AI systems can access, and understand, your content


The foundation of AI visibility begins with technical accessibility. If an AI crawler can’t reach, render, or interpret your client’s pages, no amount of content polish will matter. Start by confirming that robots.txt directives and meta-robots tags allow known AI crawlers, like GPTBot, Claude-Web, and CCBot, to access relevant sections of the site. Some organizations are beginning to experiment with an emerging file called llms.txt, which works similarly to robots.txt but provides explicit instructions for large language models. Including such a file helps communicate which pages are suitable for training or citation.
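As a rough illustration, a robots.txt that welcomes the AI crawlers named above might look like the sketch below. The `/account/` path is a hypothetical private area, not a required rule:

```
# Hypothetical robots.txt: allow known AI crawlers to reach public content
User-agent: GPTBot
Allow: /

User-agent: Claude-Web
Allow: /

User-agent: CCBot
Allow: /

# Keep private areas off-limits for all crawlers
User-agent: *
Disallow: /account/
```

The llms.txt proposal is different in spirit: it’s typically a Markdown file that summarizes the site and points models at the pages you most want them to read. The format is still emerging and unstandardized, so treat this as a sketch with placeholder names and URLs:

```
# Example Client Co.
> A one-paragraph, plain-language summary of what this site offers and who it serves.

## Key pages
- [Services](https://example.com/services): What we offer and for whom
- [Case studies](https://example.com/case-studies): Results backed by real data
```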


Next, review canonicalization and redirect structures to ensure that AI systems can identify the preferred version of each URL. Misconfigured canonical tags or long redirect chains can obscure content from crawlers, particularly those used by AI systems. It’s also worth checking whether valuable resources sit behind login screens, JavaScript-only rendering, or paywalls. Many AI crawlers cannot execute JavaScript or navigate authenticated environments. Providing prerendered snapshots or server-rendered alternatives can solve this issue.
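For example, each indexable page can declare its preferred version with a single canonical tag; the URL here is hypothetical:

```html
<head>
  <!-- Tell crawlers, AI or otherwise, which URL is the preferred version of this page -->
  <link rel="canonical" href="https://example.com/guides/ai-search" />
</head>
```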


Top Tip
Duda generates crawler-friendly robots.txt and llms.txt files automatically, right out of the box, for every site you build. Additionally, Duda’s use of server-side rendering minimizes the amount of JavaScript necessary to view your content, making your sites even more accessible to AI crawlers.


Finally, test visibility directly by entering target queries or page titles into AI search platforms. If the client’s content never appears as a cited or referenced source, start by diagnosing accessibility. This technical groundwork ensures that AI systems can actually evaluate and learn from the site’s information.


2. Structure your content to be AI-readable and human-friendly


Once accessibility is handled, the next step is to make the content intelligible to AI. Large language models parse structure, hierarchy, and semantics to determine what information is reliable and relevant. Agencies should guide clients toward producing well-organized content that’s easy to parse at scale.


Strong heading hierarchies, clear introductions, and concise summaries all contribute to better comprehension. Placing a brief, direct answer near the top of each page or section helps AI identify definitive statements. According to Conductor, content following this “answer-first” model is more likely to be featured in generative summaries. Similarly, content broken into clear subheadings, short paragraphs, and semantic groupings gives AI the signals it needs to extract and reformat information confidently.
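In practice, the “answer-first” pattern can be as simple as a direct, quotable summary immediately under the main heading, with subtopics broken out beneath it. A minimal sketch, with hypothetical headings and copy:

```html
<h1>What is generative engine optimization?</h1>
<!-- Answer-first: a brief, direct answer before any supporting detail -->
<p>Generative engine optimization (GEO) is the practice of structuring
   content so AI systems can extract, summarize, and cite it accurately.</p>

<h2>Why it matters for agencies</h2>
<h2>How it differs from traditional SEO</h2>
```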


Structured data is another underused tool for AI optimization. Schema markup in JSON-LD, such as Article, FAQPage, or HowTo, tells both search engines and generative systems exactly what a page represents. Combined with semantic internal linking between related subtopics, schema reinforces topical authority. Agencies should also encourage clients to regularly refresh content with update dates or version labels; AI systems often prioritize newer or recently maintained information.
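As a sketch of what that markup can look like, here is a minimal FAQPage example in JSON-LD; the question and answer text are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How does AI search affect SEO strategy?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "AI search shifts the goal from ranking on a results page to being cited inside generated answers."
    }
  }]
}
</script>
```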


Top Tip
Duda generates and embeds many relevant schemas automatically.


These changes don’t require rewriting entire websites. They involve tightening structure, clarifying hierarchy, and adding contextual signals that make it easier for AI models to cite a source confidently.


3. Strengthen credibility with transparent data and attribution


Generative engines prefer content that’s trustworthy and verifiable. As misinformation concerns rise, models increasingly favor material that cites reputable sources and demonstrates author expertise. For agencies, this means coaching clients to embed clear evidence of authority and transparency throughout their sites.


Whenever possible, content should include specific data points with visible attribution. Replace generic statements like “most marketers agree” with sourced facts such as “According to Litmus’s 2024 Email Benchmark Report, email marketing generates $42 in return for every $1 spent,” an example highlighted by Semrush. Data-backed claims are more likely to be cited by AI systems. Additionally, including author bios with professional credentials, especially in specialized industries such as finance, health, or law, adds signals of expertise and accountability.


Quoting or referencing established organizations, even when paraphrased, can also help contextualize authority. AI models tend to link such names and citations to existing knowledge graphs, improving discoverability. Beyond on-page optimization, offsite credibility remains crucial: guest posts, media mentions, and digital PR placements create external references that models recognize, even if they are not traditional backlinks.


Transparency about methodology and freshness also matters. Marking when data was last updated, outlining editorial review processes, or noting the sample size behind a study all contribute to the perception of reliability. In the AI-driven ecosystem, credibility and verifiability are the new currency of ranking power.


If this sounds familiar, that’s because it closely mirrors Google’s E-E-A-T guidelines (Experience, Expertise, Authoritativeness, and Trustworthiness) that SEOs have long been familiar with.


4. Align content with AI query patterns instead of traditional keywords


Generative search systems analyze language differently than conventional engines. They don’t just look for matching keywords; they interpret intent, context, and phrasing in natural language. Optimizing for AI search therefore requires aligning with how users ask questions conversationally, not just how they type them.


Agencies can begin by studying how people phrase questions within AI chat interfaces. Running prompt variations in tools like ChatGPT, Perplexity, or Claude reveals which wording produces the most relevant citations or responses. These insights inform how to phrase headings, titles, and introductory sentences on clients’ sites. Instead of relying on rigid keyword lists, focus on the semantic field around key concepts.


Framing headings as direct questions, such as “How does AI search affect SEO strategy?”, helps AI systems recognize answer relationships. Including concise, well-formulated answers beneath each heading reinforces that link. Tests by 97th Floor found that content built around natural-language questions tends to appear more often in AI answer sets.


Covering adjacent subtopics also pays dividends. Because AI models aim to synthesize complete responses, they favor pages that address a cluster of related queries. Expanding coverage to include common follow-ups (“What are the pros and cons?” or “How does it compare to traditional search?”) can help a single article serve multiple prompt variations.
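Put together, a section built this way might look like the following sketch; the headings and answer copy are hypothetical:

```html
<h2>How does AI search affect SEO strategy?</h2>
<p>It shifts the goal from ranking on a results page to being quoted
   inside generated answers, which rewards structure and verifiability.</p>

<!-- Adjacent subtopics cover likely follow-up prompts -->
<h3>What are the pros and cons of optimizing for AI search?</h3>
<h3>How does AI search compare to traditional search?</h3>
```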


Finally, use semantic keyword variations and related terms rather than repetitive phrasing. AI systems understand synonyms and context, so diversity of expression strengthens topical depth. Agencies that train writers to think conversationally, not mechanically, will see better results in generative visibility.


5. Measure AI visibility and iterate based on real feedback


Because AI search is evolving rapidly, optimization cannot be a one-off project. Agencies need a feedback loop to measure performance and adjust strategies quickly. Traditional SEO metrics such as rank position or click-through rate still matter, but they only tell part of the story.


Start by tracking when and where your clients’ content appears as a cited source in generative systems. New AI search tools, like Semrush’s suite of AI tools, can detect citations across platforms like Google’s AI Overviews, Bing Copilot, and Perplexity. Even without such tools, you can manually test queries and record which prompts lead to visible citations or source mentions (ideally with memory features disabled, so past conversations don’t skew the results). Over time, this builds a reference library of prompt phrasing that performs well.
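Even a lightweight log is enough to start that reference library. A hypothetical format, with placeholder prompts and URLs:

```
date,platform,prompt,client_cited,cited_url
2025-10-07,Perplexity,"best practice management software for dentists",yes,https://client.example/software-guide
2025-10-07,ChatGPT,"practice management software for dental clinics",no,
```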


It’s also useful to monitor shifts in branded search demand. When AI systems repeatedly reference a client’s brand, users often start including it in their own search queries. Increases in branded search traffic can be an indirect sign of improved AI visibility.


Engagement metrics provide another layer of insight. Even if AI search reduces overall organic clicks, those who do visit may be more qualified. Track dwell time, scroll depth, and conversion actions rather than raw traffic volume. Periodically refreshing older content (updating examples, refining structure, or clarifying data) can also help maintain AI relevance.


Agencies should treat this as an iterative process, much like conversion rate optimization: experiment, test, and refine. The goal is not perfection in a single update, but consistent improvement informed by observation.


A roadmap for agencies


For agencies managing multiple clients, these optimization principles can be scaled efficiently through process design. Begin by identifying the high-impact content that already performs well in search or serves as a key lead driver. Run an AI accessibility audit on those pages to detect any crawling or rendering barriers. Once resolved, layer on structural improvements such as concise summaries, schema markup, and internal linking that clarifies topic clusters.


Next, enrich each page with verifiable data and clear attribution, integrating updated research or statistics where possible. Follow this by testing prompt variations across AI platforms to see whether your clients’ content is cited. Document the successful phrasing, then replicate the format on similar pages. Over time, these insights form the basis of an internal playbook—a repeatable framework for AI visibility.


Finally, build regular performance reviews into client reporting. Even simple monthly tests across major AI search systems can provide valuable trend data. Sharing these insights with clients helps position your agency as a forward-looking partner prepared for the realities of generative search.


From ranking to referencing


AI search is already changing how people discover and trust information. According to Xponent21, Google’s AI Overviews now appear in more than half of all search results, a number that continues to grow. Google is also continuing to invest in an AI-only version of its search engine interface, AI Mode, emphasizing summary responses over links. The shift toward zero-click, conversation-led discovery is well underway.


For digital marketing agencies, this means the line between SEO, content strategy, and data credibility has blurred. Traditional optimization for rank alone is no longer enough. The new objective is to make your clients’ content accessible, understandable, and authoritative enough to be used, not just found, by AI systems.


By focusing on five key areas—technical accessibility, structured content, verified credibility, conversational alignment, and adaptive iteration—agencies can help clients remain visible and trusted in this changing environment. Generative engine optimization is not a replacement for SEO; rather, it’s SEO’s next evolution. The agencies that master it first will define what digital visibility means in the AI era.

