AI browsers and the web of the future

December 16, 2025

AI is everywhere, and it’s changing the way people interact with websites. By late 2025, 46% of US adults online were already using generative AI tools for search and everyday tasks. The web browser, our main interface with the web for two decades, is now in the crosshairs of tech giants looking to bring their AI services to mass adoption. That means the way websites are built will have to change too, adapting to serve two audiences: AI engines and human users.


For marketers, this is a tectonic shift comparable only to the arrival of connected mobile devices. But where mobile meant smaller screens, AI browsers promise something bigger: turning websites into little more than an interface for robots to crawl and act on.


Before we discuss the role of digital marketing agencies in this transition, it’s worth defining what is what in the evolving landscape of user-AI-web interfaces.


AI browsers vs AI search vs AI agents


For years, a user agent like “Chrome on Android” in your analytics report was indicative of a real person looking at a page for a certain length of time. With AI services involved, that’s no longer the case. Your website now interacts with the end user through multiple AI touchpoints, and in some instances, it is never seen by a human at all. Let’s start by understanding what each does, and how we, marketers, see it (or don’t) in our website analytics dashboards.


AI search


The most-discussed topic in digital marketing circles is the impact of AI search on traditional organic search traffic. Some of us are already feeling the drop as Google AI Overviews drastically reduce CTRs. But how does it actually work?


AI search results are generated by the various engines from information collected by their own crawlers (think Googlebot on steroids), which regularly ingest and refresh your content. Your websites become both training data and live reference material, shaping the answers people see (even if they never click through).
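Whether those crawlers can reach a given site is still governed the old-fashioned way: by robots.txt. As a minimal sketch (the bot names below are commonly reported AI crawler user agents, not an exhaustive or authoritative list; verify current names in each vendor’s documentation), Python’s standard library can check the rules:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt that welcomes everyone but blocks one
# AI crawler from a members-only section of the site.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /members/

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Bot names are assumptions based on publicly reported AI crawlers.
for bot in ["GPTBot", "PerplexityBot", "Googlebot"]:
    for path in ["/pricing", "/members/area"]:
        allowed = parser.can_fetch(bot, f"https://example.com{path}")
        print(f"{bot:15} {path:15} allowed={allowed}")
```

Here GPTBot is refused `/members/area` but may fetch `/pricing`, while every other bot falls through to the permissive `User-agent: *` block.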


AI browsers


ChatGPT Atlas, Perplexity’s Comet, Dia, and even Firefox’s upcoming AI Window are each, in their own way, trying to bake the answer layer into the browser itself. Users still see sites in a traditional browser window, but an AI reads the DOM, summarizes content, compares tabs, and often becomes the primary “interface” to website content. Early adopters are already boasting about replacing human assistants with an AI web browser.


In your logs, these sessions will appear as regular desktop or mobile browser traffic, but the user experience of the website will be drastically different from that of a non-AI browser.


AI agents


Atlas’s Agent Mode, Comet’s agentic workflows, and ChatGPT tasks are a different beast altogether. These are the “doers” that perform actions on behalf of users. AI agents can spawn virtual or headless browser instances that click, scroll, fill out forms, and submit them the way a user would, and they can also interact with service APIs. AI agents can be built into an AI browser (as in Comet and Atlas) or launched from AI assistant applications and web interfaces.


From an analytics and measurement perspective, these can be the most confusing participants in customer journeys. Some run locally in a user’s browser, others run in the cloud under generic “ChatGPT-User” or headless Chrome user agents.
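A first step toward untangling these sessions is tagging log lines by user-agent string. Here is a minimal sketch; the marker substrings are assumptions based on commonly reported AI user agents, and real ones change often, so the list should be maintained from your own logs:

```python
# Rough classifier for server-log user-agent strings.
# Marker substrings are illustrative, not an authoritative registry.
AI_MARKERS = {
    "ai_fetcher": ["ChatGPT-User", "GPTBot", "PerplexityBot", "OAI-SearchBot"],
    "headless": ["HeadlessChrome"],
}

def classify_user_agent(ua: str) -> str:
    """Return 'ai_fetcher', 'headless', or 'human' for a UA string."""
    for label, markers in AI_MARKERS.items():
        if any(marker in ua for marker in markers):
            return label
    return "human"

print(classify_user_agent(
    "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); ChatGPT-User/1.0"
))  # → ai_fetcher
```

Even a coarse bucket like this lets you segment “ghost” sessions out of conversion reports before drawing conclusions about human behavior.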


| | What it is | How it works | What you see in dashboards |
|---|---|---|---|
| AI search | Answer engines using your clients’ content as input | Crawlers/fetchers hit pages directly | Bot traffic & crawl logs, zero-click queries |
| AI browsers | Browsers with built-in AI assistants | Real users load pages, assistant summarizes the content | Normal browser sessions, often with atypical behavior patterns |
| AI agents | Task-doers acting like users | Virtual/headless or embedded browsers mimic user flows | “Ghost” sessions and conversions with thin or unusual click trails |


The role of marketing agencies in an AI-mediated web


The client journey is mutating. In some AI-led journeys, the human barely sees your clients’ sites at all. That means whoever owns and maintains a site’s structure and content now plays a more central role than before. In day-to-day agency work, AI-led browsing trends will shape how you build, how you measure, and how you keep the “digital source of truth” alive.


Building websites (also) for agents


AI engines of all kinds treat your clients’ sites as a kind of API for the information they ingest. They crawl the site to pull structured data and execute flows that a user would normally perform, all without your beautiful designs and thought-out layouts ever being admired by human eyes. That doesn’t mean the brand and products turn invisible, or that you need to build a second, secret site for machines.


The good news is that you’re already doing 90% of the work of making sites agent-friendly by ensuring your clients’ sites are well-structured, fast, accessible, filled with original content, and SEO-sane. In reality, it’s what both humans and AI agents want: a clear, low-friction description of what’s true, what’s available, and how to get it.


It’s worth noting that researchers and businesses alike are pushing to standardize explicit, high-level actions like “add to cart” or “book appointment” that AI agents can confidently invoke (instead of brittle pixel-level clicking). OpenAI and Stripe’s Agentic Commerce Protocol (ACP) already lets users complete a purchase entirely through the ChatGPT interface.
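To make the idea of an “explicit, high-level action” concrete, here is a hypothetical sketch of what a machine-readable action description could look like. Every field name and the endpoint URL are invented for illustration; this is not the actual ACP schema, which lives in OpenAI and Stripe’s specification:

```python
import json

# Hypothetical machine-readable action a site might expose to agents.
# Field names and URL are invented for illustration only.
add_to_cart_action = {
    "action": "add_to_cart",
    "description": "Add a product to the shopping cart",
    "parameters": {
        "product_id": {"type": "string", "required": True},
        "quantity": {"type": "integer", "required": False, "default": 1},
    },
    "endpoint": "https://example.com/api/cart/items",  # placeholder
    "method": "POST",
}

print(json.dumps(add_to_cart_action, indent=2))
```

The point of such a declaration is that an agent can call a stable, documented endpoint instead of guessing which pixel is the “Add to cart” button.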


Rethinking analytics and attribution


Getting a complete picture of the buyer journey across digital touchpoints was always a challenge, but with AI in the picture and zero-click behaviors on the rise, things are more complex than ever. AI surfaces increasingly satisfy intent on the results page or inside an assistant, while sending only incremental traffic to websites.


That creates an AI-driven “dark funnel”: decisions are shaped by your content, but the exploration happens inside AI Overviews and AI chat apps, not on pages with your analytics pixels. This makes classic last-click attribution not just noisy but dangerously misleading.


In response, a mini-industry of “AI visibility” tools is emerging to track how often brands get cited or recommended in AI answers, and how changes to content or schema affect that visibility. At the same time, C-suite guidance is nudging marketers to compensate for missing click-level data with more resilient strategies. This makes it the agency’s job to explain the shift and communicate the response plan that pairs traditional metrics (revenue, conversion, branded demand) with a small set of AI-aware indicators and experiments.
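One simple AI-aware indicator is a brand’s citation rate across a fixed sample of AI answers. A toy sketch, with hard-coded stand-in answers and invented brand names (in practice you would collect real responses from the AI surfaces you care about):

```python
# Toy "AI visibility" metric: share of sampled AI answers that
# mention the brand. Answers and brand names are invented examples.
sampled_answers = [
    "For small agencies, Acme Sites and BuildKit are popular choices.",
    "Most reviewers recommend BuildKit for eCommerce stores.",
    "Acme Sites offers strong Core Web Vitals out of the box.",
]

def citation_rate(brand: str, answers: list[str]) -> float:
    """Fraction of answers mentioning the brand (case-insensitive)."""
    hits = sum(brand.lower() in a.lower() for a in answers)
    return hits / len(answers)

print(round(citation_rate("Acme Sites", sampled_answers), 2))  # → 0.67
```

Tracked over time and before/after content changes, even a crude rate like this turns “AI visibility” from an anecdote into an experiment.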


Maintaining your clients’ digital source of truth


Much like search engine indexing bots, generative AI systems don’t crawl once and forget a site exists. They keep coming back, but instead of indexing, they use your clients’ websites as a rolling reference file. The signals these bots use to evaluate content are also familiar: freshness, originality, consistency, and credibility all factor in before content is surfaced to users. If key pages are thin or outdated, models will either downrank the sites or, worse, start making guesses and generating hallucinations with your clients’ logos on them.


To serve AI bots accurate and fresh data, you must look at websites the way they do: as a living knowledge system. This is where AEO/GEO guidance and AI SEO playbooks converge on the same basics: consolidate answers to common questions, keep core facts current (pricing, availability, terms, locations, etc.), and structure them in formats that answer engines and AI browsers reliably ingest.
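One such format is schema.org FAQPage markup, which consolidates common questions into structured data crawlers can parse directly. A minimal sketch that generates the JSON-LD from plain question/answer pairs (the questions and answers themselves are invented examples):

```python
import json

# Build schema.org FAQPage JSON-LD from plain Q&A pairs.
# The Q&A content is an invented example.
faqs = [
    ("Do you offer weekend support?", "Yes, via email on Saturdays."),
    ("Where are you located?", "Springfield, with remote service nationwide."),
]

def faq_jsonld(pairs):
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_jsonld(faqs), indent=2))
```

Generating the markup from a single source of truth (a CMS field, a spreadsheet) keeps the human-readable FAQ and the machine-readable one from drifting apart.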


From an agency perspective, this only strengthens the case for a robust content ops strategy that can deliver at scale. For clients, it means tighter collaboration among internal content owners. Knowledge-base teams already treat freshness as critical to preventing bad AI answers; business websites now need the same discipline.


How Duda helps you stay ahead


AI is all the rage, and it may reach mass adoption faster than any digital tool before it. But new interfaces will come and go. Underneath all that, the pattern is clear: your clients still need a fast, reliable, discoverable, well-structured website that tells the truth about their business and makes client-agency collaboration smooth.


If you’re building on Duda, a lot of that groundwork is already handled for you. The platform keeps pushing Core Web Vitals performance to the top of the CMS pack, so sites load quickly and stay stable as standards tighten. It bakes in structured data support, SEO tooling, and accessibility features. On top of that, Duda’s AI Stack and SEO overview help you generate and maintain titles, meta descriptions, alt text, and on-page copy inside a governed editor, not a black box.


In an AI-mediated web, the combination of solid infrastructure, human judgment, and governed ease of use is exactly why your clients choose you to build and maintain their websites.

