How Duda used the Speculation Rules API to boost the user experience of its runtime sites

January 13, 2026

Website performance is no longer optional. It is one of the main reasons to choose one website builder over another. When every millisecond matters, building fast and reliable websites is not just a technical goal but a key part of your clients' business success. Strong performance drives user satisfaction, increases engagement, and improves search visibility. All of these factors are essential for growth.


In the competitive web landscape, superior site performance is key to success. It not only enhances the visitor experience but can also improve search engine rankings.


A high-performing website loads quickly, maintains a stable layout that prevents user frustration, and provides smooth, responsive interactions. These three key measurements, defined by Google, are known as Core Web Vitals (CWV).

Since their introduction, the Core Web Vitals have evolved, and today they are defined by the following three metrics:


  • Largest Contentful Paint (LCP), which measures loading speed of the main content
  • Interaction to Next Paint (INP), which measures responsiveness to user input
  • Cumulative Layout Shift (CLS), which measures visual stability


Duda leads the market in website performance. According to Google’s latest Core Web Vitals Technology Report, an impressive 85% of Duda-powered sites achieve “good” Core Web Vitals scores, the highest among website building platforms:


A graph comparing the percentage of websites with good Core Web Vitals scores across multiple website technologies. In order, the image shows that 85% of sites built on Duda have good Core Web Vitals scores, 44% of sites built on WordPress, 76% of sites built on Shopify, 73% of sites built on Wix, 64% of sites built on Squarespace, and 50% of all websites on average. The data is dated 09-01-2025.


This leadership isn’t by chance. It's the result of Duda’s relentless focus on speed, stability, and user experience at scale. Every optimization, from infrastructure to runtime innovations, is designed to ensure that sites built on Duda consistently perform at the top of the web.


One of the tools Duda uses to achieve these results is the Speculation Rules API. This article explains how Duda applies the Speculation Rules API to improve future site navigation and enhance LCP scores. It also discusses the challenges faced and shares data to help you decide whether this API could improve your own site's performance.


Ready to dive in? Excellent. But first, let's understand what the Speculation Rules API is.





Speculation Rules API


From MDN:


“The Speculation Rules API is designed to improve performance for future navigations. It targets document URLs rather than specific resource files, and so makes sense for multi-page applications (MPAs) rather than single-page applications (SPAs).”


This feature allows site authors to control the loading strategy for different pages. For example, an author can specify that when a user hovers over a link to the About page, the browser should prefetch its content. This ensures that when the link is clicked, the page's assets are already loaded, providing a seamless transition and faster page loading.


The feature replaces the <link rel="prefetch"> hint and the Chromium-only, deprecated <link rel="prerender">. Although it is often described as an “experimental” API, Duda has deployed it with careful rollout and monitoring, taking advantage of its support in modern Chromium-based browsers.
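
Because support varies across browsers, it is worth feature-detecting the API before relying on it; the standard check is the static HTMLScriptElement.supports() method. A minimal sketch, with the constructor passed in only to keep the function testable outside a browser:

```javascript
// Returns true when the given script element constructor reports support
// for <script type="speculationrules">. In a real page you would call
// supportsSpeculationRules(HTMLScriptElement).
function supportsSpeculationRules(scriptCtor) {
  return typeof scriptCtor !== "undefined" &&
    typeof scriptCtor.supports === "function" &&
    scriptCtor.supports("speculationrules");
}
```

In unsupported browsers the rules are simply ignored, so this check mainly matters if you want to fall back to another strategy.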


The Speculation Rules API offers a simple, direct configuration: you specify the URLs to act on (as a list of URLs, CSS selectors matching anchor elements, or document rules that identify URLs on the page), the desired strategy (prefetch or prerender), and the "eagerness" level that will trigger it.


While there are multiple implementation methods (which will be explored in more detail later), a basic implementation using a script tag would appear as follows:


<script type="speculationrules">
{
  "prefetch": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/logout" } }
      ]
    },
    "eagerness": "moderate"
  }]
}
</script>



In the example above you can see the configuration parts mentioned: the strategy ("prefetch"), the URL-matching rules ("where"), and the triggering "eagerness", which is set to "moderate".
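
For a platform that needs to vary rules per site, the same structure can be generated programmatically. A hypothetical helper (the function name and shape are mine, not Duda's) that builds the rules object shown above:

```javascript
// Build a speculation rules object from a strategy ("prefetch" or
// "prerender"), a list of excluded paths, and an eagerness level.
function buildSpeculationRules(strategy, excludedPaths, eagerness) {
  // Match every same-origin URL, then subtract the exclusions.
  const conditions = [{ href_matches: "/*" }];
  for (const path of excludedPaths) {
    conditions.push({ not: { href_matches: path } });
  }
  return {
    [strategy]: [{ where: { and: conditions }, eagerness }],
  };
}

const rules = buildSpeculationRules("prefetch", ["/logout"], "moderate");
// JSON.stringify(rules) is what goes inside <script type="speculationrules">
```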


You can read more about Chrome's implementation of Speculation Rules here.


Initial approach


Our strategy was straightforward: implement the <script type="speculationrules"></script> on our runtime sites, initially under a feature flag, and then observe the outcomes. The configuration needed to be versatile enough for all sites, yet carefully designed to avoid the risks associated with prerendering pages (a topic we'll explore further in this article).


Here's the initial configuration we utilized:


<script type="speculationrules">
{
  "prerender": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/logout" } },
        { "not": { "href_matches": "/signin" } },
        { "not": { "selector_matches": ".no-prerender" } },
        { "not": { "selector_matches": "[rel~=nofollow]" } }
      ]
    },
    "eagerness": "moderate"
  }]
}
</script>


We implemented a prerender rule for all paths while excluding those with potential undesired implications like /logout, and incorporating selector matching. Although Google suggests a milder prefetch strategy initially, we opted for a more aggressive approach, anticipating greater performance benefits, and committed to close monitoring.


As we awaited the results from our Core Web Vitals monitoring tools, we started observing a growing number of complaints from the field…


Speculation Rules & Google crawlers


While investigating customer reports about unexpected URL paths, such as /logout, being crawled by Googlebot and appearing in Google Search Console, even for sites without a logout page, we identified a key connection. These paths exactly matched those configured in our Speculation Rules. This finding led us to disable the speculation feature and conduct a thorough investigation into the underlying issue.


Given the Speculation rules feature is relatively new, information online was scarce. This prompted an unconventional approach: reaching out to key individuals who might offer assistance. Barry Pollard, a Web Performance Developer Advocate at Google and a leading expert on Speculation Rules, was a primary contact. In an age where such influential figures are accessible via numerous channels, I took a chance and directly messaged Barry on X (formerly Twitter) with a bit of "chutzpah" to inquire about our situation.


Initially, Barry was skeptical about the connection to Speculation Rules. However, once I explained that disabling the feature stopped the crawling of these missing paths, and that our internal logs showed Googlebot reaching these pages, Barry was convinced. He then conducted his own experiment, confirming a real issue, though one tied not to the Speculation Rules themselves but to how crawlers operate. Specifically, when a crawler encounters a URL-like path in a document, whether it's an actual href or a path written as part of a rule (as in our case), it attempts to access it. Essentially, crawlers follow anything that could potentially be a link.


Crawling of "potential links" is not a problem per se, but it does create noise in Google Search Console when the "links" fail with a 404, for example when a site has no /logout URL even though that path appears in the generic Duda rules.


Given this understanding, we sought a solution. While excluding URLs not present on a given site from the speculation rules, or adding paths to the robots.txt file, offers an immediate fix, it's not scalable for Duda. This approach would necessitate site-specific definitions and complex logic to differentiate valid from invalid paths. Our goal was a swift, universal application that would deliver immediate performance benefits across all sites.


So, Barry came up with an alternative that avoided the issue: defining Speculation Rules using an HTTP header. This way, the rules wouldn't be tacked onto the actual document, keeping them out of crawlers' reach.


Define Speculations Rules in an HTTP header


To define Speculation Rules via an HTTP header, include a Speculation-Rules header in the site's document response. The value of this header should be a URL pointing to a JSON file (or multiple files) that outlines the rules. The content of this JSON file is identical to what would be placed within a script tag (as described previously). 


An example of such a header:


Speculation-Rules: "https://static.cdn-website.com/speculations/rules/prerender-1.0.3.json"
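
On the server side this amounts to adding one response header to the HTML document; note that the referenced JSON file itself should be served with the application/speculationrules+json MIME type. A sketch, with a hypothetical Express-style middleware shown in comments (the framework and file URL are assumptions, not Duda's actual stack):

```javascript
// Build the header name/value pair; the spec requires the URL in quotes.
function speculationRulesHeader(rulesUrl) {
  return { name: "Speculation-Rules", value: `"${rulesUrl}"` };
}

// Hypothetical Express middleware attaching the header to document responses:
// app.use((req, res, next) => {
//   const h = speculationRulesHeader(
//     "https://static.cdn-website.com/speculations/rules/prerender-1.0.3.json"
//   );
//   res.set(h.name, h.value);
//   next();
// });
```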


You can read more about it in detail here.


Once the browser spots this header, it sets up the speculation rules. Now's a good time to hit pause and figure out how we can confirm these rules are actually being recognized and put to use.


Debugging Speculation Rules


Once you've got your speculation rules set up, you can check them in DevTools. Just head over to the "Application" tab and find the "Speculative loads" section. You should see something like this:


A screenshot of Chrome DevTools, showing speculation rules.


This means that the browser accepted our Speculation Rules. If you want to know whether these rules are being respected and acted upon, open the "Speculations" section and test it.


When you hover over a link that the speculation rules have identified, its status goes from "Not triggered" to "Running" and then to "Ready", which means the page was prerendered and navigation to it will now be much quicker.


We reactivated the feature after resolving the crawling issue and confirming the speculation rules were active. We closely monitored its performance, attentive to real-world feedback. Alas, something was still not working as expected.


Some bumps on the way


We previously encountered an issue in Microsoft Edge for Windows, where pre-rendered pages containing videos could cause the browser to freeze and crash. This was related to Edge’s “Preload pages for faster browsing and searching” feature. The problem has since been fully resolved by the Edge team.


You can find more details about this bug in this thread: Pre-rendered pages with videos don't load on MS Edge on Windows.


With the bug resolved, we were able to reactivate the feature and monitor its performance.


Improvements in LCP 


Speculation rules are anticipated to positively impact the Largest Contentful Paint (LCP), which measures the loading speed of a page's main content. We have observed promising improvements in our internal LCP scores, a trend also confirmed by data shared by the Google team with whom we are collaborating.


The feature, launched in mid-June, has led to a gradual improvement in LCP scores, as illustrated by the weekly distribution chart below. This gradual change is directly tied to the number of sites republished since the Speculation rules were enabled.


A graph showing improvements in LCP over time.


We used the CrUX History API's navigation type breakdown to measure the shift. We saw navigations shifting from "navigate" to "prerender", indicating that the Speculation rules were functioning as intended.
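
As a sketch of what such a query might look like, here is a request body for the CrUX History API asking for LCP and the navigation-type breakdown (the endpoint and metric names are taken from the public CrUX API documentation as I understand them; an API key is required, and this is not necessarily the exact query Duda used):

```javascript
// Build the request body for a CrUX History API query.
// Assumed endpoint (per the public docs):
// POST https://chromeuxreport.googleapis.com/v1/records:queryHistoryRecord?key=API_KEY
function buildCruxHistoryQuery(origin) {
  return {
    origin,
    formFactor: "PHONE",
    metrics: ["largest_contentful_paint", "navigation_types"],
  };
}

const body = buildCruxHistoryQuery("https://example.com");
// Send with fetch(endpoint, { method: "POST", body: JSON.stringify(body) })
```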


Navigation type categorizes how a page loads, reporting the proportion of page loads across seven distinct types. Several of these types optimize loading performance. For example, pages restored from the bfcache load almost instantly, and the back_forward_cache navigation type tracks these bfcache restores. Similarly, the prerender type signifies a page that was pre-rendered, also resulting in potentially near-instant page loads.
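
One practical detail when measuring prerendered pages: the Navigation Timing entry exposes an activationStart timestamp (zero for normal loads), and user-perceived LCP is typically measured relative to activation rather than to the start of prerendering. A minimal sketch:

```javascript
// User-perceived LCP for a (possibly) prerendered page: time from
// activation, clamped at zero. For normal loads activationStart is 0,
// so the value is unchanged.
function perceivedLcp(lcpTime, activationStart) {
  return Math.max(lcpTime - activationStart, 0);
}

// In the browser, activationStart comes from:
// performance.getEntriesByType("navigation")[0].activationStart
```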


A graph showing changes in Navigation Type over time.


In addition, you can see the positive impact it has on LCP:


A graph showing changes in LCP over time.


The mobile challenge


Our speculation rules are set to "Moderate" eagerness, meaning prerendering is triggered when a link is hovered over or clicked. However, this raises a question for mobile users: what happens when there's no "hover" interaction?


The Chrome team has recognized this limitation and recently updated their Speculation Rules implementation with heuristics that enable speculation on mobile in this case.


The new mobile moderate behavior involves a more complex algorithm, optimized for an effective precision/recall balance:


  • The anchor needs to be within 30% vertical distance from the previous pointer down.
  • The anchor needs to be at least 0.5x as big as the largest anchor in the viewport.
  • We wait 500 ms after the user stopped scrolling.


This enhancement was fully rolled out on August 22nd.


You'll notice that speculation rules positively impacted mobile navigation type, though not as significantly as desktop.


A graph showing changes in Navigation Type over time.


What to be aware of


Pre-rendering involves the browser fetching and rendering a page's assets before actual navigation. This includes downloading and executing scripts, which can sometimes lead to unintended consequences. For instance, pre-rendering a logout page could trigger a logout action simply by hovering over its link. Therefore, it's crucial to exclude pages where pre-rendering might cause undesirable actions, which is why Duda’s default rules specifically exclude pages like the logout page.
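
Beyond excluding risky pages, a page that may itself be prerendered can defer side-effectful code (analytics beacons, state-changing requests) until the user actually navigates to it, using document.prerendering and the prerenderingchange event. A sketch, with the document object injected for testability:

```javascript
// Run `action` immediately on a normal load, or defer it until the
// prerendered page is activated (prerenderingchange fires once on
// activation).
function runAfterActivation(doc, action) {
  if (doc.prerendering) {
    doc.addEventListener("prerenderingchange", action, { once: true });
  } else {
    action();
  }
}

// In a real page: runAfterActivation(document, sendAnalyticsBeacon);
```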


Be aware that pre-rendering and prefetching can lead to an increase in server/CDN hits. This might result in higher traffic numbers without a corresponding increase in actual visits (e.g., hovering over a link doesn't guarantee a click). Therefore, it's crucial to monitor your server hits to ensure they align with your expectations.
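
One way to separate speculative loads from real visits is the Sec-Purpose request header, which Chrome sends on speculative fetches (with values such as prefetch, or prefetch;prerender for prerenders). A hedged server-side sketch for filtering these out of visit analytics:

```javascript
// Detect speculative (prefetch/prerender) requests from the Sec-Purpose
// request header so they can be excluded from visit counts.
// `headers` is assumed to be a plain object with lowercased header names.
function isSpeculativeRequest(headers) {
  const purpose = headers["sec-purpose"] || "";
  return purpose.includes("prefetch") || purpose.includes("prerender");
}
```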


Conclusion


Our journey at Duda with the Speculation Rules API has been a wild ride. We hit a few bumps in the road, like sneaky crawler behavior and a video bug that seemed to have a personal vendetta against the Edge browser, but persisting with the feature proved to be the right call, and our Core Web Vitals scores have benefited greatly from it.


So, what's the takeaway from our Speculation Rules API escapade? While these cutting-edge features are like supercharging your website, you've got to be prepared for a few twists and turns. For other platforms and developers out there, this API, especially with the HTTP header method, can be a fantastic way to spruce up your site's performance and give users a smoother experience. Just be ready to roll up your sleeves and tackle any integration quirks that come your way!



