Google Lighthouse vs Core Web Vitals

November 4, 2021

Of the many ways to measure your website’s performance and user experience, Google Lighthouse and Core Web Vitals stand out as two of the most well-known. This article takes a deep dive into comparing the two approaches to help you understand what they are, how they differ, and when to use each. 

Download Your Lighthouse VS Core Web Vitals Cheatsheet >>

Lighthouse and Core Web Vitals: What Are They? 

Lighthouse is a Google tool that audits the performance, accessibility, SEO, and other best practice quality indicators of your web pages from within a fixed test environment. Based on the audits and checks it carries out, Lighthouse generates a report that you can use to assess performance and improve your web pages. 


The Lighthouse testing environment simulates what it would be like to visit your website on a slower 3G Internet connection using a mobile device. Globally, more than 90 percent of the world’s population has access to a mobile broadband network; however, around 8.5 percent of those users are still on 3G networks. This speed constraint makes any on-site issues that slow pages down stand out, while also reflecting real-world conditions for many users visiting your site. 


Lighthouse also throttles CPU speed to a fixed level when simulating a visit to your website. The current reference device is a Moto G4, which is far less powerful than today’s top-tier smartphones. Using an underpowered CPU in the testing environment reflects the fact that many visitors to your site use older mobile devices. 


Core Web Vitals are a set of three metrics that attempt to measure and summarize the overall user experience of visiting your web pages. These metrics use field data, which contrasts with the controlled testing environment simulated by the Lighthouse tool. This field data is real-world data anonymously sent to Google from actual users when they visit a specific page. 


It’s important to understand that Core Web Vitals became a Google ranking factor in August 2021. The score you get on these metrics can make a difference in how prominently you appear within Google’s own search engine results pages for your targeted search queries. 


Core Web Vitals are not limited to raw speed — here are the three specific areas of user experience that they cover:


  • Loading time: How long does it take for the page’s main content to load?
  • Interactivity: How quickly does your page respond after the user’s first interaction with it?
  • Visual Stability: How stable is your web page within users’ browsers?


Pro tip: You should rely on Core Web Vitals data as much as possible to assess performance and user experience because it’s real-world aggregated data, not point-in-time data based on a controlled set of lab conditions. 



Lighthouse and Core Web Vitals: Metrics and Optimal Thresholds 

So, what are the actual metrics (and their optimal thresholds) used within these two different approaches to measuring performance and user experience?

Lighthouse


The Lighthouse Performance report uses six different metrics as follows:


1) Largest Contentful Paint (LCP): a measure of your page’s loading time that checks how long it takes for the largest above-the-fold element (image, text, etc.) to load. “Above the fold” means that the metric only considers what appears on the page without scrolling down. 


Optimal threshold: Less than 2.5 seconds
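To get a concrete feel for this threshold, here is a minimal sketch that classifies an LCP reading against the published bands (good at or under 2.5 seconds, poor above 4 seconds); the function name is ours, for illustration only:

```javascript
// Classify an LCP value (in milliseconds) against the published
// thresholds: "good" at or under 2,500 ms, "poor" above 4,000 ms,
// and "needs improvement" in between. Illustrative helper.
function rateLcp(lcpMs) {
  if (lcpMs <= 2500) return "good";
  if (lcpMs <= 4000) return "needs improvement";
  return "poor";
}

rateLcp(1900); // a fast hero image: "good"
rateLcp(4500); // a slow page: "poor"
```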

2) Cumulative Layout Shift (CLS): measures the visual stability of a page load by focusing on unexpected layout shifts not caused by a user interaction. The actual CLS calculation multiplies together two measures of movement: impact fraction and distance fraction. The lower the calculated score, the better visual stability your page has. 


Sometimes when a page loads, elements shift around unexpectedly and frustrate users. For example, you might load a page and start reading a paragraph, only for an image to load that pushes the paragraph down the page. Specifying image dimensions is one way to improve your CLS scores (among several other ways). 


Optimal threshold: Less than 0.1
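To make the impact fraction times distance fraction calculation concrete, here is a minimal sketch; the function name and the sample numbers are illustrative:

```javascript
// Score for a single layout shift: the impact fraction (share of the
// viewport affected by the shift) multiplied by the distance fraction
// (how far elements moved, relative to the viewport's largest
// dimension). Function name is ours, for illustration.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// A shift touching half the viewport that moves content a quarter of
// the viewport's height scores 0.5 * 0.25 = 0.125, enough on its own
// to miss the 0.1 "good" threshold. A page's CLS aggregates the
// scores of its unexpected shifts.
const singleShift = layoutShiftScore(0.5, 0.25); // 0.125
```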

3) Total Blocking Time (TBT): measures the total amount of time that a page is blocked from responding to user input. These inputs include mouse clicks or keyboard strokes. 


Optimal threshold: Less than 200 milliseconds

4) First Contentful Paint (FCP): this is the time taken until the first piece of content loads on the page. The content must come from the page’s DOM (Document Object Model). The DOM includes standard page content like images and text. 


Optimal threshold: Less than 1.8 seconds

5) Speed Index: measures the entire loading process for the visual parts of a web page by capturing a video of the page loading and checking the difference between frames. The total duration essentially measures how long it takes to go from blank screen to complete page. 


Optimal threshold: Less than 3.4 seconds

6) Time to Interactive (TTI): measures how long the page takes to become fully interactive so that it reliably and rapidly responds to user inputs. 


Optimal threshold: Less than 3.8 seconds

Core Web Vitals


Two of the three Core Web Vital metrics are the same LCP and CLS measures as Lighthouse uses (and they have the same optimal thresholds). The third metric in Core Web Vitals is First Input Delay (FID), which measures the interactivity and responsiveness of a page. The calculation takes the time from the user’s first interaction, such as clicking a button, until the browser can start responding to that interaction. 


Remember, Lighthouse uses simulated lab data to generate reports while Core Web Vitals scores are based on real user data. FID is only measurable using real-world data because it depends on an actual user’s action. Lighthouse uses the Total Blocking Time metric as a proxy for FID’s value. 


Lighthouse and Core Web Vitals: Performance Scoring 

Lighthouse

As you can see from the Lighthouse scoring calculator, the report uses a weighted average calculation to provide you with a total Performance score. The LCP and TBT metrics are particularly heavily weighted in this calculation. 


Lighthouse converts each raw metric value into a standard 0-100 score by gauging where your particular page falls within the distribution of that metric across real websites. A good Lighthouse score, whether for an individual performance metric or for the total weighted Performance score, is anything above 90.
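To make the weighted-average step concrete, here is a hedged sketch. The weights shown match Lighthouse v8, the version current at the time of writing, but check the scoring calculator for your version; the per-metric 0-100 scores passed in are assumed inputs:

```javascript
// Lighthouse v8 Performance weights. Note how heavily LCP and TBT
// count toward the total. (Weights change between Lighthouse
// versions; these are illustrative.)
const WEIGHTS = { FCP: 0.10, SI: 0.10, LCP: 0.25, TBT: 0.30, TTI: 0.10, CLS: 0.15 };

// Combine per-metric 0-100 scores into an overall Performance score.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += metricScores[metric] * weight;
  }
  return Math.round(total);
}

// A page with a strong LCP but heavy main-thread blocking (low TBT
// score) sees its total dragged well below 90:
performanceScore({ FCP: 95, SI: 90, LCP: 92, TBT: 40, TTI: 70, CLS: 100 });
```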

 

Core Web Vitals


The real-world focus of Core Web Vitals makes the performance scoring a bit simpler than Lighthouse. To score well in any of the three metrics, you must get a “Good” score in that metric for at least 75 percent of visits to your page. If you score “Good” on all three metrics, you pass the Core Web Vitals assessment. “Good” means that a metric’s score falls within the previously outlined optimal threshold. 
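The 75th-percentile rule can be sketched like this (nearest-rank percentile, which is not necessarily Google’s exact aggregation method; the sample values are made up for illustration):

```javascript
// Return the value at the given percentile of a sample using the
// nearest-rank method. Illustrative; Google's own aggregation may
// differ in detail.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical LCP field samples (milliseconds) for one page:
const lcpSamples = [1200, 1800, 2100, 2300, 2400, 2600, 3900, 5200];

// The page passes LCP only if the 75th-percentile visit is at or
// under the 2.5-second threshold:
const p75 = percentile(lcpSamples, 75);       // 2600 ms here
const passesLcp = p75 <= 2500;                // false: this page fails
```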


The scores are calculated using aggregated field data from many users over a 28-day period. This 28-day period is important to bear in mind because if you score badly and then make some improvements, you’ll need to wait a while for Google to pick those improvements up. 


Lighthouse and Core Web Vitals: How to Measure Them 

Lighthouse


If you want to get a Lighthouse report, the most user-friendly ways are to use the Lighthouse extension for your Chrome web browser or to simply visit Google's Web Dev Portal and enter your URL. Other options include Chrome DevTools, PageSpeed Insights, and the command line. 


Core Web Vitals


A number of tools measure Core Web Vitals, including the Chrome User Experience Report (CrUX), Google Search Console, and PageSpeed Insights. The quickest and simplest way to check any URL’s Core Web Vitals is through PageSpeed Insights. 
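If you want to script these checks, PageSpeed Insights also exposes a JSON API whose `loadingExperience.metrics` object carries the CrUX field data. Here is a hedged sketch of pulling the three Core Web Vitals out of such a response; the mocked response is abbreviated, the helper name is our own, and you should verify the field names against a live response (note that the API reports CLS multiplied by 100):

```javascript
// Extract the three Core Web Vitals from a PageSpeed Insights v5
// API response. Field names follow the documented response shape;
// treat this as a sketch and verify against a live response.
function extractCwv(psiResponse) {
  const m = psiResponse.loadingExperience.metrics;
  return {
    lcpMs: m.LARGEST_CONTENTFUL_PAINT_MS.percentile,
    fidMs: m.FIRST_INPUT_DELAY_MS.percentile,
    // The API reports CLS as an integer scaled by 100 (8 means 0.08):
    cls: m.CUMULATIVE_LAYOUT_SHIFT_SCORE.percentile / 100,
  };
}

// Minimal mocked response for illustration:
const mockResponse = {
  loadingExperience: {
    metrics: {
      LARGEST_CONTENTFUL_PAINT_MS: { percentile: 2300 },
      FIRST_INPUT_DELAY_MS: { percentile: 16 },
      CUMULATIVE_LAYOUT_SHIFT_SCORE: { percentile: 8 },
    },
  },
};

extractCwv(mockResponse); // { lcpMs: 2300, fidMs: 16, cls: 0.08 }
```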


Duda’s Core Web Vital Achievements 

Among all website builders, Duda leads the way in ensuring our users’ websites score consistently well on Core Web Vitals. Given that Core Web Vitals are a ranking factor in Google search results, it’s important that your website or your clients’ websites score well in these metrics. Good Core Web Vitals scores are also vital from a conversion standpoint — customers prefer to do business with sites that provide a solid performance and strong user experience. 


Since Core Web Vitals became a ranking factor, Duda leads the way with the highest proportion of sites on our platform scoring “Good” versus competitors such as WordPress, Drupal, Wix, and Squarespace. Some competitors (no names mentioned) omit Duda from their Data Studio comparison graphs in an attempt to make it look like they are industry leaders. Pro tip: if you want the full picture, visit this Google Data Studio link and see how all platforms perform against each other.


