How to uncover friction points in your SaaS platform

February 11, 2025

SaaS leaders are constantly innovating, building features, and refining their products to meet the needs of customers. But no matter how forward-thinking or feature-rich your product is, if customers struggle to use it effectively, its value is significantly diminished.


Identifying and resolving customer friction points—the moments where customers feel confused, frustrated, or blocked in achieving their goals—is absolutely essential to delivering a truly exceptional user experience.


Uncovering these friction points takes more than intuition. It requires a combination of targeted strategies and tools that allow you to observe, understand, and ultimately fix the obstacles standing in the way of your customers. 


In this post, we’ll walk through some of the most effective methods for identifying friction points and how we use them at Duda.


1. Do a click test for a simple but effective gut check


One of the simplest yet most revealing methods for identifying friction is the click test. The click test asks: how many clicks does it take to complete a basic task in the platform? 


Your absolute core functions, like adding a new prospect to a relationship management system, should be front and center. Ideally, this requires one or two clicks. Other basic functions should follow a similar philosophy and require as few clicks as possible.


You can do this test with customers, recruit members of your team that aren’t in the product every day, or try it for yourself. 


First, take a few minutes and write down the most important actions a person using your product needs to take. The goal here isn’t to come up with a long list of functions, but to focus on the most fundamental tasks your customers need to do every day.


Let’s take a look at an example: Duda is a website building platform, so these are a few core actions our customers need to be able to take quickly with as little friction as possible:


  • Start a new site from scratch
  • Start a new site with AI 
  • Start a new site with an existing template 
  • Add a client
  • Add a team member


Great! Now we have a list of core functions. 


What we’re going to do now is take it for a spin. From the first screen when a person logs in to the platform, how many clicks does it take to perform each one of these functions? 


Here’s our score: 


  • Start a new site from scratch - 1 click 
  • Start a new site with AI - 1 click 
  • Start a new site with an existing template - 1 click
  • Add a client - 2 clicks 
  • Add a team member - 2 clicks 


If any basic function takes more than three clicks, you’re losing people.


Now you know exactly where your biggest friction points are. Prioritize those first.   


If it takes 1 or 2 clicks for each basic function, you’re doing great! 


Now you can dig a little bit deeper. Make a list of your more advanced functions and repeat the click test again. 
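
If you repeat the test over time or across several areas of the product, even a tiny script keeps the results honest. Here’s a minimal sketch in Python; the task names and click counts come from the example above, and the three-click threshold mirrors the rule of thumb mentioned earlier.

```python
# Minimal click-test scorecard. Task names and click counts come from the
# example above; the three-click threshold mirrors the rule of thumb mentioned earlier.
CLICK_BUDGET = 3

results = {
    "Start a new site from scratch": 1,
    "Start a new site with AI": 1,
    "Start a new site with an existing template": 1,
    "Add a client": 2,
    "Add a team member": 2,
}

# Worst offenders first, so the biggest friction points sit at the top.
for task, clicks in sorted(results.items(), key=lambda item: item[1], reverse=True):
    status = "FIX FIRST" if clicks > CLICK_BUDGET else "ok"
    print(f"{task}: {clicks} click(s) [{status}]")
```

Re-run it after any navigation or onboarding change and compare the output run over run.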


Deceptively simple, the click test is a great gut check that helps you quickly find and resolve glaring friction points. But there are many more strategies you can employ to dig deeper into how people use your product and what’s holding them back from wholeheartedly adopting your platform.


Let’s take a look at some additional strategies…   


2. Observe your entire customer journey with usability testing 


Usability testing involves observing people completing full workflows within your product. This method gives you a broader view of your in-product customer behavior and uncovers deeper friction points that might not be visible in a single interaction.


In a usability test, you might ask customers to complete a task that involves multiple steps—like building a website from start to finish using your platform. Observing them step by step through this journey allows you to identify pain points that affect the overall process, such as confusing instructions, difficult navigation, or overly complex steps.


3. Interview your customers about their in-product experience


While tests and analytics provide valuable insights, nothing beats direct feedback from your customers. Customer interviews allow you to dive deep into the experiences, frustrations, and needs of your customers in their own words. These conversations can reveal pain points that wouldn’t be apparent through testing alone.


Focus on open-ended but specific questions like:


  • "Can you walk me through a recent experience with [specific feature]?"
  • "What, if anything, felt more difficult than you expected?"
  • "What would make this feature easier for you to use?"


By listening to the specific challenges they face, you can prioritize product changes that will have the greatest impact. For example, you might learn that customers find your integrations hard to navigate, even though they are technically functional. 


4. Use behavioral analysis to understand user actions at scale


In addition to qualitative feedback, behavioral analytics provides a quantitative layer that helps you understand how customers interact with your product on a broader scale. 


Tools like heatmaps, session recordings, and funnel analysis offer insights into how users navigate your product and where they face obstacles.


  • Heatmaps can show where customers are clicking the most, and where they’re not interacting at all. If a key call-to-action is being ignored, it may indicate that the placement, design, or copy isn’t effective.
  • Session recordings allow you to watch real-time customer interactions, showing you exactly where they get stuck.
  • Funnel analysis tracks where customers drop off in their journeys, helping you identify bottlenecks that cause friction, such as during sign-up or task completion (see the sketch after this list).
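
To make the funnel analysis point concrete, here’s a minimal sketch of a drop-off calculation. The step names and user counts are hypothetical; in practice these numbers would come from your analytics tool.

```python
# Illustrative funnel analysis: count how many users reach each step and where
# they drop off. The step names and counts are hypothetical, not real data.
funnel = [
    ("Sign-up started", 1000),
    ("Account created", 820),
    ("First site created", 540),
    ("Site published", 310),
]

for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_users / users
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")
```

The step with the steepest drop-off is usually the best place to start looking for friction.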


5. Mine your support tickets for pain points 


Your customer support team is often the first to hear about struggles and issues, so it's crucial to mine this resource regularly. Review support tickets, chat transcripts, and feedback forms to spot recurring themes and patterns. This gives you valuable insight into friction points and allows you to prioritize the issues that impact the largest number of customers.
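
If you want to go beyond reading tickets one by one, a rough keyword tagger can surface recurring themes across a whole export. This is a simplified sketch: the ticket texts and keyword lists are hypothetical, and a real implementation would more likely lean on your helpdesk's existing tags or an AI classifier.

```python
# Rough sketch of mining support tickets for recurring themes by keyword.
# The ticket texts and keyword lists are hypothetical examples.
from collections import Counter

tickets = [
    "Can't figure out how to connect my domain",
    "The template editor keeps freezing when I add a gallery",
    "How do I connect a custom domain to my site?",
    "Billing page won't load my invoices",
]

themes = {
    "domains": ["domain", "dns"],
    "editor": ["editor", "template", "gallery"],
    "billing": ["billing", "invoice", "payment"],
}

counts = Counter()
for ticket in tickets:
    text = ticket.lower()
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            counts[theme] += 1

# Themes with the most tickets are the strongest candidates to investigate first.
for theme, n in counts.most_common():
    print(f"{theme}: {n} ticket(s)")
```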


6. Create multiple avenues for customers to share feedback 


At Duda, we’ve also developed tools like our Idea Board, where customers can submit requests for specific features or improvements, and our community can vote on these ideas. This helps us gauge which features or issues are most important to our community, and which suggestions would be the most impactful.


Additionally, our Facebook Community is another great source of real-time feedback. We actively monitor discussions to identify pain points and gather suggestions on how we can make our platform easier to use and prioritize the features our customers care about most.


7. Use AI to discover patterns and prioritize improvements


With the volume of feedback and data coming in from all these sources, it can be overwhelming to track and prioritize improvements. That’s where AI comes in. 

Use AI-powered tools to analyze large datasets, including feedback from your support tickets, usability tests, and customer interviews. AI can help you quickly identify patterns in the data, highlighting recurring issues or feature requests that require attention.


For example, AI can quickly identify if a specific feature is mentioned repeatedly in support tickets or feedback forms. It can rank these issues based on sentiment, frequency, and potential impact, allowing you to prioritize which friction points to address first.
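
As an illustration of that ranking step, here’s what the prioritization could look like once each piece of feedback has been tagged with a theme and a sentiment score (by an AI tool or a simple classifier). The data and the scoring formula are assumptions for the sake of the example, not the output of any particular product.

```python
# Illustrative prioritization once feedback has been tagged with a theme and a
# sentiment score (-1 = very negative, +1 = very positive). The data and the
# scoring formula are assumptions, not how any specific tool works.
from collections import defaultdict

tagged_feedback = [
    {"theme": "integrations", "sentiment": -0.8},
    {"theme": "integrations", "sentiment": -0.6},
    {"theme": "template library", "sentiment": 0.2},
    {"theme": "integrations", "sentiment": -0.4},
    {"theme": "onboarding", "sentiment": -0.5},
]

grouped = defaultdict(list)
for item in tagged_feedback:
    grouped[item["theme"]].append(item["sentiment"])

# Priority grows with how often a theme appears and how negative the sentiment is.
priorities = {
    theme: len(scores) * max(0.0, -sum(scores) / len(scores))
    for theme, scores in grouped.items()
}

for theme, score in sorted(priorities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{theme}: priority score {score:.2f}")
```

However you weight it, the point is the same: frequency combined with negative sentiment is a strong signal of where friction hurts most.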


Final thoughts


The key to building a successful product isn’t just about adding more features—it’s about ensuring those features are easy and enjoyable for your customers to use. By actively seeking out friction points and addressing them, you’ll create a product that your customers truly love.


At Duda, we’ve built a strong feedback loop by combining qualitative insights, behavioral data, and AI to drive continuous improvement. The result is a product that’s more intuitive, customer-friendly, and effective—ultimately leading to a more loyal, engaged user base.

Santi Clarke

Director of Content & Communication, Duda

Santi is a seasoned marketing leader with a passion for digital transformation. Santi has over 12 years of experience leading digital marketing and content teams around the world, from SaaS startups to global enterprises.

