Intelligent pricing and packaging for vertical SaaS

December 20, 2023

Pricing is all about capturing value. This idea is a foundational element of James Wilton’s approach to pricing and packaging.


James is the Managing Partner and Founder of Monevate, a monetization and pricing strategy consulting firm for growing tech companies. He has spent over a decade and 20,000 hours transforming pricing strategies at SaaS companies—helping them maximize their customer ACV, NRR, and ARR growth.


This article is based on a presentation he gave during the “Vertical SaaS Roadmap to Revenue Expansion” online event. The complete recording is available today on YouTube.


His session explored the benefits of strategic pricing, particularly for those in the Vertical SaaS space. It’s a valuable area to focus on. Startups that transformed their pricing strategy have realized a 10-15% bump in their revenue growth rate.


The value leak


Your vertical SaaS solution creates a ton of value for your customers. Your goal, when pricing your products, is to capture a fair portion of that value. Unfortunately, that often isn’t what happens. Instead, a significant portion of value is leaked between the creation of your solution and the actual purchase.




Every service you offer, for every particular customer who’s buying it, provides a certain amount of “actual” value. This value is often abstract at first, needing to be uncovered and presented. If you are unable to convey the full value of your service, then your customer may only truly see a portion of the value you’ve created. This smaller amount is called the “perceived” value.


There are dozens of reasons for the discrepancy between actual and perceived value, typically due to failures in messaging. You may be highlighting the wrong set of features or doing so in the wrong way. Regardless, a certain amount of value is being left unseen.


Value is further eroded by your customer’s “willingness to pay.” Customers consider many factors when deciding how much they’re actually willing to pay for a particular product or service. Return on investment, marketplace comparables, and various other elements all play a role.


Understanding how much your customers are willing to pay is essential when setting your service’s “target price.” This number is the actual, bona fide price that your service is listed at.


It is wildly impractical to set your target price based on the willingness to pay of each and every individual customer. Instead, we often price based on segments. This practice forces us to determine, then undercut, the average willingness to pay of an entire swath of people—leading to some additional amount of value loss.


At the end of the day, the target price may not be the actual amount your customers pay. Sales incentives can reduce value even further via transactional discounts, leaving you with a final, “realized” price that is much lower than the initial value your product created.
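The value leak described above can be sketched as a simple waterfall, where each stage captures only a fraction of the value from the stage before it. This is just an illustrative toy model; all the percentages and the starting value are hypothetical, not figures from the presentation.

```python
# A toy model of the "value leak": each stage retains only a fraction
# of the value from the stage before it. All numbers are hypothetical.

def value_waterfall(actual_value, perceived_pct, wtp_pct, segment_pct, discount_pct):
    """Return the value remaining after each stage of the leak."""
    perceived = actual_value * perceived_pct   # value the customer actually sees
    willingness = perceived * wtp_pct          # what they'd be willing to pay
    target = willingness * segment_pct         # segment-level target (list) price
    realized = target * (1 - discount_pct)     # price after sales discounts
    return {
        "actual": actual_value,
        "perceived": perceived,
        "willingness_to_pay": willingness,
        "target_price": target,
        "realized_price": realized,
    }

leak = value_waterfall(1000, perceived_pct=0.8, wtp_pct=0.75,
                       segment_pct=0.9, discount_pct=0.15)
for stage, value in leak.items():
    print(f"{stage:>18}: ${value:,.0f}")
```

With these made-up inputs, $1,000 of created value leaks down to a realized price of $459, which is the gap the rest of the article works to close.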


After seeing where value leaks, you might be wondering—how can I capture more value?


Differentiation through packaging


Price differentiation may be the most important concept in pricing. The practice is exactly what it sounds like: you charge different prices to different customers. Doing so allows you to widen your TAM to include more price-sensitive customers while simultaneously charging more to less price-sensitive customers. When done well, this can have a huge impact on your revenue. Subscription Box, for example, added an incredible 30% in annual recurring revenue by offering a premium tier.


In order to differentiate your pricing, you’re going to need a “price structure.” This structure can come in one of two flavors: Packaging or Price Metrics. With packaging, customers self-select which level or tier of product they would like to buy. Price metrics, on the other hand, scale automatically based on quantity.


While there are many different flavors of packaging, the most ubiquitous in SaaS is the “Good, Better, Best” approach—and for good reason too. This is a straightforward and easy-to-understand price structure that provides customers with simple choices and a clear upsell path right out of the box.


Keep in mind, this approach isn’t as simple as just creating three packages at random. There are four common pitfalls to avoid when developing a “Good, Better, Best” strategy.




  1. Too much base: In this scenario, your initial base offering is far more generous than it should be. It may include luxurious features that are unnecessary for a more minimally-viable experience. Instead, your base should focus on the smallest number of features necessary to extract value from your service.
  2. Pitching to the middle: Here, the middle package may overwhelmingly be the best deal. The base offering may be too limited to be practical, while the premium tier may include unnecessary features that don’t justify the upsell.
  3. The grand canyon: Substantial differences in price between tiers may lead to a “canyon” effect, where the leap between packages is too great for your customers to justify. Instead of a canyon, tiers should create more of a “ladder” effect. That means fewer differentiating features and a smaller price gap.
  4. Too many sources of difference: This scenario is similar to the grand canyon, except focusing primarily on features instead of price. Packages with a wide variety of different components can be difficult for customers to compare. Additionally, customers may not value all of the different components in a particular tier and typically won’t want to pay for components that they don’t value. The solution is to offer more niche features as additional add-ons, outside of the pricing structure.


Differentiation through price metrics


The flexibility of the SaaS industry allows for pricing based on virtually any metric, so long as it makes sense. This idea takes on many forms: “pay-per-seat,” “pay-per-view,” “pay as you go,” etc. The general idea is that pricing is tied to a particular metric, allowing for customer differentiation immediately without much segmentation.


Some popular price metrics across the SaaS industry are:


  1. User-based pricing: This may be the most common price metric in SaaS today. Large companies like Zoom and DocuSign charge based on the number of active users.
  2. Usage-based pricing: Another common price metric, this structure charges based on the number of times a certain feature is used. Examples include transactions, API calls, etc.
  3. Capacity-based pricing: More common in the infrastructure space, this structure charges based on the literal resources being used. As an example, cloud companies like AWS often charge per gigabyte.
  4. Business-based pricing: This structure charges based on the size of the customer's business using metrics such as revenue, employees, or total customers.
  5. Outcome-based pricing: A more novel approach, outcome-based pricing “shares the wealth” by charging a percentage of revenue generated or some other success metric.
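To see how differently these metrics scale, consider a small customer and a large one billed under three of the metrics above. This is a rough sketch; the per-seat price, per-call rate, and revenue share are all made-up numbers, not real vendor pricing.

```python
# Hypothetical monthly bills under three common price metrics.
# All rates here are invented for illustration.

def user_based(active_users, per_seat=15.0):
    """User-based: a flat price per active seat."""
    return active_users * per_seat

def usage_based(api_calls, per_call=0.002):
    """Usage-based: a small charge per metered event (e.g. API call)."""
    return api_calls * per_call

def outcome_based(customer_revenue, share=0.01):
    """Outcome-based: a percentage of the revenue the customer generates."""
    return customer_revenue * share

# A small customer vs. a large one: each metric scales differently.
for label, users, calls, revenue in [("small", 10, 50_000, 20_000),
                                     ("large", 200, 2_000_000, 500_000)]:
    print(label,
          f"seats=${user_based(users):,.0f}",
          f"usage=${usage_based(calls):,.0f}",
          f"outcome=${outcome_based(revenue):,.0f}")
```

The point of the comparison is that the "right" metric depends on which of these curves best tracks the value your product creates as customers grow.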


Choosing the right metric to price against is key. Whatever you choose should be value-aligned, growth-aligned, acceptable, controllable, predictable, and, most importantly, auditable.


A metric that’s value-aligned will scale proportionately to the amount of value you’re providing your customers. This comes back to the very core idea of pricing—that you want to capture a fair amount of the value you create. Similarly, a growth-aligned metric will successfully scale alongside your organization.


An acceptable metric is one that is palatable to your customers. Your customers aren’t going to pay for some arbitrary metric that doesn’t feel fair. Similarly, it needs to be possible for your customer to actually control this metric themselves. A metric that’s fair and controllable, then, will likely also be predictable. That means the bill shouldn’t come as a total surprise to your customers at the end of the month—they should be able to generally predict how much you’re going to charge.


Finally, this metric absolutely must be auditable. If you can’t objectively measure a particular metric, then you certainly can’t charge for it.


Usage-based pricing makes a compelling case, but it comes with downsides that companies don’t always anticipate.


On the up-side, usage-based pricing:


  • Is increasingly familiar to customers.
  • Is highly scalable.
  • Tends to inherently grow over time.
  • Covers costs (for cloud-hosted companies).


However, usage-based pricing also has its pitfalls. It:


  • Is not always value-aligned.
  • Is not always predictable.
  • Typically doesn’t count as ARR.
  • Requires a high level of effort to implement.


Identifying the perfect price metric for your company isn’t easy, and implementing it is even harder. Structuring your price metric so that it is predictable, acceptable, and all of the other qualities listed above requires a thoughtful architecture tailored to your business.




Consider your pricing metric as a graph, where the Y axis is “price,” and the X axis is “usage.” The way you define the relationship between these axes is your “architecture.” There are four common architectures used for pricing metrics:


  • Caps: This architecture “caps” the price to a single, maximum amount. Any usage beyond this point is essentially free.
  • Sliding Scale: As your client’s usage increases, the price decreases. This plays off of the “economies of scale” concept, rewarding increased engagement with discounts.
  • Bands: Price bands charge a set amount for a given range of usage. An example of this may be cloud storage, where you pay for a virtual drive up to 50GB, then upgrade to the next band when you need additional capacity.
  • Adaptive subscription: With an adaptive subscription, your company charges a fixed amount for usage capacity rather than actual usage. If users exceed this pre-allocated capacity, they pay an upcharge. However, if capacity goes unused, they do not receive a discount. This is similar to how many mobile phone providers charge for data. Unlike other architectures, this method provides a means to upsell.
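The four architectures above can each be expressed as a function from usage to price. The sketch below does exactly that; every rate, cap, and band size is a hypothetical number chosen only to make the shapes visible.

```python
# The four usage-pricing architectures as price(usage) functions.
# All rates, caps, and band sizes are hypothetical.

def capped(usage, rate=1.0, cap=500.0):
    """Caps: price grows linearly up to a maximum; usage beyond it is free."""
    return min(usage * rate, cap)

def sliding_scale(usage, base_rate=1.0, discount_per_100=0.05, floor=0.4):
    """Sliding scale: the per-unit rate falls as usage grows (economies of scale)."""
    rate = max(base_rate - (usage // 100) * discount_per_100, floor)
    return usage * rate

def banded(usage, band_size=50, price_per_band=25.0):
    """Bands: a set amount per band of usage (e.g. per 50 GB of storage)."""
    bands = -(-usage // band_size)  # ceiling division: a partial band counts
    return bands * price_per_band

def adaptive_subscription(used, prepaid_capacity=100, base_fee=80.0, overage_rate=1.5):
    """Adaptive subscription: a fixed fee for pre-allocated capacity, an
    upcharge for overage, and no refund for unused capacity (like a
    mobile data plan)."""
    overage = max(used - prepaid_capacity, 0)
    return base_fee + overage * overage_rate

for usage in (40, 120, 600):
    print(usage, capped(usage), sliding_scale(usage),
          banded(usage), adaptive_subscription(usage))
```

Plotting any of these against usage reproduces the graph described above: the cap flattens out, the sliding scale bends downward per unit, bands form a staircase, and the adaptive subscription is flat until the prepaid capacity runs out.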


The final architecture, adaptive subscription, offers a great mid-point between packaging and price metrics that includes benefits from both options.


Are you undercharging?


While we all want to give our customers the best bang for their buck, the reality is you may be undercharging for the services you provide. How can you tell?


  • Customers and prospects never complain about your prices or often talk about the “great value for money” that you provide.
  • Your churn did not increase the last time you put through a price increase.
  • Your win rate is very high.
  • Your sales reps rarely ask for higher discounts.
  • Your weaker competitors charge the same as you, and/or your comparable competitors charge higher prices than you.


If you do find that your prices are too low, the obvious next step would be to understand what those prices should be. There are several analytical approaches to estimate your customer’s willingness-to-pay that you can use to arrive at a suitable price level.


One example is the Van Westendorp technique, which is good for companies looking for a rough range of how different customers would expect to pay for a new product. A different technique, the Value Proposition Map, is better suited for companies operating in a highly competitive market with highly visible customers. Unlike the Van Westendorp technique, which can help you determine a viable price range for each customer, the Value Proposition Map helps determine the price premium you can command versus your competition.
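As a rough illustration of the Van Westendorp idea, the sketch below asks each respondent two of the technique’s questions (“at what price is this too cheap to be credible?” and “at what price is this too expensive?”) and finds the price range where neither objection is common. This is a deliberately simplified version, not the full Price Sensitivity Meter, and the survey answers and 30% objection threshold are invented.

```python
# Simplified Van Westendorp-style analysis. Each tuple is one respondent:
# (price below which it's "too cheap", price above which it's "too expensive").
# All survey data and the threshold are hypothetical.

responses = [
    (20, 80), (25, 90), (15, 70), (30, 100), (20, 85),
    (25, 75), (10, 60), (30, 95), (20, 90), (25, 80),
]

def acceptable_range(responses, threshold=0.3, prices=range(0, 151)):
    """Prices where at most `threshold` of respondents object on either side."""
    n = len(responses)
    ok = []
    for p in prices:
        too_cheap = sum(p < low for low, _ in responses) / n
        too_expensive = sum(p > high for _, high in responses) / n
        if too_cheap <= threshold and too_expensive <= threshold:
            ok.append(p)
    return (min(ok), max(ok)) if ok else None

low, high = acceptable_range(responses)
print(f"Acceptable price range: ${low} to ${high}")
```

The output is a rough range rather than a single number, which matches what the article says the technique is good for: bounding what customers would expect to pay for a new product.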



Preventing unwanted sales discounts


Entrepreneurship in your sales team should be applauded, but salespeople, when compensated by a flat percentage of revenue, have an incentive to discount as much as possible. This is often the cause of the final value leak, the gap between your target price and realized value.


Salespeople are looking to maximize their expected payout. This value is the product of the incentive payout if the deal is won and the perceived chance of actually winning it. Because the chance of winning drops significantly as the price increases, reps are incentivized to discount.
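That expected-payout arithmetic can be made concrete. In this sketch the commission rate, list price, and the win probability assumed at each discount level are all hypothetical; the point is only that, under a flat revenue commission, a deeper discount can maximize the rep’s expected payout even though it minimizes your realized price.

```python
# Why reps discount: expected payout = commission if won x chance of winning.
# Commission rate, list price, and win probabilities are all hypothetical.

commission_rate = 0.10   # flat 10% of deal value
list_price = 100_000

deals = [
    # (discount, assumed win probability at that discount)
    (0.00, 0.30),
    (0.10, 0.50),
    (0.20, 0.70),
]

for discount, win_prob in deals:
    deal_value = list_price * (1 - discount)
    expected = commission_rate * deal_value * win_prob
    print(f"{discount:.0%} discount -> expected payout ${expected:,.0f}")
```

With these made-up numbers, the 20% discount yields the highest expected payout for the rep, which is exactly the misalignment that rules, incentives, and enablers are meant to correct.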


There are levers you can pull to stop value leakage through unnecessary discounting.


  • Rules: Do not allow reps to price deals with unacceptable levels of discounting.
  • Incentives: Give reps “skin in the game” to push for lower discount deals.
  • Enablers: Make reps feel better able to win at lower discount levels.


Putting it all together


When evaluating your own pricing strategy, there are a few key takeaways to consider. First, you want to build an effective price structure. If going the “good, better, best” route, avoid common pitfalls and evaluate your features to build the right tiers. If using usage-based pricing, use architecture to solve for value-alignment and predictability issues. You should also choose the right value-based price metric.


You’ll also want to determine your customer’s willingness-to-pay as accurately as possible using techniques like the Value Proposition Map. Reducing excessive discounts in sales is another great way to ensure you get the prices you deserve.


For a more detailed and in-depth look at these concepts, tune into James Wilton’s entire presentation available today on YouTube.



Shawn Davis

Content Writer, Duda

Denver-based writer with a passion for creating engaging, informative content. Loves running, cycling, coffee, and the New York Times' minigames.

