AI essentials in your PPC campaigns - Insights from a webinar with Google's Ginny Marvin

September 2, 2024

The integration of AI in PPC campaigns has become a key concern for marketers, driven by the latest advancements unveiled at Google Marketing Live. Google's new AI-powered tools, such as the rebranded Search Generative Experience (SGE), now known as AI Overviews, are set to reshape ad creatives. These tools will soon be tested in the US, showcasing text and product listing ads within AI-generated summaries. Additionally, AI's role in automating ad placement, optimizing bidding strategies, and enhancing audience targeting has grown significantly, allowing marketers to achieve higher efficiency and better performance in their campaigns.


Recently, Google Ads' Product Liaison, Ginny Marvin, former editor in chief of Search Engine Land and Marketing Land, shared her invaluable insights on this topic in a Duda webinar hosted by Purna Virji, a principal consultant at LinkedIn and an esteemed author.


This webinar delved into the transformative power of AI in advertising while emphasizing the importance of marketing fundamentals. 


In this blog post, we'll explore the key takeaways from their conversation, offering a guide to harnessing AI in your PPC campaigns effectively while maintaining old-school marketing strategies. 


Let’s jump right in.


Key takeaways


Below are the key takeaways from the session, offering a straightforward guide to integrating AI effectively into your campaigns:


  • Stay grounded: Core marketing principles remain essential despite rapid technological changes. The introduction of AI, like the rise of social media and mobile technology before it, requires marketers to adapt and innovate. However, the core principles that drive successful marketing campaigns have always been and continue to be foundational.


  • AI-powered creativity: Ginny offered a glimpse into the future of ad creation with features like Google Ads AI Overviews and search ads with AI-generated recommendations. These tools use AI to personalize the ad experience and tailor content to users' specific needs. Ginny explained how AI will enhance ad matching by considering both the user query and the information within the AI-generated overview. Currently, ads are served above or below AI Overviews based solely on the query. The new approach will also draw on data from the AI Overview itself, affecting Search campaigns, standard Shopping campaigns, and Performance Max campaigns. This change aims to serve more relevant ads, enhancing both the user experience and ad effectiveness. Though this feature is in early testing, it marks a significant step toward smarter ad placements.

    Another innovative development is the introduction of search ads with AI-generated recommendations. This feature creates a more interactive ad experience. For example, users seeking storage solutions can upload photos of their home, allowing AI to generate an inventory list of items and suggest appropriate storage unit sizes and packing supplies. This pilot, currently in the US, showcases how AI can personalize and enrich the user experience within ads, paving the way for more engaging and helpful advertisements.

  • Simplifying on-brand ad generation: While AI can be a powerful tool for generating creative assets, it's crucial to maintain brand consistency. Some exciting new features let advertisers incorporate their brand identity into AI-generated content. Ginny discussed the new brand guidelines feature in Performance Max (PMax) campaigns, which can auto-detect brand colors from a website or accept exact hex codes, and can suggest fonts similar to those on the site. She also covered image references in asset generation, which let advertisers upload a reference image along with a text prompt to quickly generate consistent, on-brand visuals for their ads. Together, these features help ensure that auto-generated video ads and responsive display ads remain consistent with the brand's visual identity. Updated, modernized templates will make these ads more visually appealing, with these features rolling out in beta soon.


  • Advanced image editing: The Image Editor tool has been enhanced to offer more capabilities for advertisers. Users can now remove, add, or replace elements in product images from their Merchant Center feed. New features also allow generating scenes and backgrounds and adjusting images to fit various sizes and aspect ratios. For example, advertisers can draw a box over an image of a product, like a couch, and prompt the AI to generate a painting in a specific style within that area of the image. These updates, currently in pilot, will be widely available later in the year, providing more flexibility and creativity in ad visuals.


  • Enhancing product images: Ginny highlighted new features in Product Studio, an enhancement tool within Google Merchant Center. This tool allows retailers to remove backgrounds from product images and upload reference images that reflect their brand. For instance, you can take an image of a man walking along a rocky coast, upload it, and have the AI place your product, such as a bag, seamlessly into this scene. These generated images can be downloaded and used across various marketing efforts, not just within Google Ads. Additionally, Product Studio now offers the ability to transform product images into video assets with subtle animations, making them more versatile for different advertising formats.


  • Utilizing lookalike audiences in Demand Gen campaigns: A significant update for Demand Gen campaigns is the reduced threshold for using lookalike audiences. Previously, a list of at least 1,000 users was required; now lookalike audiences can be built from just 100 users. This change makes it easier for advertisers to leverage these segments across Gmail, the Discover feed, and YouTube. Lookalike segments can provide valuable insights into affinity, in-market behavior, age, gender, and other demographics. This update, available globally in all languages, opens up new opportunities for advertisers to refine their targeting strategies.


  • Standing out with AI-generated ads: A common concern among marketers is how to differentiate themselves when everyone is using AI in their ads. Ginny emphasized that Google's generative AI creative tools are grounded in your own assets, making the uniqueness and quality of your inputs crucial. To stand out, focus on the following:


      • Grounding in your assets: Ensure that your landing pages and existing assets are accurate and up-to-date. These elements form the foundation for the AI-generated content, so their quality directly impacts the effectiveness of your ads.

      • Strong branding: Leverage strong branding to make your AI-generated ads reflect your unique business identity. Consistent and clear branding has always been important and is even more critical in the era of AI-driven advertising.

      • Differentiation strategy: Highlight your unique selling propositions (USPs) and differentiators in your manual assets. A robust differentiation strategy will help your AI-generated content shine and set you apart from competitors.


  • General recommendations: Ginny highlighted the need to be cautious of absolute thinking in marketing strategies, such as assuming a tactic that worked in the past will always be effective. With the rapid evolution of AI tools, it is crucial to continually test new strategies and approaches. Ginny underscored that what might not have worked two years ago, or even six months ago, could be effective today due to improvements in AI and machine learning capabilities.

    Ginny also stressed the importance of staying informed about the latest industry updates and tools. She mentioned an article she recently wrote for Search Engine Journal, which serves as a primer on various AI tools and their applications. This resource aims to help marketers navigate the overwhelming array of tools available and understand how to utilize them effectively. Reviewing and understanding these tools can help marketers make informed decisions and stay ahead in the rapidly changing digital landscape.

    When using AI-generated content, Ginny advised marketers to thoroughly review the generated assets before publishing them. This includes ensuring that the content is accurate, not misleading, and in line with advertising policies and applicable laws. She also noted that Google Ads' AI tools avoid generating images with branded items, logos, faces, children, specific people (including celebrities and public figures), or opinion/advice prompts. Understanding these limitations can help marketers avoid errors and ensure their ads comply with guidelines.

    Ginny provided insights into responsive search ads (RSAs), which dynamically assemble ad combinations based on relevance and expected performance. She explained that maximizing the number of relevant and diverse assets in an RSA increases the potential combinations that can be served, thereby improving performance. Ginny also highlighted the importance of understanding performance labels, which indicate how individual assets perform relative to others in the same RSA. Swapping out low-performing assets and testing new ones can help improve overall ad performance.

    Ginny also discussed ad strength, a diagnostic tool that evaluates the quality and diversity of assets in RSAs. She clarified a common misconception: ad strength is not a factor in the ad auction and does not impact cost-per-click (CPC) or ad serving. Instead, it reflects the relevance, uniqueness, and keyword alignment of your ad assets. Advertisers are encouraged to aim for at least one ad with "Good" or "Excellent" ad strength, as this has been correlated with better performance. However, Ginny emphasized that high ad strength does not guarantee success; ongoing testing and optimization are essential.

  • Overcoming resistance to AI features: A common concern among marketers is the fear of relinquishing control to AI. Purna and Ginny addressed this by stressing the importance of having robust assets and landing pages that AI can effectively leverage. They recommended starting with a thorough review process before deploying AI-generated assets. This ensures the generated content aligns with your brand's voice and goals. While AI can automate tasks and save time, human oversight remains crucial for maintaining quality and relevance.


  • Leveraging video advertising opportunities: Ginny highlighted that advancements in AI now allow businesses of all sizes to explore video formats more effectively. Features like adding subtle animations to product images and converting landscape videos into portrait views using AI tools are making video advertising more accessible and impactful. These developments enable marketers to capitalize on the engagement potential of video without extensive resources.


  • Insights into omnichannel advertising strategies: Discussing the omnichannel approach, Ginny underscored the role of tools like Performance Max and Demand Gen campaigns in scaling across multiple channels. These campaign types are designed to align with business objectives and optimize ad placements across various platforms. By leveraging AI-driven insights and smart bidding strategies, marketers can ensure their ads are delivered to the right audience at the right time, enhancing overall campaign performance.


Some questions from the audience


During the webinar, attendees asked several insightful questions that sparked in-depth discussions on the practical applications of AI in PPC campaigns. 


Here are some of the key questions raised:


  • How do you stay grounded in core marketing principles amidst rapid AI advancements?

  • What strategies should marketers use to differentiate their AI-generated ads?

  • How can AI tools be leveraged effectively without losing creative control?

  • What are the best practices for using lookalike audiences in Demand Gen campaigns?

  • How can advertisers optimize ads for complex user queries?


To get the full answers to these questions and more insights, watch the full webinar here.

Renana Dar, Senior Content Writer, Duda.

