Core Web Vitals, explained for service businesses.
What LCP, INP, and CLS actually measure, what scores Google rewards, and how to fix a slow site without burning a quarter on it. Written from the field, with citations.
What Core Web Vitals actually measure
Core Web Vitals are three metrics Google publishes as the public summary of how your site feels to a real visitor on a real connection. LCP (Largest Contentful Paint) is the time from navigation start until the largest visible element renders, with a “good” threshold at 2.5 seconds (Google, 2024). INP (Interaction to Next Paint) is the time from a user input to the next visual update, with a “good” threshold at 200 milliseconds (Google, 2024). CLS (Cumulative Layout Shift) is the cumulative score of unexpected layout movement during a page’s lifecycle, with a “good” threshold at 0.1 (Google, 2024).
The bar Google grades you against is the 75th percentile of real user sessions, segmented across mobile and desktop devices, sampled over the most recent 28 days (Google, 2024). Three quarters of your visitors must hit “good” on every metric for the page to count as passing. A site that scores well in lab tests but runs slowly in production will fail this gate. A site that loads well on a high-end laptop and badly on a mid-tier Android will fail this gate.
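The 75th-percentile gate is easy to misread, so here is a minimal sketch in JavaScript; the sample values and the nearest-rank percentile are illustrative, not how CrUX aggregates internally:

```javascript
// Sketch of the p75 gate: a page passes a metric only when the 75th
// percentile of real-user samples clears the "good" threshold.
// Nearest-rank percentile; CrUX's internal aggregation differs.

function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Published "good" thresholds from the text above.
const GOOD = { lcpMs: 2500, inpMs: 200, cls: 0.1 };

function passesVital(samples, goodThreshold) {
  return percentile(samples, 75) <= goodThreshold;
}

// Hypothetical LCP field samples (ms): most sessions are fast,
// but the slow tail decides the grade.
const lcpSamples = [1200, 1400, 1900, 2100, 2300, 2600, 3900, 5200];
console.log(passesVital(lcpSamples, GOOD.lcpMs)); // false: p75 is 2600 ms
```

Six of eight sessions under 2.6 seconds is not enough; the 75th percentile itself must be under 2.5 seconds.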
Why these three numbers, and not others
Google introduced Core Web Vitals in May 2020 as the visible portion of the page experience signal, replacing the older, loosely defined “site speed” signal with three field metrics grounded in real visitor data (Walton, 2020). LCP captures whether the page loaded. INP captures whether the page responds when touched. CLS captures whether content stays where the user expects it. Three numbers, three orthogonal failure modes, three different code paths to fix.
INP replaced First Input Delay (FID) as a stable Core Web Vital in March 2024 (Sullivan and Viscomi, 2024). The reason matters. FID measured only the delay before the first input was processed, which most sites passed easily because browsers schedule the first event handler quickly. INP measures every interaction across the visit and reports one of the slowest (for busy pages, a high percentile of observations rather than the absolute worst), which exposes the long tasks and hydration costs that FID had been masking. The metric got harder. Many sites that passed FID do not pass INP today.
What “good” actually buys you
Page speed pays for itself in conversions. Akamai’s State of Online Retail Performance report (2017) found that every 100-millisecond delay in load time correlated with a 7% drop in conversion across thousands of online retailers. Deloitte Digital’s Milliseconds Make Millions study (2020), commissioned by Google and run across 37 European and US brand sites, measured an average 8.4% conversion lift in retail and a 10.1% lift in travel for every 0.1-second improvement in mobile site speed.
The effect compounds for service businesses with high-intent traffic. A plumber whose mobile site loads in five seconds instead of two does not lose 60% of their leads in a single jump; the loss is gradual across the funnel, with the largest cuts on first impressions where bounce decisions happen in the first three seconds (An, 2017). The headline figure that gets cited everywhere, that 53% of mobile users abandon a site that takes longer than three seconds to load, is from Daniel An’s 2017 Think with Google analysis of mobile speed benchmarks.
LCP: what it is, what kills it, what fixes it
LCP is the time from navigation start until the largest above-the-fold visible element finishes rendering. On most service-business sites, that element is the hero image or the H1 headline. If the hero is a video, the LCP candidate is its poster image. The metric ignores anything below the fold and anything off-screen (Google, 2024).
LCP fails for a small number of identifiable reasons. The hero image is too large or unoptimized. A render-blocking script in the head delays first paint. A web font swap delays the headline render until after the 2.5-second threshold. Server response time is slow because the page is rendered on demand from a database query that should have been cached. The fix sequence I follow on every audit is:
- Serve the hero image in a modern format (AVIF or WebP) at a sensible size for the breakpoint.
- Eliminate render-blocking JavaScript above the fold. Defer or async-load anything not needed for first paint.
- Preload the LCP image with `<link rel="preload">` so the browser fetches it in parallel with the HTML.
- Move expensive server-side work to a background path or a cache layer so the HTML response time stays under 200 milliseconds.
On a typical small-business site, these four changes move LCP from the 4-to-6-second range into the under-2.5-second range without any framework changes underneath.
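A minimal sketch of the preload hint from the list above, assuming a hypothetical hero at /img/hero.avif; the fetchpriority attribute is an optional extra nudge in browsers that support it:

```html
<!-- In the <head>: fetch the LCP image in parallel with the HTML parse, -->
<!-- instead of waiting for the parser to discover the <img> tag. -->
<link rel="preload" as="image" href="/img/hero.avif" fetchpriority="high">
```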
INP: the metric most sites fail today
INP measures the responsiveness of the page across the entire visit. Every click, tap, and keystroke counts toward the metric; the reported value is weighted toward the slowest observed interactions (Sullivan and Viscomi, 2024). A site can pass LCP and CLS comfortably and still fail INP, because all three metrics measure orthogonal things.
Most INP failures I see in the field come from one of three sources.
- Hydration cost on the first interaction after a React, Vue, or Angular page renders, where the page looks ready but the framework has not yet attached event handlers.
- Long tasks in third-party scripts, especially analytics, chat widgets, and tag managers, that block the main thread for hundreds of milliseconds at unpredictable times.
- Heavy synchronous work in the click handler itself, especially on pages that re-render large component trees on a single state change.
The fixes are different for each. The first wants framework-level work like deferred or selective hydration. The second wants a third-party script audit and aggressive lazy loading. The third wants React’s useTransition and component memoization. INP rewards discipline, not heroics.
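For the third failure mode, one framework-agnostic version of the fix is to split long synchronous work so the main thread can handle input between chunks. This is a sketch, not a drop-in: the `work` callback, the chunk size, and the usage comment are illustrative.

```javascript
// Illustrative fix for long click handlers: process work in chunks and
// yield to the event loop between chunks, so a pending tap or keystroke
// can be handled and painted instead of waiting out one long task.
// scheduler.yield() is the purpose-built browser API where supported;
// setTimeout(0) is the portable fallback used here.

const yieldToMain = () => new Promise((resolve) => setTimeout(resolve, 0));

async function processInChunks(items, work, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(work(item));
    }
    await yieldToMain(); // the long task ends here; input handlers can run
  }
  return results;
}

// Hypothetical usage inside a click handler:
// button.addEventListener("click", () => processInChunks(rows, renderRow));
```

The trade-off is total wall-clock time for responsiveness: the work finishes slightly later, but no single main-thread task blocks an interaction for hundreds of milliseconds.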
CLS: the most fixable of the three
CLS is a cumulative score, not a time. It sums the impact of every unexpected layout shift during the page’s life, weighted by how much screen area moved and how far it moved. The “good” threshold is 0.1, which means small shifts are tolerated, but a single mid-load shift of an entire hero block will fail you on its own (Google, 2024).
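The per-shift arithmetic is worth seeing once. A simplified sketch of the scoring model (real CLS additionally groups shifts into session windows and reports the worst window, which this ignores):

```javascript
// Simplified model of one layout shift's score: the share of the viewport
// the moved elements occupy (impact fraction) times how far they moved,
// relative to the larger viewport dimension (distance fraction).
// Real CLS groups shifts into session windows and reports the worst
// window; this sketch only illustrates the per-shift arithmetic.

function shiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// A hero block covering half the viewport that jumps a quarter of the
// viewport height scores 0.5 * 0.25 = 0.125, failing the 0.1 "good"
// threshold in a single shift.
const singleBadShift = shiftScore(0.5, 0.25); // 0.125

// Three tiny 2%-distance shifts stay comfortably under the threshold.
const manySmallShifts = [0.1, 0.1, 0.1]
  .reduce((sum, impact) => sum + shiftScore(impact, 0.02), 0);
```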
CLS is the most fixable of the three Core Web Vitals because the failure modes are well-known and the fixes are mechanical.
- Images and videos without explicit width and height attributes shift content when they load. Declare both, even on responsive assets.
- Web fonts with FOIT or FOUT swap behaviors push text down when the font loads. Use `font-display: optional` or `size-adjust` to keep swaps from changing line height.
- Ads and embeds inserted above existing content shove the rest of the page down. Reserve the slot with a `min-height` container before the embed loads.
- Cookie banners and consent gates that animate in from the top should slide over the page, not push it.
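The first three fixes in markup form, a sketch with illustrative paths, dimensions, and class names:

```html
<!-- Declare intrinsic dimensions so the browser reserves the box
     before the image arrives; CSS can still make it responsive. -->
<img src="/img/team.webp" width="800" height="600" alt="Our team">

<style>
  /* Ad/embed slot: reserve the height before the embed loads. */
  .ad-slot { min-height: 250px; }

  /* Font swap that cannot change layout: fall back to the system
     font if the web font misses the first render. */
  @font-face {
    font-family: "BrandFont";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: optional;
  }
</style>
```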
Most sites can move from a CLS of 0.3 to under 0.05 in one focused day of work. CLS is the metric to fix first when you need a quick win before a bigger performance project.
How to actually measure your own site
Three tools matter. PageSpeed Insights at pagespeed.web.dev returns both lab data, generated on a simulated mid-tier device with throttled connectivity, and field data, pulled from the Chrome User Experience Report (CrUX) for sites with enough real traffic. Lighthouse, the same engine PSI uses for lab data, runs locally in Chrome DevTools and is what you want when iterating on a fix because it is fast and reproducible. The Chrome User Experience Report itself, queryable via BigQuery for production sites, is the underlying source of truth Google uses for its ranking signals (Google Search Central, 2021).
Lab data and field data disagree often. A page can score 95 in Lighthouse and fail Core Web Vitals in CrUX because Lighthouse simulates a fast 4G connection on a mid-tier device while real traffic includes slower devices, slower networks, and longer sessions where INP failures accumulate. Field data is what Google uses to rank you. Always trust CrUX over Lighthouse for what your search ranking actually depends on.
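When reading CrUX p75 values, this sketch buckets them into the three bands Google reports; the “needs improvement” upper bounds (4,000 ms LCP, 500 ms INP, 0.25 CLS) come from web.dev, not from the thresholds quoted above:

```javascript
// Bucket a CrUX p75 value into the three bands Google reports.
// "Good" bounds are the ones quoted in this article; the boundaries to
// "poor" (4000 ms LCP, 500 ms INP, 0.25 CLS) are web.dev's published
// "needs improvement" ceilings.

const BANDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  inp: { good: 200, poor: 500 },   // milliseconds
  cls: { good: 0.1, poor: 0.25 },  // unitless score
};

function rate(metric, p75) {
  const { good, poor } = BANDS[metric];
  if (p75 <= good) return "good";
  if (p75 <= poor) return "needs improvement";
  return "poor";
}

console.log(rate("lcp", 2300)); // "good"
console.log(rate("inp", 420));  // "needs improvement"
console.log(rate("cls", 0.31)); // "poor"
```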
Pathlight automates the measurement and the diagnostic in 90 seconds, returning a scored report against your own URL with the prioritized fixes underneath. Worth running before a four-hour DIY audit if only to confirm what you are about to spend the four hours on. The longer reference on what a real performance audit covers (and what free tools quietly miss) is on the service page.
Common mistakes I see in the field
Optimizing for Lighthouse score over CrUX. A perfect Lighthouse score that fails CrUX is a vanity metric. A 78 Lighthouse that passes CrUX is the one that ranks. Treat Lighthouse as the iteration loop, CrUX as the source of truth.
Treating INP like FID. Sites that passed FID often inherited the assumption that “interactivity is fine, we passed the test.” INP is a stricter test that includes every interaction across the visit. If you have not measured INP since March 2024, you have not measured INP.
Ignoring third-party scripts. Most service-business sites carry between five and twelve third-party scripts: analytics, tag managers, chat widgets, schedulers, social pixels, customer review embeds. Each one runs JavaScript on the main thread. The 2024 Web Almanac reports that the median site loads 22 third-party requests, and the 90th percentile loads more than 100 (HTTP Archive, 2024). INP failures cluster around these requests. Audit ruthlessly. Remove what you cannot justify.
Optimizing only the homepage. Core Web Vitals are page-by-page, not site-wide. A homepage scoring 95 with twelve service-detail pages scoring 60 gets ranked on the service-detail pages for service-detail queries. Audit every commercially important URL, not just the front door.
Skip the four-hour DIY
Get the same diagnostic in ninety seconds.
Pathlight runs the audit you would otherwise spend an afternoon on, returns a scored report against your own URL, and surfaces the fixes ranked by impact. Free. No signup. Built on the same field-grade methodology described above.
Sources
1. Google. (2024). Web Vitals. web.dev. https://web.dev/articles/vitals
2. Google. (2024). Largest Contentful Paint (LCP). web.dev. https://web.dev/articles/lcp
3. Google. (2024). Interaction to Next Paint (INP). web.dev. https://web.dev/articles/inp
4. Google. (2024). Cumulative Layout Shift (CLS). web.dev. https://web.dev/articles/cls
5. Sullivan, A., & Viscomi, R. (2024). Introducing INP to Core Web Vitals. Chrome for Developers Blog. https://developer.chrome.com/blog/inp-cwv-march-12
6. An, D. (2017). Find out how you stack up to new industry benchmarks for mobile page speed. Think with Google. https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/page-load-time-statistics/
7. Akamai Technologies. (2017). Akamai Online Retail Performance Report: Milliseconds Are Critical. https://www.akamai.com/newsroom/press-release/akamai-releases-spring-2017-state-of-online-retail-performance-report
8. Deloitte Digital. (2020). Milliseconds Make Millions: A study on how improvements in mobile site speed positively affect a brand's bottom line. https://www2.deloitte.com/ie/en/pages/consulting/articles/milliseconds-make-millions.html
9. HTTP Archive. (2024). Web Almanac 2024: Performance. https://almanac.httparchive.org/en/2024/performance
10. Walton, P. (2020). Web Vitals: essential metrics for a healthy site. web.dev. https://web.dev/articles/vitals
11. Google Search Central. (2021). More details about the page experience update for Google Search. https://developers.google.com/search/blog/2021/04/more-details-page-experience