Incrementality Testing
Incrementality testing is a controlled experiment that measures the causal lift your marketing creates: what extra conversions, revenue, or app installs happened because of the ads (or tactic), versus what would have happened anyway. The classic setup splits eligible audiences into treatment (can see ads) and control (held out), then compares outcomes to estimate incremental conversions, lift %, iROAS, etc. Platforms like Meta (Conversion Lift) and Google Ads (Conversion Lift, user- or geo-based) provide built-in frameworks for running these tests.
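The core arithmetic of a lift readout can be sketched in a few lines. This is an illustrative example with invented numbers, not any platform's actual reporting logic: scale the control group's outcomes to the treatment group's size to form the counterfactual baseline, then take differences.

```python
# Illustrative sketch of core lift metrics from a randomized
# treatment/control split. All figures below are hypothetical.

def lift_metrics(treat_users, treat_conversions, treat_revenue,
                 ctrl_users, ctrl_conversions, ctrl_revenue, spend):
    # Scale control outcomes to the treatment group's size so the
    # counterfactual baseline is directly comparable.
    scale = treat_users / ctrl_users
    baseline_conv = ctrl_conversions * scale
    baseline_rev = ctrl_revenue * scale

    incremental_conv = treat_conversions - baseline_conv
    incremental_rev = treat_revenue - baseline_rev
    relative_lift = incremental_conv / baseline_conv  # lift %
    iroas = incremental_rev / spend                   # incremental ROAS
    return incremental_conv, relative_lift, incremental_rev, iroas

# Hypothetical 90/10 holdout: 900k treated users, 100k held out
conv, lift, rev, iroas = lift_metrics(
    treat_users=900_000, treat_conversions=10_800, treat_revenue=540_000,
    ctrl_users=100_000, ctrl_conversions=1_000, ctrl_revenue=50_000,
    spend=100_000)
print(f"incremental conversions: {conv:.0f}")  # 10,800 - 9,000 = 1,800
print(f"relative lift: {lift:.1%}")            # 1,800 / 9,000 = 20.0%
print(f"iROAS: {iroas:.2f}")                   # 90,000 / 100,000 = 0.90
```

Note that the control group's raw conversion count is never compared directly to the treatment group's; it is first scaled up by the holdout ratio, which is why clean randomization and stable group sizes matter.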
Why It Matters
Proves true impact: Unlike attribution (which assigns credit), incrementality shows what wouldn’t have happened without ads, making it the gold standard for causal effectiveness.
Guides budget shifts: Compare incremental ROAS and incremental conversions across channels/campaigns to fund what really works.
Privacy-ready: Modern lift tests can use randomization and aggregated/geo data, reducing reliance on user-level tracking.
Examples
User-level lift (Google Ads): Randomly hold out 10% of eligible users. After the campaign, Google reports Incremental Conversions, Relative Lift, and Incremental Conversion Value for exposed vs. control.
Geo experiment (Google Ads): Randomize by non-overlapping regions (cities/DMAs). Report Incremental Conversions, Incremental ROAS, and Incremental Cost to quantify causal lift at the regional level.
Meta Conversion Lift: Meta suppresses ads to a holdout group and compares their outcomes with those of exposed users to estimate incremental sales/conversions.
Ghost Ads approach: Instead of showing public-service ads to the control group, the platform logs where your ad would have won the auction and uses those “ghost impressions” to estimate lift, improving realism and efficiency.
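A geo experiment like the one above can be read out with a simple baseline adjustment. This sketch uses invented region names and figures, and a basic pre-period ratio adjustment (real geo tools typically use more sophisticated matched-market or synthetic-control methods):

```python
# Hypothetical geo-experiment readout: compare treatment vs. control
# regions, using the control regions' pre-to-test growth to estimate
# what the treatment regions would have done without ads.

# (region, pre-period conversions, test-period conversions)
treatment = [("geo_A", 1_000, 1_250), ("geo_B", 800, 990)]
control   = [("geo_C", 1_200, 1_260), ("geo_D", 600, 630)]

def total(rows, idx):
    return sum(r[idx] for r in rows)

# Organic growth observed in control markets during the test window.
organic_growth = total(control, 2) / total(control, 1)

# Counterfactual for treatment markets = pre-period * organic growth.
expected = total(treatment, 1) * organic_growth
incremental = total(treatment, 2) - expected

spend = 20_000  # hypothetical media spend in treatment regions
print(f"incremental conversions: {incremental:.0f}")
print(f"incremental cost per conversion: {spend / incremental:.2f}")
```

Because randomization happens at the region level, no user-level tracking is needed, which is why geo designs are often the privacy-ready option mentioned above.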
Best Practices
State the decision upfront: Pick your success metric (conversions, revenue, app installs) and decide whether you need user-level or geo-level testing.
Randomize cleanly: Use platform tools to assign treatment vs. control and avoid audience overlap or targeting changes mid-test.
Run long enough for power: Ensure enough conversion volume to detect a realistic minimum detectable effect (MDE); avoid stopping early. (Google/Meta guides emphasize waiting for full results.)
Pick the right design:
User-based lift when you need granular results.
Geo experiments when user-level suppression isn’t possible or for broad media (YouTube, Display, TV-like).
Choose robust measurement: For display/retargeting, consider Ghost Ads designs when available to better reflect real auctions.
Read the right metrics: Focus on incremental conversions/value, lift %, and iROAS, not just CPC/CPA.
Triangulate with other methods: Pair lift tests (tactical, causal) with MMM (strategic, long-term) and attribution (journey visibility) for a fuller picture.
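The "run long enough for power" practice can be made concrete with a pre-test sample-size check. This is a rough sketch using the standard two-proportion z-test normal approximation (the z-values for alpha = 0.05 and 80% power are hardcoded assumptions); platform tools compute this for you, but the shape of the trade-off is worth seeing:

```python
# Rough power check: how many users per arm are needed to detect a
# given minimum detectable effect (MDE) in conversion rate?
# Normal-approximation formula for a two-proportion test.
import math

def users_per_arm(baseline_rate, mde_relative, z_alpha=1.96, z_beta=0.84):
    # z_alpha = 1.96 -> two-sided alpha of 0.05
    # z_beta  = 0.84 -> 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# e.g. 1% baseline conversion rate, detecting a 10% relative lift
# requires well over 100k users per arm; a 20% lift needs far fewer.
print(users_per_arm(0.01, 0.10))
print(users_per_arm(0.01, 0.20))
```

The takeaway matches the best practice: small lifts on low-conversion-rate events demand large samples and long run times, so stopping early almost guarantees an underpowered, misleading read.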
Related Terms
Attribution Modelling
Geo Experiments
Conversion Lift (Meta / Google Ads)
Ghost Ads
Marketing Mix Modeling (MMM)
FAQs
Q1. How is incrementality different from attribution?
Attribution assigns credit across touchpoints; incrementality estimates what your ads caused by comparing treatment vs. control. They answer different questions and are often used together.
Q2. What metrics should I look at?
Incremental Conversions, Relative Lift (%), Incremental Conversion Value, and Incremental ROAS (especially for geo tests). These are standard in Google’s Conversion Lift reporting.
Q3. When should I use geo experiments instead of user-level tests?
Use geo when user-level suppression isn’t feasible (policy, walled gardens, or broad media) or when you want market-level outcomes.
Q4. What is a “ghost ads” test?
A design where the platform logs impressions your ad would have won for a control group and uses that counterfactual to compute lift, reducing wasteful PSA impressions.
Q5. How long should a lift test run?
Until you reach the planned sample size/power and the platform indicates final results; both Google and Meta caution against reading partial results early.