In short: Media Mix Modeling (MMM) is an econometric technique that measures the impact of marketing spend on sales using aggregate data, which makes it immune to cookie deprecation and ATT (App Tracking Transparency). In 2025 Google made Meridian publicly available, joining Meta's Robyn (open-source since 2021): two frameworks that have democratised a discipline once reserved for six-figure consulting engagements. Implementing a first model requires 2-3 years of historical data and 4-8 weeks of work.
- Google released Meridian as an open-source MMM framework in 2025 — Google Blog 2025
- Meta has kept Robyn open-source since 2021 (over 3,900 GitHub stars) — Meta Robyn GitHub
- MMM is back in the spotlight because it does not rely on individual tracking, eroded by 35-60% due to ATT and cookie deprecation — Nielsen 2024
Media Mix Modeling (MMM, or Marketing Mix Modeling) is not new: its econometric roots go back to the 1960s, when Kristian Palda studied the cumulative effect of advertising on sales. For decades it remained confined to large FMCG companies with dedicated budgets and in-house data science teams. In 2026, two shifts have made it accessible to SMEs as well: the open-source frameworks from Google and Meta, and the erosion of individual tracking that forced the industry to seek privacy-proof alternatives.
This is a practical guide: what MMM is, how it works, which tools to choose and how to set up a first project. If instead you are looking for an overarching framing of the marketing attribution problem — including multi-touch attribution, incrementality and geo-holdout — our dedicated guide is Marketing Attribution: 62% Get It Wrong (Nielsen 2026).
What Media Mix Modeling is
Media Mix Modeling is a statistical technique that estimates, through regression, the contribution of each marketing channel to a company’s total sales, controlling for exogenous variables such as seasonality, price, distribution, macroeconomics and competitor activity. Rather than following the individual user (as multi-touch attribution does), it analyses data aggregated by week and channel: media spend, impressions, GRP, sales, organic searches, weather, holidays.
The result is a mathematical model that answers concrete operational questions:
- If I shift 50,000 euros from Google Ads to connected TV, how does revenue change over the next 12 months?
- What is the saturation point (diminishing returns) of my Meta investment?
- How long does the lagged effect (adstock) of a TV campaign persist on sales in the weeks after airing?
- What share of sales is base (brand equity, word of mouth) and what share is incremental, driven by media spend?
Three features distinguish it from other measurement approaches. First: it uses only aggregate data, so it is privacy-proof — it does not depend on cookies, device IDs or individual consent. Second: it integrates online and offline channels in the same model (digital, TV, radio, OOH, print, sponsorship). Third: it explicitly includes exogenous variables, isolating the contribution of marketing from market noise.
How it works: adstock, saturation, base vs incremental
For those coming from digital attribution, MMM introduces three concepts that don’t exist in last-click dashboards.
Adstock: the lagged effect of advertising
A TV campaign broadcast in week 1 does not exhaust its effect that same week: it influences sales for the following 3-8 weeks with exponential decay. The adstock parameter models this memory. Brand channels (TV, OOH, YouTube awareness) have a high adstock (slow decay), performance channels (retargeting, branded search) a low adstock (fast decay). Ignoring adstock means systematically underestimating the ROI of brand channels.
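Concretely, geometric adstock (the simplest of the decay families supported by both Robyn and Meridian) can be sketched in a few lines of Python; the decay rate below is illustrative, not a benchmark:

```python
def geometric_adstock(spend, decay=0.7):
    """Carry over a share `decay` of the previous week's adstocked value:
    adstock[t] = spend[t] + decay * adstock[t-1]."""
    adstocked = []
    carry = 0.0
    for x in spend:
        carry = x + decay * carry
        adstocked.append(carry)
    return adstocked

# A one-week TV burst keeps influencing the following weeks:
tv = [100, 0, 0, 0]
print(geometric_adstock(tv, decay=0.5))  # [100.0, 50.0, 25.0, 12.5]
```

A high decay (e.g. 0.7-0.9) models slow-fading brand channels; a low decay (0.0-0.2) models performance channels whose effect disappears almost immediately.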
Saturation: diminishing returns on spend
Doubling spend on a channel does not double results. There is a point beyond which each additional euro yields ever smaller returns. MMM estimates the saturation curve (often modelled with Hill or S-shape functions) for each channel, allowing you to identify where the budget is already saturated and where there is still room for growth. It is the mathematical foundation of mix optimisation.
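A minimal sketch of the Hill function mentioned above, with an illustrative half-saturation spend of 50,000 euros (the spend level that yields 50% of the maximum response):

```python
def hill_saturation(x, half_sat=50_000.0, slope=1.0):
    """Hill function: response grows with spend but flattens toward 1."""
    return x**slope / (x**slope + half_sat**slope)

# Doubling spend from 50k to 100k does not double the response:
print(hill_saturation(50_000))   # 0.5
print(hill_saturation(100_000))  # ~0.667, not 1.0
```

The `half_sat` and `slope` parameters are exactly what the model estimates per channel: a channel already past its half-saturation point is where each additional euro buys the least.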
Base vs incremental sales
The model decomposes total sales into two components: base sales (what the company sells without media activity — brand equity, distribution, word of mouth) and incremental sales (attributable to specific campaigns). Typically base sales represent 60-80% of the total for mature brands. If you only look at platform dashboards, you only see the incremental and miss the strategic view on where to invest to grow the base.
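Once the model is fitted, the decomposition is simple arithmetic on the intercept and the per-channel contributions. A toy example with hypothetical coefficients (all numbers illustrative, not benchmarks):

```python
# Hypothetical fitted values for one week.
base_intercept = 120_000.0  # brand equity, distribution, word of mouth
coefs = {"tv": 0.8, "meta": 1.2, "search": 1.5}  # sales per unit of transformed spend
media = {"tv": 30_000.0, "meta": 20_000.0, "search": 10_000.0}

incremental = {ch: coefs[ch] * media[ch] for ch in coefs}
total = base_intercept + sum(incremental.values())
base_share = base_intercept / total
print(f"base share: {base_share:.0%}")  # base share: 66%
```

In this toy week the base lands in the 60-80% range typical of mature brands; platform dashboards would only ever show you the 63,000 euros of incremental.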
Why MMM is back at the centre in 2026
The MMM revival is not academic: it responds to three converging pressures.
Erosion of individual tracking. Apple’s App Tracking Transparency has pushed iOS opt-in below 25% in EMEA according to AppsFlyer. Chrome has completed third-party cookie deprecation. Ad blockers are widespread. MTA models based on individual tracking are working with ever more incomplete data, while MMM — based on aggregates — remains valid.
Technological democratisation. Until 2019, an MMM project required consulting fees of 150-500K euros with agencies such as Nielsen, Ipsos MMA or Analytic Partners. In 2021 Meta open-sourced Robyn; in 2024-2025 Google released Meridian as open-source. The tools are free, built on standard languages (R, Python), and documented. The cost of a first MMM project has dropped to 15-50K euros of consulting or internal work.
Pressure on demonstrable ROI. According to the Gartner 2024 CMO Survey, only 52% of senior marketing leaders can prove the value of marketing to the board. MMM produces an output that even the CFO can read (contribution in euros per channel), not just platform-centric metrics.
Open-source MMM tools compared: Meridian, Robyn, LightweightMMM
Today there are three main open-source frameworks, all maintained by Big Tech but with different philosophies. The choice depends on internal skills (R vs Python), spend volumes and mix complexity.
In practice, for most projects starting today the choice is binary between Meridian (if you have a Python / data scientist team) and Robyn (if you have R skills or prefer more mainstream documentation). LightweightMMM remains useful for teaching and prototypes, but Google has officially indicated Meridian as the successor for new projects.
How to implement a first MMM project in 6 steps
A first MMM cycle typically takes 4-8 weeks for a company with a 300-800K euro budget and two years of clean historical data. The six fundamental steps, based on best practices documented by Meta Robyn and Google Meridian, are:
- Data collection (weeks 1-2). Weekly sales (minimum 104 weeks), spend per media channel, impressions/GRP, average price, promotions, distribution, exogenous variables (holidays, weather, lockdowns, extraordinary events). Data quality drives model quality more than any statistical technique.
- Data structure (week 2). Weekly panel with one row per week (per-geo if available), columns for KPI, spend and control variables. CSV or parquet format. Exceptional outliers (e.g. COVID effect) should be explicitly flagged.
- Choice of transformation functions (week 3). Adstock (geometric or Weibull) and saturation (Hill function) for each channel. Parameters calibrated with realistic prior ranges: TV adstock 3-8 weeks, digital performance 0-2 weeks.
- Model training (weeks 3-4). Multiple runs (Robyn executes thousands of iterations with Nevergrad; Meridian uses Bayesian MCMC). Selection of candidate models on the Pareto front of decomposition error + prediction error.
- Calibration with experiments (weeks 4-5). Comparison of MMM coefficients with results from real incrementality tests (conversion lift, geo-holdout). Calibration is the step most often skipped and the one that distinguishes a reliable model from a numerical exercise.
- Budget optimisation and reporting (weeks 5-6). Reallocation scenarios with constraints (min/max per channel, total budget), saturation curves for planning dashboards, deliverables for non-technical stakeholders.
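The six steps above can be sketched end to end on simulated data. This toy version uses ordinary least squares in place of Robyn's Nevergrad search or Meridian's Bayesian MCMC, and hard-codes illustrative adstock and saturation parameters instead of estimating them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 104  # two years of weekly data, the recommended minimum

# Steps 1-2: weekly panel (simulated here; in practice loaded from CSV/parquet)
tv_spend = rng.uniform(0, 50_000, n_weeks)
search_spend = rng.uniform(0, 20_000, n_weeks)

# Step 3: adstock + saturation transforms (geometric decay, Hill with slope 1)
def transform(spend, decay, half_sat):
    adstocked = np.zeros_like(spend)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        adstocked[t] = carry
    return adstocked / (adstocked + half_sat)

X = np.column_stack([
    np.ones(n_weeks),                                      # base sales
    transform(tv_spend, decay=0.7, half_sat=40_000),       # slow-decay brand channel
    transform(search_spend, decay=0.1, half_sat=15_000),   # fast-decay performance channel
])

# Simulated ground truth so the fit has something to recover
sales = X @ np.array([100_000, 30_000, 20_000]) + rng.normal(0, 2_000, n_weeks)

# Step 4: fit (OLS as a stand-in for the real estimation machinery)
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef.round(0))  # recovered base and per-channel contributions
```

Steps 5-6 (calibration and budget optimisation) then operate on these fitted coefficients and saturation curves; the real frameworks automate all of this, including hyperparameter search over decay and half-saturation.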
The next cycle (refresh) is faster (2-3 weeks) because data pipeline and model are already in place. The recommended cadence is quarterly or semi-annual.
Typical mistakes and how to avoid them
Academic literature and framework maintainers (see Harvard Business Review and Robyn’s “Analyst’s Guide” section) flag recurring mistakes that make MMM models useless.
- Too short a history. Less than 18 months (roughly 78 weeks) does not allow the model to distinguish seasonality from media effect. Minimum 104 weeks, ideal 156.
- No calibration. An MMM not calibrated with incrementality tests produces plausible but often far-from-real coefficients. Calibration reduces uncertainty by 30-50%.
- Ignoring exogenous variables. Launching a campaign during a price promotion and attributing all lift to media is the most common mistake.
- Over-interpreting single coefficients. MMM returns estimates with confidence intervals: a channel with coefficient 0.05 ± 0.15 did not contribute "little"; it is simply indistinguishable from zero in the dataset.
- Rejecting any qualitative input. Bayesian frameworks (Meridian) accept priors: market experience and industry benchmarks make the model more robust, not less “scientific”.
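A quick numeric check of the confidence-interval point above, using the 0.05 ± 0.15 example from the list:

```python
coef, half_width = 0.05, 0.15  # the example from the text: 0.05 ± 0.15
ci = (round(coef - half_width, 2), round(coef + half_width, 2))
print(ci)  # (-0.1, 0.2)
distinguishable = not (ci[0] <= 0.0 <= ci[1])
print("distinguishable from zero:", distinguishable)  # False
```

Because the interval straddles zero, the honest reading is "no measurable effect in this dataset", not "small positive effect".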
MMM or incrementality testing: when you need both
MMM does not replace incrementality testing: the two approaches are complementary. MMM offers a strategic view of the entire mix but is a correlational model; incrementality testing (conversion lift, geo-holdout) provides point causal evidence on a single channel. Best practice, documented by Nielsen teams and by Meridian’s calibration framework itself, is to use incrementality tests as ground truth to calibrate MMM coefficients.
In practice: 1-2 geo-holdouts per year on the main channels + semi-annual MMM refresh = a robust evidence-based measurement system. Those who implement only MMM risk making decisions on non-validated coefficients; those who implement only incrementality testing get point validation but miss the strategic cross-channel view.
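A stylised sketch of what calibration against a geo-holdout looks like. The numbers are illustrative, and Meridian's actual calibration framework injects lift-test results into the model as Bayesian priors rather than rescaling coefficients after the fact:

```python
# Illustrative numbers only.
mmm_incremental = 180_000.0  # incremental sales the MMM attributes to a channel
geo_lift = 120_000.0         # incremental sales measured by a geo-holdout test

calibration_factor = geo_lift / mmm_incremental
print(f"uncalibrated MMM overstates the channel by {1 / calibration_factor:.1f}x")
calibrated_incremental = mmm_incremental * calibration_factor  # anchored to the test
```

The gap between the two numbers is exactly the "plausible but far from real" risk described in the mistakes section: without the test, nothing in the model would flag the 1.5x overstatement.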
Do you need to implement an MMM system?
Deep Marketing supports Italian SMEs and brands in designing evidence-based measurement systems with the open-source frameworks Meridian and Robyn, from data collection to experiment-based calibration. Request a feasibility assessment or explore our digital advertising consulting to align budget with real business impact.
Frequently Asked Questions
What is Media Mix Modeling?
Media Mix Modeling (MMM) is an econometric technique that estimates the contribution of each marketing channel to total sales through regression on aggregate data (week by week, channel by channel), controlling for exogenous variables such as price, seasonality and competition. Unlike multi-touch attribution, it does not depend on individual tracking and integrates online and offline channels (digital, TV, radio, OOH, print) in the same model. Since 2024 it has been considered the privacy-proof standard for strategic marketing measurement.
What is the difference between MMM and multi-touch attribution?
Multi-touch attribution (MTA) tracks the individual user across touchpoints using cookies and device IDs, distributing conversion credit. MMM instead uses aggregate data (spend and sales per week), estimating statistical relationships without identifying the single user. MTA is granular but fragile (cookie deprecation, iOS ATT, ad blockers); MMM is aggregate but robust in a privacy-first environment. In 2026 the evidence-based approach combines strategic MMM with incrementality tests for causal validation.
How much does it cost to implement an MMM project in 2026?
With the open-source frameworks Meridian (Google) and Robyn (Meta), the cost of a first MMM project has dropped from 150-500K euros for traditional consulting to 15-50K euros for in-house work or specialised consulting. It requires at least 2 years of clean historical data, 4-8 weeks of work for the first cycle, 2-3 weeks for subsequent refreshes. The recommended cadence is quarterly or semi-annual.
Meridian or Robyn: which to choose?
Meridian (Google) is ideal for multi-regional brands with a Python team, supports hierarchical Bayesian modelling and integrates reach and frequency natively at geographic level. Robyn (Meta) is more accessible for R users, has extensive documentation and a broad community, and excels for DTC e-commerce and predominantly digital mixes. Both are open-source and actively maintained. For new projects on complex mixes, Meridian is the 2025-2026 reference; for rapid prototyping or budgets below 300K/year, Robyn remains more practical.
Do you need a data scientist to do MMM?
For a reliable first project, yes: MMM requires applied statistics skills (regression, Bayesian inference, hyperparameter tuning) that go beyond dashboard use. There are three alternatives: train a marketing analyst internally through a dedicated programme (6-12 months), engage an external consultancy to work alongside the internal team during the first 1-2 cycles, or buy SaaS platforms that abstract the complexity (Analytic Partners, Measured, Ipsos MMA) at 50-200K/year. The hybrid solution — consulting + internal upskilling — is the most widespread among SMEs.
Sources and References
- Google — Meridian: Open-Source Marketing Mix Model Now Available to Everyone (2025)
- Google — Meridian Open-Source Repository
- Meta — Robyn: Open-Source Marketing Mix Modeling
- Meta Robyn — Analyst’s Guide to MMM
- Google — LightweightMMM Repository
- Harvard Business Review — Raising the ROI of Marketing Mix Modeling
- Nielsen — 2024 Annual Marketing Report
- Gartner — 2024 CMO Survey on Marketing Value
- AppsFlyer — ATT Adoption Report
- Google Meridian — Experimental Calibration Framework


