Post-run Plots

Purpose

Post-run plots decompose the fitted response into its constituent parts. They answer the question: how much does each predictor contribute to the modelled KPI, and how do those contributions evolve over time? These plots are written to 30_post_run/ within the run directory.

The runner generates them via write_response_decomposition_artifacts() in R/run_artifacts_enrichment.R, which calls runner_response_decomposition_tables() to compute per-term contributions from the design matrix and posterior coefficient estimates. For hierarchical models with random-effects formula syntax (|), the decomposition may fail gracefully — the runner logs a warning and continues to downstream stages.
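The idea behind the decomposition can be sketched in a few lines of base R. This is an illustrative stand-in, not the runner's actual code: the data, formula, and use of `lm()` in place of the Bayesian fit are all assumptions, but the core operation (scaling each design-matrix column by its coefficient) is the same.

```r
# Sketch: per-term contributions as design-matrix columns scaled by
# coefficient estimates. Data and model are illustrative; the runner
# uses posterior means from the Bayesian fit instead of lm().
set.seed(1)
df <- data.frame(
  kpi    = rnorm(52, 100, 10),
  tv     = runif(52, 0, 50),
  search = runif(52, 0, 30)
)
fit  <- lm(kpi ~ tv + search, data = df)
X    <- stats::model.matrix(kpi ~ tv + search, data = df)
beta <- coef(fit)
contrib <- sweep(X, 2, beta, `*`)  # one contribution column per term
colSums(contrib)                   # total contribution per term
```

By construction the rows of `contrib` sum to the fitted values, so the decomposition is exact for the modelled response.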

Plot catalogue

| Filename | What it shows | Conditions |
| --- | --- | --- |
| decomp_predictor_impact.png | Total contribution per model term (bar chart) | Decomposition tables computed successfully |
| decomp_timeseries.png | Stacked media channel contribution over time | Decomposition tables computed successfully; media terms present |

Predictor impact

Filename: decomp_predictor_impact.png


What it shows

A horizontal bar chart of the total contribution of each model term to the response, computed as the sum of coefficient × design-matrix column across all observations. Terms are sorted by absolute contribution magnitude. The intercept and total rows are excluded.
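The sorting and filtering described above can be reproduced from the CSV companion. The column names (`term`, `contribution`) and the figures here are assumptions for illustration:

```r
# Sketch: rank terms by absolute total contribution, dropping the
# intercept and total rows, as in decomp_predictor_impact.png.
# Column names and values are illustrative, not verified.
impact <- data.frame(
  term = c("(Intercept)", "tv", "search", "price", "total"),
  contribution = c(5200, 840, 410, -220, 6230)
)
keep   <- !impact$term %in% c("(Intercept)", "total")
ranked <- impact[keep, ]
ranked <- ranked[order(abs(ranked$contribution), decreasing = TRUE), ]
ranked$term
# barplot(ranked$contribution, names.arg = ranked$term, horiz = TRUE)
```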

When it is generated

The runner generates this plot when runner_response_decomposition_tables() returns a valid predictor-level summary table. This requires that stats::model.matrix() can parse the model formula against the original input data — a condition that holds for BLM and pooled models but may fail for hierarchical models whose formulas contain random-effects syntax.
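The failure mode is easy to reproduce: `stats::model.matrix()` does not understand lme4-style `(1 | group)` terms, and with a factor grouping variable the term errors when evaluated. The `tryCatch()` below mirrors the graceful-failure behaviour the runner describes (the exact handling in the runner is not shown here):

```r
# Sketch: why the decomposition can fail for random-effects formulas.
# "(1 | region)" is evaluated as a logical OR, which errors on a factor.
df <- data.frame(y = rnorm(10), x = rnorm(10),
                 region = factor(rep(c("a", "b"), 5)))
res <- tryCatch(
  stats::model.matrix(y ~ x + (1 | region), data = df),
  error = function(e) conditionMessage(e)
)
is.character(res)  # TRUE: the call failed and we captured the message
```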

How to interpret it

The bar lengths represent total modelled impact over the data period. Media channels with large positive bars contributed most to the KPI in the model’s account of the data. Control variables (trend, seasonality, holidays) often dominate in absolute terms because they capture baseline demand — this is expected and does not diminish the media findings.

Negative contributions can arise for terms with negative coefficients (e.g. price sensitivity) or for seasonality harmonics where the net effect over the year partially cancels.
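The seasonality-cancellation point is worth a small numeric check. A yearly harmonic swings widely week to week, yet its total over a full cycle is essentially zero (the coefficient value below is illustrative):

```r
# Sketch: a yearly harmonic nearly cancels over 52 weeks, so its total
# contribution can be small or negative even when weekly swings are large.
week     <- 1:52
harmonic <- sin(2 * pi * week / 52)
beta     <- 3                 # illustrative coefficient
contrib  <- beta * harmonic
range(contrib)                # large weekly swings, roughly -3 to 3
sum(contrib)                  # near-zero total over the year
```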

Warning signs

  • A media channel with negative total contribution: Unless the coefficient is intentionally unconstrained (no lower bound at zero), a negative contribution suggests the model is absorbing noise or confounding through that channel. Review the posterior forest plot and check whether the coefficient’s credible interval excludes zero.
  • Intercept-dominated decomposition (not shown here, but visible in the CSV): If the intercept accounts for >90% of the total, media effects are negligible relative to baseline demand. This may be correct, but it limits the utility of the model for budget allocation.
  • Missing plot: If the decomposition failed (logged as a warning), the model type likely does not support direct model.matrix() decomposition. The CSV companions will also be absent.

Action

Use this plot to prioritise which channels to scrutinise. Cross-reference large contributors with the prior vs posterior plot to confirm they are data-driven rather than prior-driven.

  • decomp_predictor_impact.csv in 30_post_run/ contains the same data in tabular form.
  • posterior_summary.csv in 30_post_run/ provides the coefficient summary underlying the decomposition.

Decomposition time series

Filename: decomp_timeseries.png


What it shows

A stacked area chart of media channel contributions over time. Each layer represents one media term’s weekly contribution (coefficient × transformed media input). Non-media terms (intercept, controls, seasonality) are excluded to focus the view on the media mix.
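The data shape behind the chart is a long table of weekly per-channel contributions (the same shape as decomp_timeseries.csv; column names and values here are illustrative):

```r
# Sketch: weekly media contributions in long format, the shape behind
# the stacked area chart. Channel names and values are illustrative.
set.seed(2)
weeks <- seq.Date(as.Date("2024-01-01"), by = "week", length.out = 52)
contrib <- data.frame(
  week    = rep(weeks, times = 2),
  channel = rep(c("tv", "search"), each = 52),
  value   = c(80 + 20 * sin(2 * pi * (1:52) / 52),  # tv
              runif(52, 10, 30))                     # search
)
# Stack height per week = sum over channels:
stack_height <- tapply(contrib$value, contrib$week, sum)
head(stack_height)
# ggplot2::ggplot(contrib, aes(week, value, fill = channel)) + geom_area()
```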

When it is generated

The runner generates this plot alongside the predictor impact chart, provided the decomposition tables include at least one media term.

How to interpret it

The height of each band at a given week represents how much that channel contributed to the modelled response. Seasonal patterns in the stack reflect campaign timing and adstock carry-over. The total height of the stack is the aggregate media contribution — the gap between this and the observed KPI is accounted for by non-media terms and noise.
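The accounting identity described above (media stack plus non-media terms equals the modelled response) can be checked directly. All figures below are illustrative:

```r
# Sketch: aggregate media contribution vs. the modelled response.
# "fitted_kpi" stands in for the model's predicted KPI per week.
fitted_kpi  <- c(120, 135, 128, 140)
media_total <- c(30, 42, 35, 48)          # stack height per week
baseline    <- fitted_kpi - media_total   # non-media terms incl. intercept
media_share <- sum(media_total) / sum(fitted_kpi)
round(media_share, 2)
```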

Warning signs

  • A channel with near-zero contribution throughout: The model assigns negligible effect to that channel. This could be correct (low spend, weak signal) or a sign that multicollinearity is suppressing the estimate.
  • Implausibly large single-channel dominance: If one channel accounts for the vast majority of the media stack, verify the coefficient is plausible and not inflated by collinearity with a correlated channel.
  • Abrupt jumps unrelated to spend changes: Check whether the design matrix term (adstock/saturation output) is well-behaved. Sudden spikes in contribution without corresponding spend changes suggest a data or transform issue.
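The last warning sign can be screened for mechanically: flag weeks where a channel's contribution jumps sharply while its spend barely moves. This is a crude illustrative filter, not part of the runner, and the thresholds are arbitrary:

```r
# Sketch: flag weeks where contribution jumps (>50% relative change)
# while spend is nearly flat (<10% relative change).
spend   <- c(10, 10, 11, 10, 10, 10)
contrib <- c(20, 21, 20, 55, 21, 20)   # week 4 spikes without a spend change
d_contrib <- diff(contrib)      / pmax(contrib[-length(contrib)], 1)
d_spend   <- abs(diff(spend))   / pmax(spend[-length(spend)], 1)
suspect <- which(d_contrib > 0.5 & d_spend < 0.1) + 1L  # spiking week index
suspect
```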

Action

Compare the relative channel contributions here with the business’s spend allocation. Channels that receive large spend but show small contributions may have diminishing returns or weak effects. This comparison motivates the budget optimisation stage.
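The spend-vs-contribution comparison amounts to comparing two share vectors. All figures here are illustrative:

```r
# Sketch: compare each channel's share of spend with its share of the
# modelled media contribution. Values are made up for illustration.
spend   <- c(tv = 500, search = 300, social = 200)
contrib <- c(tv = 900, search = 700, social = 100)
shares <- data.frame(
  channel       = names(spend),
  spend_share   = spend / sum(spend),
  contrib_share = contrib / sum(contrib)
)
# Channels whose spend share exceeds their contribution share are
# candidates for diminishing returns:
shares$channel[shares$spend_share > shares$contrib_share]
```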

  • decomp_timeseries.csv in 30_post_run/ contains the weekly decomposition in long format.
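For spreadsheet-style inspection, the long-format table can be pivoted to one column per channel with base R's `reshape()`. The column names below are assumptions about the CSV layout:

```r
# Sketch: pivot a long-format weekly decomposition to wide form.
# Column names (week, channel, value) are assumed, not verified.
long <- data.frame(
  week    = rep(1:3, times = 2),
  channel = rep(c("tv", "search"), each = 3),
  value   = c(10, 12, 11, 5, 6, 7)
)
wide <- reshape(long, idvar = "week", timevar = "channel",
                direction = "wide")
wide  # columns: week, value.tv, value.search
```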

Cross-references