Optimisation Plots
Purpose
Optimisation plots visualise the outputs of the budget allocator. They translate model estimates into actionable budget decisions by showing response curves, efficiency comparisons, and the sensitivity of recommendations to budget changes. These are decision-layer artefacts: they sit downstream of all modelling and diagnostics, and their quality depends entirely on the credibility of the upstream fit.
All optimisation plots are written to 60_optimisation/ within the run directory. The runner generates them via write_budget_optimisation_artifacts() in R/run_artifacts_enrichment.R, which calls the public plotting APIs in R/optimise_budget_plots.R. They require a successful call to optimise_budget() that produces a budget_optimisation object with a plot_data payload.
Plot catalogue
| Filename | What it shows | Conditions |
|---|---|---|
| budget_response_curves.png | Channel response curves with current/optimised points | Optimisation completed with response curve data |
| budget_roi_cpa.png | ROI or CPA comparison by channel | Optimisation completed with ROI/CPA summary |
| budget_impact.png | Spend reallocation and response impact (diverging bars) | Optimisation completed with ROI/CPA summary |
| budget_contribution.png | Absolute response comparison by channel | Optimisation completed with ROI/CPA summary |
| budget_confidence_comparison.png | Posterior credible intervals for current vs optimised | Optimisation completed with response points |
| budget_sensitivity.png | Total response change when each channel varies ±20% | Optimisation completed with response curve data |
| budget_efficient_frontier.png | Optimised response across budget levels | Efficient frontier computed via budget_efficient_frontier() |
| budget_kpi_waterfall.png | KPI decomposition waterfall (base + channels + controls) | Waterfall data computable from model coefficients and data means |
| budget_marginal_roi.png | Marginal ROI (or marginal response) curves by channel | Optimisation completed with response curve data |
| budget_spend_share.png | Current vs optimised spend allocation as percentage | Optimisation completed with ROI/CPA summary |
Response curves
Filename: budget_response_curves.png
What it shows
Faceted line charts of the estimated response curve for each media channel. The x-axis is raw spend (model units); the y-axis is expected response. A shaded band shows the posterior credible interval around the mean curve. Two marked points per channel indicate the current (reference) and optimised spend allocations.
The subtitle notes which media transforms were applied (e.g. Hill saturation, adstock). A caption reports the marginal response at the optimised point for each channel.
When it is generated
The runner generates this plot whenever optimise_budget() returns response curve data in the plot_data payload. This requires at least one media channel in the allocation configuration with a computable response function.
How to interpret it
The curve shape encodes diminishing returns. Steep initial slopes indicate high marginal response at low spend; flattening curves indicate saturation. The gap between the current and optimised points shows the direction of the recommended reallocation: if the optimised point sits to the right (higher spend) of the current point, the allocator recommends increasing that channel’s budget.
The credible band width reflects posterior uncertainty about the response function. Wide bands mean the shape is poorly identified — the recommendation is sensitive to modelling assumptions. Narrow bands indicate data-informed estimates.
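The diminishing-returns geometry can be sketched with a toy Hill-type saturation function. This is illustrative only — the actual curve family, parameters, and units come from the fitted model, and the function name here is made up:

```python
import numpy as np

def hill_response(spend, half_sat=50.0, shape=1.5, ceiling=100.0):
    """Toy Hill saturation curve: response rises steeply at low spend,
    then flattens as the channel saturates."""
    spend = np.asarray(spend, dtype=float)
    return ceiling * spend**shape / (half_sat**shape + spend**shape)

# Marginal response (local slope) shrinks as spend grows: the same unit
# of budget buys less incremental response on the flat part of the curve.
steep = hill_response(11.0) - hill_response(10.0)    # low-spend region
flat = hill_response(191.0) - hill_response(190.0)   # saturated region
```

The current and optimised points in the plot are simply two spend values on this curve; the direction of the recommendation is which side of the current point the optimum falls on.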
Warning signs
- Very wide credible bands: The response curve shape is uncertain. Budget recommendations based on it carry substantial risk.
- Optimised point near the flat part of the curve: The channel is saturated at the recommended spend. Further increases yield negligible marginal returns.
- Current and optimised points nearly identical: The allocator found little room for improvement on that channel. The current allocation is already near-optimal (or the response function is too uncertain to justify a change).
Action
Compare the marginal response values across channels. The allocator equalises marginal response at the optimum — if marginal values differ substantially, the optimisation may have hit a constraint (spend floor/ceiling). Cross-reference with the budget sensitivity plot to assess how robust the recommendation is.
Related artefacts
budget_response_curves.csv in 60_optimisation/ contains the curve data. budget_response_points.csv in 60_optimisation/ contains the current and optimised point coordinates.
ROI/CPA comparison
Filename: budget_roi_cpa.png
What it shows
A grouped bar chart comparing ROI (or CPA, for subscription KPIs) by channel under the current and optimised allocations. If currency_col is defined per channel, bars show financial ROI; otherwise they show response-per-unit-spend in model units. A TOTAL bar summarises the portfolio-level metric.
The metric choice is automatic: the allocator uses ROI for revenue-type KPIs and CPA for subscription-type KPIs.
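Both metrics are simple ratios; a minimal sketch of the distinction (channel names and figures are made up, and this is not the package's actual roi_cpa schema):

```python
# Toy per-channel figures; a revenue KPI uses ROI, a subscription KPI uses CPA.
channels = {
    "search": {"spend": 100.0, "response": 250.0},
    "social": {"spend": 80.0, "response": 120.0},
}

roi = {name: c["response"] / c["spend"] for name, c in channels.items()}
cpa = {name: c["spend"] / c["response"] for name, c in channels.items()}

# The TOTAL bar is a ratio of sums, not an average of per-channel ratios.
total_roi = (sum(c["response"] for c in channels.values())
             / sum(c["spend"] for c in channels.values()))
```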
When it is generated
The runner generates this plot whenever the optimisation result includes a roi_cpa summary table.
How to interpret it
Channels where the optimised bar exceeds the current bar gain efficiency from the reallocation. Channels where the optimised bar is lower have had spend reduced — their marginal efficiency was below the portfolio average. The TOTAL bar shows the net portfolio improvement.
Warning signs
- Optimised ROI lower than current for most channels: The allocator redistributed spend towards higher-response channels, which may have lower per-unit efficiency but larger absolute contribution. This is not necessarily wrong — the allocator maximises total response, not per-channel ROI.
- TOTAL bar shows negligible improvement: The current allocation is already near-optimal, or the model’s response functions are too flat to support meaningful reallocation.
- Very large ROI values on low-spend channels: Small denominators inflate ROI. These channels may have high marginal returns at low spend but limited capacity to absorb budget.
Action
Do not interpret this plot in isolation. Cross-reference with the contribution comparison and the response curves to distinguish efficiency improvements from scale effects.
Related artefacts
budget_roi_cpa.csv in 60_optimisation/ contains the per-channel ROI/CPA values. budget_summary.csv in 60_optimisation/ provides the top-level allocation summary.
Allocation impact
Filename: budget_impact.png
What it shows
A horizontal diverging bar chart in two facets. The left facet shows spend reallocation (positive = increase, negative = decrease) per channel. The right facet shows the corresponding response impact. Bars are coloured green for increases and red for decreases. A TOTAL row at the bottom summarises the net change with muted styling.
Channels are sorted by response impact magnitude — the channels most affected by the reallocation appear at the top.
When it is generated
The runner generates this plot whenever the optimisation result includes a roi_cpa summary with delta_spend and delta_response columns.
How to interpret it
The spend facet shows where the allocator moves budget. The response facet shows the expected consequence. A useful pattern is a channel that receives a spend decrease (red bar, left) but shows a small response decrease (small red bar, right) — that channel was inefficient and the freed budget drives larger gains elsewhere.
Warning signs
- Large spend increase on a channel with modest response gain: Diminishing returns may be steep. Verify against the response curve.
- Response decreases that exceed response gains: The allocator expects a net negative outcome. This should not happen with a correctly specified max_response objective, and suggests a configuration or constraint issue.
Action
Use this chart to brief stakeholders on the “where and why” of reallocation. Pair it with the confidence comparison to communicate whether the expected gains are statistically distinguishable from zero.
Response contribution
Filename: budget_contribution.png
What it shows
A grouped bar chart comparing absolute expected response (contribution) by channel under the current and optimised allocations. Delta annotations above each pair show the change. A TOTAL bar with muted styling shows the portfolio-level gain. The subtitle reports the percentage total response gain from optimisation.
When it is generated
The runner generates this plot whenever the optimisation result includes mean_reference and mean_optimised columns in the roi_cpa summary.
How to interpret it
This chart answers the question: in absolute terms, how much more (or less) response does each channel deliver under the optimised allocation? Unlike the ROI chart, this view is not distorted by small denominators — it shows the quantity the allocator actually maximises.
Warning signs
- Negative delta on a channel with high current contribution: The allocator is pulling spend from a channel that currently contributes a great deal. This is rational if the marginal return on that channel is below the portfolio average, but it requires careful communication to stakeholders accustomed to interpreting total contribution as “importance”.
- TOTAL gain is small: The reallocation may not justify the operational cost of implementing it. Consider whether the confidence intervals overlap (see confidence comparison).
Action
Report the TOTAL percentage gain as the headline number. Caveat it with the credible interval width from the confidence comparison. If the gain is within posterior uncertainty, the recommendation is suggestive rather than conclusive.
Related artefacts
budget_allocation.csv in 60_optimisation/ contains the per-channel spend and response values.
Confidence comparison
Filename: budget_confidence_comparison.png
What it shows
A horizontal forest plot (dodge-positioned point-and-errorbar) showing the posterior mean response and 90% credible interval for each channel under the current (grey) and optimised (red) allocations. Where the two intervals overlap, the reallocation gain for that channel may not be statistically meaningful.
When it is generated
The runner generates this plot whenever the optimisation result includes response point data with mean, lower, and upper columns for both reference and optimised allocations.
How to interpret it
Focus on channels where the optimised interval (red) does not overlap with the current interval (grey). These are the channels where the reallocation produces a distinguishable change in expected response. Overlapping intervals mean the posterior cannot confidently distinguish the two allocations — the gain exists in expectation but falls within sampling uncertainty.
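Overlap can be checked directly from posterior draws; a minimal sketch with synthetic draws (the package derives its intervals from the model's posterior, not like this):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic posterior draws of one channel's response under each allocation.
current = rng.normal(100.0, 15.0, size=4000)
optimised = rng.normal(130.0, 15.0, size=4000)

def ci90(draws):
    """90% credible interval from posterior draws."""
    return np.quantile(draws, [0.05, 0.95])

cur_lo, cur_hi = ci90(current)
opt_lo, opt_hi = ci90(optimised)

# Intervals overlap iff each lower bound sits below the other's upper bound.
overlap = bool(cur_lo < opt_hi and opt_lo < cur_hi)

# A sharper check is the interval of the paired difference: if it excludes
# zero, the gain is distinguishable even when the marginal intervals overlap.
diff_lo, diff_hi = ci90(optimised - current)
```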
Warning signs
- All intervals overlap: The data is too uncertain to support a confident reallocation recommendation. The allocator’s point estimate suggests improvement, but the posterior cannot distinguish it from noise.
- One channel shows a clear gain while others overlap: The headline portfolio gain may be driven by a single channel. Verify that channel’s response curve and prior-posterior shift.
Action
Use this plot to calibrate the confidence of the recommendation. If intervals overlap for most channels, present the allocation as “directionally suggestive” rather than “statistically supported”. If key channels show clear separation, the recommendation is stronger.
Budget sensitivity
Filename: budget_sensitivity.png
What it shows
A spider chart (line plot) showing how total expected response changes when each channel’s spend is varied ±20% from its optimised level, while all other channels are held fixed. Steeper lines indicate channels whose budgets have the most influence on total response. A horizontal dashed line at zero marks the optimised baseline.
When it is generated
The runner generates this plot whenever the optimisation result includes response curve data. The ±20% range and 11 evaluation points per channel are defaults set in plot_budget_sensitivity().
How to interpret it
Channels with steep lines are the most sensitive: small deviations from their optimised spend produce large response changes. Flat lines indicate channels where modest budget deviations have little impact — the response function is either saturated (on the flat part of the curve) or nearly linear (constant marginal return).
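The sweep itself is simple: perturb one channel at a time and re-evaluate total response. A sketch with a toy response function (the real plot evaluates the model's own response curves inside plot_budget_sensitivity()):

```python
import numpy as np

def total_response(spends):
    """Toy portfolio response: sum of saturating per-channel curves."""
    return sum(100.0 * s / (50.0 + s) for s in spends)

optimised = [60.0, 40.0, 90.0]
baseline = total_response(optimised)

multipliers = np.linspace(0.8, 1.2, 11)  # ±20%, 11 points: the defaults
sensitivity = []
for i, base in enumerate(optimised):
    deltas = []
    for m in multipliers:
        perturbed = list(optimised)
        perturbed[i] = base * m  # vary one channel, hold the rest fixed
        deltas.append(total_response(perturbed) - baseline)
    sensitivity.append(deltas)
```

For a saturating curve the line is asymmetric: cutting spend loses more response than the same increase gains, which is exactly the "steep downward, flat upward" pattern flagged below.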
Warning signs
- A channel with an asymmetric slope (steep downward, flat upward): Cutting this channel’s spend is costly, but increasing it yields little. It is at or near its saturation point.
- All lines nearly flat: The optimisation surface is plateau-like. The allocator’s recommendation is robust to implementation imprecision, but also implies limited upside from optimisation.
- Lines that cross: Channels swap in relative importance at different budget perturbations. This complicates simple priority rankings.
Action
Use this chart to communicate implementation risk. If the recommended allocation is operationally difficult to achieve exactly, the sensitivity chart shows which channels require precise execution and which have margin for error.
Efficient frontier
Filename: budget_efficient_frontier.png
What it shows
A line-and-point chart of total optimised response as a function of total budget. Each point represents the optimal allocation at that budget level (expressed as a percentage of the current total budget). A red diamond marks the current budget level. The curve shows how much additional response is achievable by increasing the total budget — and the diminishing returns of doing so.
When it is generated
The runner generates this plot when budget_efficient_frontier() produces a budget_frontier object with at least two feasible points. This requires a valid optimisation result and a set of budget multipliers (configured in allocation.efficient_frontier).
How to interpret it
The frontier’s shape reveals the budget’s overall productivity. A concave curve (steepening, then flattening) is the classic diminishing-returns shape: each additional unit of budget buys less incremental response. The gap between the current point and the curve above it shows the unrealised potential at the same budget — the difference between the current allocation and the optimal one.
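Why the frontier is concave follows from diminishing returns. A sketch using a toy greedy allocator that spends each budget increment on the channel with the highest marginal response — the package's actual optimiser in budget_efficient_frontier() works differently; this only illustrates the shape:

```python
def response(s, half_sat):
    """Toy saturating channel response."""
    return 100.0 * s / (half_sat + s)

HALF_SATS = [30.0, 60.0, 90.0]  # channels saturate at different spend levels

def best_response(total_budget, step=0.5):
    """Greedy allocation: give each budget increment to the channel with
    the largest marginal gain. Near-optimal for concave response curves."""
    spends = [0.0] * len(HALF_SATS)
    for _ in range(round(total_budget / step)):
        gains = [response(s + step, h) - response(s, h)
                 for s, h in zip(spends, HALF_SATS)]
        spends[gains.index(max(gains))] += step
    return sum(response(s, h) for s, h in zip(spends, HALF_SATS))

# Frontier: optimal response at 60%..140% of a current budget of 100 units.
frontier = [(m, best_response(100.0 * m)) for m in (0.6, 0.8, 1.0, 1.2, 1.4)]
```

Each successive budget step buys a smaller response gain, which is the flattening the frontier plot shows.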
Warning signs
- Frontier is nearly linear: Returns are approximately constant across the budget range. The model may not have enough data to identify saturation, or the budget range is too narrow to reveal it.
- Frontier flattens early: The portfolio saturates at a budget well below the current level. The current spend may be wastefully high.
- Only 2–3 feasible points: The optimiser could not find feasible allocations at most budget levels. Constraints may be too tight.
Action
Use the frontier to frame budget conversations. The curve shows what is achievable at each budget level. If a stakeholder proposes a budget cut, the frontier quantifies the response cost. If they propose an increase, it quantifies the expected gain. Present the frontier alongside the spend share comparison to show how the allocation shifts at each level.
Related artefacts
budget_efficient_frontier.csv in 60_optimisation/ contains the frontier data.
KPI waterfall
Filename: budget_kpi_waterfall.png
What it shows
A horizontal waterfall bar chart decomposing the predicted KPI into its constituent components: base (intercept), trend, seasonality, holidays, controls, and individual media channels. Each bar shows the mean posterior coefficient multiplied by the mean predictor value — the average contribution of that component to the predicted KPI. A red TOTAL bar anchors the sum.
When it is generated
The runner generates this plot when build_kpi_waterfall_data() can extract posterior coefficients and match them to predictor means in the original data. This requires that the model’s .formula and .original_data are both accessible. For hierarchical models with random-effects syntax, the waterfall may fail gracefully and be skipped.
How to interpret it
The waterfall answers: “of the total predicted KPI, how much comes from each source?” The base (intercept) typically dominates, representing baseline demand independent of media and controls. Media channels sit at the bottom, showing their individual incremental contributions. The relative sizes of the media bars correspond to the decomposition impact chart (decomp_predictor_impact.png), but computed slightly differently (mean × mean vs sum over time).
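The per-bar arithmetic is just coefficient × predictor mean. A sketch with made-up numbers (build_kpi_waterfall_data() extracts the real coefficients and means from the fitted model):

```python
# Illustrative posterior-mean coefficients and predictor means.
coef = {"base": 200.0, "trend": 0.5, "tv": 3.0, "search": 2.0, "price": -8.0}
mean = {"base": 1.0, "trend": 26.0, "tv": 12.0, "search": 15.0, "price": 1.5}

# Each waterfall bar: mean posterior coefficient x mean predictor value.
bars = {name: coef[name] * mean[name] for name in coef}

# The TOTAL bar anchors the sum of all component bars.
total = sum(bars.values())
```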
Warning signs
- Negative media contributions: A channel with a negative bar reduces predicted KPI. Unless the coefficient is intentionally unconstrained, this suggests a fitting or identification problem.
- Intercept dwarfs all other terms: The model attributes nearly all KPI to baseline demand. Media effects are marginal. This may be realistic for low-spend brands but limits the value of budget optimisation.
- Missing plot (skipped with warning): The model type does not support direct waterfall decomposition.
Action
Use the waterfall to contextualise media contributions within the total predicted KPI. For stakeholder reporting, it provides a clear answer to “what drives our KPI?” — while emphasising that media is one factor among several.
Related artefacts
budget_kpi_waterfall.csv in 60_optimisation/ contains the waterfall data.
Marginal ROI curves
Filename: budget_marginal_roi.png
What it shows
Faceted line charts of marginal ROI (or marginal response, if no currency conversion is configured) as a function of spend for each channel. The marginal value is computed as the first difference of the response curve: the additional response per additional unit of spend. Current and optimised points are marked.
When it is generated
The runner generates this plot whenever the optimisation result includes response curve data with at least two points per channel.
How to interpret it
The marginal ROI curve is the derivative of the response curve. At the optimised allocation, the allocator equalises marginal ROI across channels (subject to constraints). If one channel’s marginal ROI at the optimised point is substantially higher than another’s, a constraint (spend floor or ceiling) is preventing further reallocation.
Diminishing returns appear as a downward-sloping marginal curve: each additional unit of spend yields less incremental response than the last. Channels with steeper slopes saturate faster.
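The first-difference computation is straightforward; a sketch on a toy curve (the plot differentiates the model's actual response curves, not this one):

```python
import numpy as np

spend = np.linspace(0.0, 200.0, 101)
response = 100.0 * spend / (50.0 + spend)  # toy saturating response curve

# Marginal response: first difference of the curve per unit of spend.
# With a currency conversion this becomes marginal ROI.
marginal = np.diff(response) / np.diff(spend)
```

Diminishing returns appear here as a strictly decreasing marginal array; an upward-sloping stretch would be the "increasing returns" warning sign above.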
Warning signs
- Marginal ROI near zero at the optimised point: The channel is at or near saturation. Additional spend yields negligible incremental response.
- Marginal ROI that increases with spend: This implies increasing returns, which is unusual for media. It may indicate a response curve misspecification or insufficient data in the high-spend region.
- Large differences in marginal ROI at the optimised points across channels: Constraints are binding. The allocator cannot equalise marginal returns because spend bounds prevent it.
Action
Use marginal ROI to identify which channels have headroom (high marginal ROI at the optimised point) and which are saturated (marginal ROI near zero). This informs not just the current allocation but also the value of relaxing spend constraints.
Spend share comparison
Filename: budget_spend_share.png
What it shows
Two horizontal stacked bars showing the percentage allocation of total budget across channels: one for the current allocation and one for the optimised allocation. Percentage labels appear within each segment (for segments ≥ 4% of total). The subtitle reports the total budget in currency or model units for both allocations.
When it is generated
The runner generates this plot whenever the optimisation result includes a roi_cpa summary with spend_reference and spend_optimised columns.
How to interpret it
This is the most intuitive optimisation output for non-technical stakeholders. It answers: “how should we split the budget?” Segments that grow from current to optimised represent channels the allocator recommends investing more in; segments that shrink represent channels to reduce.
Warning signs
- A channel disappears (0% share) in the optimised allocation: The allocator has hit the channel’s spend floor (which may be zero). If this is unintended, raise the minimum spend constraint.
- Allocations are nearly identical: The current mix is already near-optimal, or the model cannot distinguish channel effects well enough to justify reallocation.
- Very small segments in both allocations: Channels with negligible spend share contribute little to the optimisation. Consider whether they should be included or grouped.
Action
Present this chart as the primary recommendation visual. Accompany it with the confidence comparison to communicate the certainty of the recommendation and the allocation impact chart to show the expected consequence.
Cross-references
- Post-run plots — decomposition that informs the optimisation inputs
- Model selection plots — LOO diagnostics that validate the model underlying these recommendations
- Diagnostics plots — residual checks on the fitted model
- Runner output artefacts — complete artefact inventory
- Plot index









