<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Plots — DSAMbayes Documentation</title>
    <link>//localhost:1313/plots/index.html</link>
    <description>Purpose: This section documents every plot the DSAMbayes runner produces. Each page covers one pipeline stage, describes what the plot shows, explains when and why the runner generates it, and gives practical interpretation guidance. The target reader is a modelling operator or analyst who needs to assess run quality without reading source code.&#xA;Pipeline stages: The runner writes artefacts into timestamped directories under results/. Plots are organised into six stages, each with its own subdirectory:</description>
    <generator>Hugo</generator>
    <language>en-gb</language>
    <atom:link href="//localhost:1313/plots/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Pre-run Plots</title>
      <link>//localhost:1313/plots/pre-run-plots/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//localhost:1313/plots/pre-run-plots/index.html</guid>
      <description>Purpose: Pre-run plots are generated before the model is fitted. They visualise the input data and flag structural problems, such as multicollinearity, missing spend periods, and implausible KPI–media relationships, that could compromise inference. Treat these as a data quality gate: review them before interpreting any downstream output.&#xA;All pre-run plots are written to 10_pre_run/ within the run directory. They require ggplot2 and are generated by write_pre_run_plots() in R/run_artifacts_enrichment.R. The runner produces them whenever an allocation.channels block is present in the configuration and the data contains the referenced spend columns.</description>
    </item>
    <item>
      <title>Model Fit Plots</title>
      <link>//localhost:1313/plots/model-fit-plots/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//localhost:1313/plots/model-fit-plots/index.html</guid>
      <description>Purpose: Model fit plots summarise the posterior and compare fitted values against observed data. They answer two questions: does the model track the response variable adequately, and are the estimated coefficients plausible? These plots are written to 20_model_fit/ within the run directory.&#xA;The runner generates them via write_model_fit_plots() in R/run_artifacts_enrichment.R. All four plots require ggplot2 and the fitted model object. Each is wrapped in tryCatch so that a failure in one does not prevent the others from being written.</description>
    </item>
    <item>
      <title>Post-run Plots</title>
      <link>//localhost:1313/plots/post-run-plots/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//localhost:1313/plots/post-run-plots/index.html</guid>
      <description>Purpose: Post-run plots decompose the fitted response into its constituent parts. They answer the question: how much does each predictor contribute to the modelled KPI, and how do those contributions evolve over time? These plots are written to 30_post_run/ within the run directory.&#xA;The runner generates them via write_response_decomposition_artifacts() in R/run_artifacts_enrichment.R, which calls runner_response_decomposition_tables() to compute per-term contributions from the design matrix and posterior coefficient estimates. For hierarchical models with random-effects formula syntax (|), the decomposition may fail gracefully: the runner logs a warning and continues to downstream stages.</description>
    </item>
    <item>
      <title>Diagnostics Plots</title>
      <link>//localhost:1313/plots/diagnostics-plots/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//localhost:1313/plots/diagnostics-plots/index.html</guid>
      <description>Purpose: Diagnostics plots assess whether the fitted model’s assumptions hold and whether any structural problems warrant remedial action. They cover residual behaviour, posterior predictive adequacy, and boundary constraint monitoring. These plots are written to 40_diagnostics/ within the run directory.&#xA;The runner generates residual plots via write_residual_diagnostics() in R/run_artifacts_diagnostics.R, the PPC plot via write_model_fit_plots() in R/run_artifacts_enrichment.R, and the boundary hits plot via write_boundary_diagnostics() in R/run_artifacts_diagnostics.R. Each plot is wrapped in tryCatch so that individual failures do not block the remaining outputs.</description>
    </item>
    <item>
      <title>Model Selection Plots</title>
      <link>//localhost:1313/plots/model-selection-plots/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//localhost:1313/plots/model-selection-plots/index.html</guid>
      <description>Purpose: Model selection plots provide leave-one-out cross-validation (LOO-CV) diagnostics that assess predictive adequacy and calibration. They help answer: does the model generalise to unseen observations, and are any individual data points unduly influencing the fit? These plots are written to 50_model_selection/ within the run directory.&#xA;The runner generates them via write_model_selection_artifacts() in R/run_artifacts_diagnostics.R. LOO-CV is computed using Pareto-smoothed importance sampling (PSIS-LOO) from the loo package, which approximates exact leave-one-out predictive densities from a single MCMC fit. All three plots depend on the pointwise LOO table (loo_pointwise.csv), which contains per-observation ELPD contributions, Pareto-k diagnostics, and influence flags.</description>
    </item>
    <item>
      <title>Optimisation Plots</title>
      <link>//localhost:1313/plots/optimisation-plots/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//localhost:1313/plots/optimisation-plots/index.html</guid>
      <description>Purpose: Optimisation plots visualise the outputs of the budget allocator. They translate model estimates into actionable budget decisions by showing response curves, efficiency comparisons, and the sensitivity of recommendations to budget changes. These are decision-layer artefacts: they sit downstream of all modelling and diagnostics, and their quality depends entirely on the credibility of the upstream fit.</description>
    </item>
  </channel>
</rss>