Overview
A single simulation run tells you what happens with one set of inputs. An experiment tells you what happens across many. Experiments in ProDex let you compare multiple scenarios side by side — different model configurations, different schedules, different parameters — and see how your KPIs change across them. Combined with Monte Carlo simulation, experiments also answer a deeper question: not just what your expected performance is, but how confident you should be in that number.
What Is Monte Carlo Simulation?
In any real factory, there’s variability. Processing times fluctuate. Machines go down unexpectedly. Demand shifts. A single simulation run uses one random draw from every distribution in your model — it’s one possible version of reality.

Monte Carlo simulation runs the same model dozens or hundreds of times, each with a different random seed. Every run produces slightly different results because the stochastic elements (processing time distributions, arrival patterns, failure rates) play out differently each time.

The result is a distribution of outcomes rather than a single number. Instead of “throughput is 847 units/day,” you get “throughput is 847 units/day on average, with a standard deviation of 23, and a 95th percentile of 891.” That’s the difference between a point estimate and a decision you can actually trust.
Setting Up an Experiment
An experiment lives inside a model. To create one, navigate to the What-If section in the sidebar.
Defining Cases
Each experiment is built from cases. A case is a specific combination of:
- Checkpoint — a saved version of your model (its parameters, structure, and configuration)
- Schedule (optional) — a production plan or arrival schedule to use for the run
| Case | Checkpoint | Schedule |
|---|---|---|
| Baseline | Current model | Standard demand |
| Extra capacity | +1 CNC machine | Standard demand |
| Peak season | Current model | Holiday demand |
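Conceptually, each row of this table is just a named (checkpoint, schedule) pair. As a minimal sketch (the class and field names below are illustrative, not ProDex's actual API), the cases might be represented like this:

```python
# Hypothetical representation of experiment cases: each case pairs a model
# checkpoint with an optional schedule. Names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    name: str
    checkpoint: str
    schedule: Optional[str] = None  # None means no schedule attached

cases = [
    Case("Baseline", checkpoint="current-model", schedule="standard-demand"),
    Case("Extra capacity", checkpoint="plus-one-cnc", schedule="standard-demand"),
    Case("Peak season", checkpoint="current-model", schedule="holiday-demand"),
]

for case in cases:
    print(f"{case.name}: checkpoint={case.checkpoint}, schedule={case.schedule}")
```

Note that the same checkpoint can appear in several cases (Baseline and Peak season above), varied only by schedule.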
Defining KPIs
Experiment KPIs define what you’re comparing across cases. These are separate from your model’s run-level KPIs — they’re designed for cross-case comparison. Each experiment KPI specifies:
- A query that computes a value from the simulation’s output data
- A display name, unit, and format (number, percentage, time, currency)
- An optional color for chart display
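As a sketch of what such a definition carries, an experiment KPI can be thought of as a small record (the field names and the query string below are hypothetical, not ProDex's actual schema):

```python
# Hypothetical shape of an experiment KPI definition; field names are
# assumptions for illustration, not ProDex's real configuration format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExperimentKPI:
    query: str                    # computes a value from simulation output data
    display_name: str
    unit: str
    format: str                   # "number", "percentage", "time", or "currency"
    color: Optional[str] = None   # optional color for chart display

throughput = ExperimentKPI(
    query="count(completed_units) / days",  # illustrative query, not real syntax
    display_name="Throughput",
    unit="units/day",
    format="number",
    color="#1f77b4",
)
print(throughput.display_name, throughput.unit, throughput.format)
```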
Defining Charts
Experiment charts visualize differences across cases. Common chart types for experiments include comparison bar charts (one bar per case), box plots (showing statistical spread), and multi-series histograms (showing overlapping distributions from Monte Carlo runs).
Running an Experiment
Once your cases are defined, click Run. ProDex queues a simulation for each enabled case and executes them in parallel. You’ll see real-time progress as each case completes.

For Monte Carlo runs, you specify the number of seeds (iterations) per case — anywhere from 1 to 1,024. More seeds produce more statistically reliable results but take longer to run. For most models, 50–100 seeds give a good balance of speed and confidence. Each seed runs the same model with a different random number sequence, producing a unique set of results. ProDex aggregates these automatically.
Analyzing Experiment Results
Single-Case Results
When viewing results for a single Monte Carlo case, you see:
- Statistical summaries for each KPI — mean, standard deviation, median, and percentiles (25th, 75th, 90th, 95th, 99th)
- Histograms showing the distribution of each KPI across all seeds
- Box plots visualizing the spread (min, quartiles, max)
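The aggregation behind these summaries can be illustrated in a few lines. The sketch below runs a toy stochastic stand-in for a model across 100 seeds and computes the same statistics (mean, standard deviation, median, percentiles); the model function is an assumption for illustration, not ProDex code:

```python
import random
import statistics

def simulate_throughput(seed: int) -> float:
    """Toy stand-in for one simulation run: stochastic daily throughput."""
    rng = random.Random(seed)
    return 847 + rng.gauss(0, 23)  # illustrative: mean 847 units/day, std dev 23

# One result per seed, as in a Monte Carlo case with 100 iterations.
results = sorted(simulate_throughput(seed) for seed in range(100))

def percentile(sorted_values: list[float], p: float) -> float:
    """Nearest-rank percentile of an already-sorted list."""
    k = max(0, min(len(sorted_values) - 1, round(p / 100 * len(sorted_values)) - 1))
    return sorted_values[k]

print(f"mean   = {statistics.mean(results):.1f}")
print(f"stdev  = {statistics.stdev(results):.1f}")
print(f"median = {statistics.median(results):.1f}")
for p in (25, 75, 90, 95, 99):
    print(f"p{p:<2}    = {percentile(results, p):.1f}")
```

Because each seed is deterministic, rerunning the same case with the same seed count reproduces the same distribution.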
Cross-Case Comparison
When comparing multiple cases, results are displayed side by side:
- KPI comparison grid — each KPI shown for every case, so you can directly compare performance
- Multi-series histograms — overlapping distributions on shared axes, making it visually clear where cases overlap and where they diverge
- Statistical tables — full summary statistics for each case’s KPI values
This comparison view helps answer questions like:
- Does adding a machine actually improve throughput, or does the bottleneck just move?
- How much does the peak season schedule stress our resources compared to baseline?
- Which scenario has the tightest distribution (most predictable performance)?
Comparing Scenarios
The What-If section is where experiment results live. You can:
- Select individual cases to compare
- View KPI differences as absolute values or percentages
- Drill into any case’s detailed run results
- Re-run experiments with modified cases or additional seeds
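The absolute-vs-percentage view is simple arithmetic over per-case KPI values. A minimal sketch, using made-up throughput numbers for two cases (not real ProDex output):

```python
# Illustrative KPI values for two cases (units/day); numbers are made up.
baseline = {"Throughput": 847.0}
extra_capacity = {"Throughput": 891.0}

for kpi, base in baseline.items():
    value = extra_capacity[kpi]
    absolute = value - base            # difference as an absolute value
    percent = 100 * absolute / base    # difference as a percentage of baseline
    print(f"{kpi}: {absolute:+.1f} units/day ({percent:+.2f}%)")
```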
When to Use Experiments
| Scenario | Approach |
|---|---|
| “What does my model produce?” | Single run |
| “How reliable is that number?” | Monte Carlo (single case, 50+ seeds) |
| “Which option is better?” | Experiment (multiple cases, single run each) |
| “Which option is better, accounting for variability?” | Experiment + Monte Carlo (multiple cases, 50+ seeds each) |

