Overview

The Canvas is where you build and run pipelines — graph-based tools for extracting simulation model parameters from your operational data. Instead of manually translating process data, time studies, or historical records into model inputs, you build a pipeline that Dexter, the ProDex agent, runs to derive those values automatically — with full traceability back to the source. The output of a pipeline is a set of derivations: named parameter values, each linked to the source file it came from, the evidence supporting it, and the confidence level assigned by the extraction process. These derivations become the inputs you use to configure your simulation model.

When to Use Pipelines

Pipelines are most useful when you have structured or semi-structured data that needs to be translated into simulation parameters:
  • Time study spreadsheets with cycle time measurements
  • Historical production records with throughput and utilization data
  • Process documentation that describes operation sequences and durations
  • Equipment specifications with capacity and availability figures
Rather than reading through these files manually and entering values by hand, a pipeline automates the extraction and lets you review, correct, and trace the results.
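As a rough illustration of what a pipeline automates, here is what the manual version of one extraction might look like: deriving a mean cycle-time parameter from a time study. The column names and data are hypothetical, not a format the product prescribes.

```python
import csv
import io
import statistics

# Hypothetical time-study data; in practice this would come from a file.
raw = io.StringIO(
    "station,cycle_time_s\n"
    "assembly,41.2\n"
    "assembly,39.8\n"
    "assembly,43.5\n"
)

# Collect the cycle-time measurements for one station.
times = [
    float(row["cycle_time_s"])
    for row in csv.DictReader(raw)
    if row["station"] == "assembly"
]

# Derive a single simulation parameter from the measurements.
mean_cycle_time = statistics.mean(times)
print(round(mean_cycle_time, 2))  # -> 41.5
```

A pipeline performs this kind of step for you, and additionally records where the value came from and how confident the extraction was.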

Pipeline Structure

A pipeline is a directed graph with three node types:

Source
The entry point. A source node references an input file or a connected data source and provides instructions to Dexter on how to read it — what the data contains, what columns are relevant, and what context is needed to interpret it.

Transformation
An intermediate processing step. Transformation nodes apply operations to the data flowing through — filtering, aggregating, normalizing units, or combining data from multiple sources. Each transformation node includes instructions and optionally a code snippet showing how the transformation was performed.

Output
The exit point. Output nodes collect the final derived values as derivations. Each derivation records:
  • The parameter name and the model location it targets
  • The extracted value
  • The source file it came from
  • The query or extraction logic used
  • Supporting evidence (relevant excerpts, data points)
  • Assumptions made during extraction
  • A confidence level: high, medium, or low
  • Whether it was automated, re-derived, or manually overridden
Connect nodes with edges to define the data flow: Source → Transformation → Output is the typical pattern, but pipelines can have multiple sources feeding into a shared transformation or multiple outputs capturing different parameter sets.
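One way to picture the structure described above is as plain data. This is an illustrative sketch only; the class names, field names, and values are hypothetical, not the product's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str           # "source", "transformation", or "output"
    instructions: str   # guidance for Dexter on how to handle this step

@dataclass
class Derivation:
    parameter: str      # parameter name and the model location it targets
    value: float
    source_file: str
    evidence: str       # supporting excerpts or data points
    assumptions: str
    confidence: str     # "high", "medium", or "low"
    provenance: str     # "automated", "re-derived", or "overridden"

@dataclass
class Pipeline:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[tuple[str, str]] = field(default_factory=list)  # (from_id, to_id)

# The typical Source -> Transformation -> Output pattern:
p = Pipeline()
p.nodes["src"] = Node("src", "source", "Read the time-study sheet; cycle times are in seconds.")
p.nodes["agg"] = Node("agg", "transformation", "Average cycle times per station.")
p.nodes["out"] = Node("out", "output", "Emit one derivation per station.")
p.edges = [("src", "agg"), ("agg", "out")]
```

Multiple sources feeding one transformation, or one transformation feeding several outputs, are just additional entries in the edge list.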

Running a Pipeline

Pipelines are executed by Dexter. Open a conversation and ask Dexter to run a specific pipeline. Dexter processes each node in order, reads the source files, applies transformations, and populates the output nodes with derivations. Each run is recorded and stored as a pipeline run — see Runs for how to review execution history.
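"Processes each node in order" means each node runs only after all of its upstream inputs are available. A minimal sketch of that ordering, using a hypothetical edge list and Python's standard-library topological sorter:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline edges: (upstream, downstream).
edges = [("src", "transform"), ("transform", "out")]

# Build a predecessor map: node -> set of nodes it depends on.
deps = {}
for upstream, downstream in edges:
    deps.setdefault(downstream, set()).add(upstream)
    deps.setdefault(upstream, set())

# Derive an execution order that respects the data flow.
order = list(TopologicalSorter(deps).static_order())
print(order)  # -> ['src', 'transform', 'out']
```

How Dexter actually schedules nodes is internal to the product; the point is only that the edges you draw determine the order of processing.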

Reviewing Derivations

After a run completes, open the pipeline to see the derivation results in the output nodes. For each derived parameter:
  • Review the value and the evidence provided
  • Check the confidence level — low-confidence derivations warrant manual verification
  • If a value is incorrect, you can override it manually. The original automated value is preserved alongside the correction, together with the reason for the change
Once you’re satisfied with the derivations, use them to configure your simulation model — either manually or by asking Dexter to apply them.
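The override behavior described above amounts to keeping both values side by side rather than discarding the automated one. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Derivation:
    parameter: str
    value: float
    provenance: str = "automated"
    original_value: Optional[float] = None
    override_reason: Optional[str] = None

    def override(self, new_value: float, reason: str) -> None:
        """Replace the value manually, preserving the automated original."""
        if self.original_value is None:
            self.original_value = self.value
        self.value = new_value
        self.provenance = "overridden"
        self.override_reason = reason

d = Derivation("assembly.cycle_time", 41.5)
d.override(40.0, "Stopwatch data included a known outlier run.")
print(d.value, d.original_value, d.provenance)  # -> 40.0 41.5 overridden
```

Keeping the original value and the reason makes the correction auditable on later review, which is the traceability guarantee the derivation record provides.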

Workflow Integration

Pipelines can be linked to a workflow — a set of instructions Dexter follows when running the pipeline. Workflows let you encode your organization’s specific extraction standards: which columns to prioritize, how to handle missing data, what unit conversions to apply, and how to document assumptions. Custom workflows are created in Dexter and referenced by the pipeline’s workflow path. This ensures consistent extraction behavior across multiple runs and team members.
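To make the idea concrete, a workflow can be thought of as structured instructions covering exactly the standards listed above. This example is entirely hypothetical — the real workflow format and its fields are whatever your Dexter setup defines:

```python
# Hypothetical workflow definition; none of these keys are the
# product's real schema, only the kinds of standards a workflow encodes.
workflow = {
    "name": "time-study-extraction",
    "column_priority": ["cycle_time_s", "cycle_time_min"],  # prefer seconds column
    "missing_data": "drop-row",        # how to handle gaps in the source
    "unit_conversions": {"min": 60},   # multiply minutes by 60 to get seconds
    "assumption_logging": "required",  # every assumption must be documented
}
print(workflow["name"])  # -> time-study-extraction
```

Because the pipeline references the workflow rather than embedding these rules in each node, every run and every team member applies the same standards.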