# CLI Reference
AortaCFD provides two command-line entry points: `run_patient.py` for single-case execution, and `run_batch.py` for parallel multi-case workflows.
## `run_patient.py` -- Single Case Runner
```text
usage: run_patient.py [-h] [--list] [--list-steps] [--steps STEPS]
                      [--step STEP] [--config PATH] [--profile NAME]
                      [--quick] [--run-name NAME] [--update CASE_PATH]
                      [--postprocess RUN_DIR] [--verbose]
                      [patient_id]
```
### Positional Arguments

| Argument | Description |
|---|---|
| `patient_id` | Case identifier matching a directory under `cases_input/` |
### Information Flags

| Flag | Description |
|---|---|
| `--list`, `-l` | List all available cases in `cases_input/` |
| `--list-steps` | Display workflow steps and their dependencies |
### Workflow Control

| Flag | Description |
|---|---|
| `--steps STEPS`, `-s` | Comma-separated list of steps to execute (e.g. `case,mesh,boundary`) |
| `--step STEP` | Execute a single step (repeatable; e.g. `--step case --step mesh`) |

Available step names: `case`, `mesh`, `boundary`, `regenerate-numerics`, `solver`, `reconstruct`, `postprocess`, `paraview`, and `all`.
### Configuration

| Flag | Description |
|---|---|
| `--config PATH`, `-c` | Path to configuration JSON (default: `cases_input/<case_id>/config.json`) |
| `--profile NAME` | Override the numerics profile (`robust`, `standard`, or `precise`) |
| `--quick` | Enable fast test mode: coarse mesh, first-order numerics, serial execution |
### Output and Update

| Flag | Description |
|---|---|
| `--run-name NAME`, `-n` | Custom output directory name (default: `run_YYYYMMDD_HHMMSS`) |
| `--update CASE_PATH`, `-u` | Update an existing run, preserving the mesh (default steps: `case,boundary`) |
| `--postprocess RUN_DIR`, `-p` | Re-run hemodynamic post-processing on a completed simulation |
### Other

| Flag | Description |
|---|---|
| `--verbose`, `-v` | Display full log output instead of summary mode |
### Examples

```shell
# Complete workflow for case BPM120
python run_patient.py BPM120

# Custom output folder name
python run_patient.py BPM120 --run-name baseline_standard

# Use an alternative configuration file
python run_patient.py BPM120 --config cases_input/BPM120/config_mesh_fine.json

# Execute only selected workflow steps
python run_patient.py BPM120 --steps case,mesh,boundary
python run_patient.py BPM120 --steps solver

# Override the numerics profile
python run_patient.py BPM120 --profile robust

# Fast test mode (coarse mesh, serial, first-order)
python run_patient.py BPM120 --quick

# Update existing run: preserve mesh, regenerate boundary conditions
python run_patient.py BPM120 --update output/BPM120/run_20250301_120000
python run_patient.py BPM120 --update output/BPM120/run_20250301_120000 --steps boundary,solver

# Standalone post-processing on a completed simulation
python run_patient.py --postprocess output/BPM120/run_20250301_120000
```
## `run_batch.py` -- Batch and Parallel Runner
`run_batch.py` executes multiple cases in parallel using Python multiprocessing. It also supports SLURM job-array generation for HPC clusters.
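The local execution model can be sketched with `multiprocessing.Pool`. This is an illustration only, not the actual `run_batch.py` implementation; `run_case` and the case IDs below are hypothetical stand-ins:

```python
# Minimal sketch of parallel case dispatch (illustrative only).
import subprocess
import sys
from multiprocessing import Pool, cpu_count

def run_case(case_id):
    """Run one case through run_patient.py; return (case_id, exit code)."""
    proc = subprocess.run([sys.executable, "run_patient.py", case_id])
    return case_id, proc.returncode

cases = ["PAT002", "PAT003", "BPM120"]   # hypothetical case IDs
workers = min(len(cases), cpu_count())   # mirrors the --workers default

if __name__ == "__main__":
    with Pool(workers) as pool:
        # imap_unordered reports each case as soon as it finishes
        for case_id, code in pool.imap_unordered(run_case, cases):
            print(f"{case_id}: {'ok' if code == 0 else 'failed'}")
```

Because each case is an independent subprocess, one failed case does not abort the rest of the batch.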
```text
usage: run_batch.py [--cases ID [ID ...]] [--steps STEPS] [--workers N]
                    [--config-list CASE:CONFIG ...] [--slurm]
                    [--partition NAME] [--time-limit HH:MM:SS]
                    [--cpus-per-task N] [--mem-per-cpu SIZE] [--dry-run]
```
### Options

| Flag | Description |
|---|---|
| `--cases ID [ID ...]` | Specific case IDs to run (default: discover all under `cases_input/`) |
| `--steps STEPS` | Comma-separated workflow steps (default: `all`) |
| `--workers N`, `-w` | Number of parallel workers (default: `min(cases, CPU count)`) |
| `--config-list CASE:CONFIG ...` | Run the same case with different configurations (for convergence studies) |
| `--slurm` | Generate a SLURM job-array submission script instead of running locally |
| `--partition NAME` | SLURM partition (default: `batch`) |
| `--time-limit HH:MM:SS` | SLURM wall-clock limit (default: `24:00:00`) |
| `--cpus-per-task N` | CPUs per SLURM task (default: 8) |
| `--mem-per-cpu SIZE` | Memory per CPU for SLURM (default: `4G`) |
| `--dry-run` | List cases and configurations without executing |
### Examples

```shell
# Run all discovered cases with automatic worker count
python run_batch.py

# Specific cases with limited parallelism
python run_batch.py --cases PAT002 PAT003 --workers 2

# Mesh convergence study (same patient, different configs)
python run_batch.py \
    --config-list PAT002:config_mesh10.json PAT002:config_mesh12.json PAT002:config_mesh14.json \
    --workers 2

# Generate a SLURM job-array script for HPC submission
python run_batch.py --slurm --partition gpu --time-limit 24:00:00 --cpus-per-task 16

# Dry run: list what would be executed
python run_batch.py --cases PAT002 BPM120 --dry-run
```
### Cohort Comparison
After a batch run completes, QoIs from all cases are aggregated into a single comparison CSV. It contains the key hemodynamic metrics (TAWSS percentiles, OSI, pressure drop) for each case and configuration, suitable for cohort-level analysis.
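The comparison file can be inspected with nothing beyond the standard library. The column names below (`case_id`, `tawss_p95`, `osi_mean`) are assumptions for illustration; check the actual header of your file:

```python
# Sketch: index a cohort comparison CSV by case for quick lookups.
# Column names here are hypothetical examples, not guaranteed by AortaCFD.
import csv

def summarise(path):
    """Return {case_id: row dict} from the comparison CSV."""
    with open(path, newline="") as fh:
        return {row["case_id"]: row for row in csv.DictReader(fh)}

# Example usage once a batch run has produced the file:
# rows = summarise("path/to/comparison.csv")
# for case, row in rows.items():
#     print(case, row["tawss_p95"], row["osi_mean"])
```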
## Workflow Steps
The eight workflow steps can be combined freely via the `--steps` flag. The default is `all`, which executes them in order.
| Step | Internal Task | Description |
|---|---|---|
| `case` | `setup:dict` | Create case structure, scale geometry, write OpenFOAM dictionaries |
| `mesh` | `run:mesh` | Execute `blockMesh`, `surfaceFeatures`, `snappyHexMesh`, `checkMesh` |
| `boundary` | `setup:bc` | Generate inlet flow data, configure outlet BCs, write `0/` fields |
| `regenerate-numerics` | `setup:regenerate-numerics` | Adapt `fvSchemes`/`fvSolution` to mesh quality |
| `solver` | `run:solver` | Execute `foamRun` with parallel decomposition if configured |
| `reconstruct` | `run:reconstruct` | Reconstruct parallel case from `processor*/` directories |
| `postprocess` | `run:hemodynamics` | Compute TAWSS, OSI, RRT, pressure drop; export QoIs |
| `paraview` | `execute_post` | Generate ParaView visualisation outputs |
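As an illustration of how a `--steps` value might be expanded and validated, here is a minimal sketch. The canonical order follows the table above; AortaCFD's own parsing and dependency handling may differ:

```python
# Sketch: expand a --steps argument into an ordered, validated step list.
STEP_ORDER = ["case", "mesh", "boundary", "regenerate-numerics",
              "solver", "reconstruct", "postprocess", "paraview"]

def parse_steps(arg):
    """'mesh,case' -> ['case', 'mesh']; 'all' -> every step in order."""
    if arg == "all":
        return list(STEP_ORDER)
    requested = {s.strip() for s in arg.split(",")}
    unknown = requested - set(STEP_ORDER)
    if unknown:
        raise ValueError(f"unknown step(s): {sorted(unknown)}")
    # Re-emit in canonical order regardless of how the user listed them.
    return [s for s in STEP_ORDER if s in requested]
```

Re-sorting into canonical order means `--steps mesh,case` and `--steps case,mesh` behave identically.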
### Common Step Combinations

```shell
# Mesh only (for mesh quality iteration)
python run_patient.py BPM120 --steps case,mesh

# Re-run solver with different BCs (preserving mesh)
python run_patient.py BPM120 --update output/BPM120/run_xxx --steps boundary,solver

# Post-process only (after simulation completes)
python run_patient.py --postprocess output/BPM120/run_xxx
```