This guide provides step-by-step instructions to set up, configure, and run the DashPVA application.
You can install DashPVA dependencies using either Conda (recommended for full compatibility) or UV (faster installation). Choose the method that best fits your needs.
UV is an extremely fast Python package installer and resolver written in Rust. It provides much faster dependency resolution and installation compared to traditional pip.
Prerequisites:
- Python 3.11 installed on your system
Installation Steps:
1. Install UV (if not already installed):
Linux/macOS:
curl -LsSf https://astral.sh/uv/install.sh | sh
After installation, add UV to your PATH:
# For bash/zsh (Linux/macOS)
source $HOME/.local/bin/env
# Or add permanently to ~/.bashrc or ~/.zshrc:
export PATH="$HOME/.local/bin:$PATH"
Windows (PowerShell):
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
After installation, restart your terminal or add to PATH:
$env:PATH += ";$env:USERPROFILE\.cargo\bin"
Alternative (all platforms):
pip install uv
2. Install dependencies (UV will automatically create a virtual environment):
uv sync
This single command will:
- Create a virtual environment (.venv/) automatically
- Install all dependencies from pyproject.toml
- Use locked versions from uv.lock for reproducible installs
3. Activate the environment and run the application:
Option A: Activate the virtual environment (traditional way):
# Linux/macOS
source .venv/bin/activate
# Windows (PowerShell)
.venv\Scripts\Activate.ps1
# Windows (Command Prompt)
.venv\Scripts\activate.bat
Then run your commands normally:
python dashpva.py setup
Option B: Use UV to run commands directly (no activation needed):
uv run python dashpva.py setup
uv run python dashpva.py detector
Note: All dependencies including pvapy (required for PVAccess) are automatically installed via uv sync. No conda installation is needed!
Quick Start Summary:
# 1. Install UV (one-time setup)
curl -LsSf https://astral.sh/uv/install.sh | sh # Linux/macOS
# OR: pip install uv # All platforms
# 2. Install all dependencies (creates .venv automatically)
uv sync
# 3. Activate and run
source .venv/bin/activate # Linux/macOS
# OR: uv run python dashpva.py setup   # No activation needed
Verify Installation:
uv --version
uv pip list
Updating Dependencies:
# Update dependencies and regenerate lock file
uv lock --upgrade
# Sync with updated dependencies
uv sync
Using the provided environment.yml file, you can create the Conda environment with a single command:
conda env create -f environment.yml
Instead of using the environment.yml file, you can follow these manual instructions to set up the environment:
- Create a new Conda environment:
conda create -n DashPVA python=3.11 numpy pyqt pyqtgraph xrayutilities h5py toml
- Activate the environment:
conda activate DashPVA
- Install additional dependencies:
conda install -c apsu pvapy
pip install pyepics
Verify Conda Installation: Ensure all dependencies are installed correctly:
conda list
DashPVA now uses a command-line interface (CLI) for launching different components. All commands use the main dashpva.py script.
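As an illustration of how such a subcommand CLI is commonly structured (this is a hedged sketch built with Python's standard argparse, not DashPVA's actual implementation), the commands described in this guide could be wired up like this:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of a dashpva.py-style subcommand CLI.
    parser = argparse.ArgumentParser(prog="dashpva.py")
    sub = parser.add_subparsers(dest="command", required=True)

    setup = sub.add_parser("setup", help="Set up the PVA workflow")
    setup.add_argument("--sim", action="store_true",
                       help="Run setup against the simulator")

    # Remaining launchers take no extra flags in this sketch.
    for name in ("run", "detector", "hkl3d", "slice3d"):
        sub.add_parser(name)
    return parser

args = build_parser().parse_args(["setup", "--sim"])
print(args.command, args.sim)  # setup True
```

With this pattern, `--help` on the parser lists all subcommands automatically, which matches the `python dashpva.py --help` behavior described below.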
Set up PVA workflow and configure the system:
Run Command:
python dashpva.py setup
With Simulator:
python dashpva.py setup --sim
Key Features:
- Set PVA prefix and collector address.
- Load, edit, or create PV configuration files.
- Input caching frequency for live view.
Launch the live image visualization GUI:
Run Command:
python dashpva.py detector
GUI Features:
- Start/Stop Live View: Begin or end live image streaming.
- ROI Tools: Add, view, and manipulate ROIs on the displayed image.
- Statistical Monitoring: View and log key metrics from the live feed.
- Frame-by-Frame Processing: Supports both predetermined and spontaneous scan modes.
Launch the interactive 3D visualization tool:
Run Command:
python dashpva.py hkl3d
Features:
- Interactive 3D point cloud visualization
- Real-time data streaming and analysis
- Integration with PVA data sources
Launch the standalone 3D slicer for offline data analysis:
Run Command:
python dashpva.py slice3d
Features:
- Interactive 3D visualization with real-time slicing
- HDF5 data loading capabilities
- Slice extraction and analysis tools
- Loading indicators for large datasets
- Configurable reduction factors for performance optimization
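The reduction factor mentioned above trades resolution for speed by keeping only every Nth voxel along each axis. A minimal sketch of the idea, assuming NumPy arrays (the slicer's actual reduction strategy may differ, e.g. block averaging):

```python
import numpy as np

def reduce_volume(volume: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th voxel along each axis (stride slicing)."""
    if factor < 1:
        raise ValueError("factor must be >= 1")
    return volume[::factor, ::factor, ::factor]

vol = np.arange(64).reshape(4, 4, 4)
print(reduce_volume(vol, 2).shape)  # (2, 2, 2)
```

Stride slicing returns a view rather than a copy, so the reduction itself costs almost nothing; only downstream rendering touches the smaller array.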
# Run the launcher
python dashpva.py run
# Setup the system
python dashpva.py setup
# Launch area detector viewer
python dashpva.py detector
# Launch 3D visualization tools
python dashpva.py hkl3d
python dashpva.py slice3d
# Get help on available commands
python dashpva.py --help
For HKL (reciprocal space) live streaming and analysis, DashPVA uses a multi-stage pipeline that processes detector images through several consumers before displaying HKL coordinates in real time.
Detector → Metadata Associator → Collector → RSM Consumer → HKL Viewer
Each stage adds or processes data:
- Detector: Raw image data from area detector
- Metadata Associator: Attaches motor positions and metadata to images
- Collector: Collects and buffers images with metadata
- RSM Consumer: Calculates HKL coordinates from motor positions
- HKL Viewer: Displays 3D HKL visualization
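The wiring rule implied by this pipeline, that each stage's output channel must feed the next stage's input channel, can be sanity-checked in a few lines of Python. The channel names here are the example names used later in this guide; substitute your own:

```python
# (name, input_channel, output_channel) per stage; the viewer has no output.
PIPELINE = [
    ("associator", "11idb:detector:Image", "processor:associator:output"),
    ("collector", "processor:associator:output", "processor:collector:output"),
    ("rsm", "processor:collector:output", "processor:rsm:output"),
    ("hkl_viewer", "processor:rsm:output", None),
]

def check_wiring(stages):
    """Return (upstream, downstream) pairs whose channels do not match."""
    errors = []
    for (name_a, _, out_a), (name_b, in_b, _) in zip(stages, stages[1:]):
        if out_a != in_b:
            errors.append((name_a, name_b))
    return errors

print(check_wiring(PIPELINE))  # [] when the chain is consistent
```

Running a check like this against your planned channel names before launching the consumers catches the most common misconfiguration (a typo in one channel name) up front.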
Before starting HKL streaming, you must configure the TOML file with your beamline-specific PVs.
A. Set Detector Prefix (Line 2):
DETECTOR_PREFIX = 'your_beamline:detector_prefix'
# Example: '11idb:AD1' or '8idb:detector'
B. Configure Metadata PVs (Lines 24-30):
[METADATA]
[METADATA.CA]
# Add your Channel Access PVs (motor positions, etc.)
x = 'your_beamline:x_motor_RBV'
y = 'your_beamline:y_motor_RBV'
# Add any other metadata PVs needed
[METADATA.PVA]
# Add your PVAccess PVs here, if any
C. Configure HKL Section (Lines 83-154): This section is critical for HKL calculations. Update all motor PVs, spec PVs, and the detector setup:
[HKL]
# Sample Circle Motors (typically 4 axes)
[HKL.SAMPLE_CIRCLE_AXIS_1]
AXIS_NUMBER = 'your_beamline:motor1_RBV:AxisNumber'
DIRECTION_AXIS = 'your_beamline:motor1_RBV:DirectionAxis'
POSITION = 'your_beamline:motor1_RBV:Position'
# Repeat for SAMPLE_CIRCLE_AXIS_2, 3, 4
# And DETECTOR_CIRCLE_AXIS_1, 2
[HKL.SPEC]
ENERGY_VALUE = 'your_beamline:spec:Energy:Value'
UB_MATRIX_VALUE = 'your_beamline:spec:UB_matrix:Value'
[HKL.DETECTOR_SETUP]
CENTER_CHANNEL_PIXEL = 'your_beamline:DetectorSetup:CenterChannelPixel'
DISTANCE = 'your_beamline:DetectorSetup:Distance'
PIXEL_DIRECTION_1 = 'your_beamline:DetectorSetup:PixelDirection1'
PIXEL_DIRECTION_2 = 'your_beamline:DetectorSetup:PixelDirection2'
SIZE = 'your_beamline:DetectorSetup:Size'
UNITS = 'your_beamline:DetectorSetup:Units'
Note: For different beamlines, create a beamline-specific config file:
cp pv_configs/metadata_pvs.toml pv_configs/metadata_pvs_YOUR_BEAMLINE.toml
Follow these steps in order to start the complete HKL streaming pipeline:
python dashpva.py detector
- Enter your PVA channel name (e.g., '11idb:detector:Image')
- Click "Start Live View"
- Keep this terminal running; it shows the live detector images
Purpose: Verify detector is streaming correctly before starting the processing pipeline.
python dashpva.py setup
This opens the PVA Setup Dialog with multiple tabs. Configure each component:
- Click "Browse" and select your metadata_pvs.toml file (or beamline-specific version)
- The "Current Mode" label will show the caching mode from your config
- This config file will be used by all consumers
This consumer attaches metadata (motor positions, etc.) to detector images.
Configuration:
- Input Channel: Your detector PVA channel (e.g., '11idb:detector:Image')
- Output Channel: Where the associator sends data (e.g., 'processor:associator:output')
- Control Channel: 'processor:*:control' (default)
- Status Channel: 'processor:*:status' (default)
- Processor File: consumers/hpc_metadata_consumer.py
- Processor Class: HpcAdMetadataProcessor
- Report Period: 5 (seconds, default)
- Server Queue Size: 100 (default)
- N Consumers: 1 (default)
- Distributor Updates: 10 (default)
Action: Click "Run Associator Consumers"
What it does: Reads PVs from [METADATA] and [HKL] sections of your TOML file and attaches their values to each detector image frame.
This consumer collects and buffers images with attached metadata.
Configuration:
- Collector ID: 1 (default)
- Producer ID List: 1 (default; comma-separated if multiple)
- Input Channel: Same as the Associator Output Channel (e.g., 'processor:associator:output')
- Output Channel: Where the collector sends data (e.g., 'processor:collector:output')
- Control Channel: 'processor:*:control' (default)
- Status Channel: 'processor:*:status' (default)
- Processor File: consumers/hpc_passthrough_consumer.py
- Processor Class: HpcPassthroughProcessor
- Report Period: 5 (seconds, default)
- Server Queue Size: 100 (default)
- Collector Cache Size: 1000 (default)
Action: Click "Run Collector"
What it does: Collects images with metadata from the associator and forwards them to the next stage.
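A bounded cache like the collector's can be sketched with a fixed-size deque. This only illustrates the buffering idea behind the Collector Cache Size setting; FrameBuffer and its parameters are hypothetical names, not DashPVA's actual collector code:

```python
from collections import deque

class FrameBuffer:
    """Hypothetical bounded cache for frames with attached metadata."""

    def __init__(self, cache_size: int = 1000):
        self._frames = deque(maxlen=cache_size)

    def add(self, frame_id: int, image, metadata: dict):
        # Once the cache is full, deque(maxlen=...) drops the oldest
        # frame automatically, so memory use stays bounded.
        self._frames.append((frame_id, image, metadata))

    def __len__(self):
        return len(self._frames)

buf = FrameBuffer(cache_size=3)
for i in range(5):
    buf.add(i, image=None, metadata={"x": i})
print(len(buf))  # 3: frames 0 and 1 were evicted
```

The design choice here is "drop oldest" rather than "block the producer", which suits live streaming where stale frames are less valuable than fresh ones.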
This consumer calculates HKL coordinates from motor positions.
Configuration:
- Input Channel: Same as the Collector Output Channel (e.g., 'processor:collector:output')
- Output Channel: Where RSM data goes (e.g., 'processor:rsm:output')
- Control Channel: 'processor:*:control' (default)
- Status Channel: 'processor:*:status' (default)
- Processor File: consumers/hpc_rsm_consumer.py
- Processor Class: HpcRsmProcessor
- Report Period: 5 (seconds, default)
- Server Queue Size: 100 (default)
- N Consumers: 1 (default)
- Distributor Updates: 10 (default)
Action: Click "Run Analysis Consumer"
What it does:
- Reads motor positions from the [HKL] section of your TOML file
- Calculates reciprocal space (HKL) coordinates using xrayutilities
- Outputs HKL data (qx, qy, qz) for visualization
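The core linear-algebra step behind this calculation is mapping a scattering vector q to Miller indices via the UB matrix, h = UB⁻¹ q. The sketch below illustrates only that step; xrayutilities handles the full conventions (2π factors, axis directions, goniometer geometry) internally, so treat this as a conceptual illustration, not the consumer's actual code:

```python
import numpy as np

def q_to_hkl(q, ub):
    """Solve UB @ hkl = q for hkl. Sign and 2*pi conventions vary
    between codes; this shows only the linear-algebra step."""
    return np.linalg.solve(np.asarray(ub), np.asarray(q, dtype=float))

# With an identity UB matrix, q and hkl coincide.
ub = np.eye(3)
print(q_to_hkl([1.0, 0.0, 2.0], ub))  # [1. 0. 2.]
```

This is why the [HKL.SPEC] UB matrix PV and the motor PVs (which determine q) must both be correct: an error in either silently shifts every HKL coordinate.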
python dashpva.py hkl3d
- Input Channel: Enter the RSM Consumer Output Channel (e.g., 'processor:rsm:output')
- Config File: Browse and select your metadata_pvs.toml file
- Click "Start Live View"
What it does: Displays real-time 3D HKL visualization with point cloud data streaming from the RSM consumer.
- Channel Names Must Match: The output channel of one component must match the input channel of the next:
  - Associator Output → Collector Input
  - Collector Output → RSM Consumer Input
  - RSM Consumer Output → HKL Viewer Input
- TOML File is Critical:
  - The Metadata Associator reads PVs from the [METADATA] and [HKL] sections
  - The RSM Consumer uses [HKL] section PVs to calculate HKL coordinates
  - All motor PVs must be correctly specified in the [HKL] section
- Startup Order Matters:
  - Start the Detector Viewer first (to verify the detector is working)
  - Then start PVA Setup and launch consumers in order: Associator → Collector → RSM Consumer
  - Finally, start the HKL Viewer
- For Different Beamlines:
  - Create a beamline-specific TOML config file
  - Update all PV names to match your beamline's EPICS PVs
  - The same startup sequence applies; just use your beamline's config file
Before starting HKL streaming, ensure:
- TOML config file has the correct DETECTOR_PREFIX
- [METADATA] section has your metadata PVs
- [HKL] section has all motor PVs (sample circle, detector circle)
- [HKL.SPEC] section has energy and UB matrix PVs
- [HKL.DETECTOR_SETUP] section has detector geometry PVs
- PVA channel names are consistent across all components
- All consumers are started in the correct order
All configuration files are stored in the pv_configs/ directory.
Below is an example configuration file (example_config.toml):
# Required Setup
CONSUMER_TYPE = "spontaneous"
# Section used specifically for Metadata Pvs
[METADATA]
[METADATA.CA]
x = "x"
y = "y"
[METADATA.PVA]
# Section specifically for ROI PVs
[ROI]
[ROI.ROI1]
MIN_X = "dp-ADSim:ROI1:MinX"
MIN_Y = "dp-ADSim:ROI1:MinY"
SIZE_X = "dp-ADSim:ROI1:SizeX"
SIZE_Y = "dp-ADSim:ROI1:SizeY"
[ROI.ROI2]
MIN_X = "dp-ADSim:ROI2:MinX"
MIN_Y = "dp-ADSim:ROI2:MinY"
SIZE_X = "dp-ADSim:ROI2:SizeX"
SIZE_Y = "dp-ADSim:ROI2:SizeY"
[ROI.ROI3]
MIN_X = "dp-ADSim:ROI3:MinX"
MIN_Y = "dp-ADSim:ROI3:MinY"
SIZE_X = "dp-ADSim:ROI3:SizeX"
SIZE_Y = "dp-ADSim:ROI3:SizeY"
[ROI.ROI4]
MIN_X = "dp-ADSim:ROI4:MinX"
MIN_Y = "dp-ADSim:ROI4:MinY"
SIZE_X = "dp-ADSim:ROI4:SizeX"
SIZE_Y = "dp-ADSim:ROI4:SizeY"
[STATS]
[STATS.STATS1]
TOTAL = "dp-ADSim:Stats1:Total_RBV"
MIN = "dp-ADSim:Stats1:MinValue_RBV"
MAX = "dp-ADSim:Stats1:MaxValue_RBV"
SIGMA = "dp-ADSim:Stats1:Sigma_RBV"
MEAN = "dp-ADSim:Stats1:MeanValue_RBV"
[STATS.STATS4]
TOTAL = "dp-ADSim:Stats4:Total_RBV"
MIN = "dp-ADSim:Stats4:MinValue_RBV"
MAX = "dp-ADSim:Stats4:MaxValue_RBV"
SIGMA = "dp-ADSim:Stats4:Sigma_RBV"
MEAN = "dp-ADSim:Stats4:MeanValue_RBV"
# For use in the analysis server, not on the client side.
[ANALYSIS]
# substitute with real PVs that are also in Metadata
AXIS1 = "x"
AXIS2 = "y"
To use a custom configuration, load the file through the ConfigDialog GUI or place it in the pv_configs/ folder.
- Ensure the Conda environment is activated:
conda activate DashPVA
- Reinstall the necessary packages using the commands listed above.
- Verify .ui files (e.g., imageshow.ui) exist in the gui/ folder.
- Ensure correct paths for configuration files.
If you encounter the error:
Cannot find dbd directory, please set EPICS_DB_INCLUDE_PATH environment variable to use CA metadata PVs.
This occurs when using CA (Channel Access) metadata PVs in the collector testing script. The script needs to find EPICS database definition files.
Solution 1: Set EPICS_DB_INCLUDE_PATH manually
Find your EPICS base installation and set the environment variable:
# For APS systems, EPICS base is typically at:
export EPICS_DB_INCLUDE_PATH=/APSshare/epics/base-7.0.8/dbd
# Or if using conda-installed pvapy:
# Find where pvapy is installed, then look for dbd directory
# Usually: $CONDA_PREFIX/share/epics/dbd or similar
# To find it automatically:
python -c "import pvaccess as pva; import os; print(os.path.dirname(pva.__file__))"
# Then navigate to the dbd directory relative to that location
Solution 2: Add to your shell configuration
Add to ~/.bashrc or ~/.zshrc:
export EPICS_DB_INCLUDE_PATH=/APSshare/epics/base-7.0.8/dbd
Solution 3: Use PVA metadata instead of CA
If you don't need CA metadata, use PVA metadata instead:
# Instead of: -mpv ca://x,ca://y
# Use: -mpv pva://x,pva://y
Note: The script will attempt to auto-detect the dbd directory, but if the pvData library cannot be found, you must set EPICS_DB_INCLUDE_PATH manually.
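The manual search from Solution 1 can be scripted. The find_dbd_dir helper below is a hypothetical sketch mirroring those steps; the candidate paths are examples from this guide, not guaranteed locations on your system:

```python
import os

def find_dbd_dir(candidates=None):
    """Best-effort search for an EPICS dbd directory; returns the
    first existing candidate path, or None."""
    if candidates is None:
        candidates = ["/APSshare/epics/base-7.0.8/dbd"]
        conda_prefix = os.environ.get("CONDA_PREFIX")
        if conda_prefix:
            candidates.append(os.path.join(conda_prefix, "share", "epics", "dbd"))
    for path in candidates:
        if os.path.isdir(path):
            return path
    return None

dbd = find_dbd_dir()
if dbd:
    os.environ["EPICS_DB_INCLUDE_PATH"] = dbd
else:
    print("No dbd directory found; set EPICS_DB_INCLUDE_PATH manually.")
```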
Refer to the README.md for an overview of the project or contact the repository maintainer for assistance.