Ansible automation to provision an OVE (OpenShift Virtualization Engine) demo environment: networking, a bastion VM, and bare-metal-emulated OVE nodes. Two infrastructure backends are supported, controlled by `infra_backend` in the inventory:

- `openstack` (default) — Provisions on an OpenStack cloud (project, Neutron networking, Nova VMs)
- `libvirt` — Provisions on a single RHEL KVM host using libvirt and Open vSwitch for VLAN trunking

Two install methods are controlled by `install_method` in the inventory (both work with either backend):

- `ove` (default) — Agent ISO boot: blank root volume plus a USB volume built from the agent ISO; UEFI intervention required
- `appliance` — Factory disk image: pre-built `appliance.raw` from the `openshift-appliance` container; direct boot, no UEFI intervention required

See `docs/infra-providers.md` for detailed backend-specific documentation.
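For illustration, a minimal `all.yml` fragment selecting a backend and install method might look like the following sketch (the values shown are examples, not defaults for your environment):

```yaml
# inventory/group_vars/all.yml (illustrative fragment)
infra_backend: "libvirt"      # "openstack" (default) or "libvirt"
install_method: "appliance"   # "ove" (default) or "appliance"
```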
- Python 3 with `venv`
- OpenStack backend: a `clouds.yaml` with credentials that can create projects and users
- libvirt backend: a RHEL KVM host with root SSH access and sufficient CPU/RAM/disk for the bastion + OVE nodes
```shell
# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install Python dependencies
pip install -r requirements.txt

# Install Ansible collections
ansible-galaxy collection install -r requirements.yml

# Copy the sample inventory
cp inventory/group_vars/all.yml.sample inventory/group_vars/all.yml
```

Edit `inventory/group_vars/all.yml` and fill in all fields marked `# Must be set by operator`:
Common variables:

| Variable | Description |
|---|---|
| `infra_backend` | `"openstack"` or `"libvirt"` |
| `rh_subscription_org` | Red Hat subscription org ID |
| `rh_subscription_activation_key` | Red Hat activation key |
| `pull_secret` | (appliance mode only) Pull secret from console.redhat.com |
OpenStack backend:

| Variable | Description |
|---|---|
| `cloud_name` | Cloud name from your `clouds.yaml` |
| `ssh_key_name` | OpenStack keypair name for bastion access |
| `os_auth_url` | OpenStack identity endpoint (e.g. https://identity.example.com:5000/v3) |
| `os_region` | OpenStack region name |
libvirt backend:

| Variable | Description |
|---|---|
| `kvm_host` | IP or hostname of the KVM host |
| `bastion_qcow2_image` | Path to a RHEL 9.x qcow2 image on the KVM host |
| `ssh_public_key` | SSH public key content for bastion cloud-init injection |
The `.ove-demo-cache/lab-{lab_id}/` directory holds per-lab generated secrets (project suffix, sushy password, app credential) and is gitignored. Delete a lab's cache subdirectory to reset that lab's project identity.
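As a concrete sketch (assuming a lab with `lab_id: 3`; substitute your own lab's ID), resetting a lab's identity is just a directory delete:

```shell
# Remove lab 3's cached secrets; the next run regenerates them
# (lab-3 is a hypothetical lab_id chosen for illustration)
rm -rf .ove-demo-cache/lab-3/
```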
Always activate the venv first:

```shell
source .venv/bin/activate

ansible-playbook site.yml              # Full build (idempotent)
ansible-playbook teardown.yml          # Destroy everything, including the project
ansible-playbook reset-ove-nodes.yml   # Reset nodes (OVE: blank root; appliance: re-factory)
```

Use tags for selective execution:

```shell
ansible-playbook site.yml --tags nodes          # Only create OVE nodes
ansible-playbook site.yml --tags bastion,nodes  # Skip infra, configure bastion + nodes
ansible-playbook site.yml --list-tags           # Show all available tags
```

Available tags: `infra`, `networking`, `bastion`, `nodes`, `appliance`, `sushy`, `openstack`, `libvirt`.
For appliance mode, set `install_method: "appliance"` in `all.yml`. The appliance image is built on the bastion during `site.yml` (Play 3). To rebuild it separately:

```shell
ansible-playbook build-appliance-image.yml
```

Each lab is defined by a YAML file in `labs/` that sets `lab_id`, `infra_backend`, `install_method`, and any per-lab overrides (see `labs/*.yml.sample` for examples).
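A lab definition could look like the following sketch (the override shown is hypothetical; see `labs/*.yml.sample` for the authoritative format):

```yaml
# labs/libvirt-ove.yml (illustrative)
lab_id: 3
infra_backend: "libvirt"
install_method: "ove"
# Per-lab overrides go below, e.g.:
# kvm_host: "kvm1.example.com"
```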
TUI (interactive dashboard):

```shell
./lab-manager-tui.sh
```

The TUI shows a live status table of all labs with the current phase, progress, and task counters. Use the arrow keys to select a lab, then press `d` to deploy, `t` to teardown, `r` to reset, or `x` to cancel a running action. The bottom panel streams the selected lab's Ansible log in real time (`Enter` to switch to the highlighted lab, `l` to cycle). The TUI can attach to labs already running from the CLI.
CLI (headless):

```shell
# Deploy specific labs in parallel
./lab-manager-cli.sh deploy labs/openstack-appliance.yml labs/libvirt-ove.yml

# Teardown a single lab
./lab-manager-cli.sh teardown labs/libvirt-appliance.yml

# Deploy/teardown/reset all labs in labs/
./lab-manager-cli.sh deploy-all
./lab-manager-cli.sh teardown-all
./lab-manager-cli.sh reset-all
```

Each lab runs as a background process. Logs are written to `logs/YYYYMMDD-HHMMSS/<lab-name>.log`, and a summary is printed when all labs finish. Both the CLI and the TUI write structured state to `.ove-demo-cache/tui/`, so you can start labs from the CLI and attach the TUI later to monitor progress.
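To check on a lab from another terminal, you can inspect the newest log directory — a small sketch assuming the `libvirt-ove` lab name used in the example above:

```shell
# Show the tail of one lab's log in the most recent timestamped run directory
# (lab name "libvirt-ove" follows the <lab-name>.log convention described above)
latest="$(ls -dt logs/*/ | head -n 1)"
tail -n 20 "${latest}libvirt-ove.log"   # add -f to follow live
```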
The `lab_id` variable (integer 0–55) isolates each lab's resources: credential cache, network address space, sushy port, and (for libvirt) OVS bridges, NAT networks, dnsmasq/sushy services, and MAC addresses. See `docs/infra-providers.md` for details.