Convert Dockerfiles to Open Horizon Service Definition Files (SDFs) for edge computing deployments.
Container Converter is a TypeScript/Node.js tool that automates the conversion of Dockerfiles and Docker Compose files into Open Horizon Service Definition Files (SDFs). It parses container definitions, extracts service metadata, and generates valid SDF JSON files ready for deployment on Open Horizon edge computing platforms.
- Dockerfile Parsing: Extracts base images, exposed ports, environment variables, commands, volumes, and more
- Docker Compose Support: Parses docker-compose.yml files (v2.x and Compose Specification) with full support for multi-service applications
- Intelligent Inference: Automatically infers service name, version, and architecture from container definitions
- SDF Validation: Validates generated SDFs against schema and optionally with the Open Horizon CLI
- Exchange Publishing: Publish validated SDFs directly to an Open Horizon Exchange
- Multiple Interfaces:
  - Command-line interface (CLI) for scripting and automation
  - MCP Server for AI assistant integration
  - Interactive TUI for conversational workflows
- Node.js 18.0.0 or higher
- npm or yarn
- (Optional) Open Horizon CLI (`hzn`) for validation and publishing
```shell
npm install -g container-converter
```

Or install from source:

```shell
git clone https://github.com/your-org/container-converter.git
cd container-converter
npm install
npm run build
npm link  # Makes CLI commands available globally
```

Convert a Dockerfile to an SDF:

```shell
container-converter Dockerfile
```

This creates a file named `Dockerfile-sdf.json` with the generated service definition.
```shell
container-converter Dockerfile \
  -o my-service.json \
  -n my-edge-service \
  --svc-version 2.0.0 \
  -a arm64 \
  --org myorg \
  --description "My edge computing service"
```

Validate the generated SDF:

```shell
container-converter Dockerfile --validate
```

Convert and publish in one step:

```shell
container-converter Dockerfile --publish --config agent-install.cfg --creds mycreds.env
```

Usage:

```shell
container-converter [options] <dockerfile>
```
| Argument | Description |
|---|---|
| `<input>` | Path to Dockerfile or docker-compose.yml to convert |
| Option | Description |
|---|---|
| `-o, --output <path>` | Output path for generated SDF (default: `<input>-sdf.json`) |
| `-t, --type <type>` | Input type: `dockerfile` or `compose` (auto-detected if omitted) |
| `--strategy <strategy>` | SDF generation strategy: `single-sdf` or `multi-sdf` (auto-inferred for compose) |
| `--output-dir <dir>` | Output directory for multi-SDF generation (used with `--strategy multi-sdf`) |
| `-n, --name <name>` | Service name (inferred from input if not provided) |
| `--svc-version <version>` | Service version (default: `1.0.0`) |
| `-a, --arch <arch>` | Target architecture: `amd64`, `arm64`, or `arm` (default: `amd64`) |
| `--org <org>` | Organization ID |
| `--description <desc>` | Service description |
| `--validate` | Validate generated SDF(s) with the Open Horizon CLI |
| `--publish` | Publish to the Open Horizon Exchange after conversion |
| `--config <path>` | Path to Exchange configuration file (`.cfg` format) |
| `--creds <path>` | Path to credentials file (`.env` format) |
| `--overwrite` | Overwrite if the service already exists in the Exchange |
| `--dry-run` | Validate the publish step without actually publishing |
| `-V, --version` | Output the version number |
| `-h, --help` | Display help |
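The `--type` auto-detection mentioned above can be approximated with a simple filename/content heuristic. The sketch below is illustrative only (`detectType` is a hypothetical name, not the tool's actual implementation):

```typescript
// Illustrative heuristic for --type auto-detection (not the tool's real logic).
type InputType = 'dockerfile' | 'compose';

function detectType(filename: string, content: string): InputType {
  // Compose files are usually named docker-compose.yml / compose.yaml
  if (/(^|\/)(docker-)?compose[^/]*\.ya?ml$/i.test(filename)) return 'compose';
  // A Dockerfile contains a FROM instruction
  if (/^\s*FROM\s+\S+/m.test(content)) return 'dockerfile';
  // A Compose file has a top-level `services:` key
  return /^services\s*:/m.test(content) ? 'compose' : 'dockerfile';
}
```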
| Variable | Description |
|---|---|
| `HZN_ORG_ID` | Organization ID for the Exchange |
| `HZN_EXCHANGE_USER_AUTH` | User credentials in `user:password` format |
| `HZN_EXCHANGE_URL` | Exchange URL (e.g., `http://exchange.example.com:3090/v1`) |
| `DEBUG` | Set to any value to enable stack traces on errors |
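For reference, a minimal credentials file for `--creds` might look like the following — illustrative values, assuming the `.env` format mirrors the `HZN_*` variables above:

```shell
# mycreds.env — illustrative example
HZN_ORG_ID=myorg
HZN_EXCHANGE_USER_AUTH=admin:password123
HZN_EXCHANGE_URL=http://exchange.example.com:3090/v1
```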
Input Dockerfile:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```

Command:

```shell
container-converter Dockerfile -n my-node-app --svc-version 1.0.0
```

Generated SDF:
```json
{
  "label": "my-node-app",
  "description": "Service generated from Dockerfile",
  "url": "my-node-app",
  "version": "1.0.0",
  "arch": "amd64",
  "sharable": "multiple",
  "deployment": {
    "services": {
      "my-node-app": {
        "image": "node:18-alpine",
        "ports": [
          {
            "HostIP": "0.0.0.0",
            "HostPort": "3000:3000/tcp"
          }
        ]
      }
    }
  }
}
```

Input Dockerfile:
```dockerfile
FROM python:3.11-slim
ENV APP_PORT=8080
ENV DEBUG=false
ENV LOG_LEVEL=info
EXPOSE 8080
EXPOSE 9090
WORKDIR /app
COPY . .
CMD ["python", "app.py"]
```

Command:

```shell
container-converter Dockerfile -n python-api --org myorg --validate
```

```shell
# Using environment variables
export HZN_ORG_ID=myorg
export HZN_EXCHANGE_USER_AUTH=admin:password123
export HZN_EXCHANGE_URL=http://exchange.example.com:3090/v1
container-converter Dockerfile --publish

# Or using config files
container-converter Dockerfile \
  --publish \
  --config ~/agent-install.cfg \
  --creds ~/mycreds.env
```

Container Converter supports parsing docker-compose.yml files (both the legacy v2.x format and the modern Compose Specification). This enables conversion of multi-container applications to Open Horizon SDFs.
Convert a docker-compose.yml file:
```shell
container-converter docker-compose.yml -o services.json
```

| Feature | Support | Notes |
|---|---|---|
| `image` | ✅ Full | Required; images must be pre-built and pushed to a registry |
| `ports` | ✅ Full | Converted to Open Horizon port mappings |
| `environment` | ✅ Full | Supports both array and object formats |
| `volumes` (bind mounts) | ✅ Full | Mapped to Open Horizon binds |
| `command` / `entrypoint` | ✅ Full | Combined into the SDF command array |
| `depends_on` | ✅ Full | Mapped to `requiredServices` in multi-SDF mode |
| `privileged` | ✅ Full | Directly mapped |
| `tmpfs` | ✅ Full | Mapped to tmpfs mounts |
| `networks` | ⚠️ Ignored | Open Horizon manages networking |
| `build` | ❌ Not supported | Pre-build images and use the `image` field |
| `secrets` / `configs` | ❌ Not supported | Use environment variables or bind mounts |
Input docker-compose.yml:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - '8080:80'
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80
```

Command:

```shell
container-converter docker-compose.yml
```

Generated SDF:
```json
{
  "label": "compose-project",
  "description": "Multi-container service generated from docker-compose.yml",
  "url": "compose-project",
  "version": "1.0.0",
  "arch": "amd64",
  "sharable": "multiple",
  "deployment": {
    "services": {
      "web": {
        "image": "nginx:alpine",
        "ports": [
          {
            "HostIP": "0.0.0.0",
            "HostPort": "8080:80/tcp"
          }
        ],
        "environment": ["NGINX_HOST=localhost", "NGINX_PORT=80"]
      }
    }
  }
}
```

Docker Compose environment variable syntax is fully supported:
```yaml
services:
  app:
    image: ${APP_IMAGE:-myapp:latest}
    ports:
      - '${APP_PORT:-3000}:3000'
    environment:
      DATABASE_URL: ${DATABASE_URL}
```

Supported formats:

- `${VAR}` - Simple substitution
- `${VAR:-default}` - Use default if VAR is unset or empty
- `${VAR-default}` - Use default only if VAR is unset
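The three substitution forms can be sketched as below — an illustrative helper only (`substitute` is a hypothetical name, not the tool's exported API):

```typescript
// Hypothetical sketch of Compose-style ${VAR} substitution.
function substitute(input: string, env: Record<string, string | undefined>): string {
  return input.replace(
    /\$\{([A-Za-z_][A-Za-z0-9_]*)(:-|-)?([^}]*)\}/g,
    (_match, name: string, op: string | undefined, fallback: string) => {
      const value = env[name];
      if (op === ':-') return value ? value : fallback;              // unset OR empty → default
      if (op === '-') return value !== undefined ? value : fallback; // only unset → default
      return value ?? '';                                            // plain ${VAR}
    }
  );
}
```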
Both legacy and modern Compose formats are supported:
Legacy v2.x format:

```yaml
version: '2.4'
services:
  web:
    image: nginx
```

Modern Compose Specification (recommended):

```yaml
services:
  web:
    image: nginx
```

Note: The `version` field is deprecated in the Compose Specification and will trigger a warning.
Input docker-compose.yml:

```yaml
services:
  wordpress:
    image: wordpress:latest
    ports:
      - '8080:80'
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```

The generated SDF will include both services with proper dependency mapping via `requiredServices`.
When converting Docker Compose files, Container Converter supports two generation strategies:
Single-SDF strategy: generates one SDF file containing all services in the `deployment.services` dictionary.
Best for:
- Simple applications (1-3 services)
- Services without complex dependencies
- Tightly coupled services that should always deploy together
- Development and testing scenarios
Example:
```shell
container-converter docker-compose.yml --strategy single-sdf -o app.json
```

Generated structure:

```json
{
  "label": "my-app",
  "version": "1.0.0",
  "deployment": {
    "services": {
      "web": { "image": "nginx:alpine", ... },
      "api": { "image": "myapi:1.0", ... },
      "cache": { "image": "redis:7", ... }
    }
  }
}
```

Advantages:
- Single file to manage
- All services deploy atomically
- Simpler for small applications
Limitations:
- Cannot version services independently
- Cannot reuse services across patterns
- All services must target the same architecture
Multi-SDF strategy: generates separate SDF files for each service, with dependencies mapped to `requiredServices`.
Best for:
- Complex applications (4+ services)
- Services with dependencies (`depends_on`)
- Services that need independent versioning
- Reusable service components
- Production deployments
Example:
```shell
container-converter docker-compose.yml --strategy multi-sdf --output-dir ./sdfs/
```

Generated structure:

```
sdfs/
├── web.json   # Web service SDF
├── api.json   # API service SDF (requires: db)
└── db.json    # Database service SDF
```
Each SDF includes `requiredServices` for dependencies:

```json
{
  "label": "my-app - api",
  "url": "my-app.api",
  "version": "1.0.0",
  "requiredServices": [
    {
      "url": "my-app.db",
      "org": "myorg",
      "versionRange": "1.0.0",
      "arch": "amd64"
    }
  ],
  "deployment": {
    "services": {
      "api": { "image": "myapi:1.0", ... }
    }
  }
}
```

Advantages:
- Independent service versioning
- Service reusability across patterns
- Better for microservices architecture
- Follows Open Horizon best practices
Limitations:
- Multiple files to manage
- Requires understanding of service dependencies
- More complex deployment workflow
Container Converter automatically infers the best strategy based on your Compose file:
| Condition | Inferred Strategy | Reason |
|---|---|---|
| ≤3 services, no dependencies | `single-sdf` | Simple application |
| >3 services | `multi-sdf` | Complex application |
| Has `depends_on` | `multi-sdf` | Service dependencies |
| Multiple networks | `multi-sdf` | Complex networking |
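The inference rules in this table can be expressed as a short function. This is an illustrative sketch that assumes the rules are checked in the order listed — the actual implementation may differ:

```typescript
// Illustrative sketch of the strategy-inference rules (hypothetical).
interface ComposeSummary {
  serviceCount: number;
  hasDependsOn: boolean;
  networkCount: number;
}

function inferStrategy(c: ComposeSummary): 'single-sdf' | 'multi-sdf' {
  if (c.serviceCount > 3) return 'multi-sdf'; // complex application
  if (c.hasDependsOn) return 'multi-sdf';     // service dependencies
  if (c.networkCount > 1) return 'multi-sdf'; // complex networking
  return 'single-sdf';                        // ≤3 services, no dependencies
}
```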
Override automatic inference:
```shell
# Force single-SDF for a complex app
container-converter docker-compose.yml --strategy single-sdf

# Force multi-SDF for a simple app
container-converter docker-compose.yml --strategy multi-sdf --output-dir ./sdfs/
```

A complete multi-SDF workflow:

```shell
# Generate multi-SDF
container-converter docker-compose.yml \
  --strategy multi-sdf \
  --output-dir ./sdfs/ \
  --org myorg \
  --validate

# Output:
# ✓ Generated: sdfs/web.json
# ✓ Generated: sdfs/api.json
# ✓ Generated: sdfs/db.json
# ✓ Validated: sdfs/web.json
# ✓ Validated: sdfs/api.json
# ✓ Validated: sdfs/db.json
```

```shell
# Publish all SDFs (publishes in dependency order)
container-converter docker-compose.yml \
  --strategy multi-sdf \
  --output-dir ./sdfs/ \
  --publish \
  --org myorg

# Output:
# ✓ Published: myorg/my-app.db@1.0.0
# ✓ Published: myorg/my-app.api@1.0.0 (requires: my-app.db)
# ✓ Published: myorg/my-app.web@1.0.0 (requires: my-app.api)
```

```shell
# Convert EdgeX-style compose with many services
container-converter edgex-compose.yml \
  --strategy multi-sdf \
  --output-dir ./edgex-sdfs/ \
  --org edgex \
  --svc-version 3.0.0 \
  -a arm64

# Generates separate SDFs for:
# - consul, redis, mqtt-broker (infrastructure)
# - core-data, core-metadata, core-command (core services)
# - device-virtual, device-rest (device services)
# - app-rules-engine (application services)
```

When using the multi-SDF strategy, Container Converter builds a dependency graph:
```
wordpress.db  (no dependencies)
  ↑
  └── wordpress.wordpress  (depends on: db)
```
Services are published in topological order (dependencies first) to ensure all `requiredServices` exist in the Exchange before dependent services are published.
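The publish ordering amounts to a standard topological sort over the `depends_on` graph. The following is an illustrative standalone implementation, not the tool's actual code:

```typescript
// Illustrative topological sort: returns services with dependencies before dependents.
function publishOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const state = new Map<string, 'visiting' | 'done'>();

  const visit = (svc: string): void => {
    if (state.get(svc) === 'done') return;
    if (state.get(svc) === 'visiting') throw new Error(`dependency cycle at ${svc}`);
    state.set(svc, 'visiting');
    for (const dep of deps[svc] ?? []) visit(dep); // dependencies first
    state.set(svc, 'done');
    order.push(svc);
  };

  for (const svc of Object.keys(deps)) visit(svc);
  return order;
}
```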
When migrating from Docker Compose to Open Horizon, follow this guide:
1.1 Build and Push Images
If your Compose file uses `build` sections, you must pre-build the images and push them to a registry:
```shell
# Build images
docker-compose build

# Tag for your registry
docker tag myapp_web:latest myregistry.io/myapp-web:1.0.0
docker tag myapp_api:latest myregistry.io/myapp-api:1.0.0

# Push to registry
docker push myregistry.io/myapp-web:1.0.0
docker push myregistry.io/myapp-api:1.0.0
```

1.2 Update Compose File

Replace `build` sections with `image` references:
```yaml
# Before
services:
  web:
    build: ./web

# After
services:
  web:
    image: myregistry.io/myapp-web:1.0.0
```

1.3 Handle Named Volumes
Replace named volumes with bind mounts for edge deployments:
```yaml
# Before (named volume)
services:
  db:
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:

# After (bind mount)
services:
  db:
    volumes:
      - /data/mysql:/var/lib/mysql:rw
```

Why? Named volumes require Docker volume management, which may not be available on all edge devices. Bind mounts give you explicit control over data persistence.
For simple applications (1-3 services, no dependencies):
```shell
container-converter docker-compose.yml --strategy single-sdf -o app.json
```

For complex applications (4+ services, dependencies):

```shell
container-converter docker-compose.yml --strategy multi-sdf --output-dir ./sdfs/
```

Let Container Converter decide (recommended):

```shell
container-converter docker-compose.yml  # Auto-infers the best strategy
```

Validate the schema locally:

```shell
container-converter docker-compose.yml --validate
```

Test on an edge device:

```shell
# Copy SDF to edge device
scp app.json edge-device:/tmp/

# On edge device, register the service
hzn register -n edge-node -p /tmp/app.json
```

Single-SDF:
```shell
container-converter docker-compose.yml \
  --publish \
  --org myorg \
  --config agent-install.cfg
```

Multi-SDF (publishes all in dependency order):

```shell
container-converter docker-compose.yml \
  --strategy multi-sdf \
  --output-dir ./sdfs/ \
  --publish \
  --org myorg
```

After publishing services, create a deployment policy or pattern:
Example policy (`policy.json`):

```json
{
  "label": "My App Deployment Policy",
  "description": "Deploy my-app services to edge nodes",
  "service": {
    "name": "my-app.web",
    "org": "myorg",
    "arch": "amd64",
    "serviceVersions": [
      {
        "version": "1.0.0"
      }
    ]
  },
  "properties": [],
  "constraints": [
    "purpose == production"
  ]
}
```

Register the policy:

```shell
hzn exchange deployment addpolicy -f policy.json my-app-policy
```

Challenge 1: Custom Networks
```yaml
# Compose file with custom networks
networks:
  frontend:
  backend:
```

Solution: Open Horizon manages networking automatically. Services can communicate using service names as hostnames. Remove custom network definitions.
Challenge 2: Health Checks
```yaml
# Compose healthcheck
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost/health"]
  interval: 30s
```

Solution: Open Horizon uses agreement-based health monitoring. Remove `healthcheck` definitions and configure node health policies instead.
Challenge 3: Resource Limits
```yaml
# Compose resource limits
deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 512M
```

Solution: Open Horizon handles resource management at the node level. Remove `deploy` sections and configure node constraints in deployment policies.
Challenge 4: Secrets and Configs
```yaml
# Compose secrets
secrets:
  db_password:
    file: ./secrets/db_password.txt
```

Solution: Use environment variables or Open Horizon's secret management:

```yaml
# Use environment variables instead
environment:
  DB_PASSWORD: ${DB_PASSWORD}
```

Then set on the edge node:

```shell
export DB_PASSWORD="secure-password"
hzn register ...
```

- All images built and pushed to an accessible registry
- `build` sections removed from the Compose file
- Named volumes replaced with bind mounts
- Custom networks removed (or documented as not needed)
- Health checks removed (or documented)
- Resource limits removed (or documented)
- Secrets/configs migrated to environment variables
- Generated SDF(s) validated with `--validate`
- Tested on an edge device
- Published to the Exchange
- Deployment policy/pattern created
- Monitoring configured
- Version Everything: Use explicit image tags (not `latest`) for reproducible deployments
- Test Locally First: Use `docker-compose up` to verify that your updated Compose file works
- Start Simple: Begin with single-SDF for initial testing, then migrate to multi-SDF for production
- Document Dependencies: Clearly document service dependencies in your Compose file
- Plan Data Persistence: Design your bind mount strategy before deployment
- Monitor Edge Nodes: Set up monitoring for edge deployments (it differs from cloud monitoring)
- Rollback Strategy: Keep previous SDF versions for quick rollback if needed
Original docker-compose.yml:
```yaml
version: '3.8'
services:
  web:
    build: ./web
    ports:
      - "8080:80"
    depends_on:
      - api
    networks:
      - frontend
  api:
    build: ./api
    environment:
      DB_HOST: db
    depends_on:
      - db
    networks:
      - frontend
      - backend
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
volumes:
  db-data:
networks:
  frontend:
  backend:
```

Migrated docker-compose.yml:
```yaml
services:
  web:
    image: myregistry.io/myapp-web:1.0.0
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: myregistry.io/myapp-api:1.0.0
    environment:
      DB_HOST: db
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - /data/postgres:/var/lib/postgresql/data:rw
```

Conversion command:
```shell
container-converter docker-compose.yml \
  --strategy multi-sdf \
  --output-dir ./sdfs/ \
  --org myorg \
  --svc-version 1.0.0 \
  --validate \
  --publish
```

Result: Three SDFs published to the Exchange with proper dependencies, ready for edge deployment.
Container Converter includes an MCP (Model Context Protocol) server that exposes its functionality as tools for AI assistants.
```shell
# Run directly with tsx
npx tsx src/mcp/server.ts

# Or after building
container-converter-mcp
```

| Tool | Description |
|---|---|
| `convert_dockerfile` | Convert a Dockerfile to an SDF |
| `validate_sdf` | Validate an SDF (schema and/or CLI) |
| `publish_sdf` | Publish an SDF to the Open Horizon Exchange |
| `check_hzn_cli` | Check if the hzn CLI is available and get its version |
| `list_exchange_services` | List services in the Exchange |
| Parameter | Required | Description |
|---|---|---|
| `dockerfile_path` | Yes | Path to Dockerfile |
| `name` | No | Service name |
| `version` | No | Service version (default: `1.0.0`) |
| `arch` | No | Target architecture (default: `amd64`) |
| `org` | No | Organization ID |
| `description` | No | Service description |
| `output_path` | No | Path to save the generated SDF |
| Parameter | Required | Description |
|---|---|---|
| `sdf` | Yes | File path or SDF object |
| `use_cli` | No | Also validate with the hzn CLI (default: `true`) |
| Parameter | Required | Description |
|---|---|---|
| `sdf` | Yes | File path or SDF object |
| `config_path` | No | Path to Exchange config file |
| `creds_path` | No | Path to credentials file |
| `overwrite` | No | Overwrite if the service exists |
| `dry_run` | No | Validate without publishing |
Add to your MCP configuration:

```json
{
  "mcpServers": {
    "container-converter": {
      "command": "npx",
      "args": ["tsx", "/path/to/src/mcp/server.ts"]
    }
  }
}
```

The interactive Terminal User Interface provides a conversational workflow for container conversion.
```shell
# Run directly with tsx
npx tsx src/tui/index.tsx

# Or after building
container-converter-tui
```

| Command | Description |
|---|---|
| `convert <dockerfile>` | Convert a Dockerfile to an SDF |
| `validate <sdf-file>` | Validate an SDF file |
| `publish <sdf-file>` | Publish an SDF to the Exchange |
| `check cli` | Check hzn CLI availability |
| `list services` | List Exchange services |
| `preview` | Preview the current SDF |
| `save <path>` | Save the current SDF to a file |
| `help` | Show available commands |
| `quit` | Exit the application |
- Conversational interface with command history
- Visual progress indicators
- SDF preview and editing workflow
- Color-coded output (info, success, warning, error)
- Keyboard shortcuts (Ctrl+C to exit)
Container Converter can be used as a library in your Node.js projects.
```shell
npm install container-converter
```

```typescript
import {
  DockerfileParser,
  SDFGenerator,
  SDFValidator,
  publishService,
  getCredentials,
} from 'container-converter';
import { readFileSync } from 'fs';

// Parse a Dockerfile
const dockerfile = readFileSync('Dockerfile', 'utf-8');
const parser = new DockerfileParser();
const dockerfileData = parser.parse(dockerfile);

// Generate SDF
const generator = new SDFGenerator();
const sdf = generator.generate(dockerfileData, {
  name: 'my-service',
  version: '1.0.0',
  organization: 'myorg',
});

// Validate SDF
const validator = new SDFValidator();
const schemaResult = validator.validateSchema(sdf);
if (!schemaResult.valid) {
  console.error('Validation errors:', schemaResult.errors);
}

// Publish to Exchange
const credentials = await getCredentials();
if (credentials) {
  const result = await publishService({
    credentials,
    sdf,
  });
  console.log('Published:', result.serviceId);
}
```

API overview:

```typescript
class DockerfileParser {
  parse(content: string): DockerfileData;
}

interface DockerfileData {
  baseImage: string;
  exposedPorts: number[];
  environment: Record<string, string>;
  commands: string[][];
  workdir: string;
  user: string;
  volumes: string[];
  labels: Record<string, string>;
}
```

```typescript
class SDFGenerator {
  generate(dockerfileData: DockerfileData, metadata?: Partial<ServiceMetadata>): ServiceDefinition;
}

interface ServiceMetadata {
  name: string;
  version: string;
  architecture: string;
  organization: string;
  description: string;
}
```

```typescript
class SDFValidator {
  validateSchema(sdf: ServiceDefinition): ValidationResult;
  validateWithCli(sdf: ServiceDefinition): Promise<ValidationResult>;
  validateFull(sdf: ServiceDefinition): Promise<ValidationResult>;
}

interface ValidationResult {
  valid: boolean;
  errors: Array<{ field?: string; message: string }>;
  cliAvailable?: boolean;
}
```

The Open Horizon CLI is required for validation and publishing. Install it from: https://github.com/open-horizon/anax
```shell
# Check if hzn is installed
which hzn

# Check version
hzn version
```

Ensure the environment variables are set:

```shell
export HZN_ORG_ID=your-org
export HZN_EXCHANGE_USER_AUTH=user:password
export HZN_EXCHANGE_URL=http://your-exchange:3090/v1
```

Or provide config files:

```shell
container-converter Dockerfile --publish \
  --config agent-install.cfg \
  --creds mycreds.env
```

If the Exchange is unreachable:

- Verify the Exchange URL is correct and accessible
- Check network connectivity
- Ensure the Exchange service is running

```shell
# Test connectivity
curl -s $HZN_EXCHANGE_URL/admin/status
```

If the generated SDF is missing required fields, check that:

- The Dockerfile has a valid `FROM` instruction
- Required fields are provided via CLI options if they cannot be inferred

Enable stack traces for detailed error information:

```shell
DEBUG=1 container-converter Dockerfile
```

Set up a development environment:

```shell
git clone https://github.com/your-org/container-converter.git
cd container-converter
npm install
```

| Script | Description |
|---|---|
| `npm run dev` | Start development server with hot reload |
| `npm run build` | Build production bundle |
| `npm test` | Run all tests |
| `npm run test:watch` | Run tests in watch mode |
| `npm run test:coverage` | Run tests with coverage report |
| `npm run lint` | Run ESLint |
| `npm run lint:fix` | Auto-fix ESLint issues |
| `npm run format` | Format code with Prettier |
| `npm run typecheck` | Run TypeScript type checking |
| `npm run validate` | Run full validation (lint + typecheck + test) |
```
src/
├── cli/          # Command-line interface
├── mcp/          # MCP Server implementation
├── tui/          # Interactive terminal UI
├── parser/       # Dockerfile parsing logic
├── generator/    # SDF generation logic
├── validator/    # SDF validation logic
├── publisher/    # Exchange authentication and publishing
├── types/        # TypeScript type definitions
├── utils/        # Shared utilities
└── index.ts      # Main entry point

tests/
├── unit/         # Unit tests
├── integration/  # Integration tests
└── fixtures/     # Test data and examples
```

```shell
# Run all tests
npm test

# Run specific test file
npm test -- --testPathPattern="sdf-generator"

# Run with coverage
npm run test:coverage
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/my-feature`)
- Make your changes
- Run validation (`npm run validate`)
- Commit your changes (`git commit -s -m "feat: add my feature"`)
- Push to the branch (`git push origin feature/my-feature`)
- Open a Pull Request
We use Conventional Commits:
- `feat:` New feature
- `fix:` Bug fix
- `docs:` Documentation changes
- `test:` Test changes
- `refactor:` Code refactoring
- `chore:` Maintenance tasks
Apache-2.0