This guide will help you deploy the SpaceAPI server using the pre-built Docker image.
- Docker installed on your system
- (Optional) Docker Compose for easier management
- Copy `docker-compose.prod.yml` to your host.
- Create your `.env` and `spaceapi.json` files as described in the README.
- Start the service:

```
docker-compose -f docker-compose.prod.yml up -d
```
Making the Package Public:
- Go to your GitHub repository page
- Click on "Packages" in the right sidebar
- Click on the `spaceapi-endpoint` package
- Click "Package settings"
- Scroll to "Danger Zone" → "Change visibility"
- Change to "Public"
If you want to test immediately before CI runs:

```
# Build the image locally
docker build -f Dockerfile.spaceapi -t spaceapi:local .

# Run it with your local tag
docker run -d \
  --name spaceapi \
  -p 8080:8080 \
  -v $(pwd)/spaceapi.json:/app/spaceapi.json:ro \
  --env-file .env \
  --restart unless-stopped \
  spaceapi:local
```

```
# Create a directory for your deployment
mkdir spaceapi-server
cd spaceapi-server

# Download the example configuration
curl -O https://raw.githubusercontent.com/q30-space/spaceapi-endpoint/main/spaceapi.json.example

# Rename it to spaceapi.json
mv spaceapi.json.example spaceapi.json
```

Edit `spaceapi.json` and update at minimum:

- `space` - Your hackerspace name
- `url` - Your website URL
- `logo` - Your logo URL
- `location` - Your physical address and coordinates
- `contact` - Your contact information
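If you'd rather start from a minimal skeleton than the downloaded example, the sketch below writes one and sanity-checks it. All values are placeholders (my own, not from the project), and the required-field check simply mirrors the list above:

```shell
# Minimal spaceapi.json skeleton (all values are placeholders - replace them)
cat > spaceapi.json <<'EOF'
{
  "space": "Example Space",
  "url": "https://example.org",
  "logo": "https://example.org/logo.png",
  "location": {"address": "1 Example St, Example City", "lat": 0.0, "lon": 0.0},
  "contact": {"email": "hello@example.org"},
  "state": {"open": false}
}
EOF

# Sanity-check: the file is valid JSON and the required fields are present
python3 -c '
import json
data = json.load(open("spaceapi.json"))
for key in ("space", "url", "logo", "location", "contact"):
    assert key in data, f"missing field: {key}"
print("spaceapi.json OK")
'
```

python3 is assumed to be available; `jq . spaceapi.json` works just as well for the validity half of the check.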
```
# Generate a secure API key
echo "SPACEAPI_AUTH_KEY=$(openssl rand -hex 32)" > .env

# View the generated key (save it somewhere safe!)
cat .env
```

Option A: Using Docker directly
```
docker run -d \
  --name spaceapi \
  -p 8080:8080 \
  -v $(pwd)/spaceapi.json:/app/spaceapi.json:ro \
  --env-file .env \
  --restart unless-stopped \
  ghcr.io/q30-space/spaceapi-endpoint:latest
```

Option B: Using Docker Compose
```
# Download the production compose file
curl -O https://raw.githubusercontent.com/q30-space/spaceapi-endpoint/main/docker-compose.prod.yml

# Start the service
docker-compose -f docker-compose.prod.yml up -d
```

```
# Check the health endpoint
curl http://localhost:8080/health

# Check your SpaceAPI endpoint
curl http://localhost:8080/api/space
```

You should see your space information in JSON format!
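Once the endpoint responds, you can pull individual fields out of the JSON. A sketch using python3, shown here against a canned response; in practice, pipe `curl -s http://localhost:8080/api/space` into the same filter:

```shell
# Extract the open/closed state from a SpaceAPI JSON document
echo '{"space": "Example", "state": {"open": true}}' \
  | python3 -c 'import json, sys; d = json.load(sys.stdin); print("open" if d.get("state", {}).get("open") else "closed")'
# prints "open"
```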
Now you can update your space status using the API:
```
# Get your API key from the .env file
source .env

# Open the space
curl -X POST \
  -H "X-API-Key: $SPACEAPI_AUTH_KEY" \
  -H "Content-Type: application/json" \
  -d '{"open": true, "message": "Space is open!"}' \
  http://localhost:8080/api/space/state

# Close the space
curl -X POST \
  -H "X-API-Key: $SPACEAPI_AUTH_KEY" \
  -H "Content-Type: application/json" \
  -d '{"open": false, "message": "Space is closed"}' \
  http://localhost:8080/api/space/state

# Update people count
curl -X POST \
  -H "X-API-Key: $SPACEAPI_AUTH_KEY" \
  -H "Content-Type: application/json" \
  -d '{"value": 5, "location": "Main Space"}' \
  http://localhost:8080/api/space/people
```

For production, you should run the service behind a reverse proxy with HTTPS.
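The status calls above are easy to wrap in a small helper. A minimal sketch - the script name and argument handling are my own; the endpoint and header come from the examples above:

```shell
# space-status.sh - hypothetical wrapper around the state endpoint
cat > space-status.sh <<'EOF'
#!/bin/sh
set -eu
. ./.env   # provides SPACEAPI_AUTH_KEY

case "${1:-}" in
  open)   OPEN=true ;;
  closed) OPEN=false ;;
  *) echo "usage: $0 open|closed [message]" >&2; exit 1 ;;
esac
MESSAGE=${2:-}

curl -fsS -X POST \
  -H "X-API-Key: $SPACEAPI_AUTH_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"open\": $OPEN, \"message\": \"$MESSAGE\"}" \
  http://localhost:8080/api/space/state
EOF
chmod +x space-status.sh
```

Then `./space-status.sh open "Doors open until late"` flips the state in one call.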
Caddy (Recommended - automatic HTTPS)
```
api.yourdomain.com {
    reverse_proxy localhost:8080
    encode gzip
}
```

Nginx with Let's Encrypt
```
server {
    listen 80;
    server_name api.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name api.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

```
# Pull the latest image
docker pull ghcr.io/q30-space/spaceapi-endpoint:latest

# Recreate the container
docker-compose -f docker-compose.prod.yml up -d --force-recreate

# Or with plain docker
docker stop spaceapi
docker rm spaceapi
# Then run the docker run command again from Quick Start
```

```
# With Docker Compose
docker-compose -f docker-compose.prod.yml logs -f

# With plain Docker
docker logs -f spaceapi
```

Make sure to back up your files regularly:
- `spaceapi.json` - Your space configuration
- `.env` - Your API key

```
# Create a backup
tar czf spaceapi-backup-$(date +%Y%m%d).tar.gz spaceapi.json .env
```

```
# Check the logs
docker logs spaceapi

# Common issues:
# - Missing spaceapi.json file
# - Invalid JSON in spaceapi.json
# - Port 8080 already in use
```

```
# Check if the container is running
docker ps | grep spaceapi

# Check if the port is accessible
curl http://localhost:8080/health

# If using a firewall, make sure port 8080 is open
sudo ufw allow 8080/tcp
```

```
# Verify your API key is set
docker exec spaceapi env | grep SPACEAPI_AUTH_KEY

# Make sure you're using the correct header
# -H "X-API-Key: your_key_here"
```

```
# Edit docker-compose.prod.yml and change the ports section:
ports:
  - "9000:8080"  # Host port:Container port

# Or with docker run:
docker run -d \
  --name spaceapi \
  -p 9000:8080 \
  -v $(pwd)/spaceapi.json:/app/spaceapi.json:ro \
  --env-file .env \
  --restart unless-stopped \
  ghcr.io/q30-space/spaceapi-endpoint:latest
```

Add to your docker-compose.yml:
```
deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 128M
    reservations:
      cpus: '0.1'
      memory: 64M
```

- Documentation: See README.md for full documentation
- Issues: Report bugs on GitHub Issues
- SpaceAPI Spec: https://spaceapi.io/docs/
- ✅ Always use HTTPS in production (via reverse proxy)
- ✅ Keep your API key secret (never commit `.env` to git)
- ✅ Use strong API keys (32+ random characters)
- ✅ Mount `spaceapi.json` as read-only (`:ro` flag)
- ✅ Keep the Docker image updated
- ✅ Use firewall rules to restrict access if needed
- ✅ Monitor logs for suspicious activity
- ✅ Rotate API keys periodically
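For the rotation point, a sketch of the key swap. Note that a plain `docker restart` keeps the container's old environment, so the container has to be recreated to pick up the new key:

```shell
# Write a fresh 64-hex-character key over the old one
echo "SPACEAPI_AUTH_KEY=$(openssl rand -hex 32)" > .env

# Recreate the container so it reads the new key; `docker restart`
# alone would keep the old environment:
# docker-compose -f docker-compose.prod.yml up -d --force-recreate
```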
- Set up automatic space status updates (see scripts/)
- Configure monitoring (health checks, uptime monitoring)
- Join the SpaceAPI community
- Add your space to the SpaceAPI directory
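For the monitoring item above, a minimal sketch of a cron-friendly probe against the `/health` endpoint (the script name and log wording are my own):

```shell
# check-health.sh - exits non-zero when the endpoint is unreachable,
# which makes it usable from cron or any uptime monitor
cat > check-health.sh <<'EOF'
#!/bin/sh
curl -fsS --max-time 5 http://localhost:8080/health > /dev/null \
  || { echo "spaceapi health check failed at $(date)" >&2; exit 1; }
EOF
chmod +x check-health.sh

# Example cron entry (add via `crontab -e`), running every 5 minutes:
# */5 * * * * /path/to/check-health.sh
```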
The repository includes a production-ready docker-compose configuration (docker-compose.prod.yml):
```
# Use the provided production compose file
docker-compose -f docker-compose.prod.yml up -d

# View logs
docker-compose -f docker-compose.prod.yml logs -f

# Stop the service
docker-compose -f docker-compose.prod.yml down
```

Or create your own docker-compose.yml:
```
version: '3.8'

services:
  spaceapi:
    image: ghcr.io/q30-space/spaceapi-endpoint:latest
    container_name: spaceapi
    ports:
      - "8080:8080"
    volumes:
      - ./spaceapi.json:/app/spaceapi.json:ro
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
```

Then run:

```
docker-compose up -d
```

The following image tags are available:

- `latest` - Latest stable release from main branch
- `main` - Latest commit from main branch
- `develop` - Latest commit from develop branch
- `v1.0.0` - Specific version tags (when available)
- `main-<sha>` - Specific commit SHA
The project follows Semantic Versioning for releases:
- Major versions (v2.0.0): Breaking changes
- Minor versions (v1.1.0): New features, backward compatible
- Patch versions (v1.0.1): Bug fixes, backward compatible
Current Status: Pre-release (v0.x.x) - API may change before v1.0.0
The Docker images support both amd64 and arm64 architectures, so you can run them on:
- x86_64 / AMD64 servers
- ARM64 servers (including Raspberry Pi 4/5, AWS Graviton, etc.)
Example Caddy configuration:
```
your-domain.com {
    reverse_proxy localhost:8080
    encode gzip
}
```

Example Nginx configuration:
```
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

```
# Start the service
make docker-compose-up
# or
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f spaceapi
```

Option 1: Binary deployment

- Update `spaceapi.json` with your space details
- Build binary: `make build`
- Deploy the `bin/spaceapi` binary

Option 2: Docker Compose deployment

- Update `spaceapi.json` with your space details
- Build binaries: `make build`
- Start the container: `make docker-compose-up` (or just `docker compose up -d`)

Option 3: Docker Compose with Caddy

- Update `spaceapi.json` with your space details
- Update `Caddyfile`
- Copy `Caddyfile`, `docker-compose-caddy.yml` and `.env` to the parent directory and cd there
- Build the image: `docker compose -f docker-compose-caddy.yml build --no-cache spaceapi`
- Start the containers: `docker compose -f docker-compose-caddy.yml up -d`