- Python 3.13
- Git
- Docker (for E2E tests with MariaDB testcontainer)
- Chrome/Chromium browser (for E2E tests)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd workshop-inventory-tracking
  ```

- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  pip install -r requirements-test.txt
  ```

- Install Playwright browsers (for E2E tests):

  ```bash
  python -m playwright install chromium
  ```
The project uses Nox for consistent test execution across environments. All test commands should be run from the project root directory.
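The sessions referenced throughout this guide live in a `noxfile.py` at the project root. As a rough sketch of the shape such a file takes (the session bodies and dependency lists below are illustrative, not the project's actual noxfile):

```python
import nox


@nox.session
def tests(session):
    """Run the unit test suite."""
    session.install("-r", "requirements.txt", "-r", "requirements-test.txt")
    session.run("pytest", "tests/unit", "-v")


@nox.session
def e2e(session):
    """Run browser-based end-to-end tests."""
    session.install("-r", "requirements.txt", "-r", "requirements-test.txt")
    session.run("pytest", "tests/e2e", "-v")


@nox.session
def coverage(session):
    """Generate terminal and HTML coverage reports."""
    session.install("-r", "requirements.txt", "-r", "requirements-test.txt")
    session.run("pytest", "--cov=app", "--cov-report=term", "--cov-report=html")
```

Running `nox -s <name>` creates an isolated virtualenv per session, which is what makes the results consistent across environments.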
Command: `nox -s tests`

Purpose: Tests individual components in isolation using mock dependencies.

Coverage:

- Model Tests (`test_models.py`): `Dimensions` and `Thread` classes and enum validation
- Service Tests (`test_inventory_service.py`): Business logic, search, filtering, and batch operations using the SQLite backend
- InventoryService Tests (`test_mariadb_inventory_service.py`): MariaDB-specific active-only filtering logic
- Basic Tests (`test_basic.py`): Infrastructure and integration points
- Audit Tests (`test_audit_logging.py`): Audit logging functionality

Runtime: ~0.3 seconds
Command: `nox -s e2e`

Purpose: Tests complete user workflows through the web interface using browser automation.

Database: MariaDB testcontainer locally (auto-managed); MariaDB service in CI
Coverage:
- Form Submission: Adding new inventory items via web form
- Data Persistence: Verifying items are saved and retrievable
- UI Integration: Testing Flask routes, templates, and JavaScript interactions
- Multi-Row Scenarios: Testing active item lookup and history functionality
- API Endpoints: Testing item history API and data retrieval logic
Technology: Playwright with Chromium browser + MariaDB 10.11 testcontainer
Runtime: ~10-15 seconds (plus initial Docker container startup)
Debug Features:

- Automatic failure capture with screenshots, HTML dumps, and console logs
- Debug output saved to `test-debug-output/` with timestamped directories
- Comprehensive diagnostic information for failed-test analysis
Command: `nox -s coverage`

Purpose: Generates a detailed code coverage analysis.

Output:

- Terminal coverage summary
- HTML report in `htmlcov/index.html`
```bash
# Run all unit tests
nox -s tests

# Run a specific test file
python -m pytest tests/unit/test_models.py -v

# Run a specific test method
python -m pytest tests/unit/test_models.py::TestDimensions::test_dimensions_creation_basic -v

# Run E2E tests (requires Flask server)
nox -s e2e

# Generate coverage report
nox -s coverage

# List all available nox sessions
nox -l
```

The project uses automated screenshot generation to keep documentation visually up to date with the UI.
Command: `nox -s screenshots`

Purpose: Generate all documentation screenshots with realistic test data.

Output: 12 screenshots in `docs/images/screenshots/` (README and user manual)

Runtime: ~60-90 seconds

Command: `nox -s screenshots_headless`

Purpose: Generate screenshots without visible browser windows (for CI/CD pipelines).

Command: `nox -s screenshots_verify`
Purpose: Verify all screenshots meet quality standards:
- File size under 500KB
- Valid PNG format
- RGB/RGBA color mode
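The real checks are implemented by the `screenshots_verify` session; as a stdlib-only sketch of equivalent checks (the function name and message wording are illustrative), file size comes from the byte count, PNG validity from the fixed 8-byte signature, and the color mode from the IHDR color-type byte:

```python
from pathlib import Path

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
# PNG IHDR color types: 2 = truecolor (RGB), 6 = truecolor with alpha (RGBA)
ALLOWED_COLOR_TYPES = {2, 6}


def verify_screenshot(path, max_bytes=500_000):
    """Return a list of problems; an empty list means the screenshot passes."""
    data = Path(path).read_bytes()
    problems = []
    if len(data) > max_bytes:
        problems.append(f"file is {len(data)} bytes (limit {max_bytes})")
    if not data.startswith(PNG_SIGNATURE):
        problems.append("not a valid PNG (bad signature)")
        return problems  # header fields below would be meaningless
    # The IHDR chunk always comes first: 8-byte signature, 4-byte length,
    # 4-byte chunk type, 4-byte width, 4-byte height, 1-byte bit depth,
    # then the 1-byte color type at offset 25.
    if data[25] not in ALLOWED_COLOR_TYPES:
        problems.append(f"color type {data[25]} is not RGB/RGBA")
    return problems
```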
Screenshots should be regenerated when:

- UI Changes
  - Template modifications (`app/templates/**`)
  - CSS changes (`app/static/css/**`)
  - JavaScript changes affecting the UI (`app/static/js/**`)
- Test Data Changes
  - Screenshot test fixtures modified (`tests/e2e/fixtures/screenshot_data.py`)
- Feature Additions
  - New features that need documentation screenshots
When modifying UI files:

```bash
# 1. Make your UI changes
vim app/templates/inventory/list.html

# 2. Regenerate screenshots
nox -s screenshots

# 3. Review changes
git diff docs/images/screenshots/

# 4. Verify quality
nox -s screenshots_verify

# 5. Commit UI changes and screenshots together
git add app/templates/ docs/images/screenshots/
git commit -m "Update inventory list UI and regenerate screenshots"
```

The project includes a pre-commit hook that reminds you to regenerate screenshots when UI files are modified.
Install the hook:

```bash
./hooks/install.sh
```

What it does:
- Detects when you're committing UI file changes
- Prompts you to confirm screenshots were regenerated
- Provides commands if regeneration is needed
- Allows skipping for non-visual changes
Example interaction:

```text
⚠️  UI files were modified:
    app/templates/inventory/list.html

📸 Consider regenerating screenshots:
    nox -s screenshots

❓ Did you update screenshots? (y/n/skip)
```
GitHub Actions automatically verifies screenshots on pull requests:
Workflow: `.github/workflows/screenshots.yml`
Triggers: PRs that modify:
- Templates
- CSS/JavaScript
- Screenshot test files
Actions:
- Regenerates all screenshots
- Compares with committed versions
- Fails if screenshots are outdated
- Comments on PR with instructions
- Provides regenerated screenshots as artifacts
PR Requirements:
- If you modify UI files, you must regenerate and commit screenshots
- CI will block merge if screenshots are out of date
Test File: `tests/e2e/test_screenshot_generation.py`

Test Data: `tests/e2e/fixtures/screenshot_data.py`

- 12 realistic workshop inventory items
- Complete material specifications
- Professional purchase information

Configuration: `tests/e2e/screenshot_config.yaml`

- Screenshot definitions
- Viewport sizes
- Documentation insertion points
To add a new screenshot:

1. Add a test method to `test_screenshot_generation.py`:

   ```python
   @pytest.mark.screenshot
   @pytest.mark.e2e
   def test_screenshot_new_feature(self, page, live_server):
       """Generate new feature screenshot"""
       items = get_inventory_items(count=3)
       self._load_inventory_data(live_server, items)

       page.goto(f"{live_server.url}/new-feature")
       page.wait_for_selector("#feature-element", timeout=5000)

       self.screenshot.capture_viewport(
           "user-manual/new_feature.png",
           viewport_size=(1920, 1080),
           wait_for_selector="#feature-element",
           hide_selectors=[".toast-container"],
           full_page=True,
       )
   ```

2. Generate the screenshot:

   ```bash
   nox -s screenshots
   ```

3. Add it to the documentation:

   ```markdown
   ![New Feature](images/screenshots/user-manual/new_feature.png)
   *Description of what the screenshot shows*
   ```

4. Verify and commit:

   ```bash
   nox -s screenshots_verify
   git add tests/e2e/test_screenshot_generation.py docs/images/screenshots/ docs/user-manual.md
   git commit -m "Add screenshot for new feature"
   ```

Problem: Regenerated screenshots look different than expected
Solutions:

- Clear browser cache: Playwright uses fresh browser instances
- Check test data: ensure fixtures match the expected state
- Review viewport size: screenshots use 1920x1080 by default
- Check wait conditions: longer timeouts may be needed

Problem: Screenshot test fails with a timeout or "element not found"

Solutions:

- Run the test with a visible browser: `nox -s screenshots` (not headless)
- Check element selectors in the test
- Verify test data is loading correctly
- Review test debug output in `test-debug-output/`
Problem: Screenshot exceeds the 500KB limit

Solutions:

- PNG optimization is automatic; check that it is working
- Consider reducing the viewport size
- Hide unnecessary UI elements with `hide_selectors`
- Review whether full-page capture is needed
- Always regenerate after UI changes: don't manually edit screenshots
- Commit screenshots with UI changes: keep them in sync
- Use realistic test data: screenshots represent production usage
- Review generated screenshots: ensure they look professional
- Run verification before committing: catch issues early

Related documentation:

- Generation Guide: `docs/images/screenshots/GENERATION_GUIDE.md`
- Quality Verification: `docs/images/screenshots/VERIFICATION.md`
- Hook Documentation: `hooks/README.md`
MariaDB with SQLite Backend: Unit tests use `MariaDBStorage` with an SQLite in-memory database for fast, isolated testing. This provides MariaDB interface compatibility while using SQLite for speed.
Fixtures:

- `app`: Flask application context for service tests
- `test_storage`: fresh `MariaDBStorage` instance with SQLite backend per test
- `service`: `InventoryService` with test storage (no batching or caching)
- `sample_item` / `sample_threaded_item`: pre-configured test data with full validation
Test Server: Dedicated Flask server with test configuration that uses MariaDB (testcontainer locally, service in CI). Direct database writes ensure test data is immediately available.
MariaDB Testcontainer: Automatically managed Docker container with MariaDB 10.11, same version as production. No manual setup required.
Page Objects: Organized test code that interacts with web elements using Playwright selectors.
Test Isolation: Each E2E test runs with a fresh database state.
Debug Capture: Automated failure diagnostics with comprehensive debugging information:

- Screenshots: full-page screenshot at the failure point (`failure_screenshot.png`)
- HTML Dumps: complete DOM state for analysis (`failure_page.html`)
- Console Logs: browser console output including errors (`console_logs.json`)
- Page State: URL, title, viewport, and failure context (`page_state.json`)
- Browser Storage: localStorage and sessionStorage contents
- Debug Summary: human-readable analysis guide (`DEBUG_SUMMARY.md`)
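The capture flow above can be sketched with a small helper. This is not the suite's real implementation: the helper name, directory layout, and the subset of artifacts written are illustrative, and `page` is assumed to follow Playwright's sync API (`screenshot()`, `content()`, `url`, `title()`):

```python
import json
import time
from pathlib import Path


def capture_failure(page, test_name, root="test-debug-output"):
    """Write screenshot, HTML dump, and page state to a timestamped directory."""
    out_dir = Path(root) / f"{test_name}_{time.strftime('%Y%m%d_%H%M%S')}"
    out_dir.mkdir(parents=True, exist_ok=True)
    # Full-page screenshot of the visual state at the failure point
    page.screenshot(path=str(out_dir / "failure_screenshot.png"), full_page=True)
    # Complete DOM state for offline analysis
    (out_dir / "failure_page.html").write_text(page.content())
    # Minimal failure context; the real capture also records console logs
    state = {"url": page.url, "title": page.title()}
    (out_dir / "page_state.json").write_text(json.dumps(state, indent=2))
    return out_dir
```

Wiring this into a pytest hook (e.g. `pytest_runtest_makereport`) is what makes the capture automatic on failure.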
For local development and testing, you can run the Flask application directly with automatic reloading enabled.
Command:

```bash
# Activate virtual environment first
source venv/bin/activate

# Run Flask development server
python app.py
```

Features:

- Auto-reload: the server automatically restarts when code changes are detected
- Debug mode: detailed error messages with stack traces in the browser
- Local access: available at http://127.0.0.1:5000
- Hot reloading: template and static file changes are reflected immediately
Alternative Methods:

```bash
# Using Flask CLI (note: FLASK_ENV is deprecated in Flask 2.3+; use FLASK_DEBUG=1)
export FLASK_APP=app.py
export FLASK_ENV=development
flask run

# Using Python module
python -m flask run --debug

# Custom host/port
python app.py  # Configured for 127.0.0.1:5000
```

Development Configuration:

- Debug mode enabled by default in `app.py`
- Automatic template reloading
- Static file serving with caching disabled
- Detailed error pages with an interactive debugger
Note: The development server uses MariaDB for data storage (production setup). E2E tests use MariaDB testcontainer to match production exactly.
The application uses a single enhanced `InventoryItem` model for both business logic and database persistence:

- `InventoryItem` (`app/database.py`): SQLAlchemy model with hybrid properties for enum conversion and business logic methods
- Supporting Models: `Dimensions` and `Thread` classes for complex data structures
- Enums: `ItemType`, `ItemShape`, `ThreadSeries`, `ThreadHandedness` for standardized values
Key Design Principles:
- Single source of truth for inventory data
- Enum properties provide type-safe access while storing strings in database
- Precision normalization for machinist notation compliance
- Backward compatibility maintained throughout model evolution
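The "enum properties over string storage" principle can be sketched without SQLAlchemy. In the real model it is implemented with hybrid properties on `InventoryItem`; the class name and enum members below are purely illustrative:

```python
from enum import Enum


class ItemType(Enum):
    BAR = "Bar"      # illustrative members; the real enum lives in the app
    PLATE = "Plate"


class EnumBackedItem:
    """Stores the plain string (as the database column does) while
    exposing a type-safe enum to business logic."""

    def __init__(self, item_type: ItemType):
        self._item_type = item_type.value  # what would be persisted

    @property
    def item_type(self) -> ItemType:
        # Converting on access means an invalid stored string fails loudly
        return ItemType(self._item_type)

    @item_type.setter
    def item_type(self, value: ItemType) -> None:
        self._item_type = value.value
```

Business logic only ever sees `ItemType` members, while the persisted value stays a portable string.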
- `InventoryService`: abstract base class defining the business logic interface
- `MariaDBInventoryService`: production implementation with MariaDB backend
- `GoogleSheetsInventoryService`: legacy implementation (maintained for compatibility)
- Quick validation: `nox -s tests` (runs in ~0.3s)
- Full validation: `nox -s tests && nox -s e2e`
- Coverage check: `nox -s coverage` (review `htmlcov/index.html`)
- Write failing test for new feature
- Implement minimal code to make test pass
- Refactor while keeping tests green
- Run full test suite before committing
```python
# Unit test with fixtures
@pytest.mark.unit
def test_feature(self, service, sample_item):
    result = service.some_method(sample_item)
    assert result.success

# E2E test with browser automation
@pytest.mark.e2e
def test_workflow(page):
    page.goto("http://127.0.0.1:5000/inventory/add")
    page.fill("#ja_id", "JA000001")
    page.click("#submit-btn")
    expect(page.locator(".alert-success")).to_be_visible()
```

Docker Container Issues:
- Error: `Cannot connect to the Docker daemon`
- Solution: ensure Docker is running: `sudo systemctl start docker` (Linux) or start Docker Desktop
- Error: `Permission denied while trying to connect to Docker`
- Solution: add your user to the docker group: `sudo usermod -aG docker $USER` and restart your terminal
Testcontainer Startup Issues:

- Error: `MariaDB container failed to start`
- Solution: check the Docker logs (`docker logs <container_id>`) and ensure port 3306 is available
- Slow startup: the initial MariaDB image download may take time; subsequent runs are faster
Playwright Browser Issues (Arch Linux):

- Error: `sudo: a password is required`
- Solution: browsers are installed without the `--with-deps` flag to avoid the sudo requirement
Test Failures After Model Changes:

- Check that test data matches the new validation rules (e.g., JA ID format: `JA######`)
- Update fixtures if model constructors change
E2E Test Flakiness:

- The MariaDB testcontainer provides consistent database state
- Check Docker container health if tests consistently fail
- Use explicit waits: `page.wait_for_selector(selector)`
E2E Test Debugging:

- Failed tests automatically capture debug information to `test-debug-output/`
- Review captured screenshots to see the visual state at the point of failure
- Check console logs for JavaScript errors or API failures
- Examine HTML dumps for missing elements or incorrect page state
- Use the debug summary for guided troubleshooting steps
Import Errors:

- Ensure the virtual environment is activated
- Verify all dependencies are installed: `pip install -r requirements-test.txt`
- Check test output for specific error messages
- Run in verbose mode: `python -m pytest -v --tb=long`
- Use the debugger: add `import pdb; pdb.set_trace()` in test code
- Unit tests: optimized for speed with an SQLite in-memory database behind the MariaDB interface
- E2E tests: use the MariaDB testcontainer for production parity; initial startup takes ~5-10s, subsequent tests are fast
- Docker optimization: the testcontainer is reused across the test session for efficiency
- Test isolation: each test uses fresh database state with fast MariaDB operations
- E2E data persistence: direct database writes ensure test data is immediately available
All test suites run automatically on pull requests. Before pushing, local development should ensure:

- All unit tests pass: `nox -s tests`
- E2E tests pass: `nox -s e2e`
- Code coverage is maintained: `nox -s coverage`
The test suite is designed to be fast and reliable for rapid development iterations.