Pseudo-Production And Deployment¶
This is the recommended first external testing environment for EchoSpire.
It is intentionally not full production. The goal is to create a stable, repeatable place where a small tester cohort can use the current web-facing surfaces, hit the real API, generate telemetry, and provide feedback without forcing the project to solve full-scale infrastructure too early.
Recommendation¶
Use a single Linux VM with Docker Compose as the first pseudo-production target.
Host the following workloads on that machine:
- Caddy as the public reverse proxy and TLS terminator
- EchoSpire Docs as a static site container
- EchoSpire Admin as a server-rendered ASP.NET container
- EchoSpire API as an ASP.NET container
- PostgreSQL as the application database
- Redis as the cache, queue, and lightweight bus
- Azure Data Explorer Kusto emulator as the telemetry backend for this phase
Do not treat this stack as real production. The Kusto emulator is acceptable for closed external testing, but it must be replaced before a broader launch.
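For orientation, a minimal sketch of what that topology can look like in compose terms is shown below. Service names, image paths, ports, and environment variable names are illustrative assumptions; the authoritative definition is deploy/pseudoprod/docker-compose.yml.

```yaml
# Minimal sketch of the pseudo-production topology (illustrative only; the
# real file is deploy/pseudoprod/docker-compose.yml and differs in detail).
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
  docs:
    image: ghcr.io/OWNER/echospire-docs:${RELEASE_TAG:-latest}    # hypothetical image name
  admin:
    image: ghcr.io/OWNER/echospire-admin:${RELEASE_TAG:-latest}   # hypothetical image name
    depends_on:
      - api
  api:
    image: ghcr.io/OWNER/echospire-api:${RELEASE_TAG:-latest}     # hypothetical image name
    environment:
      ConnectionStrings__Default: ${API_DB_CONNECTION:?set in .env}  # assumed variable name
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?set in .env}
    volumes:
      - pg_data:/var/lib/postgresql/data
  redis:
    image: redis:7
  kusto:
    image: mcr.microsoft.com/azuredataexplorer/kustainer-linux:latest  # assumed emulator image
    environment:
      ACCEPT_EULA: "Y"
    volumes:
      - kusto_data:/kusto
volumes:
  caddy_data:
  pg_data:
  kusto_data:
```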
Hosted Versus Distributed UX¶
The hosted pseudo-production environment should expose these public entry points:
- docs.<domain> for design, reference, and tester support material
- admin.<domain> for the current web UX and operator-facing screens
- api.<domain> for the service boundary consumed by web and future clients
These UX surfaces should not be hosted as server workloads in this phase:
- ConsoleGame should be distributed as a downloadable artifact that points at the deployed API
- Unity should be distributed as a downloadable client build that points at the deployed API
- Simulation should remain an internal tool or operator-triggered workload until its operational shape is clearer
This keeps the first tester environment focused on the systems that must be online all the time, while leaving client packaging as a separate pipeline concern.
Deployment Flow¶
The recommended push-on-demand flow is:
- A maintainer manually runs the Deploy Pseudo-Production GitHub Actions workflow.
- The workflow restores, builds, and tests the solution.
- The workflow builds container images for docs, admin, and API.
- The workflow pushes those images to GitHub Container Registry.
- The workflow copies the compose bundle to the Linux host.
- The workflow updates the release tag on the host and runs docker compose pull plus docker compose up -d.
This gives you a controlled release button without needing Kubernetes, a separate image registry vendor, or a second orchestration layer.
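A condensed sketch of such a workflow is shown below. It is not the contents of .github/workflows/pseudoprod-deploy.yml; the image names, step layout, and the argument passed to deploy.sh are assumptions layered on the secrets documented later on this page.

```yaml
# Abbreviated sketch of a manual deploy workflow (illustrative only).
name: Deploy Pseudo-Production
on:
  workflow_dispatch:          # the manual "release button"
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Restore, build, and test
        run: |
          dotnet restore
          dotnet build --no-restore
          dotnet test --no-build
      - name: Log in to GHCR
        run: echo "${{ secrets.GHCR_PAT }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
      - name: Build and push the API image   # admin and docs follow the same pattern
        run: |
          docker build -f docker/api.Dockerfile -t ghcr.io/${{ github.repository_owner }}/echospire-api:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository_owner }}/echospire-api:${{ github.sha }}
      - name: Copy the compose bundle and roll the host forward
        run: |
          echo "${{ secrets.PSEUDOPROD_SSH_KEY }}" > ssh_key && chmod 600 ssh_key
          scp -i ssh_key -o StrictHostKeyChecking=accept-new -r deploy/pseudoprod/. \
            "${{ secrets.PSEUDOPROD_USER }}@${{ secrets.PSEUDOPROD_HOST }}:${{ secrets.PSEUDOPROD_APP_ROOT }}/"
          ssh -i ssh_key -o StrictHostKeyChecking=accept-new \
            "${{ secrets.PSEUDOPROD_USER }}@${{ secrets.PSEUDOPROD_HOST }}" \
            "cd ${{ secrets.PSEUDOPROD_APP_ROOT }} && ./deploy.sh ${{ github.sha }}"
```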
Repo Assets¶
The deployment pipeline is defined in the repository as follows:
- .github/workflows/azure-pseudoprod-provision.yml: Azure infrastructure provisioning and first-boot VM bootstrap workflow
- .github/workflows/pseudoprod-deploy.yml: manual deploy workflow
- docker/api.Dockerfile: API container image
- docker/admin.Dockerfile: Admin container image
- docker/docs.Dockerfile: Docs container image
- infra/azure/pseudoprod.bicep: Azure VM, network, NSG, and public IP infrastructure definition
- infra/azure/cloud-init.yaml: first-boot Docker and Compose installation for the Ubuntu VM
- deploy/pseudoprod/docker-compose.yml: pseudo-production topology
- deploy/pseudoprod/Caddyfile: host routing and TLS
- deploy/pseudoprod/.env.example: required runtime variables
- deploy/pseudoprod/bootstrap-host.sh: one-time host preparation helper
- deploy/pseudoprod/remote-bootstrap.sh: remote host layout bootstrap used by automation
- deploy/pseudoprod/deploy.sh: server-side pull and restart script
Azure Automation¶
If Azure is the chosen host, the repository now supports provisioning the pseudo-production VM through GitHub Actions.
The Provision Azure Pseudo-Production VM workflow does the following:
- Logs into Azure using a service principal.
- Creates or updates the resource group.
- Deploys a Linux VM, public IP, network, subnet, and NSG from Bicep.
- Uses cloud-init to install Docker Engine and the Docker Compose plugin.
- SSHs into the VM and uploads the deployment bundle.
- Creates the runtime .env file on the VM from GitHub environment variables and secrets.
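As a reference for the cloud-init step in the list above, a first-boot file along the following lines is typical. This is a hedged sketch, not the repository's infra/azure/cloud-init.yaml, and the admin user name is an assumption.

```yaml
#cloud-config
# Sketch of first-boot provisioning: install Docker Engine plus the Compose
# plugin on Ubuntu (illustrative; see infra/azure/cloud-init.yaml for the real file).
package_update: true
packages:
  - ca-certificates
  - curl
runcmd:
  # Docker's convenience script installs docker-ce and the compose plugin.
  - curl -fsSL https://get.docker.com | sh
  - systemctl enable --now docker
  # Let the deployment user run docker without sudo ("azureuser" is assumed).
  - usermod -aG docker azureuser
```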
The provisioning workflow requires:
- AZURE_CREDENTIALS
- PSEUDOPROD_VM_SSH_PUBLIC_KEY
- PSEUDOPROD_VM_SSH_PRIVATE_KEY
It also expects the pseudo-production runtime variables and secrets to be present in the GitHub environment so the VM can be initialized with the expected app configuration.
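One plausible shape for that initialization step is sketched below; the variable names, secret names, and target path are placeholders, and the actual provisioning workflow may assemble the file differently.

```yaml
# Illustrative workflow step: render the host-local .env from GitHub
# environment variables and secrets, then copy it to the freshly provisioned VM.
# VM_IP, ssh_key, the variable names, and the target path are placeholders.
- name: Create runtime .env on the VM
  run: |
    {
      echo "POSTGRES_PASSWORD=${{ secrets.POSTGRES_PASSWORD }}"   # hypothetical secret
      echo "JWT_SIGNING_KEY=${{ secrets.JWT_SIGNING_KEY }}"       # hypothetical secret
      echo "PUBLIC_DOMAIN=${{ vars.PUBLIC_DOMAIN }}"              # hypothetical variable
    } > runtime.env
    scp -i ssh_key -o StrictHostKeyChecking=accept-new runtime.env \
      "azureuser@${VM_IP}:/opt/echospire/.env"
```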
One-Time Host Setup¶
Before the first deployment, either provision a Linux VM manually or run the Azure provisioning workflow.
Recommended baseline:
- Ubuntu 24.04 LTS
- 4 vCPU
- 16 GB RAM minimum
- 100+ GB SSD
Then:
- Create DNS records for docs, admin, and api pointing to the VM.
- If provisioning manually, copy the compose bundle to the VM.
- If provisioning manually, run deploy/pseudoprod/bootstrap-host.sh.
- Fill in the real secrets inside the host-local .env file, or provide them through the GitHub environment used by the Azure provisioning workflow.
- Add the GitHub Actions secrets required by the workflows.
The host-local .env file should remain on the server and should not be stored in git.
For local Docker debugging, do not run the compose file without a populated .env file. The compose topology expects the variables listed in deploy/pseudoprod/.env.example, and when values are missing the stack is not evaluated against a valid runtime contract, so docker compose ps can report a state that does not match the intended topology.
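One common way a compose file enforces that contract is required-variable interpolation. Whether the EchoSpire compose file uses this exact pattern is an assumption, but the sketch below shows the failure mode it produces.

```yaml
# Sketch: required-variable interpolation. With POSTGRES_PASSWORD missing from
# .env, `docker compose config` and `docker compose up` fail fast with the
# message after :? instead of starting a misconfigured stack. Names are placeholders.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?set POSTGRES_PASSWORD in deploy/pseudoprod/.env}
```

Running docker compose config against a candidate .env is a quick local check that the runtime contract is satisfied before any containers start.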
Required GitHub Secrets¶
The Azure provisioning and deployment workflows expect these repository or environment secrets:
- AZURE_CREDENTIALS: Azure service principal JSON used by azure/login
- GHCR_PAT: token with package write access for image pushes and package read access used by the host deployment step
- PSEUDOPROD_HOST: public SSH hostname or IP for the Linux VM after provisioning, when using the deploy-only workflow directly
- PSEUDOPROD_USER: SSH user for deployment
- PSEUDOPROD_SSH_KEY: private key used by the workflow
- PSEUDOPROD_SSH_PORT: optional SSH port, default 22
- PSEUDOPROD_APP_ROOT: absolute path on the server, for example /opt/echospire
- PSEUDOPROD_VM_SSH_PUBLIC_KEY: SSH public key used when provisioning the Azure VM
- PSEUDOPROD_VM_SSH_PRIVATE_KEY: matching SSH private key used for first-boot automation and future SSH access
Application secrets such as JWT and AI provider keys belong in the server .env, not in the workflow definition.
Why This Is The Right First Step¶
This approach is the right one for the current project stage because it is:
- simple enough to operate without a dedicated platform team
- close enough to production behavior to expose integration issues
- cheap enough to leave running for a small tester group
- explicit enough that docs, API, and web UX all move together under one release action
It also keeps a clean migration path open later:
- Caddy can be replaced by a managed ingress or load balancer
- Compose services can be split across managed databases and real telemetry infrastructure
- GHCR-built images can later be deployed to Kubernetes, Azure Container Apps, or App Service without rewriting the app containers
Immediate Follow-On Work¶
After this pseudo-production path is in place, the next operational steps are:
- Add downloadable build pipelines for ConsoleGame and Unity clients.
- Add database backup and restore automation for the pseudo-production PostgreSQL instance.
- Replace the Kusto emulator with a real Azure Data Explorer target before a wider beta.
- Add monitoring and alert routing around API health, container restarts, and disk growth.
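For the monitoring item above, a lightweight starting point is to declare health checks and restart policies in the compose file and alert on their status from the host. The endpoint, internal port, and timings below are assumptions about the API container, not values from the repository.

```yaml
# Sketch: compose-level API health checking (illustrative; assumes the API
# image ships curl and serves a /health endpoint on port 8080 internally).
services:
  api:
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

A host-side cron job or agent can then read docker inspect health status, container restart counts, and disk usage, and forward alerts to whatever routing the team adopts.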
Kusto Emulator Notes¶
The pseudo-production compose stack now publishes the Kusto emulator on host port 8080 so local diagnostics and manual KQL verification can reach it outside the compose network.
The container uses a custom entrypoint (docker/kusto-entrypoint.sh) that runs rm -rf /kusto/tmp/* before launching the real Kusto process. This prevents the Database MD container path ... must be empty error that occurs when stale runtime metadata in the persistent kusto_data volume references paths inside the tmpfs mount that no longer exist after a container restart. The emulator should now start cleanly on docker compose up without manual intervention.
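In compose terms, that wiring looks roughly like the sketch below. The image tag and the way the entrypoint script reaches the container are assumptions; the real service definition, including its tmpfs mount, lives in deploy/pseudoprod/docker-compose.yml.

```yaml
# Rough sketch of the emulator service described above (illustrative only).
services:
  kusto:
    image: mcr.microsoft.com/azuredataexplorer/kustainer-linux:latest  # assumed emulator image
    environment:
      ACCEPT_EULA: "Y"
    entrypoint: ["/kusto-entrypoint.sh"]      # clears /kusto/tmp/* before starting Kusto
    volumes:
      - ./docker/kusto-entrypoint.sh:/kusto-entrypoint.sh:ro   # path mapping is an assumption
      - kusto_data:/kusto                                      # persistent emulator state
    ports:
      - "8080:8080"             # host port for local diagnostics and manual KQL
volumes:
  kusto_data:
```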
If the emulator still fails to start, the recovery path is:
- docker compose stop kusto && docker compose rm -f kusto
- if the problem persists, remove the volume: docker volume rm echospire_kusto_data
- docker compose up -d kusto
- verify the emulator answers on http://localhost:8080/EchoSpire before testing API ingestion
The API telemetry writer is designed to fall back to the configured NDJSON file when Kusto is unavailable, so a successful API startup does not prove Kusto ingestion is healthy. Validate both the API and the emulator explicitly during telemetry bring-up.