Pseudo-Production And Deployment

This is the recommended first external testing environment for EchoSpire.

It is intentionally not full production. The goal is to create a stable, repeatable place where a small tester cohort can use the current web-facing surfaces, hit the real API, generate telemetry, and provide feedback without forcing the project to solve full-scale infrastructure too early.

Recommendation

Use a single Linux VM with Docker Compose as the first pseudo-production target.

Host the following workloads on that machine:

  • Caddy as the public reverse proxy and TLS terminator
  • EchoSpire Docs as a static site container
  • EchoSpire Admin as a server-rendered ASP.NET container
  • EchoSpire API as an ASP.NET container
  • PostgreSQL as the application database
  • Redis as the cache, queue, and lightweight bus
  • Azure Data Explorer (Kusto) emulator as the telemetry backend for this phase

Do not treat this stack as real production. The Kusto emulator is acceptable for closed external testing, but it must be replaced before a broader launch.
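A minimal compose sketch of this topology follows. It is illustrative only: the service names, image coordinates, and the ORG placeholder are assumptions, and deploy/pseudoprod/docker-compose.yml remains the authoritative file.

```yaml
# Hypothetical sketch; deploy/pseudoprod/docker-compose.yml is authoritative.
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
  docs:
    image: ghcr.io/ORG/echospire-docs:${RELEASE_TAG}
  admin:
    image: ghcr.io/ORG/echospire-admin:${RELEASE_TAG}
  api:
    image: ghcr.io/ORG/echospire-api:${RELEASE_TAG}
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
  kusto:
    image: mcr.microsoft.com/azuredataexplorer/kustainer-linux:latest
    ports:
      - "8080:8080"   # published for local diagnostics, per the Kusto notes below
volumes:
  pgdata:
```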

Hosted Versus Distributed UX

The hosted pseudo-production environment should expose these public entry points:

  • docs.<domain> for design, reference, and tester support material
  • admin.<domain> for the current web UX and operator-facing screens
  • api.<domain> for the service boundary consumed by web and future clients
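The three entry points map naturally onto Caddy site blocks. A sketch, with example.com standing in for the real domain and the upstream service names and ports as assumptions (deploy/pseudoprod/Caddyfile is authoritative):

```
# Hypothetical sketch; deploy/pseudoprod/Caddyfile is authoritative.
# Caddy obtains and renews TLS certificates automatically for each site.
docs.example.com {
    reverse_proxy docs:80
}

admin.example.com {
    reverse_proxy admin:8080
}

api.example.com {
    reverse_proxy api:8080
}
```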

These UX surfaces should not be hosted as server workloads in this phase:

  • ConsoleGame should be distributed as a downloadable artifact that points at the deployed API
  • Unity should be distributed as a downloadable client build that points at the deployed API
  • Simulation should remain an internal tool or operator-triggered workload until its operational shape is clearer

This keeps the first tester environment focused on the systems that must be online all the time, while leaving client packaging as a separate pipeline concern.

Deployment Flow

The recommended push-on-demand flow is:

  1. A maintainer manually runs the Deploy Pseudo-Production GitHub Actions workflow.
  2. The workflow restores, builds, and tests the solution.
  3. The workflow builds container images for docs, admin, and API.
  4. The workflow pushes those images to GitHub Container Registry.
  5. The workflow copies the compose bundle to the Linux host.
  6. The workflow updates the release tag on the host and runs docker compose pull followed by docker compose up -d.

This gives you a controlled release button without needing Kubernetes, a separate image registry vendor, or a second orchestration layer.
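The six steps above can be sketched as a manually triggered workflow. Job layout, image coordinates, and the ORG placeholder are assumptions; .github/workflows/pseudoprod-deploy.yml is the authoritative definition.

```yaml
# Hypothetical sketch; .github/workflows/pseudoprod-deploy.yml is authoritative.
name: Deploy Pseudo-Production
on:
  workflow_dispatch:   # step 1: a maintainer triggers this manually
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Step 2: restore, build, and test the solution.
      - run: dotnet test
      # Steps 3-4: build the container images and push them to GHCR.
      - run: |
          echo "${{ secrets.GHCR_PAT }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
          docker build -f docker/api.Dockerfile -t ghcr.io/ORG/echospire-api:${{ github.sha }} .
          docker push ghcr.io/ORG/echospire-api:${{ github.sha }}
      # Steps 5-6: copy the compose bundle and roll the host forward.
      - run: |
          scp -r deploy/pseudoprod "$DEPLOY_USER@$DEPLOY_HOST:$APP_ROOT"
          ssh "$DEPLOY_USER@$DEPLOY_HOST" "$APP_ROOT/pseudoprod/deploy.sh ${{ github.sha }}"
        env:
          DEPLOY_USER: ${{ secrets.PSEUDOPROD_USER }}
          DEPLOY_HOST: ${{ secrets.PSEUDOPROD_HOST }}
          APP_ROOT: ${{ secrets.PSEUDOPROD_APP_ROOT }}
```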

Repo Assets

The deployment pipeline is defined in the repository as follows:

  • .github/workflows/azure-pseudoprod-provision.yml: Azure infrastructure provisioning and first-boot VM bootstrap workflow
  • .github/workflows/pseudoprod-deploy.yml: manual deploy workflow
  • docker/api.Dockerfile: API container image
  • docker/admin.Dockerfile: Admin container image
  • docker/docs.Dockerfile: Docs container image
  • infra/azure/pseudoprod.bicep: Azure VM, network, NSG, and public IP infrastructure definition
  • infra/azure/cloud-init.yaml: first-boot Docker and Compose installation for the Ubuntu VM
  • deploy/pseudoprod/docker-compose.yml: pseudo-production topology
  • deploy/pseudoprod/Caddyfile: host routing and TLS
  • deploy/pseudoprod/.env.example: required runtime variables
  • deploy/pseudoprod/bootstrap-host.sh: one-time host preparation helper
  • deploy/pseudoprod/remote-bootstrap.sh: remote host layout bootstrap used by automation
  • deploy/pseudoprod/deploy.sh: server-side pull and restart script
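The server-side pull-and-restart step can be sketched as follows. The tag-pinning mechanism and argument handling here are assumptions; deploy/pseudoprod/deploy.sh in the repository is authoritative.

```sh
#!/bin/sh
# Hypothetical sketch of deploy/pseudoprod/deploy.sh.
set -eu
TAG="${1:?usage: deploy.sh <release-tag>}"
cd "$(dirname "$0")"
# Pin the requested release in the host-local .env, then pull and restart.
sed -i "s/^RELEASE_TAG=.*/RELEASE_TAG=${TAG}/" .env
docker compose pull
docker compose up -d
```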

Azure Automation

If Azure is the chosen host, the repository now supports provisioning the pseudo-production VM through GitHub Actions.

The Provision Azure Pseudo-Production VM workflow does the following:

  1. Logs into Azure using a service principal.
  2. Creates or updates the resource group.
  3. Deploys a Linux VM, public IP, network, subnet, and NSG from Bicep.
  4. Uses cloud-init to install Docker Engine and the Docker Compose plugin.
  5. SSHs into the VM and uploads the deployment bundle.
  6. Creates the runtime .env file on the VM from GitHub environment variables and secrets.

The provisioning workflow requires:

  • AZURE_CREDENTIALS
  • PSEUDOPROD_VM_SSH_PUBLIC_KEY
  • PSEUDOPROD_VM_SSH_PRIVATE_KEY

It also expects the pseudo-production runtime variables and secrets to be present in the GitHub environment so the VM can be initialized with the expected app configuration.

One-Time Host Setup

Before the first deployment, either provision a Linux VM manually or run the Azure provisioning workflow.

Recommended baseline:

  • Ubuntu 24.04 LTS
  • 4 vCPU
  • 16 GB RAM minimum
  • 100+ GB SSD

Then:

  1. Create a DNS record for docs, admin, and api pointing to the VM.
  2. If provisioning manually, copy the compose bundle to the VM.
  3. If provisioning manually, run deploy/pseudoprod/bootstrap-host.sh.
  4. Fill in the real secrets inside the host-local .env file, or provide them through the GitHub environment used by the Azure provisioning workflow.
  5. Add the GitHub Actions secrets required by the workflows.

The host-local .env file should remain on the server and should not be stored in git.

For local Docker debugging, do not run the compose file without a populated .env file. The compose topology expects the variables listed in deploy/pseudoprod/.env.example; when values are missing, services come up against an invalid runtime contract, which makes docker compose ps output misleading.
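As an illustration only, the file covers values such as the following. The variable names here are assumptions; deploy/pseudoprod/.env.example is the authoritative contract.

```
# Release selection for the compose image tags
RELEASE_TAG=latest
# Database credentials
POSTGRES_PASSWORD=change-me
# Application secrets; real values live only on the host
JWT_SIGNING_KEY=change-me
AI_PROVIDER_API_KEY=change-me
```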

Required GitHub Secrets

The Azure provisioning and deployment workflows expect these repository or environment secrets:

  • AZURE_CREDENTIALS: Azure service principal JSON used by azure/login
  • GHCR_PAT: token with package write access for pushing images and package read access for the host deployment step
  • PSEUDOPROD_HOST: public SSH hostname or IP for the Linux VM after provisioning, when using the deploy-only workflow directly
  • PSEUDOPROD_USER: SSH user for deployment
  • PSEUDOPROD_SSH_KEY: private key used by the workflow
  • PSEUDOPROD_SSH_PORT: optional SSH port, default 22
  • PSEUDOPROD_APP_ROOT: absolute path on the server, for example /opt/echospire
  • PSEUDOPROD_VM_SSH_PUBLIC_KEY: SSH public key used when provisioning the Azure VM
  • PSEUDOPROD_VM_SSH_PRIVATE_KEY: matching SSH private key used for first-boot automation and future SSH access

Application secrets such as JWT and AI provider keys belong in the server .env, not in the workflow definition.

Why This Is The Right First Step

This approach is the right one for the current project stage because it is:

  • simple enough to operate without a dedicated platform team
  • close enough to production behavior to expose integration issues
  • cheap enough to leave running for a small tester group
  • explicit enough that docs, API, and web UX all move together under one release action

It also keeps a clean migration path open later:

  • Caddy can be replaced by a managed ingress or load balancer
  • Compose services can be split across managed databases and real telemetry infrastructure
  • GHCR-built images can later be deployed to Kubernetes, Azure Container Apps, or App Service without rewriting the app containers

Immediate Follow-On Work

After this pseudo-production path is in place, the next operational steps are:

  1. Add downloadable build pipelines for ConsoleGame and Unity clients.
  2. Add database backup and restore automation for the pseudo-production PostgreSQL instance.
  3. Replace the Kusto emulator with a real Azure Data Explorer target before a wider beta.
  4. Add monitoring and alert routing around API health, container restarts, and disk growth.

Kusto Emulator Notes

The pseudo-production compose stack now publishes the Kusto emulator on host port 8080 so local diagnostics and manual KQL verification can reach it outside the compose network.

The container uses a custom entrypoint (docker/kusto-entrypoint.sh) that runs rm -rf /kusto/tmp/* before launching the real Kusto process. This prevents the "Database MD container path ... must be empty" error that occurs when stale runtime metadata in the persistent kusto_data volume references paths inside the tmpfs mount that no longer exist after a container restart. The emulator should now start cleanly on docker compose up without manual intervention.
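A sketch of what that entrypoint does. The real script is docker/kusto-entrypoint.sh; the exact cleanup path and the hand-off via CMD are as described above, everything else here is an assumption.

```shell
#!/bin/sh
# Hypothetical sketch of docker/kusto-entrypoint.sh.
# Clear stale runtime metadata from the tmpfs mount path so a restarted
# container does not trip over references to paths that no longer exist.
rm -rf /kusto/tmp/* 2>/dev/null || true
# Hand off to the emulator's real process, supplied as the container CMD.
exec "$@"
```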

If the emulator still fails to start, the recovery path is:

  1. Stop and remove the container: docker compose stop kusto && docker compose rm -f kusto
  2. If the problem persists, remove the volume: docker volume rm echospire_kusto_data
  3. Recreate the service: docker compose up -d kusto
  4. Verify the emulator answers on http://localhost:8080/EchoSpire before testing API ingestion

The API telemetry writer is designed to fall back to the configured NDJSON file when Kusto is unavailable, so a successful API startup does not prove Kusto ingestion is healthy. Validate both the API and the emulator explicitly during telemetry bring-up.