Deployment Guide

Complete guide for deploying the Riptide Application Manager in customer environments. Real-world deploys involve identity-provider wiring, certificate provisioning, reverse-proxy configuration, host-side .env setup, and validation against the customer's network — there is no single-command shortcut, and this guide doesn't pretend otherwise. Riptide implementers walk customers through the full sequence; this document is the canonical reference for that work.

A note on localhost references

Throughout this guide, http://localhost:11401 and http://localhost:11402 refer to the AM container's ports as seen on the deployment host itself — these are operator-side commands run from the VPS shell to verify the container is up, run health checks, or inspect the local API. End users always reach AM through the reverse proxy at the customer's public hostname (e.g. https://app.customer.example.com), never via localhost. When you see curl http://localhost:11402/health below, picture it running on the VPS in an SSH session, not in a browser.

Docker Deployment

Prerequisites

  • Docker 20.10+
  • Docker Compose 2.0+
  • 2GB available RAM
  • 1GB available disk space

Configuration

The application uses a centralized appsettings.json at the solution root (not in individual projects) that configures both the API and Web services. This simplifies configuration management by providing a single source of truth.

Environment variables can override any setting in appsettings.json using the double-underscore notation (e.g., Api__Cors__AllowedOrigins__0).
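For instance, a compose override can pin a CORS origin without touching appsettings.json (the origin value here is illustrative):

```yaml
# docker-compose.override.yml — sketch of an environment-variable override.
services:
  application-manager:
    environment:
      # Double underscores stand in for the ':' separators in appsettings.json,
      # so this sets Api:Cors:AllowedOrigins:0.
      Api__Cors__AllowedOrigins__0: "https://app.customer.example.com"
```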

1. Environment Variables

Create .env from template:

cp .env.example .env

Required variables:

# Security - CHANGE THIS!
RIPTIDE_API_KEY=rtk_change-me-in-production

# Database paths (usually don't need to change)
IDENTITY_DB_PATH=/app/data/identity/identity.db
CONFIG_DB_PATH=/app/data/configuration/configuration.db

Optional variables:

# Trial settings
TRIAL_DEFAULT_DURATION_DAYS=30
TRIAL_MAX_DURATION_DAYS=90
TRIAL_CLEANUP_ENABLED=true
TRIAL_CLEANUP_DAYS_AFTER_EXPIRY=7

# Email Provider (Smtp or AwsSes)
EMAIL_PROVIDER=Smtp
EMAIL_ENABLED=true
EMAIL_FROM_ADDRESS=noreply@example.com
EMAIL_FROM_NAME=Riptide Application Manager

# SMTP Email Settings (when using Smtp provider)
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=your-username
SMTP_PASSWORD=your-password
SMTP_ENABLE_SSL=true

# AWS SES Email Settings (when using AwsSes provider)
# Recommended for VPS deployments with email restrictions
AWS_SES_REGION=us-east-1
AWS_SES_ACCESS_KEY=your-aws-access-key
AWS_SES_SECRET_KEY=your-aws-secret-key
AWS_SES_CONFIGURATION_SET=your-config-set-name

# Session settings
SESSION_IDLE_TIMEOUT_MINUTES=30
SESSION_ABSOLUTE_TIMEOUT_MINUTES=480

# Logging
LOG_LEVEL=Information
LOG_TO_FILE=true
LOG_TO_CONSOLE=true

The root appsettings.json also contains two sections added as part of the 1.3.0 security integration:

  • Riptide:Security — controls HTTP security headers, audit behavior, and compliance settings; consumed by UseRiptideSecurity() middleware.
  • SecurityAudit:ScheduledAudit — controls the background compliance audit service. Set Enabled: true to activate scheduled audits across all active applications.
An excerpt from the root appsettings.json:

{
  "SecurityAudit": {
    "ScheduledAudit": {
      "Enabled": false,
      "IntervalHours": 24,
      "Frameworks": ["SOC2", "HIPAA", "FedRAMP", "StateRAMP"]
    }
  }
}

2. Build and Run

# Build images
docker compose build

# Start services (detached)
docker compose up -d

# View logs
docker compose logs -f

# Stop services
docker compose down

# Stop and remove volumes (WARNING: deletes databases)
docker compose down -v

3. Verify Deployment

# Check container status
docker compose ps

# Check health endpoint
curl http://localhost:11402/health

# Expected response:
# {
#   "status": "healthy",
#   "timestamp": "2026-01-27T21:22:38.598Z",
#   "service": "riptide-application-manager-api",
#   "version": "1.0.0"
# }

# Check databases were created
ls -lh data/identity/
ls -lh data/configuration/

Docker Compose Services

The docker-compose.yml defines a single container that runs both services:

  • application-manager: Runs both API and Web services
    • Ports: 11401 (Web), 11402 (API)
    • Volumes: ./data, ./logs
    • Auto-restart: unless-stopped
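For reference, the service shape described above corresponds roughly to this compose fragment (illustrative — the repo's docker-compose.yml is canonical, and build/image details are omitted):

```yaml
services:
  application-manager:
    ports:
      - "11401:11401"   # Web UI
      - "11402:11402"   # API
    volumes:
      - ./data:/app/data   # databases (see IDENTITY_DB_PATH / CONFIG_DB_PATH)
      - ./logs:/app/logs   # application logs
    restart: unless-stopped
```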

Volume Management

Best Practice: Always store data volumes on the host filesystem, outside the Docker image. Container filesystems are ephemeral — any data written inside a container is lost when the container is removed or rebuilt. Mounting host directories (or named volumes) ensures your databases and logs survive container updates, restarts, and redeployments.

Data persists in host directories:

./data/identity/identity.db          # Identity database
./data/configuration/configuration.db # Configuration database
./logs/                               # Application logs

Backup databases (stop the container first, or use sqlite3's online .backup command as in the automated script under Production Deployment — cp against a live SQLite database can capture a torn copy):

# Create backup directory
mkdir -p backups

# Backup both databases
cp data/identity/identity.db backups/identity-$(date +%Y%m%d-%H%M%S).db
cp data/configuration/configuration.db backups/config-$(date +%Y%m%d-%H%M%S).db

Restore from backup:

# Stop services
docker compose down

# Restore databases
cp backups/identity-20260127-142000.db data/identity/identity.db
cp backups/config-20260127-142000.db data/configuration/configuration.db

# Start services
docker compose up -d

First-Time Orientation

This section covers what to expect immediately after the AM container comes up for the first time on a fresh .env — useful when you're sitting on the VPS validating the deploy.

Default admin credentials

On first startup, the entrypoint creates a default admin user (idempotent — re-runs are no-ops):

Username: admin
Password: Admin@2026!

Change this password immediately after first login. The credentials banner is logged on first start; subsequent restarts do not re-print it.

Confirm the API is up

# Run on the VPS — confirms the API container started, ran migrations,
# and bound its HTTP port. The Web port (11401) comes up a few seconds
# later; if you're using the configure-trials-public-host.zsh deploy
# script, it waits for both.
curl -sSf http://localhost:11402/health

A successful response means migrations completed and the schema is current — you can move on to logging in via the reverse proxy.
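When scripting this check (for example in a deploy wrapper), a small polling helper avoids racing the container's startup. A sketch — the wait_for function below is our own illustration, not something shipped in the repo:

```shell
#!/bin/sh
# wait_for TRIES CMD... — retry CMD once per second until it succeeds
# or TRIES attempts have been used; returns CMD's success/failure.
wait_for() {
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" >/dev/null 2>&1 && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# On the VPS, give the API up to a minute to come healthy:
#   wait_for 60 curl -sSf http://localhost:11402/health && echo "API is healthy"
```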

First login

  1. Open https://<your-customer-hostname>/auth/login in a browser. The hostname is whatever the customer's reverse proxy serves AM under (e.g. https://app.customer.example.com, or for Riptide-hosted environments https://boathouse.riptide.solutions for staging / https://dock.riptide.solutions for production).
  2. Sign in with the default admin credentials.
  3. You'll land on the Dashboard, which shows system metrics and activity.
  4. Use the navbar to explore Users, Applications, Roles, Configuration, etc.

First-startup troubleshooting

Admin user not created. Inspect the entrypoint logs for the credentials banner. If it didn't fire, re-run the seed manually:

docker compose exec application-manager \
    sqlite3 /app/data/identity/identity.db < /app/scripts/create-admin-user.sql

Cannot reach the Web UI through the reverse proxy. Walk the layers from inside out:

# 1. Container running?
docker compose ps

# 2. Web port responding on the host?
curl -sI http://localhost:11401/

# 3. Reverse proxy forwarding?
curl -sI -H "Host: <customer-hostname>" http://localhost/

If 1+2 pass and 3 fails, the issue is in the reverse proxy config (Caddy / nginx site block, AllowedHosts in appsettings.json, or the proxy's Host header forwarding). See Production Deployment below for the canonical reverse-proxy snippets.

Starting fresh. To wipe both databases and start over (destroys all data — only use for first-time validation, never on a host with real data):

docker compose down
sudo rm -rf data/identity/identity.db data/configuration/configuration.db
docker compose up -d

Production Deployment

Docker Production Configuration

1. Environment Hardening

# Use strong API keys (issue via Web UI, or set a bootstrap key)
RIPTIDE_API_KEY=rtk_$(openssl rand -hex 32)

# Disable trial auto-cleanup if manual process preferred
TRIAL_CLEANUP_ENABLED=false

# Set production log level
LOG_LEVEL=Warning
LOG_TO_CONSOLE=false
LOG_TO_FILE=true

# Enable email with AWS SES (recommended for VPS)
EMAIL_PROVIDER=AwsSes
EMAIL_ENABLED=true
AWS_SES_REGION=us-east-1
AWS_SES_ACCESS_KEY=your-access-key
AWS_SES_SECRET_KEY=your-secret-key

2. AWS SES Configuration

Why AWS SES for VPS deployments:

  • Many VPS providers restrict SMTP port 25 to prevent spam
  • AWS SES provides reliable email delivery with reputation management
  • Pay-as-you-go pricing (first 62,000 emails/month free when sending from EC2)
  • Built-in bounce and complaint handling

Setup Steps:

  1. Verify your sender domain in AWS SES:

    # In AWS Console: SES → Verified identities → Create identity
    # Add domain and configure DNS records (DKIM, SPF, DMARC)
    
  2. Create IAM user with SES permissions:

    # Required IAM policy:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ses:SendEmail",
            "ses:SendRawEmail"
          ],
          "Resource": "*"
        }
      ]
    }
    
  3. Generate access keys and add to .env:

    EMAIL_PROVIDER=AwsSes
    AWS_SES_REGION=us-east-1
    AWS_SES_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
    AWS_SES_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    AWS_SES_CONFIGURATION_SET=your-config-set  # Optional
    EMAIL_FROM_ADDRESS=noreply@yourdomain.com
    
  4. Move out of SES sandbox (for production):

    • By default, SES is in sandbox mode (can only send to verified emails)
    • Request production access via AWS Console: SES → Account dashboard → Request production access
    • Typically approved within 24 hours

SMTP Alternative:

If your VPS allows SMTP or you prefer traditional email:

EMAIL_PROVIDER=Smtp
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=your-username
SMTP_PASSWORD=your-password
SMTP_ENABLE_SSL=true

3. HTTPS/TLS

For production, place a reverse proxy (nginx, Caddy, Traefik) in front:

Example nginx configuration:

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/ssl/certs/app.example.com.crt;
    ssl_certificate_key /etc/ssl/private/app.example.com.key;

    # Web UI
    location / {
        proxy_pass http://localhost:11401;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # API
    location /api/ {
        proxy_pass http://localhost:11402/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Hostname-based admin obscuring (trials deployments only)

Riptide-hosted trial deployments split the public trial flow (e.g. trials.example.com) and the admin UI (e.g. dock.example.com) onto separate hostnames behind the same Web container. The reverse proxy filters admin paths off the public hostname so trial users never see the admin login on their first visit, and the application redirects / on any host listed in Web:PublicHosts to a configurable landing page (default /trial/register, override via Web:PublicHostRedirectTarget).

This split is opt-in and only relevant when the deployment runs the trials feature. Customer implementations that don't use trials should leave Web:PublicHosts empty (the default) — / then serves the standard landing/login flow and regular users reach the login normally. Implementers who want the public hostname to redirect somewhere other than /trial/register (e.g. a marketing page or /auth/login) set Web:PublicHostRedirectTarget accordingly.
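In appsettings.json terms, the two settings read roughly as follows — the hostname is illustrative, and the array/key shapes are our assumption from the colon-separated names above:

```json
{
  "Web": {
    "PublicHosts": ["trials.example.com"],
    "PublicHostRedirectTarget": "/trial/register"
  }
}
```

Per the double-underscore convention described under Configuration, the same values can also be supplied via environment variables as Web__PublicHosts__0 and Web__PublicHostRedirectTarget.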

For Caddyfile reference snippets plus the production / beta hostnames Riptide uses, see docs/internal/caddy-deployment.md.
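For orientation only, a hostname split in Caddy has roughly this shape — the hostnames and the admin-path matcher below are illustrative, and the internal doc above remains canonical:

```caddy
trials.example.com {
    # Public trial hostname: hide admin paths, forward the rest to Web.
    @admin path /auth/login* /users* /roles*
    respond @admin 404
    reverse_proxy localhost:11401
}

dock.example.com {
    # Admin hostname: expose the full UI.
    reverse_proxy localhost:11401
}
```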

4. Database Backups

Set up automated backups:

#!/bin/bash
# backup-databases.sh

BACKUP_DIR="/var/backups/riptide-app-manager"
DATE=$(date +%Y%m%d-%H%M%S)

mkdir -p "$BACKUP_DIR"

# Backup databases
docker exec riptide-application-manager sqlite3 /app/data/identity/identity.db ".backup '/app/data/identity/backup.db'"
docker cp riptide-application-manager:/app/data/identity/backup.db "$BACKUP_DIR/identity-$DATE.db"

docker exec riptide-application-manager sqlite3 /app/data/configuration/configuration.db ".backup '/app/data/configuration/backup.db'"
docker cp riptide-application-manager:/app/data/configuration/backup.db "$BACKUP_DIR/config-$DATE.db"

# Clean up old backups (keep last 30 days)
find "$BACKUP_DIR" -name "*.db" -mtime +30 -delete

echo "Backup completed: $DATE"

Add to crontab:

# Daily backup at 2 AM
0 2 * * * /usr/local/bin/backup-databases.sh >> /var/log/riptide-backup.log 2>&1
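Before trusting a backup file, confirm it is a readable SQLite database. A self-contained sketch — it builds a scratch database so it can run anywhere; in practice, point the check at a file from $BACKUP_DIR:

```shell
#!/bin/sh
# Create a scratch database standing in for a backup file, then run the
# same check you would run against a real backup.
BACKUP=$(mktemp /tmp/backup-check-XXXXXX.db)
sqlite3 "$BACKUP" "CREATE TABLE t(x INTEGER); INSERT INTO t VALUES (1);"

# integrity_check prints "ok" for a healthy database file.
sqlite3 "$BACKUP" "PRAGMA integrity_check;"

rm -f "$BACKUP"
```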

5. Monitoring

Set up health check monitoring:

#!/bin/bash
# healthcheck.sh

HEALTH_URL="http://localhost:11402/health"
ALERT_EMAIL="admin@example.com"

response=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTH_URL")

if [ "$response" != "200" ]; then
    echo "Health check failed! HTTP $response" | \
        mail -s "Riptide App Manager Health Check Failed" "$ALERT_EMAIL"
    exit 1
fi

exit 0

Add to crontab:

# Check every 5 minutes
*/5 * * * * /usr/local/bin/healthcheck.sh

Resource Requirements

Minimum (Development/Small Teams)

  • CPU: 2 cores
  • RAM: 2GB
  • Disk: 10GB

Recommended (Production)

  • CPU: 4 cores
  • RAM: 4GB
  • Disk: 50GB SSD

High Load (Enterprise)

  • CPU: 8+ cores
  • RAM: 8GB+
  • Disk: 100GB+ SSD
  • Consider database optimization (see ADMINISTRATION.md)

Database Management

Migrations

Migrations run automatically on startup. To run manually:

# Identity database
cd src/Riptide.ApplicationManager.Infrastructure
dotnet ef database update --context IdentityDbContext

# Configuration database
dotnet ef database update --context ConfigurationDbContext

EF Core manages all schema changes. The migration files live in src/Riptide.ApplicationManager.Infrastructure/Data/{Identity,Configuration}/Migrations/ and are applied automatically by the API process at startup (the Web process no longer runs migrations — it waits for the API's /health to respond before starting, so migrations are guaranteed complete by the time Web boots). To list applied migrations on a running container:

docker compose exec application-manager \
    sqlite3 /app/data/identity/identity.db "SELECT MigrationId FROM __EFMigrationsHistory ORDER BY MigrationId DESC LIMIT 10;"

To list pending migrations (those in code but not yet in __EFMigrationsHistory), run dotnet ef migrations list --context IdentityDbContext from the Infrastructure project — entries marked (Pending) have not been applied — or compare the Migrations directory in the source tree against the history table.

Creating New Migrations

# Identity migration
dotnet ef migrations add MigrationName --context IdentityDbContext --output-dir Data/Identity/Migrations

# Configuration migration
dotnet ef migrations add MigrationName --context ConfigurationDbContext --output-dir Data/Configuration/Migrations

Database Inspection

# Open identity database
sqlite3 data/identity/identity.db

# List tables
.tables

# Describe table schema
.schema TrialUsers

# Query data
SELECT * FROM TrialUsers LIMIT 10;

# Exit
.quit

Database Vacuum (Optimize)

# Optimize identity database
sqlite3 data/identity/identity.db "VACUUM;"

# Optimize configuration database
sqlite3 data/configuration/configuration.db "VACUUM;"

Troubleshooting

Container Won't Start

# Check logs
docker compose logs

# Check if ports are already in use
lsof -i :11401
lsof -i :11402

# Remove existing container and try again
docker compose down
docker compose up -d

Database Connection Issues

# Check database files exist
ls -l data/identity/identity.db
ls -l data/configuration/configuration.db

# Check permissions
chmod 664 data/identity/identity.db
chmod 664 data/configuration/configuration.db

# Check database integrity
sqlite3 data/identity/identity.db "PRAGMA integrity_check;"
sqlite3 data/configuration/configuration.db "PRAGMA integrity_check;"

Migration Errors

# Reset database (WARNING: deletes all data)
rm data/identity/identity.db
rm data/configuration/configuration.db

# Restart services to recreate
docker compose restart

Performance Issues

# Check container resources
docker stats riptide-application-manager

# Check database sizes
du -h data/identity/identity.db
du -h data/configuration/configuration.db

# Vacuum databases to reclaim space
sqlite3 data/identity/identity.db "VACUUM;"
sqlite3 data/configuration/configuration.db "VACUUM;"

Port Conflicts

If ports 11401 or 11402 are in use, edit docker-compose.yml:

ports:
  - "12401:11401"  # Change host port
  - "12402:11402"  # Change host port

View Detailed Logs

# All logs
docker compose logs

# Follow logs
docker compose logs -f

# Last 100 lines
docker compose logs --tail=100

# Specific service logs
docker logs riptide-application-manager

Upgrading

Minor Updates

# Pull latest code
git pull

# Rebuild and restart
docker compose down
docker compose build
docker compose up -d

Major Updates (with breaking changes)

# 1. Backup databases
mkdir -p backups
cp data/identity/identity.db backups/
cp data/configuration/configuration.db backups/

# 2. Pull updates
git pull

# 3. Check CHANGELOG.md for migration notes

# 4. Rebuild
docker compose build

# 5. Run new migrations
docker compose up -d

# 6. Verify health
curl http://localhost:11402/health

Need help? Open an issue on GitHub or contact support.