Deploying n8n on a VPS
The exact way I deploy n8n in production: Docker Compose, Caddy for TLS, an external Postgres, proper backups and a sensible upgrade path. It costs £6 a month and runs hundreds of workflows for me.
Why self-host n8n at all
n8n is the automation tool I reach for when Zapier gets silly and Make gets clever for the wrong reasons. It is open source, it runs JavaScript, it speaks HTTP, and you can host it yourself for the price of a coffee. The cloud version is fine. The self-hosted version is better, cheaper, and actually yours.
The argument for self-hosting comes down to three things: cost (a £400/month Zapier plan becomes a £6/month VPS), data sovereignty (your customer data does not leave your infrastructure), and extensibility (you can install any npm package and run real code in a Function node). The argument against is that you now run a service. This playbook is about making "running a service" boring.
Self-hosting is a tax. Pay it once, properly, and never again.
Choosing the VPS
I use Hetzner. Their CX22 (£4.50/month, 2 vCPU, 4 GB RAM, 40 GB SSD) handles dozens of workflows comfortably. DigitalOcean and Vultr are fine alternatives if you have credit. Avoid AWS Lightsail unless you already live in AWS — the egress pricing will catch you out.
What you want: 4 GB RAM minimum, an SSD, a Linux distro you trust (Ubuntu 24.04 LTS or Debian 12), and a region close to the APIs you talk to. RAM is the bottleneck, not CPU; n8n's worker model is happy on two cores.
First boot and hardening
Do not run n8n as root. Do not expose port 22 to the world with password auth. Do not skip a firewall. The first ten minutes after a fresh VPS spin-up are the cheapest time to get this right.
```bash
# As root, on the fresh VPS
adduser sarma
usermod -aG sudo sarma
rsync --archive --chown=sarma:sarma ~/.ssh /home/sarma

# Lock down SSH
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl reload ssh

# Firewall
ufw default deny incoming
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# Unattended security updates
apt install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```
Install Docker via the official convenience script. It is fine. People bicker about this; do not let them slow you down.
```bash
curl -fsSL https://get.docker.com | sh
usermod -aG docker sarma
# log out and back in so the group takes effect
```
The Docker stack
Three containers, one Compose file, one shared network. n8n itself, Postgres for the database, and Caddy as the reverse proxy. I deliberately do not use n8n's bundled SQLite — it works, until you upgrade and lose every workflow execution log. Use Postgres from day one.
```yaml
# /opt/n8n/docker-compose.yml
services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: n8n
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'n8n']
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_HOST: ${N8N_HOST}
      N8N_PROTOCOL: https
      WEBHOOK_URL: https://${N8N_HOST}/
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      N8N_RUNNERS_ENABLED: 'true'
      EXECUTIONS_DATA_PRUNE: 'true'
      EXECUTIONS_DATA_MAX_AGE: '168'
      GENERIC_TIMEZONE: Europe/London
    volumes:
      - ./n8n-data:/home/node/.n8n

  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./caddy-data:/data
      - ./caddy-config:/config
```
The N8N_ENCRYPTION_KEY is non-negotiable. n8n encrypts every credential with it. Lose the key and every credential in your instance becomes unreadable. Generate it once, write it down somewhere serious, and never let it change.
```bash
# Generate strong secrets and store them in .env
cat > /opt/n8n/.env <<EOF
POSTGRES_PASSWORD=$(openssl rand -hex 24)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
N8N_HOST=n8n.example.com
EOF
chmod 600 /opt/n8n/.env
```
Caddy and free TLS
Caddy gets you Let's Encrypt certificates without thinking. It also handles HTTP/2 and HTTP/3 by default and renews everything automatically. The configuration is one file and four lines.
```caddy
# /opt/n8n/Caddyfile
n8n.example.com {
	reverse_proxy n8n:5678
	encode zstd gzip
}
```
Point your DNS A record at the VPS, run docker compose up -d, and within sixty seconds you have a TLS-protected n8n. If certificates do not issue, it is almost always DNS not propagating — wait a few minutes and check docker logs caddy.
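A quick smoke test from your laptop confirms both DNS and the certificate in one go. A sketch, assuming `n8n.example.com` stands in for your real hostname:

```bash
# Does DNS resolve to the VPS yet?
dig +short n8n.example.com

# Did Caddy obtain a valid certificate, and is n8n answering behind it?
# ssl_verify_result of 0 means the chain validated; the status code
# should be 200 or a redirect to the n8n login page.
curl -sS -o /dev/null -w '%{http_code} %{ssl_verify_result}\n' https://n8n.example.com/
```

If `dig` returns nothing or the old IP, you are still waiting on propagation, not fighting a Caddy bug.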
Postgres, properly
The Postgres container above is fine for a single-instance n8n. Two improvements are worth making once you depend on this thing:
- Tune memory. The Alpine image ships with conservative defaults. On a 4 GB box, set `shared_buffers` to 1 GB and `work_mem` to 16 MB by mounting a custom `postgresql.conf`.
- Pin the major version. Use `postgres:16-alpine`, not `postgres:alpine`. Major version upgrades in Postgres require a `pg_upgrade`; a surprise one will eat your weekend.
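One lightweight way to apply those two settings is to pass them as server flags in the Compose file instead of mounting a whole `postgresql.conf` — a sketch, with the values from the bullet above:

```yaml
# Fragment for the postgres service in docker-compose.yml.
# Postgres accepts any postgresql.conf setting via -c on the command line.
services:
  postgres:
    image: postgres:16-alpine
    command: ['postgres', '-c', 'shared_buffers=1GB', '-c', 'work_mem=16MB']
```

A mounted `postgresql.conf` does the same job and scales better once you have more than a handful of settings; the `-c` form just keeps everything in one file while the list is short.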
Backups you can actually restore
A backup you have not restored from is a hope, not a backup. The combination I use: nightly pg_dump to a remote object store, plus a weekly tarball of the n8n data directory (which contains the encryption-at-rest keystore and any binary data nodes have written).
bash#!/usr/bin/env bash # /opt/n8n/backup.sh set -euo pipefail TS=$(date -u +%Y%m%dT%H%M%SZ) DEST=s3://my-backups/n8n cd /opt/n8n docker compose exec -T postgres pg_dump -U n8n n8n | gzip > /tmp/n8n-$TS.sql.gz tar czf /tmp/n8n-data-$TS.tgz n8n-data aws s3 cp /tmp/n8n-$TS.sql.gz $DEST/db/ aws s3 cp /tmp/n8n-data-$TS.tgz $DEST/data/ rm /tmp/n8n-$TS.sql.gz /tmp/n8n-data-$TS.tgz
Cron it nightly, then once a quarter, do the actual restore drill: spin up a throwaway VPS, restore the dump, point a copy of n8n at it, log in, and confirm a credential still decrypts. If it does not, the encryption key is wrong and you have just learned that for free instead of in an emergency.
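The drill itself can be sketched as follows on the throwaway box, assuming the same Compose stack and, crucially, the same `.env` (the dump filename below is illustrative; use whatever the backup script actually produced):

```bash
# On the throwaway VPS, with /opt/n8n containing the compose file and .env
cd /opt/n8n

# Fetch a recent dump (filename illustrative)
aws s3 cp s3://my-backups/n8n/db/n8n-20250101T020000Z.sql.gz .

# Start only Postgres, load the dump, then start n8n against it
docker compose up -d postgres
gunzip -c n8n-20250101T020000Z.sql.gz | docker compose exec -T postgres psql -U n8n n8n
docker compose up -d n8n

# Final check: log in and open a stored credential in the UI.
# If it decrypts, the N8N_ENCRYPTION_KEY in .env matches the backup.
```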
Restore drills are not optional. The first time you run one will reveal something broken.
Updates without panic
n8n ships frequently. Pin the version in your Compose file, not latest. To upgrade: bump the tag, docker compose pull, then docker compose up -d. Watch the logs once. If anything looks off, docker compose down and re-pin to the previous tag. Because Postgres is external, rolling back is safe.
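Concretely, once the image tag is pinned, an upgrade is a three-command affair. A sketch — the version numbers are illustrative, not recommendations:

```bash
cd /opt/n8n

# Bump the pinned tag in the Compose file (versions illustrative)
sed -i 's|n8nio/n8n:1.63.0|n8nio/n8n:1.64.0|' docker-compose.yml

docker compose pull n8n
docker compose up -d n8n
docker compose logs -f n8n   # watch the DB migration run; Ctrl-C when healthy

# Rollback: re-pin the previous tag and `docker compose up -d n8n` again
```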
Read the n8n changelog before any upgrade that crosses a minor version. They occasionally change credential storage formats or node defaults; finding that out at 2am is not the move.
Pitfalls
Losing N8N_ENCRYPTION_KEY. Without it, every credential becomes unreadable and unrecoverable. Store it in a password manager, write it on paper, do whatever you like — but keep it.
If you put n8n behind Cloudflare with strict SSL and the wrong host header, webhooks from external services will silently 502. Set WEBHOOK_URL explicitly and test with a curl from outside.
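The outside test is worth making explicit. A sketch — the webhook path is illustrative and assumes you have created a test webhook in the n8n UI first:

```bash
# From a machine OUTSIDE your network (not the VPS itself),
# hit a test webhook and look at the status line only.
curl -si https://n8n.example.com/webhook-test/ping | head -n 1

# 200 means the full chain works end to end.
# 502 usually means a host-header or WEBHOOK_URL mismatch at the proxy;
# 404 just means no workflow is currently listening on that path.
```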
Without EXECUTIONS_DATA_PRUNE, the executions table grows without bound. On a small VPS this kills you in weeks. Prune aggressively unless you have a real reason to keep history.
A bad workflow that loads a huge JSON into memory can OOM the host and take Postgres down with it. Set a mem_limit in the Compose file.
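That cap is one line in the Compose file. A sketch — the 3g figure is an assumption sized for a 4 GB box, leaving headroom for Postgres and the OS; tune it for yours:

```yaml
# Fragment for the n8n service in docker-compose.yml.
# Docker kills the container instead of letting it OOM the host;
# restart: unless-stopped then brings it straight back.
services:
  n8n:
    mem_limit: 3g
```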
Wrap-up
That is the whole stack. £6/month, an hour to set up, and the result is something you control end-to-end. If a workflow breaks, you can docker logs it. If a vendor changes their pricing, you do not care. If you decide tomorrow to move it to a different host, it is one scp and one docker compose up.
Self-hosting is a tax, as I said earlier. But it is a flat tax, and the alternative is an income tax that grows with your business.
Want this done for you?
If you would rather skip the yak shave and have someone who has done this fifty times set it up properly, that is what I do for a living.
Start a project