
Odoo Load Balancing: Multi-Server Setup

DeployMonkey Team · March 11, 2026 · 8 min read

When You Need Load Balancing

A single well-tuned Odoo server handles most workloads up to roughly 200 concurrent users. Load balancing becomes necessary when you need more capacity than a single server can provide, or zero-downtime deployments, where you take one node out of rotation for updates while the other keeps serving traffic.

Architecture Overview

                Internet
                   |
   nginx (load balancer + SSL termination)
        |                    |
   Odoo Node 1          Odoo Node 2
         \                  /
          PostgreSQL (shared)
                   |
      Shared Filestore (NFS or S3)

The key constraint: all Odoo nodes must share the same PostgreSQL database and the same filestore. Everything else (the Odoo application, workers, config) can be replicated across nodes.

Step 1 — Shared PostgreSQL

Run a single PostgreSQL instance (or a replicated cluster) that all Odoo nodes connect to. Point every Odoo node's odoo.conf at the same DB host:

# On both Odoo nodes:
db_host = 10.0.0.5    # PostgreSQL server private IP
db_port = 5432
db_user = odoo
db_password = shared_password

PostgreSQL must have listen_addresses = '*' (or the specific private IPs) and pg_hba.conf must allow connections from both node IPs.
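Concretely, that might look like the following on the database server (the IPs match the example addresses used throughout this guide; the auth method is an assumption and should match your cluster's setup):

```
# postgresql.conf on 10.0.0.5
listen_addresses = '*'     # or the specific private IP, e.g. '10.0.0.5'
max_connections = 200      # size this for the workers across all Odoo nodes

# pg_hba.conf: allow both Odoo nodes over the private network
host    all    odoo    10.0.0.10/32    scram-sha-256
host    all    odoo    10.0.0.11/32    scram-sha-256
```

Remember that every Odoo worker on every node holds its own PostgreSQL connections, so max_connections must cover the sum across the cluster.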

Step 2 — Shared Filestore

Odoo stores uploaded files (attachments, images) in a filestore directory. All nodes must access the same files. Options:

  • NFS: Mount the filestore directory over NFS on all nodes. Simple but adds latency.
  • S3-compatible storage: Use the ir_attachment_s3 module to store attachments in S3. Best for scalability.
  • GlusterFS / Ceph: Distributed filesystem — more complex but highly available.

# In odoo.conf on all nodes (must be the same path, NFS-mounted):
data_dir = /mnt/shared/odoo-filestore
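For the NFS option, a minimal setup could look like this (the NFS server address 10.0.0.6 and export path are assumptions for illustration):

```
# On the NFS server (assumed 10.0.0.6), /etc/exports:
/srv/odoo-filestore  10.0.0.10(rw,sync,no_subtree_check)  10.0.0.11(rw,sync,no_subtree_check)

# On each Odoo node, /etc/fstab (mount point matches data_dir above):
10.0.0.6:/srv/odoo-filestore  /mnt/shared/odoo-filestore  nfs  defaults,_netdev  0  0
```

The `_netdev` option delays mounting until the network is up, so a node that reboots does not start Odoo against an empty directory.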

Step 3 — nginx Upstream with Sticky Sessions

Odoo sessions are stored in the database, so requests from the same user can technically go to any node. However, in-memory caches and some module state are node-local, so sticky sessions reduce cache misses and avoid edge cases:

upstream odoo_cluster {
    ip_hash;    # sticky sessions by client IP
    server 10.0.0.10:8069;
    server 10.0.0.11:8069;
    keepalive 32;
}

upstream odoo_longpolling {
    ip_hash;
    server 10.0.0.10:8072;
    server 10.0.0.11:8072;
}

server {
    listen 443 ssl;
    server_name odoo.example.com;
    # ssl_certificate and ssl_certificate_key directives go here

    location /longpolling/ {
        proxy_pass http://odoo_longpolling;
        proxy_read_timeout 600s;
    }

    location /websocket {
        proxy_pass http://odoo_longpolling;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://odoo_cluster;
        proxy_read_timeout 300s;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

ip_hash hashes the client IP so a given client is consistently routed to the same upstream. For users behind shared IPs (corporate NAT), consider hashing Odoo's session cookie instead (the cookie is named session_id), or move sessions to a shared Redis store.
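A cookie-based variant of the upstream, hashing Odoo's session_id cookie rather than the client IP, might look like this:

```nginx
upstream odoo_cluster {
    # Hash on Odoo's session cookie; `consistent` enables ketama
    # consistent hashing, which minimizes session remapping when a
    # node is added or removed.
    hash $cookie_session_id consistent;
    server 10.0.0.10:8069;
    server 10.0.0.11:8069;
}
```

Note that requests without a session cookie yet (a user's first hit) all hash the empty string to the same node; that is harmless, since Odoo sets the cookie on the first response.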

Step 4 — Shared Session Store (Optional)

For true stateless load balancing, move Odoo sessions to Redis. Community Odoo has no built-in Redis session backend, so this requires a third-party module (for example, session_redis from camptocamp's odoo-cloud-platform repository):

# In odoo.conf (the exact keys depend on the module you install;
# these lines are illustrative):
session_storage = redis
redis_url = redis://10.0.0.20:6379/0

With Redis sessions, any node can serve any user's requests without sticky sessions.

Step 5 — Health Checks

Open-source nginx supports only passive (failure-based) health checks: after max_fails failed requests within fail_timeout, a node is temporarily removed from rotation. Active health checks require nginx Plus or a third-party module.

upstream odoo_cluster {
    ip_hash;
    server 10.0.0.10:8069 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:8069 max_fails=3 fail_timeout=30s;
}

Or use HAProxy instead of nginx for more sophisticated health checking and statistics.
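As a sketch of the HAProxy alternative, an active HTTP health check against Odoo's /web/health endpoint could look like this (the backend and server names are illustrative):

```
backend odoo_cluster
    balance source                   # IP-based stickiness, like nginx ip_hash
    option httpchk GET /web/health   # probe Odoo's health endpoint
    server node1 10.0.0.10:8069 check inter 5s fall 3 rise 2
    server node2 10.0.0.11:8069 check inter 5s fall 3 rise 2
```

Here `fall 3` removes a node after three failed probes and `rise 2` re-adds it after two successes, so a restarting node rejoins the pool automatically.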

Zero-Downtime Deployments

# Take node 2 out of rotation: mark its server line `down`
# in the nginx upstream block and reload nginx

# Update Odoo on node 2:
docker compose pull odoo
docker compose up -d odoo

# Wait for node 2 to be healthy:
while ! curl -sf http://10.0.0.11:8069/web/health; do sleep 5; done

# Bring node 2 back, take node 1 out, repeat

How DeployMonkey Approaches This

DeployMonkey's Enterprise plan includes multi-node deployment with shared PostgreSQL and S3-backed filestore. Load balancing and health checks are configured automatically. For teams that need high availability without managing the infrastructure, it is the most cost-effective path. See Odoo High Availability Setup for the full HA guide.

Start free at deploymonkey.app.

Frequently Asked Questions

Can I run the database on the same server as Odoo in a load-balanced setup?

Only if you have a single database node that both Odoo nodes connect to remotely. Running separate databases per node will cause data divergence — all nodes must share exactly one database.

What happens if the NFS filestore goes down?

Odoo can still serve pages but file uploads and attachment downloads will fail. NFS is a single point of failure unless you use a replicated distributed filesystem or S3.

Is ip_hash reliable for sticky sessions?

Reasonable but not perfect — users behind corporate NAT share an IP and all hit the same node. For better distribution, use cookie-based stickiness or move to a shared session store.

Do I need load balancing for Odoo.sh?

No — Odoo.sh handles scaling internally. This guide applies to self-hosted deployments.