What Causes 502/504 Errors in Odoo?
502 Bad Gateway means nginx cannot reach the Odoo backend. 504 Gateway Timeout means nginx reached Odoo but the request took too long. Both indicate infrastructure issues, not application bugs. Here are the causes in order of frequency.
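That split can be sketched as a tiny triage helper. `explain_status` is hypothetical (it is not part of nginx, curl, or Odoo), and the commented `curl` line assumes you substitute your own domain:

```bash
# Sketch: the article's 502/504 distinction as a helper function.
# explain_status is illustrative, not a real nginx or Odoo tool.
explain_status() {
  case "$1" in
    502) echo "nginx cannot reach the Odoo backend" ;;
    504) echo "nginx reached Odoo but the request took too long" ;;
    *)   echo "not a gateway error" ;;
  esac
}

# Fetch the live status code first (example.com is a placeholder):
#   code=$(curl -s -o /dev/null -w "%{http_code}" https://example.com/web/login)
explain_status 502
```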
Cause 1: Odoo Service Not Running (502)
Symptom: 502 on every page, not just slow pages.
Diagnosis:
```bash
# Check whether the Odoo process is running
ps aux | grep odoo

# Check the service status
systemctl status odoo

# Check the Odoo log for a crash
tail -100 /var/log/odoo/odoo.log
```

Common causes:
- Odoo crashed due to out-of-memory (OOM kill)
- Service failed to start after server reboot
- Python dependency missing after module update
- Database connection refused (PostgreSQL down)
Fix:
```bash
# Restart Odoo
systemctl restart odoo
```

If the OOM killer terminated Odoo, raise the memory limits in /etc/odoo/odoo.conf:

```ini
limit_memory_hard = 2684354560   # 2.5 GB
limit_memory_soft = 2147483648   # 2 GB
```

Cause 2: Nginx Proxy Timeout Too Low (504)
Symptom: 504 on reports, large exports, or heavy pages. Normal pages work fine.
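One way to confirm the proxy timeout is the culprit is to time the failing request and compare it with nginx's configured timeout. `matches_proxy_timeout` is a hypothetical helper, and `example.com` stands in for your own domain:

```bash
# If a 504 arrives at almost exactly the configured timeout, nginx cut
# the request off; if it arrives sooner, something else failed first.
# matches_proxy_timeout is illustrative; arguments are whole seconds.
matches_proxy_timeout() {
  observed=$1; configured=$2
  if [ "$observed" -ge "$configured" ] && [ "$observed" -le $(( configured + 2 )) ]; then
    echo "yes: nginx cut the request off"
  else
    echo "no: the delay does not match the proxy timeout"
  fi
}

# Time the failing request first (example.com is a placeholder):
#   curl -s -o /dev/null -w "%{http_code} after %{time_total}s\n" https://example.com/web
matches_proxy_timeout 60 60
matches_proxy_timeout 30 60
```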
Fix:
In the nginx site config:

```nginx
location / {
    proxy_pass http://127.0.0.1:8069;
    proxy_read_timeout 720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout 720s;
}
```

The default nginx proxy timeout is 60 seconds; Odoo reports and large operations can take longer.
Cause 3: All Workers Busy (502/504)
Symptom: Intermittent 502/504, worse during peak hours.
Diagnosis:
```bash
# Check the worker count in odoo.conf
grep workers /etc/odoo/odoo.conf

# Check active connections on the Odoo port
ss -tlnp | grep 8069
```

Fix: Increase the worker count based on CPU cores:
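The sizing rule used in the config below can be computed directly. `recommended_workers` is an illustrative helper, not an Odoo command:

```bash
# The (CPU cores × 2) + 1 rule of thumb from the text.
# recommended_workers is illustrative, not part of Odoo.
recommended_workers() {
  echo $(( $1 * 2 + 1 ))
}

recommended_workers 2            # 5 workers on a 2-core machine
recommended_workers "$(nproc)"   # sized for this host
```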
```ini
# odoo.conf
workers = 5            # (CPU cores × 2) + 1
limit_time_real = 120  # kill requests running longer than 120 s
limit_time_cpu = 60    # CPU-time limit per request
```

Cause 4: PostgreSQL Connection Limit (502)
Symptom: 502 with the log message `FATAL: too many connections`
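This usually comes down to arithmetic: each Odoo worker keeps its own pool of up to `db_maxconn` connections (default 64 in odoo.conf), so worker counts multiply fast. A worst-case sizing sketch, with `peak_connections` as an illustrative helper:

```bash
# Theoretical peak PostgreSQL connections Odoo can open:
# workers × db_maxconn. A worst-case bound, not an exact prediction.
peak_connections() {
  echo $(( $1 * $2 ))
}

peak_connections 5 64   # 320: already above PostgreSQL's default max_connections of 100
```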
Fix:
```ini
# postgresql.conf
max_connections = 200  # default is 100; increase it when running more workers
```

Alternatively, add connection pooling with PgBouncer.

Cause 5: Long-Running Cron Job (504)
Symptom: 504 at specific times (when cron runs), resolves after cron completes.
Diagnosis:
```bash
# Look for long-running crons in the Odoo log
grep "cron" /var/log/odoo/odoo.log | grep -i "running\|start\|done"
```

Fix: Schedule heavy crons outside business hours, or increase `limit_time_real`.
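To pick a quieter window, you can tally cron-related log lines per hour. `busiest_hours` is an illustrative helper; the sample lines only mimic Odoo's log timestamp format:

```bash
# Count cron log lines per hour (characters 12-13 of each line are the
# hour in Odoo's "YYYY-MM-DD HH:MM:SS" timestamps).
busiest_hours() {
  grep -i "cron" | cut -c12-13 | sort | uniq -c | sort -rn
}

# Sample lines standing in for /var/log/odoo/odoo.log:
printf '%s\n' \
  '2024-05-01 02:00:01,123 501 INFO mydb: cron start' \
  '2024-05-01 02:05:09,456 501 INFO mydb: cron done' \
  '2024-05-01 14:00:02,789 502 INFO mydb: cron start' \
  | busiest_hours

# In production: busiest_hours < /var/log/odoo/odoo.log
```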
Cause 6: Memory Exhaustion (502)
Symptom: 502 with `Killed` in the system log (OOM killer).
Diagnosis:
```bash
# Check for OOM-killer activity
dmesg | grep -i "oom\|killed"

# Check memory usage
free -h
```

Fix:
Option 1: Reduce memory per worker in odoo.conf:

```ini
limit_memory_hard = 1610612736  # 1.5 GB
```

Option 2: Add swap:

```bash
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```

Option 3: Upgrade the server's RAM.

Cause 7: Slow Database Query (504)
Symptom: 504 on specific pages (list views with many records, reports).
Diagnosis:
Enable slow-query logging in PostgreSQL:

```ini
# postgresql.conf
log_min_duration_statement = 1000  # log queries taking longer than 1 second
```

Then look for tables with heavy sequential scans, which often indicate missing indexes:

```sql
SELECT relname, seq_scan, idx_scan
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY seq_scan DESC;
```

Fix: Create missing indexes, optimize slow queries, or add `store=True` to computed fields used in list-view sorting.
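When the query surfaces a heavily seq-scanned table, an index on the column you filter or sort by usually helps. The table and column names below are purely illustrative, not taken from any specific diagnosis:

```sql
-- Hypothetical example: replace sale_order / date_order with whatever
-- your own pg_stat_user_tables output points at.
-- CONCURRENTLY builds the index without blocking writes.
CREATE INDEX CONCURRENTLY IF NOT EXISTS sale_order_date_order_idx
    ON sale_order (date_order);
```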
Quick Diagnostic Checklist
| Check | Command | Expected |
|---|---|---|
| Odoo running? | `systemctl status odoo` | Active (running) |
| Port open? | `ss -tlnp \| grep 8069` | LISTEN on 8069 |
| Nginx config? | `nginx -t` | syntax is ok |
| DB accessible? | `psql -U odoo -d dbname -c "SELECT 1"` | Returns 1 |
| Disk space? | `df -h` | Less than 90% used |
| Memory? | `free -h` | Available > 1 GB |
| OOM kills? | `dmesg \| grep -i oom` | No recent entries |
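The checklist can be scripted as a single pass. This is a sketch assuming the default service name and paths used throughout this article; `disk_usage_ok` is an illustrative helper:

```bash
# Run the checklist in one pass; each probe is guarded so the script
# still works on hosts missing a given tool.
command -v systemctl >/dev/null && { echo "--- Odoo service ---"; systemctl is-active odoo; }
command -v ss >/dev/null        && { echo "--- Port 8069 ---";    ss -tlnp | grep 8069; }
command -v nginx >/dev/null     && { echo "--- Nginx config ---"; nginx -t; }
command -v free >/dev/null      && { echo "--- Memory ---";       free -h; }

# Flag disk usage at or above the table's 90% threshold.
disk_usage_ok() {
  [ "$1" -lt 90 ] && echo "ok ($1% used)" || echo "WARNING ($1% used)"
}
disk_usage_ok "$(df -P / | awk 'NR==2 {gsub(/%/,""); print $5}')"
```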
Prevention with AI Monitoring
DeployMonkey's AI agent monitors all of these indicators continuously. It detects worker saturation, memory pressure, slow queries, and service failures, then alerts you with specific fixes before users ever see a 502 or 504. Deploy on DeployMonkey for proactive monitoring on every plan, including the free tier.