Where Odoo Performance Problems Come From
Most Odoo performance issues fall into four categories: slow ORM queries (70%), misconfigured workers/memory (15%), PostgreSQL tuning (10%), and network/proxy issues (5%). This guide addresses all four.
ORM Query Optimization
Anti-Pattern: N+1 Queries
```python
# BAD: N+1 query pattern (1 query + N queries for related records)
for order in self.env['sale.order'].search([]):
    partner_name = order.partner_id.name  # Triggers 1 query per order
    print(f"{partner_name}: {order.amount_total}")

# GOOD: Prefetch by accessing the full recordset first
orders = self.env['sale.order'].search([])
for order in orders:
    # Odoo prefetches partner_id for ALL orders in one query
    partner_name = order.partner_id.name
    print(f"{partner_name}: {order.amount_total}")
```

Anti-Pattern: Search Inside Loop
```python
# BAD: One search per item
total = 0.0
for product in products:
    stock = self.env['stock.quant'].search([
        ('product_id', '=', product.id),
    ])
    total += sum(stock.mapped('quantity'))

# GOOD: One search, filter in Python
total = 0.0
all_stock = self.env['stock.quant'].search([
    ('product_id', 'in', products.ids),
])
for product in products:
    product_stock = all_stock.filtered(
        lambda s: s.product_id == product
    )
    total += sum(product_stock.mapped('quantity'))

# BEST: Use read_group, so PostgreSQL aggregates in a single query
stock_data = self.env['stock.quant'].read_group(
    [('product_id', 'in', products.ids)],
    ['quantity:sum'],
    ['product_id'],
)
total_by_product = {g['product_id'][0]: g['quantity'] for g in stock_data}
```

Anti-Pattern: Non-Stored Computed Fields in List Views
```python
# BAD: Non-stored computed field displayed in list view
# Each row triggers the compute method → hundreds of queries
total_weight = fields.Float(compute='_compute_weight')

# GOOD: Store the computed field (the compute method must declare
# its triggers with @api.depends so the stored value stays current)
total_weight = fields.Float(compute='_compute_weight', store=True)
```

Use read_group for Aggregations
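For orientation: read_group returns a list of dicts, one per group, with each aggregate stored under the field name and many2one group keys returned as (id, display_name) pairs. A pure-Python sketch of that shape (simulated data, since a real call needs an Odoo environment; the exact keys can vary slightly between Odoo versions):

```python
# Simulated read_group output for groupby=['product_id'],
# fields=['quantity:sum'] — shape only, the values are made up
stock_data = [
    {'product_id': (1, 'Desk'), 'quantity': 12.0},
    {'product_id': (2, 'Chair'), 'quantity': 40.0},
]

# Turn the grouped rows into an {id: total} mapping
totals = {group['product_id'][0]: group['quantity'] for group in stock_data}
print(totals)  # {1: 12.0, 2: 40.0}
```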
```python
# BAD: Read all records and sum in Python
orders = self.env['sale.order'].search([('state', '=', 'sale')])
total = sum(orders.mapped('amount_total'))

# GOOD: Let PostgreSQL do the aggregation
result = self.env['sale.order'].read_group(
    [('state', '=', 'sale')],
    ['amount_total:sum'],
    [],  # No grouping = one aggregated row
)
total = result[0]['amount_total'] or 0.0  # SUM over zero rows is not a number
```

PostgreSQL Tuning
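The memory settings below follow rough ratios of total RAM: about 25% for shared_buffers (capped), about 75% for effective_cache_size. A small helper (entirely hypothetical, just encoding those rules of thumb) to derive values for other machine sizes:

```python
def pg_memory_settings(ram_gb: int) -> dict:
    """Suggest shared_buffers / effective_cache_size from total RAM.

    Rules of thumb only: ~25% of RAM for shared_buffers (capped at
    8 GB) and ~75% for effective_cache_size. Always benchmark.
    """
    return {
        'shared_buffers_gb': min(ram_gb // 4, 8),
        'effective_cache_size_gb': ram_gb * 3 // 4,
    }

print(pg_memory_settings(8))  # {'shared_buffers_gb': 2, 'effective_cache_size_gb': 6}
```

For the 8 GB server assumed in the config below, this reproduces the 2 GB / 6 GB split.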
```ini
# postgresql.conf — optimized for Odoo (example: 8 GB RAM, SSD)
shared_buffers = 2GB                  # ~25% of RAM (rarely more than 8GB)
effective_cache_size = 6GB            # ~75% of RAM
work_mem = 32MB                       # Per-query sort/hash memory
maintenance_work_mem = 512MB          # For VACUUM, CREATE INDEX
random_page_cost = 1.1                # SSD (the default 4.0 suits HDD)
checkpoint_completion_target = 0.9
wal_buffers = 64MB
max_connections = 200
```

Create Missing Indexes
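The catalog query below surfaces tables with high sequential-scan ratios; to see which statements are actually slow, the pg_stat_statements extension is the usual tool. Enabling it, as a sketch (editing postgresql.conf requires a server restart; the column names changed in PostgreSQL 13):

```sql
-- In postgresql.conf: shared_preload_libraries = 'pg_stat_statements'
-- Then, once, in the Odoo database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 statements by total execution time
-- (total_exec_time / mean_exec_time on PostgreSQL 13+,
--  total_time / mean_time on older versions)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```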
```sql
-- Find tables with high sequential scan ratios
SELECT relname, seq_scan, idx_scan,
       round(100.0 * seq_scan / NULLIF(seq_scan + idx_scan, 0), 1) AS seq_pct
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY seq_pct DESC
LIMIT 20;

-- Common indexes to add for Odoo (CONCURRENTLY avoids blocking writes)
CREATE INDEX CONCURRENTLY idx_sale_order_date_state
    ON sale_order (date_order, state);
CREATE INDEX CONCURRENTLY idx_account_move_line_account_date
    ON account_move_line (account_id, date);
CREATE INDEX CONCURRENTLY idx_stock_move_state_product
    ON stock_move (state, product_id);
```

Worker Configuration
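The workers value below follows the common (CPU cores × 2) + 1 rule, and total RAM must cover workers × limit_memory_hard with headroom left for PostgreSQL and the OS. A tiny sizing helper (hypothetical, just applying both checks):

```python
def odoo_worker_plan(cpu_cores: int, ram_bytes: int,
                     limit_memory_hard: int = 2_684_354_560) -> dict:
    """Suggest a worker count and the worst-case RAM it implies.

    Rule of thumb: workers = (cores * 2) + 1. 'fits' only checks
    workers * limit_memory_hard against total RAM; real deployments
    need extra headroom for PostgreSQL and the OS.
    """
    workers = cpu_cores * 2 + 1
    needed = workers * limit_memory_hard
    return {
        'workers': workers,
        'worst_case_ram_bytes': needed,
        'fits': needed <= ram_bytes,
    }

plan = odoo_worker_plan(cpu_cores=4, ram_bytes=32 * 1024**3)
print(plan['workers'])  # 9
```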
```ini
# odoo.conf — comments kept on their own lines, since Odoo's config
# parser does not reliably strip inline comments after values
# Rule of thumb: workers = (CPU cores × 2) + 1
workers = 9
# 2.5 GB hard limit per worker (immediate kill), 2 GB soft limit
# (worker recycled after finishing its current request)
limit_memory_hard = 2684354560
limit_memory_soft = 2147483648
# Per-request limits: 120 s wall-clock, 60 s CPU time
limit_time_real = 120
limit_time_cpu = 60
# Recycle each worker after 8192 requests
limit_request = 8192
```

Caching
```python
# Use ormcache for expensive lookups
from odoo import models
from odoo.tools import ormcache

class ResCompany(models.Model):
    _inherit = 'res.company'

    @ormcache('self.id')
    def _get_tax_rate(self):
        """Cache the tax rate per company.

        Note: ormcache entries are NOT invalidated automatically.
        Clear them when the underlying data changes, e.g. from
        write(), via self.env.registry.clear_cache()
        (self.clear_caches() on older Odoo versions).
        """
        return self.tax_rate
```

Nginx Caching for Static Files
```nginx
# Cache static assets
location ~* /web/static/ {
    proxy_pass http://127.0.0.1:8069;
    expires 7d;
    add_header Cache-Control "public, immutable";
}
```

Monitoring Performance
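One concrete way to act on the checklist below: filter the access log for slow requests. A sketch using an inline sample log; the field holding the response time varies by log format, so the use of $NF (last field) here is an assumption to adjust for your setup:

```shell
# Create a small sample log (real sources: nginx access.log or
# Odoo's own log; the response-time field position is assumed)
cat > /tmp/sample_access.log <<'EOF'
10.0.0.1 "GET /web/login HTTP/1.1" 200 0.054
10.0.0.2 "POST /web/dataset/search_read HTTP/1.1" 200 2.845
10.0.0.1 "GET /web/dataset/call_kw HTTP/1.1" 200 0.312
EOF

# Print requests whose last field (response time in seconds) exceeds 1s
awk '$NF + 0 > 1.0' /tmp/sample_access.log
```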
- Enable PostgreSQL slow query logging: log_min_duration_statement = 1000 (milliseconds)
- Monitor Odoo response times in access logs
- Track worker utilization and memory usage
- Use EXPLAIN ANALYZE on slow queries
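The last point deserves a sketch. Take a slow query from the PostgreSQL log and run it under EXPLAIN (ANALYZE); the table and filter here are purely illustrative:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, amount_total
FROM sale_order
WHERE state = 'sale' AND date_order >= '2025-01-01';
-- Look for "Seq Scan" on large tables, and for estimated row counts
-- far from actual ones: both point at missing indexes or stale
-- statistics (fix the latter with ANALYZE sale_order;).
```

Note that EXPLAIN ANALYZE actually executes the statement, so use it only on read-only queries or inside a transaction you roll back.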
DeployMonkey Performance
DeployMonkey's AI agent monitors performance automatically — detecting slow queries, recommending indexes, and suggesting configuration changes. It knows the difference between a worker issue and a query issue, saving you hours of diagnosis.