Performance Profiling in Odoo
Odoo applications can slow down as data grows. Profiling identifies where time is spent: SQL queries, Python computation, or network I/O. This guide covers the main profiling tools used in Odoo development.
Odoo's Built-in Profiler
Since Odoo 15, a built-in profiler is available in developer mode. Click the bug icon in the top menu bar, then enable the profiler. It captures:
- SQL queries with execution time
- Python call stacks with timing
- QWeb template rendering time
The profiler generates a flame graph viewable directly in the browser. It shows the full call stack and highlights the slowest paths.
SQL Query Analysis
Counting Queries
The most common performance issue is excessive SQL queries. Count them around suspect code:
```python
from odoo.tests.common import TransactionCase

class PerformanceCase(TransactionCase):
    def _count_queries(self, func):
        """Count SQL queries executed by a function."""
        cr = self.env.cr
        queries_before = cr.sql_log_count
        func()
        queries_after = cr.sql_log_count
        return queries_after - queries_before
```
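To make such checks reusable, the counter can be wrapped in a context manager that fails when a block exceeds its query budget. The sketch below is self-contained: `FakeCursor` and `assert_query_count` are illustrative stand-ins, not Odoo APIs; the fake cursor merely exposes the same `sql_log_count` attribute as Odoo's cursor so the pattern can be tried outside a running instance. (Recent Odoo versions also ship a similar `assertQueryCount` helper on test cases.)

```python
from contextlib import contextmanager

class FakeCursor:
    """Stand-in for an Odoo cursor: sql_log_count grows with each query."""
    def __init__(self):
        self.sql_log_count = 0

    def execute(self, query, params=None):
        self.sql_log_count += 1

@contextmanager
def assert_query_count(cr, maximum):
    """Fail when the wrapped block exceeds its SQL query budget."""
    before = cr.sql_log_count
    yield
    executed = cr.sql_log_count - before
    assert executed <= maximum, f"expected <= {maximum} queries, ran {executed}"

cr = FakeCursor()
with assert_query_count(cr, maximum=2):
    cr.execute("SELECT 1")
    cr.execute("SELECT 2")
```

In a real test, `cr` would be `self.env.cr` and the wrapped block would call the ORM code under suspicion.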
EXPLAIN ANALYZE
Analyze query execution plans directly:
```python
self.env.cr.execute("""
    EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
    SELECT * FROM sale_order
    WHERE partner_id = %s
      AND state = 'draft'
""", [partner_id])
for row in self.env.cr.fetchall():
    _logger.info(row[0])
```
Look for sequential scans on large tables (missing index), nested loops with high row counts, and sort operations on unsorted data.
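Scanning a text plan for these markers can be automated. A minimal sketch that flags sequential scans (the plan lines here are illustrative, shaped like `EXPLAIN ANALYZE` output, not from a real database):

```python
def find_seq_scans(plan_lines):
    """Return EXPLAIN plan lines that report a sequential scan."""
    return [line.strip() for line in plan_lines if "Seq Scan" in line]

# Illustrative plan text, shaped like EXPLAIN ANALYZE output
plan = [
    "Seq Scan on sale_order  (cost=0.00..431.00 rows=120 width=64)",
    "  Filter: ((state)::text = 'draft'::text)",
    "  Rows Removed by Filter: 4882",
]
warnings = find_seq_scans(plan)
# warnings == ["Seq Scan on sale_order  (cost=0.00..431.00 rows=120 width=64)"]
```

A high "Rows Removed by Filter" next to a sequential scan is the classic signature of a missing index.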
N+1 Query Detection
The N+1 problem occurs when accessing a relational field triggers one query per record instead of one batch query:
```python
# BAD: N+1 queries (1 query per partner) when prefetch is broken
for order in orders:
    print(order.partner_id.name)  # each access can trigger 1 query

# GOOD: Odoo prefetches within the same recordset
orders = self.env['sale.order'].search([])
for order in orders:
    print(order.partner_id.name)  # partners fetched in one batch
```
N+1 problems typically happen when you break the recordset by browsing individual IDs:
```python
# BAD: each browse creates a new recordset, breaking prefetch
for order_id in order_ids:
    order = self.env['sale.order'].browse(order_id)
    print(order.partner_id.name)  # N+1!

# GOOD: browse all at once
orders = self.env['sale.order'].browse(order_ids)
for order in orders:
    print(order.partner_id.name)  # batched
```
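The difference is easy to quantify with a toy model that counts round-trips for per-id fetches versus one batched fetch (`QueryCounter` and `fetch_partners` are illustrative stand-ins, not Odoo APIs):

```python
class QueryCounter:
    """Counts simulated SQL round-trips."""
    def __init__(self):
        self.count = 0

def fetch_partners(partner_ids, counter):
    """One round-trip returns any number of rows, like a batched read."""
    counter.count += 1
    return {pid: f"Partner {pid}" for pid in partner_ids}

partner_ids = [1, 2, 3, 4, 5]

# N+1 style: one round-trip per record
counter = QueryCounter()
for pid in partner_ids:
    fetch_partners([pid], counter)
n_plus_one_queries = counter.count  # 5

# Batched style: one round-trip for the whole set, like Odoo's prefetch
counter = QueryCounter()
fetch_partners(partner_ids, counter)
batched_queries = counter.count  # 1
```

With 5 records the gap is 5 queries to 1; with 10,000 records it is 10,000 to a handful, which is why broken prefetch dominates slow list views.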
Python Profiling with cProfile
```python
import cProfile
import pstats

def action_process(self):
    profiler = cProfile.Profile()
    profiler.enable()
    self._heavy_computation()
    profiler.disable()
    stats = pstats.Stats(profiler)
    stats.sort_stats('cumulative')
    stats.print_stats(20)  # top 20 functions
```
For production profiling, use the command line:
```sh
python -m cProfile -o profile.out odoo-bin ...
# Then analyze:
python -c "
import pstats
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative')
stats.print_stats(30)
"
```
Computed Field Performance
Computed fields are a frequent source of slowdowns:
```python
# SLOW: recomputes on every access if not stored
total = fields.Float(compute='_compute_total')

# FAST: stored, recomputed only when dependencies change
total = fields.Float(compute='_compute_total', store=True)

# Check recomputation frequency
@api.depends('line_ids.amount')  # triggers on ANY line change
def _compute_total(self):
    for record in self:
        record.total = sum(record.line_ids.mapped('amount'))
```
Tips for computed field performance:
- Use `store=True` when the field is used in search domains or list views
- Minimize the number of `@api.depends` triggers
- Use `mapped()` instead of loops for aggregation
- Avoid calling other computed fields inside computation
Database Indexing
Add indexes for fields used in search domains and sorting:
```python
class SaleOrder(models.Model):
    _name = 'sale.order'

    state = fields.Selection([...], index=True)
    partner_id = fields.Many2one('res.partner', index=True)
    date_order = fields.Datetime(index=True)
```
For composite indexes:
```python
def init(self):
    self.env.cr.execute("""
        CREATE INDEX IF NOT EXISTS sale_order_partner_state_idx
        ON sale_order (partner_id, state)
        WHERE state != 'cancel'
    """)
```
Batch Processing Patterns
```python
# Process large datasets in chunks
def _process_all(self):
    offset = 0
    batch_size = 1000
    while True:
        batch = self.search([], limit=batch_size, offset=offset)
        if not batch:
            break
        batch._process_batch()
        self.env.cr.commit()  # commit per batch
        offset += batch_size
```
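An alternative to offset-based paging is to search once for the ids and then process fixed-size slices, which avoids re-running the search after every commit. A minimal, framework-free chunking helper (the name `chunked` is ours, not an Odoo utility):

```python
def chunked(record_ids, size):
    """Yield successive fixed-size slices of a list of ids."""
    for start in range(0, len(record_ids), size):
        yield record_ids[start:start + size]

batches = list(chunked(list(range(10)), 4))
# batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

In Odoo this pairs naturally with `ids = self.search([]).ids` once up front, then `self.browse(chunk)._process_batch()` per chunk.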
Memory Profiling
```python
import tracemalloc

def action_heavy(self):
    tracemalloc.start()
    self._load_large_dataset()
    snapshot = tracemalloc.take_snapshot()
    top_stats = snapshot.statistics('lineno')
    for stat in top_stats[:10]:
        _logger.info(stat)
```
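tracemalloc can also diff two snapshots, which isolates the allocations made by a single step. A self-contained sketch, where the list-of-lists allocation stands in for `_load_large_dataset()`:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Stand-in for the heavy step: allocate measurable memory
data = [list(range(100)) for _ in range(100)]

after = tracemalloc.take_snapshot()
tracemalloc.stop()

# Diff the snapshots: only allocations made between them show up
diff = after.compare_to(before, "lineno")
for stat in diff[:5]:
    print(stat)
```

Each entry reports `size_diff` (bytes gained or lost per source line), so the heavy step's allocation sites float to the top instead of being buried under the process's baseline memory.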
read_group for Aggregation
For reports and dashboards, use read_group() instead of loading all records:
```python
# SLOW: loads all records into memory
orders = self.env['sale.order'].search([('state', '=', 'sale')])
total = sum(o.amount_total for o in orders)

# FAST: aggregates in SQL
result = self.env['sale.order'].read_group(
    domain=[('state', '=', 'sale')],
    fields=['amount_total:sum'],
    groupby=['partner_id'],
)
# result = [{'partner_id': (1, 'Name'), 'amount_total': 5000.0, ...}, ...]
```
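Conceptually, `read_group` delegates the grouping and summing to SQL (`GROUP BY partner_id, SUM(amount_total)`). The Python equivalent of what the database computes looks like this (the `rows` list is illustrative data):

```python
from collections import defaultdict

# Rows as they might come back from sale_order (illustrative data)
rows = [
    {"partner_id": 1, "amount_total": 2000.0},
    {"partner_id": 1, "amount_total": 3000.0},
    {"partner_id": 2, "amount_total": 750.0},
]

# GROUP BY partner_id with SUM(amount_total)
totals = defaultdict(float)
for row in rows:
    totals[row["partner_id"]] += row["amount_total"]
# totals == {1: 5000.0, 2: 750.0}
```

The database does this over indexes and never ships the individual rows to Python, which is why `read_group` stays fast as tables grow.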
Performance Checklist
- Add `index=True` to fields used in search domains
- Use `store=True` on computed fields used in list/search views
- Browse recordsets together, never individual IDs in loops
- Use `read_group()` for aggregation instead of loading records
- Use `mapped()` for batch field access across relations
- Commit in batches for large data processing
- Profile SQL with EXPLAIN ANALYZE for slow queries