
Odoo Batch Create and Write Performance: Optimization Guide

DeployMonkey Team · March 23, 2026 · 10 min read

Single vs Batch Create

The biggest performance win in Odoo ORM is batch creation. Creating records one by one triggers the full ORM lifecycle for each record — defaults, computes, constraints, and SQL inserts. Batch creation consolidates these operations:

# SLOW: 1000 individual creates (~45 seconds)
for vals in values_list:
    self.env['sale.order.line'].create(vals)

# FAST: single batch create (~2 seconds)
self.env['sale.order.line'].create(values_list)

Since Odoo 12, the create method accepts a list of dictionaries. Internally, it performs a single multi-row INSERT and batches all compute and constraint checks. This is 10-50x faster for large datasets.

How Batch Create Works Internally

When you call create([vals1, vals2, ..., valsN]), Odoo:

  1. Applies defaults to all records at once
  2. Performs a single multi-row INSERT
  3. Computes stored fields for the entire batch
  4. Runs constraints on the full recordset
  5. Sends a single notification to the bus

Compare this to individual creates where each step runs N times.
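Step 2 is where most of the time goes. The shape of the SQL difference can be sketched in plain Python (illustrative helpers, not Odoo's actual query builder):

```python
def single_inserts(table, rows):
    """One INSERT statement per row: N round trips (the slow path)."""
    cols = sorted(rows[0])
    placeholders = ", ".join(["%s"] * len(cols))
    stmt = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    return [stmt] * len(rows)  # executed once per record

def multi_row_insert(table, rows):
    """One INSERT statement covering every row: a single round trip."""
    cols = sorted(rows[0])
    row_ph = "(" + ", ".join(["%s"] * len(cols)) + ")"
    values = ", ".join([row_ph] * len(rows))
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES {values}"

rows = [{"name": "A", "qty": 1}, {"name": "B", "qty": 2}]
print(len(single_inserts("sale_order_line", rows)))  # 2 statements
print(multi_row_insert("sale_order_line", rows))
# INSERT INTO sale_order_line (name, qty) VALUES (%s, %s), (%s, %s)
```

The per-statement overhead (parsing, planning, network round trip) is paid once instead of N times, which is where most of the 10-50x speedup comes from.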

Batch Write Optimization

Similar to create, writing to multiple records at once is faster than looping:

# SLOW: N separate UPDATE queries
for record in records:
    record.write({'state': 'done'})

# FAST: single UPDATE query
records.write({'state': 'done'})

When all records get the same values, recordset.write(vals) generates a single UPDATE ... WHERE id IN (...) statement.
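The statement shape for a same-values batch write can be sketched like this (an illustration of the pattern, not Odoo's actual SQL builder):

```python
def batch_update_sql(table, vals, ids):
    """Build one UPDATE ... WHERE id IN (...) for a same-values write."""
    cols = sorted(vals)
    sets = ", ".join(f"{c} = %s" for c in cols)
    id_ph = ", ".join(["%s"] * len(ids))
    params = [vals[c] for c in cols] + list(ids)
    return f"UPDATE {table} SET {sets} WHERE id IN ({id_ph})", params

sql, params = batch_update_sql("sale_order", {"state": "done"}, [7, 8, 9])
print(sql)     # UPDATE sale_order SET state = %s WHERE id IN (%s, %s, %s)
print(params)  # ['done', 7, 8, 9]
```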

Grouped Write Pattern

When different records need different values, group them:

from collections import defaultdict

# Group records by their target state
by_state = defaultdict(lambda: self.env['sale.order'])
for order in orders:
    target = 'done' if order.amount_total > 1000 else 'cancel'
    by_state[target] |= order

# One write per group instead of one per record
for state, group in by_state.items():
    group.write({'state': state})
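The same idea generalizes to arbitrary value dicts: bucket records that share identical target values, then issue one write per bucket. A plain-Python sketch of the grouping step (dicts stand in for Odoo records here, since Odoo itself is not importable):

```python
from collections import defaultdict

def group_by_vals(records, vals_fn):
    """Bucket records whose target vals are identical, so each
    bucket needs only a single write()."""
    groups = defaultdict(list)
    for rec in records:
        vals = vals_fn(rec)
        # dicts are unhashable, so key on their sorted items
        groups[tuple(sorted(vals.items()))].append(rec)
    return [(dict(key), recs) for key, recs in groups.items()]

orders = [{"id": 1, "amount_total": 1500},
          {"id": 2, "amount_total": 200},
          {"id": 3, "amount_total": 5000}]
plan = group_by_vals(
    orders,
    lambda o: {"state": "done" if o["amount_total"] > 1000 else "cancel"})
# 2 groups: one write covers orders 1 and 3, one covers order 2
```

With real recordsets, each bucket would accumulate records and finish with one `bucket.write(vals)` per group, as in the example above.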

Avoiding Compute Storms

Stored computed fields recalculate when their dependencies change. A naive write loop triggers recomputation after each write:

# Each write triggers margin recomputation for ALL order lines
for line in order.order_line:
    line.write({'price_unit': new_price})  # triggers _compute_margin each time

# Better: batch write triggers recomputation once
order.order_line.write({'price_unit': new_price})

For complex scenarios where you must set different values, batch your writes and let the ORM handle recomputation at flush time; the older env.norecompute() escape hatch is deprecated in newer Odoo versions.

with_context(tracking_disable=True)

Mail tracking adds significant overhead to write operations. Each field change logged in the chatter triggers a message_post. Disable it for bulk operations:

# Disable mail tracking for bulk import
OrderLine = self.env['sale.order.line'].with_context(tracking_disable=True)
OrderLine.create(values_list)

Also useful: mail_create_nolog=True to skip the creation log message, and mail_notrack=True to skip field change tracking.

Efficient Unlink (Delete)

Deleting records has similar batch benefits:

# SLOW: individual deletes
for record in records:
    record.unlink()

# FAST: batch delete
records.unlink()

Batch unlink generates a single DELETE ... WHERE id IN (...) and handles cascade cleanup in one pass.

Import Optimization Context Keys

When importing large datasets, combine context flags:

ctx = {
    'tracking_disable': True,      # no mail tracking
    'mail_create_nolog': True,      # no creation messages
    'no_reset_password': True,      # no password reset emails (res.users)
    'import_file': True,            # import mode flag
    'defer_parent_store_computation': True,  # defer hierarchy recompute
}
Model = self.env['product.template'].with_context(**ctx)
Model.create(big_values_list)
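For very large imports it can also help to split the values list into fixed-size chunks, so each create() call stays at a manageable size (memory use and cache growth scale with batch size; 1000 here is an arbitrary illustrative choice, not an Odoo-recommended value):

```python
def chunked(seq, size=1000):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Hypothetical usage inside an Odoo model method:
# Model = self.env['product.template'].with_context(**ctx)
# for chunk in chunked(big_values_list, 1000):
#     Model.create(chunk)

print([len(c) for c in chunked(list(range(2500)), 1000)])  # [1000, 1000, 500]
```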

Flush and Invalidation

Odoo's ORM uses a write-behind cache (delayed flushing has been the default since Odoo 13; the flush_all/flush_model/invalidate_all names below are the Odoo 16+ API). Writes may not hit the database immediately. If you need database consistency mid-operation:

# Force pending writes to database
self.env.flush_all()

# Or flush specific model
self.env['sale.order'].flush_model()

# Invalidate cache to re-read from database
self.env.invalidate_all()

This is crucial when mixing ORM operations with raw SQL queries.

Raw SQL for Extreme Performance

When ORM overhead is unacceptable (millions of records), use raw SQL:

# Bulk update with raw SQL — bypasses ORM entirely
self.env.cr.execute("""
    UPDATE sale_order_line
    SET discount = 10
    WHERE order_id IN (
        SELECT id FROM sale_order WHERE state = 'draft'
    )
""")
# IMPORTANT: invalidate cache after raw SQL
self.env.invalidate_all()

Warning: raw SQL bypasses constraints, computed fields, access rights, and mail tracking. Use it only when you understand the consequences and manually handle side effects.

Performance Benchmarks

Operation            Records    Individual    Batched    Speedup
create               1,000      45s           2s         22x
create               10,000     8min          15s        32x
write (same vals)    1,000      12s           0.3s       40x
unlink               1,000      20s           1.5s       13x

These benchmarks are from a standard Odoo instance with mail tracking enabled. With tracking_disable=True, batch create is even faster.

Best Practices Summary

  • Always use batch create(list_of_dicts) instead of looping
  • Use recordset.write(vals) for same-value updates
  • Group records by target values for different-value writes
  • Disable mail tracking with tracking_disable=True for bulk operations
  • Use flush_all() before raw SQL and invalidate_all() after
  • Reserve raw SQL for extreme cases (millions of records)
  • Profile with log_level = debug_sql to verify query counts
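When full SQL logs are too noisy, a small counting wrapper around the cursor's execute() is enough to verify query counts. A plain-Python sketch with a stub cursor (with Odoo you would pass self.env.cr; Odoo's test cursor also exposes a sql_log_count attribute you could compare instead, but treat that as an assumption to verify on your version):

```python
from contextlib import contextmanager

@contextmanager
def query_counter(cursor):
    """Count execute() calls made through the cursor inside the block."""
    counts = {"n": 0}
    real_execute = cursor.execute

    def counting_execute(query, *args, **kwargs):
        counts["n"] += 1
        return real_execute(query, *args, **kwargs)

    cursor.execute = counting_execute
    try:
        yield counts
    finally:
        cursor.execute = real_execute

class StubCursor:
    def execute(self, query, *args, **kwargs):
        pass

cur = StubCursor()
with query_counter(cur) as counts:
    for i in range(3):
        cur.execute("UPDATE t SET x = %s", (i,))
print(counts["n"])  # 3
```

A batched write should show one query where a naive loop shows N; if the count surprises you, a stored compute or mail tracking is usually the culprit.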