What Is model_create_multi?
In Odoo, the @api.model_create_multi decorator marks a create() method that accepts a list of dictionaries instead of a single dictionary. This enables batch record creation, which is dramatically faster than creating records one at a time.
Since Odoo 12, the base create() method has accepted both a single dict and a list of dicts. The @api.model_create_multi decorator is primarily used when you override create() and want to guarantee your override receives a list of vals.
Basic Usage
from odoo import models, api


class ProductTemplate(models.Model):
    _inherit = 'product.template'

    @api.model_create_multi
    def create(self, vals_list):
        # vals_list is always a list of dicts
        for vals in vals_list:
            if not vals.get('internal_reference'):
                vals['internal_reference'] = self.env[
                    'ir.sequence'
                ].next_by_code('product.ref')
        return super().create(vals_list)
Without the decorator, if you call create({'name': 'Test'}), the method receives a single dict. With the decorator, it is automatically wrapped in a list: [{'name': 'Test'}].
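The normalization step can be illustrated with a simplified stand-in for the decorator. This is not Odoo's actual implementation (the real decorator also attaches metadata the ORM inspects); it only shows the dict-to-list wrapping behavior described above:

```python
import functools

def model_create_multi_sketch(method):
    """Simplified sketch of @api.model_create_multi: if the caller
    passes a single dict, wrap it in a list before the decorated
    create() runs. Odoo's real decorator does more than this."""
    @functools.wraps(method)
    def wrapper(self, vals_list):
        if isinstance(vals_list, dict):
            vals_list = [vals_list]
        return method(self, vals_list)
    return wrapper

class FakeModel:
    # Stub model, not an Odoo class; it just echoes the vals back.
    @model_create_multi_sketch
    def create(self, vals_list):
        # vals_list is now guaranteed to be a list of dicts
        return [dict(vals) for vals in vals_list]

m = FakeModel()
assert m.create({'name': 'Test'}) == [{'name': 'Test'}]
assert m.create([{'name': 'A'}, {'name': 'B'}]) == [{'name': 'A'}, {'name': 'B'}]
```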
Why Batch Create Matters
The performance difference between single and batch creates is significant. Each individual create() call triggers: field default computation, constraint validation, computed field recomputation, access rights checking, and potentially database triggers. Batching these operations amortizes the overhead:
# Slow: 1000 individual creates (1000 SQL INSERT statements)
for data in product_data:
    self.env['product.product'].create(data)

# Fast: 1 batch create (1 SQL INSERT with 1000 rows)
self.env['product.product'].create(product_data)
In benchmarks, batch creating 1000 records can be 5-20x faster than individual creates, depending on the model's complexity (number of computed fields, constraints, etc.).
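For very large imports, a single create() call with tens of thousands of vals can consume a lot of memory at once. A common compromise is to create in fixed-size chunks; a minimal sketch of the chunking helper (the batch size of 1000 is illustrative, not a requirement):

```python
def chunked(seq, size):
    """Yield successive slices of `seq` of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Hypothetical Odoo usage (sketch):
# for batch in chunked(all_vals, 1000):
#     self.env['product.product'].create(batch)

assert list(chunked(list(range(5)), 2)) == [[0, 1], [2, 3], [4]]
```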
Overriding create() Correctly
When overriding create() with the decorator, you must handle the entire list:
@api.model_create_multi
def create(self, vals_list):
    # Pre-processing: modify vals before creation
    for vals in vals_list:
        if vals.get('name'):
            vals['name'] = vals['name'].strip()
    # Call super to actually create records
    records = super().create(vals_list)
    # Post-processing: work with created records
    for record in records:
        record._post_create_hook()
    return records
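Because create() returns the new records in the same order as vals_list, post-processing can pair each record with its original vals via zip(), which is useful when the vals contain data not stored on the record. A plain-Python stand-in for the pattern (Record and create_stub are stubs, not Odoo objects):

```python
class Record:
    # Stub standing in for an Odoo record.
    def __init__(self, name):
        self.name = name

def create_stub(vals_list):
    # Mimics create(): one record per vals dict, same order.
    return [Record(vals['name']) for vals in vals_list]

vals_list = [{'name': 'A', 'tag': 'x'}, {'name': 'B', 'tag': 'y'}]
records = create_stub(vals_list)
# Pair each created record with its source vals.
pairs = [(r.name, vals['tag']) for r, vals in zip(records, vals_list)]
assert pairs == [('A', 'x'), ('B', 'y')]
```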
Batch Operations Beyond Create
Batch Write
The write() method already operates on recordsets, so it is naturally batched:
# Updates all records in one SQL UPDATE
records = self.env['sale.order'].search(
    [('state', '=', 'draft')])
records.write({'note': 'Updated by batch process'})
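A single write() only helps when every record gets the same values. When values differ per record, you can still avoid one write per record by grouping records that share a value and issuing one write per group. A sketch of the grouping step on stub data (the field and model names in the comment are illustrative):

```python
from collections import defaultdict

def group_by_value(record_values):
    """Group record ids by the value to write, so each distinct
    value needs only one write() call instead of one per record.
    `record_values` maps record id -> new value (stub data here,
    not an Odoo recordset)."""
    groups = defaultdict(list)
    for rec_id, value in record_values.items():
        groups[value].append(rec_id)
    return dict(groups)

# Hypothetical Odoo usage (sketch):
# for value, ids in group_by_value(new_notes).items():
#     self.env['sale.order'].browse(ids).write({'note': value})

groups = group_by_value({1: 'a', 2: 'b', 3: 'a'})
assert groups == {'a': [1, 3], 'b': [2]}
```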
Batch Unlink
Similarly, unlink() operates on the full recordset:
# Deletes all matching records in one operation
old_logs = self.env['log.entry'].search([
    ('create_date', '<', cutoff_date)])
old_logs.unlink()
Batch Read with read()
When you need raw field values without ORM overhead, read() is faster than accessing fields individually:
# Fast: single SQL query, returns list of dicts
data = records.read(['name', 'email', 'phone'])

# Slower: triggers individual field access per record
for record in records:
    name = record.name  # lazy-loaded per field per batch
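The list of dicts returned by read() is often most useful indexed by record id for O(1) lookups afterwards. A small sketch on sample data shaped like read()'s return value:

```python
def index_by_id(rows):
    """Index the list-of-dicts returned by read() by record id.
    `rows` here is sample data in the shape read() returns."""
    return {row['id']: row for row in rows}

rows = [
    {'id': 1, 'name': 'Alice', 'email': 'a@example.com'},
    {'id': 2, 'name': 'Bob', 'email': 'b@example.com'},
]
by_id = index_by_id(rows)
assert by_id[2]['name'] == 'Bob'
```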
Performance Patterns
Prefetching
Odoo prefetches fields in batches of 1000 by default. When you access a field on one record, Odoo loads that field for all records in the same recordset:
partners = self.env['res.partner'].search([])
# Accessing partner.name on the first record loads
# name for up to 1000 partners in one query
for partner in partners:
    print(partner.name)  # one SQL query per prefetch batch
Avoiding N+1 Queries
The biggest performance mistake is triggering database queries inside loops:
# BAD: N+1 queries
for order in orders:
    partner = self.env['res.partner'].browse(order.partner_id.id)
    name = partner.name  # each isolated record is fetched alone

# GOOD: prefetch via recordset
for order in orders:
    customer_name = order.partner_id.name
    # partner_id is prefetched for all orders
Using mapped() for Bulk Field Access
# Get all partner emails from sale orders
emails = orders.mapped('partner_id.email')
# Returns a flat list, follows relations efficiently
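Note that mapped() over a non-relational field returns raw values, including False for records where the field is unset, and it does not deduplicate. The usual cleanup is a filter-and-dedupe pass; a sketch on sample data shaped like mapped()'s return value:

```python
# Sample data in the shape mapped('partner_id.email') returns:
# raw values, with False for unset fields, duplicates possible.
emails = ['a@example.com', False, 'b@example.com', 'a@example.com']

# Drop falsy entries and deduplicate.
unique_emails = sorted({e for e in emails if e})
assert unique_emails == ['a@example.com', 'b@example.com']
```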
with_context(prefetch_fields=False)
When processing large recordsets and you only need specific fields, disable prefetching to reduce memory:
records = self.env['large.model'].with_context(
    prefetch_fields=False
).search([])
for record in records:
    # Only the accessed fields are loaded
    process(record.name, record.code)
Common Mistakes
- Calling create() in a loop: Always collect vals into a list and call create() once.
- Forgetting to call super(): Your override must call super().create(vals_list) to actually insert records.
- Mutating vals after super(): Changes to the vals dicts after super() have no effect on the database.
- Not returning the recordset: The create() method must return the recordset from super().
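The fix for the first mistake is to transform your source data into a vals_list up front, then issue one create(). A sketch of the transformation step on stub import rows (the field names are illustrative, not a real import schema):

```python
def build_vals_list(rows):
    """Transform raw import rows into create() vals dicts up front,
    so a single batched create() can be issued afterwards.
    Field names here are illustrative."""
    return [
        {'name': row['name'].strip(), 'list_price': row['price']}
        for row in rows
    ]

# Hypothetical Odoo usage (sketch):
# self.env['product.template'].create(build_vals_list(rows))

vals = build_vals_list([{'name': ' Chair ', 'price': 10.0}])
assert vals == [{'name': 'Chair', 'list_price': 10.0}]
```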
When Not to Batch
Batching is not always appropriate. If each record creation depends on the previous one (e.g., sequential numbering with gaps), or if you need to handle errors per-record, individual creates with try/except may be necessary.
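The per-record error-handling case can be sketched in plain Python: loop over the vals, attempt each creation independently, and collect failures for reporting instead of aborting the whole batch. In real Odoo code each attempt would typically also be wrapped in self.env.cr.savepoint() so a failed insert rolls back cleanly; `create_one` below is a stand-in for the per-record create call:

```python
def create_individually(vals_list, create_one):
    """Create records one at a time so a single bad row doesn't
    abort the whole batch; collect failures for reporting.
    `create_one` stands in for a per-record create call."""
    created, errors = [], []
    for vals in vals_list:
        try:
            created.append(create_one(vals))
        except ValueError as exc:
            errors.append((vals, str(exc)))
    return created, errors

def strict_create(vals):
    # Stub create that rejects rows missing a name.
    if not vals.get('name'):
        raise ValueError('name is required')
    return vals['name']

created, errors = create_individually(
    [{'name': 'A'}, {}, {'name': 'B'}], strict_create)
assert created == ['A', 'B']
assert len(errors) == 1
```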