
Odoo Cron Job Timeout — Scheduled Actions Killed by Worker Limit Fix

DeployMonkey Team · March 24, 2026 10 min read

Cron Job Timeout Errors

Odoo scheduled actions (crons) can be killed mid-execution when they exceed time or memory limits:

ERROR odoo.service.server: Worker (pid:12345) timeout (kill=True)
after 120 seconds of inactivity.

WARNING odoo.service.server: Worker (pid:12345) killed by signal 9
(probably out of memory)

ERROR odoo.addons.base.models.ir_cron: Call to cron.method failed:
TransactionRollbackError: could not serialize access due to concurrent update

Understanding Odoo Worker Limits

# In odoo.conf, workers have time and memory limits:
workers = 4
limit_time_real = 120    # Max wall-clock time (seconds) per request
limit_time_cpu = 60      # Max CPU time (seconds) per request
limit_memory_soft = 2147483648   # 2GB soft memory limit
limit_memory_hard = 2684354560   # 2.5GB hard kill limit

# In multi-worker mode, crons run in dedicated cron worker processes that
# are subject to the same limits (limit_time_real_cron falls back to
# limit_time_real when unset)
# A cron processing 100,000 records can easily exceed the 120-second limit
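To see why the default limit bites, here is a back-of-the-envelope estimate. The 2 ms per-record cost is an illustrative assumption, not a measurement — profile your own cron to get a real number:

```python
# Will a cron fit inside limit_time_real?
records = 100_000
seconds_per_record = 0.002      # assumed average cost per record (illustrative)
limit_time_real = 120           # default wall-clock limit in seconds

total = records * seconds_per_record
print(f"estimated runtime: {total:.0f}s, limit: {limit_time_real}s")
print("will be killed" if total > limit_time_real else "fits")
```

At 2 ms per record the cron needs about 200 seconds — well past the 120-second default, so the worker gets killed mid-run.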

Fix 1: Increase Worker Limits

# In odoo.conf, increase limits for long-running crons:
[options]
limit_time_real = 600     # 10 minutes
limit_time_cpu = 300      # 5 minutes CPU
limit_memory_soft = 4294967296   # 4GB
limit_memory_hard = 5368709120   # 5GB

# Restart Odoo after changes:
sudo systemctl restart odoo

# WARNING: Higher limits mean a single cron can block a worker longer
# Ensure you have enough workers to handle web requests
# Rule of thumb: workers = CPU cores * 2 + 1
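The rule of thumb above is easy to compute on the target machine; a minimal sketch (counting cores with Python's standard library):

```python
import os

# "workers = CPU cores * 2 + 1" rule of thumb from above
cores = os.cpu_count() or 1     # fall back to 1 if undetectable
workers = cores * 2 + 1
print(f"cores={cores} -> suggested workers={workers}")
```

On a 4-core machine this suggests 9 workers; leave headroom if other services share the host.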

Fix 2: Optimize the Cron Logic

# Process records in batches instead of all at once:

def _cron_process_records_wrong(self):
    # Wrong: process all records in one pass, in one transaction
    records = self.search([])
    for record in records:
        record.heavy_operation()  # may exceed limit_time_real

def _cron_process_records(self):
    # Right: process in batches, committing after each one so a kill
    # mid-run does not roll back completed work
    batch_size = 100
    offset = 0
    while True:
        records = self.search([], limit=batch_size, offset=offset)
        if not records:
            break
        for record in records:
            record.heavy_operation()
        self.env.cr.commit()  # Commit each batch
        offset += batch_size
        # Note: if heavy_operation() removes records from the search
        # domain, offset pagination skips records; paginate by id instead
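Stripped of the ORM, the batching pattern is plain Python list slicing; a standalone sketch:

```python
def batched(ids, batch_size):
    """Yield successive slices of a list of ids — the same batching
    pattern the cron uses, shown outside Odoo for clarity."""
    for i in range(0, len(ids), batch_size):
        yield ids[i:i + batch_size]

chunks = list(batched(list(range(10)), 4))
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The last chunk is allowed to be short; each chunk maps to one `browse()` + `commit()` cycle in the cron.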

Fix 3: Cron Running with --workers=0

# In development (--workers=0), there are no worker limits
# Crons run in the main process without time restrictions

# For one-time heavy operations:
./odoo-bin -c /etc/odoo.conf -d mydb --workers=0
# Then trigger the cron manually from Settings > Scheduled Actions

Fix 4: Cron Overlap Prevention

# If a cron takes longer than its interval,
# the next execution may start before the previous finishes

# This causes: "could not serialize access" errors

# Fix: Add a lock mechanism:
import logging

_logger = logging.getLogger(__name__)

def _cron_with_lock(self):
    # Use a PostgreSQL advisory lock; (42, 1) is an arbitrary
    # application-chosen key pair
    self.env.cr.execute("SELECT pg_try_advisory_lock(42, 1)")
    acquired = self.env.cr.fetchone()[0]
    if not acquired:
        _logger.info("Cron already running, skipping")
        return
    try:
        self._do_actual_work()
    finally:
        # Session-level advisory locks survive commits, so release explicitly
        self.env.cr.execute("SELECT pg_advisory_unlock(42, 1)")

# Also increase the cron interval:
# Settings > Technical > Scheduled Actions > select action
# Change interval from 5 minutes to 30 minutes
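The `(42, 1)` key pair is arbitrary. One way to derive a stable key from the cron's name — a sketch, not an Odoo API — is to hash it into the 32-bit integer range that `pg_try_advisory_lock(int, int)` accepts:

```python
import zlib

def advisory_lock_keys(cron_name, class_id=42):
    """Derive a stable (class_id, obj_id) pair for pg_try_advisory_lock
    from a cron name. crc32 is deterministic, and masking keeps the
    value inside PostgreSQL's positive int4 range."""
    obj_id = zlib.crc32(cron_name.encode()) & 0x7FFFFFFF
    return class_id, obj_id

keys = advisory_lock_keys("cron.method")
print(keys)
```

The same name always yields the same key, so every worker contends on the same lock without hard-coding magic numbers per cron.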

Fix 5: Memory Issues

# Crons processing large datasets consume memory
# Each fetched record stays in Odoo's cache

# Fix: Clear cache periodically
def _cron_large_dataset(self):
    batch_size = 500
    record_ids = self.search([]).ids  # Just IDs, not full records

    for i in range(0, len(record_ids), batch_size):
        batch_ids = record_ids[i:i + batch_size]
        records = self.browse(batch_ids)
        for record in records:
            record.process()

        self.env.cr.commit()
        self.env.invalidate_all()  # Clear ORM cache (Odoo 16+; older: self.invalidate_cache())

# Also: avoid storing large data in Python variables
# Use SQL for heavy aggregation instead of Python loops
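To illustrate "use SQL for heavy aggregation", here is the pattern outside Odoo — sqlite3 stands in for `env.cr`, and the table and column names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale_line (partner_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO sale_line VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5)],
)

# One aggregate query instead of fetching every row into Python
rows = conn.execute(
    "SELECT partner_id, SUM(amount) FROM sale_line "
    "GROUP BY partner_id ORDER BY partner_id"
).fetchall()
print(rows)  # [(1, 15.0), (2, 7.5)]
```

The database sums the rows in place; Python only sees one result row per group, so memory stays flat no matter how many lines the table holds.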

Fix 6: Dedicated Cron Worker

# In multi-worker mode, Odoo runs crons in dedicated cron workers:
# In odoo.conf:
max_cron_threads = 2    # Number of dedicated cron workers (default: 2)

# This prevents crons from competing with web request workers
# But cron workers still respect limit_time_real_cron
# (which defaults to limit_time_real when unset)

Fix 7: Monitor Cron Health

# Check cron execution history:
sudo -u postgres psql -d mydb -c "
  SELECT ic.name, ic.active, ic.interval_number, ic.interval_type,
    ic.nextcall, ic.lastcall, ic.numbercall
  FROM ir_cron ic
  WHERE ic.active = true
  ORDER BY ic.nextcall;
"

# Check for stuck crons (lastcall long ago, nextcall in the past):
sudo -u postgres psql -d mydb -c "
  SELECT name, lastcall, nextcall
  FROM ir_cron
  WHERE active = true AND nextcall < NOW() - INTERVAL '1 hour';
"
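The same "stuck" heuristic as a small Python helper — the one-hour grace period is as arbitrary here as in the SQL query, and in production you would query `ir_cron` directly as shown above:

```python
from datetime import datetime, timedelta

def is_stuck(nextcall, now=None, grace=timedelta(hours=1)):
    """A cron looks stuck when its scheduled nextcall is more than
    `grace` in the past — the same condition as the SQL query above."""
    now = now or datetime.utcnow()
    return nextcall < now - grace

now = datetime(2026, 3, 24, 12, 0)
print(is_stuck(datetime(2026, 3, 24, 10, 0), now=now))   # True
print(is_stuck(datetime(2026, 3, 24, 11, 30), now=now))  # False
```

A cron 30 minutes overdue is still inside the grace window; two hours overdue means the scheduler has not picked it up and the worker logs deserve a look.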

Prevention

DeployMonkey's AI agent monitors cron execution times and memory usage. The agent detects timeout-prone scheduled actions and recommends batch processing optimizations before they fail in production.