
Odoo Cron Jobs with Advisory Locks: Preventing Duplicate Execution

DeployMonkey Team · March 23, 2026 · 10 min read

The Duplicate Cron Problem

Odoo runs scheduled actions (cron jobs) using worker processes. When multiple workers are active, the same cron job can execute simultaneously on different workers. This causes duplicate emails, double-processing of records, race conditions, and data corruption. PostgreSQL advisory locks solve this elegantly.

What Are Advisory Locks?

Advisory locks are application-level locks managed by PostgreSQL. Unlike row locks, they do not lock any table or row — they lock an arbitrary integer ID. A second session trying to acquire the same lock ID will either block until the first releases it or, with the try_ variants, fail immediately.

-- Session-level advisory lock (held until explicitly unlocked or the session ends)
SELECT pg_try_advisory_lock(12345);

-- Transaction-level advisory lock (released automatically at COMMIT/ROLLBACK)
SELECT pg_try_advisory_xact_lock(12345);

The pg_try_ variants are non-blocking — they return true if the lock was acquired, false if another session holds it. This is what you want for cron jobs: skip execution instead of waiting.

Basic Pattern: Cron with Advisory Lock

import logging

from odoo import models

_logger = logging.getLogger(__name__)


class InstanceCleanupWorker(models.Model):
    _name = 'dm.instance.cleanup.worker'
    _description = 'Instance Cleanup Worker'

    LOCK_ID = 738201  # unique arbitrary integer

    def _cron_cleanup_instances(self):
        """Scheduled action entry point."""
        # Try to acquire advisory lock — skip if another worker has it
        self.env.cr.execute(
            "SELECT pg_try_advisory_lock(%s)", [self.LOCK_ID]
        )
        acquired = self.env.cr.fetchone()[0]
        if not acquired:
            _logger.info('Cleanup worker: another worker is running, skipping.')
            return

        try:
            self._do_cleanup()
        finally:
            # Always release the lock
            self.env.cr.execute(
                "SELECT pg_advisory_unlock(%s)", [self.LOCK_ID]
            )

    def _do_cleanup(self):
        # actual cleanup logic here
        records = self.env['dm.instance'].sudo().search([
            ('cleanup_state', '=', 'pending'),
        ], limit=10)
        for record in records:
            record._process_cleanup()

Session vs Transaction Advisory Locks

Type        | Function                  | Released When                  | Use Case
Session     | pg_try_advisory_lock      | Explicit unlock or session end | Long-running cron jobs
Transaction | pg_try_advisory_xact_lock | COMMIT or ROLLBACK             | Short operations within a transaction

For cron jobs, use session-level locks with an explicit unlock in a finally block. Transaction-level locks can cause issues because Odoo may commit intermediate transactions during long operations, and each commit would release the lock while the job is still running.

Choosing Lock IDs

Lock IDs are integers. You need unique IDs per cron job type. Common strategies:

# Strategy 1: Hardcoded constants
CLEANUP_LOCK = 738201
BILLING_LOCK = 738202
HEALTH_CHECK_LOCK = 738203

# Strategy 2: Hash-based (deterministic from string)
import hashlib
def _lock_id(name):
    return int(hashlib.md5(name.encode()).hexdigest()[:8], 16)

# _lock_id('instance_cleanup') -> consistent integer

Per-Record Locking

Sometimes you want to allow parallel processing but prevent two workers from processing the same record:

def _process_batch(self):
    records = self.env['dm.instance'].sudo().search([
        ('cleanup_state', '=', 'pending'),
    ], limit=20)

    for record in records:
        lock_id = 900000 + record.id  # unique per record
        self.env.cr.execute(
            "SELECT pg_try_advisory_lock(%s)", [lock_id]
        )
        if not self.env.cr.fetchone()[0]:
            continue  # another worker is handling this record

        try:
            record._process_cleanup()
            self.env.cr.commit()
        except Exception:
            self.env.cr.rollback()
            _logger.exception('Cleanup failed for instance %s', record.id)
        finally:
            self.env.cr.execute(
                "SELECT pg_advisory_unlock(%s)", [lock_id]
            )
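The `900000 + record.id` offset works, but it can still collide with other hardcoded lock IDs as record IDs grow. PostgreSQL also offers a two-argument form, pg_try_advisory_lock(int, int), which takes two 32-bit keys; using one key as a per-job namespace and the other as the record ID rules out cross-job collisions. A sketch, where lock_key is a hypothetical helper and not part of Odoo:

```python
import hashlib

def lock_key(job_name, record_id):
    """Return a (classid, objid) pair for pg_try_advisory_lock(int, int).

    classid namespaces the cron job: a stable 31-bit hash of its name,
    masked so it fits PostgreSQL's signed int4. objid is the record ID.
    """
    classid = int(hashlib.md5(job_name.encode()).hexdigest()[:8], 16) & 0x7FFFFFFF
    return classid, record_id

# Usage inside the batch loop:
# self.env.cr.execute(
#     "SELECT pg_try_advisory_lock(%s, %s)",
#     lock_key('instance_cleanup', record.id),
# )
```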

Error Handling and Lock Safety

The most critical rule: always release locks. A leaked advisory lock persists until the database session ends (which could be hours for a persistent Odoo worker).

# WRONG — lock leaks on exception
def _cron_process(self):
    self.env.cr.execute("SELECT pg_try_advisory_lock(1001)")
    if not self.env.cr.fetchone()[0]:
        return
    self._do_work()  # if this raises, lock is never released

# CORRECT — always release
def _cron_process(self):
    self.env.cr.execute("SELECT pg_try_advisory_lock(1001)")
    if not self.env.cr.fetchone()[0]:
        return
    try:
        self._do_work()
    except Exception:
        _logger.exception('Cron job failed')
    finally:
        self.env.cr.execute("SELECT pg_advisory_unlock(1001)")
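To avoid repeating the acquire/try/finally dance in every cron method, the pattern can be factored into a small context manager. This is a sketch, not part of Odoo's API; advisory_lock is a hypothetical helper name:

```python
from contextlib import contextmanager

@contextmanager
def advisory_lock(cr, lock_id):
    """Yield True if the advisory lock was acquired, False otherwise.

    When acquired, the lock is always released on exit, even if the
    body raises an exception.
    """
    cr.execute("SELECT pg_try_advisory_lock(%s)", [lock_id])
    acquired = cr.fetchone()[0]
    try:
        yield acquired
    finally:
        if acquired:
            cr.execute("SELECT pg_advisory_unlock(%s)", [lock_id])

# A cron method then reduces to:
# with advisory_lock(self.env.cr, self.LOCK_ID) as acquired:
#     if not acquired:
#         return  # another worker holds the lock; log and skip
#     self._do_work()
```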

Cron Job Definition in XML

The scheduled action XML that calls the advisory-locked method:

<record id="cron_cleanup_instances" model="ir.cron">
    <field name="name">Instance Cleanup Worker</field>
    <field name="model_id" ref="model_dm_instance_cleanup_worker"/>
    <field name="state">code</field>
    <field name="code">model._cron_cleanup_instances()</field>
    <field name="interval_number">5</field>
    <field name="interval_type">minutes</field>
    <field name="numbercall">-1</field>
    <field name="active" eval="True"/>
</record>

Monitoring Advisory Locks

Check which advisory locks are currently held:

SELECT pid, classid, objid, granted
FROM pg_locks
WHERE locktype = 'advisory';

This is useful for debugging stuck cron jobs — if a lock appears held but the worker process is dead, you may need to terminate the owning database session with SELECT pg_terminate_backend(pid).

Best Practices

  • Always use pg_try_advisory_lock (non-blocking) for cron jobs — never the blocking variant
  • Always release locks in a finally block
  • Use session-level locks for long-running jobs, transaction-level for short operations
  • Log when a lock acquisition is skipped so you can monitor in production
  • Use hardcoded or hash-based lock IDs — never random
  • For per-record locking, offset the lock ID by a large base number to avoid collisions
  • Commit after processing each record to release row-level locks and reduce transaction size