
Odoo Scheduled Actions (Cron Jobs): Complete Developer Guide

DeployMonkey Team · March 22, 2026 · 12 min read

What Are Scheduled Actions?

Scheduled actions (cron jobs) are background tasks that Odoo runs at defined intervals. They handle recurring operations: sending email digests, processing payment reminders, running inventory reorder checks, cleaning up old data, and synchronizing with external systems.

Creating a Scheduled Action

Via XML (Recommended for Modules)

<record id="ir_cron_process_reminders" model="ir.cron">
    <field name="name">Process Payment Reminders</field>
    <field name="model_id" ref="model_account_move"/>
    <field name="state">code</field>
    <field name="code">model._process_payment_reminders()</field>
    <field name="interval_number">1</field>
    <field name="interval_type">days</field>
    <field name="numbercall">-1</field>  <!-- -1 = unlimited; field removed in Odoo 17.0+ -->
    <field name="active" eval="True"/>
    <field name="priority">10</field>
</record>
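The XML file only takes effect if it is declared in the module manifest. A minimal excerpt (the module name and file path here are illustrative, not prescribed):

```python
# __manifest__.py (excerpt) — list the cron XML under 'data'
# or the record is never loaded. Paths are illustrative.
{
    'name': 'My Module',
    'version': '1.0',
    'depends': ['account'],
    'data': [
        'data/ir_cron_data.xml',  # the scheduled-action record above
    ],
}
```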

The Python Method

import logging

from odoo import fields, models

_logger = logging.getLogger(__name__)


class AccountMove(models.Model):
    _inherit = 'account.move'

    def _process_payment_reminders(self):
        """Send payment reminders for overdue invoices.
        Called by the scheduled action.
        """
        overdue = self.search([
            ('payment_state', '!=', 'paid'),
            ('move_type', '=', 'out_invoice'),
            ('invoice_date_due', '<', fields.Date.today()),
        ])
        for invoice in overdue:
            invoice._send_payment_reminder()
        _logger.info("Processed %d payment reminders", len(overdue))

Interval Types

| Type | Value | Use Case |
| --- | --- | --- |
| minutes | interval_type='minutes' | Real-time sync, queue processing |
| hours | interval_type='hours' | Periodic checks, data aggregation |
| days | interval_type='days' | Daily reports, cleanup tasks |
| weeks | interval_type='weeks' | Weekly digests, review tasks |
| months | interval_type='months' | Monthly reports, archival |
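Under the hood, Odoo advances a cron's `nextcall` by `interval_number` units of `interval_type` after each run. A standalone sketch of that arithmetic (simplified: Odoo itself uses `dateutil.relativedelta`, which also handles the calendar-aware `months` case omitted here):

```python
from datetime import datetime, timedelta

# Simplified sketch: map interval_type to a timedelta factory.
# 'months' is left out because it needs calendar-aware arithmetic
# (Odoo uses dateutil.relativedelta for that).
_INTERVALS = {
    'minutes': lambda n: timedelta(minutes=n),
    'hours': lambda n: timedelta(hours=n),
    'days': lambda n: timedelta(days=n),
    'weeks': lambda n: timedelta(weeks=n),
}

def next_call(last_call: datetime, interval_number: int, interval_type: str) -> datetime:
    """Return the next scheduled run time after last_call."""
    return last_call + _INTERVALS[interval_type](interval_number)
```

For example, a daily cron that last ran at 2026-03-22 02:00 becomes due again at 2026-03-23 02:00.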

Common Patterns

Pattern 1: Batch Processing with Limits

def _process_queue(self):
    """Process items in batches to avoid timeout."""
    BATCH_SIZE = 100
    items = self.search([
        ('state', '=', 'pending'),
    ], limit=BATCH_SIZE, order='create_date asc')

    for item in items:
        try:
            item._process()
            self.env.cr.commit()  # Commit per item to save progress
        except Exception as e:
            self.env.cr.rollback()
            _logger.error("Failed to process %s: %s", item.id, e)
            item.write({'state': 'error', 'error_message': str(e)})
            self.env.cr.commit()  # Persist the error state before the next item
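The per-item try/except is what keeps one bad record from poisoning the whole batch. Stripped of the ORM and commits, the control flow looks like this (standalone sketch, plain Python):

```python
def process_batch(items, handler):
    """Run handler on each item; collect failures instead of aborting.
    Standalone sketch of the error-isolation pattern above — no ORM,
    no commits, just the control flow."""
    done, failed = [], []
    for item in items:
        try:
            handler(item)
            done.append(item)
        except Exception as exc:
            failed.append((item, str(exc)))
    return done, failed
```

A handler that raises on one item still lets every other item complete, which is exactly the behavior you want from a nightly cron.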

Pattern 2: Idempotent Cron (Safe to Re-Run)

def _sync_external_data(self):
    """Sync data from external API. Safe to run multiple times."""
    last_sync = self.env['ir.config_parameter'].sudo().get_param(
        'my_module.last_sync_date', '2020-01-01'
    )
    # Fetch only records changed since the last sync
    # (external_api is a placeholder for your integration client)
    new_records = external_api.get_changes(since=last_sync)
    for record in new_records:
        self._upsert_from_external(record)
    # Update last sync timestamp
    self.env['ir.config_parameter'].sudo().set_param(
        'my_module.last_sync_date',
        fields.Datetime.now().isoformat()
    )
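The watermark (`last_sync_date`) is what makes the job idempotent: each run only picks up changes since the previous run, so re-running it is harmless. A toy standalone version of the same shape (plain dicts stand in for `ir.config_parameter` and the external API):

```python
def sync(store, state, fetch_changes, now):
    """Watermark-based sync: fetch only what changed since the last run,
    upsert by external id, then advance the watermark.
    store/state are plain dicts standing in for ORM models/parameters."""
    last_sync = state.get('last_sync_date', '2020-01-01')
    for rec in fetch_changes(since=last_sync):
        store[rec['id']] = rec           # upsert: insert or overwrite
    state['last_sync_date'] = now        # advance the watermark
    return store
```

Running it a second time with an advanced watermark fetches nothing and changes nothing, which is the idempotency property the pattern is after.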

Pattern 3: Advisory Lock (Prevent Duplicate Execution)

def _critical_job(self):
    """Only one instance should run at a time."""
    lock_id = 123456  # Unique ID for this job
    self.env.cr.execute(
        "SELECT pg_try_advisory_lock(%s)", [lock_id]
    )
    if not self.env.cr.fetchone()[0]:
        _logger.info("Job already running, skipping")
        return
    try:
        # Do the work
        self._do_critical_work()
    finally:
        self.env.cr.execute(
            "SELECT pg_advisory_unlock(%s)", [lock_id]
        )
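The shape of the pattern (acquire non-blocking, skip if held, always release) can be seen in miniature with an in-process lock. This is a sketch only: `threading.Lock` guards a single process, whereas the Postgres advisory lock above coordinates across all Odoo workers and servers sharing the database:

```python
import threading

_job_lock = threading.Lock()

def critical_job(work):
    """Run work() only if no other caller holds the lock; otherwise skip.
    Mirrors the try-lock / skip / unlock shape above, in-process only."""
    if not _job_lock.acquire(blocking=False):
        return 'skipped'  # another run is already in progress
    try:
        return work()
    finally:
        _job_lock.release()
```

A second invocation that arrives while the first is still running simply returns 'skipped' instead of piling up behind it.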

Common Pitfalls

  • No error handling — A crash in one record stops processing all remaining records. Use try/except per item.
  • No batch limits — Processing 100,000 records in one cron run causes timeout or OOM. Use LIMIT and process in batches.
  • Not committing progress — If the cron crashes at record 999 of 1000, all 999 are rolled back. Commit per batch.
  • Running during peak hours — Heavy crons compete with user requests. Schedule for off-peak times.
  • No logging — When crons fail silently, nobody notices. Always log start, progress, and completion.
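For the logging pitfall, a thin wrapper that logs start, failures, and completion costs almost nothing. A hedged sketch (the function and job names here are illustrative, not an Odoo API):

```python
import logging

def run_logged(job_name, items, handler, logger=None):
    """Log start, per-item failures, and completion around a batch job.
    Illustrative sketch; combine with the per-item try/except pattern."""
    log = logger or logging.getLogger(job_name)
    log.info("%s: starting (%d items)", job_name, len(items))
    ok = 0
    for item in items:
        try:
            handler(item)
            ok += 1
        except Exception:
            log.exception("%s: item %r failed", job_name, item)
    log.info("%s: finished, %d/%d succeeded", job_name, ok, len(items))
    return ok
```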

Monitoring Cron Jobs

-- Check cron status
SELECT name, active, interval_number, interval_type,
       lastcall, nextcall, numbercall, priority
FROM ir_cron
WHERE active = True
ORDER BY nextcall;

-- Check for stuck crons (lastcall too old)
SELECT name, lastcall, nextcall
FROM ir_cron
WHERE active = True
  AND lastcall < NOW() - INTERVAL '2 days';

DeployMonkey Cron Monitoring

DeployMonkey's AI agent monitors scheduled actions: detects stuck crons, long-running jobs, and failed executions. It alerts you with specific diagnostics — which cron, why it failed, and how to fix it.