Odoo + Ollama: Running AI Locally with Your ERP

DeployMonkey Team · March 22, 2026 · 12 min read

Why Run AI Locally?

Cloud AI APIs (Claude, GPT-4) are powerful but have three drawbacks: cost per token, data leaves your network, and API rate limits. Running AI locally with Ollama eliminates all three. Your prompts, Odoo data, and generated code never leave your server. There are no API costs after the initial hardware investment. And there are no rate limits — you can run as many queries as your hardware handles.

For Odoo specifically, local AI is attractive because ERP data is sensitive: financial records, customer information, employee data, and trade secrets. Keeping AI processing on-premise satisfies compliance requirements without sacrificing AI capabilities.

What Is Ollama?

Ollama is an open-source tool for running large language models locally. It provides a simple CLI and API to download, run, and manage models like Llama 3, Mistral, CodeLlama, and dozens of others. It handles GPU acceleration, memory management, and model loading automatically.

Setup

Install Ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# macOS
brew install ollama

# Start the service
ollama serve

Download Models

# General purpose (best quality for Odoo tasks)
ollama pull llama3:70b

# Faster, smaller alternative
ollama pull llama3:8b

# Code-focused (best for Odoo module generation)
ollama pull codellama:34b

# Balanced quality/speed
ollama pull mistral:7b
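With a model pulled and `ollama serve` running, a quick sanity check is a single request against the local HTTP API. A minimal sketch in Python (the helper names here are our own, and this assumes the default port 11434):

```python
import requests  # third-party: pip install requests

OLLAMA_URL = 'http://localhost:11434/api/chat'

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        'model': model,
        'messages': [{'role': 'user', 'content': prompt}],
        'stream': False,  # ask for one complete reply instead of streamed chunks
    }

def chat(model: str, prompt: str) -> str:
    """Send a single-turn chat request and return the reply text."""
    resp = requests.post(OLLAMA_URL, json=build_chat_payload(model, prompt),
                         timeout=120)
    resp.raise_for_status()
    return resp.json()['message']['content']

# chat('llama3:8b', 'Say hello in one word.')
```

If this returns text, Ollama is ready to wire up to Odoo.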

Hardware Requirements

| Model | RAM Required | GPU VRAM | Best For |
|---|---|---|---|
| Llama 3 8B | 8 GB | 6 GB | Quick queries, monitoring, simple analytics |
| Mistral 7B | 8 GB | 6 GB | Fast responses, chat interfaces |
| CodeLlama 34B | 32 GB | 24 GB | Odoo code generation |
| Llama 3 70B | 64 GB | 48 GB | Best quality for all tasks |
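Where do figures like these come from? A rough back-of-the-envelope rule (an approximation, not an official spec): memory ≈ parameter count × bytes per weight, plus roughly 20% overhead for the KV cache and runtime buffers. Ollama's default downloads are typically 4-bit quantized, i.e. about 0.5 bytes per weight:

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int = 4,
                    overhead: float = 1.2) -> float:
    """Ballpark RAM estimate in GB: weight storage at the given
    quantization width, plus ~20% for KV cache and runtime buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

print(estimate_ram_gb(8))   # ~4.8 GB for an 8B model at 4-bit
print(estimate_ram_gb(70))  # ~42 GB for a 70B model at 4-bit
```

The table's numbers are higher than these estimates because they leave headroom for long contexts and the rest of the system.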

Connecting Ollama to Odoo

import requests
import xmlrpc.client
import json

# Ollama API (running locally)
OLLAMA_URL = 'http://localhost:11434/api/chat'

# Odoo connection (replace the credentials with your own)
ODOO_URL = 'http://localhost:8069'
DB, USERNAME, PASSWORD = 'mydb', 'admin', 'admin'

common = xmlrpc.client.ServerProxy(f'{ODOO_URL}/xmlrpc/2/common')
uid = common.authenticate(DB, USERNAME, PASSWORD, {})
odoo_models = xmlrpc.client.ServerProxy(f'{ODOO_URL}/xmlrpc/2/object')

def ask_odoo_with_local_ai(question: str) -> str:
    """Query Odoo data using local AI."""

    # Step 1: Ask the local model what Odoo query to run
    planning_response = requests.post(OLLAMA_URL, json={
        'model': 'llama3:8b',
        'messages': [{
            'role': 'system',
            'content': 'You are an Odoo data analyst. Given a question, '
                       'output the Odoo model, method, and domain needed '
                       'to answer it. Return JSON only.'
        }, {
            'role': 'user',
            'content': question
        }],
        'stream': False
    })
    raw = planning_response.json()['message']['content']
    # Small models sometimes wrap JSON in markdown fences; strip them first
    query_plan = json.loads(raw.strip().strip('`').removeprefix('json'))

    # Step 2: Execute the query against Odoo
    result = odoo_models.execute_kw(
        DB, uid, PASSWORD,
        query_plan['model'], query_plan['method'],
        [query_plan.get('domain', [])],
        query_plan.get('kwargs', {})
    )

    # Step 3: Ask the local model to format the answer
    format_response = requests.post(OLLAMA_URL, json={
        'model': 'llama3:8b',
        'messages': [{
            'role': 'user',
            'content': f'Question: {question}\n'
                       f'Data from Odoo: {json.dumps(result, default=str)}\n'
                       f'Provide a clear, formatted answer.'
        }],
        'stream': False
    })

    return format_response.json()['message']['content']

# Example
print(ask_odoo_with_local_ai(
    "How many open sales orders do we have this month?"
))
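One caveat with the sketch above: the model chooses which Odoo model and method to call, so a bad or adversarial query plan could call a method that writes or deletes records. A conservative guard is to whitelist read-only methods before executing the plan (the whitelist below is illustrative; trim it to what your integration actually needs):

```python
# Read-only ORM methods that are safe for an AI-generated plan to call
SAFE_METHODS = {'search', 'search_read', 'search_count', 'read', 'read_group'}

def validate_query_plan(plan: dict) -> dict:
    """Reject plans that are malformed or that call non-read-only methods."""
    if not isinstance(plan, dict) or 'model' not in plan or 'method' not in plan:
        raise ValueError('query plan must contain "model" and "method"')
    if plan['method'] not in SAFE_METHODS:
        raise ValueError(f"method {plan['method']!r} is not read-only")
    return plan

validate_query_plan({'model': 'sale.order', 'method': 'search_count',
                     'domain': [['state', '=', 'sale']]})   # passes
# validate_query_plan({'model': 'sale.order', 'method': 'unlink'})  # raises
```

Calling this between step 1 and step 2 turns a silent misfire into an explicit error.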

Use Cases for Local AI + Odoo

1. Data Analytics (Privacy-Sensitive)

Query financial data, employee information, and customer records without sending them to a cloud API. Ideal for: payroll analysis, customer segmentation, financial reporting.

2. Monitoring Agent

Run a local monitoring agent that checks Odoo logs and server metrics continuously. No API costs for high-frequency monitoring.
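A minimal sketch of such an agent (log path, pattern, and model name are placeholders): pre-filter log lines with a cheap regex, and only send batches that actually contain errors to the local model, so quiet iterations cost nothing at all:

```python
import re
import requests  # third-party: pip install requests

ERROR_PATTERN = re.compile(r'\b(ERROR|CRITICAL|Traceback)\b')

def find_error_lines(log_lines: list[str]) -> list[str]:
    """Return only the lines worth escalating to the model."""
    return [line for line in log_lines if ERROR_PATTERN.search(line)]

def summarize_errors(lines: list[str]) -> str:
    """Ask the local model for a summary of a batch of error lines."""
    resp = requests.post('http://localhost:11434/api/chat', json={
        'model': 'llama3:8b',
        'messages': [{'role': 'user',
                      'content': 'Summarize these Odoo log errors and suggest '
                                 'a likely cause:\n' + '\n'.join(lines)}],
        'stream': False,
    })
    return resp.json()['message']['content']

sample = ['INFO werkzeug: GET /web 200',
          'ERROR odoo.sql_db: connection pool is full']
print(find_error_lines(sample))  # only the ERROR line survives the filter
```

In a real agent you would tail the Odoo log file in a loop and call `summarize_errors` only when `find_error_lines` returns something.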

3. Code Generation (Air-Gapped)

Generate Odoo modules in environments without internet access. CodeLlama 34B produces decent Odoo code locally.

4. Helpdesk Chatbot

Run a customer-facing chatbot that never sends customer data to external servers. Important for GDPR, HIPAA, and SOC 2 compliance.

Local vs Cloud AI Quality Comparison

| Task | Llama 3 8B (Local) | Llama 3 70B (Local) | Claude Sonnet (Cloud) |
|---|---|---|---|
| Simple analytics | Good | Excellent | Excellent |
| Complex analytics | Fair | Good | Excellent |
| Odoo code generation | Poor-Fair | Good | Excellent |
| Log analysis | Good | Excellent | Excellent |
| Monitoring alerts | Good | Excellent | Excellent |
| Version-specific Odoo | Poor | Fair | Good (with KB) |

Bottom line: local models handle monitoring, simple analytics, and chatbot tasks well. For complex code generation and version-specific Odoo work, cloud models are still significantly better.

Hybrid Approach

The best setup combines local and cloud AI:

  • Local (Ollama) — Monitoring, log analysis, simple analytics, privacy-sensitive queries
  • Cloud (Claude/GPT) — Code generation, complex analysis, version-specific Odoo work

Route queries based on sensitivity and complexity. Sensitive data stays local; complex tasks go to the cloud.
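A sketch of such a router (the keyword list and task categories below are illustrative; a production router would likely classify at the record or field level rather than by keywords):

```python
# Keywords that mark a query as privacy-sensitive (illustrative list)
SENSITIVE_KEYWORDS = {'payroll', 'salary', 'employee', 'customer', 'invoice'}
# Task types local models handle poorly, per the comparison table above
COMPLEX_TASKS = {'code_generation', 'complex_analysis', 'version_specific'}

def route_query(question: str, task_type: str = 'analytics') -> str:
    """Decide whether a query runs on local Ollama or a cloud API.
    Sensitive data always stays local, even for complex tasks."""
    words = set(question.lower().split())
    if words & SENSITIVE_KEYWORDS:
        return 'local'
    if task_type in COMPLEX_TASKS:
        return 'cloud'
    return 'local'

print(route_query('Summarize payroll costs by department'))              # local
print(route_query('Generate a stock report module', 'code_generation'))  # cloud
```

Note the ordering: the sensitivity check runs first, so privacy wins over quality when the two rules conflict.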

Getting Started

Deploy Odoo on DeployMonkey and install Ollama on the same server or a nearby machine. Start with monitoring and analytics — local models handle these well. Upgrade to larger models (70B) or cloud APIs as your needs grow. DeployMonkey supports both approaches — built-in AI agent for managed monitoring, plus your own local AI for custom tasks.