Local LLMs

Your AI, your hardware, your rules. Email data never leaves your machine.

Complete Privacy
Local LLMs mean your email content never touches external servers. Perfect for sensitive business communications or privacy-conscious users.

How It Works

Same workflow as Claude Code, Gemini, or Codex—just running on your own hardware. You talk to the AI. The AI runs Pontius commands. Your data stays local.

The key requirement: you need a CLI wrapper that lets your local model execute shell commands. Options include:

  • Aider — Works with Ollama and other local backends
  • Open Interpreter — Runs local models with command execution
  • Custom scripts — Build your own with Ollama's API
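The "custom scripts" option can be quite small. Below is a minimal, hypothetical sketch of the command-execution half of such a wrapper: it pulls a fenced shell block out of a model reply and runs it. In a real script the reply would come from Ollama's chat endpoint (POST to http://localhost:11434/api/chat); here a canned reply stands in so the example is self-contained, and the Pontius call is replaced by a harmless echo.

```python
import re
import subprocess

def extract_command(reply: str):
    """Pull the first fenced shell block out of a model reply, if any."""
    match = re.search(r"```(?:sh|bash|shell)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1).strip() if match else None

def run_command(command: str) -> str:
    """Execute the command and return its output for the next chat turn."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

# A reply the model might produce when asked to list the inbox.
# (Canned here; a real wrapper would get this from Ollama's API.)
reply = "I'll check your inbox.\n```sh\necho 'pontius list (dry run)'\n```"
command = extract_command(reply)
print(run_command(command))
```

Feeding the command's output back into the next chat turn is what closes the loop: the model sees what Pontius printed and decides what to run next.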

Ollama Setup

Ollama is the easiest way to run local models:

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a capable model
ollama pull llama3.1:8b
# For better performance (requires more RAM)
ollama pull llama3.1:70b

Then connect it to a CLI wrapper like Aider:

# Install Aider
pip install aider-chat
# Point Aider at the local Ollama server (default port)
export OLLAMA_API_BASE=http://127.0.0.1:11434
# Run with Ollama
aider --model ollama/llama3.1:8b

First-Time Setup

Once your CLI wrapper is running, ask it to set up Pontius:

You: Can you run pontius setup? I want to connect my email.

The AI runs the setup command and gets a step-by-step guide. It will walk you through connecting your accounts, setting up aliases, and initializing your knowledge base.

Daily Workflow

Tell your AI to clear your email:

You: Let's clear email.
AI: Running 'pontius blitz'...
I've loaded your preferences. Running 'pontius list'...
Work Account (3 new):
UID FROM SUBJECT
37801 [email protected] Question about pricing
37800 [email protected] Payment received
37799 [email protected] Weekly digest
I see:
- 1 customer question → needs response
- 1 billing receipt → archiving
- 1 newsletter → archiving
Stabilizers on (learning) or auto (autonomous)?

The Three Commands

Same three commands as every other AI integration:

pontius setup

Connects your email accounts. The AI gets a detailed guide and walks you through each step.

pontius blitz

Starts an email session. This gives the AI your preferences, the full command reference, and points it at your knowledge base.

pontius help

The full command reference. The AI can run this anytime it needs to check syntax or discover features.

The AGENTS.md File

Create an AGENTS.md file wherever you start your AI sessions:

AGENTS.md
# Email Assistant
You are my AI email assistant. You operate Pontius on my behalf.
## How Pontius Works
Pontius is a CLI email client designed for AI-first workflows. Three commands matter:
- `pontius setup` - Connect email accounts (run if not configured)
- `pontius blitz` - Start an email session (run when I want to clear email)
- `pontius help` - Full command reference (run anytime you need syntax)
## When I Say "Let's Clear Email"
1. Run `pontius blitz` to load my preferences and the session prompt
2. Run `pontius list` to see my inbox
3. Categorize: noise (archive/move), needs response, needs my input
4. Handle noise autonomously based on my patterns
5. Draft responses for my approval (or send if you know the pattern)
6. Ask about anything you're unsure of
## The Knowledge Base
Location: ~/.config/pontius/knowledge/
This is critical. Before responding to any email, check the relevant folders:
- contacts/ - Who this person is, our relationship, their preferences
- templates/ - My proven responses for common situations
- rules/ - Per-account behavior (Work vs Personal)
- policies/ - Hard rules that override everything
- patterns/ - Triggers for automatic actions
## After Each Email Session
Update the knowledge base with what you learned:
- New contacts → add to contacts/contacts.json
- Repeated responses → create a template
- New patterns → add to patterns/patterns.json
- Corrections I made → update the relevant file
The knowledge base should grow every session. Don't skip this step.
## My Preferences
- Sign off: "Best," followed by my name on a new line
- Default tone: Professional but warm
- Always ask before: Anything financial, customer-facing, or to new contacts
- Archive autonomously: Billing receipts, shipping notifications, newsletters
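The knowledge-base updates described in the file above are plain JSON edits under ~/.config/pontius/knowledge/. Here is a minimal, hypothetical sketch of the contacts step, assuming a flat contacts.json keyed by email address (the real schema is whatever Pontius initializes; the function and field names are illustrative):

```python
import json
import tempfile
from pathlib import Path

def remember_contact(path: Path, email: str, note: str) -> dict:
    """Merge a newly seen contact into contacts.json, creating it if needed."""
    contacts = json.loads(path.read_text()) if path.exists() else {}
    contacts.setdefault(email, {})["note"] = note
    path.write_text(json.dumps(contacts, indent=2))
    return contacts

# Demo against a temporary file so nothing real is touched.
# In practice the path would be ~/.config/pontius/knowledge/contacts/contacts.json.
with tempfile.TemporaryDirectory() as tmp:
    demo = Path(tmp) / "contacts.json"
    remember_contact(demo, "customer@example.com", "asked about pricing; prefers short replies")
    print(demo.read_text())
```

Reading the existing file before writing is what lets the knowledge base grow across sessions instead of being overwritten each time.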

Model Requirements

For effective email management, your model should:

  • Follow multi-step instructions reliably
  • Maintain context across a conversation
  • Execute shell commands when asked
  • Handle JSON files (for knowledge base updates)

Recommended minimum: 8B-parameter models. 70B models perform significantly better but require more resources (32GB+ RAM).

Start with Cloud, Migrate to Local
We recommend starting with Claude Code to learn the workflow and build your knowledge base, then migrating to local LLMs once you're comfortable. Your knowledge base transfers seamlessly.

Troubleshooting

Model Struggles with Commands

  • Use larger models (70B+) for better instruction following
  • Be more explicit in your requests: "Run the command pontius list"
  • Check that your CLI wrapper supports command execution

Slow Response Times

  • Use quantized models (Q4_K_M) for faster inference
  • Consider GPU acceleration if available
  • Smaller models (8B) respond faster than 70B models