Local LLMs + Pontius
Your AI, your hardware, your rules. Run AI-powered email on hardware you own. No cloud APIs, no data leaving your machine.
Why local?
Your emails never leave your machine
Cloud AI is powerful, but your email content passes through external servers. With local models, everything stays on hardware you control.
Zero API calls
No external servers. No third-party access. Your email content never leaves your computer.
Offline capable
Works without internet after initial setup. Perfect for travel, air-gapped environments, or just peace of mind.
Fixed costs
No per-token billing, no API limits, no surprise charges. Just electricity and hardware you already own.
“Your correspondence belongs to you. Not OpenAI. Not Google. Not anyone.”
Whether you work in law, healthcare, or finance, or simply take privacy seriously, local AI means your email content stays private. No exceptions.
What you get
Full email management, fully local
Every Pontius feature works with local models. Same workflow as cloud AI, without any of the privacy tradeoffs.
Privacy-sensitive industries
Legal, healthcare, finance — industries where email content can't touch external APIs. Local models keep everything in-house.
No vendor lock-in
Switch models anytime. Today it's Llama, tomorrow it's the next open-source breakthrough. Your workflow stays the same.
Customization
Fine-tune models on your own patterns. Create a model that truly understands your communication style.
Knowledge base
Build a persistent knowledge base that works across any model. Switch from local to cloud and back — everything comes with you.
Supported tools
Works with your local setup
Ollama
Easiest way to run local models. One command to install and run.
ollama run llama3
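Ollama also exposes a local HTTP API (port 11434 by default), so scripts can query your model over localhost. A minimal sketch:
# The server usually starts with the app; run it manually if needed
$ ollama serve
# Generate a completion over localhost; nothing leaves your machine
$ curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Draft a two-line out-of-office reply."}'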
LM Studio
Desktop app with clean UI. Great for model discovery.
GUI + local server
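LM Studio's local server speaks an OpenAI-compatible API, by default on port 1234. A minimal sketch; the model identifier here is a placeholder and depends on what you've loaded in the app:
$ curl http://localhost:1234/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "local-model", "messages": [{"role": "user", "content": "Summarize my last email thread."}]}'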
llama.cpp
Raw performance for power users. Maximum efficiency.
./main -m model.gguf
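A one-shot run, assuming a quantized GGUF model already on disk (recent llama.cpp builds name the binary llama-cli instead of main):
# -p sets the prompt, -n caps the number of tokens generated
$ ./main -m model.gguf -p "Draft a short reply confirming Thursday's meeting." -n 256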
Any CLI tool
If it runs terminal commands, it works with Pontius.
pontius list
Getting started
Quick Start with Ollama
1. Install Pontius
$ curl -fsSL https://getpontius.com/install.sh | sh
2. Add your email account
$ pontius
# Follow the TUI prompts to add your email
3. Activate your license
$ pontius activate YOUR-LICENSE-KEY
License activated successfully.
4. Set up your local model
# Install Ollama
$ curl -fsSL https://ollama.com/install.sh | sh
# Pull a capable model
$ ollama pull llama3:8b
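Before moving on, a quick check that the model downloaded and responds:
# List installed models
$ ollama list
# One-shot smoke test
$ ollama run llama3:8b "Reply with one short sentence."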
5. Provide context to your model
You have access to Pontius for email management. Commands:
- pontius list # List recent emails
- pontius read <uid> # Read an email
- pontius reply <uid> "msg" # Reply to an email
- pontius archive <uids> # Archive emails
Execute commands to help manage the user's inbox.
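One way to make this context persistent is to bake it into an Ollama Modelfile as the system prompt, so you don't paste it each session. A sketch, assuming the llama3:8b model from step 4; the name pontius-assistant is just an example:
# Modelfile: wraps llama3:8b with the Pontius context above
FROM llama3:8b
SYSTEM """You have access to Pontius for email management. Execute pontius commands (list, read, reply, archive) to help manage the user's inbox."""
$ ollama create pontius-assistant -f Modelfile
$ ollama run pontius-assistant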
Model guide
Recommended models for email
| Model | Size | Best For | RAM |
|---|---|---|---|
| Llama 3 70B | 40GB | Best quality | 48GB+ |
| Llama 3 8B | 4.7GB | Fast, everyday use | 8GB+ |
| Mistral 7B | 4.1GB | Efficient, good quality | 8GB+ |
| Mixtral 8x7B | 26GB | Balanced | 32GB+ |
Start with an 8B model and upgrade if you need better quality.
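Upgrading later is one pull away; your Pontius workflow doesn't change:
# See installed models and their sizes
$ ollama list
# Pull a larger model when you want better quality
$ ollama pull llama3:70b
$ ollama run llama3:70b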
The secret sauce
It gets smarter every session
Unlike AI tools that start fresh every time, Pontius builds a persistent knowledge base. Contacts learn preferred tones. Common scenarios get templates. Rules automate the routine.
Your knowledge base is portable. Start local, switch to cloud, come back to local — everything you've built follows you.
Learn about the Knowledge Base →
Ready to get ahead?
Private AI email management on your own hardware. No compromises.
We believe you should own your software. No monthly rent extraction. No "stop paying, lose access." Buy it, own it, keep it forever.