
How to Set Up Local AI on Your Computer

Private, low-cost, and fast for day-to-day tasks.

Step 1: Install Ollama

Download the installer from ollama.com and complete the installation. Once installed, Ollama runs a local server that other apps can connect to.
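To confirm the install worked, you can check that the `ollama` CLI is on your PATH. This is a quick sketch; the exact version string will vary by release.

```shell
# Confirm the CLI is on PATH after installation; print the version if so.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
  INSTALLED=yes
else
  echo "ollama not found on PATH - re-run the installer or restart your shell"
  INSTALLED=no
fi
```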

Step 2: Pull your first model

Run:

ollama pull llama3.2:3b   # download the model weights
ollama run llama3.2:3b    # start an interactive chat with the model

Step 3: Keep Ollama running

WorkBench periodically checks the standard localhost runtime endpoints. If the Ollama server is running, it appears in Local Discovery.
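You can run the same kind of check yourself. Ollama's server listens on port 11434 by default, and its `/api/tags` route lists the installed models; a minimal probe, assuming the default port:

```shell
# Probe the default Ollama endpoint (localhost:11434). The /api/tags route
# lists installed models; curl fails fast if nothing is listening.
if curl -fsS --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
  STATUS="running"
else
  STATUS="unreachable"
fi
echo "Ollama server: $STATUS"
```

If the probe reports unreachable, start the Ollama app (or `ollama serve`) and re-run discovery.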

Step 4: Connect in EON

Open Get Free AI Power and click Discover Local Models. When local models are found, Runtime mode switches to Hybrid automatically.

Recommended by hardware tier

  • Low tier: 1B-3B parameter models for drafting and summarization
  • Medium tier: 7B parameter models for general tasks
  • High tier: 14B+ parameter models for deeper reasoning
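As a starting point, the tiers above might map to pulls like the following. The model names here are examples from the public Ollama library, not requirements; pick whatever fits your RAM or VRAM.

```shell
# Suggested starter models per hardware tier (example names only).
LOW="llama3.2:1b"      # low tier: fast drafting and summarization
MEDIUM="mistral:7b"    # medium tier: general tasks
HIGH="qwen2.5:14b"     # high tier: deeper reasoning

# Pull only if ollama is actually installed on this machine.
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MEDIUM"
fi
```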