Setup Target: under 4 minutes

Get Free AI Power
11 Providers + Local AI

Connect free cloud providers, save keys in your browser, and auto-discover local AI running on your device. Start with Groq + Gemini for the fastest path.

Quick-Start Paths

🟢 Path A: Free Starter (Recommended)

  1. Create a Groq API key at console.groq.com.
  2. Create a Gemini API key at aistudio.google.com.
  3. Optionally add Cerebras or SambaNova (also free).
  4. Paste keys in the form below and Save.
  5. Open WorkBench and start your first mission.
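Once a key is saved, any OpenAI-compatible client can talk to Groq. Below is a minimal sketch of how such a request is assembled; the endpoint is Groq's documented OpenAI-compatible path, while the model id shown is an assumption — check console.groq.com for the current list.

```python
# Sketch: build (but do not send) a chat request against Groq's
# OpenAI-compatible endpoint. The model id is an assumption; see
# console.groq.com for currently available models.
import json
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_groq_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Construct a chat-completion POST request with the saved key."""
    payload = {
        "model": "llama-3.3-70b-versatile",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_groq_request("gsk_your_key_here", "Hello!")
# urllib.request.urlopen(req) would send it once a real key is pasted in.
```

The same shape works for other OpenAI-compatible providers; only the base URL, key, and model id change.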

🖥️ Path B: Local AI (No API Keys)

  1. Install Ollama from ollama.ai (free, one-click install).
  2. Run: ollama pull llama3.2 or ollama pull qwen2.5-coder.
  3. Click "Scan for Local AI" below; running runtimes are detected automatically.
  4. Switch runtime to Hybrid or Local.
  5. 100% private — nothing leaves your device.
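Detecting a running Ollama instance is straightforward: its REST API answers GET /api/tags on its default port 11434 with the list of pulled models. A minimal sketch:

```python
# Sketch: check whether a local Ollama server is running and list the
# models it has pulled, using Ollama's GET /api/tags endpoint on the
# default port 11434.
import json
import urllib.request
from urllib.error import URLError

def list_ollama_models(host: str = "http://localhost:11434") -> list:
    """Return model names from a running Ollama instance, or [] if none."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=1) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        return []  # Ollama not running (or not reachable) on this host

print(list_ollama_models())  # e.g. ['llama3.2:latest'] after a pull, else []
```

An empty list simply means no server was reachable, so the check is safe to run even before Ollama is installed.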

⚡ Path C: Power Setup

  1. Complete Path A (free cloud).
  2. Install Ollama for local overflow (Path B).
  3. Add Together AI for long-context Builder tasks.
  4. Add DeepSeek for deep reasoning (very cheap).
  5. Enable Hybrid mode for best performance.
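The value of Hybrid mode is fallback: if one provider is rate-limited or down, the request moves to the next. This sketch illustrates the idea with stand-in callables; it is not EON's actual routing code.

```python
# Sketch of Hybrid-mode routing: try providers in priority order and
# fall back on failure. The provider callables are stand-ins, not EON's
# real API surface.
def ask_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # rate limit, network error, etc.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Usage with stand-ins (a real setup would wrap Groq, Gemini, Ollama):
def flaky(prompt):  # simulates a rate-limited cloud provider
    raise TimeoutError("429 rate limited")

def local(prompt):  # simulates a local Ollama model
    return f"local answer to: {prompt}"

print(ask_with_fallback("hi", [("groq", flaky), ("ollama", local)]))
# → local answer to: hi
```

This is why saving more than one key pays off: each extra provider is another rung on the fallback ladder.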

🖥️ Local AI Auto-Discovery

EON scans common local AI runtime ports and detects any running instance automatically. Works with Ollama, LM Studio, Jan, llama.cpp, GPT4All, Text Gen WebUI, and Msty.

Click scan to detect any local AI runtimes running on this device. Results appear instantly — no configuration needed if you already have Ollama or similar installed.
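The scan amounts to probing well-known localhost ports. The sketch below uses commonly cited default ports for each runtime; they are assumptions (and configurable per tool), not EON's exact scan list.

```python
# Sketch of local-runtime auto-discovery: probe commonly used default
# ports on localhost. The port numbers are typical defaults and may
# differ if you changed them; this illustrates the idea, not EON's
# exact scan list.
import socket

COMMON_PORTS = {
    11434: "Ollama",
    1234: "LM Studio",
    1337: "Jan",
    8080: "llama.cpp server",
    4891: "GPT4All",
    5000: "Text Gen WebUI",
}

def scan_local_runtimes(host="127.0.0.1", timeout=0.2):
    """Return names of runtimes with an open port on this machine."""
    found = []
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means port is open
                found.append(name)
    return found

print(scan_local_runtimes())  # e.g. ['Ollama'] if it is running, else []
```

A follow-up request to each open port (such as Ollama's /api/tags) can then confirm which runtime is actually listening there.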

Don't have local AI yet?

Ollama is the easiest way to run AI models locally — works on Mac, Linux, and Windows. Pull models like Llama 3.2, Qwen 2.5, Mistral, Phi-4, or DeepSeek with a single command. All private, all free.

Save Provider Keys

Keys are stored only in this browser (localStorage). Nothing is sent to any server. Fill in as many or as few as you need; the more providers you add, the better your fallback coverage.
