🟢 Path A: Free Starter (Recommended)
- Create a Groq API key at console.groq.com.
- Create a Gemini API key at aistudio.google.com.
- Optionally add Cerebras or SambaNova (also free).
- Paste keys in the form below and Save.
- Open WorkBench and start your first mission.
Connect free cloud providers, save keys in your browser, and auto-discover local AI running on your device. Start with Groq + Gemini for the fastest path.
Run ollama pull llama3.2 or ollama pull qwen2.5-coder to download a starter model. EON scans common local AI runtime ports and detects any running instance automatically. Works with Ollama, LM Studio, Jan, llama.cpp, GPT4All, Text Gen WebUI, and Msty.
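The port scan described above can be sketched roughly as follows. The port numbers are common defaults for each runtime (e.g. 11434 for Ollama, 1234 for LM Studio) and are assumptions here, not EON's actual scan list; the probe function is injectable so the discovery logic can be exercised without live servers.

```javascript
// Assumed default ports for popular local AI runtimes (not EON's real list).
const CANDIDATE_PORTS = {
  11434: "Ollama",
  1234: "LM Studio",
  1337: "Jan",
  8080: "llama.cpp",
  4891: "GPT4All",
};

// probe(port) -> Promise<boolean>; injected so it can be swapped out in tests.
async function discoverRuntimes(probe, ports = CANDIDATE_PORTS) {
  const found = [];
  for (const [port, name] of Object.entries(ports)) {
    if (await probe(Number(port))) found.push({ port: Number(port), name });
  }
  return found;
}

// A minimal HTTP probe: any response within the timeout means something
// is listening on that port.
async function httpProbe(port) {
  try {
    await fetch(`http://localhost:${port}/`, {
      signal: AbortSignal.timeout(500),
    });
    return true;
  } catch {
    return false;
  }
}
```

In a real scanner the probes would run concurrently (e.g. via `Promise.all`) so the whole scan completes within one timeout window.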
Click scan to detect any local AI runtimes running on this device. Results appear instantly — no configuration needed if you already have Ollama or similar installed.
Ollama is the easiest way to run AI models locally — works on Mac, Linux, and Windows. Pull models like Llama 3.2, Qwen 2.5, Mistral, Phi-4, or DeepSeek with a single command. All private, all free.
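Once Ollama is running, a client can confirm which models have been pulled by querying its REST API. A minimal sketch, assuming Ollama's default endpoint http://localhost:11434 and its GET /api/tags route; the fetch function is injectable so the logic is testable offline:

```javascript
// Sketch: list the models pulled into a local Ollama instance.
// GET /api/tags returns { models: [{ name, ... }, ...] }.
async function listLocalModels(fetchFn = fetch, base = "http://localhost:11434") {
  const res = await fetchFn(`${base}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  const data = await res.json();
  return data.models.map((m) => m.name); // e.g. ["llama3.2:latest"]
}
```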
Keys are stored only in this browser (localStorage). Nothing is sent to any server. Fill in as many or as few as you need — more providers means better fallback coverage.
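Browser-only key storage as described above can be sketched like this. The "eon.apiKeys" storage key and the save/load helper names are assumptions for illustration, not EON's actual code; the storage object is injectable so the helpers can run outside a browser:

```javascript
// Hypothetical storage key name -- not EON's real one.
const STORAGE_KEY = "eon.apiKeys";

// Persist provider keys in localStorage only; nothing leaves the device.
function saveKeys(keys, storage = globalThis.localStorage) {
  // Drop empty fields so only configured providers are stored.
  const filtered = Object.fromEntries(
    Object.entries(keys).filter(([, v]) => v && v.trim() !== "")
  );
  storage.setItem(STORAGE_KEY, JSON.stringify(filtered));
}

function loadKeys(storage = globalThis.localStorage) {
  return JSON.parse(storage.getItem(STORAGE_KEY) ?? "{}");
}
```

Because localStorage is scoped per origin and never synced to a server, clearing the browser's site data removes the keys entirely.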
No changes saved yet.