AI Productivity

How to Run AI Missions for Free

Connect two free API keys and you have access to some of the fastest AI models on Earth – no subscription required.

5 min read · EON WorkBench guide

EON WorkBench is a browser-based AI mission runner. You write a goal, pick a mode, and it executes using whichever AI provider you've connected. The catch most people miss: the best providers are free, and setup takes under 4 minutes.

What you need before you start

  • A free Groq account (console.groq.com)
  • A free Google account for Google AI Studio (aistudio.google.com)

Both give you API keys with generous free tiers. Groq runs Llama and Mixtral at extremely fast inference speeds. Gemini Flash is Google's fastest model and is free up to 1,500 requests/day.

Step-by-step setup

Step 1 – Get your Groq key

Go to console.groq.com โ†’ API Keys โ†’ Create API Key. Copy it to your clipboard.
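If you want to confirm the key works before wiring it into WorkBench, you can send one tiny request yourself. Groq exposes an OpenAI-compatible REST API; the sketch below builds an authenticated chat request (the model name is an assumption – check the console for the current list).

```python
# Minimal sketch: build an authenticated request to Groq's
# OpenAI-compatible chat endpoint. Sending it is optional.
import json
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_groq_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) a chat completion request."""
    body = json.dumps({
        "model": "llama-3.1-8b-instant",  # assumption: pick any model listed in your console
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        GROQ_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_groq_request("gsk_your_key_here", "Say hello")
# urllib.request.urlopen(req)  # uncomment to actually send; a 200 means the key works
```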

Step 2 – Get your Gemini key

Go to aistudio.google.com/app/apikey โ†’ Create API key โ†’ select any project. Copy it.
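The Gemini key can be checked the same way: the v1beta ListModels endpoint takes the key as a query parameter, so a plain GET is enough to verify it.

```python
# Minimal sketch: build the Gemini ListModels URL for a quick key check.
import urllib.parse

def gemini_models_url(api_key: str) -> str:
    base = "https://generativelanguage.googleapis.com/v1beta/models"
    return base + "?" + urllib.parse.urlencode({"key": api_key})

url = gemini_models_url("AIza_your_key_here")
# urllib.request.urlopen(url)  # uncomment to actually send; a 200 means the key works
```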

Step 3 – Paste both into WorkBench

Open Get Free AI Power and paste your keys into the Groq and Gemini fields. Hit Save Keys. That's it.

Step 4 – Run your first mission

Go to WorkBench. Pick a mode (Ask, Build, Create, Operate, Signal, or Realm) and type your goal. WorkBench auto-selects the best available provider for that mode.

💡 Tip: For long-form writing or structured output, Gemini Flash tends to produce better formatting. For raw speed (brainstorming, rewrites), Groq + Llama 3 is hard to beat.

The 6 mission modes explained

  • Ask – Research, Q&A, explain concepts
  • Build – Code generation, architecture planning
  • Create – Content writing, hooks, copy
  • Operate – SOPs, checklists, system design
  • Signal – Market research, investment thesis generation
  • Realm – Community strategy, realm management
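To make "auto-selects the best available provider" concrete, here is a hypothetical sketch of mode-to-provider routing – not WorkBench's actual code, just the tip above (Gemini for structured output, Groq for speed) expressed as a fallback rule.

```python
# Hypothetical routing table: which provider a mode might prefer.
# The real WorkBench logic may differ.
PREFERRED = {
    "Ask": "groq",
    "Build": "gemini",
    "Create": "gemini",
    "Operate": "gemini",
    "Signal": "groq",
    "Realm": "groq",
}

def pick_provider(mode: str, available: set[str]) -> str:
    """Prefer the mode's first choice, else fall back to any connected provider."""
    first_choice = PREFERRED.get(mode, "groq")
    if first_choice in available:
        return first_choice
    if available:
        return next(iter(available))
    raise RuntimeError("no provider connected")

pick_provider("Create", {"groq", "gemini"})  # → "gemini"
pick_provider("Create", {"groq"})            # → "groq" (fallback)
```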

Earning Pool Points while you work

Every mission you complete earns Pool Points. Pool Points determine your share of the EONL mint pool – the more you use the platform, the larger your allocation when EONL is distributed. You can track your points in the Vault.
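As a rough illustration, if the pool were split pro-rata by points (an assumption – the official docs define the real formula), the math would look like this:

```python
# Illustration only: pro-rata split is an ASSUMPTION, not the
# confirmed EONL distribution formula.
def pro_rata_share(your_points: int, total_points: int, pool: float) -> float:
    return pool * your_points / total_points

pro_rata_share(500, 100_000, 1_000_000.0)  # → 5000.0
```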

Want even faster AI? Add local models

If you have a decent computer (16GB+ RAM), you can run AI models locally using Ollama. WorkBench auto-discovers local models at startup – no configuration needed.
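Auto-discovery is possible because Ollama serves a REST API on localhost:11434, where GET /api/tags lists the installed models. The sketch below shows how any client could probe for them (WorkBench's actual probe may differ).

```python
# Discover local Ollama models via its REST API (GET /api/tags).
import json
import urllib.request

def discover_ollama_models(host: str = "http://localhost:11434") -> list[str]:
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=1) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except OSError:
        return []  # Ollama not running: no local models available

discover_ollama_models()
```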