Multi-objective LLM optimization
Describe what you want the AI to accomplish
Configure your API key to use this model
• Click Settings to configure cloud APIs or local Ollama
• Provide at least 2-3 diverse training examples
• Be specific in your task description
• GEPA optimizes for multiple objectives simultaneously
• Optimization typically takes 30-60 seconds
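"Diverse" training examples means inputs that cover different phrasings and edge cases of the task, each paired with the output you expect. A hypothetical set for a sentiment-classification task might look like this (the JSON-lines format here is purely illustrative, not this app's actual schema):

```shell
# Hypothetical training examples (input -> expected output) for a sentiment task.
# Three examples spanning clearly negative, clearly positive, and borderline cases.
cat <<'EOF'
{"input": "The battery died after an hour.", "output": "negative"}
{"input": "Setup took thirty seconds and it just worked.", "output": "positive"}
{"input": "It does the job, nothing more.", "output": "neutral"}
EOF
```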
Download and install Ollama from ollama.com/download
Or install via command line:
curl -fsSL https://ollama.com/install.sh | sh
Open a terminal and pull a model. We recommend starting with Llama 3.2:
ollama pull llama3.2
Other popular models: mistral, gemma2, llama3.1, deepseek-r1
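To confirm a model finished downloading, `ollama list` prints every model you have pulled (this assumes the `ollama` CLI from the install step is on your PATH):

```shell
# List locally pulled models; print a hint instead if the CLI is missing.
ollama list 2>/dev/null || echo "ollama CLI not found - complete the install step first"
```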
Run the Ollama server (it may already be running after installation):
ollama serve
The server runs on http://localhost:11434 by default.
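You can also verify the server from the command line: Ollama's `/api/tags` endpoint returns a JSON list of installed models. This sketch assumes the default port 11434:

```shell
# Query the local Ollama server; prints model JSON on success,
# or a hint if nothing is listening on port 11434.
curl -s http://localhost:11434/api/tags || echo "no Ollama server on localhost:11434"
```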
Click the API Keys button above and enable Ollama. Use the Test Connection button to verify it's working.
If you're using this site from a deployed URL (not localhost), you need to expose your local Ollama server using a tunnel. No account required!
Quick Setup (One Command)
Install and run Cloudflare Tunnel — works instantly, no signup needed:
brew install cloudflared && cloudflared tunnel --url http://localhost:11434
Copy the generated URL (e.g., https://random-words.trycloudflare.com) and paste it in Settings → Local (Ollama) → Server URL.
Alternative: Linux/Windows
Download cloudflared from Cloudflare Downloads, then run:
cloudflared tunnel --url http://localhost:11434
Configure your task and provide training examples, then run the GEPA optimizer to generate an optimized prompt.
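Before pointing the app at the tunnel, you can check that it forwards to Ollama. The `TUNNEL_URL` value below is a placeholder; substitute the URL cloudflared actually printed:

```shell
# Replace TUNNEL_URL with the URL cloudflared printed (this value is a placeholder).
TUNNEL_URL="https://random-words.trycloudflare.com"
curl -s "$TUNNEL_URL/api/tags" || echo "tunnel not reachable - is cloudflared still running?"
```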