The LLM application developer's journey typically starts with experimenting in Playgrounds. Log10 has a fully featured playground that lets you compare multiple instances of Chat Completions side by side. You can try various prompts and hyperparameters to get a qualitative feel for which combination gives the best results. Just set your OpenAI key (under Settings) to get started.
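To make the comparison concrete, here is a minimal Python sketch (not the Log10 API; the prompt texts, model name, and temperature values are illustrative) of the kind of prompt-by-hyperparameter grid a Playground lets you explore side by side:

```python
from itertools import product

# Illustrative prompts and hyperparameters to compare (assumed values).
prompts = [
    "Summarize this article in one sentence.",
    "Summarize this article in three bullet points.",
]
temperatures = [0.2, 0.7, 1.0]

# Each combination corresponds to one Chat Completion instance
# you would run and compare in the Playground.
runs = [
    {"prompt": p, "temperature": t, "model": "gpt-3.5-turbo"}
    for p, t in product(prompts, temperatures)
]

print(len(runs))  # 2 prompts x 3 temperatures = 6 configurations
```

The Playground handles this sweep interactively, persisting each configuration so you can revisit and compare results qualitatively.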

Agent comparison

Here is a demo of using Log10 to experiment with prompts.

We persist each Playground's state so you can return, iterate, and collaborate on the prompts and hyperparameters. You can store multiple Playgrounds.

Currently, we support OpenAI (GPT) models, with Anthropic (Claude) and open source models on the roadmap.