Prompts · Apr 7, 2026 · 2 min read

Ell — Prompt Engineering as Code in Python

Treat prompts as versioned Python functions with automatic tracking, visualization, and A/B testing. Like Git for your AI prompts with a beautiful studio UI.

TL;DR
Ell turns prompts into versioned Python functions with automatic tracking and a visual studio.
§01

What it is

Ell is a Python library that treats prompts as versioned, testable, and trackable code artifacts. Instead of managing prompts as strings in config files or databases, you write them as decorated Python functions. Ell automatically tracks changes, enables A/B testing, and provides a visual studio for exploring prompt versions and their outputs.

The library targets ML engineers, prompt engineers, and developers who need systematic prompt management with version history, performance comparison, and reproducibility.

§02

How it saves time or tokens

Prompt iteration without tracking is chaotic: you lose what worked, cannot compare versions, and cannot reproduce results. Ell solves this by versioning every prompt change automatically. The visual studio shows which prompt versions produced better outputs, enabling data-driven optimization instead of guesswork.

§03

How to use

  1. Install Ell: pip install ell-ai.
  2. Decorate your prompt functions with @ell.simple or @ell.complex.
  3. Call the function normally and Ell tracks the invocation, version, and output.
§04

Example

import ell

ell.init(store='./ell_store', autocommit=True)

@ell.simple(model='gpt-4o')
def summarize(text: str) -> str:
    '''You are a concise summarizer. Summarize the following text in 2 sentences.'''
    # Docstring above = system prompt; the returned string below = user message.
    return text

# Call like a normal function
result = summarize('Long article text here...')
print(result)

# Launch the visual studio to explore versions
# ell-studio --storage ./ell_store
§05

Common pitfalls

  • Ell stores prompt versions locally by default. For team collaboration, configure a shared storage backend.
  • The @ell.simple decorator expects the docstring to be the system prompt and the return value to be the user message. Reversing these causes unexpected behavior.
  • Ell's autocommit tracks every function edit and writes a commit message for each new version. In rapid iteration this creates many versions; disable autocommit or use manual commits for a cleaner version history (see the sketch below).
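
A minimal sketch of that workaround, reusing only the init flags already shown in the example above (whether this is the right trade-off for your team is an assumption):

import ell

# Keep automatic versioning, but skip auto-generated commit messages while
# iterating quickly; switch autocommit back on once the prompt stabilizes.
ell.init(store='./ell_store', autocommit=False)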

Frequently Asked Questions

How does Ell track prompt versions?

Ell hashes the function source code (including the docstring) on each call. When the code changes, it creates a new version automatically. All versions are stored with their outputs, enabling comparison and rollback.
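
As an illustration of the idea only (not Ell's actual implementation), versioning by source hash amounts to something like this:

import hashlib
import inspect

def version_hash(fn) -> str:
    # Hash the full function source, docstring included, so any edit to the
    # prompt text, parameters, or logic yields a new version identifier.
    source = inspect.getsource(fn)
    return hashlib.sha256(source.encode('utf-8')).hexdigest()[:12]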

What is ell-studio?

ell-studio is a web-based visual interface for exploring prompt versions, comparing outputs, and analyzing performance. It reads from the local store and displays version history, token usage, and output quality metrics.

Does Ell work with any LLM provider?

Yes. Ell supports OpenAI, Anthropic, and any OpenAI-compatible API. You specify the model in the decorator and Ell routes the call to the appropriate provider.
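
For example, a sketch of pointing a prompt at a self-hosted OpenAI-compatible endpoint; the base URL, model name, and the assumption that the decorator accepts a client argument are illustrative, so check Ell's provider docs for your setup:

import ell
import openai

# An explicitly configured OpenAI-compatible client (e.g. a local server).
local_client = openai.Client(base_url='http://localhost:8000/v1', api_key='not-needed')

@ell.simple(model='my-local-model', client=local_client)
def brainstorm(topic: str) -> str:
    '''You are a creative brainstorming assistant.'''
    return f'Give me three ideas about {topic}.'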

Can I use Ell for structured output?

Yes. The @ell.complex decorator supports structured outputs, tool calling, and multi-turn conversations. Use @ell.simple for single-turn text generation and @ell.complex for advanced use cases.
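
A sketch of the structured-output path, assuming a Pydantic model is passed via response_format and the parsed object is read from the returned message (the Summary schema here is made up):

from pydantic import BaseModel, Field
import ell

class Summary(BaseModel):
    headline: str = Field(description='One-line headline')
    key_points: list[str] = Field(description='Two or three key points')

@ell.complex(model='gpt-4o-2024-08-06', response_format=Summary)
def structured_summary(text: str):
    '''You extract a structured summary from the given text.'''
    return f'Summarize this:\n\n{text}'

message = structured_summary('Long article text here...')
print(message.parsed.headline)  # message.parsed is a Summary instance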

Is Ell free?

Yes. Ell is open source and free to use. The library and ell-studio are both included in the pip package.


Source & Thanks

Created by William Guss. Licensed under MIT.

MadcowD/ell — 6k+ stars

