Ell — Prompt Engineering as Code in Python
Treat prompts as versioned Python functions with automatic tracking, visualization, and A/B testing. Like Git for your AI prompts, with a visual studio UI.
What it is
Ell is a Python library that treats prompts as versioned, testable, and trackable code artifacts. Instead of managing prompts as strings in config files or databases, you write them as decorated Python functions. Ell automatically tracks changes, enables A/B testing, and provides a visual studio for exploring prompt versions and their outputs.
The library targets ML engineers, prompt engineers, and developers who need systematic prompt management with version history, performance comparison, and reproducibility.
How it saves time or tokens
Prompt iteration without tracking is chaotic: you lose what worked, cannot compare versions, and cannot reproduce results. Ell solves this by versioning every prompt change automatically. The visual studio shows which prompt versions produced better outputs, enabling data-driven optimization instead of guesswork.
How to use
- Install Ell: pip install ell-ai.
- Decorate your prompt functions with @ell.simple or @ell.complex.
- Call the function normally; Ell tracks the invocation, version, and output.
Example
import ell

ell.init(store='./ell_store', autocommit=True)

@ell.simple(model='gpt-4o')
def summarize(text: str) -> str:
    '''You are a concise summarizer. Summarize the following text in 2 sentences.'''
    return text

# Call like a normal function
result = summarize('Long article text here...')
print(result)

# Launch the visual studio to explore versions
# ell-studio --storage ./ell_store
Related on TokRepo
- Prompt Library -- browse and manage reusable prompts
- AI Tools for Coding -- developer tools for AI workflows
Common pitfalls
- Ell stores prompt versions locally by default. For team collaboration, configure a shared storage backend.
- The @ell.simple decorator expects the docstring to be the system prompt and the return value to be the user message. Reversing these causes unexpected behavior.
- Ell's autocommit tracks every function edit. In rapid iteration, this creates many versions. Use manual commits for a cleaner version history.
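When autocommit piles up versions, a quick textual diff between two prompt strings helps decide what actually changed. A minimal stdlib sketch (the prompt strings here are invented examples, not entries read from Ell's store):

```python
import difflib

# Two hypothetical versions of a system prompt.
v1 = "You are a concise summarizer. Summarize the text in 2 sentences."
v2 = "You are a concise summarizer. Summarize the text in 3 bullet points."

# A unified diff shows exactly what changed between versions --
# the kind of comparison ell-studio renders visually.
for line in difflib.unified_diff([v1], [v2], fromfile="v1", tofile="v2", lineterm=""):
    print(line)
```

This prints a `-` line for the old prompt and a `+` line for the new one, which is often enough to decide whether a version is worth keeping.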
Frequently Asked Questions
How does Ell's versioning work?
Ell hashes the function source code (including the docstring) on each call. When the code changes, it creates a new version automatically. All versions are stored with their outputs, enabling comparison and rollback.
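The idea of content-addressing a prompt by its source can be illustrated with plain hashlib; version_of below is an illustrative stand-in, not Ell's actual API:

```python
import hashlib

def version_of(source: str) -> str:
    # Derive a short, deterministic version id from the prompt function's
    # source text -- similar in spirit to Ell's source-hash versioning.
    return hashlib.sha256(source.encode("utf-8")).hexdigest()[:12]

v1 = version_of('def summarize(text):\n    """Summarize in 2 sentences."""\n    return text\n')
v2 = version_of('def summarize(text):\n    """Summarize in 3 sentences."""\n    return text\n')

assert v1 != v2  # editing even the docstring yields a new version id
```

Because the hash covers the whole source, any edit, including a docstring tweak, produces a new version, which is exactly why autocommit can generate many versions during rapid iteration.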
What is ell-studio?
ell-studio is a web-based visual interface for exploring prompt versions, comparing outputs, and analyzing performance. It reads from the local store and displays version history, token usage, and output quality metrics.
Does Ell support multiple model providers?
Yes. Ell supports OpenAI, Anthropic, and any OpenAI-compatible API. You specify the model in the decorator and Ell routes the call to the appropriate provider.
Does Ell handle tool calling and multi-turn conversations?
Yes. The @ell.complex decorator supports structured outputs, tool calling, and multi-turn conversations. Use @ell.simple for single-turn text generation and @ell.complex for advanced use cases.
Is Ell free to use?
Yes. Ell is open source and free to use. The library and ell-studio are both included in the pip package.
Citations (3)
- Ell GitHub — Prompts as versioned Python functions
- Ell Documentation — Visual studio for prompt exploration
- Ell Versioning Docs — Automatic version tracking and comparison
Source & Thanks
Created by William Guss. Licensed under MIT.
MadcowD/ell — 6k+ stars
Related Assets
NAPI-RS — Build Node.js Native Addons in Rust
Write high-performance Node.js native modules in Rust with automatic TypeScript type generation and cross-platform prebuilt binaries.
Mamba — Fast Cross-Platform Package Manager
A drop-in conda replacement written in C++ that resolves environments in seconds instead of minutes.
Plasmo — The Browser Extension Framework
Build, test, and publish browser extensions for Chrome, Firefox, and Edge using React or Vue with hot-reload and automatic manifest generation.