BentoML — Build AI Model Serving APIs
BentoML builds model inference REST APIs and multi-model serving systems from Python scripts. 8.6K+ GitHub stars. Auto Docker, dynamic batching, any ML framework. Apache 2.0.
What it is
BentoML is a Python framework for packaging and serving machine learning models as production-ready REST APIs. You decorate a Python class with @bentoml.service and methods with @bentoml.api, and BentoML handles Docker containerization, dynamic request batching, model loading, and API endpoint generation. It supports any ML framework including PyTorch, TensorFlow, Hugging Face Transformers, and scikit-learn.
The tool targets ML engineers and platform teams who need to deploy model inference endpoints without building custom API servers and Docker images from scratch.
How it saves time or tokens
BentoML eliminates the boilerplate of building FastAPI/Flask wrappers around model inference code. A single @bentoml.service decorator replaces hundreds of lines of server setup, health checks, request parsing, and Docker configuration. Dynamic batching automatically groups incoming requests to maximize GPU utilization, improving throughput without changing application code.
How to use
- Install BentoML:

```shell
pip install -U bentoml
```

- Create a service file:

```python
# service.py
import bentoml

@bentoml.service
class Summarizer:
    def __init__(self):
        from transformers import pipeline
        self.pipeline = pipeline('summarization')

    @bentoml.api
    def summarize(self, text: str) -> str:
        result = self.pipeline(text, max_length=130)
        return result[0]['summary_text']
```

- Run locally and containerize:

```shell
bentoml serve service:Summarizer
bentoml build
bentoml containerize summarizer:latest
```
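`bentoml build` reads a `bentofile.yaml` next to `service.py` that declares the service entry point and its dependencies. A minimal sketch for the summarizer above (the label and the exact package list are assumptions for this example):

```yaml
service: "service:Summarizer"   # import path of the decorated service class
labels:
  owner: ml-team                # hypothetical label, any key-value pairs work
python:
  packages:                     # pip dependencies baked into the bento
    - transformers
    - torch
```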
Example
```python
import bentoml
import numpy as np

@bentoml.service(
    traffic={'timeout': 60},
    resources={'gpu': 1, 'memory': '4Gi'}
)
class ImageClassifier:
    def __init__(self):
        import torch
        self.model = torch.hub.load(
            'pytorch/vision', 'resnet50', pretrained=True
        )
        self.model.eval()

    @bentoml.api(batchable=True, batch_dim=0)
    def classify(self, images: np.ndarray) -> list:
        import torch
        tensor = torch.from_numpy(images).float()
        with torch.no_grad():
            outputs = self.model(tensor)
        return outputs.argmax(dim=1).tolist()
```
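The `classify` method above expects an array shaped `(batch, 3, 224, 224)` for ResNet-50, stacked along `batch_dim=0`. A minimal client-side sketch that builds such a batch; the normalization constants are the usual torchvision ImageNet defaults, not something BentoML itself requires:

```python
import numpy as np

# Standard ImageNet channel statistics used by torchvision's pretrained ResNet-50
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def to_batch(images):
    """Stack HWC uint8 images into a normalized NCHW float32 batch."""
    batch = []
    for img in images:
        x = img.astype(np.float32) / 255.0       # scale to [0, 1]
        x = (x - IMAGENET_MEAN) / IMAGENET_STD   # per-channel normalize
        batch.append(x.transpose(2, 0, 1))       # HWC -> CHW
    return np.stack(batch, axis=0)               # stack along batch_dim=0

batch = to_batch([np.zeros((224, 224, 3), dtype=np.uint8)] * 2)
print(batch.shape)  # (2, 3, 224, 224)
```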
Related on TokRepo
- AI tools for coding -- Developer tools for AI application development
- Automation tools -- ML pipeline and deployment automation
Common pitfalls
- The `__init__` method runs once at startup; placing slow model loading here is correct, but forgetting to set adequate resource limits causes OOM kills in containers
- Dynamic batching requires the `batchable=True` flag and consistent input shapes; variable-length inputs need padding or separate handling
- BentoML builds create large Docker images when model weights are embedded; use external model registries for models over 2GB
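The padding point above can be sketched for 1-D inputs: right-pad each sequence to the batch maximum so they stack along `batch_dim=0` (the pad value 0 is an assumption; use whatever your model treats as padding):

```python
import numpy as np

def pad_batch(sequences, pad_value=0):
    """Right-pad variable-length sequences to a common length so they stack."""
    max_len = max(len(s) for s in sequences)
    out = np.full((len(sequences), max_len), pad_value, dtype=np.float32)
    for i, seq in enumerate(sequences):
        out[i, : len(seq)] = seq
    return out

batch = pad_batch([[1, 2, 3], [4, 5], [6]])
print(batch.shape)  # (3, 3)
```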
Frequently Asked Questions
What ML frameworks does BentoML support?
BentoML supports PyTorch, TensorFlow, Keras, Hugging Face Transformers, scikit-learn, XGBoost, LightGBM, ONNX, and any framework that can run inference in Python. The framework-agnostic design means you write standard Python inference code and BentoML handles the serving infrastructure.
How does dynamic batching work?
When `batchable=True` is set on an API method, BentoML collects incoming requests within a configurable time window, groups them into a batch, and sends the batch through the model in a single forward pass. This maximizes GPU utilization by amortizing per-request overhead across multiple inputs.
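The mechanism described above can be illustrated with a toy simulation: requests arriving within the same time window are grouped and would then run as one forward pass (the window length and grouping logic here are a simplified sketch, not BentoML's internals):

```python
# Toy illustration of dynamic batching: group requests that arrive
# within the same time window, one batch per window.
def batch_requests(arrivals, window=0.01):
    """arrivals: list of (timestamp, payload); returns payloads grouped into batches."""
    batches, current, window_start = [], [], None
    for t, payload in sorted(arrivals):
        if window_start is None or t - window_start > window:
            if current:
                batches.append(current)   # close the previous window
            current, window_start = [], t
        current.append(payload)
    if current:
        batches.append(current)
    return batches

# Three requests in the first 10 ms share a batch; the late request is its own.
print(batch_requests([(0.000, "a"), (0.004, "b"), (0.009, "c"), (0.050, "d")]))
# [['a', 'b', 'c'], ['d']]
```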
Can BentoML deploy to Kubernetes?
Yes. BentoML generates Docker images that can be deployed to any container orchestrator. The bentoml containerize command produces standard Docker images. BentoCloud provides managed Kubernetes deployment, and you can also deploy to any self-managed Kubernetes cluster.
How does BentoML compare to TorchServe?
TorchServe is PyTorch-specific and focused on serving PyTorch models. BentoML is framework-agnostic, supports any Python ML library, and provides a simpler decorator-based API. BentoML also handles Docker packaging and multi-model composition more naturally.
Is BentoML free and open source?
Yes. BentoML is Apache 2.0 licensed. The core framework is fully open source. BentoCloud is the optional paid managed platform for deployment and scaling, but you can self-host everything with the open-source tools.
Citations (3)
- BentoML GitHub -- BentoML builds model serving APIs from Python scripts
- BentoML Documentation -- Dynamic batching and auto Docker containerization
- BentoML Framework Guide -- Supports PyTorch, TensorFlow, Hugging Face, and other frameworks
Source & Thanks
Created by BentoML. Licensed under Apache 2.0. bentoml/BentoML — 8,600+ GitHub stars
Related Assets
NAPI-RS — Build Node.js Native Addons in Rust
Write high-performance Node.js native modules in Rust with automatic TypeScript type generation and cross-platform prebuilt binaries.
Mamba — Fast Cross-Platform Package Manager
A drop-in conda replacement written in C++ that resolves environments in seconds instead of minutes.
Plasmo — The Browser Extension Framework
Build, test, and publish browser extensions for Chrome, Firefox, and Edge using React or Vue with hot-reload and automatic manifest generation.