# Ultralytics YOLO — State-of-the-Art Object Detection

> Production-ready object detection, segmentation, classification, and pose estimation models with a simple Python API and CLI, supporting training, validation, and deployment in a single package.

## Quick Use

```bash
pip install ultralytics
yolo detect predict model=yolo11n.pt source=image.jpg
```

## Introduction

Ultralytics YOLO provides a unified framework for object detection, instance segmentation, image classification, pose estimation, and oriented bounding boxes. It wraps the YOLO model family in a clean Python API and CLI that handles training, validation, prediction, and export in one package.

## What Ultralytics YOLO Does

- Runs real-time object detection with models from YOLOv3 through YOLO11
- Performs instance segmentation, pose estimation, and image classification
- Trains custom models on your own datasets with minimal configuration
- Exports models to ONNX, TensorRT, CoreML, TFLite, and other formats
- Tracks objects across video frames with built-in multi-object tracking

## Architecture Overview

Ultralytics uses a modular Python architecture with a YAML-based configuration system. Models are defined as task-specific heads on top of shared backbone and neck networks. The training pipeline handles data loading, augmentation, mixed precision, and distributed training. The export engine converts PyTorch checkpoints to optimized inference formats through a unified API.
## Self-Hosting & Configuration

- Install via pip: `pip install ultralytics` with Python 3.8+
- Train custom models: `yolo train data=coco.yaml model=yolo11n.pt epochs=100`
- Configure via YAML files or Python dictionary arguments
- Export for deployment: `yolo export model=best.pt format=onnx`
- Run inference on images, video, webcam, or RTSP streams

## Key Features

- Unified API for detection, segmentation, classification, and pose estimation
- CLI and Python interfaces for every operation
- Automatic mixed precision and multi-GPU distributed training
- Built-in data augmentation including mosaic, mixup, and copy-paste
- Export to 10+ deployment formats including TensorRT and CoreML

## Comparison with Similar Tools

- **Detectron2** — More flexible for research but requires more setup code
- **MMDetection** — Broader algorithm coverage but steeper learning curve
- **YOLO (Darknet)** — Original C implementation, less convenient than Python API
- **Torchvision** — General-purpose detection models without YOLO-specific optimizations
- **Roboflow** — Cloud-based platform that uses Ultralytics models under the hood

## FAQ

**Q: Which YOLO version should I use?**

A: YOLO11 is the latest and generally offers the best speed-accuracy tradeoff. Use the nano variant (yolo11n) for edge devices and large variants for maximum accuracy.

**Q: Can I train on custom datasets?**

A: Yes. Provide images and labels in YOLO format (one txt file per image) and a YAML config pointing to your data directory.

**Q: What hardware is required?**

A: Training benefits from a GPU with 8GB+ VRAM. Inference runs on CPU, GPU, or edge devices depending on the exported format.

**Q: Is Ultralytics YOLO free for commercial use?**

A: It is dual-licensed under AGPL-3.0 and a commercial Enterprise License for proprietary applications.
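The YAML data config mentioned above is a short file. A minimal sketch for a two-class custom dataset — the paths and class names here are illustrative, not part of any shipped example:

```yaml
# custom-data.yaml — illustrative dataset config
path: datasets/custom      # dataset root directory
train: images/train        # training images, relative to path
val: images/val            # validation images, relative to path

names:
  0: cat
  1: dog
```

Pass it to training as `yolo train data=custom-data.yaml model=yolo11n.pt epochs=100`; labels are expected alongside the images in a parallel `labels/` directory.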
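The YOLO label format from the FAQ is simple enough to parse by hand: each line of a per-image `.txt` file holds a class index followed by a normalized center-x, center-y, width, and height. A minimal sketch of a parser — the `parse_yolo_label` helper is my own name, not part of the library:

```python
def parse_yolo_label(line: str) -> dict:
    """Parse one line of a YOLO-format label file.

    Format: "<class> <x_center> <y_center> <width> <height>",
    with all coordinates normalized to [0, 1] relative to image size.
    """
    cls, xc, yc, w, h = line.split()
    box = {
        "class": int(cls),
        "x_center": float(xc),
        "y_center": float(yc),
        "width": float(w),
        "height": float(h),
    }
    # Sanity-check that the coordinates are normalized
    for key in ("x_center", "y_center", "width", "height"):
        assert 0.0 <= box[key] <= 1.0, f"{key} must be in [0, 1]"
    return box

# Example: class 0, centered in the image, half its width and height
print(parse_yolo_label("0 0.5 0.5 0.5 0.5"))
```

One label file can hold any number of such lines, one per object in the image.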
## Sources

- https://github.com/ultralytics/ultralytics
- https://docs.ultralytics.com/