# Detectron2 — Meta AI Object Detection Platform

> A modular computer vision library from Meta AI Research for object detection, segmentation, keypoint detection, and panoptic segmentation, built on PyTorch with a flexible configuration system.

## Quick Use

```bash
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu118/torch2.0/index.html
python -c "from detectron2 import model_zoo; print(model_zoo.get_config_file('COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml'))"
```

## Introduction

Detectron2 is Meta AI Research's next-generation library for object detection and segmentation. It provides implementations of state-of-the-art algorithms including Faster R-CNN, Mask R-CNN, RetinaNet, and DensePose, all built on a modular PyTorch-based architecture designed for research flexibility and production deployment.

## What Detectron2 Does

- Runs object detection with Faster R-CNN, RetinaNet, and DETR architectures
- Performs instance and semantic segmentation with Mask R-CNN and panoptic models
- Detects keypoints for human pose estimation and dense pose prediction
- Provides a model zoo with pretrained weights on COCO, LVIS, and Cityscapes
- Supports custom dataset registration and training with YAML-based configuration

## Architecture Overview

Detectron2 uses a modular design with interchangeable components: backbones (ResNet, FPN), region proposal networks, ROI heads, and post-processing modules. The configuration system merges YAML files with command-line overrides. A centralized registry pattern allows registering custom components without modifying library code, and the data pipeline uses a mapper-based design for flexible augmentation.
## Self-Hosting & Configuration

- Install from prebuilt wheels matching your CUDA and PyTorch versions
- Configure models via YAML files from the model zoo or custom configs
- Register custom datasets using `DatasetCatalog.register()` with COCO or custom formats
- Fine-tune pretrained models by setting `MODEL.WEIGHTS` to a zoo checkpoint
- Export models to ONNX or TorchScript for production deployment via `Caffe2Tracing`

## Key Features

- Modular architecture with swappable backbones, heads, and loss functions
- Comprehensive model zoo with 50+ pretrained configurations
- Support for multi-GPU and multi-machine distributed training
- Built-in visualization tools for predictions, ground truth, and data augmentation
- Research-ready with implementations of PointRend, ViTDet, and MaskFormer

## Comparison with Similar Tools

- **Ultralytics YOLO** — Simpler API and faster inference, but less architectural flexibility
- **MMDetection** — Similar scope with more algorithms but a different config system
- **Torchvision** — Basic detection models without the full research toolkit
- **DETR (standalone)** — Transformer-based detection; Detectron2 includes DETR implementations
- **PaddleDetection** — PaddlePaddle-based alternative with comparable model coverage

## FAQ

**Q: Is Detectron2 still maintained?**
A: Detectron2 is in maintenance mode. Meta AI continues to release new models that build on it, but major new features are developed in other projects.

**Q: Can I use Detectron2 for video?**
A: Yes. Detectron2 includes support for video instance segmentation and can process video frame by frame with tracking extensions.

**Q: What dataset formats are supported?**
A: COCO JSON format is natively supported. Custom formats can be registered by providing a function that returns a list of dictionaries with image and annotation fields.
**Q: How do I export a trained model?**
A: Use `TracingAdapter` to convert models to TorchScript, or use the Caffe2 exporter for ONNX-compatible deployment.

## Sources

- https://github.com/facebookresearch/detectron2
- https://detectron2.readthedocs.io/

---

Source: https://tokrepo.com/en/workflows/56e59b91-3c92-11f1-9bc6-00163e2b0d79
Author: AI Open Source