# Nuclio — High-Performance Serverless Framework for Real-Time Data

> Nuclio is a serverless framework optimized for real-time and data-intensive workloads, delivering sub-millisecond warm-start latency and high throughput on Kubernetes, Docker, or bare metal.

## Quick Use

```bash
# Install the nuctl CLI (binaries are published on the project's GitHub releases page)

# Deploy a function from local source
nuctl deploy my-func --path ./handler.py --runtime python --handler handler:handler --platform local

# Invoke it
nuctl invoke my-func --body '{"key": "value"}'
```

## Introduction

Nuclio is built for workloads where latency and throughput matter — event processing, real-time inference, and IoT data pipelines. Unlike general-purpose FaaS platforms, it keeps function instances warm and processes events in parallel worker threads within a single container, eliminating per-invocation cold starts.

## What Nuclio Does

- Deploys serverless functions on Kubernetes, Docker Swarm, or bare metal
- Processes events from HTTP, Kafka, Kinesis, MQTT, RabbitMQ, Cron, and more
- Maintains warm function instances with multi-worker concurrency inside each container
- Provides a web-based dashboard for deploying, testing, and monitoring functions
- Supports Python, Go, Java, Node.js, .NET, and shell runtimes

## Architecture Overview

Each Nuclio function runs in a container with an embedded event processor. The processor listens on configured triggers, decodes events, and dispatches them to worker threads running user code. A controller (on Kubernetes, a CRD operator) manages function lifecycle, scaling, and ingress. This in-process model avoids the overhead of spinning up a new container per event, achieving throughput of hundreds of thousands of events per second.
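The worker threads described above ultimately call a user-supplied handler. A minimal sketch of the Python entry point is shown below; `handler(context, event)` is Nuclio's standard Python signature, while the echo behavior and log message are purely illustrative.

```python
# handler.py: minimal Nuclio Python handler sketch.
# Nuclio's worker threads call handler(context, event) once per
# dispatched event; the body of this function is illustrative.

def handler(context, event):
    # event.body holds the raw payload (bytes for binary triggers)
    body = event.body
    if isinstance(body, bytes):
        body = body.decode("utf-8")
    # context.logger provides structured, per-function logging
    context.logger.info_with("Handling event", size=len(body))
    # The return value becomes the trigger's response (e.g. the HTTP body)
    return "echo: " + body
```

This is the file referenced by `--path ./handler.py --handler handler:handler` in the Quick Use example: the part before the colon is the module, the part after is the function name.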
## Self-Hosting & Configuration

- Deploy on Kubernetes with Helm charts, or on Docker with a single `docker run` command
- Configure triggers, resources, and scaling via YAML function specs or the dashboard
- Set min/max replicas and target concurrency for autoscaling on Kubernetes
- Mount volumes, secrets, and config maps for function dependencies
- GPU support available for ML inference workloads

## Key Features

- Sub-millisecond warm-start latency with multi-worker parallelism
- Built-in triggers for Kafka, MQTT, Kinesis, RabbitMQ, and V3IO streams
- Integrated web dashboard for code editing, deployment, and log viewing
- GPU-aware scheduling for real-time ML serving functions
- Versioned function deployments with canary and blue-green support

## Comparison with Similar Tools

- **AWS Lambda** — managed FaaS with broader integrations; Nuclio is self-hosted and optimized for low-latency data processing
- **OpenFaaS** — Kubernetes FaaS with a simpler architecture; Nuclio offers tighter event-source integrations and higher throughput
- **Knative Serving** — scales Kubernetes pods per request; Nuclio's in-process workers avoid per-request container overhead
- **Fission** — fast Kubernetes FaaS with environment pooling; Nuclio adds a dashboard and native streaming triggers

## FAQ

**Q: How does Nuclio achieve low latency?**
A: Functions run as persistent processes with multiple worker threads. Events are dispatched in-process, avoiding container startup overhead.

**Q: Can Nuclio run outside Kubernetes?**
A: Yes. The local Docker platform lets you run Nuclio functions without a Kubernetes cluster.

**Q: Does it support GPU workloads?**
A: Yes. Function specs can request GPU resources, and the scheduler places them on GPU-enabled nodes.

**Q: Who maintains Nuclio?**
A: Nuclio is developed by Iguazio (now part of McKinsey) and the open-source community under the Apache 2.0 license.
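The YAML function spec mentioned above can be sketched roughly as follows. The top-level fields follow Nuclio's `NuclioFunction` resource, but the trigger attributes, names, and values here are illustrative placeholders, so check the current Nuclio docs before using them.

```yaml
apiVersion: "nuclio.io/v1"
kind: NuclioFunction
metadata:
  name: my-func
spec:
  runtime: python
  handler: handler:handler
  minReplicas: 1          # autoscaling floor
  maxReplicas: 4          # autoscaling ceiling
  triggers:
    my-kafka:             # trigger name is user-chosen
      kind: kafka-cluster
      attributes:
        brokers: ["kafka-broker:9092"]   # placeholder broker address
        topics: ["events"]               # placeholder topic
```

A spec like this can be deployed with `nuctl deploy` or through the dashboard; the `triggers` map is where the streaming integrations listed under Key Features are wired up.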
## Sources

- https://github.com/nuclio/nuclio
- https://nuclio.io/docs/latest/