Introduction
Flower (flwr) is a framework for federated learning that allows training ML models across distributed devices without sharing raw data. It is framework-agnostic, working with PyTorch, TensorFlow, JAX, scikit-learn, and even non-Python ML systems through its gRPC-based protocol.
What Flower Does
- Orchestrates federated training across distributed clients via a central server
- Supports any ML framework through a simple NumPyClient interface (sketched after this list)
- Implements federated strategies: FedAvg, FedProx, FedAdam, and custom strategies
- Provides simulation mode for testing with thousands of virtual clients on one machine
- Enables federated analytics and evaluation without sharing client data
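For illustration, here is a minimal NumPyClient sketch. To keep it self-contained the "model" is just a NumPy array; in practice the fit() and evaluate() bodies would call your own PyTorch/TensorFlow/JAX training code. start_numpy_client is the flwr 1.x entry point; newer releases prefer fl.client.start_client() with FlowerClient().to_client().

```python
import numpy as np
import flwr as fl


class FlowerClient(fl.client.NumPyClient):
    """Minimal sketch: the 'model' is a NumPy weight vector so the example
    stays self-contained; swap in real framework-specific training code."""

    def __init__(self):
        self.weights = np.zeros(10)      # stand-in for real model weights
        self.num_examples = 100          # stand-in for local dataset size

    def get_parameters(self, config):
        # Return local model weights as a list of NumPy ndarrays
        return [self.weights]

    def fit(self, parameters, config):
        # Receive global weights, "train" locally, return the updated weights
        self.weights = parameters[0] + 0.1   # placeholder for a real training step
        return [self.weights], self.num_examples, {}

    def evaluate(self, parameters, config):
        # Evaluate the global weights on local held-out data
        loss = float(np.linalg.norm(parameters[0]))   # placeholder metric
        return loss, self.num_examples, {"accuracy": 0.0}


# Connect this client to a running Flower server (address is an example)
fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=FlowerClient())
```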
Architecture Overview
Flower uses a server-client architecture. The server selects a federated strategy and coordinates training rounds by sampling clients, sending model parameters, and aggregating updates. Clients implement fit() and evaluate() methods using any ML library. Communication uses gRPC with configurable serialization. The simulation engine (flwr.simulation) virtualizes thousands of clients using Ray, mapping them to available GPUs for fast prototyping.
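As an illustration of the sampling behaviour, the built-in FedAvg strategy can be configured with per-round client fractions and minimums. The values below are illustrative, and the parameter names follow the flwr 1.x API:

```python
import flwr as fl

# Each round the server samples 10% of connected clients for training
# (at least 10), and waits until at least 100 clients are available.
strategy = fl.server.strategy.FedAvg(
    fraction_fit=0.1,
    fraction_evaluate=0.1,
    min_fit_clients=10,
    min_evaluate_clients=10,
    min_available_clients=100,
)
```

The strategy object is then passed to the server at startup, as shown in the Self-Hosting section below.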
Self-Hosting & Configuration
- Install server and client dependencies with pip install flwr
- Start the server with fl.server.start_server(), specifying the strategy and the number of rounds (see the server sketch after this list)
- Configure client resources (CPU/GPU) for simulation via client_resources parameter
- Use TLS certificates for secure communication in production deployments
- Deploy to Kubernetes using Flower's deployment guides for multi-node setups
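Putting the list above together, a minimal server start might look like the sketch below (flwr 1.x API). The certificate paths are hypothetical, and the certificates argument can be omitted for insecure local development:

```python
from pathlib import Path

import flwr as fl

strategy = fl.server.strategy.FedAvg()

fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=3),  # number of federated rounds
    strategy=strategy,
    # For TLS, pass (CA certificate, server certificate, private key) as bytes;
    # the paths below are placeholders for real certificates.
    certificates=(
        Path("certificates/ca.crt").read_bytes(),
        Path("certificates/server.pem").read_bytes(),
        Path("certificates/server.key").read_bytes(),
    ),
)
```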
Key Features
- Framework-agnostic: works with PyTorch, TensorFlow, JAX, scikit-learn, and more
- Built-in simulation for rapid prototyping without distributed infrastructure
- Pluggable strategies for custom aggregation algorithms (see the custom strategy sketch after this list)
- Differential privacy integration for privacy-preserving training
- Support for heterogeneous clients with varying data sizes and hardware
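As a sketch of the pluggable-strategy point, a custom strategy can subclass the built-in FedAvg and override its aggregate_fit() hook; the logging added here is purely illustrative:

```python
import flwr as fl


class FedAvgWithLogging(fl.server.strategy.FedAvg):
    """Illustrative custom strategy: reuse FedAvg aggregation, add logging."""

    def aggregate_fit(self, server_round, results, failures):
        # `results` holds (ClientProxy, FitRes) pairs from the sampled clients
        total_examples = sum(fit_res.num_examples for _, fit_res in results)
        print(
            f"Round {server_round}: aggregating {len(results)} updates "
            f"({total_examples} examples, {len(failures)} failures)"
        )
        # Delegate the actual weighted averaging to FedAvg
        return super().aggregate_fit(server_round, results, failures)
```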
Comparison with Similar Tools
- PySyft — Focuses on secure computation (MPC, HE); Flower focuses on federated learning orchestration
- TensorFlow Federated — TF-only; Flower supports any framework
- NVIDIA FLARE — Enterprise-focused with more built-in security; Flower is lighter and more flexible
- FedML — Similar scope; Flower has a larger community and more framework integrations
FAQ
Q: Does Flower support non-IID data? A: Yes. Strategies like FedProx are designed for heterogeneous (non-IID) data distributions across clients.
Q: Can I simulate thousands of clients on one machine? A: Yes. Flower's simulation engine uses Ray to virtualize clients and schedule them across available GPUs.
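A minimal simulation sketch (flwr 1.x API), reusing the FlowerClient class from the introduction. The client count and resource numbers are illustrative, and newer flwr versions may expect client_fn to return FlowerClient().to_client():

```python
import flwr as fl


def client_fn(cid: str):
    # Build a fresh virtual client for client id `cid`; in practice you
    # would load this client's data partition here.
    return FlowerClient()


fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=1000,                     # virtual clients on one machine
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=fl.server.strategy.FedAvg(fraction_fit=0.01),
    client_resources={"num_cpus": 1, "num_gpus": 0.1},  # Ray scheduling hints
)
```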
Q: Is Flower secure for production? A: Flower supports TLS encryption and differential privacy. For additional security, combine with secure aggregation or trusted execution environments.
Q: Does Flower work with LLMs? A: Yes. Flower can federate fine-tuning of large language models using LoRA adapters with PyTorch or JAX clients.