# FLAML — Fast Lightweight AutoML by Microsoft

> FLAML finds accurate machine learning models with low computational cost using a cost-frugal search strategy, supporting classification, regression, NLP, and time series tasks.

## Install

```bash
pip install "flaml[automl]"
```

## Quick Use

```python
from flaml import AutoML
from sklearn.datasets import load_iris

# Fit an AutoML model on the iris dataset with a 60-second budget
X, y = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(X_train=X, y_train=y, task="classification", time_budget=60)
print(f"Best model: {automl.best_estimator}")
print(f"Best loss: {automl.best_loss:.3f}")
```

## Introduction

FLAML (Fast and Lightweight AutoML) is a library from Microsoft Research that finds high-quality ML models at minimal cost. Its cost-frugal optimization strategy allocates more budget to promising configurations and less to unpromising ones, often matching or beating other AutoML tools in a fraction of the time.

## What FLAML Does

- Automatically selects models and tunes hyperparameters for tabular data
- Supports classification, regression, and time-series forecasting
- Provides a generic hyperparameter tuning API for any custom learner
- Integrates with scikit-learn, XGBoost, LightGBM, and CatBoost
- Optimizes within a user-specified time or iteration budget

## Architecture Overview

FLAML implements cost-frugal search strategies, CFO and BlendSearch; BlendSearch combines a global search method (such as Bayesian optimization) with CFO's cost-aware local search. The search starts with cheap learners like linear models, estimates their potential, and progressively allocates budget to more expensive models like gradient boosting. The search adapts the evaluation cost dynamically based on observed performance.

## Self-Hosting & Configuration

- Install via pip with optional extras for specific learners
- Call `automl.fit()` with your data, task type, and `time_budget`
- Specify a `metric` (accuracy, f1, rmse, etc.)
to match your goal
- Use `estimator_list` to restrict which model types are explored
- Enable early stopping for neural network or deep learning models

## Key Features

- Cost-frugal search finds good models faster than random or grid search
- Zero-config API that works out of the box for common tasks
- Time-series forecasting with automated feature engineering
- Custom learner support for integrating any estimator
- Lightweight, with minimal dependencies beyond scikit-learn

## Comparison with Similar Tools

- **Auto-sklearn** — uses Bayesian optimization with meta-learning; FLAML is typically faster thanks to its cost-frugal approach
- **TPOT** — genetic programming over pipeline structures; FLAML focuses on efficient hyperparameter search
- **H2O AutoML** — full platform with a Java backend; FLAML is pure Python and more lightweight
- **Optuna** — general hyperparameter tuning; FLAML adds automated model selection on top

## FAQ

**Q: How does FLAML choose which model to try first?**
A: FLAML starts with cheap learners and uses cost-aware search to decide when to move to more expensive models.

**Q: Can I use FLAML for deep learning?**
A: Yes. FLAML supports PyTorch and TensorFlow models via its custom learner interface.

**Q: What is the minimum time budget?**
A: Even 10-60 seconds can produce reasonable results on small datasets.

**Q: Does FLAML support parallel trials?**
A: Yes. Set `n_concurrent_trials > 1` and optionally use Ray for distributed execution.

## Sources

- https://github.com/microsoft/FLAML
- https://microsoft.github.io/FLAML/
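The cost-frugal allocation described in the Architecture Overview can be caricatured in a few lines of plain Python. This is an illustrative toy, not FLAML's actual algorithm: real trial costs and scores are unknown up front and must be estimated from observed evaluations, and the learner names, costs, and scores below are made-up values.

```python
# Toy sketch of cost-frugal search: evaluate cheap learners first and
# spend remaining budget on progressively more expensive ones.
# Deliberate simplification of the idea behind FLAML's CFO/BlendSearch.

def cost_frugal_search(learners, budget):
    """learners: list of (name, cost, score) tuples.
    Evaluates learners in order of increasing cost until the budget
    is exhausted, keeping the best score seen."""
    best_name, best_score = None, float("-inf")
    spent = 0.0
    # Cheapest configurations are tried first, as in FLAML.
    for name, cost, score in sorted(learners, key=lambda l: l[1]):
        if spent + cost > budget:
            break  # the next learner no longer fits the budget
        spent += cost
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score, spent

learners = [
    ("lgbm", 10.0, 0.93),      # expensive, strong
    ("logistic", 1.0, 0.85),   # cheap baseline
    ("xgboost", 12.0, 0.94),   # most expensive
    ("extra_tree", 3.0, 0.88),
]

print(cost_frugal_search(learners, budget=5.0))   # → ('extra_tree', 0.88, 4.0)
print(cost_frugal_search(learners, budget=30.0))  # → ('xgboost', 0.94, 26.0)
```

With a tight budget only the cheap learners are evaluated; a larger budget lets the search reach the stronger but costlier boosters, which is the trade-off FLAML automates.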