Scripts · Mar 31, 2026 · 2 min read

MLX — Apple Silicon ML Framework

MLX is an array framework for machine learning on Apple silicon, from Apple machine learning research. 24.9K+ GitHub stars. NumPy-like API, unified memory, lazy computation, autodiff. APIs for Python, C++, C, and Swift. MIT licensed.

TokRepo Featured · Community
Quick Use

Use it first, then decide how deep to go

This block shows both the user and the agent what to copy, install, and run first.

# Install
pip install mlx

# Quick start — matrix multiply on Apple GPU
python -c "
import mlx.core as mx
a = mx.random.normal((512, 512))
b = mx.random.normal((512, 512))
c = a @ b
mx.eval(c)
print(f'Result shape: {c.shape}, device: {mx.default_device()}')
"

# For LLM inference, install mlx-lm
pip install mlx-lm
mlx_lm.generate --model mlx-community/Llama-3.2-3B-Instruct-4bit --prompt "Hello"

Intro

MLX is an array framework for machine learning on Apple silicon, developed by Apple machine learning research. With 24,900+ GitHub stars and an MIT license, MLX provides a NumPy-like Python API with composable function transformations (automatic differentiation, vectorization, and graph optimization), lazy computation, dynamic graph construction, and a unified memory model: arrays live in memory shared by the CPU and GPU, so no manual data transfers are needed. It has Python, C++, C, and Swift APIs, making it well suited to both training and inference on Mac hardware.

Best for: ML researchers and developers running models locally on Apple silicon (M1/M2/M3/M4)
Works with: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf
Platforms: macOS (Apple Silicon), Linux (CUDA/CPU)


Key Features

  • NumPy-like API: Familiar interface for Python ML developers
  • Unified memory: No manual CPU↔GPU data transfers on Apple silicon
  • Lazy computation: Operations evaluated only when needed
  • Composable transforms: Autodiff, vectorization, and graph optimization
  • Multi-language: Python, C++, C, and Swift bindings
  • mlx-lm: Run and fine-tune LLMs locally on Mac (Llama, Mistral, Qwen, etc.)

FAQ

Q: What is MLX?
A: MLX is Apple's open-source ML framework with 24.9K+ stars for running machine learning on Apple silicon. It provides a NumPy-like API with unified memory, lazy computation, and autodiff. MIT licensed.

Q: How do I install MLX?
A: Run pip install mlx. For LLM inference: pip install mlx-lm. Requires an Apple silicon Mac (M1+) or Linux with CUDA.


🙏

Source & Thanks

Created by Apple ML Research. Licensed under MIT. ml-explore/mlx — 24,900+ GitHub stars
