# Awesome Embodied Robotics & Agents
> Curated reading list for embodied robotics + agent research. Use it to track key papers, datasets, benchmarks, and trends in embodied AI.
## Quick Use
Clone the repository and search the README for the topics you need:
```bash
git clone https://github.com/zchoi/Awesome-Embodied-Robotics-and-Agent
cd Awesome-Embodied-Robotics-and-Agent
rg -n "benchmark|dataset|survey" README.md
```
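Because the list grows through its News section, one way to see what changed since your last pull, a minimal sketch using standard git commands:

```bash
# Refresh the clone, then inspect recent changes to the list itself
git pull --ff-only
git log --oneline -5 -- README.md    # last few commits touching the list
git diff HEAD@{1} -- README.md       # what the latest pull changed (reflog ref)
```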
## Intro
This repository curates research resources for embodied robotics and agent systems, helping you navigate papers, datasets, and benchmarks in embodied AI.
**Best for:** Researchers and engineers mapping the embodied AI landscape
**Works with:** Any OS; a Markdown list with many external links; verify each source and date as you read
**Setup time:** 5–15 minutes
### Key facts (verified)
- Apache-2.0 licensed list (GitHub API verified).
- Use the directory to build a reading plan: surveys first, then benchmarks/datasets, then implementations (see the sketch after this list).
- GitHub: 1,782 stars · 96 forks; pushed 2026-05-11 (GitHub API verified).
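A minimal sketch of that reading plan, assuming the upstream README uses `##`/`###` headings and that "survey" appears in the relevant entry titles; both are conventions of awesome lists, so verify against the current file:

```bash
# Outline the README's sections to sketch a reading order
rg -n "^#{2,3} " README.md

# Surface survey entries first; "survey" is a keyword guess,
# not a guaranteed label for every relevant entry
rg -in "survey" README.md | head -n 20
```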
## Main
A practical reading workflow:
1) Start with surveys to build a taxonomy (tasks, sensors, environments, metrics).
2) Pick one benchmark/dataset and trace which papers report results on it.
3) For each method you care about, find an implementation and record reproducibility notes (versions, hyperparameters, dependencies); a grep-based sketch follows this list.
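A hedged sketch of steps 2 and 3 using plain grep; `NavSpace` is just an illustrative benchmark taken from the News section below, and the arXiv-URL pattern assumes the list's usual `[[arXiv]](...)` markup:

```bash
mkdir -p notes

# Step 2: collect every line mentioning the chosen benchmark (illustrative: NavSpace)
rg -in "navspace" README.md > notes/navspace_mentions.txt

# Step 3 prep: extract unique arXiv links so each entry can be traced to its paper
rg -o 'https://arxiv\.org/abs/[0-9v.]+' README.md | sort -u > notes/arxiv_links.txt
```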
### README excerpt (verbatim)
# 🤖 Awesome Embodied Robotics and Agent [![Awesome](https://awesome.re/badge.svg)](https://github.com/sindresorhus/awesome)
> This is a curated list of "Embodied robotics or agent with Vision-Language Models (VLMs) and Large Language Models (LLMs)" research which is maintained by [haonan](https://zchoi.github.io/).
Watch this repository for the latest updates and **feel free to raise pull requests if you find some interesting papers**!
## News🔥
[2026/05/11] 🎉 Add **NavSpace: How Intelligent Agents Follow Spatial Intelligence Instructions** (**ICRA 2026**), the first benchmark for evaluating spatial intelligence in embodied navigation, with open-sourced dataset, evaluation code, and baseline **SNav**. [[arXiv]](https://arxiv.org/abs/2510.08173) [[Github]](https://github.com/TidalHarley/NavSpace)
[2025/10/30] 🎉 Our survey paper "**A Survey on Efficient Vision-Language-Action Models**" [[arXiv]](https://arxiv.org/abs/2510.24795) has been released!
[2025/04/23] Add **π-0.5**, a lightweight and modular framework designed to integrate perception, control, and learning directly within physical systems.
[2025/03/18] Add some popular vision-language action (VLA) models. 🦾
[2024/06/28] Created a new board about agent self-evolutionary research. 🤖
[2024/06/07] Add **Mobile-Agent-v2**, a mobile device operation assistant with effective navigation via multi-agent collaboration. 🚀
[2024/05/13] Add "**Learning Interactive Real-World Simulators**"——outstanding paper award in ICLR 2024 🥇.
[2024/04/24] Add "**A Survey on Self-Evolution of Large Language Models**", a systematic survey on self-evolution in LLMs! 💥
[2024/04/16] Add some CVPR 2024 papers.
[2024/04/15] Add **MetaGPT**, accepted for oral presentation (top 1.2%) at ICLR 2024, **ranking #1** in the LLM-based Agent category. 🚀
[2024/03/13] Add **CRADLE**, an interesting paper exploring LLM-based agent in Red Dead Redemption II!🎮
## Development of Embodied Robotics and Benchmarks