Knowledge · May 13, 2026 · 3 min read

Awesome Embodied Robotics & Agents

Curated reading list for embodied robotics + agent research. Use it to track key papers, datasets, benchmarks, and trends in embodied AI.

Agent ready

This asset can be read and installed directly by agents

TokRepo exposes a universal CLI command, install contract, metadata JSON, adapter-aware plan, and raw content links so agents can judge fit, risk, and next actions.

Native · 94/100 · Policy: allow
Agent surface: Any MCP/CLI agent
Kind: Memory
Install: Git
Trust: Established
Entrypoint: git clone https://github.com/zchoi/Awesome-Embodied-Robotics-and-Agent
Universal CLI install command: npx tokrepo install 81e1cc04-92f1-5b5a-b177-4f37b8fd819d
Intro

This repository curates research resources for embodied robotics and agent systems, helping you navigate papers, datasets, and benchmarks in embodied AI.

Best for: Researchers and engineers mapping the embodied AI landscape

Works with: Any OS; Markdown list with many external links; verify each source as you read (a link-check sketch follows below)

Setup time: 5–15 minutes
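
The "verify each source" step can be spot-checked mechanically. Below is a minimal, illustrative Python sketch, not part of the repository: it assumes the repo has already been cloned via the entrypoint above, and that links follow standard Markdown syntax.

```python
# Illustrative sketch only. Assumptions: the repo was cloned via the
# entrypoint above, and links use standard Markdown [title](url) syntax.
import re
import urllib.request
from pathlib import Path

README = Path("Awesome-Embodied-Robotics-and-Agent/README.md")

# Capture only http(s) targets from Markdown links.
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def reachable(url: str, timeout: float = 10.0) -> bool:
    """True if the URL answers an HTTP HEAD request without an error status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:  # DNS failure, timeout, HTTPError for 4xx/5xx, ...
        return False

text = README.read_text(encoding="utf-8")
for title, url in LINK_RE.findall(text)[:20]:  # spot-check the first 20
    print(f"{'ok' if reachable(url) else 'unreachable':11} {title}: {url}")
```

Note that some servers reject HEAD requests outright, so treat "unreachable" as a prompt to check by hand rather than a verdict.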

Key facts (verified)

  • Apache-2.0 licensed list (GitHub API verified).
  • Use the directory to build a reading plan: surveys first, then benchmarks/datasets, then implementations.
  • GitHub: 1,782 stars · 96 forks; pushed 2026-05-11 (GitHub API verified).

Main

A practical reading workflow (a code sketch follows the steps):

  1. Start with surveys to build a taxonomy (tasks, sensors, environments, metrics).
  2. Pick one benchmark/dataset and trace which papers report results on it.
  3. For each method you care about, find an implementation and record reproducibility notes.
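
As an illustration of the bookkeeping in step 3, here is a minimal Python sketch; every field and phase name is an assumption made for this example, not a structure the repository defines. The sample entries are drawn from the news items quoted in the README excerpt below.

```python
# Illustrative sketch only: field names, phases, and sample entries are
# assumptions for this example, not structures defined by the repository.
from dataclasses import dataclass, field

PHASES = ("survey", "benchmark", "implementation")  # steps 1-3 above

@dataclass
class Entry:
    title: str
    phase: str                            # one of PHASES
    benchmark: str | None = None          # which benchmark it reports on (step 2)
    repro_notes: list[str] = field(default_factory=list)  # step 3

def reading_order(entries: list[Entry]) -> list[Entry]:
    """Surveys first, then benchmark/dataset papers, then implementations."""
    return sorted(entries, key=lambda e: PHASES.index(e.phase))

plan = [
    Entry("A Survey on Efficient Vision-Language-Action Models", "survey"),
    Entry("NavSpace (ICRA 2026)", "benchmark", benchmark="NavSpace"),
    Entry("SNav baseline", "implementation", benchmark="NavSpace",
          repro_notes=["dataset and evaluation code are open-sourced"]),
]
for e in reading_order(plan):
    print(f"[{e.phase:14}] {e.title}")
```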

README excerpt (verbatim)

🤖 Awesome Embodied Robotics and Agent

This is a curated list of "Embodied robotics or agent with Vision-Language Models (VLMs) and Large Language Models (LLMs)" research, maintained by haonan.

Watch this repository for the latest updates, and feel free to open pull requests if you find interesting papers!

News🔥

[2026/05/11] 🎉 Add NavSpace: How Intelligent Agents Follow Spatial Intelligence Instructions (ICRA 2026), the first benchmark for evaluating spatial intelligence in embodied navigation, with open-sourced dataset, evaluation code, and baseline SNav. [arXiv] [Github]
[2025/10/30] 🎉 Our survey paper "A Survey on Efficient Vision-Language-Action Models" [arXiv] has been released!
[2025/04/23] Add π-0.5, a lightweight and modular framework designed to integrate perception, control, and learning directly within physical systems.
[2025/03/18] Add some popular vision-language action (VLA) models. 🦾
[2024/06/28] Created a new board about agent self-evolutionary research. 🤖
[2024/06/07] Add Mobile-Agent-v2, a mobile device operation assistant with effective navigation via multi-agent collaboration. 🚀
[2024/05/13] Add "Learning Interactive Real-World Simulators", winner of an outstanding paper award at ICLR 2024 🥇.
[2024/04/24] Add "A Survey on Self-Evolution of Large Language Models", a systematic survey on self-evolution in LLMs! 💥
[2024/04/16] Add some CVPR 2024 papers.
[2024/04/15] Add MetaGPT, accepted for oral presentation (top 1.2%) at ICLR 2024, ranking #1 in the LLM-based Agent category. 🚀
[2024/03/13] Add CRADLE, an interesting paper exploring an LLM-based agent in Red Dead Redemption II! 🎮

Development of Embodied Robotics and Benchmarks

[Figure from the original README omitted in this excerpt]

Source & Thanks

Source: https://github.com/zchoi/Awesome-Embodied-Robotics-and-Agent · License: Apache-2.0 · GitHub stars: 1,782 · forks: 96
