

EmbodiedAgents 🤖
EmbodiedAgents is a fully-loaded framework, written in pure ROS2, for creating interactive physical agents that can understand, remember, and act upon contextual information from their environment.
- Production Ready Physical Agents: Designed for autonomous robot systems that operate in real-world, dynamic environments. EmbodiedAgents makes it simple to build systems that use Physical AI.
- Intuitive API: A simple, Pythonic API for using local or cloud-based ML models (specifically multimodal LLMs and other transformer-based architectures) on robots, as sketched below.
- Semantic Memory: Integrates vector databases, semantic routing, and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to run bloated "GenAI" frameworks on your robot.
- Made in ROS2: Uses ROS2 as the underlying distributed communications backbone. In principle, any device that provides a ROS2 package can send data to ML models, with callbacks implemented for the most commonly used data types and straightforward extensibility for the rest.
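To give a flavor of the API, here is a minimal sketch of a vision-language component in the spirit of the Quickstart Guide. The component, client, and launcher names used here (MLLM, Llava, OllamaClient, Topic, Launcher) are illustrative and may differ between releases; treat the Quickstart Guide as authoritative.

```python
from agents.components import MLLM
from agents.models import Llava
from agents.clients.ollama import OllamaClient
from agents.ros import Topic, Launcher

# Input and output topics; msg_type maps to a standard ROS2 message type
text_in = Topic(name="text0", msg_type="String")
image_in = Topic(name="image_raw", msg_type="Image")
text_out = Topic(name="text1", msg_type="String")

# A model client serving a multimodal LLM (here via a local Ollama server)
llava_client = OllamaClient(Llava(name="llava"))

# A component is a ROS2 node with a specific function: this one answers
# questions about the camera image whenever a text query arrives
mllm = MLLM(
    inputs=[text_in, image_in],
    outputs=[text_out],
    model_client=llava_client,
    trigger=[text_in],
    component_name="vqa",
)

# Bring up the component like any other ROS2 launch
# (launcher API shown here is illustrative)
launcher = Launcher()
launcher.add_pack(components=[mllm])
launcher.bringup()
```

Because every input and output is an ordinary ROS2 topic, components like this can be chained into larger graphs (routers, maps, speech components) without any framework-specific glue.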
Check out the Installation Instructions 🛠️
Get started with the Quickstart Guide 🚀
Get familiar with Basic Concepts 📚
Dive right in with Examples ✨
Contributions
EmbodiedAgents has been developed in a collaboration between Automatika Robotics and Inria. Contributions from the community are most welcome.
Table of Contents
- EmbodiedAgents 🤖
- Installation 🛠️
- Quick Start 🚀
- Basic Concepts 📚
- Examples ✨
  - Create a conversational agent with audio
  - Prompt engineering for LLMs/MLLMs using vision models
  - Create a spatio-temporal semantic map
  - Create a Go-to-X component using map data
  - Use Tool Calling in Go-to-X
  - Create a semantic router to route text queries between different components
  - Bringing it all together 🤖
  - Making the System Robust And Production Ready
- API Reference