# Examples ✨
Welcome to the EmbodiedAgents Examples section!
Here you’ll find a curated set of short, focused tutorials that demonstrate how to use EmbodiedAgents to build real-world robotic capabilities through its modular architecture. These examples illustrate how perception, language, planning, memory, and control can be integrated into powerful embodied systems.
Each tutorial walks through one or more components (e.g., LLM, SpeechToText) and is designed to be run end-to-end. We recommend going through them in order, especially if you’re new to the framework.
## 🔍 What You’ll Learn
- How to load and configure components (see the sketch after this list)
- How to connect components in an arbitrary graph
- How to build complex physical agents that reason and act in simulated or real environments
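For instance, loading, configuring, and connecting components follows a common pattern across the tutorials. The sketch below is modeled on the project’s quickstart; the specific model (Llava), client (OllamaClient), and launcher calls are assumptions and may differ in your installed version:

```python
from agents.clients.ollama import OllamaClient
from agents.components import MLLM
from agents.models import Llava
from agents.ros import Launcher, Topic

# Topics are the edges of the component graph (note the msg_type)
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# Configure a model client (a Llava model served via Ollama is assumed here)
llava = Llava(name="llava")
llava_client = OllamaClient(llava)

# Load a component: an MLLM node that answers questions about incoming images
mllm = MLLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=llava_client,
    trigger=[text0],
    component_name="vqa",
)

# Launch the component graph
launcher = Launcher()
launcher.add_pkg(components=[mllm])
launcher.bringup()
```

Connecting components into an arbitrary graph works the same way: the output topic of one component simply becomes an input (or trigger) of another.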
The tutorials cover the following, in order:

- Create a conversational agent with audio
- Prompt engineering for LLMs/MLLMs using vision models
- Create a spatio-temporal semantic map
- Create a Go-to-X component using map data
- Use Tool Calling in Go-to-X
- Create a semantic router to route text queries between different components
- Bringing it all together 🤖
- Making the system robust and production-ready
Each example includes:

- Minimal working code
- Explanation of design choices
- Conceptual takeaways
- Ways to customize or extend
Stay curious, and feel free to adapt these examples to your robot, simulation, or use case!