Create a Semantic Router to Route Information between Components#
The SemanticRouter component in EmbodiedAgents allows you to route text queries to specific components based on the user’s intent or the output of a preceding component.
The router operates in two distinct modes:
Vector Mode (Default): This mode uses a vector DB to compute the mathematical similarity (distance) between the incoming query and the samples defined in your routes. It is extremely fast and lightweight (see the toy sketch after this list for intuition).
LLM Mode (Agentic): This mode uses an LLM to intelligently analyze the intent of the query and triggers routes accordingly. This is more computationally expensive but can handle complex nuances, context, and negation (e.g., “Don’t go to the kitchen” might be routed differently by an agent than a simple vector similarity search).
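To build intuition for Vector Mode, here is a toy sketch of distance-based routing. This is not the library’s implementation: the embed function is a made-up stand-in for a real embedding model, and the threshold value is arbitrary.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (e.g. a sentence encoder);
    # a hash-seeded random vector is NOT semantically meaningful.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

routes = {
    "goto_in": ["Go to the kitchen", "Go to the door"],
    "llm_in": ["What is the capital of France?", "How are you today?"],
}

def route(query: str, threshold: float = 5.0, default: str = "llm_in") -> str:
    q = embed(query)
    # For each route, take the L2 distance to its nearest sample query
    nearest = {
        name: min(float(np.linalg.norm(q - embed(s))) for s in samples)
        for name, samples in routes.items()
    }
    name, dist = min(nearest.items(), key=lambda kv: kv[1])
    # Fall back to the default route if no sample is close enough
    return name if dist <= threshold else default

With a real embedding model, a query like “Fetch my cup from the kitchen” would land nearest the goto_in samples and be published on that topic.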
In this recipe, we will route queries between two components: a General Purpose LLM (for chatting) and a Go-to-X Component (for navigation commands) that we built in the previous example. Let’s start by setting up our components.
Setting up the components#
In the following code snippet we will set up our two components.
from typing import Optional
import json
import numpy as np
from agents.components import LLM, SemanticRouter
from agents.models import OllamaModel
from agents.vectordbs import ChromaDB
from agents.config import LLMConfig, SemanticRouterConfig
from agents.clients import ChromaClient, OllamaClient
from agents.ros import Launcher, Topic, Route

# Start a Llama3.2 based llm component using ollama client
llama = OllamaModel(name="llama", checkpoint="llama3.2:3b")
llama_client = OllamaClient(llama)

# Initialize a vector DB that will store our routes
chroma = ChromaDB()
chroma_client = ChromaClient(db=chroma)

# Make a generic LLM component using the Llama3.2 model
llm_in = Topic(name="text_in_llm", msg_type="String")
llm_out = Topic(name="text_out_llm", msg_type="String")

llm = LLM(
    inputs=[llm_in],
    outputs=[llm_out],
    model_client=llama_client,
    trigger=llm_in,
    component_name="generic_llm",
)

# Make a Go-to-X component using the same Llama3.2 model
goto_in = Topic(name="goto_in", msg_type="String")
goal_point = Topic(name="goal_point", msg_type="PoseStamped")

config = LLMConfig(
    enable_rag=True,
    collection_name="map",
    distance_func="l2",
    n_results=1,
    add_metadata=True,
)

goto = LLM(
    inputs=[goto_in],
    outputs=[goal_point],
    model_client=llama_client,
    db_client=chroma_client,
    trigger=goto_in,
    config=config,
    component_name="go_to_x",
)

# set a component prompt
goto.set_component_prompt(
    template="""From the given metadata, extract coordinates and provide
    the coordinates in the following json format:\n {"position": coordinates}"""
)


# pre-process the output before publishing to a topic of msg_type PoseStamped
def llm_answer_to_goal_point(output: str) -> Optional[np.ndarray]:
    # extract the json part of the output string (including brackets)
    # one can use sophisticated regex parsing here but we'll keep it simple
    json_string = output[output.find("{") : output.find("}") + 1]

    # load the string as a json and extract position coordinates
    # if there is an error, return None, i.e. no output would be published to goal_point
    try:
        json_dict = json.loads(json_string)
        return np.array(json_dict["position"])
    except Exception:
        return None


# add the pre-processing function to the goal_point output topic
goto.add_publisher_preprocessor(goal_point, llm_answer_to_goal_point)
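To see the pre-processor in action, here is a quick sanity check; the model response below is made up for illustration:

sample_output = 'Sure! The kitchen is at {"position": [1.2, 3.4, 0.0]}'
print(llm_answer_to_goal_point(sample_output))   # [1.2 3.4 0. ]
print(llm_answer_to_goal_point("no json here"))  # None -> nothing is published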
Note
Note that we have reused the same model and its client for both components.
Note
For a detailed explanation of the code for setting up the Go-to-X component, check the previous example.
Caution
In the code block above we are using the same DB client that was set up in this example.
Creating the SemanticRouter#
The SemanticRouter takes an input String topic and sends whatever is published on that topic to a Route. A Route is a thin wrapper around Topic: it takes the name of a topic to publish on, along with example queries that match the kind of query that should be routed to that topic. For example, if we ask our robot a general question like “What’s the capital of France?”, we do not want it routed to the Go-to-X component but to the generic LLM, so the LLM route’s samples should be general questions. Let’s start by creating the routes for the input topics of the two components above.
from agents.ros import Route

# Create the input topic for the router
query_topic = Topic(name="question", msg_type="String")

# Define a route to a topic that processes go-to-x commands
goto_route = Route(
    routes_to=goto_in,
    samples=[
        "Go to the door",
        "Go to the kitchen",
        "Get me a glass",
        "Fetch a ball",
        "Go to hallway",
    ],
)

# Define a route to a topic that is input to an LLM component
llm_route = Route(
    routes_to=llm_in,
    samples=[
        "What is the capital of France?",
        "Is there life on Mars?",
        "How many tablespoons in a cup?",
        "How are you today?",
        "Whats up?",
    ],
)
Option 1: Vector Mode (Similarity)#
This is the standard approach. In Vector Mode, the SemanticRouter works by storing the route examples in a vector DB. The distance between an incoming query’s embedding and the embeddings of the example queries determines which Route (Topic) the query is sent on. For the database client we will use the ChromaDB client set up in this example. We will specify a router name in our router config, which will act as a collection_name in the database.
from agents.components import SemanticRouter
from agents.config import SemanticRouterConfig

router_config = SemanticRouterConfig(router_name="go-to-router", distance_func="l2")

# Initialize the router component
router = SemanticRouter(
    inputs=[query_topic],
    routes=[llm_route, goto_route],
    default_route=llm_route,  # used if no route falls within the distance threshold
    config=router_config,
    db_client=chroma_client,  # providing db_client enables Vector Mode
    component_name="router",
)
Option 2: LLM Mode (Agentic)#
Alternatively, we can use an LLM to make routing decisions. This is useful if your routes require “understanding” rather than just similarity. We simply provide a model_client instead of a db_client.
Note
We can even use the same LLM (model_client) as we are using for our other Q&A components.
# No SemanticRouterConfig needed; we can pass an LLMConfig or leave the default
router = SemanticRouter(
    inputs=[query_topic],
    routes=[llm_route, goto_route],
    model_client=llama_client,  # providing model_client enables LLM Mode
    component_name="smart_router",
)
And that is it. Whenever something is published on the input topic question, it will be routed either to the Go-to-X component or to the LLM component. We can now expose this topic to our command interface. The complete code for setting up the router is given below:
from typing import Optional
import json
import numpy as np
from agents.components import LLM, SemanticRouter
from agents.models import OllamaModel
from agents.vectordbs import ChromaDB
from agents.config import LLMConfig, SemanticRouterConfig
from agents.clients import ChromaClient, OllamaClient
from agents.ros import Launcher, Topic, Route

# Start a Llama3.2 based llm component using ollama client
llama = OllamaModel(name="llama", checkpoint="llama3.2:3b")
llama_client = OllamaClient(llama)

# Initialize a vector DB that will store our routes
chroma = ChromaDB()
chroma_client = ChromaClient(db=chroma)


# Make a generic LLM component using the Llama3.2 model
llm_in = Topic(name="text_in_llm", msg_type="String")
llm_out = Topic(name="text_out_llm", msg_type="String")

llm = LLM(
    inputs=[llm_in],
    outputs=[llm_out],
    model_client=llama_client,
    trigger=llm_in,
    component_name="generic_llm",
)


# Define LLM input and output topics including goal_point topic of type PoseStamped
goto_in = Topic(name="goto_in", msg_type="String")
goal_point = Topic(name="goal_point", msg_type="PoseStamped")

config = LLMConfig(
    enable_rag=True,
    collection_name="map",
    distance_func="l2",
    n_results=1,
    add_metadata=True,
)

# initialize the component
goto = LLM(
    inputs=[goto_in],
    outputs=[goal_point],
    model_client=llama_client,
    db_client=chroma_client,  # check the previous example where we set up this database client
    trigger=goto_in,
    config=config,
    component_name="go_to_x",
)

# set a component prompt
goto.set_component_prompt(
    template="""From the given metadata, extract coordinates and provide
    the coordinates in the following json format:\n {"position": coordinates}"""
)


# pre-process the output before publishing to a topic of msg_type PoseStamped
def llm_answer_to_goal_point(output: str) -> Optional[np.ndarray]:
    # extract the json part of the output string (including brackets)
    # one can use sophisticated regex parsing here but we'll keep it simple
    json_string = output[output.find("{") : output.find("}") + 1]

    # load the string as a json and extract position coordinates
    # if there is an error, return None, i.e. no output would be published to goal_point
    try:
        json_dict = json.loads(json_string)
        return np.array(json_dict["position"])
    except Exception:
        return None


# add the pre-processing function to the goal_point output topic
goto.add_publisher_preprocessor(goal_point, llm_answer_to_goal_point)

# Create the input topic for the router
query_topic = Topic(name="question", msg_type="String")

# Define a route to a topic that processes go-to-x commands
goto_route = Route(
    routes_to=goto_in,
    samples=[
        "Go to the door",
        "Go to the kitchen",
        "Get me a glass",
        "Fetch a ball",
        "Go to hallway",
    ],
)

# Define a route to a topic that is input to an LLM component
llm_route = Route(
    routes_to=llm_in,
    samples=[
        "What is the capital of France?",
        "Is there life on Mars?",
        "How many tablespoons in a cup?",
        "How are you today?",
        "Whats up?",
    ],
)

# --- MODE 1: VECTOR ROUTING (Active) ---
router_config = SemanticRouterConfig(router_name="go-to-router", distance_func="l2")

router = SemanticRouter(
    inputs=[query_topic],
    routes=[llm_route, goto_route],
    default_route=llm_route,
    config=router_config,
    db_client=chroma_client,  # Vector Mode requires db_client
    component_name="router",
)

# --- MODE 2: LLM ROUTING (Commented Out) ---
# To use LLM routing (Agentic), comment out the block above and uncomment this:
#
# router = SemanticRouter(
#     inputs=[query_topic],
#     routes=[llm_route, goto_route],
#     default_route=llm_route,
#     model_client=llama_client,  # LLM Mode requires model_client
#     component_name="router",
# )

# Launch the components
launcher = Launcher()
launcher.add_pkg(components=[llm, goto, router])
launcher.bringup()
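Once the components are up, you can smoke-test the router from a separate process. The snippet below is a minimal sketch using rclpy; it assumes that Topic(name="question", msg_type="String") maps to a ROS 2 topic named question carrying std_msgs/msg/String, so adjust the topic name and type if your setup differs.

import time
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

rclpy.init()
node = Node("router_smoke_test")
pub = node.create_publisher(String, "question", 10)

time.sleep(1.0)  # give discovery a moment to match the router's subscriber
pub.publish(String(data="Go to the kitchen"))   # should be routed to goto_in
pub.publish(String(data="How are you today?"))  # should be routed to llm_in

node.destroy_node()
rclpy.shutdown()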