This example demonstrates how to create a Strands agent that determines whether to store information to a knowledge base or retrieve information from it based on the user’s query. It showcases a code-defined decision-making workflow that routes user inputs to the appropriate action.
Important: This example requires a knowledge base to be set up. You must initialize the knowledge base ID using the STRANDS_KNOWLEDGE_BASE_ID environment variable:
```sh
export STRANDS_KNOWLEDGE_BASE_ID=your_kb_id
```
This example was tested using a Bedrock knowledge base. If you experience odd behavior or missing data, verify that you’ve properly initialized this environment variable.
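As a quick sanity check before running the example, you can verify from Python that the variable is set (a minimal sketch; only the variable name comes from the note above):

```python
import os

# Read the knowledge base ID the memory tool expects (set via `export` above).
kb_id = os.environ.get("STRANDS_KNOWLEDGE_BASE_ID")
if kb_id is None:
    print("Warning: STRANDS_KNOWLEDGE_BASE_ID is not set; knowledge base calls will fail.")
else:
    print(f"Using knowledge base: {kb_id}")
```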
This example demonstrates a workflow where the agent's behavior is explicitly defined in code rather than relying on the agent to determine which tools to use. This approach provides several advantages, most notably deterministic behavior and precise control over which tools run and in what order.
Query Classification
The workflow begins with a dedicated classification step that uses the language model to determine user intent:
```python
def determine_action(agent, query):
    """Determine if the query is a store or retrieve action."""
    result = agent.tool.use_llm(
        prompt=f"Query: {query}",
        system_prompt=ACTION_SYSTEM_PROMPT
    )

    # Clean and extract the action
    action_text = str(result).lower().strip()

    # Default to retrieve if the response isn't clear
    if "store" in action_text:
        return "store"
    else:
        return "retrieve"
```
This classification is performed with a specialized system prompt that focuses solely on distinguishing between storage and retrieval intents, making the classification more deterministic.
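The text of `ACTION_SYSTEM_PROMPT` is not reproduced here. A sketch of what such a prompt and the surrounding string handling might look like (the prompt wording is an assumption, and `classify` simply mirrors the parsing in `determine_action` above):

```python
# Hypothetical classification prompt -- the example's actual wording may differ.
ACTION_SYSTEM_PROMPT = """\
You are a classifier. Reply with exactly one word:
- "store" if the user is providing information to remember
- "retrieve" if the user is asking for information
Reply with only "store" or "retrieve", nothing else."""

def classify(model_output: str) -> str:
    """Mirror of the parsing in determine_action: lowercase, strip, default to retrieve."""
    text = model_output.lower().strip()
    return "store" if "store" in text else "retrieve"

classify("Store")             # "store"
classify("I think retrieve")  # "retrieve"
classify("unsure")            # "retrieve" (safe default)
```

Defaulting to `"retrieve"` means an ambiguous model reply degrades to a harmless lookup rather than an unwanted write.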
Conditional Execution Paths
Based on the classification result, the workflow follows one of two distinct execution paths:
```python
if action == "store":
    # Store path
    agent.tool.memory(action="store", content=query)
    print("\nI've stored this information.")
else:
    # Retrieve path
    result = agent.tool.memory(action="retrieve", query=query, min_score=0.4, max_results=9)

    # Generate a response from the retrieved information
    result_str = str(result)
    answer = agent.tool.use_llm(
        prompt=f"User question: \"{query}\"\n\nInformation from knowledge base:\n{result_str}...",
        system_prompt=ANSWER_SYSTEM_PROMPT,
    )
```
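Put together, the routing pattern is just a branch on the classifier's output. A self-contained sketch with stand-in functions in place of the real tools (the stubs are illustrative, not part of the Strands API):

```python
def route(query, classify, store, retrieve):
    """Code-defined routing: the branch is chosen by classify(), not by the agent."""
    if classify(query) == "store":
        store(query)                      # store path
        return "I've stored this information."
    return retrieve(query)                # retrieve path

# Stand-ins for the memory tool's store/retrieve actions:
stored = []
reply = route(
    "Remember that the demo runs on Tuesdays",
    classify=lambda q: "store" if q.lower().startswith("remember") else "retrieve",
    store=stored.append,
    retrieve=lambda q: f"(would search the knowledge base for: {q})",
)
```

Because `route` branches on an ordinary return value, the same query always takes the same path, which is exactly the determinism the code-defined workflow is after.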
Tool Chaining for Retrieval
The retrieval path demonstrates tool chaining, where the output from one tool becomes the input to another:
```mermaid
flowchart LR
    A["User Query"] --> B["memory() Retrieval"]
    B --> C["use_llm()"]
    C --> D["Response"]
```
This chaining allows the agent to:

1. First retrieve relevant information from the knowledge base
2. Then process that information to generate a natural, conversational response
Explicitly defining the workflow in code ensures deterministic agent behavior rather than probabilistic outcomes. The developer precisely controls which tools are executed and in what sequence, eliminating the non-deterministic variability that occurs when an agent autonomously selects tools based on natural language understanding.