# Amazon Bedrock
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies through a unified API. Strands provides native support for Amazon Bedrock, allowing you to use these powerful models in your agents with minimal configuration.

The `BedrockModel` class in Strands enables seamless integration with Amazon Bedrock's API, supporting:
- Text generation
- Multi-Modal understanding (Image, Document, etc.)
- Tool/function calling
- Guardrail configurations
- System Prompt, Tool, and/or Message caching
## Getting Started

### Prerequisites
Section titled “Prerequisites”- AWS Account: You need an AWS account with access to Amazon Bedrock
- AWS Credentials: Configure AWS credentials with appropriate permissions
### Required IAM Permissions

To use Amazon Bedrock with Strands, your IAM user or role needs the following permissions:
- `bedrock:InvokeModelWithResponseStream` (for streaming mode)
- `bedrock:InvokeModel` (for non-streaming mode)
Here’s a sample IAM policy that grants the necessary permissions:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "bedrock:InvokeModelWithResponseStream", "bedrock:InvokeModel" ], "Resource": "*" } ]}For production environments, it’s recommended to scope down the Resource to specific model ARNs.
### Setting Up AWS Credentials

=== "Python"
Strands uses [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) (the AWS SDK for Python) to make calls to Amazon Bedrock. Boto3 has its own credential resolution system that determines which credentials to use when making requests to AWS.
For development environments, configure credentials using one of these methods:
**Option 1: AWS CLI**
```bash
aws configure
```
**Option 2: Environment Variables**
```bash
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_SESSION_TOKEN=your_session_token  # If using temporary credentials
export AWS_REGION="us-west-2"  # Used if a custom Boto3 Session is not provided
```
!!! warning "Region Resolution Priority"

    Due to boto3's behavior, region resolution follows this priority order:

    1. Region explicitly passed to `BedrockModel(region_name="...")`
    2. Region from the boto3 session (`AWS_DEFAULT_REGION` or the profile region from `~/.aws/config`)
    3. `AWS_REGION` environment variable
    4. Default region (`us-west-2`)

    This means `AWS_REGION` has lower priority than regions set in AWS profiles. If you're experiencing unexpected region behavior, check your AWS configuration files and consider using `AWS_DEFAULT_REGION` or explicitly passing `region_name` to the `BedrockModel` constructor.

    For more details, see the [boto3 issue discussion](https://github.com/boto/boto3/issues/2574).
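    If you want to be certain which region is used, a minimal sketch (with an illustrative region) is to pass `region_name` explicitly, since it takes the highest priority:

    ```python
    from strands import Agent
    from strands.models import BedrockModel

    # Explicitly passing region_name overrides environment variables and profile settings
    model = BedrockModel(
        model_id="anthropic.claude-sonnet-4-20250514-v1:0",
        region_name="us-west-2",
    )
    agent = Agent(model=model)
    ```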
**Option 3: Custom Boto3 Session**
You can configure a custom [boto3 Session](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html) and pass it to the `BedrockModel`:
```python
import boto3
from strands.models import BedrockModel

# Create a custom boto3 session
session = boto3.Session(
    aws_access_key_id='your_access_key',
    aws_secret_access_key='your_secret_key',
    aws_session_token='your_session_token',  # If using temporary credentials
    region_name='us-west-2',
    profile_name='your-profile'  # Optional: Use a specific profile
)

# Create a Bedrock model with the custom session
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    boto_session=session
)
```
For complete details on credential configuration and resolution, see the [boto3 credentials documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).

=== "TypeScript"
The TypeScript SDK uses the [AWS SDK for JavaScript v3](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html) to make calls to Amazon Bedrock. The SDK has its own credential resolution system that determines which credentials to use when making requests to AWS.
For development environments, configure credentials using one of these methods:
**Option 1: AWS CLI**
```bash
aws configure
```
**Option 2: Environment Variables**
```bash
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_SESSION_TOKEN=your_session_token  # If using temporary credentials
export AWS_REGION="us-west-2"
```
**Option 3: Custom Credentials**
```typescript
import { BedrockModel } from '@strands-agents/sdk/bedrock'

// AWS credentials are configured through the clientConfig parameter
// See AWS SDK for JavaScript documentation for all credential options:
// https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html
const bedrockModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
  region: 'us-west-2',
  clientConfig: {
    credentials: {
      accessKeyId: 'your_access_key',
      secretAccessKey: 'your_secret_key',
      sessionToken: 'your_session_token', // If using temporary credentials
    },
  },
})
```
For complete details on credential configuration, see the [AWS SDK for JavaScript documentation](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html).

## Basic Usage

=== "Python"
The [`BedrockModel`](../../../api-reference/python/models/bedrock.md#strands.models.bedrock) provider is used by default when creating a basic Agent, and uses the [Claude Sonnet 4](https://aws.amazon.com/blogs/aws/claude-opus-4-anthropics-most-powerful-model-for-coding-is-now-in-amazon-bedrock/) model by default. This basic example creates an agent using this default setup:
```python
from strands import Agent

agent = Agent()

response = agent("Tell me about Amazon Bedrock.")
```
You can specify which Bedrock model to use by passing in the model ID string directly to the Agent constructor:
```python
from strands import Agent

# Create an agent with a specific model by passing the model ID string
agent = Agent(model="anthropic.claude-sonnet-4-20250514-v1:0")

response = agent("Tell me about Amazon Bedrock.")
```

=== "TypeScript"
The [`BedrockModel`](../../../api-reference/typescript/classes/BedrockModel.html) provider is used by default when creating a basic Agent, and uses the [Claude Sonnet 4.5](https://aws.amazon.com/blogs/aws/introducing-claude-sonnet-4-5-in-amazon-bedrock-anthropics-most-intelligent-model-best-for-coding-and-complex-agents/) model by default. This basic example creates an agent using this default setup:
```typescript
import { Agent } from '@strands-agents/sdk'

const agent = new Agent()

const response = await agent.invoke('Tell me about Amazon Bedrock.')
```
You can specify which Bedrock model to use by passing in the model ID string directly to the Agent constructor:
```typescript
import { Agent } from '@strands-agents/sdk'

// Create an agent using the model
const agent = new Agent({ model: 'anthropic.claude-sonnet-4-20250514-v1:0' })

const response = await agent.invoke('Tell me about Amazon Bedrock.')
```

Note: See the [Troubleshooting](#troubleshooting) section below if you encounter any issues.
## Custom Configuration

=== "Python"
For more control over model configuration, you can create an instance of the [`BedrockModel`](../../../api-reference/python/models/bedrock.md#strands.models.bedrock) class:
```python
from strands import Agent
from strands.models import BedrockModel

# Create a Bedrock model instance
bedrock_model = BedrockModel(
    model_id="us.amazon.nova-premier-v1:0",
    temperature=0.3,
    top_p=0.8,
)

# Create an agent using the BedrockModel instance
agent = Agent(model=bedrock_model)

# Use the agent
response = agent("Tell me about Amazon Bedrock.")
```

=== "TypeScript"
For more control over model configuration, you can create an instance of the [`BedrockModel`](../../../api-reference/typescript/classes/BedrockModel.html) class:
```typescript
import { Agent } from '@strands-agents/sdk'
import { BedrockModel } from '@strands-agents/sdk/bedrock'

// Create a Bedrock model instance
const bedrockModel = new BedrockModel({
  modelId: 'us.amazon.nova-premier-v1:0',
  temperature: 0.3,
  topP: 0.8,
})

// Create an agent using the BedrockModel instance
const agent = new Agent({ model: bedrockModel })

// Use the agent
const response = await agent.invoke('Tell me about Amazon Bedrock.')
```

## Configuration Options
=== "Python"
The [`BedrockModel`](../../../api-reference/python/models/bedrock.md#strands.models.bedrock) supports various configuration parameters. For a complete list of available options, see the [BedrockModel API reference](../../../api-reference/python/models/bedrock.md#strands.models.bedrock).
Common configuration parameters include:
- `model_id` - The Bedrock model identifier
- `temperature` - Controls randomness (higher = more random)
- `max_tokens` - Maximum number of tokens to generate
- `streaming` - Enable/disable streaming mode
- `guardrail_id` - ID of the guardrail to apply
- `cache_prompt` / `cache_tools` - Enable prompt/tool caching
- `boto_session` - Custom boto3 session for AWS credentials
- `additional_request_fields` - Additional model-specific parameters

=== "TypeScript"
The [`BedrockModel`](../../../api-reference/typescript/interfaces/BedrockModelOptions.html) supports various configuration parameters. For a complete list of available options, see the [BedrockModelOptions API reference](../../../api-reference/typescript/interfaces/BedrockModelOptions.html).
Common configuration parameters include:
- `modelId` - The Bedrock model identifier
- `temperature` - Controls randomness (higher = more random)
- `maxTokens` - Maximum number of tokens to generate
- `streaming` - Enable/disable streaming mode
- `cacheTools` - Enable tool caching
- `region` - AWS region to use
- `credentials` - AWS credentials configuration
- `additionalArgs` - Additional model-specific parameters

## Example with Configuration
=== "Python"
```python
from strands import Agent
from strands.models import BedrockModel
from botocore.config import Config as BotocoreConfig

# Create a boto client config with custom settings
boto_config = BotocoreConfig(
    retries={"max_attempts": 3, "mode": "standard"},
    connect_timeout=5,
    read_timeout=60
)

# Create a configured Bedrock model
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",  # Specify a different region than the default
    temperature=0.3,
    top_p=0.8,
    stop_sequences=["###", "END"],
    boto_client_config=boto_config,
)

# Create an agent with the configured model
agent = Agent(model=bedrock_model)

# Use the agent
response = agent("Write a short story about an AI assistant.")
```

=== "TypeScript"
```typescript
import { Agent } from '@strands-agents/sdk'
import { BedrockModel } from '@strands-agents/sdk/bedrock'

// Create a configured Bedrock model
const bedrockModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
  region: 'us-east-1', // Specify a different region than the default
  temperature: 0.3,
  topP: 0.8,
  stopSequences: ['###', 'END'],
  clientConfig: {
    retryMode: 'standard',
    maxAttempts: 3,
  },
})

// Create an agent with the configured model
const agent = new Agent({ model: bedrockModel })

// Use the agent
const response = await agent.invoke('Write a short story about an AI assistant.')
```

## Advanced Features
### Streaming vs Non-Streaming Mode

Certain Amazon Bedrock models support tool use only in non-streaming mode, so you can set the streaming configuration to `false` in order to use these models. Both modes provide the same event structure and functionality in your agent, as non-streaming responses are converted to the streaming format internally.
=== "Python"
```python
from strands.models import BedrockModel

# Streaming model (default)
streaming_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    streaming=True,  # This is the default
)

# Non-streaming model
non_streaming_model = BedrockModel(
    model_id="us.meta.llama3-2-90b-instruct-v1:0",
    streaming=False,  # Disable streaming
)
```

=== "TypeScript"
```typescript
import { BedrockModel } from '@strands-agents/sdk/bedrock'

// Streaming model (default)
const streamingModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
  stream: true, // This is the default
})

// Non-streaming model
const nonStreamingModel = new BedrockModel({
  modelId: 'us.meta.llama3-2-90b-instruct-v1:0',
  stream: false, // Disable streaming
})
```

See the Amazon Bedrock documentation on supported models and model features to learn about streaming support for different models.
### Multimodal Support

Some Bedrock models support multimodal inputs (documents, images, etc.). Here's how to use them:
=== "Python"
```python
from strands import Agent
from strands.models import BedrockModel

# Create a Bedrock model that supports multimodal inputs
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0"
)

agent = Agent(model=bedrock_model)

# Send the multimodal message to the agent
response = agent([
    {
        "document": {
            "format": "txt",
            "name": "example",
            "source": {
                "bytes": b"Once upon a time..."
            }
        }
    },
    {
        "text": "Tell me about the document."
    }
])
```

=== "TypeScript"
```typescript
const bedrockModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
})

const agent = new Agent({ model: bedrockModel })

const documentBytes = Buffer.from('Once upon a time...')

// Send multimodal content directly to invoke
const response = await agent.invoke([
  new DocumentBlock({
    format: 'txt',
    name: 'example',
    source: { bytes: documentBytes },
  }),
  'Tell me about the document.',
])
```

For a complete list of input types, please refer to the API Reference.
### Guardrails

=== "Python"
Amazon Bedrock supports guardrails to help ensure model outputs meet your requirements. Strands allows you to configure guardrails with your [`BedrockModel`](../../../api-reference/python/models/bedrock.md#strands.models.bedrock):
```python
from strands import Agent
from strands.models import BedrockModel

# Using guardrails with BedrockModel
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    guardrail_id="your-guardrail-id",
    guardrail_version="DRAFT",
    guardrail_trace="enabled",  # Options: "enabled", "disabled", "enabled_full"
    guardrail_stream_processing_mode="sync",  # Options: "sync", "async"
    guardrail_redact_input=True,  # Default: True
    guardrail_redact_input_message="Blocked Input!",  # Default: [User input redacted.]
    guardrail_redact_output=False,  # Default: False
    guardrail_redact_output_message="Blocked Output!"  # Default: [Assistant output redacted.]
)

guardrail_agent = Agent(model=bedrock_model)

response = guardrail_agent("Can you tell me about the Strands SDK?")
```
When a guardrail is triggered:

- Input redaction (enabled by default): if a guardrail policy is triggered, the input is redacted
- Output redaction (disabled by default): if a guardrail policy is triggered, the output is redacted
- Custom redaction messages can be specified for both input and output redactions

=== "TypeScript"

Guardrails are not yet supported in the TypeScript SDK.
### Caching

Strands supports caching system prompts, tools, and messages to improve performance and reduce costs. Caching allows you to reuse parts of previous requests, which can significantly reduce token usage and latency.
When you enable prompt caching, Amazon Bedrock creates a cache composed of cache checkpoints. These are markers that define the contiguous subsection of your prompt that you wish to cache (often referred to as a prompt prefix). These prompt prefixes should be static between requests; alterations to the prompt prefix in subsequent requests will result in a cache miss.
The cache has a five-minute Time To Live (TTL), which resets with each successful cache hit. During this period, the context in the cache is preserved. If no cache hits occur within the TTL window, your cache expires.
For detailed information about supported models, minimum token requirements, and other limitations, see the Amazon Bedrock documentation on prompt caching.
#### System Prompt Caching

System prompt caching allows you to reuse a cached system prompt across multiple requests. Strands supports two approaches for system prompt caching:
**Provider-Agnostic Approach (Recommended)**

Use `SystemContentBlock` arrays to define cache points that work across all model providers:
=== "Python"
```python
from strands import Agent
from strands.types.content import SystemContentBlock

# Define system content with cache points
system_content = [
    SystemContentBlock(
        text="You are a helpful assistant that provides concise answers. "
             "This is a long system prompt with detailed instructions..."
             "..." * 1600  # needs to be at least 1,024 tokens
    ),
    SystemContentBlock(cachePoint={"type": "default"})
]

# Create an agent with SystemContentBlock array
agent = Agent(system_prompt=system_content)

# First request will cache the system prompt
response1 = agent("Tell me about Python")
print(f"Cache write tokens: {response1.metrics.accumulated_usage.get('cacheWriteInputTokens')}")
print(f"Cache read tokens: {response1.metrics.accumulated_usage.get('cacheReadInputTokens')}")

# Second request will reuse the cached system prompt
response2 = agent("Tell me about JavaScript")
print(f"Cache write tokens: {response2.metrics.accumulated_usage.get('cacheWriteInputTokens')}")
print(f"Cache read tokens: {response2.metrics.accumulated_usage.get('cacheReadInputTokens')}")
```
**Legacy Bedrock-Specific Approach**
For backwards compatibility, you can still use the Bedrock-specific `cache_prompt` configuration:
```python
from strands import Agent
from strands.models import BedrockModel

# Using legacy system prompt caching with BedrockModel
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    cache_prompt="default"  # This approach is deprecated
)

# Create an agent with the model
agent = Agent(
    model=bedrock_model,
    system_prompt="You are a helpful assistant that provides concise answers. "
                  + "This is a long system prompt with detailed instructions... "
)

response = agent("Tell me about Python")
```

> **Note**: The `cache_prompt` configuration is deprecated in favor of the provider-agnostic `SystemContentBlock` approach. The new approach enables caching across all model providers through a unified interface.

=== "TypeScript"
```typescript
const systemContent = [
  'You are a helpful assistant that provides concise answers. ' +
    'This is a long system prompt with detailed instructions...' +
    '...'.repeat(1600), // needs to be at least 1,024 tokens
  new CachePointBlock({ cacheType: 'default' }),
]

const agent = new Agent({ systemPrompt: systemContent })

// First request will cache the system prompt
let cacheWriteTokens = 0
let cacheReadTokens = 0

for await (const event of agent.stream('Tell me about Python')) {
  if (event.type === 'modelMetadataEvent' && event.usage) {
    cacheWriteTokens = event.usage.cacheWriteInputTokens || 0
    cacheReadTokens = event.usage.cacheReadInputTokens || 0
  }
}
console.log(`Cache write tokens: ${cacheWriteTokens}`)
console.log(`Cache read tokens: ${cacheReadTokens}`)

// Second request will reuse the cached system prompt
for await (const event of agent.stream('Tell me about JavaScript')) {
  if (event.type === 'modelMetadataEvent' && event.usage) {
    cacheWriteTokens = event.usage.cacheWriteInputTokens || 0
    cacheReadTokens = event.usage.cacheReadInputTokens || 0
  }
}
console.log(`Cache write tokens: ${cacheWriteTokens}`)
console.log(`Cache read tokens: ${cacheReadTokens}`)
```

#### Tool Caching
Tool caching allows you to reuse cached tool definitions across multiple requests:
=== "Python"
```python
from strands import Agent
from strands.models import BedrockModel
from strands_tools import calculator, current_time

# Using tool caching with BedrockModel
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    cache_tools="default"
)

# Create an agent with the model and tools
agent = Agent(
    model=bedrock_model,
    tools=[calculator, current_time]
)

# First request will cache the tools
response1 = agent("What time is it?")
print(f"Cache write tokens: {response1.metrics.accumulated_usage.get('cacheWriteInputTokens')}")
print(f"Cache read tokens: {response1.metrics.accumulated_usage.get('cacheReadInputTokens')}")

# Second request will reuse the cached tools
response2 = agent("What is the square root of 1764?")
print(f"Cache write tokens: {response2.metrics.accumulated_usage.get('cacheWriteInputTokens')}")
print(f"Cache read tokens: {response2.metrics.accumulated_usage.get('cacheReadInputTokens')}")
```

=== "TypeScript"
```typescript
import { Agent } from '@strands-agents/sdk'
import { BedrockModel } from '@strands-agents/sdk/bedrock'

const bedrockModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
  cacheTools: 'default',
})

const agent = new Agent({
  model: bedrockModel,
  // Add your tools here when they become available
})

// First request will cache the tools
await agent.invoke('What time is it?')

// Second request will reuse the cached tools
await agent.invoke('What is the square root of 1764?')

// Note: Cache metrics are not yet available in the TypeScript SDK
```

#### Messages Caching
=== "Python"
Messages caching allows you to reuse a cached conversation across multiple requests. This is not enabled via a configuration in the [`BedrockModel`](../../../api-reference/python/models/bedrock.md#strands.models.bedrock) class, but instead by including a `cachePoint` in the Agent's Messages array:
```python
from strands import Agent

# Create a conversation, and add a messages cache point to cache the conversation up to that point
messages = [
    {
        "role": "user",
        "content": [
            {
                "document": {
                    "format": "txt",
                    "name": "example",
                    "source": {
                        "bytes": b"This is a sample document!"
                    }
                }
            },
            {
                "text": "Use this document in your response."
            },
            {
                "cachePoint": {"type": "default"}
            },
        ],
    },
    {
        "role": "assistant",
        "content": [
            {
                "text": "I will reference that document in my following responses."
            }
        ]
    }
]

# Create an agent with the messages
agent = Agent(
    messages=messages
)

# First request will cache the message
response1 = agent("What is in that document?")

# Second request will reuse the cached message
response2 = agent("How long is the document?")
```

=== "TypeScript"
Messages caching allows you to reuse a cached conversation across multiple requests. This is not enabled via a configuration in the [`BedrockModel`](../../../api-reference/typescript/classes/BedrockModel.html) class, but instead by including a `cachePoint` in the Agent's Messages array:
```typescript
const documentBytes = Buffer.from('This is a sample document!')

const userMessage = new Message({
  role: 'user',
  content: [
    new DocumentBlock({
      format: 'txt',
      name: 'example',
      source: { bytes: documentBytes },
    }),
    'Use this document in your response.',
    new CachePointBlock({ cacheType: 'default' }),
  ],
})

const assistantMessage = new Message({
  role: 'assistant',
  content: ['I will reference that document in my following responses.'],
})

const agent = new Agent({
  messages: [userMessage, assistantMessage],
})

// First request will cache the message
await agent.invoke('What is in that document?')

// Second request will reuse the cached message
await agent.invoke('How long is the document?')

// Note: Cache metrics are not yet available in the TypeScript SDK
```

Note: Each model has its own minimum token requirement for creating cache checkpoints. If your system prompt or tool definitions don't meet this minimum token threshold, a cache checkpoint will not be created. For optimal caching, ensure your system prompts and tool definitions are substantial enough to meet these requirements.
#### Cache Metrics

When using prompt caching, Amazon Bedrock provides cache statistics to help you monitor cache performance:
- `cacheWriteInputTokens`: Number of input tokens written to the cache (occurs on the first request with new content)
- `cacheReadInputTokens`: Number of input tokens read from the cache (occurs on subsequent requests with cached content)
Strands automatically captures these metrics and makes them available:
=== "Python"
Cache statistics are automatically included in `AgentResult.metrics.accumulated_usage`:
```python
from strands import Agent

agent = Agent()
response = agent("Hello!")

# Access cache metrics
cache_write = response.metrics.accumulated_usage.get('cacheWriteInputTokens', 0)
cache_read = response.metrics.accumulated_usage.get('cacheReadInputTokens', 0)

print(f"Cache write tokens: {cache_write}")
print(f"Cache read tokens: {cache_read}")
```
Cache metrics are also automatically recorded in OpenTelemetry traces when telemetry is enabled.

=== "TypeScript"
Cache statistics are included in `modelMetadataEvent.usage` during streaming:
```typescript
import { Agent } from '@strands-agents/sdk'

const agent = new Agent()

for await (const event of agent.stream('Hello!')) {
  if (event.type === 'modelMetadataEvent' && event.usage) {
    console.log(`Cache write tokens: ${event.usage.cacheWriteInputTokens || 0}`)
    console.log(`Cache read tokens: ${event.usage.cacheReadInputTokens || 0}`)
  }
}
```

### Updating Configuration at Runtime
You can update the model configuration at runtime:
=== "Python"
```python
from strands.models import BedrockModel

# Create the model with initial configuration
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    temperature=0.7
)

# Update configuration later
bedrock_model.update_config(
    temperature=0.3,
    top_p=0.2,
)
```

=== "TypeScript"
```typescript
import { BedrockModel } from '@strands-agents/sdk/bedrock'

// Create the model with initial configuration
const bedrockModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
  temperature: 0.7,
})

// Update configuration later
bedrockModel.updateConfig({
  temperature: 0.3,
  topP: 0.2,
})
```

This is especially useful for tools that need to update the model's configuration:
=== "Python"
```python
from strands import Agent, tool


@tool
def update_model_id(model_id: str, agent: Agent) -> str:
    """
    Update the model id of the agent

    Args:
        model_id: Bedrock model id to use.
    """
    print(f"Updating model_id to {model_id}")
    agent.model.update_config(model_id=model_id)
    return f"Model updated to {model_id}"


@tool
def update_temperature(temperature: float, agent: Agent) -> str:
    """
    Update the temperature of the agent

    Args:
        temperature: Temperature value for the model to use.
    """
    print(f"Updating Temperature to {temperature}")
    agent.model.update_config(temperature=temperature)
    return f"Temperature updated to {temperature}"
```

=== "TypeScript"
```typescript
import { Agent, tool } from '@strands-agents/sdk'
import { BedrockModel } from '@strands-agents/sdk/bedrock'
import { z } from 'zod'

// Define a tool that updates model configuration
const updateTemperature = tool({
  name: 'update_temperature',
  description: 'Update the temperature of the agent',
  inputSchema: z.object({
    temperature: z.number().describe('Temperature value for the model to use'),
  }),
  callback: async ({ temperature }, context) => {
    if (context.agent?.model && 'updateConfig' in context.agent.model) {
      context.agent.model.updateConfig({ temperature })
      return `Temperature updated to ${temperature}`
    }
    return 'Failed to update temperature'
  },
})

const agent = new Agent({
  model: new BedrockModel({ modelId: 'anthropic.claude-sonnet-4-20250514-v1:0' }),
  tools: [updateTemperature],
})
```

### Reasoning Support
Amazon Bedrock models can provide detailed reasoning steps when generating responses. For detailed information about supported models and reasoning token configuration, see the Amazon Bedrock documentation on inference reasoning.
=== "Python"
Strands allows you to enable and configure reasoning capabilities with your [`BedrockModel`](../../../api-reference/python/models/bedrock.md#strands.models.bedrock):
```python
from strands import Agent
from strands.models import BedrockModel

# Create a Bedrock model with reasoning configuration
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    additional_request_fields={
        "thinking": {
            "type": "enabled",
            "budget_tokens": 4096  # Minimum of 1,024
        }
    }
)

# Create an agent with the reasoning-enabled model
agent = Agent(model=bedrock_model)

# Ask a question that requires reasoning
response = agent("If a train travels at 120 km/h and needs to cover 450 km, how long will the journey take?")
```

=== "TypeScript"
Strands allows you to enable and configure reasoning capabilities with your [`BedrockModel`](../../../api-reference/typescript/classes/BedrockModel.html):
```typescript
import { Agent } from '@strands-agents/sdk'
import { BedrockModel } from '@strands-agents/sdk/bedrock'

// Create a Bedrock model with reasoning configuration
const bedrockModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
  additionalRequestFields: {
    thinking: {
      type: 'enabled',
      budget_tokens: 4096, // Minimum of 1,024
    },
  },
})

// Create an agent with the reasoning-enabled model
const agent = new Agent({ model: bedrockModel })

// Ask a question that requires reasoning
const response = await agent.invoke(
  'If a train travels at 120 km/h and needs to cover 450 km, how long will the journey take?'
)
```

Note: Not all models support structured reasoning output. Check the inference reasoning documentation for details on supported models.
### Structured Output

=== "Python"
Amazon Bedrock models support structured output through their tool calling capabilities. When you use `Agent.structured_output()`, the Strands SDK converts your schema to Bedrock's tool specification format.
```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models import BedrockModel
from typing import List, Optional


class ProductAnalysis(BaseModel):
    """Analyze product information from text."""
    name: str = Field(description="Product name")
    category: str = Field(description="Product category")
    price: float = Field(description="Price in USD")
    features: List[str] = Field(description="Key product features")
    rating: Optional[float] = Field(description="Customer rating 1-5", ge=1, le=5)


bedrock_model = BedrockModel()

agent = Agent(model=bedrock_model)

result = agent.structured_output(
    ProductAnalysis,
    """
    Analyze this product: The UltraBook Pro is a premium laptop computer
    priced at $1,299. It features a 15-inch 4K display, 16GB RAM, 512GB SSD,
    and 12-hour battery life. Customer reviews average 4.5 stars.
    """
)

print(f"Product: {result.name}")
print(f"Category: {result.category}")
print(f"Price: ${result.price}")
print(f"Features: {result.features}")
print(f"Rating: {result.rating}")
```

=== "TypeScript"

Structured output is not yet supported in the TypeScript SDK.
## Troubleshooting

### On-demand throughput isn't supported

If you encounter the error:

> Invocation of model ID XXXX with on-demand throughput isn't supported. Retry your request with the ID or ARN of an inference profile that contains this model.
This typically indicates that the model requires Cross-Region Inference, as documented in the Amazon Bedrock documentation on inference profiles. To resolve this issue, prefix your model ID with the appropriate regional identifier (`us.` or `eu.`) based on where your agent is running. For example:
Instead of: `anthropic.claude-sonnet-4-20250514-v1:0`

Use: `us.anthropic.claude-sonnet-4-20250514-v1:0`

### Model identifier is invalid
If you encounter the error:

> ValidationException: An error occurred (ValidationException) when calling the ConverseStream operation: The provided model identifier is invalid
This is most likely caused by calling Bedrock with an inference profile model ID, such as `us.anthropic.claude-sonnet-4-20250514-v1:0`, from a region that does not support inference profiles. If so, pass in a valid model ID, as follows:
=== "Python"
```python
agent = Agent(model="anthropic.claude-3-5-sonnet-20241022-v2:0")
```

=== "TypeScript"

```typescript
const agent = new Agent({ model: 'anthropic.claude-3-5-sonnet-20241022-v2:0' })
```

!!! note ""

    Strands uses a default Claude 4 Sonnet inference model from the region of your credentials when no model is provided. So if you did not pass in any model ID and are getting the above error, it's very likely because the `region` from your credentials does not support inference profiles.
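As a quick sanity check, a minimal sketch that removes the ambiguity is to pin both the model ID and the region explicitly (the values below are illustrative; substitute your own):

```python
from strands import Agent
from strands.models import BedrockModel

# Pair a region-prefixed inference profile ID with a region that supports it,
# or a plain model ID with a region that supports on-demand throughput.
model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-west-2",
)
agent = Agent(model=model)
```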