Python Deployment to Docker
This guide covers deploying Python-based Strands agents using Docker for local and cloud development.
Prerequisites
- Python 3.10+
- Docker installed and running
- Model provider credentials
Quick Start Setup
Install uv:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Configure Model Provider Credentials:
```bash
export OPENAI_API_KEY='<your-api-key>'
```

Note: This example uses OpenAI, but any supported model provider can be configured. See the Strands documentation for all supported model providers.
For instance, to configure AWS credentials:
```bash
export AWS_ACCESS_KEY_ID='<your-access-key-id>'
export AWS_SECRET_ACCESS_KEY='<your-secret-access-key>'
```
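With AWS credentials exported, the agent created in the next section could target Amazon Bedrock instead of OpenAI by swapping the model class in agent.py. A minimal sketch, assuming the Strands SDK's BedrockModel (the import path and model ID below are assumptions; check the Strands model provider docs for exact identifiers):

```python
# Sketch: swapping the OpenAI model for Amazon Bedrock in agent.py.
from strands import Agent
from strands.models import BedrockModel  # assumed import path; see the Strands model provider docs

# BedrockModel picks up standard AWS credentials from the environment
model = BedrockModel(
    model_id="anthropic.claude-3-7-sonnet-20250219-v1:0",  # placeholder; use a model enabled in your account
    region_name="us-east-1",
)

strands_agent = Agent(model=model)
```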
Project Setup
Quick Setup All-in-One Bash Command
Optional: Copy and paste this bash command to create your project with all necessary files and skip the remaining "Project Setup" steps below:
```bash
setup_agent() {
  mkdir my-python-agent && cd my-python-agent
  uv init --python 3.11
  uv add fastapi "uvicorn[standard]" pydantic strands-agents "strands-agents[openai]"

  # Remove the auto-generated main.py
  rm -f main.py

  cat > agent.py << 'EOF'
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Dict, Any
from datetime import datetime, timezone
from strands import Agent
from strands.models.openai import OpenAIModel

app = FastAPI(title="Strands Agent Server", version="1.0.0")

# Note: Any supported model provider can be configured
# Automatically uses the OPENAI_API_KEY environment variable
model = OpenAIModel(model_id="gpt-4o")

strands_agent = Agent(model=model)

class InvocationRequest(BaseModel):
    input: Dict[str, Any]

class InvocationResponse(BaseModel):
    output: Dict[str, Any]

@app.post("/invocations", response_model=InvocationResponse)
async def invoke_agent(request: InvocationRequest):
    try:
        user_message = request.input.get("prompt", "")
        if not user_message:
            raise HTTPException(
                status_code=400,
                detail="No prompt found in input. Please provide a 'prompt' key in the input."
            )

        result = strands_agent(user_message)
        response = {
            "message": result.message,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": "strands-agent",
        }

        return InvocationResponse(output=response)

    except HTTPException:
        # Re-raise intentional HTTP errors (e.g., the 400 above) unchanged
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Agent processing failed: {str(e)}")

@app.get("/ping")
async def ping():
    return {"status": "healthy"}

def main():
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8080)

if __name__ == "__main__":
    main()
EOF

  cat > Dockerfile << 'EOF'
# Use uv's Python base image
FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim

WORKDIR /app

# Copy uv files
COPY pyproject.toml uv.lock ./

# Install dependencies
RUN uv sync --frozen --no-cache

# Copy agent file
COPY agent.py ./

# Expose port
EXPOSE 8080

# Run application
CMD ["uv", "run", "python", "agent.py"]
EOF

  echo "Setup complete! Project created in my-python-agent/"
}

setup_agent
```

Step 1: Create project directory and initialize
```bash
mkdir my-python-agent && cd my-python-agent
uv init --python 3.11
```

Step 2: Add dependencies
```bash
uv add fastapi "uvicorn[standard]" pydantic strands-agents "strands-agents[openai]"
```

Step 3: Create agent.py
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Dict, Any
from datetime import datetime, timezone
from strands import Agent
from strands.models.openai import OpenAIModel

app = FastAPI(title="Strands Agent Server", version="1.0.0")

# Note: Any supported model provider can be configured
# Automatically uses the OPENAI_API_KEY environment variable
model = OpenAIModel(model_id="gpt-4o")

strands_agent = Agent(model=model)

class InvocationRequest(BaseModel):
    input: Dict[str, Any]

class InvocationResponse(BaseModel):
    output: Dict[str, Any]

@app.post("/invocations", response_model=InvocationResponse)
async def invoke_agent(request: InvocationRequest):
    try:
        user_message = request.input.get("prompt", "")
        if not user_message:
            raise HTTPException(
                status_code=400,
                detail="No prompt found in input. Please provide a 'prompt' key in the input."
            )

        result = strands_agent(user_message)
        response = {
            "message": result.message,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": "strands-agent",
        }

        return InvocationResponse(output=response)

    except HTTPException:
        # Re-raise intentional HTTP errors (e.g., the 400 above) unchanged
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Agent processing failed: {str(e)}")

@app.get("/ping")
async def ping():
    return {"status": "healthy"}

def main():
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8080)

if __name__ == "__main__":
    main()
```
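For reference, the service's contract wraps requests and responses in input and output objects. A round trip looks roughly like the following (the message field carries the Strands message structure, whose exact shape may vary by SDK version):

```json
{
  "input": { "prompt": "What is artificial intelligence?" }
}
```

```json
{
  "output": {
    "message": { "role": "assistant", "content": [{ "text": "..." }] },
    "timestamp": "2025-01-01T00:00:00+00:00",
    "model": "strands-agent"
  }
}
```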
Step 4: Create Dockerfile

```dockerfile
# Use uv's Python base image
FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim

WORKDIR /app

# Copy uv files
COPY pyproject.toml uv.lock ./

# Install dependencies
RUN uv sync --frozen --no-cache

# Copy agent file
COPY agent.py ./

# Expose port
EXPOSE 8080

# Run application
CMD ["uv", "run", "python", "agent.py"]
```

Copying pyproject.toml and uv.lock before agent.py lets Docker reuse the cached dependency layer when only your code changes.

Your project structure will now look like:
```
my-python-agent/
├── agent.py          # FastAPI application
├── Dockerfile        # Container configuration
├── pyproject.toml    # Created by uv init
└── uv.lock           # Created automatically by uv
```

Test Locally
Before deploying with Docker, test your application locally:
```bash
# Run the application
uv run python agent.py
```
```bash
# Test /ping endpoint
curl http://localhost:8080/ping
```
```bash
# Test /invocations endpoint
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{
    "input": {"prompt": "What is artificial intelligence?"}
  }'
```
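As a complement to curl, the endpoints can also be exercised in-process with FastAPI's TestClient. A minimal sketch (test_agent.py is a hypothetical file; pytest and httpx are assumed as dev dependencies, and OPENAI_API_KEY must be set because agent.py constructs the model at import time). Posting an input without a prompt exercises the 400 path without ever calling the model:

```python
# test_agent.py -- hypothetical test file; run with: uv run pytest test_agent.py
# Assumed dev dependencies: uv add --dev pytest httpx
from fastapi.testclient import TestClient

from agent import app  # importing agent.py builds the model, so credentials must be set

client = TestClient(app)

def test_ping():
    response = client.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"status": "healthy"}

def test_missing_prompt_returns_400():
    # No "prompt" key: the handler rejects the request before any model call
    response = client.post("/invocations", json={"input": {}})
    assert response.status_code == 400
```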
Deploy to Docker

Step 1: Build Docker Image
Build your Docker image:
```bash
docker build -t my-agent-image:latest .
```
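If you build on one CPU architecture and deploy to another (for example, an Apple Silicon laptop targeting x86_64 cloud hosts), Docker's --platform flag pins the image architecture:

```bash
# Build an amd64 image regardless of the host architecture
docker build --platform linux/amd64 -t my-agent-image:latest .
```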
Step 2: Run Docker Container

Run the container with model provider credentials:
```bash
docker run -p 8080:8080 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  my-agent-image:latest
```

This example uses OpenAI credentials by default, but any model provider credentials can be passed as environment variables when running the image. For instance, to pass AWS credentials:
```bash
docker run -p 8080:8080 \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_REGION=us-east-1 \
  my-agent-image:latest
```
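Rather than passing each credential with -e, Docker's --env-file flag loads them from a file; keep any such file out of version control:

```bash
# .env -- one KEY=value per line, e.g.:
#   OPENAI_API_KEY=<your-api-key>

docker run -p 8080:8080 --env-file .env my-agent-image:latest
```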
Step 3: Test Your Deployment

Test the endpoints:
```bash
# Health check
curl http://localhost:8080/ping
```
```bash
# Test agent invocation
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "What is artificial intelligence?"}}'
```

Step 4: Making Changes
When you modify your code, rebuild and run:
```bash
# Rebuild image
docker build -t my-agent-image:latest .

# Stop existing container (if running)
docker stop $(docker ps -q --filter ancestor=my-agent-image:latest)

# Run new container
docker run -p 8080:8080 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  my-agent-image:latest
```
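As a convenience, naming the container avoids the docker ps lookup when stopping it; a sketch using Docker's --name and --rm flags:

```bash
# Run under a fixed name; --rm removes the container on exit
docker run --rm --name my-agent -p 8080:8080 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  my-agent-image:latest

# Stopping is now a one-liner
docker stop my-agent
```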
Troubleshooting

- Container not starting: Check logs with `docker logs $(docker ps -q --filter ancestor=my-agent-image:latest)`
- Connection refused: Verify the app is listening on 0.0.0.0:8080
- Image build fails: Check `pyproject.toml` and its dependencies
- Port already in use: Use a different port mapping, e.g. `-p 8081:8080` (see the sketch below)
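For the last case, it can help to see what currently holds port 8080 before remapping; a quick sketch (lsof is assumed to be available on the host):

```bash
# See what is listening on 8080
lsof -i :8080

# Check whether another container already publishes the port
docker ps --filter "publish=8080"

# Remap the host side while keeping the container port
docker run -p 8081:8080 -e OPENAI_API_KEY=$OPENAI_API_KEY my-agent-image:latest
```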
Docker Compose for Local Development
Optional: Docker Compose is recommended only for local development; most cloud container services expect a plain Docker image rather than a Compose file.
For local development and testing, Docker Compose provides a more convenient way to manage your container:
```yaml
# Example for OpenAI
version: '3.8'

services:
  my-python-agent:
    build: .
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=<your-api-key>
```
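To avoid committing a real key, Compose can interpolate the value from the host shell (or an adjacent .env file) instead of hardcoding it:

```yaml
services:
  my-python-agent:
    build: .
    ports:
      - "8080:8080"
    environment:
      # Resolved from the host environment when `docker-compose up` runs
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```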
Run with Docker Compose:

```bash
# Start services
docker-compose up --build

# Run in background
docker-compose up -d --build

# Stop services
docker-compose down
```

Optional: Deploy to Cloud Container Service
Once your application works locally with Docker, you can deploy it to any cloud-hosted container service. The Docker container you’ve created is the foundation for deploying to the cloud platform of your choice (AWS, GCP, Azure, etc.).
Our other deployment guides build on this Docker foundation to show you how to deploy to specific cloud services:
- Amazon Bedrock AgentCore - Deploy to AWS with Bedrock integration
- AWS Fargate - Deploy to AWS’s managed container service
- Amazon EKS - Deploy to Kubernetes on AWS
- Amazon EC2 - Deploy directly to EC2 instances