Strands Agents is AWS's entry into the open-source agent framework space, with a philosophy that fundamentally differs from workflow-centric approaches like LangGraph. Instead of requiring developers to explicitly define control flows and state machines, Strands trusts the reasoning capabilities of modern foundation models to handle planning, tool selection, and execution autonomously. An agent is defined with just three components: a model provider, a system prompt, and a list of tools; the SDK then manages the agentic loop of reasoning, acting, observing, and reflecting.
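The loop the SDK manages can be illustrated with a minimal sketch. The `Agent` class and `StubModel` below are illustrative stand-ins, not the real Strands API: a stubbed model takes the place of a real provider, "deciding" to call a tool once before answering.

```python
from typing import Callable

def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

class StubModel:
    """Fake model: requests one tool call, then produces a final answer."""
    def choose(self, prompt: str, observations: list) -> dict:
        if not observations:
            return {"action": "tool", "name": "word_count",
                    "args": {"text": prompt}}
        return {"action": "answer",
                "text": f"The prompt has {observations[-1]} words."}

class Agent:
    """Toy agent: a model, a system prompt, and a list of tools."""
    def __init__(self, model, system_prompt: str, tools: list[Callable]):
        self.model = model
        self.system_prompt = system_prompt
        self.tools = {t.__name__: t for t in tools}

    def __call__(self, prompt: str) -> str:
        observations = []
        while True:  # the agentic loop: reason -> act -> observe -> reflect
            step = self.model.choose(prompt, observations)  # reason
            if step["action"] == "answer":
                return step["text"]
            result = self.tools[step["name"]](**step["args"])  # act
            observations.append(result)                        # observe

agent = Agent(StubModel(), "You are a helpful assistant.", [word_count])
print(agent("count the words in this sentence"))
# → The prompt has 6 words.
```

The point of the sketch is the division of labor: the developer supplies the three components, and the loop itself, which a workflow-centric framework would require you to wire up explicitly, stays inside the SDK.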
The framework supports a wide range of model providers including Amazon Bedrock, Anthropic, OpenAI, Google Gemini, Ollama for local development, and LiteLLM for unified access to hundreds of models. Native Model Context Protocol integration gives agents access to thousands of pre-built MCP servers as tools, while the @tool decorator turns any Python function into an agent-callable capability. Multi-agent patterns including agent-as-tool composition, swarm orchestration, and graph-based workflows enable complex systems. Recent additions include TypeScript support, bidirectional audio streaming for voice agents, and edge device deployment with llama.cpp.
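The `@tool` decorator idea can be sketched in plain Python. This is an illustrative stand-in, not the real Strands decorator: the core mechanism is extracting a machine-readable spec (name, description, parameter types) from an ordinary function so a model can reason over it.

```python
import inspect

def tool(fn):
    """Illustrative decorator: attach a tool spec derived from the
    function's name, docstring, and type annotations."""
    sig = inspect.signature(fn)
    fn.tool_spec = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            name: p.annotation.__name__
            for name, p in sig.parameters.items()
            if p.annotation is not inspect.Parameter.empty
        },
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    # Hypothetical stub; a real tool would call a weather API.
    return f"Sunny in {city}"

print(get_weather.tool_spec)
# → {'name': 'get_weather', 'description': 'Return the current weather
#    for a city.', 'parameters': {'city': 'str'}}
```

The function stays directly callable; the attached spec is what gets surfaced to the model alongside MCP-provided tools, which expose the same kind of schema over a protocol instead of a decorator.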
Strands is already battle-tested in production at AWS, powering Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer. The deployment toolkit provides reference architectures for AWS Lambda, Fargate, EKS, and Bedrock AgentCore, with built-in OpenTelemetry observability for monitoring agent behavior at scale. Enterprise adopters such as Smartsheet, Jit, and Scale AI have reported significant acceleration in agent development, with some teams shipping production-ready agents in days rather than months.