Module 01 – AgentCore Architecture & Concepts
- Explain what Amazon Bedrock AgentCore is and the problem it solves
- Describe the core service components and how they relate
- Understand the flat resource model and project structure
- Explain why AgentCore exists alongside other AWS agent offerings
Install the @preview tag, not @latest, which points to an older incompatible release:
npm install -g @aws/agentcore@preview
# Verify installation
agentcore --version
# Bootstrap your AWS account (one-time setup per region)
agentcore bootstrap
Requires AWS credentials (via aws configure or an active SSO session) and sufficient IAM permissions to create CloudFormation stacks, Lambda functions, and ECR repositories.
agentcore create \
--name myagentcore \
--framework Strands \
--model-provider Bedrock \
--memory none
cd myagentcore
Note: agentcore create --defaults creates a harness project (config-driven, no Python code). This curriculum teaches agent projects. Use the command above, not --defaults.
What is Amazon Bedrock AgentCore?
AgentCore is AWS's managed infrastructure platform for deploying AI agents at scale. It sits above raw Lambda/ECS and below higher-level products like Bedrock Agents, giving you a "runtime for agents" that handles:
- Scalable execution – containerized agent processes that scale on demand
- Session management – multi-turn conversation context, user/session IDs
- Persistent memory – cross-conversation recall with configurable strategies
- Tool connectivity – the Gateway layer, an MCP-compatible proxy to external tools
- Identity & credentials – secure API key and OAuth storage, no secrets in code
- Observability – traces, logs, and evaluation pipelines built in
- Policy enforcement – Cedar-based fine-grained authorization on tool calls
Key insight: AgentCore lets you bring any framework (Strands, LangGraph, OpenAI Agents, Google ADK) and any model (Bedrock, Anthropic, OpenAI, Gemini) and deploy it consistently.
Where AgentCore Fits
The Six Core Service Areas
| Service Area | What It Does | project.json Key |
|---|---|---|
| Runtime | Managed execution environment for agent code | runtimes |
| Memory | Persistent, multi-strategy context storage | memories |
| Gateway | MCP-compatible proxy to external tools | agentCoreGateways |
| Identity | Secure credential storage (API keys, OAuth) | credentials |
| Evaluations | LLM-as-a-Judge quality monitoring | evaluators |
| Policy | Cedar-based authorization on tool calls | policyEngines |
Resource Limits Reference
| Resource | Default / Limit |
|---|---|
| Session idle timeout | 300s (configurable with --idle-timeout) |
| Session max lifetime | 3600s (configurable with --max-lifetime) |
| CodeZip max size | 250 MB |
| Agent name max length | 48 characters |
| Project name max length | 23 characters (alphanumeric, starts with letter) |
| Memory event expiry | 7–365 days |
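The name limits in the table can be checked before running create. A minimal sketch, assuming only the documented rules (project names: alphanumeric, max 23 characters, starting with a letter; agent names: max 48 characters); the helper names are illustrative, not part of the CLI:

```python
import re

# Documented limits (see table above). These helpers are illustrative,
# not part of the AgentCore CLI.
PROJECT_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{0,22}$")  # 1 + 22 = max 23 chars
AGENT_NAME_MAX = 48

def valid_project_name(name: str) -> bool:
    """Alphanumeric, starts with a letter, at most 23 characters."""
    return bool(PROJECT_NAME_RE.match(name))

def valid_agent_name(name: str) -> bool:
    """At most 48 characters (length check only; other rules may apply)."""
    return 0 < len(name) <= AGENT_NAME_MAX

print(valid_project_name("myagentcore"))  # True
print(valid_project_name("123project"))   # False – must start with a letter
print(valid_project_name("a" * 24))       # False – exceeds 23 characters
```

The real `agentcore validate` enforces these rules (and more) against the full schema; this sketch only mirrors the two length rules stated in the table.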
The Flat Resource Model
AgentCore uses a flat resource model – all resources (agents, memories, gateways, credentials) are independent top-level items defined in project.json. They are NOT nested inside each other.
An agent discovers its memory through an environment variable injected at runtime:
MEMORY_MYMEMORYNAME_ID=<memory-id>
AGENTCORE_GATEWAY_MYGATEWAY_URL=https://...
This decoupling means you can share a memory across multiple agents, swap gateways without changing agent code, and redeploy resources independently.
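In agent code, discovery reduces to reading that variable. A minimal sketch assuming the MEMORY_<NAME>_ID convention shown above; memory_id is a hypothetical helper, not an SDK function:

```python
import os
from typing import Optional

def memory_id(name: str) -> Optional[str]:
    # Resolve a memory resource's ID from the runtime-injected env var,
    # following the MEMORY_<NAME>_ID convention. Returns None when absent.
    return os.getenv(f"MEMORY_{name.upper()}_ID")

# Simulate what the runtime would inject:
os.environ["MEMORY_MYMEMORYNAME_ID"] = "mem-0123456789abcdef"
print(memory_id("MyMemoryName"))  # mem-0123456789abcdef
```

Because the lookup is by logical name, the same agent code works no matter which physical memory resource the deployment wires in.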
Project Structure
myagentcore/
├── agentcore/
│   ├── project.json       # Source of truth – all resource specs
│   │                      # (older projects may use agentcore.json)
│   ├── aws-targets.json   # Deployment targets (account + region)
│   ├── .env.local         # Secrets for local dev (gitignored)
│   └── .llm-context/      # TypeScript type defs (added after first deploy)
├── cdk/                   # Generated CDK app – at project ROOT, not inside agentcore/
├── app/
│   └── myagentcore/       # Agent source code
│       ├── main.py        # Agent entry point
│       └── pyproject.toml # Python deps
└── evaluators/            # Custom evaluator code (optional)
project.json is the single source of truth. The CDK code in cdk/ is generated from it – never edit CDK directly. Renaming a resource in project.json = destroy + recreate in CloudFormation.
project.json Anatomy
{
"name": "myagentcore",
"version": 1,
"tags": { "agentcore:project-name": "myagentcore" },
"runtimes": [], // agents
"memories": [], // memory resources
"credentials": [], // API keys, OAuth providers
"evaluators": [], // LLM-as-a-Judge definitions
"onlineEvalConfigs": [], // continuous eval configs
"agentCoreGateways": [], // gateways + targets
"policyEngines": [] // Cedar policy engines
}
Supported Frameworks & Models
| Framework | Best For |
|---|---|
| Strands Agents | AWS-native, streaming, recommended for Bedrock |
| LangChain/LangGraph | Graph-based workflows, complex reasoning chains |
| Google ADK | Gemini models |
| OpenAI Agents | OpenAI models |
| Model Provider | Auth Method |
|---|---|
| Amazon Bedrock | AWS credentials (no API key needed) |
| Anthropic | API key |
| OpenAI | API key |
| Google Gemini | API key |
Deployment Model
AgentCore uses AWS CDK under the hood. When you run agentcore deploy, the CLI reads project.json, synthesizes a CloudFormation template via CDK, and deploys it with the CDK toolkit. All resources are tracked as CloudFormation stacks – you get drift detection, rollback, and change sets for free.
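The synthesis input is just the resource arrays in project.json. A sketch of that first step – counting what the CLI would hand to CDK – assuming the top-level shape shown in the anatomy section; summarize is an illustrative helper, not a CLI feature:

```python
import json

# Top-level resource arrays per the project.json anatomy above.
RESOURCE_KEYS = ["runtimes", "memories", "credentials", "evaluators",
                 "onlineEvalConfigs", "agentCoreGateways", "policyEngines"]

def summarize(config: dict) -> dict:
    """Count declared resources per top-level key."""
    return {k: len(config.get(k, [])) for k in RESOURCE_KEYS}

config = json.loads("""{
  "name": "myagentcore", "version": 1,
  "runtimes": [{"name": "myagentcore"}],
  "memories": [], "credentials": [], "evaluators": [],
  "onlineEvalConfigs": [], "agentCoreGateways": [], "policyEngines": []
}""")
print(summarize(config))
```

A fresh project from the quickstart would show one runtime and zeros everywhere else; every deploy re-derives the stack from exactly these arrays.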
Hands-On Labs
cd ~/myagentcore
# View the source of truth config
cat agentcore/project.json
# Check the deployment target
cat agentcore/aws-targets.json
Questions to answer:
- What is the name of the agent runtime in your project?
- What memory strategies are configured (if any)?
- Which region and account is the deployment target?
- Where is the cdk/ directory – inside agentcore/ or at the project root?
Based on what you found in Lab 1.1, describe the architecture of your myagentcore project:
- Agent → uses what memory?
- Agent → connects to what gateway?
- What model/framework does the agent use?
- What would need to change in project.json to add memory?
Knowledge Check
How does an agent discover its memory resource at runtime?
b) Through an environment variable injected by the runtime: MEMORY_<NAME>_ID
c) Through a direct API call to the AgentCore control plane
d) Through a config file in the CDK stack

The runtime injects MEMORY_<NAME>_ID as an environment variable. The agent code reads os.getenv("MEMORY_MYMEMORY_ID"). This is the flat resource model – the agent code doesn't need to know the physical memory ARN at dev time.

You rename MyMemory to ProdMemory in project.json and run agentcore deploy. What happens?
b) The existing memory is destroyed and a new one is created (all stored memories are lost)
c) The deploy fails with a validation error
d) Nothing – the rename is ignored, the old name is preserved

name maps to the CloudFormation logical ID. Rename = CloudFormation delete + create. For memory this means all stored data is lost. This is why renaming is considered destructive.

Which of these must you never edit manually?
a) project.json
b) aws-targets.json
c) cdk/ (the generated CDK app, at project root)
d) app/MyAgent/main.py

cdk/ (project root) is generated from project.json. Manual edits will be overwritten on the next agentcore deploy. Always modify resources through project.json or CLI commands.

Which of these is NOT a supported agent framework?
b) LangChain/LangGraph
c) AWS Step Functions
d) Google ADK
e) OpenAI Agents SDK

AWS Step Functions is an orchestration service, not one of the supported agent frameworks.

Which tool does AgentCore use to deploy resources?
b) Terraform
c) AWS CDK (Cloud Development Kit)
d) Direct API calls to each service

The CLI synthesizes CDK stacks from project.json and deploys them using the CDK toolkit. This gives you CloudFormation drift detection, rollback, and change previews.

Key Takeaways
- AgentCore = managed runtime for AI agents (bring your own framework + model)
- The flat resource model decouples agents from their memory/gateways via env vars
- project.json is the single source of truth – never edit the generated CDK
- Renaming resources is a destructive operation (destroy + recreate)
- Six service areas: Runtime, Memory, Gateway, Identity, Evaluations, Policy
- CDK is the deployment engine – all resources are CloudFormation stacks
- The config file is project.json (inside agentcore/); cdk/ is at the project root
- --defaults creates a harness project; use --framework Strands --model-provider Bedrock --memory none for an agent project
Module 02 – CLI Fundamentals
- Install and verify the AgentCore CLI
- Navigate both TUI (interactive) and non-interactive CLI modes
- Use --json output effectively for scripting and automation
- Understand command aliases and global flags
Installation
@latest points to 0.13.1 (old, missing many commands). @preview is 1.0.0-preview.x with all current commands.
# Always install the @preview tag
npm install -g @aws/agentcore@preview
agentcore --version # should show 1.0.0-preview.x
# Prerequisites
# Node.js 20.x or later
# uv for Python agents: brew install uv
# AWS credentials: aws configure
# Upgrading from old Python toolkit
pip uninstall bedrock-agentcore-starter-toolkit
# Check for updates
agentcore update # Check and install
agentcore update --check # Check only
Two Modes: TUI vs Non-Interactive
TUI (Terminal UI) – launch with no arguments: agentcore. A full terminal UI with menus, wizards, and live dashboards. Use when learning, exploring, or wanting guided setup wizards.
The TUI menu shows only a subset of commands (add, deploy, invoke). One-time commands like create don't appear in the TUI but ARE available from the CLI. Always use agentcore --help for the full command list.
Non-interactive (CLI) mode – triggered by passing any flag or argument. Use in CI/CD pipelines, scripting, or when you know exactly what you want.
agentcore deploy -y
agentcore status --json
agentcore invoke "Hello"
agentcore help modes # Explains both modes in detail
Command Aliases
| Full Command | Alias |
|---|---|
| deploy | dp |
| dev | d |
| invoke | i |
| status | s |
| logs | l |
| traces | t |
| package | pkg |
Global Flags
| Flag | What It Does |
|---|---|
| -h, --help | Show help for any command |
| --version | Print CLI version (root only) |
| --json | Machine-readable JSON output |
| -y, --yes | Auto-confirm prompts (deploy, remove) |
| --dry-run | Preview without taking action |
The --json Flag
# Human-readable status
agentcore status
# Machine-readable JSON
agentcore status --json
# Pipe through jq to extract specific fields
agentcore status --json | jq '.resources[] | select(.resourceType == "agent")'
# Invoke and extract just the response text
agentcore invoke "Hello" --json | jq -r '.response'
# Use in CI โ check deployment success
agentcore deploy -y --json | jq '.success'
agentcore validate && agentcore deploy -y --json | tee deploy-output.json
# jq '.success' deploy-output.json
The --dry-run Flag
agentcore deploy --dry-run # Show what CDK would deploy
agentcore deploy --diff # Show CDK diff (what changes)
agentcore remove all --dry-run # Preview what would be removed
agentcore create --name Foo --defaults --dry-run # Preview project creation
Run --dry-run before destructive operations in production.
The validate Command
agentcore validate # Validate project.json in current dir
agentcore validate -d ./path # Validate a specific project directory
The status Command
agentcore status # All resources in default target
agentcore status --json # JSON output
agentcore status --runtime MyAgent # Filter to one agent
agentcore status --type memory # Filter by resource type
agentcore status --state deployed # Filter by state
# Resource types: agent, harness, runtime-endpoint, memory, credential,
# gateway, evaluator, online-eval, policy-engine, policy,
# config-bundle, ab-test
Hands-On Labs
agentcore --version
agentcore update --check
agentcore --help
agentcore add --help
agentcore deploy --help
agentcore logs --help
Challenge: Without looking at the docs, find the flag that lets you stream the response when invoking an agent. (Hint: agentcore invoke --help)
Run agentcore deploy -y before this lab – it provisions AWS resources (~5–10 min on first run).
cd ~/myagentcore
# Check status โ human readable
agentcore status
# Check status โ JSON (pipe through jq)
agentcore status --json | jq '.resources | length'
agentcore status --json | jq '.resources[] | {name: .name, type: .resourceType, state: .state}'
# Filter to just the agent
agentcore status --type agent
Answer: How many total resources are deployed? What is the state of each resource? What ARN is shown for the agent runtime?
cd ~/myagentcore
agentcore status --json | jq '.resources[0]'
agentcore status --json | jq '[.resources[] | {name, type: .resourceType}]'
agentcore validate
Knowledge Check
You need to run agentcore deploy without any human interaction. Which flags do you need?
a) --silent --auto
b) -y --json
c) --no-prompt
d) --batch

-y suppresses confirmation prompts, --json provides machine-readable output. These are the standard CI/CD flags.

What is the alias for agentcore invoke?
a) agentcore in
b) agentcore inv
c) agentcore i
d) agentcore run

agentcore i is the alias for invoke. Learn all aliases: d=dev, dp=deploy, i=invoke, s=status, l=logs, t=traces, pkg=package.

How do you preview what a deploy would change without deploying?
a) agentcore deploy --preview
b) agentcore deploy --dry-run
c) agentcore deploy --diff
d) agentcore status --pending

--diff shows the CDK diff (what would change in CloudFormation) without deploying. --dry-run runs the synthesis but also skips deploy. Both are useful; --diff is more targeted for reviewing changes.

agentcore validate checks which file?
a) agentcore/cdk/cdk.json
b) agentcore/aws-targets.json
c) agentcore/project.json
d) app/main.py

validate checks project.json against the schema. It runs before deploy internally, but you should run it manually before committing config changes.

How do you get help for the agentcore logs command specifically?
a) agentcore help logs
b) agentcore logs -h
c) agentcore logs --help
d) Both b and c work

Both -h and --help work on any command. -h is shorter; --help is the long form. Both are valid.

Key Takeaways
- Two modes: TUI (interactive) and CLI (non-interactive) – flags trigger CLI mode
- --json turns any command into an automation building block
- --dry-run and --diff are your safety net before destructive operations
- Learn the aliases: d, dp, i, s, l, t, pkg
- Always agentcore validate before agentcore deploy
- agentcore status is your primary diagnostic tool
- The TUI menu is a subset of commands – use --help for the full list
Module 03 – Project Lifecycle
- Create an AgentCore project from scratch with the full range of options
- Understand the complete project lifecycle: create → validate → deploy → iterate → teardown
- Use deploy, status, validate, and package fluently
- Understand deployment targets and how to manage multiple environments
The Full Lifecycle
agentcore create
# Minimal โ wizard fills the rest
agentcore create
# Fully non-interactive agent project (requires ALL THREE: --framework, --model-provider, --memory)
agentcore create \
--name MyProject \
--framework Strands \
--model-provider Bedrock \
--memory shortTerm
# With long-and-short-term memory
agentcore create \
--name MyProject \
--framework Strands \
--model-provider Bedrock \
--memory longAndShortTerm
# VPC networking (for private environments)
agentcore create \
--name MyProject \
--defaults \
--network-mode VPC \
--subnets subnet-abc,subnet-def \
--security-groups sg-123
# Preview without creating
agentcore create --name MyProject --defaults --dry-run
# Note: --defaults creates a HARNESS project. For an agent project, use --framework instead.
agentcore create \
--name MyProject \
--framework Strands \
--model-provider Bedrock \
--memory none
| Flag | Description |
|---|---|
| --name | Project name (alphanumeric, max 23 chars, starts with letter) |
| --defaults | Use defaults – creates a harness project (not agent) |
| --framework | Strands, LangChain_LangGraph, GoogleADK, OpenAIAgents |
| --model-provider | Bedrock, Anthropic, OpenAI, Gemini |
| --memory | none, shortTerm, longAndShortTerm |
| --build | CodeZip (default) or Container |
| --protocol | HTTP (default), MCP, or A2A |
| --network-mode | PUBLIC (default) or VPC |
| --dry-run | Preview without creating |
What create Generates
MyProject/
├── agentcore/
│   ├── project.json      # Resource config (one agent already defined)
│   ├── aws-targets.json  # Default target (your account + region)
│   └── .env.local        # Gitignored secrets placeholder
├── cdk/                  # Generated CDK app (at project ROOT)
└── app/
    └── MyAgent/          # Agent source code
        ├── main.py       # Entry point with @app.entrypoint decorator
        └── pyproject.toml # Python deps
agentcore validate
Validates project.json against the schema. Run this before every deploy:
agentcore validate # Validates current directory
agentcore validate -d ./path # Validate a specific project
Catches: missing required fields, invalid names, invalid cross-references, schema version mismatches.
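A toy pre-flight check in the same spirit – it only covers two of the error classes listed above (missing fields, invalid names), whereas the real validate checks the full schema; the function name and messages are illustrative:

```python
import re

def preflight(config: dict) -> list:
    """Collect basic config errors of the kinds `validate` catches."""
    errors = []
    name = config.get("name")
    if not name:
        errors.append("missing required field: name")
    elif not re.match(r"^[A-Za-z][A-Za-z0-9]{0,22}$", name):
        # Documented rule: alphanumeric, max 23 chars, starts with a letter.
        errors.append("invalid name: alphanumeric, max 23 chars, starts with a letter")
    for field in ("runtimes", "memories"):
        # Treated as required here for illustration only.
        if field not in config:
            errors.append(f"missing required field: {field}")
    return errors

print(preflight({"name": "myagentcore", "runtimes": [], "memories": []}))  # []
print(preflight({"name": "9bad"}))
```

Running a check like this in a pre-commit hook mirrors the advice to validate before every deploy.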
agentcore deploy
agentcore validate && agentcore deploy -y
# Preview before deploying
agentcore deploy --diff # Show what changes in CloudFormation
agentcore deploy --dry-run # Full synthesis without deploying
# Deploy with verbose output
agentcore deploy -y -v
# Deploy to a named target
agentcore deploy --target staging -y
# JSON output for CI/CD
agentcore deploy -y --json
What deploy does under the hood:
- Reads project.json + aws-targets.json
- Runs the CDK synthesizer to produce CloudFormation
- Calls CDK deploy (which calls CloudFormation)
- Updates agentcore/.cli/deployed-state.json with the resulting ARNs
agentcore status
agentcore status # All resources
agentcore status --json # JSON output
agentcore status --type agent
agentcore status --type memory
agentcore status --runtime MyAgent
agentcore status --state deployed
agentcore status --state pending-removal
States: deployed (in AWS, matching config), local-only (in project.json, not yet deployed), pending-removal (removed from config but still in AWS – deploy to clean up).
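The three states fall out of a simple set comparison between what the config declares and what is actually deployed. A sketch under that assumption (the CLI derives this from project.json and deployed-state.json; the helper here is illustrative):

```python
def classify(declared: set, deployed: set) -> dict:
    """Bucket resource names into the three documented states."""
    return {
        "deployed": sorted(declared & deployed),         # in config AND in AWS
        "local-only": sorted(declared - deployed),       # in config, not yet deployed
        "pending-removal": sorted(deployed - declared),  # removed from config, still in AWS
    }

states = classify({"MyAgent", "NewMemory"}, {"MyAgent", "OldGateway"})
print(states)
# {'deployed': ['MyAgent'], 'local-only': ['NewMemory'], 'pending-removal': ['OldGateway']}
```

This also explains why removal is two-step: removing a name from the declared set leaves it pending-removal until the next deploy reconciles AWS with the config.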
agentcore package
agentcore package # Package all runtimes
agentcore package --runtime MyAgent # Package one runtime
agentcore package -d ./my-project # Specify project directory
Deployment Targets
# aws-targets.json
[
{ "name": "default", "description": "Development", "account": "123456789012", "region": "us-east-1" },
{ "name": "staging", "description": "Staging", "account": "987654321098", "region": "us-east-1" }
]
# Deploy to a specific environment
agentcore deploy --target staging -y
agentcore status --target staging
agentcore invoke "Hello" --target staging
agentcore remove
agentcore remove agent --name MyAgent -y
agentcore remove memory --name SharedMemory -y
agentcore remove credential --name OpenAI
agentcore remove gateway --name MyGateway
# Nuclear option โ remove everything
agentcore remove all -y
agentcore remove all --dry-run # Preview first
agentcore remove removes the resource from project.json and marks it for deletion. Resources stay in AWS until the next agentcore deploy.
The Config Lifecycle
# Add to project.json โ deploy โ deployed-state.json updated with ARNs
# Remove from project.json โ deploy โ resource deleted from AWS
# deployed-state.json is auto-managed โ NEVER edit it manually
# It tracks: ARNs, endpoint URLs, last deploy timestamps
Hands-On Labs
cd ~/myagentcore
# Read the full config
cat agentcore/project.json
# Read deployment targets
cat agentcore/aws-targets.json
# Check deployed state (exists after first deploy)
cat agentcore/.cli/deployed-state.json 2>/dev/null || echo "No deployed state yet"
Identify: What resources are in the config? What account/region is the default target? After deploying, what ARNs appear in deployed-state.json?
cd ~/myagentcore
agentcore validate
Is it valid? What does the output tell you?
cd ~/myagentcore
agentcore deploy --diff
This shows what CloudFormation would change if you ran deploy. On a fresh project, it will show all new resources to be created.
# Preview creating a project with memory โ no files written
agentcore create \
--name TrainingTest \
--framework Strands \
--model-provider Bedrock \
--memory longAndShortTerm \
--dry-run
Observe what files would be generated. Notice that cdk/ is at the project root and the config is named project.json.
Knowledge Check
You run agentcore remove agent --name MyAgent. What happens to the agent in AWS immediately after this command?
b) The agent is removed from project.json but still exists in AWS until the next agentcore deploy
c) The agent is put into a pending-removal state and deleted after 24 hours
d) Nothing – you need to run agentcore deploy first before you can remove

remove removes from project.json (sets state to pending-removal). The resource stays in AWS until the next agentcore deploy, which synthesizes the CDK without that resource and CloudFormation deletes it.

Which file is auto-managed by the CLI and must never be edited manually?
a) agentcore/project.json
b) agentcore/aws-targets.json
c) agentcore/.cli/deployed-state.json
d) agentcore/cdk/cdk.out/

agentcore/.cli/deployed-state.json is the auto-managed runtime state file. Never edit it manually. It tracks ARNs, endpoint URLs, and is updated after every deploy.

Your aws-targets.json has two targets: default and staging. How do you deploy specifically to staging?
a) agentcore deploy --env staging
b) agentcore deploy --target staging
c) agentcore deploy --profile staging
d) Edit the default target to point to staging, then deploy

--target <name> selects a named target from aws-targets.json. This is how you manage dev/staging/prod with a single codebase.

What does agentcore deploy --diff do?
b) Shows CloudFormation diff of what would change, without deploying
c) Shows differences between two deployment targets
d) Shows the diff between project.json and deployed-state.json

--diff runs CDK synthesis and shows the CloudFormation change set – a diff of what resources would be added, modified, or deleted. No changes are made.

What is the maximum length for a project --name?
b) 64 characters
c) 23 characters
d) 100 characters

Project names are limited to 23 characters: alphanumeric, starting with a letter.
Key Takeaways
- Lifecycle: create → validate → deploy → iterate (add/remove resources + redeploy) → teardown
- project.json drives everything; CDK synthesizes from it
- Always validate before deploy – it catches schema errors before CDK fails
- Use --diff and --dry-run before destructive operations
- deployed-state.json is auto-managed – never edit it
- Multiple targets (aws-targets.json) enable dev/staging/prod workflows
- Resource removal is two-step: remove from config → deploy to delete from AWS
- --framework requires all three: --framework, --model-provider, --memory
Module 04 – Agent Types & Frameworks
- Distinguish the three agent creation types: template, BYO, and import
- Understand the two build types: CodeZip vs Container
- Know the supported frameworks and model providers
- Add agents to an existing project and configure advanced options
Three Ways to Create an Agent
1. Template Agent (default --type create)
The CLI scaffolds a complete working agent from a framework template. Best for new agents.
agentcore add agent \
--name MyAgent \
--framework Strands \
--model-provider Bedrock \
--memory shortTerm
2. BYO – Bring Your Own Code (--type byo)
You already have agent code and want to register it with AgentCore without changing its structure.
agentcore add agent \
--name MyAgent \
--type byo \
--code-location ./my-existing-agent \
--entrypoint main.py \
--language Python
BYO supports Python, TypeScript, and "Other" (any binary/script). Your entry point must implement the AgentCore HTTP/MCP/A2A protocol using the @app.entrypoint decorator.
3. Import from Bedrock Agents (--type import)
agentcore add agent \
--name MyAgent \
--type import \
--agent-id AGENT123 \
--agent-alias-id ALIAS456 \
--region us-east-1 \
--framework Strands \
--memory none
Two Build Types
CodeZip (default)
Python source code is zipped and deployed directly to AgentCore's managed Python runtime. No Docker required, faster build and deploy. Best for pure Python agents without custom system dependencies.
Container
Agent is packaged as a Docker container image, built via AWS CodeBuild (ARM64), pushed to ECR, and deployed to AgentCore runtime. Full control over the runtime environment, supports any language, requires a Dockerfile.
Supported Frameworks
| Framework | Flag Value | Best For |
|---|---|---|
| Strands Agents | Strands | AWS-native, streaming, Bedrock integration |
| LangChain/LangGraph | LangChain_LangGraph | Graph-based workflows |
| Google ADK | GoogleADK | Gemini models |
| OpenAI Agents SDK | OpenAIAgents | OpenAI models |
| Your Situation | Recommended |
|---|---|
| Bedrock-native, streaming responses | Strands Agents |
| Complex branching / DAG workflows | LangChain/LangGraph |
| Using Gemini models | Google ADK |
| Using OpenAI models | OpenAI Agents SDK |
Supported Model Providers
| Provider | Flag Value | Default Model | Needs API Key? |
|---|---|---|---|
| Amazon Bedrock | Bedrock | us.anthropic.claude-sonnet-4-5... | No |
| Anthropic | Anthropic | claude-sonnet-4-5... | Yes |
| OpenAI | OpenAI | gpt-4.1 | Yes |
| Google Gemini | Gemini | gemini-2.5-flash | Yes |
Agent Protocols
| Protocol | Flag | Use Case |
|---|---|---|
| HTTP | HTTP | Standard REST invocation โ most common |
| MCP | MCP | Agent acts as an MCP tool server (agents-as-tools) |
| A2A | A2A | Agent-to-Agent protocol for multi-agent orchestration |
Full add agent Flag Reference
agentcore add agent \
--name <name> # Agent name (max 48 chars)
--type create|byo|import # How to create the agent
--framework <fw> # Strands, LangChain_LangGraph, etc.
--model-provider <p> # Bedrock, Anthropic, OpenAI, Gemini
--api-key <key> # For non-Bedrock providers
--memory none|shortTerm|longAndShortTerm
--build CodeZip|Container
--protocol HTTP|MCP|A2A
--network-mode PUBLIC|VPC
--authorizer-type AWS_IAM|CUSTOM_JWT
--idle-timeout <seconds> # Kill session after N seconds idle
--max-lifetime <seconds> # Max session lifetime
--code-location <path> # BYO only
--entrypoint <file> # BYO only, default: main.py
--agent-id <id> # Import only
--agent-alias-id <id>    # Import only
Generated main.py (Strands)
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent
from strands.models import BedrockModel

app = BedrockAgentCoreApp()

@app.entrypoint
async def invoke(payload, context):
    session_id = getattr(context, 'session_id', 'default-session')
    user_id = getattr(context, 'user_id', 'default-user')
    agent = Agent(
        model=BedrockModel(model_id="us.anthropic.claude-sonnet-4-5-20250514-v1:0"),
        system_prompt="You are a helpful assistant.",
    )
    response = await agent.invoke_async(payload)
    return {"response": str(response)}
Hands-On Labs
cd ~/myagentcore
ls app/
cat app/myagentcore/main.py
cat app/myagentcore/pyproject.toml
Questions: What framework does the generated agent use? What model is configured? What does the @app.entrypoint decorator do?
cd ~/myagentcore
cat agentcore/project.json | \
python3 -c "import sys,json; d=json.load(sys.stdin); print(json.dumps(d['runtimes'], indent=2))"
Identify: build type, protocol, networkMode, any authorizerType.
agentcore add agent --help
Questions: What is the --type flag's default value? What protocols are supported? What languages are supported for BYO agents?
cd ~/myagentcore
agentcore add agent --help
# Optional experiment (modifies project.json):
# agentcore add agent \
# --name SecondAgent \
# --framework LangChain_LangGraph \
# --model-provider Bedrock \
# --memory none
Knowledge Check
You have existing agent code in ./my-bot/. How do you add it to your project without regenerating from a template?
a) agentcore add agent --type existing --code-location ./my-bot
b) agentcore add agent --type byo --code-location ./my-bot --entrypoint main.py
c) Copy the files manually to app/ and edit project.json
d) agentcore add agent --import ./my-bot

--type byo is the "Bring Your Own Code" option. Point to your existing code directory with --code-location and specify the entry file with --entrypoint.

Which build type requires a Dockerfile in the agent directory?
b) Container
c) Both
d) Neither – a Dockerfile is optional for both

Container builds require a Dockerfile; CodeZip deploys zipped Python source to the managed runtime.

An agent created with --protocol MCP means what?
b) The agent exposes itself as an MCP tool server (other agents can use it as a tool)
c) The agent uses the Model Context Protocol for memory retrieval
d) MCP is required for gateway-connected agents

When an agent is created with --protocol MCP, it exposes itself as an MCP tool server. Other agents, Claude, or MCP clients can invoke it as a tool. Different from an agent that uses MCP tools via a gateway.

You want a LangGraph agent backed by OpenAI's gpt-4.1. Which flags are correct?
a) --framework LangGraph --model gpt-4.1
b) --framework LangChain_LangGraph --model-provider OpenAI --api-key sk-...
c) --framework LangChain --model-provider OpenAI
d) --framework LangGraph --provider OpenAI

The framework value is LangChain_LangGraph (the full enum value, not just "LangGraph"). The --model-provider OpenAI flag tells AgentCore to use OpenAI, and --api-key provides the credential.

What does the --idle-timeout flag control?
b) How long an agent runtime session stays alive after the last request
c) How long until unused API credentials expire
d) The HTTP timeout for agent invocations

--idle-timeout (in seconds) controls when the runtime kills a session that hasn't received requests. Short-lived agents for one-shot tasks, longer sessions for conversational agents.

Key Takeaways
- Three creation types: template (scaffold), BYO (bring existing code), import (from Bedrock Agents)
- Two build types: CodeZip (zip + managed Python runtime) vs Container (Docker + ECR + CodeBuild)
- Four frameworks: Strands, LangChain/LangGraph, Google ADK, OpenAI Agents
- Four model providers: Bedrock (no API key), Anthropic, OpenAI, Gemini (last three need an API key)
- Three protocols: HTTP (default), MCP (expose as tool server), A2A (multi-agent)
- --authorizer-type controls inbound auth: AWS_IAM (default) or CUSTOM_JWT
- The @app.entrypoint decorator is the integration point between your code and the runtime
Module 05 – Local Development
- Run an agent locally with hot-reload using agentcore dev
- Invoke both local and deployed agents with agentcore invoke
- Use MCP protocol-specific dev commands
- Debug local agents using exec and logs
agentcore dev – the Local Dev Server
agentcore dev opens a web browser chat interface by default. To use the terminal TUI, add the --no-browser flag.
# Start dev server (opens browser chat UI by default)
agentcore dev
# Use terminal TUI instead of browser
agentcore dev --no-browser
# Non-interactive – logs to stdout (useful in one terminal, invoke in another)
agentcore dev --logs
# Target a specific runtime (required when multiple agents in project)
agentcore dev --runtime MyAgent
# Use a different port
agentcore dev --port 3000
CodeZip agents: Dev runs via uvicorn with a file watcher – changes to Python files restart automatically.
Container agents: Dev builds the Docker image and runs it with a volume mount so code changes are reflected immediately.
Invoking the Local Dev Server
# While agentcore dev is running in one terminal, invoke from another:
agentcore dev "What can you do?"
agentcore dev "Tell me a long story" --stream
agentcore dev "Hello" --runtime MyAgent
# MCP protocol dev commands
agentcore dev list-tools
agentcore dev call-tool --tool myTool --input '{"arg": "value"}'
agentcore invoke – Invoke Deployed Agents
# Basic invocation
agentcore invoke "What can you do?"
# Stream the response in real-time
agentcore invoke "Explain quantum computing" --stream
# Continue an existing session (multi-turn conversation)
agentcore invoke "What did I ask before?" --session-id abc123
# Invoke a specific runtime
agentcore invoke --runtime MyAgent "Hello"
# Invoke against a specific deployment target
agentcore invoke "Hello" --target staging
# JSON output
agentcore invoke "Hello" --json
# Set a user ID (for memory namespacing)
agentcore invoke "Hello" --user-id alice
Prompt Sources (priority order)
1. --prompt <text> (highest priority)
2. Positional argument: agentcore invoke "Hello"
3. --prompt-file <path>: read from a file
4. Piped stdin: cat prompt.txt | agentcore invoke
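The precedence above can be sketched as a simple fall-through. resolve_prompt and its parameter names are illustrative – the CLI's actual argument handling is internal:

```python
from pathlib import Path
from typing import Optional

def resolve_prompt(prompt: Optional[str] = None,
                   positional: Optional[str] = None,
                   prompt_file: Optional[str] = None,
                   stdin_text: Optional[str] = None) -> Optional[str]:
    """Pick the prompt source with the highest documented priority."""
    if prompt is not None:          # 1. --prompt
        return prompt
    if positional is not None:      # 2. positional argument
        return positional
    if prompt_file is not None:     # 3. --prompt-file
        return Path(prompt_file).read_text()
    return stdin_text               # 4. piped stdin (or None)

print(resolve_prompt(positional="Hello", stdin_text="ignored"))  # Hello
```

The practical upshot: a stray positional argument silently overrides --prompt-file, so scripts should pass exactly one source.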
# Long structured prompt from file
agentcore invoke --prompt-file ./test-prompt.json --json
# MCP protocol agents
agentcore invoke call-tool --tool myTool --input '{"key": "value"}'
# JWT auth agents
agentcore invoke "Hello" --bearer-token <your-jwt-token>
Remote Exec – Debug Inside the Runtime Container
# List files in the agent directory
agentcore invoke --exec "ls -la /app"
# Check environment variables
agentcore invoke --exec "env | grep MEMORY"
# Check OS/environment
agentcore invoke --exec "cat /etc/os-release"
# Run a Python script inside the runtime
agentcore invoke --exec "python script.py"
# With timeout (default: 60s)
agentcore invoke --exec "python long-script.py" --timeout 120
# Also works locally
agentcore dev --exec "pip list"Use cases: inspect installed packages, check env vars, run data migration scripts, debug file permissions. This is your "remote shell" into a running AgentCore runtime.
Gateway Environment Variables in Local Dev
agentcore dev reads URLs from deployed-state.json. You must deploy at least once before local dev can inject gateway URLs.
# After deploying, these are auto-injected into agentcore dev:
AGENTCORE_GATEWAY_MYGATEWAY_URL=https://bedrock-agentcore.us-east-1.amazonaws.com/...
AGENTCORE_GATEWAY_MYGATEWAY_MANAGED_OAUTH_TOKEN=...
Hands-On Labs
cd ~/myagentcore
cat agentcore/project.json | \
python3 -c "import sys,json; d=json.load(sys.stdin); print(json.dumps(d['runtimes'][0], indent=2))"
Find: What is the entrypoint? What is the codeLocation? What build type is it?
Run agentcore deploy -y before this lab.
cd ~/myagentcore
agentcore invoke "What can you do?"
agentcore invoke "Tell me about Amazon Bedrock AgentCore" --stream
agentcore invoke "Hello" --json
Run agentcore deploy -y before this lab.
cd ~/myagentcore
agentcore invoke --exec "env" | grep -E "MEMORY|GATEWAY|AWS"
agentcore invoke --exec "pip list 2>/dev/null || uv pip list"
agentcore invoke --exec "ls -la /app"
Run agentcore deploy -y before this lab.
cd ~/myagentcore
RESPONSE=$(agentcore invoke "My name is Alex and I'm learning AgentCore" --json)
SESSION_ID=$(echo $RESPONSE | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('sessionId',''))")
echo "Session ID: $SESSION_ID"
agentcore invoke "What did I just tell you?" --session-id "$SESSION_ID"
Knowledge Check
Your project has two runtimes, AnalysisAgent and SummaryAgent. How do you start the dev server for just SummaryAgent?
a) agentcore dev SummaryAgent
b) agentcore dev --agent SummaryAgent
c) agentcore dev --runtime SummaryAgent
d) agentcore dev --name SummaryAgent
Answer: c. --runtime <name> is the flag to select a specific runtime. Consistent across dev, invoke, logs, traces, and status.

Which flag reads the prompt from a file with agentcore invoke?
a) --input <file>
b) --from-file <file>
c) --prompt-file <file>
d) --file <file>
Answer: c. --prompt-file <path> reads the prompt from a file. Useful for long structured prompts (JSON, multi-paragraph instructions).

You run agentcore dev on a project with a deployed gateway. What prerequisite must be true for gateway env vars to be injected?
a) The gateway must be defined in project.json
b) The gateway must have been deployed at least once (agentcore deploy must have run)
c) You must manually set AGENTCORE_GATEWAY_URL in .env.local
d) No prerequisite - the CLI fetches the URL dynamically
Answer: b. agentcore dev reads the URLs from deployed-state.json, which is populated by agentcore deploy.

How do you run a shell command inside a deployed runtime container?
a) agentcore ssh --runtime MyAgent
b) agentcore invoke --exec "ls /app"
c) agentcore debug --runtime MyAgent "ls /app"
d) agentcore exec "ls /app"
Answer: b. agentcore invoke --exec "command" executes a shell command inside the running agent container. This is the only way to "shell into" a running AgentCore runtime - there's no SSH.

Which flag continues an existing multi-turn conversation?
a) --conversation-id <id>
b) --context-id <id>
c) --session-id <id>
d) --thread-id <id>
Answer: c. --session-id <id> continues a specific session. The session ID is returned in the JSON response from the first invocation.
Key Takeaways
- agentcore dev = hot-reload local server (opens browser UI by default; use --no-browser for TUI)
- agentcore invoke = call the deployed endpoint (add --stream for real-time)
- Four prompt sources in priority order: --prompt > positional > --prompt-file > stdin
- --session-id enables multi-turn conversations
- --exec is your remote shell into the container - use it for debugging
- Gateway env vars in local dev require a prior agentcore deploy
- --runtime <name> is the consistent flag for targeting specific agents across all commands
Module 06 - Memory Deep Dive
- Understand short-term vs long-and-short-term memory
- Know all four memory strategies and when to use each
- Add memory to a project and configure it in agent code
- Understand namespaces, event expiry, and memory record streaming
Why Memory?
Without memory, every conversation with an agent starts from zero. AgentCore Memory provides cross-session recall, within-session context, and semantic search over past interactions. Memory is a managed service - you don't host it, it scales automatically.
The Three --memory Shorthands
| Shorthand | What Gets Created | Use Case |
|---|---|---|
none | No memory resource | Stateless agents |
shortTerm | Memory with no strategies (event expiry only) | Session context within a window |
longAndShortTerm | Memory with all four strategies | Full persistent memory |
The Four Memory Strategies
| Strategy | What It Does | Best For |
|---|---|---|
SEMANTIC | Vector-based similarity search over stored facts | Retrieving relevant past context by meaning |
SUMMARIZATION | Compresses conversation history into summaries | Long conversations, token efficiency |
USER_PREFERENCE | Stores explicit user preferences | Personalization ("I prefer dark mode") |
EPISODIC | Captures meaningful interaction episodes + reflections | Long-term relationship building |
SEMANTIC uses a vector store (higher cost, best recall). SUMMARIZATION compresses tokens (saves cost on long conversations). USER_PREFERENCE is low-volume but high-value. EPISODIC provides rich long-term context at moderate cost. Start with shortTerm (no strategies), then add SEMANTIC + SUMMARIZATION as needed.
Long-and-short-term namespaces:
SEMANTIC → /users/{actorId}/facts
USER_PREFERENCE → /users/{actorId}/preferences
SUMMARIZATION → /summaries/{actorId}/{sessionId}
EPISODIC → /episodes/{actorId}/{sessionId}
         → /episodes/{actorId} (reflections)
Adding Memory via CLI
agentcore add memory \
--name SharedMemory \
--strategies SEMANTIC,SUMMARIZATION,USER_PREFERENCE,EPISODIC \
--expiry 30 # Days until events expire (7-365, default 30)
# With Kinesis streaming
agentcore add memory \
--name MyMemory \
--strategies SEMANTIC \
--data-stream-arn arn:aws:kinesis:us-east-1:123456789012:stream/my-stream \
--stream-content-level FULL_CONTENT
Memory Configuration in project.json
{
"memories": [
{
"name": "MyMemory",
"eventExpiryDuration": 30,
"strategies": [
{ "type": "SEMANTIC" },
{ "type": "SUMMARIZATION" },
{ "type": "USER_PREFERENCE" },
{ "type": "EPISODIC", "reflectionNamespaces": ["/episodes/{actorId}"] }
]
}
]
}
How Agents Discover Memory
Each memory gets an environment variable: MEMORY_<UPPERCASENAME>_ID
- MyMemory → MEMORY_MYMEMORY_ID
- SharedMemory → MEMORY_SHAREDMEMORY_ID
- prod_memory → MEMORY_PROD_MEMORY_ID
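Judging from the three examples above, the mapping appears to be a straight uppercase of the memory name (underscores kept as-is) wrapped in MEMORY_..._ID. A hypothetical helper:

```python
# Sketch of the naming convention, inferred only from the examples above;
# not the CLI's own code.
def memory_env_var(name: str) -> str:
    # Uppercase the memory name; underscores in names like prod_memory survive.
    return f"MEMORY_{name.upper()}_ID"
```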
import os
memory_id = os.getenv("MEMORY_MYMEMORY_ID")
Memory Integration in Strands Agents
import os
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig, RetrievalConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager
MEMORY_ID = os.getenv("MEMORY_MYMEMORY_ID")
REGION = os.getenv("AWS_REGION")
def get_memory_session_manager(session_id: str, actor_id: str):
if not MEMORY_ID:
return None
retrieval_config = {
f"/users/{actor_id}/facts": RetrievalConfig(top_k=3, relevance_score=0.5),
f"/summaries/{actor_id}/{session_id}": RetrievalConfig(top_k=3, relevance_score=0.5)
}
return AgentCoreMemorySessionManager(
AgentCoreMemoryConfig(
memory_id=MEMORY_ID, session_id=session_id,
actor_id=actor_id, retrieval_config=retrieval_config,
), REGION
)
Adding Memory to an Existing Agent
# 1. Add memory resource
agentcore add memory --name MyMemory --strategies SEMANTIC,SUMMARIZATION
# 2. Create memory directory
mkdir -p app/MyAgent/memory
# 3. Create app/MyAgent/memory/session.py with session manager code (see above)
# 4. Update main.py to import and use it:
# from memory.session import get_memory_session_manager
# session_manager = get_memory_session_manager(session_id, user_id)
# agent = Agent(model=..., session_manager=session_manager)
# 5. Deploy
agentcore deploy
Any agent in the project can reference a shared memory via its MEMORY_SHAREDMEMORY_ID env var. Useful for multi-agent workflows where agents need shared context.
Hands-On Labs
cd ~/myagentcore
agentcore add memory --name SharedMemory --strategies SEMANTIC,SUMMARIZATION --expiry 30
cat agentcore/project.json | \
python3 -c "import sys,json; d=json.load(sys.stdin); print(json.dumps(d['memories'], indent=2))"
Questions: What env var does the agent use to find this memory? What strategies are configured? What is the expiry?
cd ~/myagentcore
ls app/myagentcore/ 2>/dev/null
# Preview what memory integration code would be generated:
agentcore create \
--name memorytest \
--framework Strands \
--model-provider Bedrock \
--memory longAndShortTerm \
--dry-run
cd ~/myagentcore
agentcore status --type memory
agentcore add memory --help
Which strategy would you use for a customer support agent that needs to remember user preferences across sessions?
Knowledge Check
You add a memory named CustomerData. What environment variable does the agent use to get its ID?
a) AGENTCORE_MEMORY_CustomerData
b) MEMORY_ID_CUSTOMERDATA
c) MEMORY_CUSTOMERDATA_ID
d) AGENTCORE_MEMORY_ID
Answer: c. The pattern is MEMORY_<UPPERCASENAME>_ID. CustomerData → MEMORY_CUSTOMERDATA_ID.

Which strategy would you use to remember explicit user preferences across sessions?
a) SEMANTIC
b) SUMMARIZATION
c) USER_PREFERENCE
d) EPISODIC
Answer: c. USER_PREFERENCE is specifically for storing explicit preferences expressed by users.

A memory named SharedMemory is referenced by Agent A and Agent B. Which statement is true?
a) Both agents read from and write to the same memory store
b) Only the first agent to deploy gets access
c) Sharing memory requires a special --shared flag on both agents
Answer: a. Memory is a flat project resource; every agent that references it uses the same store.

What does eventExpiryDuration: 30 mean in a memory config?
a) Raw conversation events expire after 30 days (extracted facts/summaries persist)
b) The memory strategy runs every 30 days
c) Sessions expire after 30 days of inactivity
Answer: a. eventExpiryDuration controls the raw event log retention. Extracted memories (facts, summaries, preferences) are stored separately and persist beyond this window.

After adding memory/session.py to an existing agent, what else must you do?
a) Update project.json to link the memory to the agent
b) Update main.py to import and pass the session manager to the Agent constructor, then redeploy
c) Run agentcore add memory --attach MyAgent
Answer: b. Update main.py to import get_memory_session_manager, call it with session_id and user_id, and pass the result to Agent(...). Then agentcore deploy.
Key Takeaways
- Three shorthands: none, shortTerm (events only), longAndShortTerm (all 4 strategies)
- Four strategies: SEMANTIC (vector search), SUMMARIZATION (compress history), USER_PREFERENCE (prefs), EPISODIC (episodes)
- Memory is discovered via env var: MEMORY_<UPPERCASENAME>_ID
- Memory is a flat resource - easily shared across multiple agents
- Namespaces scope retrieval by user (actorId) and session (sessionId)
- eventExpiryDuration controls raw event log (not extracted memories)
- Stream memory changes to Kinesis for event-driven architectures
Module 07 - Gateway & Gateway Targets
- Explain what an AgentCore Gateway is and why it exists
- Add gateways and gateway targets using the CLI
- Configure all five target types correctly
- Understand inbound and outbound authentication options
What Is a Gateway?
An AgentCore Gateway is an MCP-compatible proxy that sits between your agent and its tools. Instead of your agent calling APIs directly, it calls the gateway, which discovers available tools from all configured targets, routes tool calls to the correct backend, handles authentication, and optionally enforces Cedar policies.
Analogy: A gateway is like an API gateway that speaks MCP. Your agent only needs to know the gateway URL.
Use --authorizer-type AWS_IAM or CUSTOM_JWT in production. NONE auth (the default) leaves the gateway endpoint open to anyone with the URL within your AWS account boundary. For external-facing gateways, CUSTOM_JWT with an OIDC provider is recommended.
Quick Start Pattern
# Recommended order: gateway BEFORE agent (new agents auto-wire gateway client code)
# 1. Create project
agentcore create --name MyProject --framework Strands --model-provider Bedrock --memory none
cd MyProject
# 2. Add gateway
agentcore add gateway --name my-gateway
# 3. Add target(s)
agentcore add gateway-target \
--type mcp-server \
--name weather-tools \
--endpoint https://mcp.example.com/mcp \
--gateway my-gateway
# 4. Create agent (automatically gets gateway client code)
agentcore add agent --name MyAgent --framework Strands --model-provider Bedrock --memory none
# 5. Deploy
agentcore deploy -y
agentcore add gateway
# Simplest (no auth โ dev/testing)
agentcore add gateway --name my-gateway
# Production with CUSTOM_JWT
agentcore add gateway \
--name my-gateway \
--authorizer-type CUSTOM_JWT \
--discovery-url https://idp.example.com/.well-known/openid-configuration \
--allowed-audience my-api \
--allowed-clients my-client-id \
--client-id agent-client-id \
--client-secret agent-client-secret
| Flag | Description |
|---|---|
--name | Gateway name (alphanumeric + hyphens, 1-100 chars) |
--authorizer-type | NONE (default), AWS_IAM, CUSTOM_JWT |
--no-semantic-search | Disable semantic tool discovery |
--exception-level | NONE (default) or ALL (returns full error detail) |
--policy-engine | Attach a Cedar policy engine |
Five Gateway Target Types
1. mcp-server
agentcore add gateway-target \
--type mcp-server \
--name weather-tools \
--endpoint https://mcp.example.com/mcp \
--gateway my-gateway \
--outbound-auth oauth \
--oauth-client-id my-client \
--oauth-client-secret my-secret \
--oauth-discovery-url https://auth.example.com/.well-known/openid-configuration
2. api-gateway
agentcore add gateway-target \
--type api-gateway \
--name PetStore \
--rest-api-id abc123 \
--stage prod \
--tool-filter-path '/pets/*' \
--tool-filter-methods GET,POST \
--gateway my-gateway
3. open-api-schema
# Requires outbound auth (oauth or api-key)
agentcore add gateway-target \
--type open-api-schema \
--name PetStoreAPI \
--schema specs/petstore.json \
--gateway my-gateway \
--outbound-auth oauth \
--credential-name MyOAuth
4. smithy-model
# IAM role auth exclusively โ no outbound auth flags needed
agentcore add gateway-target \
--type smithy-model \
--name MyService \
--schema models/service.json \
--gateway my-gateway
5. lambda-function-arn
# IAM role auth exclusively
agentcore add gateway-target \
--type lambda-function-arn \
--name MyLambdaTools \
--lambda-arn arn:aws:lambda:us-east-1:123456789012:function:my-func \
--tool-schema-file tools.json \
--gateway my-gateway
Auth Matrix
| Target Type | Outbound Auth Options |
|---|---|
mcp-server | oauth or none |
api-gateway | api-key or none |
open-api-schema | oauth or api-key (required) |
smithy-model | IAM role only (no flag needed) |
lambda-function-arn | IAM role only (no flag needed) |
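The matrix above can be captured as data with a small validity check. This is purely illustrative; the CLI performs its own validation:

```python
# The outbound-auth matrix from the table above, as a lookup.
# "iam" stands in for the implicit IAM-role auth of Lambda/Smithy targets.
OUTBOUND_AUTH = {
    "mcp-server": {"oauth", "none"},
    "api-gateway": {"api-key", "none"},
    "open-api-schema": {"oauth", "api-key"},   # required - "none" is not allowed
    "smithy-model": {"iam"},                   # IAM role only, implicit
    "lambda-function-arn": {"iam"},            # IAM role only, implicit
}

def outbound_auth_valid(target_type: str, auth: str) -> bool:
    """Check whether an outbound auth choice is legal for a target type."""
    return auth in OUTBOUND_AUTH.get(target_type, set())
```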
Inbound Auth
| Type | Description | When to Use |
|---|---|---|
NONE | No auth (open) | Dev/testing only |
AWS_IAM | SigV4 signed requests | AWS-native agents |
CUSTOM_JWT | OIDC/JWT validation | External IdPs, M2M |
Hands-On Labs
cd ~/myagentcore
agentcore add gateway --name my-gateway
cat agentcore/project.json | \
python3 -c "import sys,json; d=json.load(sys.stdin); print(json.dumps(d.get('agentCoreGateways',[]), indent=2))"
Questions: What authorizer type is configured by default? What env var does the agent use to find this gateway's URL?
agentcore add gateway-target --help
Observe: What are the five --type values? Which target types require --schema? Which support --outbound-auth oauth?
Run agentcore deploy -y after adding the gateway in Lab 7.1.
cd ~/myagentcore
agentcore fetch access --name my-gateway --type gateway --json
What URL does the gateway expose? What auth method is in use?
cd ~/myagentcore
agentcore status --type gateway
agentcore invoke "What tools do you have available?" --stream
Knowledge Check
An open-api-schema gateway target requires what for outbound authentication?
a) IAM role only
b) Either oauth or api-key - outbound auth is required
c) Only oauth is supported for OpenAPI targets
Answer: b. open-api-schema targets require outbound auth (oauth or api-key). Unlike mcp-server, which can use none, OpenAPI schema targets must have outbound auth configured.

Which target type connects an existing Lambda function to a gateway by its ARN?
a) mcp-server
b) api-gateway
c) smithy-model
d) lambda-function-arn
Answer: d. lambda-function-arn is the target type for connecting to an existing Lambda function by its ARN. You also need --tool-schema-file.

With a lambda-function-arn target, how does the gateway authenticate with the Lambda?
a) OAuth client credentials flow
b) IAM role (automatic, no configuration needed)
c) Bearer token from CUSTOM_JWT
Answer: b. The gateway's IAM role needs lambda:InvokeFunction permission. No outbound auth flags needed.

You create a gateway with --authorizer-type CUSTOM_JWT and provide --client-id and --client-secret. What does the CLI create automatically?
a) A managed OAuth credential that the agent uses to get Bearer tokens at runtime
b) An API Gateway authorizer
c) An IAM role for the gateway
Answer: a. When you supply --client-id and --client-secret, the CLI creates a managed OAuth credential in AgentCore Identity (Secrets Manager). The agent's generated code uses this to obtain Bearer tokens automatically.

Which env var gives the agent a gateway's URL at runtime?
a) GATEWAY_URL
b) AGENTCORE_GATEWAY_URL
c) AGENTCORE_GATEWAY_<NAME>_URL (where NAME is uppercased)
d) MCP_SERVER_URL
Answer: c. The pattern is AGENTCORE_GATEWAY_<UPPERCASENAME>_URL. So a gateway named my-gateway → AGENTCORE_GATEWAY_MY_GATEWAY_URL.
Key Takeaways
- Gateway = MCP-compatible proxy between agent and tools
- Create gateway BEFORE creating agents - new agents auto-wire gateway client code
- Five target types: mcp-server, api-gateway, open-api-schema, smithy-model, lambda-function-arn
- Lambda and Smithy targets use IAM role auth exclusively (no flags needed)
- open-api-schema requires outbound auth; mcp-server and api-gateway support none
- Inbound auth: NONE (dev), AWS_IAM (native), CUSTOM_JWT (external IdPs)
- Gateway URL injected as AGENTCORE_GATEWAY_<NAME>_URL env var at runtime
- --exception-level valid values: NONE or ALL (not DEBUG)
Module 08 - Credentials & Identity
- Understand AgentCore's identity model and how credentials flow
- Add API key and OAuth credential providers
- Understand inbound vs outbound credentials
- Know where secrets are stored locally vs in AWS
The Identity Problem AgentCore Solves
AgentCore Identity uses AWS Secrets Manager under the hood to store credentials securely. Your agent code never sees the raw secret โ it's injected at runtime through the AgentCore Identity service.
Two Types of Credentials
1. API Key Credentials
agentcore add credential \
--name OpenAI \
--api-key sk-...
# Or just name it โ CLI will prompt for the key
agentcore add credential --name MyTool
Stored in: local dev → .env.local as AGENTCORE_CREDENTIAL_<PROJECT><NAME>=<value> | deployed → AgentCore Identity (Secrets Manager)
2. OAuth Credentials
agentcore add credential \
--name MyOAuthProvider \
--type oauth \
--discovery-url https://idp.example.com/.well-known/openid-configuration \
--client-id my-client-id \
--client-secret my-client-secret \
--scopes read,write
Credential Configuration in project.json
{
"credentials": [
{ "authorizerType": "ApiKeyCredentialProvider", "name": "OpenAI" },
{
"authorizerType": "OAuthCredentialProvider",
"name": "MyOAuthProvider",
"discoveryUrl": "https://idp.example.com/.well-known/openid-configuration",
"scopes": ["read", "write"]
}
]
}
Secrets never appear in project.json - only the configuration. Secrets live in .env.local (local) and Secrets Manager (deployed).
The .env.local File
# API key credential named "OpenAI" in project "MyProject"
AGENTCORE_CREDENTIAL_MYPROJECTOPENAI=sk-...
# OAuth credential named "MyOAuth" in project "MyProject"
AGENTCORE_CREDENTIAL_MYPROJECTMYOAUTH_CLIENT_ID=my-client-id
AGENTCORE_CREDENTIAL_MYPROJECTMYOAUTH_CLIENT_SECRET=my-secret
Pattern: AGENTCORE_CREDENTIAL_<PROJECTNAME><CREDNAME> (all uppercase, no separators).
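A sketch of the key derivation, based only on the examples above (the helper name is illustrative):

```python
# Illustrative derivation of the .env.local key name; not the CLI's own code.
def credential_env_key(project: str, cred: str) -> str:
    # Project and credential names are uppercased and concatenated with no separator.
    return f"AGENTCORE_CREDENTIAL_{project.upper()}{cred.upper()}"
```

For OAuth credentials, the examples above show `_CLIENT_ID` and `_CLIENT_SECRET` suffixes appended to the same base key.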
Credentials for Gateway Outbound Auth
# Named credential (reusable across multiple targets)
agentcore add credential \
--name MyOAuthProvider \
--type oauth \
--discovery-url https://auth.example.com/.well-known/openid-configuration \
--client-id my-client \
--client-secret my-secret
agentcore add gateway-target \
--type mcp-server \
--name secure-tools \
--endpoint https://api.example.com/mcp \
--gateway my-gateway \
--outbound-auth oauth \
--credential-name MyOAuthProvider
# Inline (single-target, not reusable)
agentcore add gateway-target \
--type mcp-server \
--name secure-tools \
--endpoint https://api.example.com/mcp \
--gateway my-gateway \
--outbound-auth oauth \
--oauth-client-id my-client \
--oauth-client-secret my-secret \
--oauth-discovery-url https://auth.example.com/.well-known/openid-configuration
Inbound vs Outbound Authentication
| Direction | What It Is | Config Location |
|---|---|---|
| Inbound | How callers authenticate TO your agent or gateway | authorizerType on agent/gateway |
| Outbound | How your agent/gateway authenticates TO upstream services | outboundAuth on gateway targets |
Inbound options: AWS_IAM (default for agents), CUSTOM_JWT, NONE (gateways only, dev)
Outbound options: none, api-key, oauth, IAM role (implicit for Lambda/Smithy)
CUSTOM_JWT Deep Dive
agentcore add agent \
--name MyAgent \
--framework Strands \
--authorizer-type CUSTOM_JWT \
--discovery-url https://idp.example.com/.well-known/openid-configuration \
--allowed-audience my-api \
--allowed-clients my-client-id \
--allowed-scopes read write \
--client-id agent-bearer-client \
--client-secret agent-bearer-secret \
--custom-claims '[{"claim": "tenant_id", "value": "acme-corp"}]'
Key rotation: (1) agentcore add credential --name MyAPI --api-key <new-key> updates .env.local. (2) agentcore deploy pushes the new key to Secrets Manager. (3) The next agent invocation picks up the new secret automatically. Zero-downtime rotation.
Hands-On Labs
cd ~/myagentcore
cat agentcore/project.json | \
python3 -c "import sys,json; d=json.load(sys.stdin); print(json.dumps(d.get('credentials',[]), indent=2))"
Questions: What credentials are defined? Are they API key or OAuth type? Where is a non-Bedrock model provider's credential stored?
cd ~/myagentcore
ls -la agentcore/.env.local 2>/dev/null && \
grep -o 'AGENTCORE_[A-Z_]*' agentcore/.env.local 2>/dev/null || \
echo "No .env.local yet"
What env var names follow the AGENTCORE_CREDENTIAL_<PROJECTNAME><CREDNAME> pattern?
Run agentcore deploy -y before this lab.
cd ~/myagentcore
agentcore status --type credential --json
Knowledge Check
Your project is named MyProject and you have a credential named WeatherAPI. What is the key name in .env.local?
a) WEATHER_API_KEY
b) AGENTCORE_CREDENTIAL_MYPROJECTWEATHERAPI
c) AGENTCORE_MYPROJECT_WEATHERAPI
d) CREDENTIAL_WEATHERAPI
Answer: b. The pattern is AGENTCORE_CREDENTIAL_<PROJECTNAME><CREDNAME>, all uppercase, no separators.

What is the difference between inbound and outbound authentication?
a) Inbound = how callers authenticate to the gateway; outbound = how the gateway authenticates to targets
b) They always use different auth methods (inbound = JWT, outbound = API key)
c) They are the same thing, just different names
Answer: a.

A lambda-function-arn gateway target uses what outbound authentication?
a) x-api-key header
b) OAuth client credentials
c) IAM role (automatic, no configuration)
d) No authentication - Lambda functions are always public
Answer: c. The gateway's IAM role needs lambda:InvokeFunction permission.

What is the advantage of a named credential created with agentcore add credential instead of inline?
a) Named credentials can be reused across multiple gateway targets
b) Separate credentials support more auth types
c) There is no advantage
Answer: a. Named credentials can be referenced by any number of targets via --credential-name. Inline credentials create a one-off credential tied to that specific target.

Secrets are stored in project.json. True or false?
a) True
b) False - project.json stores config only; secrets are in .env.local (local) and Secrets Manager (deployed)
Answer: b. project.json contains ONLY configuration (names, URLs, types). Actual secrets live in .env.local (local dev) and AgentCore Identity (Secrets Manager) for deployed environments. You can safely commit project.json.
Key Takeaways
- Two credential types: ApiKeyCredentialProvider (string secret) and OAuthCredentialProvider (client credentials)
- Secrets are NEVER in project.json - they're in .env.local (local) and Secrets Manager (deployed)
- Local env var pattern: AGENTCORE_CREDENTIAL_<PROJECTNAME><CREDNAME>
- Inbound auth = who can call your resource; outbound auth = how your resource calls upstream
- Named credentials are reusable; inline credentials are single-target
- Lambda and Smithy targets use IAM role auth exclusively
- CUSTOM_JWT enables external IdPs, M2M flows, non-AWS callers
Module 09 - Observability: Logs & Traces
- Stream and search agent runtime logs using agentcore logs
- List and download traces using agentcore traces
- Understand the structure of a trace and what it contains
- Debug common agent issues using logs and traces
- Use JSON output to build observability workflows
The agentcore logs Command
Stream or search agent runtime logs from CloudWatch.
# Stream logs in real-time (follow mode โ Ctrl+C to stop)
agentcore logs
# Target a specific runtime
agentcore logs --runtime MyAgent
# Search historical logs (last 1 hour for errors)
agentcore logs --since 1h --level error
# Search a time range
agentcore logs --since 2d --until 1d --query "timeout"
# JSON Lines output (one JSON object per line)
agentcore logs --json
# Limit number of lines
agentcore logs --since 1h -n 100
Time Format Options
Both --since and --until accept relative (30m, 1h, 2d) or ISO 8601 (2026-05-13T10:00:00Z) timestamps. With no flags, the command streams in real-time.
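The relative forms can be sketched with a small parser. This is an illustration of the accepted formats, not the CLI's implementation:

```python
# Illustrative parser for --since/--until values; not the CLI's own code.
from datetime import datetime, timedelta, timezone

UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def since_to_datetime(spec: str, now: datetime) -> datetime:
    """Resolve a relative spec (30m, 1h, 2d) or an ISO 8601 timestamp."""
    unit = spec[-1]
    if unit in UNITS:
        return now - timedelta(**{UNITS[unit]: int(spec[:-1])})
    # Otherwise treat it as ISO 8601 (accept a trailing Z for UTC)
    return datetime.fromisoformat(spec.replace("Z", "+00:00"))
```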
Log Levels
| Level | Use | Flag |
|---|---|---|
debug | Verbose internal detail | --level debug |
info | Normal operations | --level info |
warn | Warnings | --level warn |
error | Errors only | --level error |
All logs Flags
| Flag | Description |
|---|---|
--runtime <name> | Target specific runtime |
--since <time> | Start time (defaults to 1h ago in search mode) |
--until <time> | End time (defaults to now) |
--level <level> | Filter: error, warn, info, debug |
-n, --limit <n> | Max lines to return |
--query <text> | Server-side text filter |
--json | JSON Lines output |
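Because `--json` emits one JSON object per line, log output can be post-processed line-by-line without loading everything at once. A sketch (the `level` field name is assumed for illustration):

```python
# Count log levels from JSON Lines input, one object per line.
# Field names ("level") are assumed, not guaranteed by the CLI.
import json
from collections import Counter
from typing import Iterable

def count_levels(lines: Iterable[str]) -> Counter:
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines defensively
        counts[json.loads(line).get("level", "unknown")] += 1
    return counts
```

This mirrors the `jq '.level' | sort | uniq -c` pipeline shown in the labs below.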
Traces: agentcore traces list / get
Traces capture the full reasoning chain for one agent invocation. They take 5-10 minutes to appear after an invocation - processed asynchronously.
# List recent traces
agentcore traces list
agentcore traces list --runtime MyAgent --limit 50
agentcore traces list --since 1h
# Download a full trace by ID
agentcore traces get <traceId>
agentcore traces get abc123 --output ./debug-trace.json
agentcore traces get abc123 --runtime MyAgent
What's In a Trace?
| Field | Description |
|---|---|
| Input | The user's prompt |
| Thinking | The LLM's reasoning steps |
| Tool calls | Which tools were invoked, with input arguments |
| Tool results | What each tool returned |
| Output | The final agent response |
| Timing | Latency at each step |
| Session/user IDs | For multi-turn context correlation |
Observability Architecture
Debugging Patterns
# Pattern 1: Find recent errors
agentcore logs --since 1h --level error
# Pattern 2: Debug a specific invocation
agentcore invoke "Hello" --json > response.json
agentcore traces list --runtime MyAgent --limit 5
agentcore traces get <traceId> --output ./debug.json
# Pattern 3: Real-time monitoring while testing
# Terminal 1: agentcore logs --runtime MyAgent
# Terminal 2: agentcore invoke "Test message"
# Pattern 4: Search for a specific error
agentcore logs --since 24h --level error --query "timeout"
# Pattern 5: JSON pipeline for custom analysis
agentcore logs --since 1h --level error --json | \
jq -r '[.timestamp, .message] | @tsv'
Quick Troubleshooting Reference
| Symptom | First Command | What to Look For |
|---|---|---|
| Agent returns wrong answer | agentcore traces get <id> | Tool call inputs โ was context correct? |
| Agent ignores a tool | agentcore traces list | Available tools list in trace; gateway reachable? |
| Invocation timeout | agentcore logs --level error --since 1h | "timeout" or "deadline exceeded" messages |
| No traces appearing | agentcore status --runtime MyAgent | Is state "deployed"? Wait 5-10 min after invoke |
| Gateway auth failure | agentcore logs --since 1h --query "401" | 401/403 responses from gateway targets |
| Memory not persisting | agentcore logs --since 1h --query "memory" | Memory errors; check MEMORY_ID env var |
Hands-On Labs
Run agentcore deploy -y before this lab.
cd ~/myagentcore
# Stream logs in real-time (Ctrl+C to stop)
agentcore logs --runtime myagentcore
# Or search recent logs
agentcore logs --runtime myagentcore --since 30m
What do you observe? Are there any errors? What log level are most messages?
Run agentcore deploy -y before this lab.
cd ~/myagentcore
# Step 1: Invoke the agent
agentcore invoke "What tools do you have available?" --json
# Step 2: Wait 5-10 minutes for traces to propagate
# Then list traces
agentcore traces list --runtime myagentcore --limit 5
# Step 3: If a trace ID shows up, download it
# agentcore traces get <traceId> --output ./my-trace.json
How many steps appear in the trace? Can you see the tool selection reasoning?
cd ~/myagentcore
# Search for any errors in the last 24 hours
agentcore logs --runtime myagentcore --since 24h --level error
# If no errors, search for info messages
agentcore logs --runtime myagentcore --since 1h --level info -n 20
# JSON pipeline: count log levels
agentcore logs --since 1h --json 2>/dev/null | jq '.level' | sort | uniq -c
cd ~/myagentcore
# What's the current deployment state of all resources?
agentcore status --json | python3 -c "
import sys, json
data = json.load(sys.stdin)
for r in data.get('resources', []):
print(f\"{r['resourceType']:20} {r['name']:30} {r['state']}\")
"
Are all resources in deployed state? Any in error or updating?
Knowledge Check
How long after an invocation do traces typically appear?
a) About 5-10 minutes - traces are processed asynchronously
b) Up to 1 hour
c) Until the next day - traces are batched daily
Answer: a. Traces are processed asynchronously; allow roughly 5-10 minutes after an invocation.

Which command searches the last 2 hours of logs for errors?
a) agentcore logs --time 2h --filter error
b) agentcore logs --since 2h --level error
c) agentcore logs --last 2h --severity error
d) agentcore logs --hours 2 --level error
Answer: b. --since 2h and --level error is the correct syntax. The --since flag accepts relative times like 30m, 1h, 2d.

How do you download a full trace to a file?
a) agentcore traces save <traceId> trace.json
b) agentcore traces get <traceId> --output trace.json
c) agentcore traces export <traceId> --file trace.json
d) agentcore logs --trace <traceId> > trace.json
Answer: b. agentcore traces get <traceId> --output <path> downloads and saves the full trace. The --output flag writes to a file rather than printing to stdout.

What format does agentcore logs --json produce?
a) JSON Lines (one JSON object per line)
b) Pretty-printed JSON to stdout
c) A JSON file written to the current directory
Answer: a. --json outputs JSON Lines format - one valid JSON object per line. This is designed for piping through tools like jq and log processors that handle streaming line-by-line.

How do you stream logs for the DataProcessor runtime in real time?
a) agentcore logs --runtime DataProcessor --follow
b) agentcore logs --runtime DataProcessor (streaming is the default with no time flags)
c) agentcore logs --runtime DataProcessor --stream
d) agentcore logs --runtime DataProcessor --tail
Answer: b. agentcore logs with no time flags defaults to follow/streaming mode. Adding --runtime scopes it to a specific agent. Ctrl+C to stop.
Key Takeaways
- agentcore logs - streaming or historical search; --level error for filtering; --since 1h for time range
- agentcore traces list - see recent invocation traces (allow 5-10 min after invoke)
- agentcore traces get <id> --output file.json - full trace for detailed debugging or sharing
- --json outputs JSON Lines - one object per line, perfect for jq pipelines
- agentcore logs evals - separate command for evaluation/monitoring logs
- Debugging workflow: invoke → wait 5-10 min → list traces → get trace → examine tool calls
- agentcore status --json - check deployment state of all resources programmatically
Module 10 - Evaluations
- Create custom LLM-as-a-Judge evaluators with rating scales and placeholders
- Run on-demand evaluations against historical agent traces
- Set up continuous online evaluations for live traffic sampling
- Use builtin evaluators like Builtin.Faithfulness
- Interpret evaluation scores and build CI/CD quality gates
Four Evaluation Concepts
| Concept | Description |
|---|---|
| Evaluator | A custom LLM judge: model + instructions + rating scale |
| On-demand eval | One-off run against historical traces |
| Online eval | Continuous sampling and scoring of live traffic |
| Builtin evaluator | Pre-built evaluators (e.g. Builtin.Faithfulness) |
Three Evaluation Levels
| Level | What It Evaluates |
|---|---|
SESSION | Overall quality across an entire conversation |
TRACE | Per-turn accuracy of individual agent responses |
TOOL_CALL | Correctness of individual tool selections |
Creating Evaluators
# Interactive wizard
agentcore add evaluator
# Non-interactive
agentcore add evaluator \
--name ResponseQuality \
--level SESSION \
--model us.anthropic.claude-sonnet-4-5-20250514-v1:0 \
--instructions "Evaluate the agent response quality. Context: {context}" \
--rating-scale 1-5-quality

Instruction Placeholders
Your --instructions must include placeholders appropriate for the level:
| Placeholder | Available At | What It Contains |
|---|---|---|
| {context} | SESSION, TRACE, TOOL_CALL | Full conversation history |
| {assistant_turn} | TRACE only | The specific assistant response being judged |
| {available_tools} | SESSION, TOOL_CALL | List of available tools |
| {tool_turn} | TOOL_CALL only | The specific tool call + result |
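Mechanically, these placeholders behave like ordinary string-formatting fields. A hypothetical Python sketch (the fill_instructions helper and the sample values are ours; AgentCore performs the real substitution server-side):

```python
def fill_instructions(template: str, **values: str) -> str:
    """Substitute {placeholder} fields; a missing value raises KeyError."""
    return template.format(**values)

# TRACE-level instructions may use {context} and {assistant_turn}
trace_template = (
    "Rate the accuracy of this response. "
    "Context: {context}. Response: {assistant_turn}"
)

prompt = fill_instructions(
    trace_template,
    context="User asked about S3 pricing; agent searched AWS docs.",
    assistant_turn="Here is a summary of current S3 pricing tiers.",
)
print(prompt)
```

This is why using a TRACE-only placeholder such as {assistant_turn} in a SESSION-level evaluator fails: there is no value to substitute.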
# SESSION – did the agent complete the task?
"Evaluate whether the agent fulfilled the user's request. Context: {context}"

# TRACE – per-turn accuracy
"Rate the accuracy of this response. Context: {context}. Response: {assistant_turn}"

# TOOL_CALL – did it pick the right tool?
"Evaluate tool selection. Context: {context}. Tool call: {tool_turn}"

Rating Scale Presets
| Preset | Type | Values |
|---|---|---|
| 1-5-quality | Numerical | Poor(1), Fair(2), Good(3), Very Good(4), Excellent(5) |
| 1-3-simple | Numerical | Low(1), Medium(2), High(3) |
| pass-fail | Categorical | Pass, Fail |
| good-neutral-bad | Categorical | Good, Neutral, Bad |
Score Normalization
Scores normalize to 0.0–1.0 regardless of scale. AgentCore uses value/max:

- 1-5 scale: score 4 → 4/5 = 0.8
- pass-fail: Pass → 1.0, Fail → 0.0
- 1-3 scale: score 2 → 2/3 ≈ 0.67
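The value/max rule is simple enough to verify by hand; a small sketch (the function is illustrative, not part of the CLI):

```python
def normalize_score(value: float, max_value: float) -> float:
    """Map a raw rating onto 0.0-1.0 using the documented value/max rule."""
    if max_value <= 0:
        raise ValueError("max_value must be positive")
    return value / max_value

print(normalize_score(4, 5))  # 1-5 scale: 0.8
print(normalize_score(1, 1))  # pass-fail Pass: 1.0
print(normalize_score(2, 3))  # 1-3 scale: ~0.67
```

Normalization is what lets you compare scores (and set quality-gate thresholds) across evaluators that use different rating scales.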
Running Evaluations
On-Demand Evaluations
# Evaluate a specific agent against recent traces
agentcore run eval \
--runtime MyAgent \
--evaluator ResponseQuality \
--days 7
# Multiple evaluators
agentcore run eval \
--runtime MyAgent \
--evaluator ResponseQuality Builtin.Faithfulness \
--days 14
# Target a specific session
agentcore run eval \
--runtime MyAgent \
--evaluator ResponseQuality \
--session-id abc123 \
--days 7
# Save results to file
agentcore run eval --runtime MyAgent --evaluator ResponseQuality --days 7 --output ./eval-result.json
# View eval history
agentcore evals history
agentcore evals history --runtime MyAgent --limit 5
agentcore evals history --json

Eval run results are saved under agentcore/.cli/eval-runs/.

Online Evaluations (Continuous Monitoring)
# Set up continuous evaluation (samples X% of live requests)
agentcore add online-eval \
--name QualityMonitor \
--runtime MyAgent \
--evaluator ResponseQuality Builtin.Faithfulness \
--sampling-rate 10 # evaluate 10% of requests
# Enable immediately on deploy
agentcore add online-eval \
--name QualityMonitor \
--runtime MyAgent \
--evaluator ResponseQuality \
--sampling-rate 5 \
--enable-on-create
# Pause and resume
agentcore pause online-eval QualityMonitor
agentcore resume online-eval QualityMonitor

| Sampling Rate | Use Case |
|---|---|
| 1–5% | Production monitoring (cost-sensitive) |
| 10–25% | Development and staging |
| 100% | Full coverage during testing |
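To build intuition for what a sampling rate means operationally, here is a simulation of per-request sampling. The simulation is our illustration; AgentCore's actual selection logic is internal to the service:

```python
import random

def should_evaluate(sampling_rate_percent: float, rng: random.Random) -> bool:
    """True for roughly sampling_rate_percent% of requests."""
    return rng.random() * 100 < sampling_rate_percent

rng = random.Random(42)  # seeded so the simulation is reproducible
sampled = sum(should_evaluate(10, rng) for _ in range(10_000))
print(f"{sampled / 10_000:.1%} of requests sampled")
```

At 10%, each request has an independent 1-in-10 chance of being scored, which is why low rates are cheap in production but give a noisier quality signal.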
Builtin Evaluators
# Use a builtin โ no custom definition needed
agentcore run eval --runtime MyAgent --evaluator Builtin.Faithfulness
# Mix builtins with custom evaluators
agentcore run eval --runtime MyAgent --evaluator ResponseQuality Builtin.Faithfulness

Removal Constraint
# Wrong order – will fail if referenced by an online eval config
agentcore remove evaluator --name ResponseQuality

# Right order: remove the online eval config first
agentcore remove online-eval --name QualityMonitor
agentcore remove evaluator --name ResponseQuality

CI/CD Quality Gate Pattern
# Run eval and fail pipeline if score is below threshold
result=$(agentcore run eval --runtime MyAgent --evaluator ResponseQuality --days 1 --json)
score=$(echo "$result" | jq '.run.results[0].aggregateScore')
if (( $(echo "$score < 0.7" | bc -l) )); then
echo "Quality gate FAILED: score $score < 0.7"
exit 1
fi
echo "Quality gate PASSED: score $score"

Full Eval Setup Workflow
# 1. Add a custom evaluator
agentcore add evaluator \
--name ResponseQuality \
--level SESSION \
--model us.anthropic.claude-sonnet-4-5-20250514-v1:0 \
--instructions "Evaluate whether the agent fulfilled the user's request. Context: {context}"
# 2. Deploy (creates the evaluator in AWS)
agentcore deploy -y
# 3. Generate traces
agentcore invoke "Hello, what can you do?"
agentcore invoke "Search AWS docs for S3 pricing"
# 4. Wait ~10 min, then run on-demand eval
agentcore run eval --runtime MyAgent --evaluator ResponseQuality --days 1
# 5. Review results
agentcore evals history --runtime MyAgent
# 6. Set up continuous monitoring
agentcore add online-eval \
--name QualityMonitor \
--runtime MyAgent \
--evaluator ResponseQuality \
--sampling-rate 10
agentcore deploy -y

Hands-On Labs
cd ~/myagentcore
cat agentcore/project.json | \
python3 -c "
import sys, json
d = json.load(sys.stdin)
evals = d.get('evaluators', [])
online = d.get('onlineEvalConfigs', [])
print('Evaluators:', json.dumps(evals, indent=2))
print('Online eval configs:', json.dumps(online, indent=2))
"

Are any evaluators configured? Are there online eval configs?
cd ~/myagentcore
agentcore evals history --json 2>/dev/null || echo "No eval history yet"

Without running any commands, design an evaluator for the myagentcore agent:
- Name: DocSearchQuality
- Level: SESSION
- What instructions would you write? What placeholder(s) are needed?
- What rating scale makes sense?
Write out the agentcore add evaluator command you would use, then compare to:
agentcore add evaluator \
--name DocSearchQuality \
--level SESSION \
--model us.anthropic.claude-sonnet-4-5-20250514-v1:0 \
--instructions "Evaluate whether the agent answered the user's question accurately and completely using the available documentation tools. Context: {context}" \
--rating-scale 1-5-quality

Prerequisite: run agentcore deploy -y before this lab.

cd ~/myagentcore
# Generate traces to evaluate against later
agentcore invoke "What is Amazon Bedrock AgentCore?" --json
agentcore invoke "What are the differences between CodeZip and Container build types?" --json
agentcore invoke "How does the flat resource model work in AgentCore?" --json

These invocations generate traces. After deploying an evaluator, you can run agentcore run eval against them.
Knowledge Check
Q: Which instruction placeholder is available only at TRACE level?

a) {context}
b) {available_tools}
c) {assistant_turn}
d) {tool_turn}

Answer: c) – {assistant_turn} is only available at TRACE level. It contains the specific assistant response being evaluated. At SESSION level you only have {context} and {available_tools}.

Q: On a 1-5 scale, what does a raw score of 4 normalize to?

b) 0.8
c) 4.0
d) 80

Answer: b) – value/max normalization: 4/5 = 0.8. This simple formula maps the score proportionally to the 0.0–1.0 range.

Q: How do you set up continuous evaluation that samples 5% of live traffic?

a) agentcore run eval --runtime MyAgent --sampling 5
b) agentcore add online-eval --runtime MyAgent --evaluator MyEval --sampling-rate 5
c) agentcore add evaluator --continuous --rate 5
d) agentcore add online-eval --runtime MyAgent --evaluator MyEval --sampling-rate 5

Answer: b) – agentcore add online-eval creates the config with --sampling-rate 5 for 5% sampling. After adding, run agentcore deploy to activate it. (b and d are identical here – the correct flag is --sampling-rate on add online-eval.)

Q: Why might agentcore remove evaluator fail?

b) The evaluator is referenced by an active online eval config
c) Evaluators cannot be removed once deployed
d) The evaluator has history that must be archived first

Answer: b) – you must remove any onlineEvalConfig that references the evaluator before you can remove the evaluator itself. This prevents broken references in deployed configurations.

Q: How do you run the builtin Faithfulness evaluator?

a) agentcore run eval --runtime MyAgent --builtin Faithfulness
b) agentcore run eval --runtime MyAgent --evaluator Builtin.Faithfulness
c) agentcore run eval --runtime MyAgent --evaluator-type builtin --name Faithfulness
d) You must first create a builtin evaluator with agentcore add evaluator --type builtin

Answer: b) – use the Builtin.<Name> prefix in the --evaluator flag. No setup required – builtins are pre-deployed by AgentCore. You can mix builtins with custom evaluators.

Key Takeaways
- LLM-as-a-Judge: a separate LLM grades your agent's responses against your rubric
- Three evaluation levels: SESSION (whole conversation), TRACE (per turn), TOOL_CALL (per tool use)
- Placeholders inject real data: {context}, {assistant_turn}, {tool_turn}, {available_tools}
- Scores normalize to 0.0–1.0: AgentCore uses value/max (e.g. 4 on a 1-5 scale = 0.8)
- agentcore run eval = on-demand; agentcore add online-eval = continuous sampling
- Builtin.Faithfulness and other builtins work without custom definitions
- Remove online eval configs before removing evaluators (referential integrity)
- CI/CD gate: run eval โ check JSON score โ fail pipeline if below threshold
Module 11 – Advanced Topics
- Understand and use Config Bundles for versioned runtime configuration
- Run batch evaluations across all agent sessions at scale
- Use the run recommendation command to optimize agent prompts and tools
- Configure Cedar policy engines for fine-grained tool authorization
- Know the import command and multi-agent patterns (A2A/MCP protocols)
Config Bundles [preview]
Config bundles let you version, branch, and A/B test your agent's runtime configuration (prompts, model settings, tool descriptions) without redeploying code.
# Add a config bundle to your project
agentcore add config-bundle --name MyBundle
# List versions of a bundle
agentcore config-bundle versions MyBundle
# or: agentcore cb versions MyBundle
# Diff two versions
agentcore cb diff MyBundle --v1 1 --v2 2
# Create a new branch (for A/B testing)
agentcore cb create-branch MyBundle --branch experiment-a
# Promote a version (make it the primary)
agentcore promote config-bundle MyBundle --version 3

A/B Testing with Config Bundles
# View A/B test details and results
agentcore ab-test <testName>
# List A/B tests (use status, not list)
agentcore status --type ab-test

A/B tests route traffic between two bundle versions (or branches) to compare quality scores.
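One common way such a split can be implemented is deterministic hashing of the session id, so a given session always sees the same variant. This sketch is our illustration of the general technique, not AgentCore's documented routing:

```python
import hashlib

def pick_variant(session_id: str, percent_to_b: int) -> str:
    """Route percent_to_b% of sessions to variant B, the rest to A."""
    bucket = int(hashlib.sha256(session_id.encode()).hexdigest(), 16) % 100
    return "B" if bucket < percent_to_b else "A"

# The same session always lands on the same variant:
print(pick_variant("session-123", 20))
print(pick_variant("session-123", 20))
```

Sticky assignment matters for A/B testing conversational agents: switching a session between prompt variants mid-conversation would contaminate the quality comparison.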
Creating an Agent with Config Bundle Support
agentcore add agent \
--name MyAgent \
--framework Strands \
--model-provider Bedrock \
--with-config-bundle

The --with-config-bundle flag auto-wires config bundle support. A configBundleName field is added to the runtime spec in project.json:
{
"runtimes": [
{
"name": "MyAgent",
"configBundleName": "MyBundle"
}
]
}

Batch Evaluations [preview]
While on-demand evals run against a time window of traces, batch evaluations run across all sessions – useful for comprehensive quality reporting and periodic audits.
# Run batch evaluation
agentcore run batch-evaluation \
--runtime MyAgent \
--evaluator ResponseQuality \
--days 30
# Stop a running batch evaluation
agentcore stop batch-evaluation <jobId>
# View batch eval history
agentcore evals history --runtime MyAgent

| Eval Type | Use Case | Command |
|---|---|---|
| On-demand | Debug a time window of traces | agentcore run eval |
| Online | Continuous live traffic sampling | agentcore add online-eval |
| Batch | Periodic audits across all sessions | agentcore run batch-evaluation |
Recommendations [preview]
The run recommendation command uses AI to suggest improvements to your agent's prompts and tool descriptions based on evaluation results.
# Run recommendation analysis
agentcore run recommendation \
--runtime MyAgent \
--evaluator ResponseQuality \
--days 7
# View recommendation history
agentcore recommendations
agentcore recommendations --runtime MyAgent

The output suggests specific changes: "Rewrite your system prompt to include X" or "The tool description for Y should be clearer about Z." This is the AI-powered optimization loop – eval reveals weak spots, recommendations fix them.
Cedar Policy Engines
AgentCore integrates with Cedar for fine-grained authorization on gateway tool calls. Cedar is AWS's open-source authorization policy language, also used by Amazon Verified Permissions.
# Add a policy engine (attached to a gateway)
agentcore add policy-engine \
--name MyPolicyEngine \
--gateway my-gateway \
--mode ENFORCE # or LOG_ONLY
# Add a Cedar policy to the engine
agentcore add policy \
--name AllowReadonly \
--engine MyPolicyEngine \
--policy-file ./policies/readonly.cedar
# Create gateway with policy engine
agentcore add gateway \
--name my-gateway \
--policy-engine MyPolicyEngine \
--policy-engine-mode ENFORCE

| Mode | Behavior | When to Use |
|---|---|---|
| LOG_ONLY | Logs policy decisions but does NOT block unauthorized calls | Initial setup – observe what would be blocked |
| ENFORCE | Actively denies unauthorized tool calls | Production – after tuning policies in LOG_ONLY |
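For reference, the --policy-file flag shown earlier points at a Cedar source file. A hedged sketch of what a read-only policy such as ./policies/readonly.cedar might contain (the action name and the resource attribute are hypothetical; consult your gateway's Cedar schema for the real entity types):

```cedar
// Hypothetical: permit only tool calls whose names look read-only.
permit (
  principal,
  action == Action::"InvokeTool",
  resource
)
when { resource.toolName like "get_*" };
```

Cedar's default-deny model means anything not matched by a permit statement is blocked once the engine runs in ENFORCE mode.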
Start in LOG_ONLY first. Observe what Cedar would have blocked. Tune policies. Then switch to ENFORCE. This avoids production outages from over-blocking.

The import Command
Import resources from an existing Bedrock AgentCore Starter Toolkit project, or import a deployed runtime/memory/gateway into the current CLI project model.
# Import from starter toolkit project
agentcore import --source ./old-toolkit-project
# Import a specific runtime by ARN
agentcore import --runtime-arn arn:aws:bedrock-agentcore:...
# Import a memory by ARN
agentcore import --memory-arn arn:aws:bedrock-agentcore:...
# Import a gateway
agentcore import --gateway-arn arn:aws:bedrock-agentcore:...

import updates project.json and deployed-state.json to reference the existing AWS resource – letting you adopt the CLI project model without destroying and recreating existing deployments.
Multi-Agent Patterns
A2A Protocol (Agent-to-Agent)
# Create an agent that exposes the A2A protocol
agentcore add agent \
--name WorkerAgent \
--framework Strands \
--protocol A2A

An agent with --protocol A2A exposes itself as an A2A-compatible endpoint. Other agents discover and invoke it as a tool. It's the "tool server" side of A2A.
MCP Protocol (Agent as Tool Server)
agentcore add agent \
--name SpecialistAgent \
--framework Strands \
--protocol MCP

This agent appears as a tool in any MCP client that connects to its endpoint – including Claude, other AgentCore agents, or any MCP-compatible client.
Container Builds & VPC Networking
Container Builds
# Create with container build
agentcore add agent \
--name MyAgent \
--build Container \
--framework Strands
# Customize the Dockerfile
cat app/MyAgent/Dockerfile
# Dev with container
agentcore dev # builds and runs Docker locally
agentcore dev --exec "pip list" # exec into local container

Container build flow: agentcore deploy → CodeBuild → Docker image (ARM64) → ECR → AgentCore runtime pulls from ECR.
VPC Networking
agentcore add agent \
--name PrivateAgent \
--framework Strands \
--network-mode VPC \
--subnets subnet-abc123,subnet-def456 \
--security-groups sg-abc123

For agents that need to access private resources (RDS, private APIs). The runtime runs inside your VPC and accesses private resources through the VPC's routing.
CI/CD Pipeline Pattern (Full)
#!/bin/bash
# 1. Validate config
agentcore validate || exit 1
# 2. Preview changes
agentcore deploy --diff
# 3. Deploy
agentcore deploy -y --json
# 4. Verify deployment
agentcore status --json | jq '.resources[] | select(.state != "deployed")' | \
grep -q '.' && { echo "Some resources not deployed!"; exit 1; }
# 5. Run smoke test
agentcore invoke "Health check" --json | jq -e '.response' || exit 1
# 6. Wait for traces
sleep 300 # 5 min
# 7. Quality gate
score=$(agentcore run eval --runtime MyAgent --evaluator QualityCheck --days 1 --json | \
jq '.run.results[0].aggregateScore')
if (( $(echo "$score < 0.7" | bc -l) )); then
echo "Quality gate failed: $score < 0.7"
exit 1
fi
echo "Pipeline passed! Score: $score"

Other Advanced Features
Telemetry
agentcore telemetry --status # Check current setting
agentcore telemetry --enable # Opt in to anonymous analytics
agentcore telemetry --disable # Opt out

Resource Tags for Cost Allocation
{
"name": "MyProject",
"tags": {
"Environment": "production",
"Team": "platform",
"CostCenter": "engineering"
},
"runtimes": [
{
"name": "MyAgent",
"tags": {
"Environment": "staging"
}
}
]
}

Tags flow through to CloudFormation resources and appear in Cost Explorer. Runtime-level tags override project-level tags for the same key.
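The override rule behaves like a plain dictionary merge; a quick sketch (the helper is ours, mirroring the documented precedence):

```python
def effective_tags(project_tags: dict, runtime_tags: dict) -> dict:
    """Project-level tags apply first; runtime-level tags win on the same key."""
    return {**project_tags, **runtime_tags}

project = {"Environment": "production", "Team": "platform", "CostCenter": "engineering"}
runtime = {"Environment": "staging"}

print(effective_tags(project, runtime))
# Environment comes from the runtime; Team and CostCenter fall through from the project.
```

So in the project.json above, MyAgent is tagged Environment=staging while still inheriting Team and CostCenter.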
Hands-On Labs
# Check what config-bundle commands are available
agentcore config-bundle --help
agentcore cb --help
# Check ab-test commands
agentcore ab-test --help

What operations are available on config bundles? Can you list bundles, diff versions, create branches?
agentcore run batch-evaluation --help
# And the recommendation command
agentcore run recommendation --help

What flags are required for batch evaluation? What does --days control?
agentcore add policy-engine --help
agentcore add policy --help

What modes are available? What policy format does it accept?
agentcore import --help

What can you import? What types of ARNs does it accept?
Prerequisite: run agentcore deploy -y before this lab.

cd ~/myagentcore
# Full config
cat agentcore/project.json | python3 -m json.tool
# What resources are deployed?
agentcore status
# Invoke the agent
agentcore invoke "Give me a summary of what Amazon Bedrock AgentCore is and what you can help me with" --stream

SME Challenge – without referring to the docs, explain:
- How does the agent discover the gateway URL at runtime?
- How does the flat resource model enable sharing memory across multiple agents?
- What would happen if you renamed the agent in project.json and redeployed?
- What is the difference between a harness project and an agent project?
- Why is cdk/ at the project root rather than inside agentcore/?
Knowledge Check
Q: What is the main benefit of config bundles?

b) You can version, branch, and A/B test prompts without redeploying agent code
c) Config bundles are required for Container build agents
d) Config bundles enable VPC networking

Answer: b)

Q: What does a policy engine in LOG_ONLY mode do?

b) Logs policy evaluation results but does not block unauthorized tool calls
c) Logs tool calls to S3 for audit purposes
d) Enforces policies and logs all decisions

Answer: b)

Q: What does creating an agent with --protocol A2A do?

b) It exposes itself as an A2A endpoint that other agents can call
c) It enables asynchronous invocation
d) It connects to agent registries automatically

Answer: b) – an agent with --protocol A2A exposes itself as an A2A-compatible endpoint. Other agents discover and invoke it. It's the "tool server" side of A2A. The orchestrator that calls it uses standard tool-calling.

Q: What does agentcore run recommendation produce?

b) AI-suggested improvements to your agent's prompts and tool descriptions based on eval results
c) A recommendation on which model provider to use
d) A list of recommended gateway targets to add

Answer: b) – run recommendation analyzes evaluation results (low scores, failure patterns) and suggests specific changes to system prompts, tool descriptions, or model parameters. It's an AI-powered optimization loop.

Q: How do you bring an existing deployed runtime into a CLI project?

a) agentcore create --type import
b) agentcore import --runtime-arn <arn>
c) agentcore add agent --type import --source arn:...
d) Manually edit project.json with the ARN

Answer: b) – agentcore import --runtime-arn <arn> imports an existing deployed runtime into the current project, updating project.json and deployed-state.json to reference the existing AWS resource without destroying it.

Key Takeaways
- Config bundles = versioned runtime config; enables A/B testing without code redeploys
- Batch evaluation = run evaluators across all sessions at scale (for periodic audits)
- run recommendation = AI suggestions to improve prompts/tools based on eval results
- Cedar policy engines = fine-grained tool authorization (LOG_ONLY first, then ENFORCE)
- import = adopt existing deployed resources into a CLI project without data loss
- A2A and MCP protocols = agents can be tools for other agents (multi-agent orchestration)
- Container builds = full control over runtime, ARM64, ECR-hosted
- VPC networking = private resource access within your network
- Tags = cost allocation and resource organization across CloudFormation stacks