The complete reference for managing Amazon Web Services from the command line. From S3 buckets to Lambda functions, all 200+ services at your fingertips.
The most common AWS CLI commands you will reach for every day. Keep this section bookmarked.
Upload a file to an S3 bucket
aws s3 cp file.txt s3://my-bucket/
Download a file from S3
aws s3 cp s3://my-bucket/file.txt ./
Sync a local directory to S3
aws s3 sync ./build s3://my-bucket/ \
--delete --exclude "*.tmp"
List running EC2 instances
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
--query "Reservations[].Instances[].[InstanceId,InstanceType,State.Name]" \
--output table
Invoke a Lambda function
aws lambda invoke \
--function-name my-func \
--payload '{"key":"value"}' \
output.json
Check current identity
aws sts get-caller-identity
Tail logs from a log group
aws logs tail /aws/lambda/my-func \
--follow --since 1h
Authenticate with AWS SSO
aws sso login --profile prod
aws sts get-caller-identity --profile prod
Deploy a stack from a template
aws cloudformation deploy \
--template-file template.yaml \
--stack-name my-stack \
--capabilities CAPABILITY_IAM
Install the AWS CLI v2 on any platform. Version 2 is recommended for all new installations.
Official PKG installer or Homebrew
# Official installer
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" \
-o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
# Or via Homebrew
brew install awscli
Bundled installer for x86_64 or ARM
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" \
-o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# Verify
aws --version
MSI installer or winget
# Download and run MSI from:
# https://awscli.amazonaws.com/AWSCLIV2.msi
# Or via winget
winget install Amazon.AWSCLI
Run from the official image
docker run --rm -it \
-v ~/.aws:/root/.aws \
amazon/aws-cli s3 ls
# Check version
aws --version
# aws-cli/2.15.30 Python/3.11.8 Linux/6.1.0 ...
# Update on Linux
sudo ./aws/install --update
# Update via Homebrew (macOS)
brew upgrade awscli
# Auto-prompt (interactive completion) for one invocation
aws --cli-auto-prompt
# Or enable it persistently
aws configure set cli_auto_prompt on
# Bash (add to ~/.bashrc)
complete -C '/usr/local/bin/aws_completer' aws
# Zsh (add to ~/.zshrc)
autoload bashcompinit && bashcompinit
autoload -Uz compinit && compinit
complete -C '/usr/local/bin/aws_completer' aws
# Fish
complete --command aws --no-files --arguments \
'(begin; set --local --export COMP_SHELL fish; set --local --export COMP_LINE (commandline); aws_completer | sed "s/ $//"; end)'
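Scripts that rely on v2-only behavior (auto-pagination, `aws sso`) can guard on the version string up front. A minimal sketch — `require_awscli_v2` is our own helper name, and it only parses the `aws-cli/MAJOR.` prefix printed by `aws --version`:

```shell
# Fail fast when the installed AWS CLI is not v2.
require_awscli_v2() {
  local version_line major
  version_line=$(aws --version 2>&1)   # e.g. "aws-cli/2.15.30 Python/3.11.8 ..."
  major=${version_line#aws-cli/}       # drop the "aws-cli/" prefix
  major=${major%%.*}                   # keep only the major version number
  if [ "$major" != "2" ]; then
    echo "AWS CLI v2 required, found: $version_line" >&2
    return 1
  fi
}
```

Call it at the top of any deploy script, before the first real `aws` command.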
How the AWS CLI resolves credentials, manages profiles, and handles authentication. This is the most important section to understand properly.
# Interactive setup (creates ~/.aws/credentials + ~/.aws/config)
aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: us-east-1
# Default output format [None]: json
# Configure a named profile
aws configure --profile staging
# Set a specific value
aws configure set region us-west-2 --profile prod
aws configure set output table
# Read a value
aws configure get region --profile prod
aws configure list
# ~/.aws/credentials — secret keys (never commit this)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[staging]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
# ~/.aws/config — profiles, regions, output
[default]
region = us-east-1
output = json
[profile staging]
region = us-west-2
output = table
[profile prod]
role_arn = arn:aws:iam::123456789012:role/ProdAdmin
source_profile = default
region = us-east-1
In ~/.aws/credentials, profiles are bare names like [staging]. In ~/.aws/config, non-default profiles must be prefixed with [profile staging]. The AWS CLI checks credentials in this priority order:
| Priority | Source | Details |
|---|---|---|
| 1 | Command line flags | --profile, --region, --output |
| 2 | Environment variables | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN |
| 3 | AWS SSO token | Cached in ~/.aws/sso/cache/ |
| 4 | Credentials file | ~/.aws/credentials |
| 5 | Config file | ~/.aws/config |
| 6 | Container credentials | ECS task role via AWS_CONTAINER_CREDENTIALS_RELATIVE_URI |
| 7 | Instance metadata | EC2 instance profile / IMDS (169.254.169.254) |
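The chain can be pictured as a fall-through check. This is purely illustrative — the real resolution happens inside the CLI, and the toy function below only mimics a few rungs:

```shell
# Toy model of the fall-through: report which rung would win.
credential_source() {
  if [ -n "${AWS_ACCESS_KEY_ID:-}" ] && [ -n "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "environment variables"
  elif ls "${HOME}/.aws/sso/cache/"*.json >/dev/null 2>&1; then
    echo "sso token cache"
  elif [ -f "${HOME}/.aws/credentials" ]; then
    echo "credentials file"
  else
    echo "container / instance metadata"
  fi
}
```

Handy for debugging "which credentials am I actually using?" — though `aws sts get-caller-identity` remains the authoritative answer.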
# Override profile for all commands in this shell
export AWS_PROFILE=staging
export AWS_DEFAULT_REGION=eu-west-1
# Use specific credentials (e.g., in CI/CD)
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_SESSION_TOKEN=FwoGZXIvYXdz... # if using STS
# Disable pager (output goes directly to stdout)
export AWS_PAGER=""
# Set default output format
export AWS_DEFAULT_OUTPUT=table
# Configure SSO profile (in ~/.aws/config)
[profile sso-dev]
sso_start_url = https://my-corp.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = DeveloperAccess
region = us-west-2
output = json
# Login (opens browser for OAuth)
aws sso login --profile sso-dev
# Use the SSO profile
aws s3 ls --profile sso-dev
# Logout (clears cached tokens)
aws sso logout
# Profile-based role assumption (automatic when using --profile)
[profile cross-account]
role_arn = arn:aws:iam::987654321098:role/CrossAccountRole
source_profile = default
duration_seconds = 3600
# Manual role assumption
CREDS=$(aws sts assume-role \
--role-arn arn:aws:iam::987654321098:role/MyRole \
--role-session-name my-session \
--query 'Credentials' \
--output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)
# With MFA
aws sts assume-role \
--role-arn arn:aws:iam::123456789012:role/AdminRole \
--role-session-name admin-session \
--serial-number arn:aws:iam::123456789012:mfa/my-user \
--token-code 123456
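The three-export dance above can be wrapped in a single function. A sketch under our own naming (`assume_into_env` is not an AWS command); using `--output text` keeps jq out of the loop, since the three values arrive as one tab-separated line:

```shell
# Assume a role and export the temporary credentials into this shell.
assume_into_env() {
  local role_arn=$1 session_name=${2:-cli-session} line
  line=$(aws sts assume-role \
    --role-arn "$role_arn" \
    --role-session-name "$session_name" \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text) || return 1
  # text output: AccessKeyId<TAB>SecretAccessKey<TAB>SessionToken
  IFS=$'\t' read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<<"$line"
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}
```

Define the function in your shell (or source the script) so the exports survive, then run `assume_into_env arn:aws:iam::987654321098:role/MyRole`.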
Object storage with 99.999999999% durability. Buckets, objects, versioning, lifecycle policies.
# List all buckets
aws s3 ls
# Create a bucket
aws s3 mb s3://my-new-bucket
aws s3 mb s3://my-regional-bucket --region eu-west-1
# Remove an empty bucket
aws s3 rb s3://my-old-bucket
# Remove a bucket and ALL contents (dangerous!)
aws s3 rb s3://my-old-bucket --force
# List objects in a bucket
aws s3 ls s3://my-bucket/
aws s3 ls s3://my-bucket/prefix/ --recursive --human-readable --summarize
# Upload a file
aws s3 cp index.html s3://my-bucket/
# Download a file
aws s3 cp s3://my-bucket/data.csv ./local/
# Copy between buckets
aws s3 cp s3://source-bucket/file.txt s3://dest-bucket/
# Upload an entire directory
aws s3 cp ./dist/ s3://my-bucket/assets/ --recursive
# Move (copy + delete source)
aws s3 mv s3://my-bucket/old-key.txt s3://my-bucket/new-key.txt
# Sync local to S3 (only changed files)
aws s3 sync ./build s3://my-bucket/ \
--delete \
--exclude "*.map" \
--exclude ".git/*" \
--include "*.html" \
--cache-control "max-age=31536000"
# Sync S3 to local
aws s3 sync s3://my-bucket/backups/ ./local-backups/
# Dry run (preview what would change)
aws s3 sync ./build s3://my-bucket/ --dryrun
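Because --delete is destructive, a common pattern is to show the --dryrun diff first and ask before applying. A sketch — the `confirm_sync` name and prompt are ours:

```shell
# Preview a destructive sync, then apply only on explicit confirmation.
confirm_sync() {
  local src=$1 dst=$2 answer
  echo "Preview of changes:"
  aws s3 sync "$src" "$dst" --delete --dryrun
  read -r -p "Apply these changes? [y/N] " answer
  if [ "$answer" = "y" ]; then
    aws s3 sync "$src" "$dst" --delete
  else
    echo "Aborted."
    return 1
  fi
}
```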
# Generate a presigned URL (default 1 hour)
aws s3 presign s3://my-bucket/private-file.pdf
# Custom expiration (seconds)
aws s3 presign s3://my-bucket/report.pdf --expires-in 3600
# Note: aws s3 presign only generates download (GET) URLs.
# Presigned upload (PUT) URLs require an SDK, e.g. boto3's
# generate_presigned_url("put_object", ...)
# Get bucket versioning status
aws s3api get-bucket-versioning --bucket my-bucket
# Enable versioning
aws s3api put-bucket-versioning --bucket my-bucket \
--versioning-configuration Status=Enabled
# Set lifecycle policy
aws s3api put-bucket-lifecycle-configuration \
--bucket my-bucket \
--lifecycle-configuration file://lifecycle.json
# Server-side encryption
aws s3 cp file.txt s3://my-bucket/ \
--sse aws:kms \
--sse-kms-key-id alias/my-key
# Set bucket policy
aws s3api put-bucket-policy --bucket my-bucket \
--policy file://policy.json
# Enable static website hosting
aws s3 website s3://my-bucket/ \
--index-document index.html \
--error-document error.html
Virtual servers in the cloud. Launch, manage, and terminate instances programmatically.
# List all instances
aws ec2 describe-instances
# Filter by state
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running"
# Filter by tag
aws ec2 describe-instances \
--filters "Name=tag:Environment,Values=production"
# Compact table output
aws ec2 describe-instances \
--query "Reservations[].Instances[].[InstanceId,InstanceType,State.Name,PublicIpAddress,Tags[?Key=='Name']|[0].Value]" \
--output table
# Specific instance
aws ec2 describe-instances --instance-ids i-0123456789abcdef0
# Launch an instance
aws ec2 run-instances \
--image-id ami-0c55b159cbfafe1f0 \
--instance-type t3.micro \
--key-name my-keypair \
--security-group-ids sg-0123456789abcdef0 \
--subnet-id subnet-0123456789abcdef0 \
--count 1 \
--tag-specifications \
'ResourceType=instance,Tags=[{Key=Name,Value=my-server},{Key=Environment,Value=dev}]'
# Stop instances
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
# Start instances
aws ec2 start-instances --instance-ids i-0123456789abcdef0
# Terminate instances (permanent!)
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
# Reboot
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0
# Create a key pair (save the .pem file!)
aws ec2 create-key-pair \
--key-name my-keypair \
--query 'KeyMaterial' \
--output text > my-keypair.pem
chmod 400 my-keypair.pem
# List key pairs
aws ec2 describe-key-pairs
# Delete a key pair
aws ec2 delete-key-pair --key-name old-keypair
# Create a security group
aws ec2 create-security-group \
--group-name web-sg \
--description "Web server security group" \
--vpc-id vpc-0123456789abcdef0
# Add inbound rule (allow SSH)
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp --port 22 \
--cidr 203.0.113.0/24
# Add inbound rules (allow HTTP and HTTPS)
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp --port 443 --cidr 0.0.0.0/0
# List your AMIs
aws ec2 describe-images --owners self
# Find latest Amazon Linux 2023
aws ec2 describe-images \
--owners amazon \
--filters "Name=name,Values=al2023-ami-2023*-x86_64" \
--query "sort_by(Images, &CreationDate)[-1].[ImageId,Name]" \
--output text
# Create an AMI from an instance
aws ec2 create-image \
--instance-id i-0123456789abcdef0 \
--name "my-app-backup-$(date +%Y%m%d)" \
--no-reboot
# Create a snapshot of an EBS volume
aws ec2 create-snapshot \
--volume-id vol-0123456789abcdef0 \
--description "Daily backup"
Run code without provisioning servers. Pay only for compute time consumed.
# Synchronous invocation
aws lambda invoke \
--function-name my-function \
--payload '{"name":"World"}' \
--cli-binary-format raw-in-base64-out \
response.json
# View the response
cat response.json
# Asynchronous invocation
aws lambda invoke \
--function-name my-function \
--invocation-type Event \
--payload '{"action":"process"}' \
--cli-binary-format raw-in-base64-out \
/dev/null
# Dry run (validate permissions)
aws lambda invoke \
--function-name my-function \
--invocation-type DryRun \
--payload '{}' \
--cli-binary-format raw-in-base64-out \
/dev/null
# Create a function from a zip
zip -r function.zip index.js node_modules/
aws lambda create-function \
--function-name my-function \
--runtime nodejs20.x \
--handler index.handler \
--role arn:aws:iam::123456789012:role/lambda-role \
--zip-file fileb://function.zip
# Update function code
aws lambda update-function-code \
--function-name my-function \
--zip-file fileb://function.zip
# Update configuration
aws lambda update-function-configuration \
--function-name my-function \
--timeout 30 \
--memory-size 512 \
--environment "Variables={DB_HOST=mydb.example.com,NODE_ENV=production}"
# Publish a version
aws lambda publish-version --function-name my-function
# Create / update alias
aws lambda create-alias \
--function-name my-function \
--name prod --function-version 3
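Updating code, publishing a version, and repointing the alias is typically scripted as one step. A hedged sketch — `release` is our own wrapper name, and it assumes `function.zip` already exists:

```shell
# Update code, wait for the update, publish a version, repoint the alias.
release() {
  local fn=$1 alias_name=${2:-prod} new_version
  aws lambda update-function-code --function-name "$fn" \
    --zip-file fileb://function.zip >/dev/null
  aws lambda wait function-updated --function-name "$fn"
  new_version=$(aws lambda publish-version --function-name "$fn" \
    --query 'Version' --output text)
  aws lambda update-alias --function-name "$fn" \
    --name "$alias_name" --function-version "$new_version" >/dev/null
  echo "$alias_name -> version $new_version"
}
```

Callers that invoke via the alias ARN pick up the new version atomically when the alias moves.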
# Tail logs in real-time
aws logs tail /aws/lambda/my-function --follow
# Logs from the last 30 minutes
aws logs tail /aws/lambda/my-function --since 30m
# Filter logs
aws logs filter-log-events \
--log-group-name /aws/lambda/my-function \
--filter-pattern "ERROR" \
--start-time $(date -d '1 hour ago' +%s000)
# List log groups
aws logs describe-log-groups \
--log-group-name-prefix /aws/lambda/
# List all functions
aws lambda list-functions \
--query "Functions[].[FunctionName,Runtime,MemorySize,Timeout]" \
--output table
# Get function details
aws lambda get-function --function-name my-function
# Delete a function
aws lambda delete-function --function-name my-function
# Add a trigger (S3 event)
aws lambda add-permission \
--function-name my-function \
--statement-id s3-trigger \
--action lambda:InvokeFunction \
--principal s3.amazonaws.com \
--source-arn arn:aws:s3:::my-bucket
# List event source mappings (SQS, DynamoDB, Kinesis)
aws lambda list-event-source-mappings \
--function-name my-function
Control who can do what in your AWS account. Users, roles, groups, and policies.
# List users
aws iam list-users \
--query "Users[].[UserName,CreateDate]" --output table
# Create a user
aws iam create-user --user-name deploy-bot
# Create access keys for a user
aws iam create-access-key --user-name deploy-bot
# List access keys
aws iam list-access-keys --user-name deploy-bot
# Deactivate / delete access keys
aws iam update-access-key \
--user-name deploy-bot \
--access-key-id AKIAIOSFODNN7EXAMPLE \
--status Inactive
aws iam delete-access-key \
--user-name deploy-bot \
--access-key-id AKIAIOSFODNN7EXAMPLE
# Delete a user (must remove keys, policies, groups first)
aws iam delete-user --user-name deploy-bot
# List roles
aws iam list-roles \
--query "Roles[].[RoleName,Arn]" --output table
# Create a role with trust policy
aws iam create-role \
--role-name lambda-exec-role \
--assume-role-policy-document file://trust-policy.json
# Example trust-policy.json for Lambda:
# {
# "Version": "2012-10-17",
# "Statement": [{
# "Effect": "Allow",
# "Principal": { "Service": "lambda.amazonaws.com" },
# "Action": "sts:AssumeRole"
# }]
# }
# Attach a managed policy to a role
aws iam attach-role-policy \
--role-name lambda-exec-role \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
# List policies attached to a role
aws iam list-attached-role-policies --role-name lambda-exec-role
# List all policies (customer-managed only)
aws iam list-policies --scope Local \
--query "Policies[].[PolicyName,Arn]" --output table
# Create an inline policy
aws iam put-user-policy \
--user-name deploy-bot \
--policy-name S3DeployPolicy \
--policy-document file://s3-policy.json
# Create a managed policy
aws iam create-policy \
--policy-name MyS3ReadPolicy \
--policy-document file://policy.json
# Get policy document (latest version)
aws iam get-policy-version \
--policy-arn arn:aws:iam::123456789012:policy/MyPolicy \
--version-id $(aws iam get-policy \
--policy-arn arn:aws:iam::123456789012:policy/MyPolicy \
--query 'Policy.DefaultVersionId' --output text)
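The nested command substitution above reads awkwardly in scripts; splitting it into two steps is clearer. A sketch with our own helper name:

```shell
# Fetch the active document for a customer-managed policy.
get_policy_doc() {
  local arn=$1 version_id
  version_id=$(aws iam get-policy --policy-arn "$arn" \
    --query 'Policy.DefaultVersionId' --output text) || return 1
  aws iam get-policy-version --policy-arn "$arn" \
    --version-id "$version_id" \
    --query 'PolicyVersion.Document'
}
```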
# Simulate a policy
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::123456789012:user/deploy-bot \
--action-names s3:GetObject s3:PutObject \
--resource-arns "arn:aws:s3:::my-bucket/*"
# Create a group and add user
aws iam create-group --group-name Developers
aws iam add-user-to-group --user-name alice --group-name Developers
aws iam attach-group-policy \
--group-name Developers \
--policy-arn arn:aws:iam::aws:policy/PowerUserAccess
# List groups for a user
aws iam list-groups-for-user --user-name alice
# Enable virtual MFA
aws iam create-virtual-mfa-device \
--virtual-mfa-device-name alice-mfa \
--outfile qrcode.png \
--bootstrap-method QRCodePNG
# Activate MFA (need two consecutive codes)
aws iam enable-mfa-device \
--user-name alice \
--serial-number arn:aws:iam::123456789012:mfa/alice-mfa \
--authentication-code1 123456 \
--authentication-code2 789012
CloudFormation for IaC, CloudWatch for monitoring, and SQS/SNS for messaging.
# Deploy / update a stack
aws cloudformation deploy \
--template-file template.yaml \
--stack-name my-app-stack \
--parameter-overrides \
Environment=prod \
InstanceType=t3.medium \
--capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
--tags Key=Project,Value=MyApp
# Create a stack (more control than deploy)
aws cloudformation create-stack \
--stack-name my-stack \
--template-body file://template.yaml \
--parameters ParameterKey=Env,ParameterValue=prod
# List stacks
aws cloudformation list-stacks \
--stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
--query "StackSummaries[].[StackName,StackStatus,CreationTime]" \
--output table
# Describe stack events (troubleshooting)
aws cloudformation describe-stack-events \
--stack-name my-stack \
--query "StackEvents[].[Timestamp,LogicalResourceId,ResourceStatus,ResourceStatusReason]" \
--output table
# Get stack outputs
aws cloudformation describe-stacks \
--stack-name my-stack \
--query "Stacks[0].Outputs"
# Delete a stack
aws cloudformation delete-stack --stack-name my-old-stack
# Wait for stack to complete
aws cloudformation wait stack-create-complete \
--stack-name my-stack
# Validate a template
aws cloudformation validate-template \
--template-body file://template.yaml
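When a deploy fails, the stack events usually hold the reason; wiring the two commands together saves a round trip to the console. A sketch (`deploy_with_events` is our own name):

```shell
# Deploy; on failure, dump FAILED events for troubleshooting.
deploy_with_events() {
  local stack=$1 template=$2
  if ! aws cloudformation deploy \
      --template-file "$template" \
      --stack-name "$stack" \
      --capabilities CAPABILITY_IAM; then
    echo "Deploy failed; recent FAILED events:" >&2
    aws cloudformation describe-stack-events --stack-name "$stack" \
      --query "StackEvents[?contains(ResourceStatus,'FAILED')].[LogicalResourceId,ResourceStatusReason]" \
      --output table >&2
    return 1
  fi
}
```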
# Tail log group in real-time
aws logs tail /aws/lambda/my-func --follow --since 1h
# Search logs
aws logs filter-log-events \
--log-group-name /aws/ecs/my-service \
--filter-pattern "ERROR" \
--start-time $(date -d '24 hours ago' +%s000)
# Put a custom metric
aws cloudwatch put-metric-data \
--namespace MyApp \
--metric-name RequestCount \
--value 1 \
--unit Count \
--dimensions Service=API,Environment=prod
# Get metric statistics
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
--start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 300 \
--statistics Average Maximum
# Create an alarm
aws cloudwatch put-metric-alarm \
--alarm-name high-cpu \
--metric-name CPUUtilization \
--namespace AWS/EC2 \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:alerts
# List alarms
aws cloudwatch describe-alarms \
--state-value ALARM --output table
# Create a queue
aws sqs create-queue --queue-name my-queue
aws sqs create-queue --queue-name my-dlq # dead-letter queue
# List queues
aws sqs list-queues
# Send a message
aws sqs send-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
--message-body '{"order_id": "12345"}' \
--delay-seconds 0
# Receive messages
aws sqs receive-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
--max-number-of-messages 10 \
--wait-time-seconds 20
# Delete a message (after processing)
aws sqs delete-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
--receipt-handle "AQEBwJnKyrHigUMZj6rYigCgxl..."
# Purge all messages
aws sqs purge-queue \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
# Get queue attributes
aws sqs get-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
--attribute-names ApproximateNumberOfMessages
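The receive/process/delete cycle above is the core of every SQS consumer; deleting only after successful processing is what gives you at-least-once semantics. A sketch — the `process_one` name and the "processing" echo are placeholders for real work:

```shell
# Long-poll for one message; delete it only after processing succeeds.
process_one() {
  local queue_url=$1 line body handle
  line=$(aws sqs receive-message \
    --queue-url "$queue_url" \
    --max-number-of-messages 1 --wait-time-seconds 20 \
    --query 'Messages[0].[Body,ReceiptHandle]' --output text)
  # text output is "Body<TAB>ReceiptHandle", or "None" when the queue is empty
  if [ "$line" = "None" ] || [ -z "$line" ]; then
    return 1
  fi
  IFS=$'\t' read -r body handle <<<"$line"
  echo "processing: $body"   # real work goes here
  aws sqs delete-message --queue-url "$queue_url" --receipt-handle "$handle"
}
```

If processing fails before delete-message runs, the message reappears after its visibility timeout — exactly the retry behavior you want.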
# Create a topic
aws sns create-topic --name my-alerts
# List topics
aws sns list-topics
# Subscribe an email endpoint
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:123456789012:my-alerts \
--protocol email \
--notification-endpoint ops@example.com
# Subscribe a Lambda function
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:123456789012:my-alerts \
--protocol lambda \
--notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:my-handler
# Publish a message
aws sns publish \
--topic-arn arn:aws:sns:us-east-1:123456789012:my-alerts \
--subject "Deployment Complete" \
--message "Version 2.3.1 deployed to production"
# List subscriptions
aws sns list-subscriptions-by-topic \
--topic-arn arn:aws:sns:us-east-1:123456789012:my-alerts
Fully managed NoSQL database. Single-digit millisecond performance at any scale.
# List tables
aws dynamodb list-tables
# Create a table
aws dynamodb create-table \
--table-name Users \
--attribute-definitions \
AttributeName=UserId,AttributeType=S \
AttributeName=Email,AttributeType=S \
--key-schema \
AttributeName=UserId,KeyType=HASH \
--global-secondary-indexes \
'IndexName=EmailIndex,KeySchema=[{AttributeName=Email,KeyType=HASH}],Projection={ProjectionType=ALL}' \
--billing-mode PAY_PER_REQUEST
# Describe a table
aws dynamodb describe-table --table-name Users
# Delete a table
aws dynamodb delete-table --table-name Users
# Put an item
aws dynamodb put-item \
--table-name Users \
--item '{
"UserId": {"S": "user-001"},
"Email": {"S": "alice@example.com"},
"Name": {"S": "Alice"},
"Age": {"N": "30"}
}'
# Get an item
aws dynamodb get-item \
--table-name Users \
--key '{"UserId": {"S": "user-001"}}' \
--consistent-read
# Update an item
aws dynamodb update-item \
--table-name Users \
--key '{"UserId": {"S": "user-001"}}' \
--update-expression "SET Age = :age, #n = :name" \
--expression-attribute-names '{"#n": "Name"}' \
--expression-attribute-values '{":age": {"N": "31"}, ":name": {"S": "Alice Smith"}}' \
--return-values ALL_NEW
# Delete an item
aws dynamodb delete-item \
--table-name Users \
--key '{"UserId": {"S": "user-001"}}'
# Query (requires partition key)
aws dynamodb query \
--table-name Orders \
--key-condition-expression "CustomerId = :cid AND OrderDate BETWEEN :start AND :end" \
--expression-attribute-values '{
":cid": {"S": "cust-42"},
":start": {"S": "2024-01-01"},
":end": {"S": "2024-12-31"}
}'
# Scan (full table — use sparingly)
aws dynamodb scan --table-name Users \
--filter-expression "Age > :min" \
--expression-attribute-values '{":min": {"N": "25"}}' \
--select COUNT
# Batch write (up to 25 items)
aws dynamodb batch-write-item \
--request-items file://batch-items.json
# batch-items.json format:
# {
# "Users": [
# { "PutRequest": { "Item": { "UserId": {"S":"u1"}, "Name": {"S":"Bob"} }}},
# { "DeleteRequest": { "Key": { "UserId": {"S":"u2"} }}}
# ]
# }
# Batch get (up to 100 items)
aws dynamodb batch-get-item \
--request-items '{
"Users": {
"Keys": [
{"UserId": {"S": "user-001"}},
{"UserId": {"S": "user-002"}}
]
}
}'
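One idiom the snippets above don't cover is an atomic counter: ADD in an update expression increments in place and creates the attribute on first use. A sketch with a hypothetical Counters table, wrapped in a function:

```shell
# Atomically increment a counter item (creates the attribute on first use).
bump_counter() {
  local table=$1 name=$2
  aws dynamodb update-item \
    --table-name "$table" \
    --key "{\"Name\": {\"S\": \"$name\"}}" \
    --update-expression "ADD Hits :one" \
    --expression-attribute-values '{":one": {"N": "1"}}' \
    --return-values UPDATED_NEW
}
```

Because the increment happens server-side, concurrent callers never lose updates the way a read-modify-write would.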
# List clusters
aws ecs list-clusters
# Create a cluster
aws ecs create-cluster --cluster-name my-cluster
# Register a task definition
aws ecs register-task-definition \
--cli-input-json file://task-def.json
# Run a Fargate task
aws ecs run-task \
--cluster my-cluster \
--task-definition my-app:3 \
--launch-type FARGATE \
--network-configuration '{
"awsvpcConfiguration": {
"subnets": ["subnet-abc123"],
"securityGroups": ["sg-abc123"],
"assignPublicIp": "ENABLED"
}
}'
# Create a service
aws ecs create-service \
--cluster my-cluster \
--service-name my-service \
--task-definition my-app:3 \
--desired-count 2 \
--launch-type FARGATE \
--network-configuration '{
"awsvpcConfiguration": {
"subnets": ["subnet-abc123"],
"securityGroups": ["sg-abc123"],
"assignPublicIp": "ENABLED"
}
}'
# Update service (deploy new version)
aws ecs update-service \
--cluster my-cluster \
--service my-service \
--task-definition my-app:4 \
--force-new-deployment
# List services and tasks
aws ecs list-services --cluster my-cluster
aws ecs list-tasks --cluster my-cluster --service-name my-service
# Exec into a running container
aws ecs execute-command \
--cluster my-cluster \
--task arn:aws:ecs:us-east-1:123456789012:task/abc123 \
--container my-container \
--interactive \
--command "/bin/sh"
# View container logs
aws logs tail /ecs/my-service --follow
# List clusters
aws eks list-clusters
# Create a cluster
aws eks create-cluster \
--name my-k8s-cluster \
--role-arn arn:aws:iam::123456789012:role/EKSClusterRole \
--resources-vpc-config \
subnetIds=subnet-abc123,subnet-def456,securityGroupIds=sg-abc123
# Update kubeconfig (connect kubectl)
aws eks update-kubeconfig \
--name my-k8s-cluster \
--region us-east-1
# Describe cluster
aws eks describe-cluster --name my-k8s-cluster
# Create a managed node group
aws eks create-nodegroup \
--cluster-name my-k8s-cluster \
--nodegroup-name workers \
--node-role arn:aws:iam::123456789012:role/EKSNodeRole \
--subnets subnet-abc123 subnet-def456 \
--instance-types t3.medium \
--scaling-config minSize=1,maxSize=5,desiredSize=2
# List node groups
aws eks list-nodegroups --cluster-name my-k8s-cluster
# Delete a cluster
aws eks delete-cluster --name my-k8s-cluster
# Authenticate Docker with ECR
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin \
123456789012.dkr.ecr.us-east-1.amazonaws.com
# Create a repository
aws ecr create-repository --repository-name my-app
# Tag and push an image
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
# List images
aws ecr list-images --repository-name my-app
# Describe images (get digest, size, push date)
aws ecr describe-images --repository-name my-app \
--query "imageDetails | sort_by(@, &imagePushedAt) | [-5:].[imageTags[0],imageSizeInBytes,imagePushedAt]" \
--output table
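Untagged layers accumulate on every push, and ECR can delete them by digest. A lifecycle policy is the usual fix, but the one-off version can be sketched as:

```shell
# Delete all untagged images in a repository, one digest at a time.
clean_untagged() {
  local repo=$1 digest
  for digest in $(aws ecr list-images --repository-name "$repo" \
      --filter tagStatus=UNTAGGED \
      --query 'imageIds[].imageDigest' --output text); do
    [ "$digest" = "None" ] && continue   # empty result set
    aws ecr batch-delete-image --repository-name "$repo" \
      --image-ids imageDigest="$digest"
  done
}
```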
Master the --query and --output flags to extract exactly the data you need.
| Format | Flag | Best For |
|---|---|---|
| --output json | Default | Piping to jq, programmatic use, full detail |
| --output table | Human-readable | Interactive terminal viewing, quick audits |
| --output text | Tab-delimited | Shell scripting, awk/cut pipelines |
| --output yaml | YAML | Config files, readable structured data |
| --output yaml-stream | Streaming YAML | Large result sets, progressive output |
# Select specific fields
aws ec2 describe-instances \
--query "Reservations[].Instances[].[InstanceId,State.Name]"
# Filter with conditions
aws ec2 describe-instances \
--query "Reservations[].Instances[?State.Name=='running'].[InstanceId,InstanceType]"
# Named columns (creates objects instead of arrays)
aws ec2 describe-instances \
--query "Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name,IP:PublicIpAddress}" \
--output table
# Flatten nested arrays
aws ec2 describe-instances \
--query "Reservations[].Instances[]"
# Get a single value
aws ec2 describe-instances \
--instance-ids i-0123456789abcdef0 \
--query "Reservations[0].Instances[0].PublicIpAddress" \
--output text
# Sort results
aws ec2 describe-instances \
--query "sort_by(Reservations[].Instances[], &LaunchTime)[].[InstanceId,LaunchTime]"
# Count results
aws ec2 describe-instances \
--query "length(Reservations[].Instances[?State.Name=='running'][])"
# Tag filtering
aws ec2 describe-instances \
--query "Reservations[].Instances[].Tags[?Key=='Name'].Value | []" \
--output text
# Pipe expressions (apply function to filtered result)
aws s3api list-objects-v2 --bucket my-bucket \
--query "Contents[?Size > \`1048576\`] | sort_by(@, &LastModified) | [-5:].[Key,Size]"
# Multi-select with mixed types
aws lambda list-functions \
--query "Functions[].{Name:FunctionName,Runtime:Runtime,MB:MemorySize,Timeout:Timeout}" \
--output table
# Starts-with / contains filtering
aws s3api list-objects-v2 --bucket my-bucket \
--query "Contents[?starts_with(Key, 'logs/')].{File:Key,Size:Size}"
# Max/min/sum
aws s3api list-objects-v2 --bucket my-bucket \
--query "max_by(Contents, &Size).{Largest:Key,Bytes:Size}"
# Combine with --output text for scripting
INSTANCE_IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Environment,Values=dev" \
--query "Reservations[].Instances[].InstanceId" \
--output text)
for id in $INSTANCE_IDS; do
echo "Stopping $id..."
aws ec2 stop-instances --instance-ids "$id"
done
Use --query (JMESPath) for filtering and jq for downstream transformation. Both run client-side: --query is applied by the CLI itself before output is rendered, which trims what reaches your terminal, while jq has more expressive power for complex transforms.
Real-world scripting patterns for automation, CI/CD pipelines, and operational scripts.
# Wait for an instance to be running
INSTANCE_ID=$(aws ec2 run-instances --image-id ami-abc123 --instance-type t3.micro \
--query 'Instances[0].InstanceId' --output text)
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
# Wait for a stack to complete
aws cloudformation create-stack --stack-name my-stack --template-body file://cf.yaml
aws cloudformation wait stack-create-complete --stack-name my-stack
# Wait for a DynamoDB table to be active
aws dynamodb wait table-exists --table-name Users
# Wait for an ECS service to stabilize
aws ecs wait services-stable \
--cluster my-cluster --services my-service
# Auto-pagination (default in CLI v2)
aws s3api list-objects-v2 --bucket my-bucket
# Manual pagination with --max-items and --starting-token
aws s3api list-objects-v2 --bucket my-bucket \
--max-items 100
# If NextToken is returned, use it:
aws s3api list-objects-v2 --bucket my-bucket \
--max-items 100 \
--starting-token "eyJNYXJrZXIiOiBudWxsLC..."
# Paginate in a loop
TOKEN=""
while true; do
if [ -z "$TOKEN" ]; then
RESULT=$(aws s3api list-objects-v2 --bucket my-bucket --max-items 1000)
else
RESULT=$(aws s3api list-objects-v2 --bucket my-bucket --max-items 1000 \
--starting-token "$TOKEN")
fi
echo "$RESULT" | jq -r '.Contents[]?.Key'
TOKEN=$(echo "$RESULT" | jq -r '.NextToken // empty')
[ -z "$TOKEN" ] && break
done
# Add to ~/.bashrc or ~/.zshrc
# Quick identity check
alias awswho='aws sts get-caller-identity'
# List running instances as a table
alias ec2ls='aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
--query "Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,Name:Tags[?Key==\`Name\`]|[0].Value,IP:PublicIpAddress}" \
--output table'
# Quick S3 bucket sizes
alias s3sizes='aws s3 ls --summarize --human-readable --recursive'
# Profile switcher function
awsp() {
export AWS_PROFILE="$1"
aws sts get-caller-identity
}
# Quick function to find AMI
ami-latest() {
aws ec2 describe-images --owners amazon \
--filters "Name=name,Values=al2023-ami-*-x86_64" \
--query "sort_by(Images, &CreationDate)[-1].[ImageId,Name]" \
--output text
}
# Deploy to S3 + CloudFront invalidation
aws s3 sync ./dist s3://my-website-bucket/ \
--delete \
--cache-control "public, max-age=31536000" \
--exclude "index.html"
aws s3 cp ./dist/index.html s3://my-website-bucket/ \
--cache-control "no-cache"
aws cloudfront create-invalidation \
--distribution-id E1234567890ABC \
--paths "/*"
# Deploy Lambda function
zip -r function.zip src/ node_modules/
aws lambda update-function-code \
--function-name my-api \
--zip-file fileb://function.zip
aws lambda wait function-updated --function-name my-api
aws lambda publish-version --function-name my-api
# ECS rolling deploy
aws ecs update-service \
--cluster prod \
--service my-api \
--task-definition my-api:$(aws ecs describe-task-definition \
--task-definition my-api \
--query 'taskDefinition.revision' --output text) \
--force-new-deployment
aws ecs wait services-stable --cluster prod --services my-api
# Tag resources for cost tracking
aws ec2 create-tags \
--resources i-abc123 vol-def456 \
--tags Key=CostCenter,Value=Engineering Key=Project,Value=MyApp
# Enable debug logging
aws s3 ls --debug 2>&1 | head -50
# Dry run (EC2 commands)
aws ec2 run-instances --dry-run \
--image-id ami-abc123 --instance-type t3.micro
# Check for errors in scripts
set -euo pipefail
if ! RESULT=$(aws lambda invoke \
--function-name my-func \
--payload '{}' \
--cli-binary-format raw-in-base64-out \
response.json 2>&1); then
echo "ERROR: Lambda invoke failed: $RESULT" >&2
exit 1
fi
STATUS=$(echo "$RESULT" | jq -r '.StatusCode')
if [ "$STATUS" -ne 200 ]; then
echo "ERROR: Lambda returned status $STATUS" >&2
cat response.json >&2
exit 1
fi
# Retry with exponential backoff
MAX_RETRIES=5
for i in $(seq 1 $MAX_RETRIES); do
aws s3 cp large-file.zip s3://my-bucket/ && break
echo "Attempt $i failed, retrying in $((2**i)) seconds..."
sleep $((2**i))
done
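The loop above can be generalized to retry any command. A sketch; `RETRY_BASE` is our own knob so the backoff base can be tuned (or zeroed in tests):

```shell
# Retry a command with exponential backoff: retry 5 aws s3 cp big.zip s3://bkt/
retry() {
  local max=$1 attempt=1
  shift
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    sleep $(( ${RETRY_BASE:-2} ** attempt ))   # 2s, 4s, 8s, ...
    attempt=$((attempt + 1))
  done
}
```

For throttling errors specifically, the CLI's built-in adaptive retry mode (below) is usually enough on its own.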
# Global CLI config for retries (in ~/.aws/config)
# [default]
# retry_mode = adaptive
# max_attempts = 5
aws ec2 describe-volumes \
--filters Name=status,Values=available \
--query "Volumes[].[VolumeId,Size,CreateTime]" \
--output table
for bucket in $(aws s3api list-buckets \
--query 'Buckets[].Name' --output text); do
acl=$(aws s3api get-bucket-acl \
--bucket "$bucket" 2>/dev/null)
echo "$acl" | grep -q AllUsers && \
echo "PUBLIC: $bucket"
done
aws ec2 stop-instances --instance-ids \
$(aws ec2 describe-instances \
--filters "Name=tag:Env,Values=dev" \
"Name=instance-state-name,Values=running" \
--query "Reservations[].Instances[].InstanceId" \
--output text)
aws secretsmanager list-secrets \
--query "SecretList[].Name" \
--output text | tr '\t' '\n' | \
while read name; do
echo "=== $name ==="
aws secretsmanager get-secret-value \
--secret-id "$name" \
--query SecretString --output text
done
aws ce get-cost-and-usage \
--time-period Start=$(date -d '30 days ago' +%Y-%m-%d),End=$(date +%Y-%m-%d) \
--granularity MONTHLY \
--metrics BlendedCost \
--group-by Type=DIMENSION,Key=SERVICE \
--query "ResultsByTime[0].Groups | sort_by(@, &Metrics.BlendedCost.Amount) | [-10:].[Keys[0],Metrics.BlendedCost.Amount]" \
--output table
# Create new key, update config, delete old
NEW_KEY=$(aws iam create-access-key \
--user-name deploy-bot \
--query 'AccessKey.[AccessKeyId,SecretAccessKey]' \
--output text)
echo "New key: $NEW_KEY"
# After updating ~/.aws/credentials:
aws iam delete-access-key \
--user-name deploy-bot \
--access-key-id OLDKEYIDHERE
Prefer explicit --profile in production scripts instead of relying on environment variables. It makes your scripts portable and auditable. Combine with --region for fully explicit commands that work anywhere.
Never commit ~/.aws/credentials to version control. Use IAM roles for EC2/ECS/Lambda, SSO for humans, and OIDC federation for CI/CD (GitHub Actions, GitLab CI).