diff --git a/tuts/026-kinesis-data-streams/README.md b/tuts/026-kinesis-data-streams/README.md new file mode 100644 index 00000000..7b65b108 --- /dev/null +++ b/tuts/026-kinesis-data-streams/README.md @@ -0,0 +1,61 @@ +# Kinesis: Process real-time stock data + +Create a Kinesis data stream with a Lambda producer that generates stock trades and a Lambda consumer that stores them in DynamoDB. + +## Source + +https://docs.aws.amazon.com/streams/latest/dev/tutorial-stock-data-kplkcl2.html + +## Use case + +- ID: kinesis/getting-started +- Phase: create +- Complexity: intermediate +- Core actions: kinesis:CreateStream, kinesis:PutRecord, lambda:CreateEventSourceMapping + +## What it does + +1. Creates a Kinesis data stream (1 shard) +2. Creates an IAM role with Kinesis, Lambda, and DynamoDB permissions +3. Creates a Python producer Lambda that generates random stock trades +4. Creates a Python consumer Lambda that writes trades to DynamoDB +5. Creates a DynamoDB table (on-demand billing) +6. Connects the stream to the consumer via event source mapping +7. Produces 10 stock trades and verifies they land in DynamoDB +8. Cleans up all resources + +## Running + +```bash +bash kinesis-data-streams.sh +``` + +To auto-run with cleanup: + +```bash +echo 'y' | bash kinesis-data-streams.sh +``` + +## Resources created + +- Kinesis data stream (1 shard) +- IAM role (with Lambda, Kinesis, and DynamoDB policies) +- 2 Lambda functions (Python 3.12): producer and consumer +- DynamoDB table (on-demand) +- Event source mapping +- 2 CloudWatch log groups (created automatically by Lambda) + +## Estimated time + +- Run: ~2.5 minutes (stream creation takes ~30s, event source mapping activation ~60s) +- Cleanup: ~10 seconds + +## Cost + +Kinesis: $0.015/shard-hour. DynamoDB: on-demand pricing. Both are negligible for a single tutorial run. Clean up promptly to avoid ongoing Kinesis charges. 
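The shard-hour figure translates to fractions of a cent for a run this short. A quick sketch (using the rate and runtime quoted above; actual billing may round a partial hour up, so the one-hour figure is the realistic ceiling):

```python
# Back-of-envelope Kinesis cost for one tutorial run.
# Assumed inputs: the $0.015/shard-hour rate and ~2.5-minute runtime from this README.
SHARD_HOUR_RATE = 0.015
RUN_MINUTES = 2.5
SHARDS = 1

prorated = SHARDS * (RUN_MINUTES / 60) * SHARD_HOUR_RATE
worst_case = SHARDS * 1 * SHARD_HOUR_RATE  # if billing rounds up to a full hour
print(f"prorated: ${prorated:.6f}, worst case: ${worst_case:.3f}")
```

Either way, forgetting to delete the stream is what actually costs money: one shard left running is ~$0.36/day.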
+ +## Related docs + +- [Tutorial: Process real-time stock data using KPL and KCL](https://docs.aws.amazon.com/streams/latest/dev/tutorial-stock-data-kplkcl2.html) +- [Using Lambda with Kinesis](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html) +- [Amazon Kinesis Data Streams terminology](https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html) diff --git a/tuts/026-kinesis-data-streams/REVISION-HISTORY.md b/tuts/026-kinesis-data-streams/REVISION-HISTORY.md new file mode 100644 index 00000000..1e0d91b6 --- /dev/null +++ b/tuts/026-kinesis-data-streams/REVISION-HISTORY.md @@ -0,0 +1,8 @@ +# Revision History: 026-kinesis-data-streams + +## Shell (CLI script) + +### 2026-04-14 v1 published +- Type: functional +- Initial version + diff --git a/tuts/026-kinesis-data-streams/kinesis-data-streams.md b/tuts/026-kinesis-data-streams/kinesis-data-streams.md new file mode 100644 index 00000000..0ee6e5e9 --- /dev/null +++ b/tuts/026-kinesis-data-streams/kinesis-data-streams.md @@ -0,0 +1,147 @@ +# Process real-time data with Amazon Kinesis Data Streams + +This tutorial shows you how to process real-time stock trade data using Amazon Kinesis Data Streams. You create a data stream, set up a Lambda producer to generate trades, connect a Lambda consumer to process them, and store results in DynamoDB. 
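One detail worth knowing up front: when Lambda reads from a Kinesis stream, each record's `Data` payload arrives base64-encoded in the event, which is why the consumer in this tutorial decodes before parsing JSON. The round trip can be sketched locally (the trade JSON is illustrative, matching the shape the producer emits):

```bash
# Sketch of the base64 round trip a Kinesis record goes through.
trade='{"ticker":"AAPL","type":"BUY","price":123.45,"quantity":7}'
encoded=$(printf '%s' "$trade" | base64)              # what the Lambda event carries
decoded=$(printf '%s' "$encoded" | base64 --decode)   # what the consumer parses
echo "$decoded"
```

If the consumer skips the decode step, `json.loads` fails on the raw base64 string — a common first-run error with Kinesis event source mappings.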
+ +## Prerequisites + +- AWS CLI configured with credentials and a default region +- Permissions to create Kinesis streams, Lambda functions, IAM roles, and DynamoDB tables + +## Step 1: Create a Kinesis data stream + +```bash +aws kinesis create-stream --stream-name stock-stream --shard-count 1 +aws kinesis wait stream-exists --stream-name stock-stream +``` + +## Step 2: Create an execution role + +Create a role with permissions for Lambda, Kinesis, and DynamoDB: + +```bash +aws iam create-role --role-name kinesis-tutorial-role \ + --assume-role-policy-document '{ + "Version":"2012-10-17", + "Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}] + }' + +aws iam attach-role-policy --role-name kinesis-tutorial-role \ + --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole +aws iam attach-role-policy --role-name kinesis-tutorial-role \ + --policy-arn arn:aws:iam::aws:policy/AmazonKinesisReadOnlyAccess +``` + +Add an inline policy for Kinesis writes and DynamoDB access: + +```bash +aws iam put-role-policy --role-name kinesis-tutorial-role --policy-name kinesis-dynamodb \ + --policy-document '{ + "Version":"2012-10-17", + "Statement":[ + {"Effect":"Allow","Action":["kinesis:PutRecord","kinesis:PutRecords"],"Resource":"*"}, + {"Effect":"Allow","Action":["dynamodb:PutItem","dynamodb:CreateTable","dynamodb:DescribeTable"],"Resource":"*"} + ] + }' +``` + +## Step 3: Create the producer function + +The producer generates random stock trades and writes them to the Kinesis stream. 
+ +```python +# producer.py +import boto3, json, random, time, os + +def lambda_handler(event, context): + kinesis = boto3.client('kinesis') + stream = os.environ['STREAM_NAME'] + tickers = ['AAPL', 'AMZN', 'MSFT', 'GOOGL', 'TSLA', 'NFLX', 'NVDA', 'META'] + for _ in range(10): + ticker = random.choice(tickers) + trade = {'ticker': ticker, 'type': random.choice(['BUY', 'SELL']), + 'price': round(random.uniform(50, 500), 2), + 'quantity': random.randint(1, 100), + 'timestamp': int(time.time() * 1000)} + kinesis.put_record(StreamName=stream, Data=json.dumps(trade), PartitionKey=ticker) + return {'statusCode': 200, 'body': '10 trades sent'} +``` + +Deploy: + +```bash +zip producer.zip producer.py +aws lambda create-function --function-name stock-producer \ + --zip-file fileb://producer.zip --handler producer.lambda_handler \ + --runtime python3.12 --role \ + --environment Variables={STREAM_NAME=stock-stream} +``` + +## Step 4: Create the consumer function + +The consumer reads trades from the stream and stores them in DynamoDB. 
+ +```python +# consumer.py +import boto3, json, base64, os + +def lambda_handler(event, context): + dynamodb = boto3.resource('dynamodb') + table = dynamodb.Table(os.environ['TABLE_NAME']) + for record in event['Records']: + payload = base64.b64decode(record['kinesis']['data']).decode() + trade = json.loads(payload) + table.put_item(Item={ + 'TradeId': f"{trade['timestamp']}-{trade['ticker']}", + 'Ticker': trade['ticker'], 'Type': trade['type'], + 'Price': str(trade['price']), 'Quantity': trade['quantity']}) + return {'statusCode': 200} +``` + +## Step 5: Create a DynamoDB table + +```bash +aws dynamodb create-table --table-name stock-trades \ + --key-schema AttributeName=TradeId,KeyType=HASH \ + --attribute-definitions AttributeName=TradeId,AttributeType=S \ + --billing-mode PAY_PER_REQUEST +aws dynamodb wait table-exists --table-name stock-trades +``` + +## Step 6: Connect the stream to the consumer + +```bash +aws lambda create-event-source-mapping \ + --function-name stock-consumer \ + --event-source-arn \ + --batch-size 100 --starting-position LATEST +``` + +## Step 7: Produce trades and verify + +Invoke the producer, then check DynamoDB: + +```bash +aws lambda invoke --function-name stock-producer response.json +aws dynamodb scan --table-name stock-trades --limit 3 \ + --query 'Items[].{Ticker:Ticker.S,Type:Type.S,Price:Price.S}' --output table +``` + +## Cleanup + +```bash +aws lambda delete-event-source-mapping --uuid +aws lambda delete-function --function-name stock-producer +aws lambda delete-function --function-name stock-consumer +aws dynamodb delete-table --table-name stock-trades +aws kinesis delete-stream --stream-name stock-stream +aws iam detach-role-policy --role-name kinesis-tutorial-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole +aws iam detach-role-policy --role-name kinesis-tutorial-role --policy-arn arn:aws:iam::aws:policy/AmazonKinesisReadOnlyAccess +aws iam delete-role-policy --role-name 
kinesis-tutorial-role --policy-name kinesis-dynamodb +aws iam delete-role --role-name kinesis-tutorial-role +``` + +The script automates all steps including cleanup. Run it with: + +```bash +bash kinesis-data-streams.sh +``` diff --git a/tuts/026-kinesis-data-streams/kinesis-data-streams.sh b/tuts/026-kinesis-data-streams/kinesis-data-streams.sh new file mode 100644 index 00000000..53733d11 --- /dev/null +++ b/tuts/026-kinesis-data-streams/kinesis-data-streams.sh @@ -0,0 +1,224 @@ +#!/bin/bash +# Tutorial: Process real-time data with Amazon Kinesis Data Streams +# Source: https://docs.aws.amazon.com/streams/latest/dev/tutorial-stock-data-kplkcl2.html + +WORK_DIR=$(mktemp -d) +LOG_FILE="$WORK_DIR/kinesis-ds-$(date +%Y%m%d-%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +if [ -z "$REGION" ]; then + echo "ERROR: No AWS region configured. Set one with: export AWS_DEFAULT_REGION=us-east-1" + exit 1 +fi +export AWS_DEFAULT_REGION="$REGION" +echo "Region: $REGION" + +RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1) +STREAM_NAME="stock-stream-${RANDOM_ID}" +ROLE_NAME="kinesis-tut-role-${RANDOM_ID}" +PRODUCER_NAME="stock-producer-${RANDOM_ID}" +CONSUMER_NAME="stock-consumer-${RANDOM_ID}" +TABLE_NAME="stock-trades-${RANDOM_ID}" + +handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; } +trap 'handle_error $LINENO' ERR + +cleanup() { + echo "" + echo "Cleaning up resources..." 
+ [ -n "$EVENT_SOURCE_UUID" ] && \ + aws lambda delete-event-source-mapping --uuid "$EVENT_SOURCE_UUID" > /dev/null 2>&1 && echo " Deleted event source mapping" + aws lambda delete-function --function-name "$PRODUCER_NAME" 2>/dev/null && echo " Deleted function $PRODUCER_NAME" + aws lambda delete-function --function-name "$CONSUMER_NAME" 2>/dev/null && echo " Deleted function $CONSUMER_NAME" + aws dynamodb delete-table --table-name "$TABLE_NAME" > /dev/null 2>&1 && echo " Deleted table $TABLE_NAME" + aws kinesis delete-stream --stream-name "$STREAM_NAME" 2>/dev/null && echo " Deleted stream $STREAM_NAME" + aws iam detach-role-policy --role-name "$ROLE_NAME" \ + --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole 2>/dev/null + aws iam detach-role-policy --role-name "$ROLE_NAME" \ + --policy-arn arn:aws:iam::aws:policy/AmazonKinesisReadOnlyAccess 2>/dev/null + # Delete inline policy + aws iam delete-role-policy --role-name "$ROLE_NAME" --policy-name kinesis-dynamodb 2>/dev/null + aws iam delete-role --role-name "$ROLE_NAME" 2>/dev/null && echo " Deleted role $ROLE_NAME" + aws logs delete-log-group --log-group-name "/aws/lambda/$PRODUCER_NAME" 2>/dev/null + aws logs delete-log-group --log-group-name "/aws/lambda/$CONSUMER_NAME" 2>/dev/null && echo " Deleted log groups" + rm -rf "$WORK_DIR" + echo "Cleanup complete." +} + +# Step 1: Create Kinesis data stream +echo "Step 1: Creating Kinesis data stream: $STREAM_NAME" +aws kinesis create-stream --stream-name "$STREAM_NAME" --shard-count 1 +echo " Waiting for stream to become active..." 
+aws kinesis wait stream-exists --stream-name "$STREAM_NAME" +STREAM_ARN=$(aws kinesis describe-stream-summary --stream-name "$STREAM_NAME" \ + --query 'StreamDescriptionSummary.StreamARN' --output text) +echo " Stream ARN: $STREAM_ARN" + +# Step 2: Create IAM role +echo "Step 2: Creating IAM role: $ROLE_NAME" +ROLE_ARN=$(aws iam create-role --role-name "$ROLE_NAME" \ + --assume-role-policy-document '{ + "Version":"2012-10-17", + "Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}] + }' --query 'Role.Arn' --output text) +aws iam attach-role-policy --role-name "$ROLE_NAME" \ + --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole +aws iam attach-role-policy --role-name "$ROLE_NAME" \ + --policy-arn arn:aws:iam::aws:policy/AmazonKinesisReadOnlyAccess +aws iam put-role-policy --role-name "$ROLE_NAME" --policy-name kinesis-dynamodb \ + --policy-document '{ + "Version":"2012-10-17", + "Statement":[ + {"Effect":"Allow","Action":["kinesis:PutRecord","kinesis:PutRecords"],"Resource":"*"}, + {"Effect":"Allow","Action":["dynamodb:PutItem","dynamodb:CreateTable","dynamodb:DescribeTable"],"Resource":"*"} + ] + }' +echo " Role ARN: $ROLE_ARN" +echo " Waiting for role propagation..." 
+sleep 10 + +# Step 3: Create producer Lambda function +echo "Step 3: Creating producer function: $PRODUCER_NAME" +cat > "$WORK_DIR/producer.py" << PYEOF +import boto3, json, random, time, os + +def lambda_handler(event, context): + kinesis = boto3.client('kinesis') + stream = os.environ['STREAM_NAME'] + tickers = ['AAPL', 'AMZN', 'MSFT', 'GOOGL', 'TSLA', 'NFLX', 'NVDA', 'META'] + trades = [] + for _ in range(10): + ticker = random.choice(tickers) + trade = { + 'ticker': ticker, + 'type': random.choice(['BUY', 'SELL']), + 'price': round(random.uniform(50, 500), 2), + 'quantity': random.randint(1, 100), + 'timestamp': int(time.time() * 1000) + } + kinesis.put_record(StreamName=stream, Data=json.dumps(trade), PartitionKey=ticker) + trades.append(trade) + print(f"Produced {len(trades)} trades") + return {'statusCode': 200, 'body': f'{len(trades)} trades sent'} +PYEOF +(cd "$WORK_DIR" && zip producer.zip producer.py > /dev/null) + +aws lambda create-function --function-name "$PRODUCER_NAME" \ + --zip-file "fileb://$WORK_DIR/producer.zip" \ + --handler producer.lambda_handler --runtime python3.12 \ + --role "$ROLE_ARN" --timeout 30 \ + --architectures x86_64 \ + --environment "Variables={STREAM_NAME=$STREAM_NAME}" \ + --query 'FunctionArn' --output text +aws lambda wait function-active-v2 --function-name "$PRODUCER_NAME" + +# Step 4: Create consumer Lambda function +echo "Step 4: Creating consumer function: $CONSUMER_NAME" +cat > "$WORK_DIR/consumer.py" << PYEOF +import boto3, json, base64, os, time + +def lambda_handler(event, context): + dynamodb = boto3.resource('dynamodb') + table_name = os.environ['TABLE_NAME'] + table = dynamodb.Table(table_name) + processed = 0 + for record in event['Records']: + payload = base64.b64decode(record['kinesis']['data']).decode() + trade = json.loads(payload) + table.put_item(Item={ + 'TradeId': f"{trade['timestamp']}-{trade['ticker']}", + 'Ticker': trade['ticker'], + 'Type': trade['type'], + 'Price': str(trade['price']), + 
'Quantity': trade['quantity'], + 'Timestamp': trade['timestamp'] + }) + processed += 1 + print(f"Processed {processed} trades") + return {'statusCode': 200} +PYEOF +(cd "$WORK_DIR" && zip consumer.zip consumer.py > /dev/null) + +aws lambda create-function --function-name "$CONSUMER_NAME" \ + --zip-file "fileb://$WORK_DIR/consumer.zip" \ + --handler consumer.lambda_handler --runtime python3.12 \ + --role "$ROLE_ARN" --timeout 30 \ + --architectures x86_64 \ + --environment "Variables={TABLE_NAME=$TABLE_NAME}" \ + --query 'FunctionArn' --output text +aws lambda wait function-active-v2 --function-name "$CONSUMER_NAME" + +# Step 5: Create DynamoDB table +echo "Step 5: Creating DynamoDB table: $TABLE_NAME" +aws dynamodb create-table --table-name "$TABLE_NAME" \ + --key-schema AttributeName=TradeId,KeyType=HASH \ + --attribute-definitions AttributeName=TradeId,AttributeType=S \ + --billing-mode PAY_PER_REQUEST \ + --query 'TableDescription.TableArn' --output text +aws dynamodb wait table-exists --table-name "$TABLE_NAME" +echo " Table active" + +# Step 6: Connect Kinesis stream to consumer Lambda +echo "Step 6: Creating event source mapping (stream → consumer)" +EVENT_SOURCE_UUID=$(aws lambda create-event-source-mapping \ + --function-name "$CONSUMER_NAME" \ + --event-source-arn "$STREAM_ARN" \ + --batch-size 100 \ + --starting-position LATEST \ + --query 'UUID' --output text) +echo " Event source mapping: $EVENT_SOURCE_UUID" +echo " Waiting for mapping to become active..." 
+for i in $(seq 1 20); do + STATE=$(aws lambda get-event-source-mapping --uuid "$EVENT_SOURCE_UUID" \ + --query 'State' --output text 2>/dev/null || true) + [ "$STATE" = "Enabled" ] && break + sleep 5 +done +echo " State: $STATE" + +# Step 7: Produce stock trades +echo "Step 7: Producing stock trades" +aws lambda invoke --function-name "$PRODUCER_NAME" \ + --cli-binary-format raw-in-base64-out \ + "$WORK_DIR/producer-response.json" > /dev/null +echo " $(cat "$WORK_DIR/producer-response.json")" + +# Step 8: Verify trades in DynamoDB +echo "Step 8: Verifying trades in DynamoDB (waiting for consumer to process)..." +sleep 10 +FOUND_TRADES=false +for i in $(seq 1 18); do + COUNT=$(aws dynamodb scan --table-name "$TABLE_NAME" --select COUNT \ + --query 'Count' --output text 2>/dev/null || echo "0") + if [ "$COUNT" -gt 0 ] 2>/dev/null; then + echo " Found $COUNT trades in DynamoDB" + aws dynamodb scan --table-name "$TABLE_NAME" --limit 3 \ + --query 'Items[].{Ticker:Ticker.S,Type:Type.S,Price:Price.S}' --output table + FOUND_TRADES=true + break + fi + sleep 5 +done +if [ "$FOUND_TRADES" = false ]; then + echo " Trades not yet visible (Kinesis consumer polling can take up to 60s)" +fi + +echo "" +echo "Tutorial complete." +echo "Do you want to clean up all resources? (y/n): " +read -r CHOICE +if [[ "$CHOICE" =~ ^[Yy]$ ]]; then + cleanup +else + echo "Resources left running. 
Manual cleanup commands:" + echo " aws lambda delete-event-source-mapping --uuid $EVENT_SOURCE_UUID" + echo " aws lambda delete-function --function-name $PRODUCER_NAME" + echo " aws lambda delete-function --function-name $CONSUMER_NAME" + echo " aws dynamodb delete-table --table-name $TABLE_NAME" + echo " aws kinesis delete-stream --stream-name $STREAM_NAME" + echo " aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + echo " aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonKinesisReadOnlyAccess" + echo " aws iam delete-role-policy --role-name $ROLE_NAME --policy-name kinesis-dynamodb" + echo " aws iam delete-role --role-name $ROLE_NAME" +fi diff --git a/tuts/095-aws-acm-gs/README.md b/tuts/095-aws-acm-gs/README.md new file mode 100644 index 00000000..39222255 --- /dev/null +++ b/tuts/095-aws-acm-gs/README.md @@ -0,0 +1,50 @@ +# ACM: Request and manage certificates + +Request an SSL/TLS certificate with DNS validation, inspect the certificate, list certificates, and add tags using the AWS CLI. + +## Source + +https://docs.aws.amazon.com/acm/latest/userguide/gs.html + +## Use case + +- ID: acm/getting-started +- Phase: create +- Complexity: beginner +- Core actions: acm:RequestCertificate, acm:DescribeCertificate + +## What it does + +1. Requests a certificate with DNS validation +2. Describes the certificate +3. Shows the DNS validation record +4. Lists certificates +5. Adds tags to the certificate + +## Running + +```bash +bash aws-acm-gs.sh +``` + +## Resources created + +- ACM certificate (pending validation) + +The certificate is free. ACM-issued public certificates have no cost. The script prompts you to clean up when it finishes. + +## Estimated time + +- Run: ~7 seconds + +## Cost + +Free. There is no charge for ACM-issued public SSL/TLS certificates. 
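Although the certificates are free, it's worth confirming none were left behind after a run. A sketch of such a check (assuming the `tutorial-` domain prefix this script uses):

```bash
# List ARNs of leftover tutorial certificates still pending validation
aws acm list-certificates \
  --certificate-statuses PENDING_VALIDATION \
  --query 'CertificateSummaryList[?contains(DomainName, `tutorial-`)].CertificateArn' \
  --output text
```

Any ARN this prints can be passed to `aws acm delete-certificate --certificate-arn`.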
+ +## Related docs + +- [Getting started with ACM](https://docs.aws.amazon.com/acm/latest/userguide/gs.html) +- [Requesting a public certificate](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html) +- [DNS validation](https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html) +- [Tagging ACM certificates](https://docs.aws.amazon.com/acm/latest/userguide/tags.html) +- [Deleting certificates](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-delete.html) diff --git a/tuts/095-aws-acm-gs/REVISION-HISTORY.md b/tuts/095-aws-acm-gs/REVISION-HISTORY.md new file mode 100644 index 00000000..1093489e --- /dev/null +++ b/tuts/095-aws-acm-gs/REVISION-HISTORY.md @@ -0,0 +1,8 @@ +# Revision History: 095-aws-acm-gs + +## Shell (CLI script) + +### 2026-04-14 v1 published +- Type: functional +- Initial version + diff --git a/tuts/095-aws-acm-gs/aws-acm-gs.md b/tuts/095-aws-acm-gs/aws-acm-gs.md new file mode 100644 index 00000000..43144bd7 --- /dev/null +++ b/tuts/095-aws-acm-gs/aws-acm-gs.md @@ -0,0 +1,81 @@ +# Request and manage certificates with AWS Certificate Manager + +This tutorial shows you how to request an SSL/TLS certificate with DNS validation, inspect the certificate and its validation record, list certificates, and add tags. + +## Prerequisites + +- AWS CLI configured with credentials and a default region +- Permissions for `acm:RequestCertificate`, `acm:DescribeCertificate`, `acm:ListCertificates`, `acm:AddTagsToCertificate`, `acm:ListTagsForCertificate`, `acm:DeleteCertificate` + +## Step 1: Request a certificate + +Request a certificate for a domain using DNS validation: + +```bash +CERT_ARN=$(aws acm request-certificate \ + --domain-name "$DOMAIN" \ + --validation-method DNS \ + --query 'CertificateArn' --output text) +echo "Certificate ARN: $CERT_ARN" +``` + +ACM creates the certificate in `PENDING_VALIDATION` status. 
The certificate won't be issued until you add the DNS validation record to your domain's DNS configuration. + +> **Note:** The script uses a subdomain of `example.com`, which is a reserved domain. The certificate will stay in `PENDING_VALIDATION` because DNS validation can't complete for a domain you don't own. + +## Step 2: Describe the certificate + +```bash +aws acm describe-certificate --certificate-arn "$CERT_ARN" \ + --query 'Certificate.{Domain:DomainName,Status:Status,Type:Type,Validation:DomainValidationOptions[0].ValidationMethod}' \ + --output table +``` + +This shows the domain name, current status, certificate type (AMAZON_ISSUED), and validation method. + +## Step 3: Show the DNS validation record + +```bash +aws acm describe-certificate --certificate-arn "$CERT_ARN" \ + --query 'Certificate.DomainValidationOptions[0].ResourceRecord.{Name:Name,Type:Type,Value:Value}' \ + --output table +``` + +ACM provides a CNAME record that you add to your domain's DNS to prove ownership. In production, you would create this record in Route 53 or your DNS provider. + +The validation record may be empty on the first describe call. The script waits briefly before querying. + +## Step 4: List certificates + +```bash +aws acm list-certificates \ + --query 'CertificateSummaryList[?contains(DomainName, `tutorial-`)].{Domain:DomainName,Status:Status,ARN:CertificateArn}' \ + --output table +``` + +## Step 5: Add tags + +```bash +aws acm add-tags-to-certificate --certificate-arn "$CERT_ARN" \ + --tags Key=Environment,Value=tutorial Key=Project,Value=acm-gs +aws acm list-tags-for-certificate --certificate-arn "$CERT_ARN" \ + --query 'Tags[].{Key:Key,Value:Value}' --output table +``` + +Tags help you organize and track certificates. You can add up to 50 tags per certificate. + +## Cleanup + +Delete the certificate: + +```bash +aws acm delete-certificate --certificate-arn "$CERT_ARN" +``` + +Certificates in `PENDING_VALIDATION` status can be deleted immediately. 
Certificates that are in use by another AWS service (such as an Elastic Load Balancer) must be disassociated first. + +The script automates all steps including cleanup: + +```bash +bash aws-acm-gs.sh +``` diff --git a/tuts/095-aws-acm-gs/aws-acm-gs.sh b/tuts/095-aws-acm-gs/aws-acm-gs.sh new file mode 100644 index 00000000..99f847fc --- /dev/null +++ b/tuts/095-aws-acm-gs/aws-acm-gs.sh @@ -0,0 +1,77 @@ +#!/bin/bash +# Tutorial: Request and manage SSL/TLS certificates with AWS Certificate Manager +# Source: https://docs.aws.amazon.com/acm/latest/userguide/gs.html + +WORK_DIR=$(mktemp -d) +LOG_FILE="$WORK_DIR/acm-$(date +%Y%m%d-%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +if [ -z "$REGION" ]; then + echo "ERROR: No AWS region configured. Set one with: export AWS_DEFAULT_REGION=us-east-1" + exit 1 +fi +export AWS_DEFAULT_REGION="$REGION" +echo "Region: $REGION" + +RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1) +DOMAIN="tutorial-${RANDOM_ID}.example.com" + +handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; } +trap 'handle_error $LINENO' ERR + +cleanup() { + echo "" + echo "Cleaning up resources..." + [ -n "$CERT_ARN" ] && aws acm delete-certificate --certificate-arn "$CERT_ARN" 2>/dev/null && \ + echo " Deleted certificate $CERT_ARN" + rm -rf "$WORK_DIR" + echo "Cleanup complete." 
+} + +# Step 1: Request a certificate +echo "Step 1: Requesting a certificate for $DOMAIN" +CERT_ARN=$(aws acm request-certificate \ + --domain-name "$DOMAIN" \ + --validation-method DNS \ + --query 'CertificateArn' --output text) +echo " Certificate ARN: $CERT_ARN" + +# Step 2: Describe the certificate +echo "Step 2: Describing the certificate" +sleep 2 +aws acm describe-certificate --certificate-arn "$CERT_ARN" \ + --query 'Certificate.{Domain:DomainName,Status:Status,Type:Type,Validation:DomainValidationOptions[0].ValidationMethod}' --output table + +# Step 3: Show DNS validation record +echo "Step 3: DNS validation record (you would add this to your DNS)" +sleep 3 +aws acm describe-certificate --certificate-arn "$CERT_ARN" \ + --query 'Certificate.DomainValidationOptions[0].ResourceRecord.{Name:Name,Type:Type,Value:Value}' --output table + +# Step 4: List certificates +echo "Step 4: Listing certificates" +aws acm list-certificates \ + --query 'CertificateSummaryList[?contains(DomainName, `tutorial-`)].{Domain:DomainName,Status:Status,ARN:CertificateArn}' --output table + +# Step 5: Add tags +echo "Step 5: Adding tags to the certificate" +aws acm add-tags-to-certificate --certificate-arn "$CERT_ARN" \ + --tags Key=Environment,Value=tutorial Key=Project,Value=acm-gs +aws acm list-tags-for-certificate --certificate-arn "$CERT_ARN" \ + --query 'Tags[].{Key:Key,Value:Value}' --output table + +echo "" +echo "Tutorial complete." +echo "Note: The certificate will remain in PENDING_VALIDATION status because" +echo "example.com is not a real domain. In production, you would add the DNS" +echo "record from Step 3 to your domain's DNS configuration." +echo "" +echo "Do you want to clean up all resources? 
(y/n): " +read -r CHOICE +if [[ "$CHOICE" =~ ^[Yy]$ ]]; then + cleanup +else + echo "Manual cleanup:" + echo " aws acm delete-certificate --certificate-arn $CERT_ARN" +fi diff --git a/tuts/100-aws-secrets-manager-gs/README.md b/tuts/100-aws-secrets-manager-gs/README.md new file mode 100644 index 00000000..6f7cbaec --- /dev/null +++ b/tuts/100-aws-secrets-manager-gs/README.md @@ -0,0 +1,55 @@ +# Secrets Manager: Store and retrieve secrets + +## Source + +https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html + +## Use case + +- **ID**: secretsmanager/getting-started +- **Level**: beginner +- **Core actions**: `secretsmanager:CreateSecret`, `secretsmanager:GetSecretValue`, `secretsmanager:PutSecretValue` + +## Steps + +1. Create a secret with JSON credentials +2. Retrieve the secret +3. Update the secret value +4. Retrieve the updated secret +5. Describe the secret metadata +6. Tag the secret + +## Resources created + +| Resource | Type | +|----------|------| +| `tutorial/db-creds-` | Secret | + +## Cost + +$0.40/month per secret. The secret is deleted immediately during cleanup, so no ongoing cost. 
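As a rough illustration of why the cost rounds to zero (assuming the $0.40 monthly rate is prorated by the hour, rounding the ~7-second lifetime up to one hour):

```python
# Back-of-envelope cost for a secret that lives only a few seconds.
# Assumes $0.40/secret-month prorated hourly, rounded up to 1 hour.
MONTHLY_RATE = 0.40
HOURS_PER_MONTH = 30 * 24

one_hour = MONTHLY_RATE / HOURS_PER_MONTH
print(f"upper bound for one run: ${one_hour:.6f}")  # well under a tenth of a cent
```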
+ +## Duration + +~7 seconds + +## Related docs + +- [Getting started with Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html) +- [Create and manage secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/managing-secrets.html) +- [Rotate secrets automatically](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) +- [Tag secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/managing-secrets_tagging.html) + +--- + +## Appendix + +| Field | Value | +|-------|-------| +| Date | 2026-04-14 | +| Script lines | 78 | +| Exit code | 0 | +| Runtime | 7s | +| Steps | 6 | +| Issues | None | +| Version | v1 | diff --git a/tuts/100-aws-secrets-manager-gs/REVISION-HISTORY.md b/tuts/100-aws-secrets-manager-gs/REVISION-HISTORY.md new file mode 100644 index 00000000..7766993d --- /dev/null +++ b/tuts/100-aws-secrets-manager-gs/REVISION-HISTORY.md @@ -0,0 +1,8 @@ +# Revision History: 100-aws-secrets-manager-gs + +## Shell (CLI script) + +### 2026-04-14 v1 published +- Type: functional +- Initial version + diff --git a/tuts/100-aws-secrets-manager-gs/aws-secrets-manager-gs.md b/tuts/100-aws-secrets-manager-gs/aws-secrets-manager-gs.md new file mode 100644 index 00000000..13b7690d --- /dev/null +++ b/tuts/100-aws-secrets-manager-gs/aws-secrets-manager-gs.md @@ -0,0 +1,104 @@ +# Store and retrieve secrets with AWS Secrets Manager + +## Overview + +In this tutorial, you use the AWS CLI to create a secret containing JSON database credentials, retrieve and update the secret value, inspect secret metadata, and tag the secret for organization. You then delete the secret immediately without a recovery window. + +## Prerequisites + +- AWS CLI installed and configured with appropriate permissions. 
+- An IAM principal with permissions for `secretsmanager:CreateSecret`, `secretsmanager:GetSecretValue`, `secretsmanager:PutSecretValue`, `secretsmanager:DescribeSecret`, `secretsmanager:TagResource`, and `secretsmanager:DeleteSecret`. + +## Step 1: Create a secret + +Create a secret with JSON-formatted database credentials. + +```bash +SECRET_NAME="tutorial/db-creds-$(openssl rand -hex 4)" + +SECRET_ARN=$(aws secretsmanager create-secret \ + --name "$SECRET_NAME" \ + --description "Tutorial database credentials" \ + --secret-string '{"username":"admin","password":"tutorial-pass-12345","engine":"mysql","host":"db.example.com","port":3306}' \ + --query 'ARN' --output text) +echo "Secret ARN: $SECRET_ARN" +``` + +Secrets Manager stores the secret string as-is. JSON format is conventional for database credentials because the SDKs and rotation functions expect it. + +## Step 2: Retrieve the secret + +```bash +aws secretsmanager get-secret-value --secret-id "$SECRET_NAME" \ + --query '{Name:Name,Value:SecretString}' --output table +``` + +The `SecretString` field contains the JSON you stored. For binary secrets, use `SecretBinary` instead. + +## Step 3: Update the secret value + +Replace the secret value with new credentials using `put-secret-value`. + +```bash +aws secretsmanager put-secret-value --secret-id "$SECRET_NAME" \ + --secret-string '{"username":"admin","password":"new-secure-pass-67890","engine":"mysql","host":"db.example.com","port":3306}' +``` + +Secrets Manager creates a new version of the secret. The previous version is still accessible by version ID. + +## Step 4: Retrieve the updated secret + +Confirm the secret now contains the updated password. + +```bash +aws secretsmanager get-secret-value --secret-id "$SECRET_NAME" \ + --query 'SecretString' --output text | python3 -m json.tool +``` + +## Step 5: Describe the secret + +View the secret's metadata, including creation date, last changed date, and version count. 
+ +```bash +aws secretsmanager describe-secret --secret-id "$SECRET_NAME" \ + --query '{Name:Name,Description:Description,Created:CreatedDate,LastChanged:LastChangedDate,Versions:VersionIdsToStages|length(@)}' \ + --output table +``` + +`describe-secret` returns metadata only — it never returns the secret value. + +## Step 6: Tag the secret + +Add tags to organize and control access to the secret. + +```bash +aws secretsmanager tag-resource --secret-id "$SECRET_NAME" \ + --tags Key=Environment,Value=tutorial Key=Application,Value=database + +aws secretsmanager describe-secret --secret-id "$SECRET_NAME" \ + --query 'Tags[].{Key:Key,Value:Value}' --output table +``` + +## Cleanup + +Delete the secret immediately with `--force-delete-without-recovery`. This skips the default 7–30 day recovery window. + +```bash +aws secretsmanager delete-secret --secret-id "$SECRET_NAME" \ + --force-delete-without-recovery +``` + +Without `--force-delete-without-recovery`, Secrets Manager schedules deletion after a recovery window (default 30 days), during which you can restore the secret. 
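If you instead delete with a recovery window and change your mind, `restore-secret` cancels the pending deletion. A sketch, reusing the `$SECRET_NAME` variable from Step 1:

```bash
# Schedule deletion with the minimum 7-day recovery window...
aws secretsmanager delete-secret --secret-id "$SECRET_NAME" \
  --recovery-window-in-days 7
# ...then cancel it while the window is still open
aws secretsmanager restore-secret --secret-id "$SECRET_NAME"
```

Restoring returns the secret to normal service; its value and versions are unchanged.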
+ +The script automates all steps including cleanup: + +```bash +bash aws-secrets-manager-gs.sh +``` + +## Related resources + +- [Getting started with Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html) +- [Create and manage secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/managing-secrets.html) +- [Rotate secrets automatically](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) +- [Tag secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/managing-secrets_tagging.html) diff --git a/tuts/100-aws-secrets-manager-gs/aws-secrets-manager-gs.sh b/tuts/100-aws-secrets-manager-gs/aws-secrets-manager-gs.sh new file mode 100644 index 00000000..9e135beb --- /dev/null +++ b/tuts/100-aws-secrets-manager-gs/aws-secrets-manager-gs.sh @@ -0,0 +1,78 @@ +#!/bin/bash +# Tutorial: Store and retrieve secrets with AWS Secrets Manager +# Source: https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html + +WORK_DIR=$(mktemp -d) +LOG_FILE="$WORK_DIR/secretsmanager-$(date +%Y%m%d-%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +if [ -z "$REGION" ]; then + echo "ERROR: No AWS region configured. Set one with: export AWS_DEFAULT_REGION=us-east-1" + exit 1 +fi +export AWS_DEFAULT_REGION="$REGION" +echo "Region: $REGION" + +RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1) +SECRET_NAME="tutorial/db-creds-${RANDOM_ID}" + +handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; } +trap 'handle_error $LINENO' ERR + +cleanup() { + echo "" + echo "Cleaning up resources..." + aws secretsmanager delete-secret --secret-id "$SECRET_NAME" \ + --force-delete-without-recovery > /dev/null 2>&1 && \ + echo " Deleted secret $SECRET_NAME (immediate, no recovery)" + rm -rf "$WORK_DIR" + echo "Cleanup complete." 
+} + +# Step 1: Create a secret +echo "Step 1: Creating secret: $SECRET_NAME" +SECRET_ARN=$(aws secretsmanager create-secret --name "$SECRET_NAME" \ + --description "Tutorial database credentials" \ + --secret-string '{"username":"admin","password":"tutorial-pass-12345","engine":"mysql","host":"db.example.com","port":3306}' \ + --query 'ARN' --output text) +echo " Secret ARN: $SECRET_ARN" + +# Step 2: Retrieve the secret +echo "Step 2: Retrieving the secret" +aws secretsmanager get-secret-value --secret-id "$SECRET_NAME" \ + --query '{Name:Name,Value:SecretString}' --output table + +# Step 3: Update the secret +echo "Step 3: Updating the secret value" +aws secretsmanager put-secret-value --secret-id "$SECRET_NAME" \ + --secret-string '{"username":"admin","password":"new-secure-pass-67890","engine":"mysql","host":"db.example.com","port":3306}' > /dev/null +echo " Secret updated" + +# Step 4: Retrieve the updated secret +echo "Step 4: Retrieving updated secret" +aws secretsmanager get-secret-value --secret-id "$SECRET_NAME" \ + --query 'SecretString' --output text | python3 -m json.tool + +# Step 5: Describe the secret +echo "Step 5: Describing the secret" +aws secretsmanager describe-secret --secret-id "$SECRET_NAME" \ + --query '{Name:Name,Description:Description,Created:CreatedDate,LastChanged:LastChangedDate,Versions:VersionIdsToStages|length(@)}' --output table + +# Step 6: Tag the secret +echo "Step 6: Adding tags" +aws secretsmanager tag-resource --secret-id "$SECRET_NAME" \ + --tags Key=Environment,Value=tutorial Key=Application,Value=database +aws secretsmanager describe-secret --secret-id "$SECRET_NAME" \ + --query 'Tags[].{Key:Key,Value:Value}' --output table + +echo "" +echo "Tutorial complete." +echo "Do you want to clean up all resources? 
(y/n): " +read -r CHOICE +if [[ "$CHOICE" =~ ^[Yy]$ ]]; then + cleanup +else + echo "Manual cleanup:" + echo " aws secretsmanager delete-secret --secret-id $SECRET_NAME --force-delete-without-recovery" +fi diff --git a/tuts/101-aws-step-functions-gs/README.md b/tuts/101-aws-step-functions-gs/README.md new file mode 100644 index 00000000..9f5dfb97 --- /dev/null +++ b/tuts/101-aws-step-functions-gs/README.md @@ -0,0 +1,55 @@ +# Step Functions: Create and run a state machine + +## Source + +https://docs.aws.amazon.com/step-functions/latest/dg/getting-started-with-sfn.html + +## Use case + +- **ID**: stepfunctions/getting-started +- **Level**: intermediate +- **Core actions**: `states:CreateStateMachine`, `states:StartExecution` + +## Steps + +1. Create an IAM role for Step Functions +2. Create a state machine (Pass → Wait → Choice → Succeed) +3. Start an execution +4. Wait for execution to complete +5. Get execution results +6. Get execution history + +## Resources created + +| Resource | Type | +|----------|------| +| State machine | `AWS::StepFunctions::StateMachine` | +| IAM role | `AWS::IAM::Role` | + +## Duration + +~22 seconds + +## Cost + +Step Functions free tier includes 4,000 state transitions per month. This tutorial uses approximately 5 transitions per execution. All resources are removed during cleanup. 
+ +## Related docs + +- [Getting started with Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/getting-started-with-sfn.html) +- [Amazon States Language](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html) +- [Step Functions API reference](https://docs.aws.amazon.com/step-functions/latest/apireference/Welcome.html) + +--- + +## Appendix + +| Field | Value | +|-------|-------| +| Date | 2026-04-14 | +| Script lines | 133 | +| Exit code | 0 | +| Runtime | 22s | +| Steps | 6 | +| Issues | None | +| Version | v1 | diff --git a/tuts/101-aws-step-functions-gs/REVISION-HISTORY.md b/tuts/101-aws-step-functions-gs/REVISION-HISTORY.md new file mode 100644 index 00000000..1b80f2cc --- /dev/null +++ b/tuts/101-aws-step-functions-gs/REVISION-HISTORY.md @@ -0,0 +1,8 @@ +# Revision History: 101-aws-step-functions-gs + +## Shell (CLI script) + +### 2026-04-14 v1 published +- Type: functional +- Initial version + diff --git a/tuts/101-aws-step-functions-gs/aws-step-functions-gs.md b/tuts/101-aws-step-functions-gs/aws-step-functions-gs.md new file mode 100644 index 00000000..23cb5b17 --- /dev/null +++ b/tuts/101-aws-step-functions-gs/aws-step-functions-gs.md @@ -0,0 +1,157 @@ +# Create and run a Step Functions state machine + +This tutorial shows you how to create an IAM role for Step Functions, define a state machine with Pass, Wait, Choice, and Succeed states, run an execution, and inspect the results. + +## Prerequisites + +- AWS CLI configured with credentials and a default region +- Permissions for `states:CreateStateMachine`, `states:StartExecution`, `states:DescribeExecution`, `states:GetExecutionHistory`, `states:DeleteStateMachine`, `iam:CreateRole`, `iam:PutRolePolicy`, `iam:DeleteRolePolicy`, `iam:DeleteRole` + +## Step 1: Create an IAM role + +Create a role that allows the Step Functions service to assume it. 
+ +```bash +ROLE_ARN=$(aws iam create-role --role-name sfn-tut-role \ + --assume-role-policy-document '{ + "Version":"2012-10-17", + "Statement":[{ + "Effect":"Allow", + "Principal":{"Service":"states.amazonaws.com"}, + "Action":"sts:AssumeRole" + }] + }' --query 'Role.Arn' --output text) +echo "Role ARN: $ROLE_ARN" +``` + +Attach a policy for CloudWatch Logs so the state machine can log execution events: + +```bash +aws iam put-role-policy --role-name sfn-tut-role --policy-name sfn-logs \ + --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["logs:*"],"Resource":"*"}]}' +``` + +Wait for the role to propagate before using it: + +```bash +sleep 10 +``` + +## Step 2: Create a state machine + +Define a state machine with four states: Pass produces a greeting, Wait pauses for 2 seconds, Choice branches on the message value, and Succeed ends the workflow. + +```json +{ + "Comment": "A Hello World state machine", + "StartAt": "Greeting", + "States": { + "Greeting": { + "Type": "Pass", + "Result": {"message": "Hello from Step Functions!"}, + "Next": "WaitStep" + }, + "WaitStep": { + "Type": "Wait", + "Seconds": 2, + "Next": "ChoiceStep" + }, + "ChoiceStep": { + "Type": "Choice", + "Choices": [ + { + "Variable": "$.message", + "StringEquals": "Hello from Step Functions!", + "Next": "SuccessStep" + } + ], + "Default": "FailStep" + }, + "SuccessStep": { + "Type": "Succeed" + }, + "FailStep": { + "Type": "Fail", + "Error": "UnexpectedMessage", + "Cause": "Message did not match expected value" + } + } +} +``` + +Save this definition to a file and create the state machine: + +```bash +SM_ARN=$(aws stepfunctions create-state-machine \ + --name tut-state-machine \ + --definition file://definition.json \ + --role-arn "$ROLE_ARN" \ + --query 'stateMachineArn' --output text) +echo "State machine ARN: $SM_ARN" +``` + +The Pass state sets `$.message` to a greeting. The Choice state checks this value and routes to Succeed when it matches. 
If the message were different, the workflow would take the Default branch to Fail. + +## Step 3: Start an execution + +```bash +EXEC_ARN=$(aws stepfunctions start-execution \ + --state-machine-arn "$SM_ARN" \ + --input '{"key": "value"}' \ + --query 'executionArn' --output text) +echo "Execution ARN: $EXEC_ARN" +``` + +The input JSON is available to the first state, but this state machine uses a Pass state with a hardcoded `Result`, so the input is replaced. + +## Step 4: Wait for execution to complete + +Poll the execution status until it reaches a terminal state: + +```bash +for i in $(seq 1 15); do + STATUS=$(aws stepfunctions describe-execution --execution-arn "$EXEC_ARN" \ + --query 'status' --output text) + echo "Status: $STATUS" + [ "$STATUS" = "SUCCEEDED" ] || [ "$STATUS" = "FAILED" ] || [ "$STATUS" = "TIMED_OUT" ] && break + sleep 3 +done +``` + +The Wait state adds a 2-second pause, so the execution typically completes in about 3 seconds. + +## Step 5: Get execution results + +```bash +aws stepfunctions describe-execution --execution-arn "$EXEC_ARN" \ + --query '{Status:status,Input:input,Output:output,Started:startDate,Stopped:stopDate}' \ + --output table +``` + +A successful execution shows `SUCCEEDED` status with the greeting message as output. + +## Step 6: Get execution history + +```bash +aws stepfunctions get-execution-history --execution-arn "$EXEC_ARN" \ + --query 'events[?type!=`TaskStateEntered` && type!=`TaskStateExited`].{Id:id,Type:type}' \ + --output table | head -20 +``` + +The history shows each state transition: `ExecutionStarted`, `PassStateEntered`, `WaitStateEntered`, `ChoiceStateEntered`, `SucceedStateEntered`, and `ExecutionSucceeded`. 
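The routing the Choice state performed can be restated as a local sketch (plain shell, no AWS calls; this mirrors the `StringEquals` rule in the definition, not the service's actual evaluator):

```shell
# route prints the next state for a given message, mirroring ChoiceStep:
# StringEquals "Hello from Step Functions!" -> SuccessStep, anything else -> FailStep (Default).
route() {
  if [ "$1" = "Hello from Step Functions!" ]; then
    echo "SuccessStep"
  else
    echo "FailStep"
  fi
}

route "Hello from Step Functions!"   # SuccessStep
route "some other message"           # FailStep
```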
+ +## Cleanup + +Delete the state machine, then remove the IAM role and its inline policy: + +```bash +aws stepfunctions delete-state-machine --state-machine-arn "$SM_ARN" +aws iam delete-role-policy --role-name sfn-tut-role --policy-name sfn-logs +aws iam delete-role --role-name sfn-tut-role +``` + +## Related resources + +- [Getting started with Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/getting-started-with-sfn.html) +- [Amazon States Language](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html) +- [Step Functions API reference](https://docs.aws.amazon.com/step-functions/latest/apireference/Welcome.html) diff --git a/tuts/101-aws-step-functions-gs/aws-step-functions-gs.sh b/tuts/101-aws-step-functions-gs/aws-step-functions-gs.sh new file mode 100644 index 00000000..4653168b --- /dev/null +++ b/tuts/101-aws-step-functions-gs/aws-step-functions-gs.sh @@ -0,0 +1,133 @@ +#!/bin/bash +# Tutorial: Create and run a Step Functions state machine +# Source: https://docs.aws.amazon.com/step-functions/latest/dg/getting-started-with-sfn.html + +WORK_DIR=$(mktemp -d) +LOG_FILE="$WORK_DIR/stepfunctions-$(date +%Y%m%d-%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +if [ -z "$REGION" ]; then + echo "ERROR: No AWS region configured. Set one with: export AWS_DEFAULT_REGION=us-east-1" + exit 1 +fi +export AWS_DEFAULT_REGION="$REGION" +echo "Region: $REGION" + +RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1) +SM_NAME="tut-state-machine-${RANDOM_ID}" +ROLE_NAME="sfn-tut-role-${RANDOM_ID}" + +handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; } +trap 'handle_error $LINENO' ERR + +cleanup() { + echo "" + echo "Cleaning up resources..." 
+ [ -n "$SM_ARN" ] && aws stepfunctions delete-state-machine --state-machine-arn "$SM_ARN" 2>/dev/null && \ + echo " Deleted state machine $SM_NAME" + aws iam delete-role-policy --role-name "$ROLE_NAME" --policy-name sfn-logs 2>/dev/null + aws iam delete-role --role-name "$ROLE_NAME" 2>/dev/null && echo " Deleted role $ROLE_NAME" + rm -rf "$WORK_DIR" + echo "Cleanup complete." +} + +# Step 1: Create IAM role +echo "Step 1: Creating IAM role: $ROLE_NAME" +ROLE_ARN=$(aws iam create-role --role-name "$ROLE_NAME" \ + --assume-role-policy-document '{ + "Version":"2012-10-17", + "Statement":[{"Effect":"Allow","Principal":{"Service":"states.amazonaws.com"},"Action":"sts:AssumeRole"}] + }' --query 'Role.Arn' --output text) +aws iam put-role-policy --role-name "$ROLE_NAME" --policy-name sfn-logs \ + --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["logs:*"],"Resource":"*"}]}' +echo " Role ARN: $ROLE_ARN" +sleep 10 + +# Step 2: Create state machine +echo "Step 2: Creating state machine: $SM_NAME" +cat > "$WORK_DIR/definition.json" << 'EOF' +{ + "Comment": "A Hello World state machine", + "StartAt": "Greeting", + "States": { + "Greeting": { + "Type": "Pass", + "Result": {"message": "Hello from Step Functions!"}, + "Next": "WaitStep" + }, + "WaitStep": { + "Type": "Wait", + "Seconds": 2, + "Next": "ChoiceStep" + }, + "ChoiceStep": { + "Type": "Choice", + "Choices": [ + { + "Variable": "$.message", + "StringEquals": "Hello from Step Functions!", + "Next": "SuccessStep" + } + ], + "Default": "FailStep" + }, + "SuccessStep": { + "Type": "Succeed" + }, + "FailStep": { + "Type": "Fail", + "Error": "UnexpectedMessage", + "Cause": "Message did not match expected value" + } + } +} +EOF + +SM_ARN=$(aws stepfunctions create-state-machine \ + --name "$SM_NAME" \ + --definition "file://$WORK_DIR/definition.json" \ + --role-arn "$ROLE_ARN" \ + --query 'stateMachineArn' --output text) +echo " State machine ARN: $SM_ARN" + +# Step 3: Start an execution 
+echo "Step 3: Starting execution" +EXEC_ARN=$(aws stepfunctions start-execution \ + --state-machine-arn "$SM_ARN" \ + --input '{"key": "value"}' \ + --query 'executionArn' --output text) +echo " Execution ARN: $EXEC_ARN" + +# Step 4: Wait for execution to complete +echo "Step 4: Waiting for execution to complete..." +for i in $(seq 1 15); do + STATUS=$(aws stepfunctions describe-execution --execution-arn "$EXEC_ARN" \ + --query 'status' --output text) + echo " Status: $STATUS" + [ "$STATUS" = "SUCCEEDED" ] || [ "$STATUS" = "FAILED" ] || [ "$STATUS" = "TIMED_OUT" ] && break + sleep 3 +done + +# Step 5: Get execution results +echo "Step 5: Execution results" +aws stepfunctions describe-execution --execution-arn "$EXEC_ARN" \ + --query '{Status:status,Input:input,Output:output,Started:startDate,Stopped:stopDate}' --output table + +# Step 6: Get execution history +echo "Step 6: Execution history (key events)" +aws stepfunctions get-execution-history --execution-arn "$EXEC_ARN" \ + --query 'events[?type!=`TaskStateEntered` && type!=`TaskStateExited`].{Id:id,Type:type}' --output table | head -20 + +echo "" +echo "Tutorial complete." +echo "Do you want to clean up all resources? 
(y/n): " +read -r CHOICE +if [[ "$CHOICE" =~ ^[Yy]$ ]]; then + cleanup +else + echo "Manual cleanup:" + echo " aws stepfunctions delete-state-machine --state-machine-arn $SM_ARN" + echo " aws iam delete-role-policy --role-name $ROLE_NAME --policy-name sfn-logs" + echo " aws iam delete-role --role-name $ROLE_NAME" +fi diff --git a/tuts/112-amazon-cognito-gs/README.md b/tuts/112-amazon-cognito-gs/README.md new file mode 100644 index 00000000..41ec7ffd --- /dev/null +++ b/tuts/112-amazon-cognito-gs/README.md @@ -0,0 +1,56 @@ +# Cognito: Create a user pool and manage users + +## Source + +https://docs.aws.amazon.com/cognito/latest/developerguide/getting-started-user-pools.html + +## Use case + +- **ID**: cognito/getting-started +- **Level**: beginner +- **Core actions**: `cognito-idp:CreateUserPool`, `cognito-idp:AdminCreateUser` + +## Steps + +1. Create a user pool with email sign-in and password policy +2. Create an app client for authentication +3. Create a user with admin API +4. Set a permanent password +5. List users in the pool +6. Describe the user pool + +## Resources created + +| Resource | Type | +|----------|------| +| `tut-pool-` | Cognito user pool | +| `tutorial-app` | User pool app client | + +## Cost + +Free tier covers 50,000 monthly active users (MAUs). This tutorial creates no MAU charges because the user never authenticates through a hosted UI or token endpoint. 
+ +## Duration + +~9 seconds + +## Related docs + +- [Getting started with user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/getting-started-user-pools.html) +- [Creating a user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-as-user-directory.html) +- [Managing users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-manage-user-accounts.html) +- [Amazon Cognito pricing](https://aws.amazon.com/cognito/pricing/) + +--- + +## Appendix + +| Field | Value | +|-------|-------| +| Date | 2026-04-14 | +| Script lines | 85 | +| Exit code | 0 | +| Runtime | 9s | +| Steps | 6 | +| Issues | None | +| Version | v1 | diff --git a/tuts/112-amazon-cognito-gs/REVISION-HISTORY.md b/tuts/112-amazon-cognito-gs/REVISION-HISTORY.md new file mode 100644 index 00000000..75752dce --- /dev/null +++ b/tuts/112-amazon-cognito-gs/REVISION-HISTORY.md @@ -0,0 +1,8 @@ +# Revision History: 112-amazon-cognito-gs + +## Shell (CLI script) + +### 2026-04-14 v1 published +- Type: functional +- Initial version + diff --git a/tuts/112-amazon-cognito-gs/amazon-cognito-gs.md b/tuts/112-amazon-cognito-gs/amazon-cognito-gs.md new file mode 100644 index 00000000..74d2ed6c --- /dev/null +++ b/tuts/112-amazon-cognito-gs/amazon-cognito-gs.md @@ -0,0 +1,112 @@ +# Create a user pool and manage users with Amazon Cognito + +## Overview + +In this tutorial, you use the AWS CLI to create an Amazon Cognito user pool with email-based sign-in, add an app client, create a user, set a permanent password, and inspect the pool. You then delete the user pool during cleanup. + +## Prerequisites + +- AWS CLI installed and configured with appropriate permissions. +- An IAM principal with permissions for `cognito-idp:CreateUserPool`, `cognito-idp:CreateUserPoolClient`, `cognito-idp:AdminCreateUser`, `cognito-idp:AdminSetUserPassword`, `cognito-idp:ListUsers`, `cognito-idp:DescribeUserPool`, and `cognito-idp:DeleteUserPool`. 
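The permissions above could be granted with an identity-based policy along these lines (a sketch; `Resource` is left broad for brevity and should be scoped to specific user pool ARNs in practice):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cognito-idp:CreateUserPool",
        "cognito-idp:CreateUserPoolClient",
        "cognito-idp:AdminCreateUser",
        "cognito-idp:AdminSetUserPassword",
        "cognito-idp:ListUsers",
        "cognito-idp:DescribeUserPool",
        "cognito-idp:DeleteUserPool"
      ],
      "Resource": "*"
    }
  ]
}
```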
+
+## Step 1: Create a user pool
+
+Create a user pool that accepts email addresses as usernames and auto-verifies email.
+
+```bash
+RANDOM_ID=$(openssl rand -hex 4)
+POOL_NAME="tut-pool-${RANDOM_ID}"
+
+POOL_ID=$(aws cognito-idp create-user-pool --pool-name "$POOL_NAME" \
+  --auto-verified-attributes email \
+  --username-attributes email \
+  --policies '{"PasswordPolicy":{"MinimumLength":8,"RequireUppercase":true,"RequireLowercase":true,"RequireNumbers":true,"RequireSymbols":false}}' \
+  --query 'UserPool.Id' --output text)
+echo "Pool ID: $POOL_ID"
+```
+
+The `--username-attributes email` setting lets users sign in with their email address instead of a separate username. `--auto-verified-attributes email` tells Cognito to verify email addresses automatically during self-service sign-up; in this tutorial, the admin API marks the email verified directly by setting `email_verified` in Step 3.
+
+## Step 2: Create an app client
+
+Create an app client that allows username/password authentication.
+
+```bash
+CLIENT_ID=$(aws cognito-idp create-user-pool-client \
+  --user-pool-id "$POOL_ID" \
+  --client-name "tutorial-app" \
+  --explicit-auth-flows ALLOW_USER_PASSWORD_AUTH ALLOW_REFRESH_TOKEN_AUTH \
+  --query 'UserPoolClient.ClientId' --output text)
+echo "Client ID: $CLIENT_ID"
+```
+
+App clients define how applications authenticate against the user pool. `ALLOW_USER_PASSWORD_AUTH` enables direct username/password sign-in, which is useful for server-side applications.
+
+## Step 3: Create a user
+
+Use the admin API to create a user directly, suppressing the welcome email.
+
+```bash
+aws cognito-idp admin-create-user --user-pool-id "$POOL_ID" \
+  --username "tutorial@example.com" \
+  --user-attributes Name=email,Value=tutorial@example.com Name=email_verified,Value=true \
+  --temporary-password "TutPass1!" \
+  --message-action SUPPRESS \
+  --query 'User.{Username:Username,Status:UserStatus,Created:UserCreateDate}' --output table
+```
+
+`--message-action SUPPRESS` prevents Cognito from sending an invitation email.
The user is created in `FORCE_CHANGE_PASSWORD` status, meaning they must change their password on first sign-in. + +## Step 4: Set a permanent password + +Set a permanent password so the user moves to `CONFIRMED` status without going through the change-password flow. + +```bash +aws cognito-idp admin-set-user-password --user-pool-id "$POOL_ID" \ + --username "tutorial@example.com" \ + --password "Tutorial1Pass!" --permanent +``` + +In production, you would let users change their own password through the authentication flow. The admin API is useful for migrations and testing. + +## Step 5: List users + +List all users in the pool to confirm the user status. + +```bash +aws cognito-idp list-users --user-pool-id "$POOL_ID" \ + --query 'Users[].{Username:Username,Status:UserStatus,Enabled:Enabled}' --output table +``` + +## Step 6: Describe the user pool + +View the pool configuration and user count. + +```bash +aws cognito-idp describe-user-pool --user-pool-id "$POOL_ID" \ + --query 'UserPool.{Name:Name,Id:Id,Status:Status,Users:EstimatedNumberOfUsers,MFA:MfaConfiguration}' \ + --output table +``` + +## Cleanup + +Delete the user pool. This removes the pool, all users, and all app clients. + +```bash +aws cognito-idp delete-user-pool --user-pool-id "$POOL_ID" +``` + +The Cognito free tier covers 50,000 MAUs. Since this tutorial only creates a user without authenticating through a token endpoint, there are no charges. 
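The pool's password policy from Step 1 can also be pre-checked locally before calling `admin-set-user-password`. A minimal shell sketch of the same rules (minimum length 8, uppercase, lowercase, and a digit; symbols optional):

```shell
# check_password validates a candidate against the Step 1 policy:
# MinimumLength 8, RequireUppercase, RequireLowercase, RequireNumbers.
check_password() {
  pw=$1
  [ "${#pw}" -ge 8 ] || { echo "too short"; return 1; }
  case $pw in *[A-Z]*) ;; *) echo "needs uppercase"; return 1 ;; esac
  case $pw in *[a-z]*) ;; *) echo "needs lowercase"; return 1 ;; esac
  case $pw in *[0-9]*) ;; *) echo "needs number"; return 1 ;; esac
  echo "ok"
}

check_password "short" || true    # too short (non-zero return)
check_password "Tutorial1Pass!"   # ok
```

This catches policy violations client-side; the service still enforces the policy and rejects non-conforming passwords on its own.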
+ +The script automates all steps including cleanup: + +```bash +bash amazon-cognito-gs.sh +``` + +## Related resources + +- [Getting started with user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/getting-started-user-pools.html) +- [Creating a user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-as-user-directory.html) +- [Managing users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-manage-user-accounts.html) +- [Amazon Cognito pricing](https://aws.amazon.com/cognito/pricing/) diff --git a/tuts/112-amazon-cognito-gs/amazon-cognito-gs.sh b/tuts/112-amazon-cognito-gs/amazon-cognito-gs.sh new file mode 100644 index 00000000..fdac920e --- /dev/null +++ b/tuts/112-amazon-cognito-gs/amazon-cognito-gs.sh @@ -0,0 +1,85 @@ +#!/bin/bash +# Tutorial: Create a Cognito user pool and manage users +# Source: https://docs.aws.amazon.com/cognito/latest/developerguide/getting-started-user-pools.html + +WORK_DIR=$(mktemp -d) +LOG_FILE="$WORK_DIR/cognito-$(date +%Y%m%d-%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +if [ -z "$REGION" ]; then + echo "ERROR: No AWS region configured. Set one with: export AWS_DEFAULT_REGION=us-east-1" + exit 1 +fi +export AWS_DEFAULT_REGION="$REGION" +echo "Region: $REGION" + +RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1) +POOL_NAME="tut-pool-${RANDOM_ID}" + +handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; } +trap 'handle_error $LINENO' ERR + +cleanup() { + echo "" + echo "Cleaning up resources..." + [ -n "$POOL_ID" ] && aws cognito-idp delete-user-pool --user-pool-id "$POOL_ID" 2>/dev/null && \ + echo " Deleted user pool $POOL_ID" + rm -rf "$WORK_DIR" + echo "Cleanup complete." 
+} + +# Step 1: Create a user pool +echo "Step 1: Creating user pool: $POOL_NAME" +POOL_ID=$(aws cognito-idp create-user-pool --pool-name "$POOL_NAME" \ + --auto-verified-attributes email \ + --username-attributes email \ + --policies '{"PasswordPolicy":{"MinimumLength":8,"RequireUppercase":true,"RequireLowercase":true,"RequireNumbers":true,"RequireSymbols":false}}' \ + --query 'UserPool.Id' --output text) +echo " Pool ID: $POOL_ID" + +# Step 2: Create an app client +echo "Step 2: Creating app client" +CLIENT_ID=$(aws cognito-idp create-user-pool-client \ + --user-pool-id "$POOL_ID" \ + --client-name "tutorial-app" \ + --explicit-auth-flows ALLOW_USER_PASSWORD_AUTH ALLOW_REFRESH_TOKEN_AUTH \ + --query 'UserPoolClient.ClientId' --output text) +echo " Client ID: $CLIENT_ID" + +# Step 3: Create a user (admin) +echo "Step 3: Creating a user" +aws cognito-idp admin-create-user --user-pool-id "$POOL_ID" \ + --username "tutorial@example.com" \ + --user-attributes Name=email,Value=tutorial@example.com Name=email_verified,Value=true \ + --temporary-password "TutPass1!" \ + --message-action SUPPRESS \ + --query 'User.{Username:Username,Status:UserStatus,Created:UserCreateDate}' --output table + +# Step 4: Set permanent password +echo "Step 4: Setting permanent password" +aws cognito-idp admin-set-user-password --user-pool-id "$POOL_ID" \ + --username "tutorial@example.com" \ + --password "Tutorial1Pass!" --permanent > /dev/null +echo " Password set" + +# Step 5: List users +echo "Step 5: Listing users" +aws cognito-idp list-users --user-pool-id "$POOL_ID" \ + --query 'Users[].{Username:Username,Status:UserStatus,Enabled:Enabled}' --output table + +# Step 6: Describe the user pool +echo "Step 6: User pool details" +aws cognito-idp describe-user-pool --user-pool-id "$POOL_ID" \ + --query 'UserPool.{Name:Name,Id:Id,Status:Status,Users:EstimatedNumberOfUsers,MFA:MfaConfiguration}' --output table + +echo "" +echo "Tutorial complete." +echo "Do you want to clean up all resources? 
(y/n): " +read -r CHOICE +if [[ "$CHOICE" =~ ^[Yy]$ ]]; then + cleanup +else + echo "Manual cleanup:" + echo " aws cognito-idp delete-user-pool --user-pool-id $POOL_ID" +fi