diff --git a/tuts/107-amazon-efs-gs/README.md b/tuts/107-amazon-efs-gs/README.md
new file mode 100644
index 0000000..5e220e0
--- /dev/null
+++ b/tuts/107-amazon-efs-gs/README.md
@@ -0,0 +1,43 @@
+# EFS: Create a file system
+
+## Source
+
+https://docs.aws.amazon.com/efs/latest/ug/getting-started.html
+
+## Use case
+
+- **ID**: efs/getting-started
+- **Level**: intermediate
+- **Core actions**: `elasticfilesystem:CreateFileSystem`, `elasticfilesystem:CreateMountTarget`, `elasticfilesystem:PutLifecycleConfiguration`
+
+## Steps
+
+1. Create an encrypted file system
+2. Wait for the file system to be available
+3. Describe the file system
+4. Create a mount target in the default VPC
+5. Wait for the mount target
+6. Describe mount targets
+7. Set a lifecycle policy
+
+## Resources created
+
+| Resource | Type |
+|----------|------|
+| `tutorial-efs-<random-id>` | EFS file system |
+| Mount target in default VPC subnet | Mount target |
+
+## Cost
+
+Per-GB pricing: roughly $0.30/GB-month (Standard) and $0.025/GB-month (Infrequent Access) in us-east-1; check current pricing for your Region. An empty file system has no storage cost. Clean up promptly to avoid charges.
+
+## Duration
+
+~114 seconds (mount target creation accounts for most of the wait)
+
+## Related docs
+
+- [Getting started with Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/getting-started.html)
+- [Creating file systems](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html)
+- [Creating mount targets](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html)
+- [EFS lifecycle management](https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html)
diff --git a/tuts/107-amazon-efs-gs/REVISION-HISTORY.md b/tuts/107-amazon-efs-gs/REVISION-HISTORY.md
new file mode 100644
index 0000000..38a2cf2
--- /dev/null
+++ b/tuts/107-amazon-efs-gs/REVISION-HISTORY.md
@@ -0,0 +1,8 @@
+# Revision History: 107-amazon-efs-gs
+
+## Shell (CLI script)
+
+### 2026-04-14 v1 published
+- Type: functional
+- Initial version
+
diff --git a/tuts/107-amazon-efs-gs/amazon-efs-gs.md b/tuts/107-amazon-efs-gs/amazon-efs-gs.md
new file mode 100644
index 0000000..e03ead2
--- /dev/null
+++ b/tuts/107-amazon-efs-gs/amazon-efs-gs.md
@@ -0,0 +1,129 @@
+# Create a file system with Amazon EFS
+
+This tutorial shows you how to create an encrypted Amazon EFS file system, create a mount target in your default VPC, configure a lifecycle policy, and clean up all resources.
+
+## Prerequisites
+
+- AWS CLI configured with credentials and a default region
+- A default VPC with at least one subnet in the configured region
+- Permissions for `elasticfilesystem:CreateFileSystem`, `elasticfilesystem:DescribeFileSystems`, `elasticfilesystem:CreateMountTarget`, `elasticfilesystem:DescribeMountTargets`, `elasticfilesystem:PutLifecycleConfiguration`, `elasticfilesystem:DescribeLifecycleConfiguration`, `elasticfilesystem:DeleteMountTarget`, `elasticfilesystem:DeleteFileSystem`, `ec2:DescribeVpcs`, `ec2:DescribeSubnets`
+
+## Step 1: Create an encrypted file system
+
+```bash
+RANDOM_ID=$(openssl rand -hex 4)
+FS_TOKEN="tut-efs-${RANDOM_ID}"
+
+FS_ID=$(aws efs create-file-system --creation-token "$FS_TOKEN" \
+  --performance-mode generalPurpose \
+  --throughput-mode bursting \
+  --encrypted \
+  --tags Key=Name,Value="tutorial-efs-${RANDOM_ID}" \
+  --query 'FileSystemId' --output text)
+echo "File system ID: $FS_ID"
+```
+
+The creation token guards against duplicates: if a file system with the same token already exists, a second `create-file-system` call fails with a `FileSystemAlreadyExists` error that carries the existing file system's ID, rather than creating another file system.
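+
+If you want this step to be safely re-runnable, you can check for an existing file system with the token before creating one. A sketch; `describe-file-systems` accepts the creation token as a filter:
+
+```bash
+# Reuse the file system if one with this creation token already exists,
+# instead of tripping the FileSystemAlreadyExists error path.
+EXISTING=$(aws efs describe-file-systems --creation-token "$FS_TOKEN" \
+  --query 'FileSystems[0].FileSystemId' --output text 2>/dev/null)
+if [ -n "$EXISTING" ] && [ "$EXISTING" != "None" ]; then
+  FS_ID="$EXISTING"
+  echo "Reusing existing file system: $FS_ID"
+fi
+```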
+ +## Step 2: Wait for the file system to be available + +```bash +for i in $(seq 1 15); do + STATE=$(aws efs describe-file-systems --file-system-id "$FS_ID" \ + --query 'FileSystems[0].LifeCycleState' --output text) + echo "State: $STATE" + [ "$STATE" = "available" ] && break + sleep 5 +done +``` + +## Step 3: Describe the file system + +```bash +aws efs describe-file-systems --file-system-id "$FS_ID" \ + --query 'FileSystems[0].{Id:FileSystemId,State:LifeCycleState,Encrypted:Encrypted,Performance:PerformanceMode,Size:SizeInBytes.Value}' \ + --output table +``` + +## Step 4: Create a mount target + +A mount target provides a network endpoint for mounting the file system. Create one in a subnet of your default VPC. + +```bash +VPC_ID=$(aws ec2 describe-vpcs --filters "Name=isDefault,Values=true" \ + --query 'Vpcs[0].VpcId' --output text) +SUBNET_ID=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC_ID" \ + --query 'Subnets[0].SubnetId' --output text) + +MT_ID=$(aws efs create-mount-target --file-system-id "$FS_ID" \ + --subnet-id "$SUBNET_ID" \ + --query 'MountTargetId' --output text) +echo "Mount target: $MT_ID" +``` + +You need one mount target per Availability Zone. This tutorial creates one for demonstration. + +## Step 5: Wait for the mount target + +Mount target creation typically takes 60–90 seconds. + +```bash +for i in $(seq 1 15); do + MT_STATE=$(aws efs describe-mount-targets --mount-target-id "$MT_ID" \ + --query 'MountTargets[0].LifeCycleState' --output text) + echo "State: $MT_STATE" + [ "$MT_STATE" = "available" ] && break + sleep 10 +done +``` + +## Step 6: Describe mount targets + +```bash +aws efs describe-mount-targets --file-system-id "$FS_ID" \ + --query 'MountTargets[].{Id:MountTargetId,Subnet:SubnetId,State:LifeCycleState,IP:IpAddress}' \ + --output table +``` + +## Step 7: Set a lifecycle policy + +Move files not accessed for 30 days to the Infrequent Access (IA) storage class to reduce costs. + +```bash +aws efs put-lifecycle-configuration --file-system-id "$FS_ID" \ + --lifecycle-policies '[{"TransitionToIA":"AFTER_30_DAYS"}]' + +aws efs describe-lifecycle-configuration --file-system-id "$FS_ID" \ + --query 'LifecyclePolicies' --output table +``` + +## Cleanup + +Delete mount targets first, then the file system. You must wait for mount targets to finish deleting before the file system can be removed. + +```bash +aws efs delete-mount-target --mount-target-id "$MT_ID" + +# Wait for mount target deletion +for i in $(seq 1 12); do + MT_COUNT=$(aws efs describe-mount-targets --file-system-id "$FS_ID" \ + --query 'MountTargets | length(@)' --output text 2>/dev/null || echo "0") + [ "$MT_COUNT" = "0" ] && break + sleep 10 +done + +aws efs delete-file-system --file-system-id "$FS_ID" +``` + +EFS charges per GB stored ($0.30/GB-month for Standard, $0.025/GB-month for IA). An empty file system has no storage cost. 
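+
+If you want to exercise the file system before deleting it, mount it from an EC2 instance in the same VPC. A sketch to run on the instance (assumes the mount target's security group allows inbound TCP 2049 from the instance; `/mnt/efs` and the placeholder IDs are illustrative, so substitute the file system ID and Region printed in the steps above):
+
+```bash
+# Run on the EC2 instance, not in the tutorial shell. Replace fs-xxxx and
+# us-east-1 with the file system ID and Region from the earlier steps.
+sudo mkdir -p /mnt/efs
+sudo mount -t nfs4 -o nfsvers=4.1 fs-xxxx.efs.us-east-1.amazonaws.com:/ /mnt/efs
+df -h /mnt/efs
+```
+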
The script automates all steps including cleanup: + +```bash +bash amazon-efs-gs.sh +``` + +## Related resources + +- [Getting started with Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/getting-started.html) +- [Creating file systems](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html) +- [Creating mount targets](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html) +- [EFS lifecycle management](https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html) diff --git a/tuts/107-amazon-efs-gs/amazon-efs-gs.sh b/tuts/107-amazon-efs-gs/amazon-efs-gs.sh new file mode 100644 index 0000000..f922ead --- /dev/null +++ b/tuts/107-amazon-efs-gs/amazon-efs-gs.sh @@ -0,0 +1,118 @@ +#!/bin/bash +# Tutorial: Create an Amazon EFS file system +# Source: https://docs.aws.amazon.com/efs/latest/ug/getting-started.html + +WORK_DIR=$(mktemp -d) +LOG_FILE="$WORK_DIR/efs-$(date +%Y%m%d-%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +if [ -z "$REGION" ]; then + echo "ERROR: No AWS region configured. Set one with: export AWS_DEFAULT_REGION=us-east-1" + exit 1 +fi +export AWS_DEFAULT_REGION="$REGION" +echo "Region: $REGION" + +RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1) +FS_TOKEN="tut-efs-${RANDOM_ID}" + +handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; } +trap 'handle_error $LINENO' ERR + +cleanup() { + echo "" + echo "Cleaning up resources..." + if [ -n "$FS_ID" ]; then + # Delete mount targets first + for MT_ID in $(aws efs describe-mount-targets --file-system-id "$FS_ID" \ + --query 'MountTargets[].MountTargetId' --output text 2>/dev/null); do + aws efs delete-mount-target --mount-target-id "$MT_ID" 2>/dev/null + echo " Deleted mount target $MT_ID" + done + # Wait for mount targets to be deleted + for i in $(seq 1 12); do + MT_COUNT=$(aws efs describe-mount-targets --file-system-id "$FS_ID" \ + --query 'MountTargets | length(@)' --output text 2>/dev/null || echo "0") + [ "$MT_COUNT" = "0" ] && break + sleep 10 + done + aws efs delete-file-system --file-system-id "$FS_ID" 2>/dev/null && \ + echo " Deleted file system $FS_ID" + fi + rm -rf "$WORK_DIR" + echo "Cleanup complete." +} + +# Step 1: Create a file system +echo "Step 1: Creating EFS file system" +FS_ID=$(aws efs create-file-system --creation-token "$FS_TOKEN" \ + --performance-mode generalPurpose \ + --throughput-mode bursting \ + --encrypted \ + --tags Key=Name,Value="tutorial-efs-${RANDOM_ID}" \ + --query 'FileSystemId' --output text) +echo " File system ID: $FS_ID" + +# Step 2: Wait for file system to be available +echo "Step 2: Waiting for file system to be available..." 
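+# Note: the AWS CLI has no waiter for EFS (no `aws efs wait` subcommand at
+# the time of writing), so poll describe-file-systems until LifeCycleState
+# reports "available".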
+for i in $(seq 1 15); do
+  STATE=$(aws efs describe-file-systems --file-system-id "$FS_ID" \
+    --query 'FileSystems[0].LifeCycleState' --output text)
+  echo "  State: $STATE"
+  [ "$STATE" = "available" ] && break
+  sleep 5
+done
+
+# Step 3: Describe the file system
+echo "Step 3: File system details"
+aws efs describe-file-systems --file-system-id "$FS_ID" \
+  --query 'FileSystems[0].{Id:FileSystemId,State:LifeCycleState,Encrypted:Encrypted,Performance:PerformanceMode,Size:SizeInBytes.Value}' --output table
+
+# Step 4: Create a mount target
+echo "Step 4: Creating mount target"
+VPC_ID=$(aws ec2 describe-vpcs --filters "Name=isDefault,Values=true" --query 'Vpcs[0].VpcId' --output text)
+SUBNET_ID=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC_ID" --query 'Subnets[0].SubnetId' --output text)
+echo "  VPC: $VPC_ID, Subnet: $SUBNET_ID"
+
+MT_ID=$(aws efs create-mount-target --file-system-id "$FS_ID" --subnet-id "$SUBNET_ID" \
+  --query 'MountTargetId' --output text)
+echo "  Mount target: $MT_ID"
+
+# Step 5: Wait for mount target
+echo "Step 5: Waiting for mount target to be available..."
+for i in $(seq 1 15); do
+  MT_STATE=$(aws efs describe-mount-targets --mount-target-id "$MT_ID" \
+    --query 'MountTargets[0].LifeCycleState' --output text)
+  echo "  State: $MT_STATE"
+  [ "$MT_STATE" = "available" ] && break
+  sleep 10
+done
+
+# Step 6: Describe mount targets
+echo "Step 6: Mount target details"
+aws efs describe-mount-targets --file-system-id "$FS_ID" \
+  --query 'MountTargets[].{Id:MountTargetId,Subnet:SubnetId,State:LifeCycleState,IP:IpAddress}' --output table
+
+# Step 7: Set lifecycle policy
+echo "Step 7: Setting lifecycle policy (move to IA after 30 days)"
+aws efs put-lifecycle-configuration --file-system-id "$FS_ID" \
+  --lifecycle-policies "[{\"TransitionToIA\":\"AFTER_30_DAYS\"}]" > /dev/null
+aws efs describe-lifecycle-configuration --file-system-id "$FS_ID" \
+  --query 'LifecyclePolicies' --output table
+
+echo ""
+echo "Tutorial complete."
+echo "To mount: sudo mount -t nfs4 $FS_ID.efs.$REGION.amazonaws.com:/ /mnt/efs"
+echo ""
+echo "Do you want to clean up all resources? (y/n): "
+read -r CHOICE
+if [[ "$CHOICE" =~ ^[Yy]$ ]]; then
+  cleanup
+else
+  echo "Resources left running. EFS charges per GB stored."
+  echo "Manual cleanup:"
+  echo "  aws efs delete-mount-target --mount-target-id $MT_ID"
+  echo "  sleep 60"
+  echo "  aws efs delete-file-system --file-system-id $FS_ID"
+fi
diff --git a/tuts/127-amazon-glacier-gs/README.md b/tuts/127-amazon-glacier-gs/README.md
new file mode 100644
index 0000000..532471a
--- /dev/null
+++ b/tuts/127-amazon-glacier-gs/README.md
@@ -0,0 +1,38 @@
+# Amazon S3 Glacier: Getting started
+
+An AWS CLI tutorial that demonstrates basic Amazon S3 Glacier vault and archive operations.
+
+## Running
+
+```bash
+bash amazon-glacier-gs.sh
+```
+
+To auto-run with cleanup:
+
+```bash
+echo 'y' | bash amazon-glacier-gs.sh
+```
+
+## What it does
+
+1. Creating a vault
+2. Describing the vault
+3. Uploading an archive
+4. Listing vaults
+5. Initiating inventory retrieval
+
+## Resources created
+
+- Vault
+
+The script prompts you to clean up resources when it finishes.
+
+## Cost
+
+Glacier storage is billed per GB-month, so the single tiny archive this tutorial uploads costs a fraction of a cent per month. Clean up resources after use to avoid charges.
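+
+## Checking the inventory job later
+
+The inventory-retrieval job started in step 5 completes in roughly 3-5 hours. A sketch of checking on it afterwards, where `$VAULT_NAME` is the vault the script printed and `$JOB_ID` is the job ID (the script prints only its first 40 characters; recover the full ID with `aws glacier list-jobs --account-id - --vault-name "$VAULT_NAME"`):
+
+```bash
+# Poll the job; StatusCode becomes "Succeeded" when the inventory is ready.
+aws glacier describe-job --account-id - --vault-name "$VAULT_NAME" \
+  --job-id "$JOB_ID" --query '{Status:StatusCode,Completed:Completed}'
+
+# Once succeeded, download the inventory JSON.
+aws glacier get-job-output --account-id - --vault-name "$VAULT_NAME" \
+  --job-id "$JOB_ID" inventory.json
+```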
+
+## Related docs
+
+- [AWS CLI glacier reference](https://docs.aws.amazon.com/cli/latest/reference/glacier/index.html)
+
diff --git a/tuts/127-amazon-glacier-gs/REVISION-HISTORY.md b/tuts/127-amazon-glacier-gs/REVISION-HISTORY.md
new file mode 100644
index 0000000..a51e105
--- /dev/null
+++ b/tuts/127-amazon-glacier-gs/REVISION-HISTORY.md
@@ -0,0 +1,8 @@
+# Revision History: 127-amazon-glacier-gs
+
+## Shell (CLI script)
+
+### 2026-04-14 v1 published
+- Type: functional
+- Initial version
+
diff --git a/tuts/127-amazon-glacier-gs/amazon-glacier-gs.md b/tuts/127-amazon-glacier-gs/amazon-glacier-gs.md
new file mode 100644
index 0000000..94c7cac
--- /dev/null
+++ b/tuts/127-amazon-glacier-gs/amazon-glacier-gs.md
@@ -0,0 +1,31 @@
+# Amazon S3 Glacier: Getting started
+
+## Prerequisites
+
+1. AWS CLI installed and configured (`aws configure`)
+2. Appropriate IAM permissions for the AWS services used
+
+## Step 1: Creating a vault
+
+The script handles this step automatically. See `amazon-glacier-gs.sh` for the exact CLI commands.
+
+## Step 2: Describing the vault
+
+The script handles this step automatically. See `amazon-glacier-gs.sh` for the exact CLI commands.
+
+## Step 3: Uploading an archive
+
+The script handles this step automatically. See `amazon-glacier-gs.sh` for the exact CLI commands.
+
+## Step 4: Listing vaults
+
+The script handles this step automatically. See `amazon-glacier-gs.sh` for the exact CLI commands.
+
+## Step 5: Initiating inventory retrieval
+
+The script handles this step automatically. See `amazon-glacier-gs.sh` for the exact CLI commands.
+
+## Cleanup
+
+The script prompts you to clean up all created resources. If you need to clean up manually, check the script log for the resource names that were created.
+
diff --git a/tuts/127-amazon-glacier-gs/amazon-glacier-gs.sh b/tuts/127-amazon-glacier-gs/amazon-glacier-gs.sh
new file mode 100644
index 0000000..d4e335d
--- /dev/null
+++ b/tuts/127-amazon-glacier-gs/amazon-glacier-gs.sh
@@ -0,0 +1,34 @@
+#!/bin/bash
+WORK_DIR=$(mktemp -d)
+exec > >(tee -a "$WORK_DIR/glacier-$(date +%Y%m%d-%H%M%S).log") 2>&1
+REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}}
+[ -z "$REGION" ] && echo "ERROR: No region" && exit 1
+export AWS_DEFAULT_REGION="$REGION"
+echo "Region: $REGION"
+RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1)
+VAULT_NAME="tut-vault-${RANDOM_ID}"
+handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; }
+trap 'handle_error $LINENO' ERR
+cleanup() {
+  echo ""
+  echo "Cleaning up..."
+  aws glacier delete-vault --vault-name "$VAULT_NAME" --account-id - 2>/dev/null && echo " Deleted vault $VAULT_NAME"
+  rm -rf "$WORK_DIR"
+  echo "Done."
+}
+echo "Step 1: Creating vault: $VAULT_NAME"
+aws glacier create-vault --vault-name "$VAULT_NAME" --account-id -
+echo " Vault created"
+echo "Step 2: Describing vault"
+aws glacier describe-vault --vault-name "$VAULT_NAME" --account-id - --query '{Name:VaultName,ARN:VaultARN,Created:CreationDate}' --output table
+echo "Step 3: Uploading an archive"
+echo "Hello from Glacier tutorial" > "$WORK_DIR/archive.txt"
+ARCHIVE_ID=$(aws glacier upload-archive --vault-name "$VAULT_NAME" --account-id - --body "$WORK_DIR/archive.txt" --query 'archiveId' --output text)
+echo " Archive ID: ${ARCHIVE_ID:0:40}..."
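+# Note: Glacier has no API that lists archives directly; the only way to
+# enumerate a vault's contents is an inventory-retrieval job (Step 5 below).
+# Keep the archive ID: it is required to retrieve or delete the archive.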
+echo "Step 4: Listing vaults"
+aws glacier list-vaults --account-id - --query 'VaultList[?starts_with(VaultName, `tut-`)].{Name:VaultName,Archives:NumberOfArchives,Size:SizeInBytes}' --output table
+echo "Step 5: Initiating inventory retrieval"
+JOB_ID=$(aws glacier initiate-job --vault-name "$VAULT_NAME" --account-id - --job-parameters '{"Type":"inventory-retrieval"}' --query 'jobId' --output text)
+echo " Job ID: ${JOB_ID:0:40}..."
+echo " (Inventory retrieval takes 3-5 hours — not waiting)"
+echo ""
+echo "Tutorial complete."
+echo "Note: The vault still contains an archive and cannot be deleted until the archive is removed."
+echo "The vault inventory updates about once a day, so the vault itself needs manual cleanup later."
+echo "Do you want to attempt cleanup? (y/n): "
+read -r CHOICE
+if [[ "$CHOICE" =~ ^[Yy]$ ]]; then
+  echo "Deleting archive..."
+  aws glacier delete-archive --vault-name "$VAULT_NAME" --account-id - --archive-id "$ARCHIVE_ID" 2>/dev/null
+  echo " Archive deleted. Delete the vault once the inventory reflects the removal (up to ~24h):"
+  echo " aws glacier delete-vault --vault-name $VAULT_NAME --account-id -"
+  rm -rf "$WORK_DIR"
+else
+  echo "Manual: aws glacier delete-archive then delete-vault"
+fi
diff --git a/tuts/156-s3-event-notifications/README.md b/tuts/156-s3-event-notifications/README.md
new file mode 100644
index 0000000..305fe08
--- /dev/null
+++ b/tuts/156-s3-event-notifications/README.md
@@ -0,0 +1,41 @@
+# S3 Event Notifications
+
+An AWS CLI tutorial that demonstrates S3 event notifications delivered to an SQS queue.
+
+## Running
+
+```bash
+bash s3-events.sh
+```
+
+To auto-run with cleanup:
+
+```bash
+echo 'y' | bash s3-events.sh
+```
+
+## What it does
+
+1. Creating SQS queue for notifications
+2. Creating bucket with event notification
+3. Uploading a file to trigger notification
+4. Reading notification from SQS
+
+## Resources created
+
+- Bucket
+- Queue
+- Bucket Notification Configuration
+
+The script prompts you to clean up resources when it finishes.
+
+## Cost
+
+Free tier eligible for most operations. Clean up resources after use to avoid charges.
+
+## Related docs
+
+- [AWS CLI s3 reference](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html)
+- [AWS CLI s3api reference](https://docs.aws.amazon.com/cli/latest/reference/s3api/index.html)
+- [AWS CLI sqs reference](https://docs.aws.amazon.com/cli/latest/reference/sqs/index.html)
+
diff --git a/tuts/156-s3-event-notifications/REVISION-HISTORY.md b/tuts/156-s3-event-notifications/REVISION-HISTORY.md
new file mode 100644
index 0000000..4ee12b6
--- /dev/null
+++ b/tuts/156-s3-event-notifications/REVISION-HISTORY.md
@@ -0,0 +1,8 @@
+# Revision History: 156-s3-event-notifications
+
+## Shell (CLI script)
+
+### 2026-04-14 v1 published
+- Type: functional
+- Initial version
+
diff --git a/tuts/156-s3-event-notifications/s3-events.md b/tuts/156-s3-event-notifications/s3-events.md
new file mode 100644
index 0000000..6ec4c8f
--- /dev/null
+++ b/tuts/156-s3-event-notifications/s3-events.md
@@ -0,0 +1,27 @@
+# S3 Event Notifications
+
+## Prerequisites
+
+1. AWS CLI installed and configured (`aws configure`)
+2. Appropriate IAM permissions for the AWS services used
+
+## Step 1: Creating SQS queue for notifications
+
+The script handles this step automatically. See `s3-events.sh` for the exact CLI commands.
+
+## Step 2: Creating bucket with event notification
+
+The script handles this step automatically. See `s3-events.sh` for the exact CLI commands.
+
+## Step 3: Uploading a file to trigger notification
+
+The script handles this step automatically. See `s3-events.sh` for the exact CLI commands.
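+
+Note: when a notification configuration is first applied to a bucket, Amazon S3 also sends an `s3:TestEvent` message to the queue, so the first message you read may be the test event rather than the `ObjectCreated` record. A quick way to see both (a sketch, reusing the `$QUEUE_URL` the script prints):
+
+```bash
+# Read up to two messages; one may be the s3:TestEvent that was sent when
+# the notification configuration was applied.
+aws sqs receive-message --queue-url "$QUEUE_URL" --max-number-of-messages 2 \
+  --wait-time-seconds 10 --query 'Messages[].Body' --output text
+```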
+
+## Step 4: Reading notification from SQS
+
+The script handles this step automatically. See `s3-events.sh` for the exact CLI commands.
+
+## Cleanup
+
+The script prompts you to clean up all created resources. If you need to clean up manually, check the script log for the resource names that were created.
+
diff --git a/tuts/156-s3-event-notifications/s3-events.sh b/tuts/156-s3-event-notifications/s3-events.sh
new file mode 100644
index 0000000..cc94237
--- /dev/null
+++ b/tuts/156-s3-event-notifications/s3-events.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+WORK_DIR=$(mktemp -d)
+exec > >(tee -a "$WORK_DIR/s3-events.log") 2>&1
+REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}}
+[ -z "$REGION" ] && echo "ERROR: No region" && exit 1
+export AWS_DEFAULT_REGION="$REGION"
+ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
+echo "Region: $REGION"
+RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1)
+BUCKET="s3-events-tut-${RANDOM_ID}-${ACCOUNT}"
+QUEUE="s3-events-tut-${RANDOM_ID}"
+handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; }
+trap 'handle_error $LINENO' ERR
+cleanup() {
+  echo ""
+  echo "Cleaning up..."
+  aws s3 rm "s3://$BUCKET" --recursive --quiet 2>/dev/null
+  aws s3 rb "s3://$BUCKET" 2>/dev/null && echo " Deleted bucket"
+  [ -n "$QUEUE_URL" ] && aws sqs delete-queue --queue-url "$QUEUE_URL" 2>/dev/null && echo " Deleted queue"
+  rm -rf "$WORK_DIR"
+  echo "Done."
+}
+echo "Step 1: Creating SQS queue for notifications"
+QUEUE_URL=$(aws sqs create-queue --queue-name "$QUEUE" --query 'QueueUrl' --output text)
+QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" --attribute-names QueueArn --query 'Attributes.QueueArn' --output text)
+# Allow S3 to send messages to the queue.
+cat > "$WORK_DIR/queue-attrs.json" <<EOF
+{"Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"s3.amazonaws.com\"},\"Action\":\"sqs:SendMessage\",\"Resource\":\"$QUEUE_ARN\"}]}"}
+EOF
+aws sqs set-queue-attributes --queue-url "$QUEUE_URL" --attributes "file://$WORK_DIR/queue-attrs.json"
+echo " Queue: $QUEUE_URL"
+echo "Step 2: Creating bucket with event notification"
+if [ "$REGION" = "us-east-1" ]; then
+  aws s3api create-bucket --bucket "$BUCKET" > /dev/null
+else
+  aws s3api create-bucket --bucket "$BUCKET" --create-bucket-configuration LocationConstraint="$REGION" > /dev/null
+fi
+aws s3api put-bucket-notification-configuration --bucket "$BUCKET" \
+  --notification-configuration "{\"QueueConfigurations\":[{\"QueueArn\":\"$QUEUE_ARN\",\"Events\":[\"s3:ObjectCreated:*\"]}]}"
+echo " Notifications configured"
+echo "Step 3: Uploading a file to trigger notification"
+echo "test data" > "$WORK_DIR/test.txt"
+aws s3 cp "$WORK_DIR/test.txt" "s3://$BUCKET/test.txt" --quiet
+sleep 5
+echo "Step 4: Reading notification from SQS"
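+# Note: the first message may be the s3:TestEvent that S3 sent when the
+# notification configuration was applied; rerun receive-message if the
+# ObjectCreated record has not arrived yet.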
+aws sqs receive-message --queue-url "$QUEUE_URL" --max-number-of-messages 1 --wait-time-seconds 10 --query 'Messages[0].Body' --output text 2>/dev/null | python3 -c "import sys,json;d=json.loads(sys.stdin.read());r=d.get('Records',[{}])[0];print(f\" Event: {r.get('eventName','?')}\\n Bucket: {r.get('s3',{}).get('bucket',{}).get('name','?')}\\n Key: {r.get('s3',{}).get('object',{}).get('key','?')}\")" 2>/dev/null || echo " No notification yet"
+echo ""
+echo "Tutorial complete."
+echo "Do you want to clean up? (y/n): "
+read -r CHOICE
+[[ "$CHOICE" =~ ^[Yy]$ ]] && cleanup
diff --git a/tuts/167-s3-presigned-urls/README.md b/tuts/167-s3-presigned-urls/README.md
new file mode 100644
index 0000000..b2e3a62
--- /dev/null
+++ b/tuts/167-s3-presigned-urls/README.md
@@ -0,0 +1,40 @@
+# S3 Presigned URLs
+
+An AWS CLI tutorial that demonstrates S3 presigned URLs.
+
+## Running
+
+```bash
+bash s3-presigned.sh
+```
+
+To auto-run with cleanup:
+
+```bash
+echo 'y' | bash s3-presigned.sh
+```
+
+## What it does
+
+1. Creating bucket
+2. Uploading a file
+3. Generating presigned download URL (expires in 5 min)
+4. Testing presigned download
+5. Generating a presigned URL for a second key (note: `aws s3 presign` produces GET URLs only)
+6. Listing objects
+
+## Resources created
+
+- Bucket
+
+The script prompts you to clean up resources when it finishes.
+
+## Cost
+
+Free tier eligible for most operations. Clean up resources after use to avoid charges.
+
+## Related docs
+
+- [AWS CLI s3 reference](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html)
+- [AWS CLI s3api reference](https://docs.aws.amazon.com/cli/latest/reference/s3api/index.html)
+
diff --git a/tuts/167-s3-presigned-urls/REVISION-HISTORY.md b/tuts/167-s3-presigned-urls/REVISION-HISTORY.md
new file mode 100644
index 0000000..e445391
--- /dev/null
+++ b/tuts/167-s3-presigned-urls/REVISION-HISTORY.md
@@ -0,0 +1,8 @@
+# Revision History: 167-s3-presigned-urls
+
+## Shell (CLI script)
+
+### 2026-04-14 v1 published
+- Type: functional
+- Initial version
+
diff --git a/tuts/167-s3-presigned-urls/s3-presigned.md b/tuts/167-s3-presigned-urls/s3-presigned.md
new file mode 100644
index 0000000..4b4e38e
--- /dev/null
+++ b/tuts/167-s3-presigned-urls/s3-presigned.md
@@ -0,0 +1,35 @@
+# S3 Presigned URLs
+
+## Prerequisites
+
+1. AWS CLI installed and configured (`aws configure`)
+2. Appropriate IAM permissions for the AWS services used
+
+## Step 1: Creating bucket
+
+The script handles this step automatically. See `s3-presigned.sh` for the exact CLI commands.
+
+## Step 2: Uploading a file
+
+The script handles this step automatically. See `s3-presigned.sh` for the exact CLI commands.
+
+## Step 3: Generating presigned download URL (expires in 5 min)
+
+The script handles this step automatically. See `s3-presigned.sh` for the exact CLI commands.
+
+## Step 4: Testing presigned download
+
+The script handles this step automatically. See `s3-presigned.sh` for the exact CLI commands.
+
+## Step 5: Generating a presigned URL for a second key
+
+The script handles this step automatically. Note that `aws s3 presign` produces download (GET) URLs only. See `s3-presigned.sh` for the exact CLI commands.
+
+## Step 6: Listing objects
+
+The script handles this step automatically. See `s3-presigned.sh` for the exact CLI commands.
+
+## Cleanup
+
+The script prompts you to clean up all created resources. If you need to clean up manually, check the script log for the resource names that were created.
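+
+## Note: upload (PUT) URLs
+
+`aws s3 presign` only generates download (GET) URLs; a presigned upload URL has to come from an SDK. A sketch using python3 with boto3 (assumes boto3 is installed and credentials are configured; the key `uploaded.txt` mirrors the one the script presigns):
+
+```bash
+# Generate a presigned PUT URL with boto3, then upload through it with curl.
+PUT_URL=$(python3 - "$BUCKET" <<'EOF'
+import sys
+import boto3
+s3 = boto3.client("s3")
+print(s3.generate_presigned_url(
+    "put_object",
+    Params={"Bucket": sys.argv[1], "Key": "uploaded.txt"},
+    ExpiresIn=300,
+))
+EOF
+)
+curl -s -X PUT --data "hello via presigned PUT" "$PUT_URL"
+```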
+
diff --git a/tuts/167-s3-presigned-urls/s3-presigned.sh b/tuts/167-s3-presigned-urls/s3-presigned.sh
new file mode 100644
index 0000000..052b96b
--- /dev/null
+++ b/tuts/167-s3-presigned-urls/s3-presigned.sh
@@ -0,0 +1,24 @@
+#!/bin/bash
+WORK_DIR=$(mktemp -d)
+exec > >(tee -a "$WORK_DIR/presign.log") 2>&1
+REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}}
+[ -z "$REGION" ] && echo "ERROR: No region" && exit 1
+export AWS_DEFAULT_REGION="$REGION"
+ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
+echo "Region: $REGION"
+RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1)
+BUCKET="presign-tut-${RANDOM_ID}-${ACCOUNT}"
+handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; }
+trap 'handle_error $LINENO' ERR
+cleanup() {
+  echo ""
+  echo "Cleaning up..."
+  aws s3 rm "s3://$BUCKET" --recursive --quiet 2>/dev/null
+  aws s3 rb "s3://$BUCKET" 2>/dev/null && echo " Deleted bucket"
+  rm -rf "$WORK_DIR"
+  echo "Done."
+}
+echo "Step 1: Creating bucket"
+if [ "$REGION" = "us-east-1" ]; then
+  aws s3api create-bucket --bucket "$BUCKET" > /dev/null
+else
+  aws s3api create-bucket --bucket "$BUCKET" --create-bucket-configuration LocationConstraint="$REGION" > /dev/null
+fi
+echo "Step 2: Uploading a file"
+echo "Hello from presigned URL tutorial" > "$WORK_DIR/data.txt"
+aws s3 cp "$WORK_DIR/data.txt" "s3://$BUCKET/data.txt" --quiet
+echo "Step 3: Generating presigned download URL (expires in 5 min)"
+DOWNLOAD_URL=$(aws s3 presign "s3://$BUCKET/data.txt" --expires-in 300)
+echo " URL: ${DOWNLOAD_URL:0:80}..."
+echo "Step 4: Testing presigned download"
+curl -s "$DOWNLOAD_URL"
+echo ""
+echo "Step 5: Generating a presigned URL for a second key"
+# Note: aws s3 presign produces GET (download) URLs only; a presigned
+# upload (PUT) URL must be generated with an SDK such as boto3.
+SECOND_URL=$(aws s3 presign "s3://$BUCKET/uploaded.txt" --expires-in 300)
+echo " URL generated (expires in 5 min); it is a GET URL for a key that does not exist yet"
+echo "Step 6: Listing objects"
+aws s3api list-objects-v2 --bucket "$BUCKET" --query 'Contents[].{Key:Key,Size:Size}' --output table
+echo ""
+echo "Tutorial complete."
+echo "Do you want to clean up? (y/n): "
+read -r CHOICE
+[[ "$CHOICE" =~ ^[Yy]$ ]] && cleanup
diff --git a/tuts/171-s3-versioning/README.md b/tuts/171-s3-versioning/README.md
new file mode 100644
index 0000000..1bfd3fd
--- /dev/null
+++ b/tuts/171-s3-versioning/README.md
@@ -0,0 +1,40 @@
+# S3 Versioning
+
+An AWS CLI tutorial that demonstrates S3 object versioning.
+
+## Running
+
+```bash
+bash s3-versioning.sh
+```
+
+To auto-run with cleanup:
+
+```bash
+echo 'y' | bash s3-versioning.sh
+```
+
+## What it does
+
+1. Creating versioned bucket
+2. Uploading multiple versions
+3. Listing versions
+4. Getting a specific version
+5. Deleting (creates delete marker)
+
+## Resources created
+
+- Bucket
+- Bucket Versioning
+
+The script prompts you to clean up resources when it finishes.
+
+## Cost
+
+Free tier eligible for most operations. Clean up resources after use to avoid charges.
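+
+## Note: versioning cannot be disabled
+
+Once enabled, versioning on a bucket can only be suspended, not turned off, and existing versions are retained either way (`$BUCKET` is the bucket name the script prints):
+
+```bash
+aws s3api put-bucket-versioning --bucket "$BUCKET" \
+  --versioning-configuration Status=Suspended
+```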
+ +## Related docs + +- [AWS CLI s3 reference](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) +- [AWS CLI s3api reference](https://docs.aws.amazon.com/cli/latest/reference/s3api/index.html) + diff --git a/tuts/171-s3-versioning/REVISION-HISTORY.md b/tuts/171-s3-versioning/REVISION-HISTORY.md new file mode 100644 index 0000000..90483cd --- /dev/null +++ b/tuts/171-s3-versioning/REVISION-HISTORY.md @@ -0,0 +1,8 @@ +# Revision History: 171-s3-versioning + +## Shell (CLI script) + +### 2026-04-14 v1 published +- Type: functional +- Initial version + diff --git a/tuts/171-s3-versioning/s3-versioning.md b/tuts/171-s3-versioning/s3-versioning.md new file mode 100644 index 0000000..3c890f1 --- /dev/null +++ b/tuts/171-s3-versioning/s3-versioning.md @@ -0,0 +1,31 @@ +# S3 Versioning + +## Prerequisites + +1. AWS CLI installed and configured (`aws configure`) +2. Appropriate IAM permissions for the AWS services used + +## Step 1: Creating versioned bucket + +The script handles this step automatically. See `s3-versioning.sh` for the exact CLI commands. + +## Step 2: Uploading multiple versions + +The script handles this step automatically. See `s3-versioning.sh` for the exact CLI commands. + +## Step 3: Listing versions + +The script handles this step automatically. See `s3-versioning.sh` for the exact CLI commands. + +## Step 4: Getting a specific version + +The script handles this step automatically. See `s3-versioning.sh` for the exact CLI commands. + +## Step 5: Deleting (creates delete marker) + +The script handles this step automatically. See `s3-versioning.sh` for the exact CLI commands. + +## Cleanup + +The script prompts you to clean up all created resources. If you need to clean up manually, check the script log for the resource names that were created. 
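+
+## Note: restoring a deleted object
+
+The delete in step 5 only adds a delete marker; removing the marker makes the newest real version current again. A sketch, reusing the `$BUCKET` the script created:
+
+```bash
+# Find the delete marker's version ID, then delete the marker itself.
+MARKER_ID=$(aws s3api list-object-versions --bucket "$BUCKET" --prefix file.txt \
+  --query 'DeleteMarkers[?IsLatest].VersionId | [0]' --output text)
+aws s3api delete-object --bucket "$BUCKET" --key file.txt --version-id "$MARKER_ID"
+# file.txt is readable again at its most recent surviving version.
+aws s3 cp "s3://$BUCKET/file.txt" -
+```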
+
diff --git a/tuts/171-s3-versioning/s3-versioning.sh b/tuts/171-s3-versioning/s3-versioning.sh
new file mode 100644
index 0000000..a0c143b
--- /dev/null
+++ b/tuts/171-s3-versioning/s3-versioning.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+WORK_DIR=$(mktemp -d)
+exec > >(tee -a "$WORK_DIR/s3v.log") 2>&1
+REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}}
+[ -z "$REGION" ] && echo "ERROR: No region" && exit 1
+export AWS_DEFAULT_REGION="$REGION"
+ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
+echo "Region: $REGION"
+RANDOM_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1)
+BUCKET="s3ver-tut-${RANDOM_ID}-${ACCOUNT}"
+handle_error() { echo "ERROR on line $1"; trap - ERR; cleanup; exit 1; }
+trap 'handle_error $LINENO' ERR
+cleanup() {
+  echo ""
+  echo "Cleaning up..."
+  # A versioned bucket cannot be removed until every version and delete
+  # marker is gone, so delete them in bulk before removing the bucket.
+  aws s3api list-object-versions --bucket "$BUCKET" \
+    --query '{Objects: Versions[].{Key:Key,VersionId:VersionId}, Quiet: `true`}' > "$WORK_DIR/del.json" 2>/dev/null \
+    && aws s3api delete-objects --bucket "$BUCKET" --delete "file://$WORK_DIR/del.json" > /dev/null 2>&1
+  aws s3api list-object-versions --bucket "$BUCKET" \
+    --query '{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}, Quiet: `true`}' > "$WORK_DIR/del2.json" 2>/dev/null \
+    && aws s3api delete-objects --bucket "$BUCKET" --delete "file://$WORK_DIR/del2.json" > /dev/null 2>&1
+  aws s3 rb "s3://$BUCKET" 2>/dev/null && echo " Deleted bucket"
+  rm -rf "$WORK_DIR"
+  echo "Done."
+}
+echo "Step 1: Creating versioned bucket"
+if [ "$REGION" = "us-east-1" ]; then
+  aws s3api create-bucket --bucket "$BUCKET" > /dev/null
+else
+  aws s3api create-bucket --bucket "$BUCKET" --create-bucket-configuration LocationConstraint="$REGION" > /dev/null
+fi
+aws s3api put-bucket-versioning --bucket "$BUCKET" --versioning-configuration Status=Enabled
+echo "Step 2: Uploading multiple versions"
+echo "version 1" > "$WORK_DIR/file.txt"; aws s3 cp "$WORK_DIR/file.txt" "s3://$BUCKET/file.txt" --quiet
+echo "version 2" > "$WORK_DIR/file.txt"; aws s3 cp "$WORK_DIR/file.txt" "s3://$BUCKET/file.txt" --quiet
+echo "version 3" > "$WORK_DIR/file.txt"; aws s3 cp "$WORK_DIR/file.txt" "s3://$BUCKET/file.txt" --quiet
+echo " Uploaded 3 versions"
+echo "Step 3: Listing versions"
+aws s3api list-object-versions --bucket "$BUCKET" --prefix file.txt \
+  --query 'Versions[].{Key:Key,VersionId:VersionId,IsLatest:IsLatest,Size:Size}' --output table
+echo "Step 4: Getting a specific version"
+# list-object-versions returns newest first, so [-1] is the oldest version.
+OLDEST=$(aws s3api list-object-versions --bucket "$BUCKET" --prefix file.txt --query 'Versions[-1].VersionId' --output text)
+aws s3api get-object --bucket "$BUCKET" --key file.txt --version-id "$OLDEST" "$WORK_DIR/old.txt" > /dev/null
+echo " Oldest version content: $(cat "$WORK_DIR/old.txt")"
+echo "Step 5: Deleting (creates delete marker)"
+aws s3api delete-object --bucket "$BUCKET" --key file.txt > /dev/null
+echo " Delete marker created"
+aws s3api list-object-versions --bucket "$BUCKET" --prefix file.txt --query 'DeleteMarkers[].{Key:Key,IsLatest:IsLatest}' --output table
+echo ""
+echo "Tutorial complete."
+echo "Do you want to clean up? (y/n): "
+read -r CHOICE
+[[ "$CHOICE" =~ ^[Yy]$ ]] && cleanup
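+# Note: "aws s3 rb --force" deletes current objects but not older versions
+# or delete markers, which is why cleanup() removes all Versions and
+# DeleteMarkers explicitly before deleting the bucket.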