
Commit 59904dd: genai instructions

1 parent 8591f8e

14 files changed: 1286 additions & 0 deletions

instra/README.md

# AWS Developer Tutorials - Instructions

This directory contains instructions and resources for generating new AWS CLI tutorials and scripts.

## Overview

The instructions in this directory guide you through the process of creating high-quality AWS CLI tutorials and scripts using Amazon Q Developer CLI. These instructions are designed to help you:

1. Collect information about AWS services and use cases
2. Generate working AWS CLI scripts
3. Create comprehensive tutorials that explain the scripts
4. Validate and improve both scripts and tutorials

## Directory Structure

- `/tutorial-gen`: Step-by-step instructions for generating tutorials and scripts

## Tutorial Generation Process

The tutorial generation process is divided into several steps, each with its own instruction file in the `/tutorial-gen` directory:

1. **Information Collection**
   - Collect documentation topics
   - Gather example CLI commands

2. **Script Creation**
   - Generate an initial script
   - Test and run the script
   - Validate script functionality
   - Simplify and improve the script

3. **Tutorial Creation**
   - Draft a tutorial based on the script
   - Validate tutorial content

4. **Finalization**
   - Address feedback
   - Make final improvements

## Using These Instructions

To generate a new tutorial:

1. Start with the file `0-general-instructions.md` in the `/tutorial-gen` directory
2. Follow the numbered instruction files in sequence
3. Use Amazon Q Developer CLI to assist with each step
4. Place the final tutorial and script in a new folder in the `/tuts` directory

## Example Usage with Amazon Q Developer CLI

```bash
q "read the instructions in the ../../instra/tutorial-gen folder and follow them in order, using this topic: https://docs.aws.amazon.com/payment-cryptography/latest/userguide/getting-started.html when instructed to run the script in step 2b, it's ok to actually run the script and create resources. this is part of the process. when you generate the script, be careful to check required options and option names for each command."
```

This command instructs Amazon Q Developer CLI to:

1. Read the tutorial generation instructions
2. Follow them in order
3. Use the AWS Payment Cryptography getting started guide as the source material
4. Generate and test a script for this service

## Contributing

If you have suggestions for improving the tutorial generation process, please submit them as pull requests or issues in the repository.
# GENERAL INSTRUCTIONS

these instructions take a tutorial URL as input. get the URL from the user if they didn't provide one already.

you also need an identifier for this tutorial. prefix all filenames with the identifier. If the user didn't provide an identifier, use the name of the current directory.

if the output files for some or all steps already exist in the current directory, use these as input. Limit modifications to existing docs to changes that improve the functionality or guidance.

note the wall time when you start processing a prompt and when you return control to the user. show the actual time elapsed.
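One way to record elapsed wall time in a shell session (a minimal sketch; the `sleep` is a stand-in for the actual processing work):

```bash
# record wall time before and after processing, then report the difference
START=$(date +%s)
sleep 1   # stand-in for the actual work
END=$(date +%s)
echo "Elapsed: $((END - START)) seconds"
```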
## ⚠️ REQUIRED REFERENCE MATERIALS ⚠️

When creating tutorials for specific AWS services:

1. CRITICAL FOR VPC CREATION:
   - Use the architecture defined in vpc-example.md as reference
   - This step is mandatory before creating any VPC resources
# Collect topic

use the docs MCP server to retrieve the tutorial and store it in markdown format in the current directory. use a max length of 20000. name the file "1-input-topic.md"

## formatting

Don't add linebreaks in the middle of a paragraph. Keep all of the text in the paragraph on one line. Ensure that there is an empty line between all headers, paragraphs, example titles, and code blocks.

For any relative path links, replace the './' with the full path that it represents.
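The link rewrite might be sketched with sed as follows; the base path shown is an assumption, so substitute whatever directory the './' links actually resolve to:

```bash
# hypothetical base path that './' represents in the source document
BASE_PATH="/payment-cryptography/latest/userguide/"

# rewrite a relative markdown link to use the full path
REWRITTEN=$(echo "[Getting started](./getting-started.html)" | sed "s|](\./|](${BASE_PATH}|g")
echo "$REWRITTEN"
```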
## baseline tutorial

Create a CLI tutorial that does what's in this topic. This is the baseline tutorial that you create without any additional input or instructions. Call it 1-baseline.md
# Collect examples

read the input topic, which includes the identifier and "1-input-topic" in the filename.

get the full list of API commands for the service, and general information about the service, by running aws <service-name> help. note that the service name might not match the service name used in the input tutorial. some service APIs are bundled together. For example, EC2 has APIs for autoscaling, VPC, and elastic block store, in addition to EC2 instance APIs. reference the documentation if you can't find the right service name.

## command inventory

save a list of the commands returned by the help command to a file named 1-commands.md.
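The command names can be scraped from the saved help output. A sketch, using a sample of the help format in place of a live aws call (the exact layout can vary between CLI versions, so treat the pattern as an assumption):

```bash
# sample of the AVAILABLE COMMANDS section from `aws <service-name> help`
HELP_TEXT=$'AVAILABLE COMMANDS\n       o create-key\n       o delete-key\n       o list-keys'

# pull out the command names, one per line
COMMANDS=$(echo "$HELP_TEXT" | grep -oE 'o [a-z0-9-]+' | sed 's/^o //')
echo "$COMMANDS"
```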
## copy examples

read the AWS CLI examples for the service API actions from the AWS CLI source repository. check the user directory for the aws-cli repo. If it is there, copy the examples folder for the service into the working directory like this: cp -r ~/aws-cli/awscli/examples/<service-name> .

## generate workflow

determine which CLI commands can be run to accomplish each of the steps in the tutorial. use the CLI examples and API names as a reference point. if there aren't examples that correspond to a step, determine which API needs to be used and refer to its API documentation to figure out what command you need to run.

create a CLI workflow document. each section in the workflow document lists API commands that correspond to a section in the input tutorial. there might not be a 1:1 relationship between actions that the user takes in the tutorial and CLI commands. figure out the intent of the tutorial step and determine which CLI commands are necessary to accomplish it. name this doc 1-cli-workflow.md.
# create a script

based on the cli workflow doc 1-cli-workflow.md, generate a shell script that runs all of the included CLI commands. Use the AWS CLI --query option and --output text flag as necessary to capture resource IDs, secrets, and other information from command output. keep the script as simple as possible. don't create redundant resources or use options that are not relevant to the use case. name this script 2-cli-script.sh. make it executable.
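The resource-ID capture might look like this (a sketch only; VPC creation is just an illustration, and running the command creates real resources in your account):

```bash
# capture the new VPC's ID from the create-vpc output
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
echo "Created VPC: $VPC_ID"
```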
## portability

to keep the script portable, avoid using the region option in commands. use random names for resources that require a unique name. if the script uses S3 buckets, generate a random 12 digit hex identifier for the bucket and prepend it with the API name of the service (all lowercase). use this as the bucket name.

don't use jq or other commands that are not available by default in linux systems. don't use the --cli-binary-format option as this is not available to AWS CLI v1 users. when you need to read a parameter value from a file, use the cat command in a subshell like this: $(cat config.json)
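A sketch of the bucket naming rule, using openssl (available on most Linux systems) to produce the 12 hex digits; `kms` stands in for the service's API name, and the hyphen separator is an assumption:

```bash
SERVICE="kms"                    # replace with the API name of your service
SUFFIX=$(openssl rand -hex 6)    # 6 random bytes -> 12 hex characters
BUCKET_NAME="${SERVICE}-${SUFFIX}"
echo "$BUCKET_NAME"
```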
## security

apply security best practices to all API operations when possible. do not create resources that are publicly accessible. do not create security policies or network rules that have open permissions, such as resource identifiers with wildcards, or IP address ranges. use least privilege permissions at all times, or call out specifically that there is a requirement to use something less secure. note in comments when anything in the script can't be used in production environments due to security or scaling concerns.

do not ever hardcode passwords or keys. If you need a database password, generate a new secure password and store it in AWS Secrets Manager. Retrieve the password from Secrets Manager when you need to use it to access resources. don't create IAM users. Assume that the user already has AWS credentials configured in their development environment.
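The password flow might be sketched like this (the secret name is an assumption, and the commands call live AWS APIs, so run them only with credentials configured):

```bash
# generate a strong password without ever writing it into the script
DB_PASSWORD=$(aws secretsmanager get-random-password \
  --password-length 20 --exclude-punctuation \
  --query RandomPassword --output text)

# store it in Secrets Manager
aws secretsmanager create-secret \
  --name "tutorial-db-password" \
  --secret-string "$DB_PASSWORD" > /dev/null

# retrieve it later, when the script needs it to access resources
aws secretsmanager get-secret-value \
  --secret-id "tutorial-db-password" \
  --query SecretString --output text
```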
## cleanup

keep track of all resources that you create and ensure that all resources that you create are also cleaned up. before deleting any resources, pause the script and show a list of all of the resources that it created, so that the user can confirm.

When prompting for user input (especially for cleanup confirmation):

1. Use separate `echo` statements with visual formatting (like separator lines) instead of `read -p`
2. Use a plain `read -r` command to capture input
3. Format the prompt to be clearly visible, for example:

```bash
echo ""
echo "==========================================="
echo "CLEANUP CONFIRMATION"
echo "==========================================="
echo "Do you want to clean up all created resources? (y/n): "
read -r CLEANUP_CHOICE
```

This ensures the prompt is properly displayed across different terminal environments.
## error handling

handle all errors within the script, and log all commands and outputs to a log file. when you capture the output of a command in a variable, check the output for errors and handle them before processing the output. use case insensitive pattern matching to catch all variations of the word error. whether an error is caught or not, print the output of all commands.

if the script encounters an error, print a list of all of the resources created prior to the error, and attempt to delete them in the reverse order of when you created them. when resources depend on one another, use wait commands to confirm that the first resource is ready before creating the second one, and the second one is deleted before deleting the first one.
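A sketch of the capture-and-check pattern, with a sample string standing in for live command output:

```bash
# OUTPUT would normally be captured as: OUTPUT=$(aws <service> <command> 2>&1)
OUTPUT="An ERROR occurred (ValidationException) when calling the CreateKey operation"

echo "$OUTPUT"   # print the output whether an error is caught or not

# case-insensitive match covers Error, ERROR, error, etc.
if echo "$OUTPUT" | grep -qiE 'error|exception'; then
  echo "error detected -- cleaning up resources created so far" >&2
fi
```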
# run the script

run the script created in the previous step, 2-cli-script.sh. if there are errors, determine the cause of the error and update the script. verify that all resources that were created are cleaned up. each time you run the script, record the name of the log, any errors that occurred, and the final status of each resource in a test run record. Continue running and updating the script until it runs end to end without error.

When you update the script, add a version number to the name, instead of coming up with a new name. For example, if you update 2-cli-script.sh, save the initial version as 2-cli-script-v1.sh and create 2-cli-script-v2.sh with the changes. Increment the version number each time you iterate on the script. When you save a log or test report, include the version of the script in the name, such as 2-cli-script-v2-log.md or 2-cli-script-v2-report.md
# static analysis

After you've confirmed that the script is functional, perform a thorough static analysis of the provided shell script without executing it. Focus on identifying potential issues in the following categories:

1. Resource Dependencies and Sequencing:
   - Identify operations that create dependencies between resources
   - Verify that resources are created in the correct order
   - Check for operations that modify the same resource multiple times
   - Flag any attempts to associate resources that might already have existing associations

2. Error Handling and Resilience:
   - Evaluate error handling mechanisms
   - Check for proper exit strategies when commands fail
   - Identify sections that might leave resources in an inconsistent state
   - Verify cleanup procedures are comprehensive and properly sequenced

3. Security and Best Practices:
   - Identify overly permissive security configurations
   - Flag hardcoded credentials or sensitive information
   - Check for adherence to infrastructure-as-code best practices
   - Verify proper resource tagging and naming conventions

4. Resource Limitations:
   - Identify potential service quotas or limits that might be exceeded
   - Check for resource creation without corresponding cleanup
   - Flag operations that might incur unexpected costs

5. Logic and Control Flow:
   - Analyze conditional logic for potential flaws
   - Verify loop constructs for proper termination conditions
   - Check for race conditions or timing issues
   - Identify potential infinite loops or deadlocks

6. AWS-Specific Concerns (for AWS scripts):
   - Verify proper handling of AWS region settings
   - Check for proper IAM permissions and least privilege principles
   - Identify potential cross-region or cross-account issues
   - Verify proper handling of AWS resource identifiers

## Output Files and Response Format

When validating a script, create the following output files:

1. Validation Report File:
   - Name: `[original-script-name]-validation-report.md`
   - Content: Detailed analysis of issues found in the script
   - Location: Save in the validation-tools directory

2. Fixed Script File (ONLY if HIGH severity issues are found):
   - Name: `[original-script-name]-fixed.sh`
   - Content: Complete fixed version of the script with comments explaining changes
   - Location: Save in the validation-tools directory

3. Response to User:
   - Provide a brief summary of the validation results
   - Include the number and types of issues found (High/Medium/Low)
   - Clearly state which issues will be fixed (HIGH severity) and which won't be fixed (MEDIUM and LOW severity)
   - Mention that the detailed report and fixed script (if applicable) have been saved as files
   - DO NOT include the entire fixed script in your response to the user

The validation report should include:

- A summary of potential issues categorized by severity (High, Medium, Low)
- Clear distinction between issues that will be fixed in the fixed script (HIGH severity) and those that won't be fixed (MEDIUM and LOW severity)
- Line numbers or code snippets where issues were found
- Specific recommendations for addressing each issue
- Suggestions for improving the overall script quality and reliability

In the fixed script:

- Add comments before each fixed section explaining what HIGH severity issue is being addressed
- Include a header comment summarizing all HIGH severity issues that were fixed
- Optionally include comments about MEDIUM and LOW severity issues that weren't fixed but could be improved in the future

## File Organization

- If validation reports and fixed scripts already exist for the script being validated, move the existing files to the `archive` directory before creating new ones
- This ensures that the main directory contains only the most recent validation results while preserving previous work

This naming convention ensures clear association between original scripts, validation reports, and fixed versions.
# create a simplified script

verify that the most recent version of the script is named correctly. It should be named "2-cli-script-vX.sh" with X being the iteration version, which should be the highest version of any script in the folder. If the name is something like "2-cli-script-final-fixed.sh" it is not always clear if this is the most recent version. rename the file if needed.

create a simplified version of the script. don't include any error handling. just run the cli commands like a customer would when learning the API for the first time. this is not a production script and doesn't need to organize commands into functions. as long as the script runs end to end and doesn't leave any resources behind, it's good. call the script 2-simple-script.sh
# draft a tutorial

Use the example commands and output in 1-cli-workflow.md to generate a tutorial, with a section for each group of related commands, and sections on prerequisites and next steps. Include guidance before and after every code block, even if it's just one sentence. reference the content in golden-tutorial.md as an example of a good tutorial. If content in the prerequisites section of the golden tutorial applies to this tutorial, reuse it. Name the output file 3-tutorial-draft.md.

## links

The tutorial may be published in the service guide, so don't include any general links to the guide or documentation landing page. In the next steps section, link to a topic in the service guide for each feature or use case listed. The prerequisites section can also have links, but avoid adding links to the core sections of the tutorial where readers are following instructions. Links in these sections can pull the reader away from the tutorial unnecessarily.

## formatting

Only use two levels of headers. H1 for the topic title, and H2 for the sections. To add a title to a code block or procedure, just use bold text.

Use sentence case for all headers and titles.

Use present tense and active voice as much as possible.

Don't add linebreaks in the middle of a paragraph. Keep all of the text in the paragraph on one line. Ensure that there is an empty line between all headers, paragraphs, example titles, and code blocks.

For any relative path links, replace the './' with the full path that it represents.

## portability

Omit the --region option in example commands, unless it is required by the specific use case or the service API. For example, if a command requires you to specify an availability zone, you need to ensure that you are calling the service in the same AWS Region as the availability zone. Otherwise, assume that the reader wants to create resources in the Region that they configured when they set up the AWS CLI, or write a script that they can run in any Region.

## naming rules

**account ids** - Replace 12 digit AWS account numbers with 123456789012. For examples with two account numbers, use 234567890123 for the second number.

**GUIDs** - Obfuscate GUIDs by making the second character sequence in the guid "xmpl".

**resource IDs** - For hex sequences, replace characters in the example with "abcd1234". For other numeric IDs, renumber starting with 1234. For alphanumeric ID strings, replace characters 5-8 with "xmpl".

**timestamps** - Replace timestamps with a value representing January 13th of the current year.

**IP addresses** - Replace public IP addresses with fictitious addresses such as 203.0.113.75 or another address in the 203.0.113.0/24 range.

**bucket names** - For S3 buckets, the name in the tutorial must start with "amzn-s3-demo". The script can't use this name because it's reserved for documentation. Leave the script as is but replace the prefix used by the script with "amzn-s3-demo" in the tutorial.
