Commit f5ef224 (parent 9088cc8)

Add Python example pages

Add step-by-step examples for common patterns: boto3 S3, Kafka producer, OpenAI Chat API, SSE streaming, streaming OpenAI responses, ECR with IRSA, OpenFaaS REST API, and Playwright web testing.

Signed-off-by: Han Verstraete (OpenFaaS Ltd) <han@openfaas.com>

9 files changed: 1322 additions & 13 deletions

Lines changed: 189 additions & 0 deletions
Call AWS services from a function using ambient credentials instead of static access keys. With [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), the function's pod is automatically assigned temporary credentials via a Kubernetes Service Account mapped to an IAM role.

Use-cases:

* Accessing any AWS service (S3, DynamoDB, SQS, ECR, etc.) without static keys
* Meeting security policies that prohibit long-lived credentials
* Simplifying secret rotation by relying on short-lived tokens

This example creates and queries ECR repositories using `boto3`, but the same approach works for any AWS service. It requires OpenFaaS to be deployed on [AWS EKS](https://aws.amazon.com/eks/) with IRSA enabled. See [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) for setup, or [Manage AWS Resources from OpenFaaS Functions With IRSA](https://www.openfaas.com/blog/irsa-functions/) for an end-to-end walkthrough.
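Before deploying, it can be useful to check from inside a pod whether IRSA is wired up. The helper below is a diagnostic sketch (the function name is ours, not part of the example); it relies on the two environment variables the EKS webhook injects into IRSA-enabled pods, which boto3 reads to exchange the mounted token for temporary credentials:

```python
import os

def irsa_available():
    # EKS injects both variables into pods that use an IRSA-enabled
    # Service Account; boto3 uses them to perform the STS token
    # exchange automatically.
    return bool(os.getenv("AWS_ROLE_ARN")) and bool(
        os.getenv("AWS_WEB_IDENTITY_TOKEN_FILE"))
```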

## Overview

handler.py:

```python
import os
import json
import boto3

ecrClient = None

def initECR():
    session = boto3.Session(
        region_name=os.getenv('AWS_REGION'),
    )
    return session.client('ecr')

def handle(event, context):
    global ecrClient

    if ecrClient is None:
        ecrClient = initECR()

    if event.method != 'POST':
        return {
            "statusCode": 405,
            "body": "Method not allowed"
        }

    body = json.loads(event.body)
    name = body.get('name')

    if not name:
        return {
            "statusCode": 400,
            "body": "Missing in body: name"
        }

    # Check if the repository already exists
    try:
        ecrClient.describe_repositories(repositoryNames=[name])
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Repository already exists"})
        }
    except ecrClient.exceptions.RepositoryNotFoundException:
        pass

    # Create the repository
    response = ecrClient.create_repository(
        repositoryName=name,
        imageTagMutability='MUTABLE',
        encryptionConfiguration={
            'encryptionType': 'AES256',
        },
        imageScanningConfiguration={
            'scanOnPush': False,
        },
    )

    return {
        "statusCode": 201,
        "body": json.dumps({
            "arn": response['repository']['repositoryArn']
        })
    }
```

requirements.txt:

```
boto3
```

stack.yaml:

```yaml
functions:
  ecr-create-repo:
    lang: python3-http-debian
    handler: ./ecr-create-repo
    image: ttl.sh/openfaas-examples/ecr-create-repo:latest
    annotations:
      com.openfaas.serviceaccount: openfaas-create-ecr-repo
    environment:
      AWS_REGION: eu-west-1
```

No secrets are needed. The `com.openfaas.serviceaccount` annotation tells OpenFaaS which Kubernetes Service Account to attach to the function's pod. EKS then mounts a short-lived token for that service account, and the AWS SDK picks up the credentials automatically, with no access keys to store or rotate.

The `AWS_REGION` environment variable is required by the SDK to know which region to connect to.
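The pattern of creating the client once per process, as `handle` does with a global, can also be factored into a small reusable helper. This is an illustrative sketch (the `make_client_factory` helper and its injectable `create` callable are not part of the template); the factory is passed in so the pattern can be exercised without an AWS account:

```python
import functools
import os

def make_client_factory(create):
    # Returns a zero-argument callable that builds the client on first
    # use and then returns the same instance for the rest of the process
    # lifetime, which is the same effect as the global in handler.py.
    @functools.lru_cache(maxsize=None)
    def get_client():
        return create(os.getenv("AWS_REGION", "eu-west-1"))
    return get_client

# In the real function this could be used as:
#   get_ecr = make_client_factory(
#       lambda region: boto3.Session(region_name=region).client("ecr"))
```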

## Step-by-step walkthrough

### Create an IAM Policy

Create a policy that grants the permissions your function needs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:CreateRepository",
                "ecr:DeleteRepository",
                "ecr:DescribeRepositories"
            ],
            "Resource": "*"
        }
    ]
}
```

Save the above to `ecr-policy.json` and create the policy:

```bash
aws iam create-policy \
  --policy-name ecr-create-query-repository \
  --policy-document file://ecr-policy.json
```

Note the ARN from the output, e.g. `arn:aws:iam::ACCOUNT_NUMBER:policy/ecr-create-query-repository`.

### Create an IAM Role and Kubernetes Service Account

Use `eksctl` to create a Kubernetes Service Account in the `openfaas-fn` namespace that is linked to an IAM role with the policy attached:

```bash
export ARN=arn:aws:iam::ACCOUNT_NUMBER:policy/ecr-create-query-repository

eksctl create iamserviceaccount \
  --name openfaas-create-ecr-repo \
  --namespace openfaas-fn \
  --cluster <cluster-name> \
  --role-name ecr-create-query-repository \
  --attach-policy-arn $ARN \
  --region eu-west-1 \
  --approve
```

This can also be done manually by creating the IAM Role in AWS, followed by a Kubernetes Service Account annotated with `eks.amazonaws.com/role-arn`.
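For the manual route, the Service Account would look roughly like the following. This manifest is an illustrative sketch, with the account number and role name assumed from the earlier steps:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openfaas-create-ecr-repo
  namespace: openfaas-fn
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_NUMBER:role/ecr-create-query-repository
```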

### Create the function

Pull the template and scaffold a new function:

```bash
faas-cli template store pull python3-http-debian
faas-cli new --lang python3-http-debian ecr-create-repo \
  --prefix ttl.sh/openfaas-examples
```

Update `ecr-create-repo/handler.py` and `ecr-create-repo/requirements.txt` with the code from the overview above.

### Deploy and invoke

Build, push and deploy the function with `faas-cli up`:

```bash
faas-cli up \
  --filter ecr-create-repo \
  --tag digest
```

Create a new ECR repository by invoking the function:

```bash
curl -X POST http://127.0.0.1:8080/function/ecr-create-repo \
  -H "Content-Type: application/json" \
  -d '{"name":"tenant1/fn1"}'
```

The response contains the ARN of the newly created repository:

```json
{"arn": "arn:aws:ecr:eu-west-1:ACCOUNT_NUMBER:repository/tenant1/fn1"}
```

Lines changed: 139 additions & 0 deletions

Publish messages to a Kafka topic from a function using the `confluent-kafka` package. This lets you bridge HTTP-triggered functions into event-driven pipelines, using Kafka as a decoupling layer between your API and downstream consumers.

Use-cases:

* Publishing events or audit logs to a Kafka topic
* Decoupling workloads by writing to a message bus
* Feeding data pipelines from HTTP endpoints

This example uses the `confluent-kafka` package with SASL/SSL authentication. Broker credentials are stored as [OpenFaaS secrets](/reference/secrets/).

If you'd like to trigger functions from Kafka topics instead, see [Trigger functions from Kafka](/openfaas-pro/kafka-events).

## Overview

handler.py:

```python
import os
import socket
from confluent_kafka import Producer

# Initialise the producer once and reuse it across invocations
# to keep the broker connection alive between requests.
kafkaProducer = None

def initProducer():
    username = read_secret('kafka-broker-username')
    password = read_secret('kafka-broker-password')
    broker = os.getenv("kafka_broker")

    conf = {
        'bootstrap.servers': broker,
        'security.protocol': 'SASL_SSL',
        'sasl.mechanism': 'PLAIN',
        'sasl.username': username,
        'sasl.password': password,
        'client.id': socket.gethostname()
    }

    return Producer(conf)

def handle(event, context):
    global kafkaProducer

    if kafkaProducer is None:
        kafkaProducer = initProducer()

    topic = 'faas-request'

    # Produce the request body as a message and wait for delivery
    kafkaProducer.produce(topic, value=event.body)
    kafkaProducer.flush()

    return {
        "statusCode": 200,
        "body": "Message produced to {}".format(topic)
    }

def read_secret(name):
    with open("/var/openfaas/secrets/" + name, "r") as f:
        return f.read().strip()
```
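The handler calls `flush()` to block until delivery. For higher throughput you can instead pass a delivery callback to `produce()`, which `confluent-kafka` invokes with `(err, msg)` once the broker acknowledges or rejects the message. The report function below is an illustrative sketch, written to return a string so it can be exercised without a broker:

```python
def delivery_report(err, msg):
    # Invoked by confluent-kafka once per message: err is None on
    # success, and msg carries the topic/partition it landed on.
    if err is not None:
        return "Delivery failed: {}".format(err)
    return "Delivered to {} [{}]".format(msg.topic(), msg.partition())

# Usage inside handle():
#   kafkaProducer.produce(topic, value=event.body, callback=delivery_report)
#   kafkaProducer.poll(0)  # serve pending delivery callbacks without blocking
```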

requirements.txt:

```
confluent-kafka
```

stack.yaml:

```yaml
functions:
  kafka-producer:
    lang: python3-http-debian
    handler: ./kafka-producer
    image: ttl.sh/openfaas-examples/kafka-producer:latest
    environment:
      kafka_broker: "<your-broker-endpoint>:9092"
    secrets:
      - kafka-broker-username
      - kafka-broker-password
```

The Debian variant of the template is required because `confluent-kafka` depends on `librdkafka`, a native C library that will not build on Alpine.

The Kafka producer is initialised once on first invocation and reused for subsequent requests, keeping the broker connection alive between calls and avoiding the overhead of re-authenticating on every request.

The `SASL_SSL` security protocol combines SASL authentication with TLS encryption. The `sasl.mechanism` must match your broker's configuration:

- `PLAIN` — standard for managed services such as Confluent Cloud and Aiven.
- `SCRAM-SHA-256` / `SCRAM-SHA-512` — common for self-hosted brokers.
- `GSSAPI` — Kerberos-based authentication.

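The mechanism-specific part of the configuration is small. The helper below is a hypothetical sketch (not from the example) showing how the producer conf from `initProducer` could be parameterised by mechanism:

```python
def sasl_conf(broker, username, password, mechanism='PLAIN'):
    # Build a confluent-kafka configuration dict for a SASL_SSL broker.
    # The mechanism must match the broker's configuration.
    allowed = {'PLAIN', 'SCRAM-SHA-256', 'SCRAM-SHA-512', 'GSSAPI'}
    if mechanism not in allowed:
        raise ValueError('unsupported sasl.mechanism: ' + mechanism)
    return {
        'bootstrap.servers': broker,
        'security.protocol': 'SASL_SSL',
        'sasl.mechanism': mechanism,
        'sasl.username': username,
        'sasl.password': password,
    }
```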
## Step-by-step walkthrough

### Create the function

Pull the template and scaffold a new function:

```bash
faas-cli template store pull python3-http-debian
faas-cli new --lang python3-http-debian kafka-producer \
  --prefix ttl.sh/openfaas-examples
```

The example uses the public [ttl.sh](https://ttl.sh) registry — replace the prefix with your own registry for production use.

Update `kafka-producer/handler.py` and `kafka-producer/requirements.txt` with the code from the overview above.

### Create secrets for Kafka broker credentials

Store your Kafka broker username and password as OpenFaaS secrets. This keeps credentials out of environment variables and the function's container image.

Save your broker username to `kafka-broker-username.txt` and your broker password to `kafka-broker-password.txt`, then run:

```bash
faas-cli secret create kafka-broker-username --from-file kafka-broker-username.txt
faas-cli secret create kafka-broker-password --from-file kafka-broker-password.txt
```

At runtime, the secrets are mounted as files under `/var/openfaas/secrets/` inside the function container.
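For local development outside the cluster, the secret path can be made overridable. A small sketch of the `read_secret` helper from the overview, where the `OPENFAAS_SECRET_PATH` variable is our own assumption for local testing, not an OpenFaaS convention:

```python
import os

def read_secret(name):
    # Secrets are files named after the secret, one value per file.
    # The base path defaults to the OpenFaaS mount point, but can be
    # pointed at a local directory when testing outside the cluster.
    base = os.getenv('OPENFAAS_SECRET_PATH', '/var/openfaas/secrets')
    with open(os.path.join(base, name), 'r') as f:
        return f.read().strip()
```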

### Deploy and invoke

Build, push and deploy the function with `faas-cli up`:

```bash
faas-cli up \
  --filter kafka-producer \
  --tag digest
```

Publish a message to the Kafka topic by invoking the function:

```bash
curl http://127.0.0.1:8080/function/kafka-producer \
  --data "Hello from OpenFaaS"
```
