Commit 16b0e26

Add Python example for calling the OpenAI Chat API

Signed-off-by: Han Verstraete (OpenFaaS Ltd) <han@openfaas.com>
1 parent f14416e commit 16b0e26

1 file changed: docs/languages/python.md (107 additions, 0 deletions)

@@ -709,6 +709,113 @@ curl http://127.0.0.1:8080/function/kafka-producer \
  --data "Hello from OpenFaaS"
```

## Example: Call the OpenAI Chat API

This example shows how to call the OpenAI Chat Completions API from a Python function using the official `openai` SDK.

**1. Create the function**

Pull the `python3-http` template and scaffold a new function. The `openai` package is pure Python, so the Alpine-based template is fine here.

```bash
faas-cli template store pull python3-http
faas-cli new --lang python3-http openai-chat \
  --prefix ttl.sh/openfaas-examples
```

The example uses the public [ttl.sh](https://ttl.sh) registry. Replace the prefix with your own registry for production use.

**2. Add the openai dependency**

Add `openai` to the function's `requirements.txt` so it gets installed during the build:

```
openai
```

**3. Create a secret for the API key**

Store the OpenAI API key as an OpenFaaS secret. This keeps the key out of environment variables and out of the function's container image.

Save your API key to `openai-api-key.txt`, then run:

```bash
faas-cli secret create openai-api-key --from-file openai-api-key.txt
```

At runtime, the secret is mounted as a file under `/var/openfaas/secrets/` inside the function container.
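
As a quick sketch of what that mount looks like from Python: the secret is a plain file named after the secret, so it can be read back with a few lines of code. The `base` parameter below is added here only for illustration; the tutorial's handler hard-codes the mount path.

```python
def read_secret(name, base="/var/openfaas/secrets"):
    # Each OpenFaaS secret is mounted as a read-only file named after the secret
    with open(base + "/" + name) as f:
        # Strip the trailing newline that is often saved along with the key
        return f.read().strip()
```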

**4. Configure the function**

Update `stack.yaml` to attach the secret created in the previous step:

```yaml
functions:
  openai-chat:
    lang: python3-http
    handler: ./openai-chat
    image: ttl.sh/openfaas-examples/openai-chat:latest
    secrets:
      - openai-api-key
```

**5. Write the handler**

The handler reads the OpenAI API key from the mounted secret and uses it to create a chat completion. The request body is sent as the user message.

The OpenAI client is initialised once on the first request and stored in a global variable. This means subsequent invocations reuse the same client, avoiding the overhead of creating a new connection on every request.

```python
from openai import OpenAI

client = None

def initClient():
    apiKey = read_secret('openai-api-key')
    return OpenAI(api_key=apiKey)

def handle(event, context):
    global client

    # Initialise the client once and reuse it across invocations
    if client is None:
        client = initClient()

    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": event.body.decode("utf-8")}
        ]
    )

    return {
        "statusCode": 200,
        "body": completion.choices[0].message.content
    }

def read_secret(name):
    with open("/var/openfaas/secrets/" + name, "r") as f:
        return f.read().strip()
```

**6. Deploy and invoke**

Build, push, and deploy the function with `faas-cli up`. The `--filter` flag selects a single function from the stack file, and `--tag digest` uses the image content hash as the tag instead of `latest`, so that Kubernetes always pulls the updated image:

```bash
faas-cli up \
  --filter openai-chat \
  --tag digest

# Send a prompt to the function
curl http://127.0.0.1:8080/function/openai-chat \
  --data "What is the capital of France?"
```

!!! tip "Streaming responses with Server-Sent Events (SSE)"

    This example waits for the full completion before responding. To stream tokens back to the client as they are generated, you can use Server-Sent Events (SSE) with the `python3-flask` template, which gives direct access to Flask's `stream_with_context` helper. See [Stream OpenAI responses from functions using Server Sent Events](https://www.openfaas.com/blog/openai-streaming-responses/) on the OpenFaaS blog for a working example.
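
As a rough sketch of the SSE mechanics only (not from this tutorial), the core of a streaming handler is a generator that wraps each text chunk as an SSE frame. The `fake_stream` list below stands in for chunks from `client.chat.completions.create(..., stream=True)`; the comments show where the real SDK and Flask calls would go.

```python
def sse_events(tokens):
    # Format each non-empty text chunk as a single Server-Sent Event frame
    for token in tokens:
        if token:
            yield "data: " + token + "\n\n"

# With the OpenAI SDK, the tokens would come from a streamed completion:
#   stream = client.chat.completions.create(model="gpt-4o-mini",
#                                           messages=messages, stream=True)
#   tokens = (chunk.choices[0].delta.content for chunk in stream)
# and in the python3-flask template the generator would be returned as:
#   Response(stream_with_context(sse_events(tokens)),
#            mimetype="text/event-stream")

fake_stream = ["Paris", " is", " the", " capital."]
frames = list(sse_events(fake_stream))
```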

## OpenTelemetry zero-code instrumentation

Using [OpenTelemetry zero-code instrumentation](https://opentelemetry.io/docs/zero-code/python/) for Python functions requires some minor modifications to the existing Python templates.
