
Commit 6c18061
Merge pull request #49 from pamelafox/upgrade-maf-1.0.0
Upgrade to MAF 1.0.0
2 parents: bc61b14 + 3afb96f

127 files changed: 4306 additions & 4794 deletions
(Large commits hide some content by default; only a subset of the changed files appears below.)

.env.sample
Lines changed: 3 additions & 6 deletions

@@ -1,14 +1,11 @@
-# API_HOST can be either azure, openai, or github:
+# API_HOST can be either azure or openai:
 API_HOST=azure
 # Configure for Azure:
 AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com
 AZURE_OPENAI_CHAT_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
 # Configure for OpenAI.com:
 OPENAI_API_KEY=YOUR-OPENAI-KEY
-OPENAI_MODEL=gpt-3.5-turbo
-# Configure for GitHub models: (GITHUB_TOKEN already exists inside Codespaces)
-GITHUB_MODEL=gpt-4.1-mini
-GITHUB_TOKEN=YOUR-GITHUB-PERSONAL-ACCESS-TOKEN
+OPENAI_MODEL=gpt-5.4
 # Configure for Redis (used by agent_history_redis.py, defaults to dev container Redis):
 REDIS_URL=redis://localhost:6379
 # Configure OTLP exporter (not needed in devcontainer, which sets these via docker-compose):
@@ -21,5 +18,5 @@ APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=YOUR-KEY;IngestionEndpo
 # Configure for Azure AI Search (used by agent_knowledge_aisearch.py):
 AZURE_SEARCH_ENDPOINT=https://YOUR-SEARCH-SERVICE.search.windows.net
 AZURE_SEARCH_KNOWLEDGE_BASE_NAME=YOUR-KB-NAME
-# Optional: Set to log evaluation results to Azure AI Foundry for rich visualization
+# Optional: Set to log evaluation results to Microsoft Foundry for rich visualization
 AZURE_AI_PROJECT=https://YOUR-ACCOUNT.services.ai.azure.com/api/projects/YOUR-PROJECT
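The examples load these variables from `.env` via `load_dotenv`. As a stand-alone illustration of the KEY=VALUE format above, here is a minimal sketch of such parsing; the `load_env_text` helper is hypothetical and not part of the repo (the real code uses python-dotenv):

```python
def load_env_text(text: str) -> dict[str, str]:
    """Hypothetical minimal .env parser: KEY=VALUE lines; blank lines and
    '#' comment lines are ignored (illustration only, not python-dotenv)."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values


sample = """
# API_HOST can be either azure or openai:
API_HOST=azure
OPENAI_MODEL=gpt-5.4
"""
config = load_env_text(sample)
print(config["API_HOST"])      # azure
print(config["OPENAI_MODEL"])  # gpt-5.4
```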

.github/prompts/update_translations.prompt.md
Lines changed: 1 addition & 1 deletion

@@ -4,4 +4,4 @@ description: Use this prompt to update the Spanish translations in the repo.
 model: GPT-5.2 (copilot)
 ---
 
-Update the Spanish translations in the repo according to the guidelines in AGENTS.md. Ensure there are spanish equivalents of each english example. Make sure to keep the translations consistent with the original content and maintain the technical accuracy of the code.
+Update the Spanish translations in the repo according to the guidelines in AGENTS.md. Ensure there are spanish equivalents of each english example. Make sure to keep the translations consistent with the original content and maintain the technical accuracy of the code.

AGENTS.md
Lines changed: 92 additions & 1 deletion

@@ -6,7 +6,7 @@ The agent-framework GitHub repo is here:
 https://github.com/microsoft/agent-framework
 It contains both Python and .NET agent framework code, but we are only using the Python packages in this repo.
 
-MAF is changing rapidly still, so we sometimes need to check the repo changelog and issues to see if there are any breaking changes that might affect our code.
+MAF is changing rapidly still, so we sometimes need to check the repo changelog and issues to see if there are any breaking changes that might affect our code.
 The Python changelog is here:
 https://github.com/microsoft/agent-framework/blob/main/python/CHANGELOG.md
 
@@ -92,3 +92,94 @@ def _on_response_with_body(self, request, response):
 
 HttpLoggingPolicy.on_response = _on_response_with_body
 ```
+
+## Manual test plan
+
+After upgrading dependencies or making changes across examples, use this plan to verify everything works. Run each example with `uv run python examples/<file>.py`.
+
+### No extra setup (Azure OpenAI only)
+
+These work with just `API_HOST=azure` and the standard `.env` from `azd up`:
+
+| Examples | Notes |
+|----------|-------|
+| `agent_basic.py` | Interactive chat loop |
+| `agent_tool.py`, `agent_tools.py` | Tool calling |
+| `agent_session.py` | Session persistence |
+| `agent_with_subagent.py`, `agent_without_subagent.py` | Sub-agent patterns |
+| `agent_supervisor.py` | Supervisor pattern |
+| `agent_middleware.py` | Middleware pipeline |
+| `agent_summarization.py` | Summarization middleware |
+| `agent_tool_approval.py` | Tool approval |
+| `workflow_agents.py`, `workflow_agents_sequential.py`, `workflow_agents_concurrent.py`, `workflow_agents_streaming.py` | Basic workflows |
+| `workflow_conditional.py`, `workflow_conditional_state.py`, `workflow_conditional_state_isolated.py`, `workflow_conditional_structured.py` | Conditional workflows |
+| `workflow_switch_case.py` | Switch/case workflow |
+| `workflow_converge.py`, `workflow_fan_out_fan_in_edges.py` | Converge / fan-out patterns |
+| `workflow_aggregator_ranked.py`, `workflow_aggregator_structured.py`, `workflow_aggregator_summary.py`, `workflow_aggregator_voting.py` | Aggregator workflows |
+| `workflow_multi_selection_edge_group.py` | Multi-selection edges |
+| `workflow_handoffbuilder.py`, `workflow_handoffbuilder_rules.py` | Handoff builder |
+| `workflow_hitl_handoff.py`, `workflow_hitl_requests.py`, `workflow_hitl_requests_structured.py`, `workflow_hitl_tool_approval.py` | HITL workflows |
+| `workflow_hitl_checkpoint.py` | HITL with file-based checkpoints |
+| `agent_knowledge_sqlite.py` | SQLite knowledge provider |
+| `agent_history_sqlite.py` | SQLite history provider (no tools — see [agent-framework#3295](https://github.com/microsoft/agent-framework/issues/3295)) |
+| `agent_memory_mem0.py` | Mem0 memory provider |
+
+### Requires Redis (dev container)
+
+Redis runs automatically in the dev container at `redis://redis:6379`.
+
+| Examples | Notes |
+|----------|-------|
+| `agent_history_redis.py` | Redis history provider (no tools — see [agent-framework#3295](https://github.com/microsoft/agent-framework/issues/3295)) |
+| `agent_memory_redis.py` | Redis memory provider |
+
+### Requires PostgreSQL (dev container)
+
+PostgreSQL runs automatically in the dev container at `postgresql://admin:LocalPasswordOnly@db:5432/postgres`.
+
+| Examples | Notes |
+|----------|-------|
+| `agent_knowledge_pg.py` | PG + pgvector knowledge |
+| `agent_knowledge_pg_rewrite.py` | PG knowledge with query rewrite |
+| `agent_knowledge_postgres.py` | PG knowledge (alternative) |
+| `workflow_hitl_checkpoint_pg.py` | HITL with PG-backed checkpoints |
+
+### Requires Azure AI Search
+
+Needs `AZURE_SEARCH_ENDPOINT` and `AZURE_SEARCH_KNOWLEDGE_BASE_NAME` in `.env`.
+
+| Examples | Notes |
+|----------|-------|
+| `agent_knowledge_aisearch.py` | Azure AI Search knowledge base (agentic mode) |
+
+### Requires MCP server
+
+Start the MCP server first: `uv run python examples/mcp_server.py`
+
+| Examples | Notes |
+|----------|-------|
+| `agent_mcp_local.py` | Local MCP server (stdio) |
+| `agent_mcp_remote.py` | Remote MCP server (SSE) |
+
+### Requires OTel / Aspire
+
+| Examples | Notes |
+|----------|-------|
+| `agent_otel_aspire.py` | Aspire dashboard (runs in dev container at `http://aspire-dashboard:18888`) |
+| `agent_otel_appinsights.py` | Needs `APPLICATIONINSIGHTS_CONNECTION_STRING` in `.env` |
+
+### Slow-running examples (⏱ 2–10 minutes)
+
+These take significantly longer than other examples:
+
+| Examples | Notes |
+|----------|-------|
+| `agent_evaluation.py` | Runs agent + evaluators inline. ~2–3 min. |
+| `agent_evaluation_generate.py` | Generates eval data JSONL. ~2 min. |
+| `agent_evaluation_batch.py` | Batch evaluators on JSONL. ~3–5 min. Needs `eval_data.jsonl` from `agent_evaluation_generate.py`. |
+| `agent_redteam.py` | Red team attack simulation. ~5–10 min. |
+| `workflow_magenticone.py` | Multi-agent MagenticOne orchestration. ~2–5 min. |
+
+### Spanish examples
+
+Spanish files under `examples/spanish/` mirror the English examples exactly (same code, translated strings). After changes, spot-check 3–5 Spanish files to confirm they run correctly.
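The manual test plan above reduces to one `uv run python examples/<file>.py` invocation per file, grouped by prerequisite. A dry-run sketch of that loop; the `GROUPS` mapping shown here is a tiny illustrative subset and the `run_command` helper is hypothetical, not part of the repo:

```python
# Illustrative subset of the plan's groups; the full lists live in AGENTS.md.
GROUPS = {
    "no_extra_setup": ["agent_basic.py", "agent_tool.py", "workflow_switch_case.py"],
    "requires_redis": ["agent_history_redis.py", "agent_memory_redis.py"],
}


def run_command(example: str) -> list[str]:
    # Matches the plan's instruction: `uv run python examples/<file>.py`
    return ["uv", "run", "python", f"examples/{example}"]


# Dry run: print each command instead of executing it.
for group, files in GROUPS.items():
    for f in files:
        print(group, " ".join(run_command(f)))
```

Swapping the `print` for `subprocess.run(run_command(f), check=True)` would execute the plan for real, one group at a time.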

README.md
Lines changed: 8 additions & 33 deletions

@@ -1,7 +1,7 @@
 <!--
 ---
 name: Python Agent Framework Demos
-description: Collection of Python examples for Microsoft Agent Framework using GitHub Models or Azure AI Foundry.
+description: Collection of Python examples for Microsoft Agent Framework using Microsoft Foundry.
 languages:
 - python
 products:
@@ -17,15 +17,14 @@ urlFragment: python-agentframework-demos
 [![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/Azure-Samples/python-agentframework-demos)
 [![Open in Dev Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/Azure-Samples/python-agentframework-demos)
 
-This repository provides examples of [Microsoft Agent Framework](https://learn.microsoft.com/agent-framework/) using LLMs from [GitHub Models](https://github.com/marketplace/models), [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/), or other model providers. GitHub Models are free to use for anyone with a GitHub account, up to a [daily rate limit](https://docs.github.com/github-models/prototyping-with-ai-models#rate-limits).
+This repository provides examples of [Microsoft Agent Framework](https://learn.microsoft.com/agent-framework/) using LLMs from [Microsoft Foundry](https://learn.microsoft.com/azure/ai-foundry/) or other model providers.
 
 * [Getting started](#getting-started)
 * [GitHub Codespaces](#github-codespaces)
 * [VS Code Dev Containers](#vs-code-dev-containers)
 * [Local environment](#local-environment)
 * [Configuring model providers](#configuring-model-providers)
-* [Using GitHub Models](#using-github-models)
-* [Using Azure AI Foundry models](#using-azure-ai-foundry-models)
+* [Using Microsoft Foundry models](#using-microsoft-foundry-models)
 * [Using OpenAI.com models](#using-openaicom-models)
 * [Running the Python examples](#running-the-python-examples)
 * [Resources](#resources)
@@ -95,35 +94,11 @@ The dev container includes a Redis server, which is used by the `agent_history_r
 
 ## Configuring model providers
 
-These examples can be run with Azure AI Foundry, OpenAI.com, or GitHub Models, depending on the environment variables you set. All the scripts reference the environment variables from a `.env` file, and an example `.env.sample` file is provided. Host-specific instructions are below.
+These examples can be run with Microsoft Foundry or OpenAI.com, depending on the environment variables you set. All the scripts reference the environment variables from a `.env` file, and an example `.env.sample` file is provided. Host-specific instructions are below.
 
-## Using GitHub Models
+## Using Microsoft Foundry models
 
-If you open this repository in GitHub Codespaces, you can run the scripts for free using GitHub Models without any additional steps, as your `GITHUB_TOKEN` is already configured in the Codespaces environment.
-
-If you want to run the scripts locally, you need to set up the `GITHUB_TOKEN` environment variable with a GitHub personal access token (PAT). You can create a PAT by following these steps:
-
-1. Go to your GitHub account settings.
-2. Click on "Developer settings" in the left sidebar.
-3. Click on "Personal access tokens" in the left sidebar.
-4. Click on "Tokens (classic)" or "Fine-grained tokens" depending on your preference.
-5. Click on "Generate new token".
-6. Give your token a name and select the scopes you want to grant. For this project, you don't need any specific scopes.
-7. Click on "Generate token".
-8. Copy the generated token.
-9. Set the `GITHUB_TOKEN` environment variable in your terminal or IDE:
-
-```shell
-export GITHUB_TOKEN=your_personal_access_token
-```
-
-10. Optionally, you can use a model other than "gpt-4.1-mini" by setting the `GITHUB_MODEL` environment variable. Use a model that supports function calling, such as: `gpt-5`, `gpt-4.1-mini`, `gpt-4o`, `gpt-4o-mini`, `o3-mini`, `AI21-Jamba-1.5-Large`, `AI21-Jamba-1.5-Mini`, `Codestral-2501`, `Cohere-command-r`, `Ministral-3B`, `Mistral-Large-2411`, `Mistral-Nemo`, `Mistral-small`
-
-## Using Azure AI Foundry models
-
-You can run all examples in this repository using GitHub Models. If you want to run the examples using models from Azure AI Foundry instead, you need to provision the Azure AI resources, which will incur costs.
-
-This project includes infrastructure as code (IaC) to provision Azure OpenAI deployments of "gpt-4.1-mini" and "text-embedding-3-large" via Azure AI Foundry. The IaC is defined in the `infra` directory and uses the Azure Developer CLI to provision the resources.
+This project includes infrastructure as code (IaC) to provision Azure OpenAI deployments of "gpt-5.4" and "text-embedding-3-large" via Microsoft Foundry. The IaC is defined in the `infra` directory and uses the Azure Developer CLI to provision the resources.
 
 1. Make sure the [Azure Developer CLI (azd)](https://aka.ms/install-azd) is installed.
 
@@ -233,7 +208,7 @@ You can run the examples in this repository by executing the scripts in the `exa
 | [agent_otel_aspire.py](examples/agent_otel_aspire.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to the [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
 | [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requires Azure provisioning via `azd provision`. |
 | [agent_evaluation_generate.py](examples/agent_evaluation_generate.py) | Generate synthetic evaluation data for the travel planner agent. |
-| [agent_evaluation.py](examples/agent_evaluation.py) | Evaluate a travel planner agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set `AZURE_AI_PROJECT` in `.env` to log results to [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |
+| [agent_evaluation.py](examples/agent_evaluation.py) | Evaluate a travel planner agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set `AZURE_AI_PROJECT` in `.env` to log results to [Microsoft Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |
 | [agent_evaluation_batch.py](examples/agent_evaluation_batch.py) | Batch evaluation of agent responses using Azure AI Evaluation's `evaluate()` function. |
 | [agent_redteam.py](examples/agent_redteam.py) | Red-team a financial advisor agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/red-teaming-agent) to test resilience against adversarial attacks across risk categories (Violence, HateUnfairness, Sexual, SelfHarm). Requires `AZURE_AI_PROJECT` in `.env`. |
@@ -304,7 +279,7 @@ This example requires an `APPLICATIONINSIGHTS_CONNECTION_STRING` environment var
 
 **Option A: Automatic via `azd provision`**
 
-If you run `azd provision` (see [Using Azure AI Foundry models](#using-azure-ai-foundry-models)), the Application Insights resource is provisioned automatically and the connection string is written to your `.env` file.
+If you run `azd provision` (see [Using Microsoft Foundry models](#using-microsoft-foundry-models)), the Application Insights resource is provisioned automatically and the connection string is written to your `.env` file.
 
 **Option B: Manual from the Azure Portal**
 

examples/agent_basic.py
Lines changed: 3 additions & 9 deletions

@@ -9,7 +9,7 @@
 
 # Configure OpenAI client based on environment
 load_dotenv(override=True)
-API_HOST = os.getenv("API_HOST", "github")
+API_HOST = os.getenv("API_HOST", "azure")
 
 async_credential = None
 if API_HOST == "azure":
@@ -18,17 +18,11 @@
     client = OpenAIChatClient(
         base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
         api_key=token_provider,
-        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
-    )
-elif API_HOST == "github":
-    client = OpenAIChatClient(
-        base_url="https://models.github.ai/inference",
-        api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
+        model=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
     )
 else:
     client = OpenAIChatClient(
-        api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4.1-mini")
+        api_key=os.environ["OPENAI_API_KEY"], model=os.environ.get("OPENAI_MODEL", "gpt-5.4")
     )
 
 agent = Agent(client=client, instructions="You're an informational agent. Answer questions cheerfully.")
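The key API change in this diff is that `OpenAIChatClient` is now constructed with `model=` where pre-1.0 code passed `model_id=`, the GitHub Models branch is gone, and the `API_HOST` default moved from `github` to `azure`. A sketch of the remaining selection logic as a pure function, so it can be checked without network access; the `client_kwargs` helper and its inputs are hypothetical, for illustration only:

```python
def client_kwargs(env: dict[str, str]) -> dict[str, str]:
    """Illustrative helper mirroring the post-upgrade branches in
    agent_basic.py: returns the kwargs the example would pass to
    OpenAIChatClient (note `model`, not the pre-1.0 `model_id`)."""
    host = env.get("API_HOST", "azure")  # default changed from "github" to "azure"
    if host == "azure":
        return {
            "base_url": f"{env['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
            "model": env["AZURE_OPENAI_CHAT_DEPLOYMENT"],
        }
    # Anything else falls through to OpenAI.com
    return {"model": env.get("OPENAI_MODEL", "gpt-5.4")}


kwargs = client_kwargs({
    "API_HOST": "azure",
    "AZURE_OPENAI_ENDPOINT": "https://example.openai.azure.com",
    "AZURE_OPENAI_CHAT_DEPLOYMENT": "my-deployment",
})
print(kwargs["model"])  # my-deployment
```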

examples/agent_evaluation.py
Lines changed: 5 additions & 19 deletions

@@ -28,7 +28,7 @@
 logger.setLevel(logging.INFO)
 
 load_dotenv(override=True)
-API_HOST = os.getenv("API_HOST", "github")
+API_HOST = os.getenv("API_HOST", "azure")
 
 async_credential = None
 if API_HOST == "azure":
@@ -37,33 +37,21 @@
     client = OpenAIChatClient(
         base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
         api_key=token_provider,
-        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
+        model=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
     )
     eval_model_config = AzureOpenAIModelConfiguration(
         type="azure_openai",
         azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
         azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
     )
-elif API_HOST == "github":
-    client = OpenAIChatClient(
-        base_url="https://models.github.ai/inference",
-        api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
-    )
-    eval_model_config = OpenAIModelConfiguration(
-        type="openai",
-        base_url="https://models.github.ai/inference",
-        api_key=os.environ["GITHUB_TOKEN"],
-        model="openai/gpt-4.1-mini",
-    )
 else:
     client = OpenAIChatClient(
-        api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4.1-mini")
+        api_key=os.environ["OPENAI_API_KEY"], model=os.environ.get("OPENAI_MODEL", "gpt-5.4")
     )
     eval_model_config = OpenAIModelConfiguration(
         type="openai",
        api_key=os.environ["OPENAI_API_KEY"],
-        model=os.environ.get("OPENAI_MODEL", "gpt-4.1-mini"),
+        model=os.environ.get("OPENAI_MODEL", "gpt-5.4"),
     )
 
 
@@ -298,9 +286,7 @@ async def main():
 
     intent_result = intent_evaluator(query=eval_query, response=eval_response, tool_definitions=tool_definitions)
     completeness_result = completeness_evaluator(response=response.text, ground_truth=ground_truth)
-    adherence_result = adherence_evaluator(
-        query=eval_query, response=eval_response, tool_definitions=tool_definitions
-    )
+    adherence_result = adherence_evaluator(query=eval_query, response=eval_response, tool_definitions=tool_definitions)
     tool_accuracy_result = tool_accuracy_evaluator(
         query=eval_query, response=eval_response, tool_definitions=tool_definitions
     )
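After this change, agent_evaluation.py builds its evaluator model configuration from only two hosts: an `AzureOpenAIModelConfiguration` for `API_HOST=azure` and an `OpenAIModelConfiguration` otherwise, with the GitHub Models branch removed. A plain-dict sketch of that selection, checkable without the azure-ai-evaluation package; the `eval_model_config` helper here is hypothetical and the dict keys simply mirror the constructor arguments shown in the diff:

```python
def eval_model_config(env: dict[str, str]) -> dict[str, str]:
    """Illustrative: mirrors how agent_evaluation.py now picks between the
    Azure and OpenAI.com evaluator configurations (GitHub branch removed)."""
    if env.get("API_HOST", "azure") == "azure":
        return {
            "type": "azure_openai",
            "azure_endpoint": env["AZURE_OPENAI_ENDPOINT"],
            "azure_deployment": env["AZURE_OPENAI_CHAT_DEPLOYMENT"],
        }
    return {
        "type": "openai",
        "api_key": env["OPENAI_API_KEY"],
        "model": env.get("OPENAI_MODEL", "gpt-5.4"),
    }


cfg = eval_model_config({"API_HOST": "openai", "OPENAI_API_KEY": "sk-test"})
print(cfg["model"])  # gpt-5.4
```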
