
Commit 773a4c6

Add App Insights telemetry export example (agent_otel_appinsights.py)
- New example: agent_otel_appinsights.py (English + Spanish). Uses Pattern 3 from the Agent Framework docs: configure_azure_monitor() + enable_instrumentation() for Azure Application Insights OTel export
- Add azure-monitor-opentelemetry dependency to pyproject.toml
- Add Log Analytics + Application Insights to infra/main.bicep
- Wire APPLICATIONINSIGHTS_CONNECTION_STRING into .env writer scripts
- Document new example and setup in both READMEs
1 parent 34412d4 commit 773a4c6

10 files changed

Lines changed: 703 additions & 18 deletions


.env.sample

Lines changed: 2 additions & 0 deletions
```diff
@@ -10,5 +10,7 @@ OPENAI_MODEL=gpt-3.5-turbo
 GITHUB_MODEL=gpt-5-mini
 GITHUB_TOKEN=YOUR-GITHUB-PERSONAL-ACCESS-TOKEN
 OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
+# Optional: Set to export telemetry to Azure Application Insights
+APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=YOUR-KEY;IngestionEndpoint=https://YOUR-REGION.in.applicationinsights.azure.com/
 # Optional: Set to log evaluation results to Azure AI Foundry for rich visualization
 AZURE_AI_PROJECT=https://YOUR-ACCOUNT.services.ai.azure.com/api/projects/YOUR-PROJECT
```
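The `APPLICATIONINSIGHTS_CONNECTION_STRING` value added above is a semicolon-separated list of `key=value` pairs. A minimal sketch of how it decomposes (the helper function is hypothetical, not part of this commit):

```python
def parse_connection_string(value: str) -> dict[str, str]:
    """Split an Application Insights-style connection string into its key=value parts."""
    parts = {}
    for segment in value.split(";"):
        if segment:  # tolerate a trailing semicolon
            key, _, val = segment.partition("=")
            parts[key] = val
    return parts


sample = (
    "InstrumentationKey=YOUR-KEY;"
    "IngestionEndpoint=https://YOUR-REGION.in.applicationinsights.azure.com/"
)
print(parse_connection_string(sample)["InstrumentationKey"])  # → YOUR-KEY
```

The SDK parses the real string the same general way, which is why both the key and the ingestion endpoint must survive intact when the value is copied into `.env`.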

README.md

Lines changed: 39 additions & 0 deletions
````diff
@@ -174,6 +174,7 @@ You can run the examples in this repository by executing the scripts in the `examples
 | [openai_tool_calling.py](examples/openai_tool_calling.py) | Tool calling with the low-level OpenAI SDK, showing manual tool dispatch. |
 | [workflow_basic.py](examples/workflow_basic.py) | A workflow-based agent. |
 | [agent_otel_aspire.py](examples/agent_otel_aspire.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to the [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
+| [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requires Azure provisioning via `azd provision`. |
 | [agent_evaluation.py](examples/agent_evaluation.py) | Evaluate a travel planner agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set `AZURE_AI_PROJECT` in `.env` to log results to [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |
 
 ## Using the Aspire Dashboard for telemetry
@@ -233,6 +234,44 @@ If you're running locally without Dev Containers, you need to start the Aspire D
 
 For the full Python + Aspire guide, see [Use the Aspire dashboard with Python apps](https://aspire.dev/dashboard/standalone-for-python/).
 
+## Exporting telemetry to Azure Application Insights
+
+The [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) example exports OpenTelemetry traces, metrics, and structured logs to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview).
+
+### Setup
+
+This example requires an `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable. You can get this automatically or manually:
+
+**Option A: Automatic via `azd provision`**
+
+If you run `azd provision` (see [Using Azure AI Foundry models](#using-azure-ai-foundry-models)), the Application Insights resource is provisioned automatically and the connection string is written to your `.env` file.
+
+**Option B: Manual from the Azure Portal**
+
+1. Create an Application Insights resource in the [Azure Portal](https://portal.azure.com).
+2. Copy the connection string from the resource's Overview page.
+3. Add it to your `.env` file:
+
+```sh
+APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...;IngestionEndpoint=...
+```
+
+### Running the example
+
+```sh
+uv run examples/agent_otel_appinsights.py
+```
+
+### Viewing telemetry
+
+After running the example, navigate to your Application Insights resource in the Azure Portal:
+
+* **Transaction search**: See end-to-end traces for agent invocations, chat completions, and tool executions.
+* **Live Metrics**: Monitor real-time request rates and performance.
+* **Performance**: Analyze operation durations and identify bottlenecks.
+
+Telemetry data may take 2–5 minutes to appear in the portal.
+
 ## Resources
 
 * [Agent Framework Documentation](https://learn.microsoft.com/agent-framework/)
````
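The "Viewing telemetry" portal views added in this README section all sit on top of Log Analytics, so the same data can also be pulled with a Kusto (KQL) query. A small sketch of assembling such a query (the `dependencies` table name follows the classic Application Insights schema; the helper itself is hypothetical and not part of this commit):

```python
def recent_agent_telemetry_query(hours: int = 1) -> str:
    """Build a KQL query string for recent dependency calls (agent and tool spans land there)."""
    return "\n".join(
        [
            "dependencies",  # classic Application Insights table for outgoing/internal spans
            f"| where timestamp > ago({hours}h)",
            "| project timestamp, name, duration, success",
            "| order by timestamp desc",
        ]
    )


print(recent_agent_telemetry_query(2))
```

Paste the generated query into the Logs blade of the Application Insights resource; remember the 2–5 minute ingestion delay noted above.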

examples/agent_otel_appinsights.py

Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
```python
import asyncio
import logging
import os
import random
from datetime import datetime, timezone
from typing import Annotated

from agent_framework import ChatAgent
from agent_framework.openai import OpenAIChatClient
from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
from dotenv import load_dotenv
from pydantic import Field
from rich import print
from rich.logging import RichHandler

# Setup logging
handler = RichHandler(show_path=False, rich_tracebacks=True, show_level=False)
logging.basicConfig(level=logging.WARNING, handlers=[handler], force=True, format="%(message)s")
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Configure OpenTelemetry export to Azure Application Insights (if connection string is set)
load_dotenv(override=True)
appinsights_connection_string = os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING")
if appinsights_connection_string:
    from azure.monitor.opentelemetry import configure_azure_monitor
    from agent_framework.observability import create_resource, enable_instrumentation

    os.environ.setdefault("OTEL_SERVICE_NAME", "agent-framework-demo")
    configure_azure_monitor(
        connection_string=appinsights_connection_string,
        resource=create_resource(),
        enable_live_metrics=True,
    )
    enable_instrumentation(enable_sensitive_data=True)
    logger.info("Azure Application Insights export enabled")
else:
    logger.info(
        "Set APPLICATIONINSIGHTS_CONNECTION_STRING in .env to export telemetry to Azure Application Insights. "
        "Run 'azd provision' to automatically provision and configure Application Insights, "
        "or set the connection string manually from the Azure Portal."
    )

# Configure OpenAI client based on environment
API_HOST = os.getenv("API_HOST", "github")

async_credential = None
if API_HOST == "azure":
    async_credential = DefaultAzureCredential()
    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
    client = OpenAIChatClient(
        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
        api_key=token_provider,
        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    )
elif API_HOST == "github":
    client = OpenAIChatClient(
        base_url="https://models.github.ai/inference",
        api_key=os.environ["GITHUB_TOKEN"],
        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-5-mini"),
    )
else:
    client = OpenAIChatClient(
        api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-5-mini")
    )


def get_weather(
    city: Annotated[str, Field(description="City name, spelled out fully")],
) -> dict:
    """Returns weather data for a given city, a dictionary with temperature and description."""
    logger.info(f"Getting weather for {city}")
    weather_options = [
        {"temperature": 72, "description": "Sunny"},
        {"temperature": 60, "description": "Rainy"},
        {"temperature": 55, "description": "Cloudy"},
        {"temperature": 45, "description": "Windy"},
    ]
    return random.choice(weather_options)


def get_current_time(
    timezone_name: Annotated[str, Field(description="Timezone name, e.g. 'US/Eastern', 'Asia/Tokyo', 'UTC'")],
) -> str:
    """Returns the current date and time in UTC (timezone_name is for display context only)."""
    logger.info(f"Getting current time for {timezone_name}")
    now = datetime.now(timezone.utc)
    return f"The current time in {timezone_name} is approximately {now.strftime('%Y-%m-%d %H:%M:%S')} UTC"


agent = ChatAgent(
    name="weather-time-agent",
    chat_client=client,
    instructions="You are a helpful assistant that can look up weather and time information.",
    tools=[get_weather, get_current_time],
)


async def main():
    response = await agent.run("What's the weather in Seattle and what time is it in Tokyo?")
    print(response.text)

    if async_credential:
        await async_credential.close()


if __name__ == "__main__":
    asyncio.run(main())
```

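Both examples describe tool parameters with `Annotated[..., Field(description=...)]`, which is how the framework can derive a tool's parameter schema for the model. The general mechanism can be illustrated with plain `typing` (a stdlib-only sketch; `Desc` is a hypothetical stand-in for pydantic's `Field`, not the framework's actual machinery):

```python
from typing import Annotated, get_args, get_origin, get_type_hints


class Desc:
    """Stand-in marker carrying a human-readable parameter description."""

    def __init__(self, text: str):
        self.text = text


def get_weather(city: Annotated[str, Desc("City name, spelled out fully")]) -> dict:
    ...


def param_descriptions(func) -> dict[str, str]:
    """Collect parameter descriptions from Annotated metadata on a function."""
    out = {}
    hints = get_type_hints(func, include_extras=True)  # include_extras keeps Annotated wrappers
    for name, hint in hints.items():
        if name == "return":
            continue
        if get_origin(hint) is Annotated:
            for meta in get_args(hint)[1:]:  # get_args -> (base type, *metadata)
                if isinstance(meta, Desc):
                    out[name] = meta.text
    return out


print(param_descriptions(get_weather))
```

Tool-calling frameworks walk the same `Annotated` metadata to emit JSON-schema descriptions, which is why the examples spell out each parameter so carefully.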
examples/spanish/README.md

Lines changed: 39 additions & 0 deletions
````diff
@@ -175,6 +175,7 @@ Puedes ejecutar los ejemplos en este repositorio ejecutando los scripts en el di
 | [openai_tool_calling.py](openai_tool_calling.py) | Llamadas a funciones con el SDK de OpenAI de bajo nivel, mostrando despacho manual de herramientas. |
 | [workflow_basic.py](workflow_basic.py) | Usa Agent Framework para crear un agente basado en flujo de trabajo. |
 | [agent_otel_aspire.py](agent_otel_aspire.py) | Un agente con trazas, métricas y logs estructurados de OpenTelemetry exportados al [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
+| [agent_otel_appinsights.py](agent_otel_appinsights.py) | Un agente con trazas, métricas y logs estructurados de OpenTelemetry exportados a [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requiere aprovisionamiento de Azure con `azd provision`. |
 | [agent_evaluation.py](agent_evaluation.py) | Evalúa un agente planificador de viajes usando evaluadores de [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Opcionalmente configura `AZURE_AI_PROJECT` en `.env` para registrar resultados en [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |
 
 ## Usar el Aspire Dashboard para telemetría
@@ -234,6 +235,44 @@ Si ejecutas localmente sin Dev Containers, necesitas iniciar el Aspire Dashboard
 
 Para la guia completa de Python + Aspire, consulta [Usar el Aspire Dashboard con apps de Python](https://aspire.dev/dashboard/standalone-for-python/).
 
+## Exportar telemetría a Azure Application Insights
+
+El ejemplo [agent_otel_appinsights.py](agent_otel_appinsights.py) exporta trazas, métricas y logs estructurados de OpenTelemetry a [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview).
+
+### Configuración
+
+Este ejemplo requiere la variable de entorno `APPLICATIONINSIGHTS_CONNECTION_STRING`. Puedes obtenerla automáticamente o manualmente:
+
+**Opción A: Automática con `azd provision`**
+
+Si ejecutas `azd provision` (consulta [Usar modelos de Azure AI Foundry](#usar-modelos-de-azure-ai-foundry)), el recurso de Application Insights se provisiona automáticamente y la cadena de conexión se escribe en tu archivo `.env`.
+
+**Opción B: Manual desde el Portal de Azure**
+
+1. Crea un recurso de Application Insights en el [Portal de Azure](https://portal.azure.com).
+2. Copia la cadena de conexión desde la página de resumen del recurso.
+3. Agrégala a tu archivo `.env`:
+
+```sh
+APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...;IngestionEndpoint=...
+```
+
+### Ejecutar el ejemplo
+
+```sh
+uv run examples/spanish/agent_otel_appinsights.py
+```
+
+### Ver telemetría
+
+Después de ejecutar el ejemplo, navega a tu recurso de Application Insights en el Portal de Azure:
+
+* **Búsqueda de transacciones**: Ve trazas de extremo a extremo para invocaciones de agentes, completados de chat y ejecuciones de herramientas.
+* **Métricas en vivo**: Monitorea tasas de solicitudes y rendimiento en tiempo real.
+* **Rendimiento**: Analiza duraciones de operaciones e identifica cuellos de botella.
+
+Los datos de telemetría pueden tardar entre 2 y 5 minutos en aparecer en el portal.
+
 ## Recursos
 
 * [Documentación de Agent Framework](https://learn.microsoft.com/agent-framework/)
````
examples/spanish/agent_otel_appinsights.py

Lines changed: 108 additions & 0 deletions

@@ -0,0 +1,108 @@
```python
import asyncio
import logging
import os
import random
from datetime import datetime, timezone
from typing import Annotated

from agent_framework import ChatAgent
from agent_framework.openai import OpenAIChatClient
from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
from dotenv import load_dotenv
from pydantic import Field
from rich import print
from rich.logging import RichHandler

# Configura logging
handler = RichHandler(show_path=False, rich_tracebacks=True, show_level=False)
logging.basicConfig(level=logging.WARNING, handlers=[handler], force=True, format="%(message)s")
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Configura la exportación de OpenTelemetry a Azure Application Insights (si la cadena de conexión está configurada)
load_dotenv(override=True)
appinsights_connection_string = os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING")
if appinsights_connection_string:
    from agent_framework.observability import create_resource, enable_instrumentation
    from azure.monitor.opentelemetry import configure_azure_monitor

    os.environ.setdefault("OTEL_SERVICE_NAME", "agent-framework-demo")
    configure_azure_monitor(
        connection_string=appinsights_connection_string,
        resource=create_resource(),
        enable_live_metrics=True,
    )
    enable_instrumentation(enable_sensitive_data=True)
    logger.info("Exportación a Azure Application Insights habilitada")
else:
    logger.info(
        "Configura APPLICATIONINSIGHTS_CONNECTION_STRING en .env para exportar telemetría a Azure Application "
        "Insights. Ejecuta 'azd provision' para provisionar y configurar Application Insights automáticamente, "
        "o configura la cadena de conexión manualmente desde el Portal de Azure."
    )

# Configura el cliente de OpenAI según el entorno
API_HOST = os.getenv("API_HOST", "github")

async_credential = None
if API_HOST == "azure":
    async_credential = DefaultAzureCredential()
    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
    client = OpenAIChatClient(
        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
        api_key=token_provider,
        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    )
elif API_HOST == "github":
    client = OpenAIChatClient(
        base_url="https://models.github.ai/inference",
        api_key=os.environ["GITHUB_TOKEN"],
        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-5-mini"),
    )
else:
    client = OpenAIChatClient(
        api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-5-mini")
    )


def get_weather(
    city: Annotated[str, Field(description="City name, spelled out fully")],
) -> dict:
    """Devuelve datos meteorológicos para una ciudad: temperatura y descripción."""
    logger.info(f"Obteniendo el clima para {city}")
    weather_options = [
        {"temperature": 22, "description": "Soleado"},
        {"temperature": 15, "description": "Lluvioso"},
        {"temperature": 13, "description": "Nublado"},
        {"temperature": 7, "description": "Ventoso"},
    ]
    return random.choice(weather_options)


def get_current_time(
    timezone_name: Annotated[str, Field(description="Timezone name, e.g. 'US/Eastern', 'Asia/Tokyo', 'UTC'")],
) -> str:
    """Devuelve la fecha y hora actual en UTC (timezone_name es solo para contexto de visualización)."""
    logger.info(f"Obteniendo la hora actual para {timezone_name}")
    now = datetime.now(timezone.utc)
    return f"La hora actual en {timezone_name} es aproximadamente {now.strftime('%Y-%m-%d %H:%M:%S')} UTC"


agent = ChatAgent(
    name="weather-time-agent",
    chat_client=client,
    instructions="Eres un asistente útil que puede consultar información del clima y la hora.",
    tools=[get_weather, get_current_time],
)


async def main():
    response = await agent.run("¿Cómo está el clima en Ciudad de México y qué hora es en Buenos Aires?")
    print(response.text)

    if async_credential:
        await async_credential.close()


if __name__ == "__main__":
    asyncio.run(main())
```

infra/main.bicep

Lines changed: 30 additions & 0 deletions
```diff
@@ -144,6 +144,33 @@ module openAi 'br/public:avm/res/cognitive-services/account:0.7.1' = {
   }
 }
 
+// Log Analytics workspace for Application Insights
+var logAnalyticsName = '${prefix}-loganalytics'
+module logAnalytics 'br/public:avm/res/operational-insights/workspace:0.9.1' = {
+  name: 'loganalytics'
+  scope: resourceGroup
+  params: {
+    name: logAnalyticsName
+    location: location
+    tags: tags
+  }
+}
+
+// Application Insights for OpenTelemetry export
+var appInsightsName = '${prefix}-appinsights'
+module appInsights 'br/public:avm/res/insights/component:0.4.2' = {
+  name: 'appinsights'
+  scope: resourceGroup
+  params: {
+    name: appInsightsName
+    location: location
+    tags: tags
+    workspaceResourceId: logAnalytics.outputs.resourceId
+    kind: 'web'
+    applicationType: 'web'
+  }
+}
+
 output AZURE_LOCATION string = location
 output AZURE_TENANT_ID string = tenant().tenantId
 output AZURE_RESOURCE_GROUP string = resourceGroup.name
@@ -154,3 +181,6 @@ output AZURE_OPENAI_CHAT_MODEL string = azureOpenaiChatModel
 output AZURE_OPENAI_CHAT_DEPLOYMENT string = azureOpenaiChatDeployment
 output AZURE_OPENAI_EMBEDDING_MODEL string = azureOpenaiEmbeddingModel
 output AZURE_OPENAI_EMBEDDING_DEPLOYMENT string = azureOpenaiEmbeddingDeployment
+
+// Specific to Application Insights
+output APPLICATIONINSIGHTS_CONNECTION_STRING string = appInsights.outputs.connectionString
```

infra/write_dot_env.ps1

Lines changed: 2 additions & 0 deletions
```diff
@@ -17,3 +17,5 @@ Add-Content -Path .env -Value "AZURE_OPENAI_CHAT_DEPLOYMENT=$azureOpenAiChatDepl
 Add-Content -Path .env -Value "AZURE_OPENAI_CHAT_MODEL=$azureOpenAiChatModel"
 Add-Content -Path .env -Value "AZURE_OPENAI_EMBEDDING_DEPLOYMENT=$azureOpenAiEmbeddingDeployment"
 Add-Content -Path .env -Value "AZURE_OPENAI_EMBEDDING_MODEL=$azureOpenAiEmbeddingModel"
+$appInsightsConnectionString = azd env get-value APPLICATIONINSIGHTS_CONNECTION_STRING
+Add-Content -Path .env -Value "APPLICATIONINSIGHTS_CONNECTION_STRING=$appInsightsConnectionString"
```

infra/write_dot_env.sh

Lines changed: 1 addition & 0 deletions
```diff
@@ -12,3 +12,4 @@ echo "AZURE_OPENAI_CHAT_DEPLOYMENT=$(azd env get-value AZURE_OPENAI_CHAT_DEPLOYM
 echo "AZURE_OPENAI_CHAT_MODEL=$(azd env get-value AZURE_OPENAI_CHAT_MODEL)" >> .env
 echo "AZURE_OPENAI_EMBEDDING_DEPLOYMENT=$(azd env get-value AZURE_OPENAI_EMBEDDING_DEPLOYMENT)" >> .env
 echo "AZURE_OPENAI_EMBEDDING_MODEL=$(azd env get-value AZURE_OPENAI_EMBEDDING_MODEL)" >> .env
+echo "APPLICATIONINSIGHTS_CONNECTION_STRING=$(azd env get-value APPLICATIONINSIGHTS_CONNECTION_STRING)" >> .env
```

pyproject.toml

Lines changed: 1 addition & 0 deletions
```diff
@@ -14,6 +14,7 @@ dependencies = [
     "faker",
     "fastmcp",
     "opentelemetry-exporter-otlp-proto-grpc",
+    "azure-monitor-opentelemetry",
     "azure-ai-evaluation>=1.15.0",
     "agent-framework-core @ git+https://github.com/microsoft/agent-framework.git@98cd72839e4057d661a58092a3b013993264d834#subdirectory=python/packages/core",
     "agent-framework-devui @ git+https://github.com/microsoft/agent-framework.git@98cd72839e4057d661a58092a3b013993264d834#subdirectory=python/packages/devui",
```
