Commit 8b48544

Merge remote-tracking branch 'origin/main' into hitl
2 parents: 3c8adb6 + d0c53e7

90 files changed: 3,922 additions & 94 deletions


AGENTS.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -37,6 +37,7 @@ Each example .py file should have a corresponding file with the same name under
 * Identifiers (functions/classes/vars): English
 * User-facing output/data (e.g., example responses, sample values): Spanish
 * HITL control words: bilingual (approve/aprobar, exit/salir)
+* Agent and workflow names: English ("TravelPlannerAgent" should be the same in both versions, not "AgentePlanificadorDeViajes")

 Use informal (tuteo) LATAM Spanish, tu not usted, puedes not podes, etc. The content is technical so if a word is best kept in English, then do so.
```

README.md

Lines changed: 10 additions & 0 deletions
```diff
@@ -194,6 +194,7 @@ You can run the examples in this repository by executing the scripts in the `exa
 | [agent_with_subagent.py](examples/agent_with_subagent.py) | Context isolation with sub-agents to keep prompts focused on relevant tools. |
 | [agent_without_subagent.py](examples/agent_without_subagent.py) | Context bloat example where one agent carries all tool schemas in a single prompt. |
 | [agent_summarization.py](examples/agent_summarization.py) | Context compaction via summarization middleware to reduce token usage in long conversations. |
+| [workflow_magenticone.py](examples/workflow_magenticone.py) | A MagenticOne multi-agent workflow. |
 | [agent_middleware.py](examples/agent_middleware.py) | Agent, chat, and function middleware for logging, timing, and blocking. |
 | [agent_knowledge_aisearch.py](examples/agent_knowledge_aisearch.py) | Knowledge retrieval (RAG) using Azure AI Search with AgentFrameworkAzureAISearchRAG. |
 | [agent_knowledge_sqlite.py](examples/agent_knowledge_sqlite.py) | Knowledge retrieval (RAG) using a custom context provider with SQLite FTS5. |
@@ -204,15 +205,24 @@ You can run the examples in this repository by executing the scripts in the `exa
 | [agent_mcp_local.py](examples/agent_mcp_local.py) | An agent connected to a local MCP server (e.g. for expense logging). |
 | [openai_tool_calling.py](examples/openai_tool_calling.py) | Tool calling with the low-level OpenAI SDK, showing manual tool dispatch. |
 | [workflow_rag_ingest.py](examples/workflow_rag_ingest.py) | A RAG ingestion pipeline using plain Python executors: fetch a document with markitdown, split into chunks, and embed with an OpenAI model. |
+| [workflow_fan_out_fan_in_edges.py](examples/workflow_fan_out_fan_in_edges.py) | Fan-out/fan-in with explicit edge groups using `add_fan_out_edges` and `add_fan_in_edges`. |
+| [workflow_aggregator_summary.py](examples/workflow_aggregator_summary.py) | Fan-out/fan-in with LLM summarization: synthesize expert outputs into an executive brief. |
+| [workflow_aggregator_structured.py](examples/workflow_aggregator_structured.py) | Fan-out/fan-in with LLM structured extraction into a typed Pydantic model (`response_format`). |
+| [workflow_aggregator_voting.py](examples/workflow_aggregator_voting.py) | Fan-out/fan-in with majority-vote aggregation across multiple classifiers (pure logic tally). |
+| [workflow_aggregator_ranked.py](examples/workflow_aggregator_ranked.py) | Fan-out/fan-in with LLM-as-judge ranking: score and rank multiple candidates into a typed list. |
 | [workflow_agents.py](examples/workflow_agents.py) | A workflow with AI agents as executors: a Writer drafts content and a Reviewer provides feedback. |
 | [workflow_agents_sequential.py](examples/workflow_agents_sequential.py) | A sequential orchestration using `SequentialBuilder`: Writer and Reviewer run in order while sharing full conversation history. |
 | [workflow_agents_streaming.py](examples/workflow_agents_streaming.py) | The same Writer → Reviewer workflow using `run(stream=True)` to observe `executor_invoked`, `executor_completed`, and streaming `output` events in real-time. |
+| [workflow_agents_concurrent.py](examples/workflow_agents_concurrent.py) | Concurrent orchestration using `ConcurrentBuilder`: run specialist agents in parallel and collect merged conversations. |
 | [workflow_conditional.py](examples/workflow_conditional.py) | A minimal workflow with conditional edges: the Reviewer routes to a Publisher (approved) or Editor (needs revision) based on a sentinel token. |
 | [workflow_conditional_structured.py](examples/workflow_conditional_structured.py) | The same conditional-edge routing pattern, but with structured reviewer output (`response_format`) for typed branch decisions instead of sentinel string matching. |
 | [workflow_conditional_state.py](examples/workflow_conditional_state.py) | A stateful conditional workflow with iterative revision loops: stores the latest draft in workflow state and publishes from that state after approval. |
 | [workflow_conditional_state_isolated.py](examples/workflow_conditional_state_isolated.py) | The stateful conditional workflow using a `create_workflow(...)` factory to build fresh agents/workflow per task for state isolation and thread safety. |
 | [workflow_switch_case.py](examples/workflow_switch_case.py) | A workflow with switch-case routing: a Classifier agent uses structured outputs to categorize a message and route to a specialized handler. |
+| [workflow_multi_selection_edge_group.py](examples/workflow_multi_selection_edge_group.py) | LLM-powered multi-selection routing using `add_multi_selection_edge_group` to activate one-or-many downstream handlers. |
 | [workflow_converge.py](examples/workflow_converge.py) | A branch-and-converge workflow: Reviewer routes to Publisher or Editor, then converges before final summary output. |
+| [workflow_handoffbuilder.py](examples/workflow_handoffbuilder.py) | Autonomous handoff orchestration using `HandoffBuilder` (agents transfer control without human-in-the-loop). |
+| [workflow_handoffbuilder_rules.py](examples/workflow_handoffbuilder_rules.py) | Handoff orchestration with explicit routing rules using `HandoffBuilder.add_handoff()`. |
 | [agent_otel_aspire.py](examples/agent_otel_aspire.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to the [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
 | [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requires Azure provisioning via `azd provision`. |
 | [agent_evaluation_generate.py](examples/agent_evaluation_generate.py) | Generate synthetic evaluation data for the travel planner agent. |
```
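Among the new aggregators above, `workflow_aggregator_voting.py` is described as a pure-logic tally rather than an LLM call. The core of a majority-vote aggregation can be sketched in plain Python (the function name and labels here are illustrative, not the example's actual code):

```python
from collections import Counter


def majority_vote(labels: list[str]) -> str:
    """Return the most common label; ties break toward the first-seen label."""
    counts = Counter(labels)
    return counts.most_common(1)[0][0]


# Simulated outputs from several classifier branches after a fan-out:
votes = ["positive", "negative", "positive", "positive"]
print(majority_vote(votes))  # positive
```

In a fan-out/fan-in workflow, each branch would contribute one label and the fan-in executor would run a tally like this over the collected results.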

examples/agent_middleware.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -88,7 +88,7 @@
     client = OpenAIChatClient(
         base_url="https://models.github.ai/inference",
         api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
     )
 else:
     client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
```
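The only change in this file (and in the sibling examples below) is the default model for GitHub Models. The surrounding selection pattern is a plain environment-variable fallback, which can be sketched on its own; the helper name here is illustrative:

```python
import os


def resolve_model_id(default: str = "openai/gpt-4.1-mini") -> str:
    """Pick the model id: an explicit GITHUB_MODEL env var wins, else the default."""
    return os.getenv("GITHUB_MODEL", default)


# With GITHUB_MODEL unset, the new default is used; setting the
# variable overrides it without touching the code.
model_id = resolve_model_id()
```

This keeps the examples runnable out of the box while letting users swap models per run, e.g. `GITHUB_MODEL=openai/gpt-4o python examples/agent_middleware.py`.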

examples/agent_summarization.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -77,7 +77,7 @@
     client = OpenAIChatClient(
         base_url="https://models.github.ai/inference",
         api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
     )
 else:
     client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
```

examples/agent_with_subagent.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -82,7 +82,7 @@
     client = OpenAIChatClient(
         base_url="https://models.github.ai/inference",
         api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
     )
 else:
     client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
```

examples/agent_without_subagent.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -70,7 +70,7 @@
     client = OpenAIChatClient(
         base_url="https://models.github.ai/inference",
         api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
    )
 else:
     client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
```

examples/spanish/README.md

Lines changed: 10 additions & 0 deletions
```diff
@@ -189,6 +189,7 @@ Puedes ejecutar los ejemplos en este repositorio ejecutando los scripts en el di
 | [agent_with_subagent.py](agent_with_subagent.py) | Aislamiento de contexto con subagentes para mantener los prompts enfocados en herramientas relevantes. |
 | [agent_without_subagent.py](agent_without_subagent.py) | Ejemplo de inflado de contexto cuando un solo agente carga todos los esquemas de herramientas en un mismo prompt. |
 | [agent_summarization.py](agent_summarization.py) | Compactación de contexto mediante middleware de resumen para reducir el uso de tokens en conversaciones largas. |
+| [workflow_magenticone.py](workflow_magenticone.py) | Un workflow multi-agente MagenticOne. |
 | [agent_middleware.py](agent_middleware.py) | Middleware de agente, chat y funciones para logging, timing y bloqueo. |
 | [agent_knowledge_aisearch.py](agent_knowledge_aisearch.py) | Recuperación de conocimiento (RAG) usando Azure AI Search con AgentFrameworkAzureAISearchRAG. |
 | [agent_knowledge_sqlite.py](agent_knowledge_sqlite.py) | Recuperación de conocimiento (RAG) usando un proveedor de contexto personalizado con SQLite FTS5. |
@@ -199,15 +200,24 @@ Puedes ejecutar los ejemplos en este repositorio ejecutando los scripts en el di
 | [agent_mcp_local.py](agent_mcp_local.py) | Un agente conectado a un servidor MCP local (p. ej. para registro de gastos). |
 | [openai_tool_calling.py](openai_tool_calling.py) | Llamadas a herramientas con el SDK de OpenAI de bajo nivel, mostrando despacho manual de herramientas. |
 | [workflow_rag_ingest.py](workflow_rag_ingest.py) | Un pipeline de ingesta para RAG con ejecutores Python puros: descarga un documento con markitdown, lo divide en fragmentos y genera embeddings con un modelo de OpenAI. |
+| [workflow_fan_out_fan_in_edges.py](workflow_fan_out_fan_in_edges.py) | Fan-out/fan-in con grupos de aristas explícitos usando `add_fan_out_edges` y `add_fan_in_edges`. |
+| [workflow_aggregator_summary.py](workflow_aggregator_summary.py) | Fan-out/fan-in con resumen por LLM: sintetiza salidas de expertos en un brief ejecutivo. |
+| [workflow_aggregator_structured.py](workflow_aggregator_structured.py) | Fan-out/fan-in con extracción estructurada por LLM en un modelo Pydantic tipado (`response_format`). |
+| [workflow_aggregator_voting.py](workflow_aggregator_voting.py) | Fan-out/fan-in con agregación por voto mayoritario entre clasificadores (conteo de lógica pura). |
+| [workflow_aggregator_ranked.py](workflow_aggregator_ranked.py) | Fan-out/fan-in con LLM como juez: puntúa y ordena múltiples candidatos en una lista tipada. |
 | [workflow_agents.py](workflow_agents.py) | Un workflow con agentes de IA como ejecutores: un Escritor redacta contenido y un Revisor da retroalimentación. |
 | [workflow_agents_sequential.py](workflow_agents_sequential.py) | Una orquestación secuencial usando `SequentialBuilder`: Escritor y Revisor se ejecutan en orden compartiendo todo el historial de la conversación. |
 | [workflow_agents_streaming.py](workflow_agents_streaming.py) | El mismo workflow Escritor → Revisor usando `run(stream=True)` para observar los eventos `executor_invoked`, `executor_completed` y `output` en tiempo real. |
+| [workflow_agents_concurrent.py](workflow_agents_concurrent.py) | Orquestación concurrente usando `ConcurrentBuilder`: ejecuta agentes especialistas en paralelo y junta las conversaciones. |
 | [workflow_conditional.py](workflow_conditional.py) | Un workflow mínimo con aristas condicionales: el Revisor enruta al Publicador (aprobado) o al Editor (necesita revisión) según una señal de texto. |
 | [workflow_conditional_structured.py](workflow_conditional_structured.py) | El mismo patrón de enrutamiento con aristas condicionales, pero usando salida estructurada del revisor (`response_format`) para decisiones tipadas en vez de matching por cadena. |
 | [workflow_conditional_state.py](workflow_conditional_state.py) | Un workflow condicional con estado y bucle iterativo: guarda el último borrador en el estado del workflow y publica desde ese estado tras la aprobación. |
 | [workflow_conditional_state_isolated.py](workflow_conditional_state_isolated.py) | El workflow condicional con estado usando una fábrica `create_workflow(...)` para crear agentes/workflow nuevos por tarea y así aislar estado e hilos de agente. |
 | [workflow_switch_case.py](workflow_switch_case.py) | Un workflow con enrutamiento switch-case: un agente Clasificador usa salidas estructuradas para categorizar un mensaje y enrutarlo al manejador especializado. |
+| [workflow_multi_selection_edge_group.py](workflow_multi_selection_edge_group.py) | Enrutamiento multi-selección con LLM usando `add_multi_selection_edge_group` para activar uno o varios manejadores. |
 | [workflow_converge.py](workflow_converge.py) | Un workflow con rama y convergencia: Revisor enruta a Publicador o Editor y luego converge antes del resumen final. |
+| [workflow_handoffbuilder.py](workflow_handoffbuilder.py) | Orquestación de handoff autónoma usando `HandoffBuilder` (los agentes se transfieren el control sin HITL). |
+| [workflow_handoffbuilder_rules.py](workflow_handoffbuilder_rules.py) | Orquestación de handoff con reglas explícitas usando `HandoffBuilder.add_handoff()`. |
 | [agent_otel_aspire.py](agent_otel_aspire.py) | Un agente con trazas, métricas y logs estructurados de OpenTelemetry exportados al [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
 | [agent_otel_appinsights.py](agent_otel_appinsights.py) | Un agente con trazas, métricas y logs estructurados de OpenTelemetry exportados a [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requiere aprovisionamiento de Azure con `azd provision`. |
 | [agent_evaluation_generate.py](agent_evaluation_generate.py) | Genera datos sintéticos de evaluación para el agente planificador de viajes. |
```

examples/spanish/agent_middleware.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -89,7 +89,7 @@
     client = OpenAIChatClient(
         base_url="https://models.github.ai/inference",
         api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
     )
 else:
     client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
```

examples/spanish/agent_summarization.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -76,7 +76,7 @@
     client = OpenAIChatClient(
         base_url="https://models.github.ai/inference",
         api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
     )
 else:
     client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
```

examples/spanish/agent_with_subagent.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -82,7 +82,7 @@
     client = OpenAIChatClient(
         base_url="https://models.github.ai/inference",
         api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
     )
 else:
     client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
```
