Commit a23e190

Merge pull request #2 from pamelafox/pamela-examples
Add new examples, reorganize files, and improve repo setup
2 parents 1d9806c + a0167c2 commit a23e190

38 files changed

Lines changed: 3807 additions & 2394 deletions

.env.sample

Lines changed: 2 additions & 5 deletions
```diff
@@ -1,11 +1,8 @@
-# API_HOST can be either azure, ollama, openai, or github:
+# API_HOST can be either azure, openai, or github:
 API_HOST=azure
 # Configure for Azure:
-AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com/openai/v1
+AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com
 AZURE_OPENAI_CHAT_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
-# Configure for Ollama:
-OLLAMA_ENDPOINT=http://localhost:11434/v1
-OLLAMA_MODEL=llama3.1
 # Configure for OpenAI.com:
 OPENAI_API_KEY=YOUR-OPENAI-KEY
 OPENAI_MODEL=gpt-3.5-turbo
```
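The sample above selects the backend at runtime via `API_HOST`. A minimal sketch (illustrative only, not repo code; the helper name `resolve_client_config` is hypothetical) of how an app might branch on these variables:

```python
import os


def resolve_client_config(env: dict) -> dict:
    """Pick chat-client settings from the API_HOST variable in .env.sample."""
    host = env.get("API_HOST", "github")
    if host == "azure":
        # The sample endpoint no longer includes the /openai/v1 suffix,
        # so client code appends it where needed.
        return {
            "base_url": env["AZURE_OPENAI_ENDPOINT"] + "/openai/v1/",
            "model": env["AZURE_OPENAI_CHAT_DEPLOYMENT"],
        }
    if host == "github":
        return {
            "base_url": "https://models.github.ai/inference",
            "model": env.get("GITHUB_MODEL", "openai/gpt-5-mini"),
        }
    # Default: OpenAI.com with the key and model from the sample.
    return {"base_url": None, "model": env.get("OPENAI_MODEL", "gpt-3.5-turbo")}


print(resolve_client_config({"API_HOST": "openai"}))
# {'base_url': None, 'model': 'gpt-3.5-turbo'}
```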

.gitattributes

Lines changed: 3 additions & 0 deletions
```diff
@@ -0,0 +1,3 @@
+* text=auto
+*.sh text eol=lf
+*.ps1 text eol=crlf
```
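These rules normalize line endings (LF for shell scripts, CRLF for PowerShell). One way to sanity-check them, sketched in a throwaway repo with made-up file names:

```shell
# Sketch: confirm the eol attributes resolve as intended; paths are examples.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '%s\n' '* text=auto' '*.sh text eol=lf' '*.ps1 text eol=crlf' > .gitattributes
git check-attr eol -- deploy.sh deploy.ps1
# deploy.sh: eol: lf
# deploy.ps1: eol: crlf
```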

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -42,4 +42,4 @@ Verify that the following are valid
 * ...

 ## Other Information
-<!-- Add any other helpful information that may be needed here. -->
+<!-- Add any other helpful information that may be needed here. -->
```
Lines changed: 136 additions & 0 deletions
````diff
@@ -0,0 +1,136 @@
+---
+name: review_pr_comments
+description: This prompt is used to review comments on an active pull request and decide whether to accept, iterate, or reject the changes suggested in each comment.
+---
+We have received comments on the current active pull request. Together, we will go through each comment one by one and discuss whether to accept the change, iterate on it, or reject the change.
+
+## Steps to follow:
+
+1. Fetch the active pull request: If available, use the `activePullRequest` tool from the `GitHub Pull Requests` toolset to get the details of the active pull request including the comments. If not, use the GitHub MCP server or GitHub CLI to get the details of the active pull request. Fetch both top level comments and inline comments.
+2. Present a list of the comments with a one-sentence summary of each.
+3. One at a time, present each comment in full detail and ask me whether to accept, iterate, or reject the change. Provide your recommendation for each comment based on best practices, code quality, and project guidelines. Await user's decision before proceeding to the next comment. DO NOT make any changes to the code or files until I have responded with my decision for each comment.
+4. If the decision is to accept or iterate, make the necessary code changes to address the comment. If the decision is to reject, provide a brief explanation of why the change was not made.
+5. Wait for user to affirm completion of any code changes made before moving to the next comment.
+6. Reply to each comment on the pull request with the outcome of our discussion (accepted, iterated, or rejected) along with any relevant explanations.
+
+
+## How to reply to PR review comments
+
+This guide explains how to reply directly to inline review comments on GitHub pull requests.
+
+### API Endpoint
+
+To reply to an inline PR comment, use:
+
+```http
+POST /repos/{owner}/{repo}/pulls/{pull_number}/comments/{comment_id}/replies
+```
+
+With body:
+
+```json
+{
+  "body": "Your reply message"
+}
+```
+
+### Using gh CLI
+
+```bash
+gh api repos/{owner}/{repo}/pulls/{pull_number}/comments/{comment_id}/replies \
+  -X POST \
+  -f body="Your reply message"
+```
+
+### Workflow
+
+1. **Get PR comments**: First fetch the PR review comments to get their IDs:
+
+   ```bash
+   gh api repos/{owner}/{repo}/pulls/{pull_number}/comments
+   ```
+
+2. **Identify comment IDs**: Each comment has an `id` field. For threaded comments, use the root comment's `id`.
+
+3. **Post replies**: For each comment you want to reply to:
+
+   ```bash
+   gh api repos/{owner}/{repo}/pulls/{pull_number}/comments/{comment_id}/replies \
+     -X POST \
+     -f body="Fixed in commit abc123"
+   ```
+
+### Example Replies
+
+For accepted changes:
+
+- "Fixed in {commit_sha}"
+- "Accepted - fixed in {commit_sha}"
+
+For rejected changes:
+
+- "Rejected - {reason}"
+- "Won't fix - {explanation}"
+
+For questions:
+
+- "Good catch, addressed in {commit_sha}"
+
+## Notes
+
+- The `comment_id` is the numeric ID from the comment object, NOT the `node_id`
+- Replies appear as threaded responses under the original comment
+- You can reply to any comment, including bot comments (like Copilot reviews)
+
+### Resolving Conversations
+
+To resolve (mark as resolved) PR review threads, use the GraphQL API:
+
+1. **Get thread IDs**: Query for unresolved threads:
+
+   ```bash
+   gh api graphql -f query='
+   query {
+     repository(owner: "{owner}", name: "{repo}") {
+       pullRequest(number: {pull_number}) {
+         reviewThreads(first: 50) {
+           nodes {
+             id
+             isResolved
+             comments(first: 1) {
+               nodes { body path }
+             }
+           }
+         }
+       }
+     }
+   }'
+   ```
+
+2. **Resolve threads**: Use the `resolveReviewThread` mutation:
+
+   ```bash
+   gh api graphql -f query='
+   mutation {
+     resolveReviewThread(input: {threadId: "PRRT_xxx"}) {
+       thread { isResolved }
+     }
+   }'
+   ```
+
+3. **Resolve multiple threads at once**:
+
+   ```bash
+   gh api graphql -f query='
+   mutation {
+     t1: resolveReviewThread(input: {threadId: "PRRT_xxx"}) { thread { isResolved } }
+     t2: resolveReviewThread(input: {threadId: "PRRT_yyy"}) { thread { isResolved } }
+   }'
+   ```
+
+The thread ID starts with `PRRT_` and can be found in the GraphQL query response.
+
+Note: This skill can be removed once the GitHub MCP server has added built-in support for replying to PR review comments and resolving threads.
+See:
+https://github.com/github/github-mcp-server/issues/1323
+https://github.com/github/github-mcp-server/issues/1768
````
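As a cross-check of the reply endpoint described in the file above, a tiny sketch that builds the REST URL from a comment object using the numeric `id` (never the `node_id`). The owner, repo, and IDs here are made-up examples:

```python
def reply_url(owner: str, repo: str, pull_number: int, comment: dict) -> str:
    """Build the endpoint for replying to an inline PR review comment."""
    # Must use the numeric `id` from the comment object, not `node_id`.
    return (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/pulls/{pull_number}/comments/{comment['id']}/replies"
    )


comment = {"id": 987654321, "node_id": "PRRC_example"}
print(reply_url("octocat", "hello-world", 42, comment))
# https://api.github.com/repos/octocat/hello-world/pulls/42/comments/987654321/replies
```

POSTing a JSON body `{"body": "…"}` to this URL (with an authenticated client) creates the threaded reply, matching the `gh api` invocations shown above.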

.github/workflows/azure-dev.yaml

Lines changed: 0 additions & 1 deletion
```diff
@@ -56,4 +56,3 @@ jobs:
         run: azd provision --no-prompt
         env:
           AZD_INITIAL_ENVIRONMENT_CONFIG: ${{ secrets.AZD_INITIAL_ENVIRONMENT_CONFIG }}
-
```

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -47,7 +47,7 @@ chances of your issue being dealt with quickly:
 * **Suggest a Fix** - if you can't fix the bug yourself, perhaps you can point to what might be
   causing the problem (line of code or commit)

-You can file new issues by providing the above information at the corresponding repository's issues link:
+You can file new issues by providing the above information at the corresponding repository's issues link:
   replace `[organization-name]` and `[repository-name]` in
   `https://github.com/[organization-name]/[repository-name]/issues/new` .
```

LICENSE.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -18,4 +18,4 @@
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE
+SOFTWARE
```

README.md

Lines changed: 11 additions & 7 deletions
```diff
@@ -162,13 +162,17 @@ You can run the examples in this repository by executing the scripts in the `exa

 | Example | Description |
 | ------- | ----------- |
-| [basic.py](examples/basic.py) | Uses Agent Framework to build a basic informational agent. |
-| [tool.py](examples/tool.py) | Uses Agent Framework to build an agent with a single weather tool. |
-| [tools.py](examples/tools.py) | Uses Agent Framework to build a weekend planning agent with multiple tools. |
-| [supervisor.py](examples/supervisor.py) | Uses Agent Framework with a supervisor orchestrating activity and recipe sub-agents. |
-| [magenticone.py](examples/magenticone.py) | Uses Agent Framework to build a MagenticOne agent. |
-| [hitl.py](examples/hitl.py) | Uses Agent Framework with human-in-the-loop (HITL) for tool-enabled agents with human feedback. |
-| [workflow.py](examples/workflow.py) | Uses Agent Framework to build a workflow-based agent. |
+| [agent_basic.py](examples/agent_basic.py) | A basic informational agent. |
+| [agent_tool.py](examples/agent_tool.py) | An agent with a single weather tool. |
+| [agent_tools.py](examples/agent_tools.py) | A weekend planning agent with multiple tools. |
+| [agent_supervisor.py](examples/agent_supervisor.py) | A supervisor orchestrating activity and recipe sub-agents. |
+| [workflow_magenticone.py](examples/workflow_magenticone.py) | A MagenticOne multi-agent workflow. |
+| [workflow_hitl.py](examples/workflow_hitl.py) | Human-in-the-loop (HITL) for tool-enabled agents with human feedback. |
+| [agent_middleware.py](examples/agent_middleware.py) | Agent, chat, and function middleware for logging, timing, and blocking. |
+| [agent_mcp_remote.py](examples/agent_mcp_remote.py) | An agent using a remote MCP server (Microsoft Learn) for documentation search. |
+| [agent_mcp_local.py](examples/agent_mcp_local.py) | An agent connected to a local MCP server (e.g. for expense logging). |
+| [openai_tool_calling.py](examples/openai_tool_calling.py) | Tool calling with the low-level OpenAI SDK, showing manual tool dispatch. |
+| [workflow_basic.py](examples/workflow_basic.py) | A workflow-based agent. |

 ## Resources
```
Lines changed: 46 additions & 50 deletions
```diff
@@ -1,50 +1,46 @@
-import asyncio
-import os
-
-from agent_framework import ChatAgent
-from agent_framework.openai import OpenAIChatClient
-from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
-from dotenv import load_dotenv
-from rich import print
-
-# Configure OpenAI client based on environment
-load_dotenv(override=True)
-API_HOST = os.getenv("API_HOST", "github")
-
-async_credential = None
-if API_HOST == "azure":
-    async_credential = DefaultAzureCredential()
-    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
-    client = OpenAIChatClient(
-        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
-        api_key=token_provider,
-        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
-    )
-elif API_HOST == "github":
-    client = OpenAIChatClient(
-        base_url="https://models.github.ai/inference",
-        api_key=os.environ["GITHUB_TOKEN"],
-        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-5-mini"),
-    )
-elif API_HOST == "ollama":
-    client = OpenAIChatClient(
-        base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
-        api_key="none",
-        model_id=os.environ.get("OLLAMA_MODEL", "llama3.1:latest"),
-    )
-else:
-    client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-5-mini"))
-
-agent = ChatAgent(chat_client=client, instructions="You're an informational agent. Answer questions cheerfully.")
-
-
-async def main():
-    response = await agent.run("Whats weather today in San Francisco?")
-    print(response.text)
-
-    if async_credential:
-        await async_credential.close()
-
-
-if __name__ == "__main__":
-    asyncio.run(main())
+import asyncio
+import os
+
+from agent_framework import ChatAgent
+from agent_framework.openai import OpenAIChatClient
+from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
+from dotenv import load_dotenv
+from rich import print
+
+# Configure OpenAI client based on environment
+load_dotenv(override=True)
+API_HOST = os.getenv("API_HOST", "github")
+
+async_credential = None
+if API_HOST == "azure":
+    async_credential = DefaultAzureCredential()
+    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
+    client = OpenAIChatClient(
+        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
+        api_key=token_provider,
+        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
+    )
+elif API_HOST == "github":
+    client = OpenAIChatClient(
+        base_url="https://models.github.ai/inference",
+        api_key=os.environ["GITHUB_TOKEN"],
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-5-mini"),
+    )
+else:
+    client = OpenAIChatClient(
+        api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-5-mini")
+    )
+
+agent = ChatAgent(chat_client=client, instructions="You're an informational agent. Answer questions cheerfully.")
+
+
+async def main():
+    response = await agent.run("Whats weather today in San Francisco?")
+    print(response.text)
+
+    if async_credential:
+        await async_credential.close()
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
```

examples/agent_mcp_local.py

Lines changed: 67 additions & 0 deletions
```diff
@@ -0,0 +1,67 @@
+import asyncio
+import logging
+import os
+from datetime import datetime
+
+from agent_framework import ChatAgent, MCPStreamableHTTPTool
+from agent_framework.openai import OpenAIChatClient
+from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
+from dotenv import load_dotenv
+from rich import print
+from rich.logging import RichHandler
+
+# Setup logging
+handler = RichHandler(show_path=False, rich_tracebacks=True, show_level=False)
+logging.basicConfig(level=logging.WARNING, handlers=[handler], force=True, format="%(message)s")
+logger = logging.getLogger(__name__)
+logger.setLevel(logging.INFO)
+
+# Configure OpenAI client based on environment
+load_dotenv(override=True)
+API_HOST = os.getenv("API_HOST", "github")
+MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:8000/mcp/")
+
+async_credential = None
+if API_HOST == "azure":
+    async_credential = DefaultAzureCredential()
+    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
+    client = OpenAIChatClient(
+        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
+        api_key=token_provider,
+        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
+    )
+elif API_HOST == "github":
+    client = OpenAIChatClient(
+        base_url="https://models.github.ai/inference",
+        api_key=os.environ["GITHUB_TOKEN"],
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-5-mini"),
+    )
+else:
+    client = OpenAIChatClient(
+        api_key=os.environ["OPENAI_API_KEY"],
+        model_id=os.environ.get("OPENAI_MODEL", "gpt-5-mini"),
+    )
+
+
+async def main() -> None:
+    """Run an agent connected to a local MCP server for expense logging."""
+    async with (
+        MCPStreamableHTTPTool(name="Expenses MCP Server", url=MCP_SERVER_URL) as mcp_server,
+        ChatAgent(
+            chat_client=client,
+            instructions=(
+                "You help users with tasks using the available tools. "
+                f"Today's date is {datetime.now().strftime('%Y-%m-%d')}."
+            ),
+            tools=[mcp_server],
+        ) as agent,
+    ):
+        response = await agent.run("yesterday I bought a laptop for $1200 using my visa.")
+        print(response.text)
+
+    if async_credential:
+        await async_credential.close()
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
```
