Professional Literature Research Assistant for AI Agents - More than just an API wrapper
A Domain-Driven Design (DDD) based MCP server that serves as an intelligent research assistant for AI agents, providing task-oriented literature search and analysis capabilities.
✨ What's Included:
- 🔧 44 MCP Tools – Streamlined PubMed, Europe PMC, CORE, NCBI database access, and Research Timeline / Context Graph
- 🖼️ OA Figure Extraction – Pull figure captions, direct image URLs, and PDF links from PMC Open Access articles
- 📖 Docs Site – Browse overview, architecture, quick reference, pipeline tutorials, source contracts, troubleshooting, and deployment in one place at docs/index.html
- 🧩 24 Claude Skills – Ready-to-use workflow guides for AI agents (Claude Code-specific)
- 🤖 Copilot Instructions – VS Code GitHub Copilot integration guide

🌐 Language: English | 繁體中文
📘 Tool Usage Docs: Capability-first guide | Complete index
- Python 3.10+ – Download

- uv (recommended) – Install uv

  ```bash
  # macOS / Linux
  curl -LsSf https://astral.sh/uv/install.sh | sh

  # Windows
  powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  ```

- NCBI Email – Required by NCBI API policy. Any valid email address.

- NCBI API Key (optional) – Get one here for higher rate limits (10 req/s vs 3 req/s)

- OpenAlex API Key (optional) – Set `OPENALEX_API_KEY` to use authenticated OpenAlex requests instead of mailto-only polite-pool auth
```bash
# Option 1: Zero-install with uvx (recommended for trying out)
uvx pubmed-search-mcp

# Option 2: Add as project dependency
uv add pubmed-search-mcp

# Option 3: pip install
pip install pubmed-search-mcp
```

This MCP server works with any MCP-compatible AI tool. Choose your preferred client:
```json
{
  "servers": {
    "pubmed-search": {
      "type": "stdio",
      "command": "uvx",
      "args": ["pubmed-search-mcp"],
      "env": {
        "NCBI_EMAIL": "your@email.com"
      }
    }
  }
}
```

Optional: enable the browser-session PDF fallback once and let tools use it automatically:
```json
{
  "servers": {
    "pubmed-search": {
      "type": "stdio",
      "command": "uvx",
      "args": ["pubmed-search-mcp"],
      "env": {
        "NCBI_EMAIL": "your@email.com",
        "BROWSER_FETCH_CONFIG": "{\"enabled\":true,\"auto_enabled\":true,\"broker_url\":\"http://127.0.0.1:8766/fetch\",\"token\":\"local-dev-token\",\"allowed_hosts\":[\"jamanetwork.com\",\"*.jamanetwork.com\",\"nejm.org\",\"*.nejm.org\"]}"
      }
    }
  }
}
```

With this setting, `get_fulltext` will automatically try the local broker for institutional or publisher landing pages. Pass `allow_browser_session=false` only when you want to suppress it for a specific call.
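The `allowed_hosts` entries above mix exact hosts with `*.domain` wildcards. A minimal sketch of how such an allowlist could be checked (illustrative only, not the server's actual implementation) is an `fnmatch`-based test:

```python
from fnmatch import fnmatch

def host_allowed(host: str, allowed_hosts: list[str]) -> bool:
    """Return True if `host` matches any allowlist entry.

    Entries may be exact hosts ("nejm.org") or wildcard patterns
    ("*.nejm.org"), mirroring the BROWSER_FETCH_CONFIG example above.
    Note that "*.nejm.org" does NOT match the bare "nejm.org", which is
    why both forms appear in the allowlist.
    """
    host = host.lower()
    return any(fnmatch(host, pattern.lower()) for pattern in allowed_hosts)

allowed = ["jamanetwork.com", "*.jamanetwork.com", "nejm.org", "*.nejm.org"]
print(host_allowed("cdn.jamanetwork.com", allowed))  # True
print(host_allowed("example.com", allowed))          # False
```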
Run the local broker with download interception:

```bash
uv sync --extra browser-broker
uv run playwright install chromium
uv run pubmed-browser-fetch-broker --token local-dev-token
```

The broker launches a persistent browser profile with download interception enabled. Log in once inside that broker-controlled browser window, and subsequent PDF downloads will be captured automatically without a native "Save As" dialog.
```json
{
  "mcpServers": {
    "pubmed-search": {
      "command": "uvx",
      "args": ["pubmed-search-mcp"],
      "env": {
        "NCBI_EMAIL": "your@email.com"
      }
    }
  }
}
```

Config file locations:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
```bash
claude mcp add pubmed-search -- uvx pubmed-search-mcp
```

Or add to `.mcp.json` in your project root:

```json
{
  "mcpServers": {
    "pubmed-search": {
      "command": "uvx",
      "args": ["pubmed-search-mcp"],
      "env": {
        "NCBI_EMAIL": "your@email.com"
      }
    }
  }
}
```

The Zed editor supports MCP servers natively. Add to your Zed `settings.json`:
```json
{
  "context_servers": {
    "pubmed-search": {
      "command": "uvx",
      "args": ["pubmed-search-mcp"],
      "env": {
        "NCBI_EMAIL": "your@email.com"
      }
    }
  }
}
```

Tip: Open the Command Palette → `zed: open settings` to edit, or go to Agent Panel → Settings → "Add Custom Server".
OpenClaw uses MCP servers via the mcp-adapter plugin. Install the adapter first:

```bash
openclaw plugins install mcp-adapter
```

Then add to `~/.openclaw/openclaw.json`:
```json
{
  "plugins": {
    "entries": {
      "mcp-adapter": {
        "enabled": true,
        "config": {
          "servers": [
            {
              "name": "pubmed-search",
              "transport": "stdio",
              "command": "uvx",
              "args": ["pubmed-search-mcp"],
              "env": {
                "NCBI_EMAIL": "your@email.com"
              }
            }
          ]
        }
      }
    }
  }
}
```

Restart the gateway after configuration:

```bash
openclaw gateway restart
openclaw plugins list   # Should show: mcp-adapter | loaded
```

```json
{
  "mcpServers": {
    "pubmed-search": {
      "command": "uvx",
      "args": ["pubmed-search-mcp"],
      "env": {
        "NCBI_EMAIL": "your@email.com"
      },
      "alwaysAllow": [],
      "disabled": false
    }
  }
}
```

Any MCP-compatible client can use this server via stdio transport:

```bash
# Command
uvx pubmed-search-mcp

# With environment variable
NCBI_EMAIL=your@email.com uvx pubmed-search-mcp
```

Note: `NCBI_EMAIL` is required by NCBI API policy. Optionally set `NCBI_API_KEY` for higher rate limits (10 req/s vs 3 req/s). 📚 Detailed Integration Guides: See docs/INTEGRATIONS.md for all environment variables, Copilot Studio setup, Docker deployment, proxy configuration, and troubleshooting.
Core Positioning: The intelligent middleware between AI Agents and academic search engines.
Other tools give you raw API access. We give you vocabulary translation + intelligent routing + research analysis:
| Challenge | Our Solution |
|---|---|
| Agent uses ICD codes, PubMed needs MeSH | ✅ Auto ICD→MeSH conversion |
| Multiple databases, different APIs | ✅ Unified Search single entry point |
| Clinical questions need structured search | ✅ PICO toolkit (`parse_pico` + `generate_search_queries` for Agent-driven workflow) |
| Typos in medical terms | ✅ ESpell auto-correction |
| Too many results from one source | ✅ Parallel multi-source with dedup |
| Need to trace research evolution | ✅ Research Timeline & Tree with landmark detection, diagnostics, and sub-topic branching |
| Citation context is unclear | ✅ Citation Tree forward/backward/network |
| Can't access full text | ✅ Multi-source fulltext (Europe PMC, CORE, CrossRef) |
| Gene/drug info scattered across DBs | ✅ NCBI Extended (Gene, PubChem, ClinVar) |
| Need cutting-edge preprints | ✅ Preprint search (arXiv, medRxiv, bioRxiv) with peer-review filtering |
| Export to reference managers | ✅ One-click export (RIS, BibTeX, CSV, MEDLINE) |
- Vocabulary Translation Layer – The agent speaks naturally; we translate to each database's terminology (MeSH, ICD-10, text-mined entities)
- Unified Search Gateway – One `unified_search()` call, auto-dispatched to PubMed/Europe PMC/CORE/OpenAlex
- PICO Toolkit – `parse_pico()` decomposes clinical questions into P/I/C/O elements; the agent then calls `generate_search_queries()` per element and builds the Boolean query
- Research Timeline & Lineage Tree – Detect milestones with policy-driven heuristics, identify landmark papers via multi-signal scoring, surface timeline diagnostics, and visualize research evolution as branching trees by sub-topic
- Citation Network Analysis – Build multi-level citation trees to map an entire research landscape from a single paper
- Full Research Lifecycle – From search → discovery → full text → analysis → export, all in one server
- Agent-First Design – Output optimized for machine decision-making, not human reading
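The multi-source dedupe step mentioned above can be sketched roughly like this (an illustrative sketch only, not the server's actual code; the DOI-then-PMID-then-title precedence is an assumption):

```python
def dedupe_articles(articles: list[dict]) -> list[dict]:
    """Keep the first occurrence of each article across sources.

    Articles are matched by DOI first, then PMID, then a normalized
    title, so the same paper returned by PubMed and OpenAlex collapses
    into one record.
    """
    seen: set[str] = set()
    unique = []
    for art in articles:
        key = (
            art.get("doi")
            or art.get("pmid")
            or "".join(art.get("title", "").lower().split())
        )
        if key and key not in seen:
            seen.add(key)
            unique.append(art)
    return unique

results = [
    {"pmid": "1", "doi": "10.1/x", "title": "A", "source": "pubmed"},
    {"doi": "10.1/x", "title": "A", "source": "openalex"},  # same DOI, dropped
    {"title": "Another Paper", "source": "core"},
]
print(len(dedupe_articles(results)))  # 2
```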
This MCP server integrates with multiple academic databases and APIs:
| Source | Coverage | Vocabulary | Auto-Convert | Description |
|---|---|---|---|---|
| NCBI PubMed | 36M+ articles | MeSH | ✅ Native | Primary biomedical literature |
| NCBI Entrez | Multi-DB | MeSH | ✅ Native | Gene, PubChem, ClinVar |
| Europe PMC | 33M+ | Text-mined | ✅ Extraction | Full text XML access |
| CORE | 200M+ | None | ➡️ Free-text | Open access aggregator |
| Semantic Scholar | 200M+ | S2 Fields | ➡️ Free-text | AI-powered recommendations |
| OpenAlex | 250M+ | Concepts | ➡️ Free-text | Open scholarly metadata |
| NIH iCite | PubMed | N/A | N/A | Citation metrics (RCR) |

📍 Key: ✅ = Full vocabulary support | ➡️ = Query pass-through (no controlled vocabulary)
ICD Codes: Auto-detected and converted to MeSH before PubMed search
```bash
# Required
NCBI_EMAIL=your@email.com        # Required by NCBI policy

# Optional - For higher rate limits
NCBI_API_KEY=your_ncbi_api_key   # Get from: https://www.ncbi.nlm.nih.gov/account/settings/
CORE_API_KEY=your_core_api_key   # Get from: https://core.ac.uk/services/api
S2_API_KEY=your_s2_api_key       # Get from: https://www.semanticscholar.org/product/api

# Optional - Network settings
HTTP_PROXY=http://proxy:8080     # HTTP proxy for API requests
HTTPS_PROXY=https://proxy:8080   # HTTPS proxy for API requests
```

```
┌───────────────────────────────────────────────────────────────────────┐
│                               AI AGENT                                │
│  "Find papers about I10 hypertension treatment in diabetic patients"  │
└───────────────────────────────────┬───────────────────────────────────┘
                                    ▼
┌───────────────────────────────────────────────────────────────────────┐
│                    PUBMED SEARCH MCP (MIDDLEWARE)                     │
│                                                                       │
│  1. VOCABULARY TRANSLATION                                            │
│     • ICD-10 "I10"  → MeSH "Hypertension"                             │
│     • "diabetic"    → MeSH "Diabetes Mellitus"                        │
│     • ESpell: "hypertention" → "hypertension"                         │
│                                                                       │
│  2. INTELLIGENT ROUTING                                               │
│     PubMed (36M+, MeSH)   ·   Europe PMC (33M+, fulltext)             │
│     CORE (200M+, OA)      ·   OpenAlex (250M+, metadata)              │
│                                                                       │
│  3. RESULT AGGREGATION: Dedupe + Rank + Enrich                        │
└───────────────────────────────────┬───────────────────────────────────┘
                                    ▼
┌───────────────────────────────────────────────────────────────────────┐
│                            UNIFIED RESULTS                            │
│  • 150 unique papers (deduplicated from 4 sources)                    │
│  • Ranked by relevance + citation impact (RCR)                        │
│  • Full text links enriched from Europe PMC                           │
└───────────────────────────────────────────────────────────────────────┘
```
If you want to understand the tool surface as a usable system, do not start by memorizing 40 tool names.
Start with the Tools Usage Guide: it compresses the current 40 tools into 8 capability families, explains the theoretical lower bound, and gives intent-based routing for both humans and agents.
```
┌───────────────────────────────────────────────────────────────┐
│                      SEARCH ENTRY POINT                       │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│  unified_search()  ← Single entry for all sources             │
│    │                                                          │
│    ├── Quick search   → Direct multi-source query             │
│    ├── PICO hints     → Detects comparison, shows P/I/C/O     │
│    └── ICD expansion  → Auto ICD→MeSH conversion              │
│                                                               │
│  Sources: PubMed · Europe PMC · CORE · OpenAlex               │
│  Auto: Deduplicate → Rank → Enrich full-text links            │
│                                                               │
├───────────────────────────────────────────────────────────────┤
│                      QUERY INTELLIGENCE                       │
│                                                               │
│  generate_search_queries() → MeSH expansion + synonyms        │
│  parse_pico()              → PICO element decomposition       │
│  analyze_search_query()    → Query analysis w/o execution     │
│                                                               │
└───────────────────────────────────────────────────────────────┘
```
```
                Found important paper (PMID)
                             │
         ┌───────────────────┼───────────────────┐
         ▼                   ▼                   ▼
 ┌───────────────┐   ┌───────────────┐   ┌───────────────┐
 │   BACKWARD    │   │    SIMILAR    │   │    FORWARD    │
 │   ◀───────    │   │   ───────     │   │   ───────▶    │
 │               │   │               │   │               │
 │  get_article  │   │ find_related  │   │ find_citing   │
 │  _references  │   │  _articles    │   │  _articles    │
 │               │   │               │   │               │
 │  Foundation   │   │   Similar     │   │   Follow-up   │
 │    papers     │   │    topic      │   │   research    │
 └───────────────┘   └───────────────┘   └───────────────┘
```
`fetch_article_details()` → Detailed article metadata
`get_citation_metrics()` → iCite RCR, citation percentile
`build_citation_tree()` → Full network visualization (6 formats)
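A depth-limited traversal in the spirit of `build_citation_tree()` can be sketched as follows. This is an illustrative sketch; `get_refs` is a hypothetical callable standing in for a real reference-fetching API call:

```python
from collections import deque

def build_tree(root_pmid: str, depth: int, get_refs) -> dict:
    """Breadth-first expansion of a citation tree.

    `get_refs(pmid)` returns the list of PMIDs referenced by `pmid`.
    Expansion stops at `depth` levels; already-seen PMIDs are not
    re-expanded, so cycles and duplicates are handled.
    """
    tree = {root_pmid: []}
    queue = deque([(root_pmid, 0)])
    while queue:
        pmid, level = queue.popleft()
        if level >= depth:
            continue
        for ref in get_refs(pmid):
            tree[pmid].append(ref)
            if ref not in tree:          # avoid cycles / duplicates
                tree[ref] = []
                queue.append((ref, level + 1))
    return tree

# Toy reference graph standing in for real citation data
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
tree = build_tree("A", depth=2, get_refs=lambda p: graph.get(p, []))
print(sorted(tree))  # ['A', 'B', 'C', 'D']
```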
| Category | Tools |
|---|---|
| Full Text | `get_fulltext` – Multi-source retrieval (Europe PMC, CORE, PubMed, CrossRef) |
| Figures | `get_article_figures` – Extract figure labels, captions, image URLs, and PDF links from PMC Open Access articles |
| Figure-aware Full Text | `get_fulltext(include_figures=True)` – Embed figure metadata alongside structured fulltext |
| Text Mining | `get_text_mined_terms` – Extract genes, diseases, chemicals |
| Export | `prepare_export` – RIS, BibTeX, CSV, MEDLINE, JSON |
Use the PMC Open Access path when an agent needs evidence figures, not just article text:
- `get_article_figures(identifier="PMC12086443")` → Figure labels, captions, image URLs, and PDF/article links
- `get_fulltext(pmcid="PMC7096777", include_figures=True)` → Structured fulltext with figures inline
- Figure output preserves article context, so agents can connect each figure back to the sections where it is mentioned
| Tool | Description |
|---|---|
| `search_gene` | Search NCBI Gene database |
| `get_gene_details` | Gene details by NCBI Gene ID |
| `get_gene_literature` | PubMed articles linked to a gene |
| `search_compound` | Search PubChem compounds |
| `get_compound_details` | Compound details by PubChem CID |
| `get_compound_literature` | PubMed articles linked to a compound |
| `search_clinvar` | Search ClinVar clinical variants |
| Tool | Description |
|---|---|
| `build_research_timeline` | Build timeline/tree with landmark detection and formatted diagnostics. Output: text, tree, mermaid, mindmap, json |
| `analyze_timeline_milestones` | Analyze milestone distribution with diagnostics payload |
| `compare_timelines` | Compare multiple topic timelines with per-topic diagnostics |
| Tool | Description |
|---|---|
| `configure_institutional_access` | Configure your institution's link resolver |
| `get_institutional_link` | Generate an OpenURL access link |
| `list_resolver_presets` | List resolver presets |
| `test_institutional_access` | Test the resolver configuration |

| Tool | Description |
|---|---|
| `convert_icd_mesh` | Convert between ICD codes and MeSH terms (bidirectional) |
| `unified_search` | Auto-detects ICD codes in queries and expands them to MeSH |
| Tool | Description |
|---|---|
| `get_session_pmids` | Retrieve cached PMID lists |
| `get_cached_article` | Get an article from the session cache (no API cost) |
| `get_session_summary` | Session status overview |

Dynamic MCP resources are also available for agents that can read resources directly:
- `session://context` → active session status
- `session://last-search` → latest search metadata
- `session://last-search/pmids` → latest PMID list + CSV form
- `session://last-search/results` → cached article payloads for the latest search
`manage_pipeline` is the primary facade for pipeline CRUD, history, and scheduling. The more specific pipeline tools remain available as compatibility wrappers.
| Tool | Description |
|---|---|
| `manage_pipeline` | Primary facade for save, list, load, delete, history, and schedule actions |
| `save_pipeline` | Save a pipeline config for later reuse (YAML/JSON, auto-validated) |
| `list_pipelines` | List saved pipelines (filter by tag/scope) |
| `load_pipeline` | Load a pipeline from a name or file for review/editing |
| `delete_pipeline` | Delete a pipeline and its execution history |
| `get_pipeline_history` | View execution history with article diff analysis |
| `schedule_pipeline` | Create, update, or remove recurring pipeline schedules |
Step-by-step tutorials:
- English: docs/PIPELINE_MODE_TUTORIAL.en.md
- 繁體中文: docs/PIPELINE_MODE_TUTORIAL.md
| Tool | Description |
|---|---|
| `analyze_figure_for_search` | Analyze a scientific figure to drive a search |
| `search_biomedical_images` | Search biomedical images across Open-i (X-ray, microscopy, photos, diagrams) |
Search arXiv, medRxiv, and bioRxiv preprint servers via `unified_search` options flags:
- `preprints`: Enable dedicated preprint search and show results in a separate section.
- `all_types`: Keep non-peer-reviewed content in main aggregated results.

Recommended combinations:
- Empty `options`: Peer-reviewed results only.
- `options="preprints"`: Peer-reviewed main results plus a separate preprint section.
- `options="preprints, all_types"`: Separate preprint section plus non-peer-reviewed content retained in main results.
- `options="all_types"`: No dedicated preprint crawl, but non-peer-reviewed items from searched sources are retained.
Preprint detection – articles are identified as preprints by:
- Article type from the source API (OpenAlex, CrossRef, Semantic Scholar)
- arXiv ID present without a PubMed ID
- Known preprint server source or journal name
- DOI prefix matching preprint servers (e.g., `10.1101/` → bioRxiv/medRxiv, `10.48550/` → arXiv)
`unified_search` can append a lightweight research lineage view built from PMID-backed ranked results:

| Option Flag | Description |
|---|---|
| `context_graph` | Append a Research Context Graph preview to Markdown output and include `research_context` in JSON output |

This is useful when an agent needs quick thematic branching without making a second `build_research_timeline` call.
`unified_search` can also front-load source coverage and decision hints for agents that want routing help before reading the ranked list:

| Option Flag | Description |
|---|---|
| `counts_first` | Add a source-count table, coverage summary, and next-tool recommendations to the response |

Example:

```python
unified_search(query="remimazolam ICU sedation", options="counts_first")
```

This mode is useful when the agent should decide whether to expand a source, inspect the lead PMID, fetch fulltext, extract figures, or pivot into timeline exploration.
When the MCP client provides a progress token, `unified_search`, `build_research_timeline`, `analyze_timeline_milestones`, `compare_timelines`, `get_fulltext`, and `get_text_mined_terms` emit progress updates for their major phases.
This reduces the "black box" wait time for agents during longer searches.
```python
# Agent just asks naturally - middleware handles everything
unified_search(query="remimazolam ICU sedation", limit=20)

# Or with clinical codes - auto-converted to MeSH
unified_search(query="I10 treatment in E11.9 patients")
#                     ↑ ICD-10         ↑ ICD-10
#                     Hypertension     Type 2 Diabetes
```

Simple path – `unified_search` can search directly (no PICO decomposition):

```python
# unified_search searches as-is; detects "A vs B" pattern and shows PICO hints in metadata
unified_search(query="Is remimazolam better than propofol for ICU sedation?")
# → Multi-source keyword search + PICO hint metadata in output
# ⚠️ This does NOT auto-decompose PICO or expand MeSH!
# For structured PICO search, use the Agent workflow below
```

Agent workflow – PICO decomposition + MeSH expansion (recommended for clinical questions):
```
┌───────────────────────────────────────────────────────────────┐
│    "Is remimazolam better than propofol for ICU sedation?"    │
└───────────────────────────────┬───────────────────────────────┘
                                ▼
┌───────────────────────────────────────────────────────────────┐
│                          parse_pico()                         │
│    ┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────┐   │
│    │    P    │    │    I    │    │    C    │    │    O    │   │
│    │   ICU   │    │remimaz- │    │propofol │    │sedation │   │
│    │patients │    │  olam   │    │         │    │outcomes │   │
│    └─────────┘    └─────────┘    └─────────┘    └─────────┘   │
└───────────────────────────────┬───────────────────────────────┘
                                ▼
┌───────────────────────────────────────────────────────────────┐
│            generate_search_queries() × 4 (parallel)           │
│                                                               │
│   P → "Intensive Care Units"[MeSH]                            │
│   I → "remimazolam" [Supplementary Concept], "CNS 7056"       │
│   C → "Propofol"[MeSH], "Diprivan"                            │
│   O → "Conscious Sedation"[MeSH], "Deep Sedation"[MeSH]       │
└───────────────────────────────┬───────────────────────────────┘
                                ▼
┌───────────────────────────────────────────────────────────────┐
│               Agent combines with Boolean logic               │
│                                                               │
│   (P) AND (I) AND (C) AND (O)  → High precision               │
│   (P) AND (I OR C) AND (O)     → High recall                  │
└───────────────────────────────┬───────────────────────────────┘
                                ▼
┌───────────────────────────────────────────────────────────────┐
│           unified_search() (auto multi-source + dedup)        │
│                                                               │
│   PubMed + Europe PMC + CORE + OpenAlex → dedupe & rank       │
└───────────────────────────────────────────────────────────────┘
```
```python
# Step 1: Parse clinical question
parse_pico("Is remimazolam better than propofol for ICU sedation?")
# Returns: P=ICU patients, I=remimazolam, C=propofol, O=sedation outcomes

# Step 2: Get MeSH for each element (parallel!)
generate_search_queries(topic="ICU patients")   # P
generate_search_queries(topic="remimazolam")    # I
generate_search_queries(topic="propofol")       # C
generate_search_queries(topic="sedation")       # O

# Step 3: Agent combines with Boolean
query = '("Intensive Care Units"[MeSH]) AND (remimazolam OR "CNS 7056") AND propofol AND sedation'

# Step 4: Search (auto multi-source, dedup, rank)
unified_search(query=query)
```

```python
# Found landmark paper PMID: 33475315
find_related_articles(pmid="33475315")   # Similar methodology
find_citing_articles(pmid="33475315")    # Who built on this?
get_article_references(pmid="33475315")  # What's the foundation?

# Build complete research map
build_citation_tree(pmid="33475315", depth=2, output_format="mermaid")
```

```python
# Research a gene
search_gene(query="BRCA1", organism="human")
get_gene_literature(gene_id="672", limit=20)

# Research a drug compound
search_compound(query="propofol")
get_compound_literature(cid="4943", limit=20)
```

```python
# Export last search results
prepare_export(pmids="last", format="ris")     # → EndNote/Zotero
prepare_export(pmids="last", format="bibtex")  # → LaTeX

# Retrieve full text for a selected paper from the last search
get_fulltext(pmid="12345678", extended_sources=True)
```

```python
# Include preprints alongside peer-reviewed results
unified_search(query="COVID-19 vaccine efficacy", options="preprints")
# → Main results (peer-reviewed) + separate preprint section (arXiv, medRxiv, bioRxiv)

# Include preprints and retain non-peer-reviewed items in main results
unified_search(query="CRISPR gene therapy", options="preprints, all_types")
# → Separate preprint section + non-peer-reviewed items retained in main results

# Only peer-reviewed (default behavior)
unified_search("diabetes treatment")
# → Preprints from any source automatically filtered out

# Add a research context graph preview to the same search response
unified_search("remimazolam ICU sedation", options="context_graph")
```

```python
# Save a template-based pipeline through the primary facade
manage_pipeline(
    action="save",
    name="icu_sedation_weekly",
    config="template: pico\nparams:\n  P: ICU patients\n  I: remimazolam\n  C: propofol\n  O: delirium",
    tags="anesthesia,sedation",
    description="Weekly ICU sedation monitoring"
)

# Save a custom DAG pipeline
manage_pipeline(
    action="save",
    name="brca1_comprehensive",
    config="""
steps:
  - id: expand
    action: expand
    params: { topic: BRCA1 breast cancer }
  - id: pubmed
    action: search
    params: { query: BRCA1, sources: pubmed, limit: 50 }
  - id: expanded
    action: search
    inputs: [expand]
    params: { strategy: mesh, sources: pubmed,openalex, limit: 50 }
  - id: merged
    action: merge
    inputs: [pubmed, expanded]
    params: { method: rrf }
  - id: enriched
    action: metrics
    inputs: [merged]
output:
  limit: 30
  ranking: quality
"""
)

# Execute a saved pipeline
unified_search(pipeline="saved:icu_sedation_weekly")

# List & manage
manage_pipeline(action="list", tag="anesthesia")
manage_pipeline(action="load", source="brca1_comprehensive")   # Review YAML
manage_pipeline(action="history", name="icu_sedation_weekly")  # View past runs
```

```
┌───────────────────────────────────────────────────────────────┐
│                   SEARCH MODE DECISION TREE                   │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│  "What kind of search do I need?"                             │
│      │                                                        │
│      ├── Know exactly what to search?                         │
│      │     └── unified_search(query="topic keywords")         │
│      │           Quick, auto-routing to best sources          │
│      │                                                        │
│      ├── Have a clinical question (A vs B)?                   │
│      │     └── parse_pico() → generate_search_queries() × N   │
│      │           Agent builds Boolean → unified_search()      │
│      │                                                        │
│      ├── Need comprehensive systematic coverage?              │
│      │     └── generate_search_queries() → parallel search    │
│      │           MeSH expansion, multiple strategies, merge   │
│      │                                                        │
│      └── Exploring from a key paper?                          │
│            └── find_related/citing/references →               │
│                  build_citation_tree                          │
│                  Citation network, research context           │
│                                                               │
└───────────────────────────────────────────────────────────────┘
```
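The `method: rrf` merge step in the DAG pipeline example refers to Reciprocal Rank Fusion. A minimal sketch of RRF (k=60 is the conventional constant from the literature; the server's actual parameters may differ):

```python
def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked ID lists with Reciprocal Rank Fusion.

    Each item scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked highly by multiple sources rise to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

pubmed   = ["p1", "p2", "p3"]
openalex = ["p2", "p4", "p1"]
print(rrf_merge([pubmed, openalex]))  # ['p2', 'p1', 'p4', 'p3']
```

Items found by both sources (`p1`, `p2`) outrank single-source items even when neither source ranked them first, which is the behavior the `merge` step relies on.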
| Mode | Entry Point | Best For | Auto-Features |
|---|---|---|---|
| Quick | `unified_search()` | Fast topic search | ICD→MeSH, multi-source, dedup |
| PICO | `parse_pico()` → Agent | Clinical questions | Agent: decompose → MeSH expand → Boolean |
| Systematic | `generate_search_queries()` | Literature reviews | MeSH expansion, synonyms |
| Exploration | `find_*_articles()` | From key paper | Citation network, related |
Pre-built workflow guides in `.claude/skills/`, divided into Usage Skills (for using the MCP server) and Development Skills (for maintaining the project):
| Skill | Description |
|---|---|
| `pubmed-quick-search` | Basic search with filters |
| `pubmed-systematic-search` | MeSH expansion, comprehensive |
| `pubmed-pico-search` | Clinical question decomposition |
| `pubmed-paper-exploration` | Citation tree, related articles |
| `pubmed-gene-drug-research` | Gene/PubChem/ClinVar |
| `pubmed-fulltext-access` | Europe PMC, CORE full text |
| `pubmed-export-citations` | RIS/BibTeX/CSV export |
| `pubmed-multi-source-search` | Cross-database unified search |
| `pubmed-mcp-tools-reference` | Complete tool reference guide |
| `pipeline-persistence` | Save, load, reuse search plans |
| Skill | Description |
|---|---|
| `changelog-updater` | Auto-update CHANGELOG.md |
| `code-refactor` | DDD architecture refactoring |
| `code-reviewer` | Code quality & security review |
| `ddd-architect` | DDD scaffold for new features |
| `git-doc-updater` | Sync docs before commits |
| `git-precommit` | Pre-commit workflow orchestration |
| `memory-checkpoint` | Save context to Memory Bank |
| `memory-updater` | Update Memory Bank files |
| `project-init` | Initialize new projects |
| `readme-i18n` | Multilingual README sync |
| `readme-updater` | Sync README with code changes |
| `roadmap-updater` | Update ROADMAP.md status |
| `test-generator` | Generate test suites |
📍 Location: `.claude/skills/*/SKILL.md` (Claude Code-specific, and the single source of truth for repo skills). Do not mirror or split repo skills into `.github/skills/`. These repo skills are project-scoped and should remain version-controlled. Personal cross-project skills belong in a user directory such as `~/.copilot/skills/` or `~/.claude/skills/`, not in this repository.
This project uses Domain-Driven Design (DDD) architecture, with literature research domain knowledge as the core model.
```
src/pubmed_search/
├── domain/                    # Core business logic
│   └── entities/article.py    # UnifiedArticle, Author, etc.
├── application/               # Use cases
│   ├── search/                # QueryAnalyzer, ResultAggregator
│   ├── export/                # Citation export (RIS, BibTeX...)
│   └── session/               # SessionManager
├── infrastructure/            # External systems
│   ├── ncbi/                  # Entrez, iCite, Citation Exporter
│   ├── sources/               # Europe PMC, CORE, CrossRef...
│   └── http/                  # HTTP clients
├── presentation/              # User interfaces
│   ├── mcp_server/            # MCP tools, prompts, resources
│   │   └── tools/             # discovery, strategy, pico, export...
│   └── api/                   # REST API (Copilot Studio)
└── shared/                    # Cross-cutting concerns
    ├── exceptions.py          # Unified error handling
    └── async_utils.py         # Rate limiter, retry, circuit breaker
```
| Mechanism | Description |
|---|---|
| Session | Auto-create, auto-switch |
| Cache | Auto-cache search results, avoid duplicate API calls |
| Rate Limit | Auto-comply with NCBI API limits (0.34s/request without an API key, 0.1s with one) |
| MeSH Lookup | `generate_search_queries()` auto-queries the NCBI MeSH database |
| ESpell | Auto spelling correction (remifentanyl → remifentanil) |
| Query Analysis | Each suggested query shows how PubMed actually interprets it |
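The rate-limit mechanism boils down to enforcing a minimum interval between outgoing requests. A sketch of that idea (illustrative only; the server's actual limiter in `async_utils.py` also handles retries and circuit breaking):

```python
import time

class MinIntervalLimiter:
    """Enforce a minimum interval between outgoing API calls.

    NCBI allows ~3 req/s without an API key (0.34s interval) and
    ~10 req/s with one (0.1s interval).
    """
    def __init__(self, interval: float) -> None:
        self.interval = interval
        self._last: float | None = None

    def wait(self) -> None:
        now = time.monotonic()
        if self._last is not None:
            sleep_for = self._last + self.interval - now
            if sleep_for > 0:
                time.sleep(sleep_for)
        self._last = time.monotonic()

limiter = MinIntervalLimiter(interval=0.1)  # with NCBI_API_KEY
start = time.monotonic()
for _ in range(3):
    limiter.wait()  # a real E-utilities request would go here
print(time.monotonic() - start >= 0.2)  # True: at least two intervals elapsed
```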
Our Core Value: We are the intelligent middleware between the agent and the search engines, automatically handling vocabulary standardization so the agent doesn't need to know each database's terminology.
Different data sources use different controlled vocabulary systems. This server provides automatic conversion:
| API / Database | Vocabulary System | Auto-Conversion |
|---|---|---|
| PubMed / NCBI | MeSH (Medical Subject Headings) | ✅ Full support via `expand_with_mesh()` |
| ICD Codes | ICD-10-CM / ICD-9-CM | ✅ Auto-detect & convert to MeSH |
| Europe PMC | Text-mined entities (Gene, Disease, Chemical) | ✅ `get_text_mined_terms()` extraction |
| OpenAlex | OpenAlex Concepts (deprecated) | ➡️ Free-text only |
| Semantic Scholar | S2 Field of Study | ➡️ Free-text only |
| CORE | None | ➡️ Free-text only |
| CrossRef | None | ➡️ Free-text only |
When searching with ICD codes (e.g., I10 for Hypertension), `unified_search()` automatically:
- Detects ICD-10/ICD-9 patterns via `detect_and_expand_icd_codes()`
- Looks up corresponding MeSH terms from the internal mapping (`ICD10_TO_MESH`, `ICD9_TO_MESH`)
- Expands the query with MeSH synonyms for a comprehensive search

```python
# Agent calls unified_search with clinical terminology
unified_search(query="I10 treatment outcomes")

# Server auto-expands to a PubMed-compatible query
"(I10 OR Hypertension[MeSH]) treatment outcomes"
```

📖 Full architecture documentation: ARCHITECTURE.md
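The detect-and-expand step can be sketched with a regex over the rough ICD-10 code shape plus a lookup table. This is a toy illustration, not the server's `detect_and_expand_icd_codes()`; the real `ICD10_TO_MESH` table is far larger, and the quoting style here is an assumption:

```python
import re

# Tiny illustrative mapping standing in for the real ICD10_TO_MESH table.
ICD10_TO_MESH = {
    "I10": "Hypertension",
    "E11.9": "Diabetes Mellitus, Type 2",
}

# Rough ICD-10 code shape: letter + two digits + optional decimal part.
ICD10_PATTERN = re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b")

def expand_icd_codes(query: str) -> str:
    """Replace each recognized ICD-10 code with (code OR "MeSH term"[MeSH])."""
    def repl(match: re.Match) -> str:
        code = match.group(0)
        mesh = ICD10_TO_MESH.get(code)
        if mesh:
            return f'({code} OR "{mesh}"[MeSH])'
        return code  # unknown code: leave untouched
    return ICD10_PATTERN.sub(repl, query)

print(expand_icd_codes("I10 treatment outcomes"))
# (I10 OR "Hypertension"[MeSH]) treatment outcomes
```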
When calling `generate_search_queries("remimazolam sedation")`, internally it runs:
- ESpell Correction – fix spelling errors
- MeSH Query – `Entrez.esearch(db="mesh")` to get standard vocabulary
- Synonym Extraction – get synonyms from MeSH Entry Terms
- Query Analysis – analyze how PubMed interprets each query
```json
{
  "mesh_terms": [
    {
      "input": "remimazolam",
      "preferred": "remimazolam [Supplementary Concept]",
      "synonyms": ["CNS 7056", "ONO 2745"]
    }
  ],
  "all_synonyms": ["CNS 7056", "ONO 2745", ...],
  "suggested_queries": [
    {
      "id": "q1_title",
      "query": "(remimazolam sedation)[Title]",
      "purpose": "Exact title match - highest precision",
      "estimated_count": 8,
      "pubmed_translation": "\"remimazolam sedation\"[Title]"
    },
    {
      "id": "q3_and",
      "query": "(remimazolam AND sedation)",
      "purpose": "All keywords required",
      "estimated_count": 561,
      "pubmed_translation": "(\"remimazolam\"[Supplementary Concept] OR \"remimazolam\"[All Fields]) AND (\"sedate\"[All Fields] OR ...)"
    }
  ]
}
```

Value of Query Analysis: the agent thinks `remimazolam AND sedation` only searches these two words, but PubMed actually expands to the Supplementary Concept plus synonyms, so results go from 8 to 561. This helps the agent understand the difference between intent and actual search.
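Once the agent has the preferred terms and synonyms from output like the above, combining them into a Boolean query is straightforward. A hypothetical helper mirroring what an agent does before calling `unified_search()` (the function name and quoting style are illustrative assumptions):

```python
def build_term_query(preferred: str, synonyms: list[str]) -> str:
    """OR together a preferred MeSH term and its synonyms into one block."""
    parts = [f'"{preferred}"'] + [f'"{syn}"' for syn in synonyms]
    return "(" + " OR ".join(parts) + ")"

# Using the synonyms from the generate_search_queries() example above
i_block = build_term_query("remimazolam", ["CNS 7056", "ONO 2745"])
c_block = build_term_query("Propofol", ["Diprivan"])
query = f"{i_block} AND {c_block} AND sedation"
print(query)
# ("remimazolam" OR "CNS 7056" OR "ONO 2745") AND ("Propofol" OR "Diprivan") AND sedation
```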
Enable HTTPS secure communication for production environments.
```bash
# Step 1: Generate SSL certificates
./scripts/generate-ssl-certs.sh

# Step 2: Start HTTPS service (Docker)
./scripts/start-https-docker.sh up

# Verify deployment
curl -k https://localhost/
```

| Service | URL | Description |
|---|---|---|
| MCP SSE | `https://localhost/sse` | SSE connection (MCP) |
| Messages | `https://localhost/messages` | MCP POST |
| Health | `https://localhost/health` | Health check |
```json
{
  "mcpServers": {
    "pubmed-search": {
      "url": "https://localhost/sse"
    }
  }
}
```

Integrate PubMed Search MCP with Microsoft 365 Copilot (Word, Teams, Outlook)!

```bash
# Start with Streamable HTTP transport (required by Copilot Studio)
uv run python run_server.py --transport streamable-http --port 8765

# Enable Copilot-compatible HTTP semantics while keeping full tool schemas
uv run python run_server.py --transport streamable-http --copilot-compatible --port 8765

# Or use the dedicated script with ngrok
./scripts/start-copilot-studio.sh --with-ngrok
```

| Field | Value |
|---|---|
|---|---|
| Server name | PubMed Search |
| Server URL | https://your-server.com/mcp |
| Authentication | None (or API Key) |
📚 Full documentation: copilot-studio/README.md

Use `--copilot-compatible` with `run_server.py` for Copilot HTTP semantics, or `run_copilot.py` if you also need simplified tool schemas.

⚠️ Note: SSE transport has been deprecated since August 2025. Use `streamable-http`.
📚 More documentation:
- Architecture → ARCHITECTURE.md
- Pipeline tutorial (English) → docs/PIPELINE_MODE_TUTORIAL.en.md
- Pipeline tutorial (zh-TW) → docs/PIPELINE_MODE_TUTORIAL.md
- Deployment guide → DEPLOYMENT.md
- Copilot Studio → copilot-studio/README.md
| Layer | Feature | Description |
|---|---|---|
| HTTPS | TLS 1.2/1.3 encryption | All traffic encrypted via Nginx |
| Rate Limiting | 30 req/s | Nginx level protection |
| Security Headers | XSS/CSRF protection | X-Frame-Options, X-Content-Type-Options |
| SSE Optimization | 24h timeout | Long-lived connections for real-time |
| No Database | Stateless | No SQL injection risk |
| No Secrets | In-memory only | No credentials stored |
See DEPLOYMENT.md for detailed deployment instructions.
Export your search results in formats compatible with major reference managers:
| Format | Compatible With | Use Case |
|---|---|---|
| RIS | EndNote, Zotero, Mendeley | Universal import |
| BibTeX | LaTeX, Overleaf, JabRef | Academic writing |
| CSV | Excel, Google Sheets | Data analysis |
| MEDLINE | PubMed native format | Archiving |
| JSON | Programmatic access | Custom processing |
- Core: PMID, Title, Authors, Journal, Year, Volume, Issue, Pages
- Identifiers: DOI, PMC ID, ISSN
- Content: Abstract (HTML tags cleaned)
- Metadata: Language, Publication Type, Keywords
- Access: DOI URL, PMC URL, Full-text availability
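The RIS output built from these fields follows the standard two-letter-tag format. A minimal sketch of a record writer (illustrative only; `prepare_export()` covers far more fields, such as volume, issue, pages, abstract, and identifiers):

```python
def to_ris(article: dict) -> str:
    """Render one article as a minimal RIS record."""
    lines = ["TY  - JOUR"]                     # journal article record type
    for author in article.get("authors", []):
        lines.append(f"AU  - {author}")
    lines.append(f"TI  - {article['title']}")
    lines.append(f"JO  - {article['journal']}")
    lines.append(f"PY  - {article['year']}")
    if article.get("doi"):
        lines.append(f"DO  - {article['doi']}")
    lines.append("ER  - ")                     # end of record
    return "\n".join(lines)

# Hypothetical article record for illustration
record = to_ris({
    "authors": ["Hansen S"],
    "title": "Remimazolam for ICU sedation",
    "journal": "Example J Anesth",
    "year": 2024,
    "doi": "10.1000/example",
})
print(record.splitlines()[0])  # TY  - JOUR
```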
- BibTeX exports use pylatexenc for proper LaTeX encoding
- Nordic characters (ø, æ, å), umlauts (ü, ö, ä), and accents are correctly converted
- Example: Søren Hansen → S{\o}ren Hansen
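The kind of conversion pylatexenc performs can be illustrated with a tiny hand-rolled mapping (pylatexenc itself covers the full Unicode range; this sketch handles only the characters listed above):

```python
# Minimal illustrative mapping of non-ASCII letters to LaTeX escapes.
LATEX_CHARS = {
    "ø": r"{\o}", "æ": r"{\ae}", "å": r"{\aa}",
    "ü": r'{\"u}', "ö": r'{\"o}', "ä": r'{\"a}',
}

def latex_escape(text: str) -> str:
    """Replace known non-ASCII letters with their LaTeX escape sequences."""
    return "".join(LATEX_CHARS.get(ch, ch) for ch in text)

print(latex_escape("Søren Hansen"))  # S{\o}ren Hansen
```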
GitHub will show a "Cite this repository" button generated from CITATION.cff. If you use PubMed Search MCP in research, methods sections, or internal technical reports, prefer the GitHub-generated citation or reuse the repository metadata directly.
```bibtex
@software{pubmed_search_mcp,
  title  = {PubMed Search MCP},
  author = {u9401066},
  url    = {https://github.com/u9401066/pubmed-search-mcp}
}
```

Apache License 2.0 – see LICENSE
