ScrapeMed
Data Scraping for PubMed Central
⭐ Used by Duke University to power medical generative AI research.
⭐ Enables pythonic object-oriented access to a massive amount of research data. PMC constitutes over 14% of The Pile.
⭐ Natural language `Paper` querying and `Paper` embedding, powered by LangChain and ChromaDB
⭐ Easy to integrate with pandas for data science workflows
Installation
Available on PyPI! Simply `pip install scrapemed`.
Feature List
- Scraping API for PubMed Central (PMC) ✅
- Data Validation ✅
- Markup Language Cleaning ✅
- Processes all PMC XML into `Paper` objects ✅
- Dataset building functionality (`paperSet`s) ✅
- Semantic paper vectorization with `ChromaDB` ✅
- Natural language `Paper` querying ✅
- Integration with `pandas` ✅
- `paperSet` visualization ✅
- Direct Search for Papers by PMCID on PMC ✅
- Advanced Term Search for Papers on PMC ✅
Introduction
ScrapeMed is designed to make large-scale data science projects relying on PubMed Central (PMC) easy. The raw XML that can be downloaded from PMC is inconsistent and messy, and ScrapeMed aims to solve that problem at scale. ScrapeMed downloads, validates, cleans, and parses data from nearly all PMC articles into Paper objects which can then be used to build datasets (paperSets), or investigated in detail for literature reviews.
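To make this concrete, a hypothetical quickstart might look like the sketch below. Only the `Paper` class and `.query()` method are named in this README; the module path, the `from_pmc` constructor, and its parameters are assumptions for illustration — consult the docs for the exact API.

```python
# Pseudocode sketch -- module path and constructor are assumptions,
# NOT confirmed by this README; see the ScrapeMed docs for the real API.
from scrapemed.paper import Paper  # assumed import path

# Download, validate, clean, and parse one PMC article into a Paper object.
paper = Paper.from_pmc(pmcid=7067710, email="you@example.com")  # assumed signature

# Query the paper with natural language (a documented Paper feature).
print(paper.query("What was the sample size?"))
```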
Beyond the heavy-lifting performed behind the scenes by ScrapeMed to standardize data scraped from PMC, a number of features are included to make data science and literature review work easier. A few are listed below:
- `Paper`s can be queried with natural language [`.query()`], or simply chunked and embedded for storage in a vector DB [`.vectorize()`]. `Paper`s can also be converted to pandas Series easily [`.to_relational()`] for data science workflows.
- `paperSet`s can be visualized [`.visualize()`], or converted to pandas DataFrames [`.to_relational()`]. `paperSet`s can be generated not only via a list of PMCIDs, but also via a search term using PMC advanced search [`.from_search()`].
- Useful for advanced users: `TextSection`s and `TextParagraph`s found within the `.abstract` and `.body` attributes of `Paper` objects contain not only text [`.text`], but also text with attached reference data [`.text_with_refs`]. Reference data includes tables, figures, and citations. These are processed into DataFrames and data dicts and can be found within the `.ref_map` attribute of a `Paper` object. Simply decode references based on their MHTML index: i.e., an MHTML tag of "MHTML::dataref::14" found in a `TextSection` of paper `p` corresponds to the table, figure, or citation at `p.ref_map[14]`.
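The reference-decoding step can be mimicked in plain Python. The sketch below uses an ordinary dict as a stand-in for a real `.ref_map` (the actual entry structure may differ); only the `MHTML::dataref::<index>` tag format comes from the description above.

```python
import re

# Stand-in for a Paper's .ref_map: index -> reference data. In ScrapeMed,
# entries would be the DataFrames/data dicts for tables, figures, citations.
ref_map = {14: {"type": "table", "label": "Table 2"}}

# Example of text-with-refs containing an MHTML data reference tag.
text_with_refs = "Results are summarized in MHTML::dataref::14 below."

# Decode every MHTML data reference found in the text.
for match in re.finditer(r"MHTML::dataref::(\d+)", text_with_refs):
    idx = int(match.group(1))
    print(idx, ref_map[idx])  # -> 14 {'type': 'table', 'label': 'Table 2'}
```

The same pattern generalizes: extract the integer index from each tag, then look it up in `p.ref_map` to recover the referenced table, figure, or citation.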
Documentation
The docs are hosted on Read The Docs!
Sponsorship
Package sponsored by Daceflow.ai!
If you'd like to sponsor a feature or donate to the project, reach out to me at danielfrees@g.ucla.edu.
