<h1 align="center"> 🎨 InstructCell </h1> <h3 align="center"> A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction Following </h3>

License: MIT


<h2 id="1">🗞️ Overview</h2>

InstructCell is a multi-modal AI copilot that integrates natural language with single-cell RNA sequencing data, enabling researchers to perform tasks like cell type annotation, pseudo-cell generation, and drug sensitivity prediction through intuitive text commands. By leveraging a specialized multi-modal architecture and our multi-modal single-cell instruction dataset, InstructCell reduces technical barriers and enhances accessibility for single-cell analysis.

InstructCell has two versions:

  1. Chat Version: Supports generating both detailed textual answers and single-cell data, offering comprehensive and context-rich outputs.
  2. Instruct Version: Supports generating only the answer portion without additional explanatory text, providing concise and task-specific outputs.

Both versions of the model are available for download from Hugging Face (zjunlp/InstructCell-chat and zjunlp/InstructCell-instruct).

<img width="1876" alt="image" src="https://github.com/user-attachments/assets/3fefe71c-3c00-4c21-b388-cf2300fb9f90" /> <h2 id="2">🗝️ Quick start</h2>

🪜 Requirements

  • Python 3.10 or above is recommended
  • CUDA 11.7 or above is recommended
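To confirm your environment meets the Python recommendation, a quick check like the following can help (a minimal sketch; the repository itself does not ship such a script):

```python
import sys

def meets_python_requirement(min_version=(3, 10)):
    """Return True if the running interpreter is at or above the recommended version."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    print("Python OK:", meets_python_requirement())
    # The CUDA version can be checked with `nvcc --version` or, if PyTorch is
    # installed, via torch.version.cuda and torch.cuda.is_available().
```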

We provide a simple example for quick reference, demonstrating a basic cell type annotation workflow.

Make sure to specify the paths for H5AD_PATH and GENE_VOCAB_PATH appropriately:

  • H5AD_PATH: Path to your .h5ad single-cell data file (e.g., H5AD_PATH = "path/to/your/data.h5ad").
  • GENE_VOCAB_PATH: Path to your gene vocabulary file (e.g., GENE_VOCAB_PATH = "path/to/your/gene_vocab.npy").
from mmllm.module import InstructCell
import anndata
import numpy as np
from utils import unify_gene_features

# Paths to your single-cell data and gene vocabulary (replace with your own files)
H5AD_PATH = "path/to/your/data.h5ad"
GENE_VOCAB_PATH = "path/to/your/gene_vocab.npy"

# Load the pre-trained InstructCell model from Hugging Face
model = InstructCell.from_pretrained("zjunlp/InstructCell-chat")

# Load the single-cell data (H5AD format) and gene vocabulary file (numpy format)
adata = anndata.read_h5ad(H5AD_PATH)
gene_vocab = np.load(GENE_VOCAB_PATH)
adata = unify_gene_features(adata, gene_vocab, force_gene_symbol_uppercase=False)

# Select a random single-cell sample and extract its gene counts and metadata
k = np.random.randint(0, len(adata)) 
gene_counts = adata[k, :].X.toarray()
sc_metadata = adata[k, :].obs.iloc[0].to_dict()

# Define the model prompt with placeholders for metadata and gene expression profile
prompt = (
    "Can you help me annotate this single cell from a {species}? " 
    "It was sequenced using {sequencing_method} and is derived from {tissue}. " 
    "The gene expression profile is {input}. Thanks!"
)

# Use the model to generate predictions
for key, value in model.predict(
    prompt, 
    gene_counts=gene_counts, 
    sc_metadata=sc_metadata, 
    do_sample=True, 
    top_p=0.95,
    top_k=50,
    max_new_tokens=256,
).items():
    # Print each key-value pair
    print(f"{key}: {value}")

For more detailed explanations and additional examples, please refer to the Jupyter notebook demo.ipynb.
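The `unify_gene_features` step aligns the genes in the AnnData object to the shared gene vocabulary. Conceptually (the following is a hypothetical NumPy sketch for intuition, not the repository's actual implementation), it reorders and subsets the expression matrix so that column `i` corresponds to `gene_vocab[i]`, zero-filling genes absent from the data:

```python
import numpy as np

def align_to_vocab(counts, data_genes, gene_vocab):
    """Reorder the columns of `counts` (cells x genes) to match `gene_vocab`.

    Genes missing from `data_genes` become zero columns. Hypothetical sketch
    of what a gene-unification step conceptually does.
    """
    index = {gene: i for i, gene in enumerate(data_genes)}
    aligned = np.zeros((counts.shape[0], len(gene_vocab)), dtype=counts.dtype)
    for j, gene in enumerate(gene_vocab):
        if gene in index:
            aligned[:, j] = counts[:, index[gene]]
    return aligned

counts = np.array([[1, 2, 3], [4, 5, 6]])
# Vocabulary order C, A, X: column X is absent from the data, so it is zero-filled
print(align_to_vocab(counts, ["A", "B", "C"], ["C", "A", "X"]))
# → [[3 1 0]
#    [6 4 0]]
```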

<h2 id="3">🚀 How to run</h2>

Assume your current directory path is DIR_PATH.

🧫 Collecting Raw Single-Cell Datasets

<div align="center"> <img width="500" alt="image" src="https://github.com/user-attachments/assets/b2002629-a2dc-4009-976e-f63fa6d4aec6" /> </div>

The datasets used in the paper are all publicly available. Detailed instructions and dataset links are provided in the Jupyter notebooks: HumanUnified.ipynb and MouseUnified.ipynb. Below is a summary of the datasets and their corresponding details:

| Dataset | Species | Task | Data Repository | Download Link |
|:-------:|:-------:|:----:|:---------------:|:-------------:|
| Xin-2016 | human | cell type annotation | GEO | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE114297 |
| Segerstolpe-2016 | human | cell type annotation | BioStudies | https://www.ebi.ac.uk/biostudies/arrayexpress/studies/E-MTAB-5061 |
| He-2020 | human | cell type annotation | GEO | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE159929 |
| PBMC68K | human | conditional pseudo cell generation | Figshare | https://figshare.com/s/49b29cb24b27ec8b6d72 |
| GSE117872 | human | drug sensitivity prediction | GEO | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE117872 |
| GSE149383 | human | drug sensitivity prediction | GitHub | https://github.com/OSU-BMBL/scDEAL |
| Ma-2020 | mouse | cell type annotation | GEO | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE140203 |
| Bastidas-Ponce-2019 | mouse | cell type annotation | GEO | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE132188 |
| GSE110894 | mouse | drug sensitivity prediction | GEO | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE110894 |
| Mouse-Atlas | mouse | conditional pseudo cell generation | GEO | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSM4505404 |

🔗 Please Note:

For the He-2020 dataset, the cell type annotation file is sourced from the GitHub repository scRNA-AHCA 👈.

⚙️ Installation Guide

Follow these steps to set up InstructCell:

  1. Clone the repository:
git clone https://github.com/zjunlp/InstructCell.git
  2. Set up a virtual environment and install the dependencies:
conda create -n instructcell python=3.10
conda activate instructcell
cd InstructCell
pip install -r requirements.txt

🌐 Downloading Pre-trained Language Models

The pre-trained language model used in this project is T5-base. You can download it from 🤗 Hugging Face and place the corresponding model directory under DIR_PATH.

Alternatively, you can use the provided script to automate the download process:

python download_script.py --repo_id google-t5/t5-base --parent_dir ..

🛠️ Single Cell Data Preprocessing

Navigate to the parent directory DIR_PATH and organize your data by creating a main data folder and three task-specific subfolders:

cd ..
mkdir data 
cd data
mkdir cell_type_annotation 
mkdir drug_sensitivity_prediction 
mkdir conditional_pseudo_cell_generation
cd ..

For dataset preprocessing, refer to the previously mentioned Jupyter notebooks:

> [!NOTE]
> Matching orthologous genes between mouse and human is based on pybiomart and pyensembl. Before preprocessing mouse datasets, ensure the corresponding Ensembl data are downloaded by running:

pyensembl install --release 100 --species mus_musculus

After completing the preprocessing steps, split each dataset and build a gene vocabulary using the following command:

cd InstructCell
python preprocess.py --n_top_genes 3600 

To customize the size of the gene vocabulary, adjust the n_top_genes parameter as needed. For instance, setting it to 2000 will generate a smaller vocabulary. At this point, two files, gene_vocab.npy and choices.pkl, are generated. The first file stores the selected genes, while the second holds the category labels for each classification dataset. The gene vocabulary and label set used in this project can both be found in this folder.
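Once generated, the two files can be inspected with a short snippet like the one below. The toy contents written here are placeholders so the snippet is self-contained; in practice you would point the paths at the files `preprocess.py` produced, and the exact structure of `choices.pkl` (a dict mapping task names to label lists) is an assumption:

```python
import pickle
import tempfile
from pathlib import Path

import numpy as np

# Toy stand-ins written to a temp directory (placeholders, not real outputs)
tmp = Path(tempfile.mkdtemp())
np.save(tmp / "gene_vocab.npy", np.array(["CD3D", "MS4A1", "NKG7"]))
with open(tmp / "choices.pkl", "wb") as f:
    pickle.dump({"cell_type_annotation": ["B cell", "T cell"]}, f)

# Inspect the selected genes and the per-dataset category labels
gene_vocab = np.load(tmp / "gene_vocab.npy", allow_pickle=True)
with open(tmp / "choices.pkl", "rb") as f:
    choices = pickle.load(f)

print("vocabulary size:", len(gene_vocab))
print("tasks with label sets:", list(choices))
```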

🧺 Instruction-Response Template Construction

The instruction-response templates used in the project are stored in this folder.

<div align="center"> <img width="800" alt="image" src="https://github.com/user-attachments/assets/a58e5c62-c6dd-4fac-8677-c47c4cb7c093" /> </div>

The construction of instruction-response templates is divided into four stages:

  1. Motivation and personality generation: In this stage, the large language model is prompted to generate potential motivations for each task and corresponding personalities. This step is implemented in the data_synthesis.py script.
  2. Template synthesis via parallel API calls: Multiple APIs are run in parallel to synthesize templates, with each API invoked a specified number of times per task. This process is also implemented in the data_synthesis.py script.
  3. Merging synthesized templates: The generated templates are consolidated into a unified collection using the merge_templates.py script.
  4. Filtering and splitting templates: Finally, the templates are filtered for quality and divided into specific datasets using the split_templates.py script.
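The merging stage (3) can be sketched roughly as follows. This is a hypothetical illustration, assuming each parallel API worker wrote its synthesized templates to a separate JSON file; the actual file layout and the internals of `merge_templates.py` may differ:

```python
import json
import tempfile
from pathlib import Path

def merge_template_files(paths):
    """Concatenate template lists from several JSON files into one collection."""
    merged = []
    for path in paths:
        merged.extend(json.loads(Path(path).read_text()))
    return merged

# Toy worker outputs in a temp directory (hypothetical format)
tmp = Path(tempfile.mkdtemp())
for i in range(2):
    (tmp / f"worker_{i}.json").write_text(
        json.dumps([{"task": "cell type annotation", "template": f"Template {i}"}])
    )

merged = merge_template_files(sorted(tmp.glob("worker_*.json")))
print(len(merged))  # → 2
```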

To execute all four stages in sequence, use the run_data_synthesis.sh script:

bash run_data_synthesis.sh  

> [!NOTE]
> Before executing run_data_synthesis.sh, ensure the parameters in the script are configured correctly. Update the API keys and base URL as needed, specify the model for template synthesis (model in the script), and adjust the number of API calls per task (num_templates_for_task in the script).

🚀 Training InstructCell
