==========================================================================================================================
FineST: Contrastive learning integrates histology and spatial transcriptomics for nuclei-resolved ligand-receptor analysis
==========================================================================================================================
This software package implements FineST (Fine-grained Spatial Transcriptomics), which
identifies super-resolved ligand-receptor interactions with spatial co-expression by
refining spots to sub-spot or single-cell resolution.

.. image:: https://github.com/StatBiomed/FineST/blob/main/docs/fig/FineST_summary_300.png?raw=true
   :width: 800px
   :align: center

FineST comprises three components (Training, Imputation, Discovery) after HE image feature extraction:

- Step0: HE image feature extraction
- Step1: Training FineST on the within spots
- Step2: Super-resolution spatial RNA-seq imputation at sub-spot or single-cell level
- Step3: Fast fine-grained ligand-receptor pair and cell-cell communication pattern discovery

Installation using Conda
========================


.. code-block:: bash

   git clone https://github.com/StatBiomed/FineST.git
   conda create --name FineST python=3.8
   conda activate FineST
   cd FineST
   pip install -r requirements.txt
.. Typically installation is completed within a few minutes.
.. Then install pytorch, refer to pytorch installation <https://pytorch.org/get-started/locally/>_.
.. .. code-block:: bash
.. conda install pytorch=1.7.1 torchvision torchaudio cudatoolkit=11.0 -c pytorch
Verify the installation using the following commands:

.. code-block:: text

   python
   >>> import torch
   >>> print(torch.__version__)
   2.1.2+cu121 (or your installed version)
   >>> print(torch.cuda.is_available())
   True

Installation using PyPI
=======================

The FineST package is available on `PyPI <https://pypi.org/project/FineST/>`_.
To install, run the following command (add ``-U`` for updates):

.. code-block:: bash

   pip install -U FineST
Alternatively, install from GitHub for the latest version:

.. code-block:: bash

   pip install -U git+https://github.com/StatBiomed/FineST
The FineST conda environment can be used for the following tutorials by registering it as a Jupyter kernel:

.. code-block:: bash

   python -m pip install ipykernel
   python -m ipykernel install --user --name=FineST
Tutorial notebooks:

- `NPC_Train_Impute_demo.ipynb <https://github.com/StatBiomed/FineST/tree/main/tutorial/NPC_Train_Impute_demo.ipynb>`_
  (using Virchow2; requires a Hugging Face token, approval may take days)
- `NPC_Train_Impute_demo_HIPT.ipynb <https://github.com/StatBiomed/FineST/blob/main/tutorial/NPC_Train_Impute_demo_HIPT.ipynb>`_
  (using HIPT; recommended for a quick start)

ROI selection via napari
========================

To analyze a specific region of interest (ROI), use `napari <https://github.com/napari/napari>`_ to select the region:

.. code-block:: python

   from PIL import Image
   Image.MAX_IMAGE_PIXELS = None
   import matplotlib.pyplot as plt
   import napari

   image = plt.imread("FineST_tutorial_data/20210809-C-AH4199551.tif")
   viewer = napari.view_image(image, channel_axis=2, ndisplay=2)
   napari.run()
Quick guide:

- A shapes layer is automatically added when opening napari
- Use the ``Add Polygons`` tool to draw ROI(s) on the HE image
- Optionally rename the ROI layer for clarity
For detailed instructions and ROI extraction using ``fst.crop_img_adata()``, see the
`tutorial <https://finest-rtd-tutorial.readthedocs.io/en/latest/Crop_ROI_Boundary_image.html>`_ or the
`video guide <https://drive.google.com/file/d/1y3sb_Eemq3OV2gkxwu4gZBhLFp-gpzpH/view?usp=sharing>`_.
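After closing the viewer, each drawn polygon's vertices can be read from the napari shapes layer (each polygon is an ``(n_vertices, 2)`` array of pixel coordinates) and used to crop the image. Below is a minimal numpy sketch with a hypothetical rectangular ROI; the vertices and the helper function are illustrative, not part of the FineST API:

.. code-block:: python

   import numpy as np

   # Hypothetical ROI vertices as stored by a napari shapes layer:
   # rows are (row, col) pixel coordinates of the drawn polygon.
   roi_vertices = np.array([[100.0, 200.0], [100.0, 600.0],
                            [500.0, 600.0], [500.0, 200.0]])

   def crop_to_roi(image, vertices):
       """Crop an image to the axis-aligned bounding box of ROI vertices."""
       r_min, c_min = np.floor(vertices.min(axis=0)).astype(int)
       r_max, c_max = np.ceil(vertices.max(axis=0)).astype(int)
       return image[r_min:r_max, c_min:c_max]

   image = np.zeros((1000, 1000, 3), dtype=np.uint8)  # stand-in for the HE image
   crop = crop_to_roi(image, roi_vertices)
   print(crop.shape)  # (400, 400, 3)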

Get Started for Visium or Visium HD data
========================================

The tutorial includes:

- Visium: 10x Visium human nasopharyngeal carcinoma (NPC) data
- Visium HD: 10x Visium HD human colorectal cancer (CRC) data (16-um bin)
  [`Sample P2 CRC <https://www.10xgenomics.com/products/visium-hd-spatial-gene-expression/dataset-human-crc>`_]

Data Download
-------------

Download the Visium ``FineST_tutorial_data`` from `Google Drive <https://drive.google.com/drive/folders/10WvKW2EtQVuH3NWUnrde4JOW_Dd_H6r8?usp=sharing>`_ or via the command line:

.. code-block:: bash

   python -m pip install gdown
   gdown --folder https://drive.google.com/drive/folders/1rZ235pexAMVvRzbVZt1ONOu7Dcuqz5BD?usp=drive_link

Fast Run for Demo
-----------------


.. code-block:: bash

   bash test_demo.sh
- Note: The demo uses HIPT for image features, which is faster and doesn't need a Hugging Face token.
- To use Virchow2 (may require a token and take longer; used in the paper), see the detailed manual below.
- The demo uses the Visium NPC dataset; for Visium HD CRC data, follow the manual for Steps 0-1-2.
- The demo runs Steps 0-1; for Step 2, please replace the trained ``weight_save_path`` with your own.

Step0: HE image feature extraction
==================================

- For Visium data, extract image features for both within-spots and between-spots.
- For Visium HD data, extract features directly from continuous squares.

Option A: Extract image features for within-spots (Visium)
----------------------------------------------------------

For Visium (55um spot diameter, 100um center-to-center distance), extract image features of original (within) spots:

.. code-block:: bash

   # Option A: Using HIPT (recommended for quick start, no token required)
   python ./demo/Image_feature_extraction.py \
      --dataset NPC \
      --position_path FineST_tutorial_data/spatial/tissue_positions_list.csv \
      --rawimage_path FineST_tutorial_data/20210809-C-AH4199551.tif \
      --scale_image False \
      --method HIPT \
      --patch_size 64 \
      --output_img FineST_tutorial_data/ImgEmbeddings/pth_64_16_image \
      --output_pth FineST_tutorial_data/ImgEmbeddings/pth_64_16 \
      --logging FineST_tutorial_data/ImgEmbeddings/Logging/ \
      --scale 0.5   # default is 0.5

.. code-block:: bash

   # Option B: Using Virchow2 (requires Hugging Face token)
   python ./demo/Image_feature_extraction.py \
      --dataset NPC \
      --position_path FineST_tutorial_data/spatial/tissue_positions_list.csv \
      --rawimage_path FineST_tutorial_data/20210809-C-AH4199551.tif \
      --scale_image False \
      --method Virchow2 \
      --patch_size 112 \
      --output_img FineST_tutorial_data/ImgEmbeddings/pth_112_14_image \
      --output_pth FineST_tutorial_data/ImgEmbeddings/pth_112_14 \
      --logging FineST_tutorial_data/ImgEmbeddings/Logging/ \
      --scale 0.5   # default is 0.5
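The cropping these commands perform can be sketched as follows: a square patch of ``patch_size`` pixels is cut around each spot center before being embedded by HIPT or Virchow2. A minimal numpy illustration of that idea (not the package's implementation; the image and spot coordinates are made up):

.. code-block:: python

   import numpy as np

   def extract_patches(image, spot_centers, patch_size=64):
       """Crop a square patch of patch_size pixels centered on each spot."""
       half = patch_size // 2
       patches = [image[r - half:r + half, c - half:c + half]
                  for r, c in spot_centers]
       return np.stack(patches)

   # Stand-in HE image and two hypothetical spot centers (pixel coordinates)
   image = np.zeros((2000, 2000, 3), dtype=np.uint8)
   spot_centers = [(500, 500), (600, 700)]
   patches = extract_patches(image, spot_centers, patch_size=64)
   print(patches.shape)  # (2, 64, 64, 3)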

Option B: Extract image features for bin-squares (Visium HD)
------------------------------------------------------------

For Visium HD (continuous squares without gaps), extract image features directly:

.. code-block:: bash

   python ./demo/Image_feature_extraction.py \
      --dataset HD_CRC_16um \
      --position_path ./Dataset/CRC/square_016um/tissue_positions.parquet \
      --rawimage_path ./Dataset/CRC/square_016um/Visium_HD_Human_Colon_Cancer_tissue_image.btf \
      --scale_image True \
      --method Virchow2 \
      --output_img ./Dataset/CRC/HIPT/HD_CRC_16um_pth_28_14_image \
      --output_pth ./Dataset/CRC/HIPT/HD_CRC_16um_pth_28_14 \
      --patch_size 28 \
      --logging ./Logging/HIPT_HD_CRC_16um/ \
      --scale 0.5   # default is 0.5
Note: Visium HD uses ``.parquet`` for positions and ``.btf`` for images, while Visium uses ``.csv`` and ``.tif``.
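The Visium case of this note can be illustrated with pandas. Older Space Ranger releases write ``tissue_positions_list.csv`` without a header row, so column names must be supplied (the 10x column order is assumed below; barcodes and coordinates are illustrative); Visium HD's ``tissue_positions.parquet`` loads analogously with ``pd.read_parquet``:

.. code-block:: python

   import io
   import pandas as pd

   # Two illustrative rows in the headerless Visium position format
   csv_text = ("AAACAAGTATCTCCCA-1,1,50,102,7682,8118\n"
               "AAACAATCTACTAGCA-1,1,3,43,1832,4069\n")
   cols = ["barcode", "in_tissue", "array_row", "array_col",
           "pxl_row_in_fullres", "pxl_col_in_fullres"]
   pos = pd.read_csv(io.StringIO(csv_text), header=None, names=cols)
   in_tissue = pos[pos["in_tissue"] == 1]   # keep only spots under tissue
   print(len(in_tissue))  # 2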

Step1: Training FineST on the within spots
==========================================


Option A: Visium
----------------

Train FineST model on within-spots to learn the mapping from image features to gene expression.

.. code-block:: bash

   # HIPT with Visium16 (patch_size=64)
   python ./demo/Step1_FineST_train_infer.py \
      --system_path '/home/lingyu/ssd/Python/FineST/FineST/' \
      --parame_path 'parameter/parameters_NPC_HIPT.json' \
      --dataset_class 'Visium16' \
      --image_class 'HIPT' \
      --gene_selected 'CD70' \
      --LRgene_path 'FineST/datasets/LR_gene/LRgene_CellChatDB_baseline_human.csv' \
      --visium_path 'FineST_tutorial_data/spatial/tissue_positions_list.csv' \
      --image_embed_path 'FineST_tutorial_data/ImgEmbeddings/pth_64_16' \
      --spatial_pos_path 'FineST_tutorial_data/OrderData/position_order.csv' \
      --reduced_mtx_path 'FineST_tutorial_data/OrderData/matrix_order.npy' \
      --figure_save_path 'FineST_tutorial_data/Figures/' \
      --save_data_path 'FineST_tutorial_data/SaveData/' \
      --patch_size 64 \
      --weight_w 0.5

.. code-block:: bash

   # Virchow2 with Visium64 (patch_size=112)
   python ./demo/Step1_FineST_train_infer.py \
      --system_path '/home/lingyu/ssd/Python/FineST_submit/FineST/' \
      --parame_path 'FineST_tutorial_data/parameter/parameters_NPC_virchow2.json' \
      --dataset_class 'Visium64' \
      --image_class 'Virchow2' \
      --gene_selected 'CD70' \
      --LRgene_path 'FineST_tutorial_data/LRgene/LRgene_CellChatDB_baseline.csv' \
      --visium_path 'FineST_tutorial_data/spatial/tissue_positions_list.csv' \
      --image_embed_path 'FineST_tutorial_data/ImgEmbeddings/pth_112_14' \
      --spatial_pos_path 'FineST_tutorial_data/OrderData/position_order.csv' \
      --reduced_mtx_path 'FineST_tutorial_data/OrderData/matrix_order.npy' \
      --figure_save_path 'FineST_tutorial_data/Figures/' \
      --save_data_path 'FineST_tutorial_data/SaveData/' \
      --patch_size 112 \
      --weight_w 0.5
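Conceptually, the contrastive training in this step pulls each within-spot's image embedding toward its matched expression embedding and pushes apart mismatched pairs. The sketch below shows an InfoNCE-style loss in plain numpy, illustrating the general technique with made-up embeddings rather than FineST's exact objective:

.. code-block:: python

   import numpy as np

   def info_nce_loss(img_emb, expr_emb, temperature=0.1):
       """InfoNCE-style loss: the matched image/expression pair of each spot
       is the positive; all other pairings in the batch act as negatives."""
       img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
       expr = expr_emb / np.linalg.norm(expr_emb, axis=1, keepdims=True)
       logits = img @ expr.T / temperature        # (n_spots, n_spots) similarities
       log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
       return -np.mean(np.diag(log_prob))         # positives lie on the diagonal

   rng = np.random.default_rng(0)
   img_emb = rng.normal(size=(8, 16))             # made-up image embeddings
   aligned = info_nce_loss(img_emb, img_emb)      # perfectly matched pairs
   shuffled = info_nce_loss(img_emb, np.roll(img_emb, 1, axis=0))
   print(aligned < shuffled)  # True

Matched pairs yield a much lower loss than mismatched ones, which is the signal the training exploits.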
Key parameters:

- ``--dataset_class``: 'Visiu