PaCMAP

PaCMAP: Large-scale Dimension Reduction Technique Preserving Both Global and Local Structure


<a name='News'></a>News

NeurIPS 2024: New Parametric DR method ParamRepulsor accepted

We're excited to announce that our latest work has been published at the Thirty-Eighth Conference on Neural Information Processing Systems (NeurIPS 2024)!🎉🎉

Traditional dimensionality reduction (DR) algorithms struggle with online-learning scenarios, while existing parametric DR approaches often fail to preserve local structure in visualizations. Our latest algorithm, ParamRepulsor, builds on Parametric PaCMAP to address these challenges, achieving state-of-the-art performance in both local and global structure preservation. With GPU support using PyTorch, ParamRepulsor delivers exceptional speed and scalability, making it suitable for large-scale and dynamic datasets.

Check out the NeurIPS paper and the code for detailed insights into the new approach.

AAAI 2025: New DR method LocalMAP for Local Adjusted Graph accepted

We're excited to announce that our latest work has been published at the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025)!🎉🎉

General dimension reduction (DR) algorithms often involve converting the original high-dimensional data into a graph, where each edge represents the similarity or dissimilarity between a pair of data points. However, this graph is frequently suboptimal due to unreliable high-dimensional distances and the limited information extracted from the high-dimensional data.

Our latest algorithm, Pairwise Controlled Manifold Approximation with Local Adjusted Graph (LocalMAP), addresses this problem from a nonparametric perspective by dynamically and locally adjusting the graph during the final stage of optimization. This makes the real clusters within the dataset easier to identify and more separable, compared to other DR methods that may overlook or merge them.

Check out our AAAI Paper and the Code for detailed insights into the new approach. This method will be embedded into the PaCMAP package soon.

<a name='Introduction'></a>Introduction

Our work has been published in the Journal of Machine Learning Research (JMLR) 📚 and has earned the prestigious John M. Chambers Statistical Software Award 🥇 and the Award for Innovation in Statistical Programming and Analytics 💡 presented by the Statistical Computing Section (SCS) and the Statistical Programming and Analytics Section (SSPA) of the American Statistical Association (ASA).

PaCMAP (Pairwise Controlled Manifold Approximation) is a dimensionality reduction method that can be used for visualization, preserving both the local and global structure of the data in the original space. PaCMAP optimizes the low-dimensional embedding using three kinds of pairs of points: neighbor pairs (pair_neighbors), mid-near pairs (pair_MN), and further pairs (pair_FP).

Previous dimensionality reduction techniques focus on either local structure (e.g. t-SNE, LargeVis and UMAP) or global structure (e.g. TriMAP), but not both. The balance between the two can be partly tuned via parameters of those algorithms, which mainly adjust the number of considered neighbors. Instead of attracting more neighbors to preserve global structure, PaCMAP dynamically uses a special group of pairs, mid-near pairs, to first capture global structure and then refine local structure, thereby preserving both. For a thorough background and discussion of this work, please read our paper.
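As a rough illustration of how the three pair types act on the embedding, the sketch below reimplements the per-pair loss shapes given in the paper (with the phase-dependent weights omitted). This is an illustrative sketch, not the package's internal code:

```python
import numpy as np

def pacmap_pair_losses(d2):
    """Per-pair loss terms as a function of squared low-dimensional distance d2,
    following the loss shapes described in the PaCMAP paper (weights omitted).

    d2 : ndarray of squared embedding distances ||y_i - y_j||^2.
    """
    d_tilde = 1.0 + d2                       # \tilde{d}_ij = ||y_i - y_j||^2 + 1
    loss_nb = d_tilde / (10.0 + d_tilde)     # neighbor pairs: attractive
    loss_mn = d_tilde / (10000.0 + d_tilde)  # mid-near pairs: weakly attractive
    loss_fp = 1.0 / (1.0 + d_tilde)          # further pairs: repulsive
    return loss_nb, loss_mn, loss_fp

# Minimizing the attractive losses pulls neighbor and mid-near pairs together
# (their loss grows with distance), while the further-pair loss pushes pairs
# apart (it shrinks with distance).
d2 = np.array([0.1, 1.0, 10.0, 100.0])
loss_nb, loss_mn, loss_fp = pacmap_pair_losses(d2)
```

The much larger denominator of the mid-near term makes its gradient gentle, which is what lets mid-near pairs shape global layout without crushing local detail.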

<a name='ReleaseNotes'></a>Release Notes

Please see the release notes.

<a name='Installation'></a>Installation

<a name='Installfromconda-forgeviacondaormamba'></a>Install from conda-forge via conda or mamba

You can use conda or mamba to install PaCMAP from the conda-forge channel.

conda:

conda install pacmap -c conda-forge

mamba:

mamba install pacmap -c conda-forge

<a name='InstallfromPyPIviapip'></a>Install from PyPI via pip

You can use pip to install pacmap from PyPI.

Basic installation (includes FAISS as the default KNN backend):

pip install pacmap

Optional KNN backends:

PaCMAP supports multiple KNN backends. Install optional backends as needed:

# Install with Annoy backend
pip install pacmap[annoy]

# Install with Voyager backend
pip install pacmap[voyager]

# Install all optional backends
pip install pacmap[all]

Note: The original PaCMAP paper used Annoy as the default KNN backend. Since the Annoy package is no longer actively maintained, we have switched to FAISS as the default backend for better long-term stability and performance. Annoy remains available as an optional backend for compatibility.

If you have any problems during the installation, you can try installing dependencies with conda or mamba. Users have also reported that in some cases, you may wish to use numba >= 0.57.

<a name='Usage'></a>Usage

<a name='UsingPaCMAPinPython'></a>Using PaCMAP in Python

The pacmap package is designed to be compatible with scikit-learn, meaning that it has a similar interface to the functions in the sklearn.manifold module. To run pacmap on your own dataset, install the package following the instructions in installation, and then import the module. The following code snippet shows how to use PaCMAP on the COIL-20 dataset:

import pacmap
import numpy as np
import matplotlib.pyplot as plt

# loading preprocessed coil_20 dataset
# you can change it with any dataset that is in the ndarray format, with the shape (N, D)
# where N is the number of samples and D is the dimension of each sample
X = np.load("./data/coil_20.npy", allow_pickle=True)
X = X.reshape(X.shape[0], -1)
y = np.load("./data/coil_20_labels.npy", allow_pickle=True)

# initializing the pacmap instance
# Setting n_neighbors to None leads to an automatic choice described in the "Parameters" section below
embedding = pacmap.PaCMAP(n_components=2, n_neighbors=10, MN_ratio=0.5, FP_ratio=2.0) 

# fit the data (The index of transformed data corresponds to the index of the original data)
X_transformed = embedding.fit_transform(X, init="pca")

# visualize the embedding
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.scatter(X_transformed[:, 0], X_transformed[:, 1], cmap="Spectral", c=y, s=0.6)

<a name='UsingPaCMAPinR'></a>Using PaCMAP in R

You can also use PaCMAP in R with the reticulate package. We provide a sample R notebook that demonstrates how PaCMAP can be called in R for visualization. We also provide a Seurat Integration that allows seamless integration with Seurat Objects for single cell genomics.

<a name='UsingPaCMAPinRust'></a>Using PaCMAP in Rust

A Rust implementation of PaCMAP has recently been released by @hadronzoo. This implementation is Python-free, meaning that it does not depend on a Python runtime or environment.

<a name='Benchmarks'></a>Benchmarks

The following images are visualizations of two datasets, MNIST (n=70,000, d=784) and Mammoth (n=10,000, d=3), generated by PaCMAP. The two visualizations demonstrate PaCMAP's ability to preserve local and global structure, respectively.

MNIST

Mammoth

<a name='Parameters'></a>Parameters

The most important parameters are listed below. Changing these values will significantly affect the result of dimension reduction, as discussed in Section 8.3 of our paper.

  • n_components: the number of dimensions of the output. Default to 2.

  • n_neighbors: the number of neighbors considered in the k-Nearest Neighbor graph. Default to 10. This parameter can also be set to None to enable automatic selection: for datasets with fewer than 10000 samples, the number of neighbors is set to 10; for larger datasets with sample size n greater than 10000, the value is 10 + 15 * (log10(n) - 4).

  • MN_ratio: the ratio of the number of mid-near pairs to the number of neighbors, n_MN = ⌊n_neighbors * MN_ratio⌋. Default to 0.5.

  • FP_ratio: the ratio of the number of further pairs to the number of neighbors, n_FP = ⌊n_neighbors * FP_ratio⌋. Default to 2.
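The parameter arithmetic above can be reproduced with a small helper. This is a sketch of the rules as stated in this section, not the package's internal code (the package's exact rounding of the automatic n_neighbors value may differ):

```python
import math

def pacmap_pair_counts(n_samples, n_neighbors=None, MN_ratio=0.5, FP_ratio=2.0):
    """Number of neighbor, mid-near, and further pairs per point,
    following the rules described in the Parameters section."""
    if n_neighbors is None:
        if n_samples > 10000:
            # auto-selection for large datasets: 10 + 15 * (log10(n) - 4)
            n_neighbors = int(round(10 + 15 * (math.log10(n_samples) - 4)))
        else:
            n_neighbors = 10
    n_MN = int(n_neighbors * MN_ratio)  # floor(n_neighbors * MN_ratio)
    n_FP = int(n_neighbors * FP_ratio)  # floor(n_neighbors * FP_ratio)
    return n_neighbors, n_MN, n_FP
```

For example, with the defaults and n = 100,000 samples, auto-selection gives n_neighbors = 25, n_MN = 12, and n_FP = 50.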

The initialization is also important to the result, but it is a parameter of the fit and fit_transform functions.

  • init: the initialization of the lower dimensional embedding. One of "pca" or "random", or a user-provided numpy ndarray with the shape (N, 2). Default to "pca".

Other parameters include:

  • num_iters: number of iterations. Default to 450, which is enough for most datasets to converge.
  • pair_neighbors, pair_MN and pair_FP: pre-specified neighbor pairs, mid-near pairs, and further pairs. Default to None.