
PhysicsNeMo

Open-source framework for building, training, and fine-tuning deep learning models using state-of-the-art Physics-ML methods

Install / Use

/learn @NVIDIA/Physicsnemo
README

NVIDIA PhysicsNeMo

<!-- markdownlint-disable -->

📝 NVIDIA PhysicsNeMo is undergoing an update to v2.0 - all the features, with easier installation and integration with external packages. See the migration guide for more details!

Project Status: Active - The project has reached a stable, usable state and is being actively developed.

<!-- markdownlint-enable -->

NVIDIA PhysicsNeMo | Documentation | Install Guide | Getting Started | Contributing Guidelines | Dev blog

What is PhysicsNeMo?

NVIDIA PhysicsNeMo is an open-source deep-learning framework for building, training, fine-tuning, and inferring Physics AI models using state-of-the-art SciML methods for AI4Science and engineering.

PhysicsNeMo provides Python modules to compose scalable and optimized training and inference pipelines to explore, develop, validate, and deploy AI models that combine physics knowledge with data, enabling real-time predictions.
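The core idea of combining physics knowledge with data can be illustrated without the framework at all: a training loss that adds a data-misfit term to a PDE-residual term. The sketch below is plain Python on a hypothetical toy problem (the 1-D Poisson equation u''(x) = f(x)), not PhysicsNeMo code.

```python
# Toy illustration (not PhysicsNeMo code): a "physics-informed" loss that
# combines a data-misfit term with a finite-difference PDE residual for
# the 1-D Poisson problem u''(x) = f(x).

def physics_informed_loss(u, f, x_obs, u_obs, h, weight=1.0):
    """u: predicted values on a uniform grid with spacing h,
    f: source term on the same grid,
    (x_obs, u_obs): indices and values of observed data points."""
    # Data term: mean squared error at the observed grid indices.
    data_loss = sum((u[i] - uo) ** 2 for i, uo in zip(x_obs, u_obs)) / len(x_obs)
    # Physics term: squared residual of the centred second difference.
    residual = [
        (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2 - f[i]
        for i in range(1, len(u) - 1)
    ]
    phys_loss = sum(r * r for r in residual) / len(residual)
    return data_loss + weight * phys_loss

# Sanity check: u(x) = x**2 solves u'' = 2 exactly, so the loss is ~0.
h = 0.1
grid = [i * h for i in range(11)]
u = [x * x for x in grid]
f = [2.0] * len(grid)
loss = physics_informed_loss(u, f, x_obs=[0, 10], u_obs=[0.0, 1.0], h=h)
```

In a real pipeline the predictions `u` would come from a neural network and both terms would be differentiated through; frameworks like PhysicsNeMo package this pattern at scale.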

Whether you are exploring the use of neural operators, GNNs, or transformers, or are interested in Physics-Informed Neural Networks or a hybrid approach in between, PhysicsNeMo provides you with an optimized stack that will enable you to train your models at scale.

<!-- markdownlint-disable --> <p align="center"> <img src="https://raw.githubusercontent.com/NVIDIA/physicsnemo/main/docs/img/value_prop/Knowledge_guided_models.gif" alt="PhysicsNeMo"/> </p> <!-- markdownlint-enable --> <!-- toc --> <!-- tocstop -->

More About PhysicsNeMo

At a granular level, PhysicsNeMo is developed as modular functionality and therefore provides built-in composable modules that are packaged into a few key components:

<!-- markdownlint-disable -->

| Component | Description |
| --- | --- |
| physicsnemo.models (More Details) | A collection of optimized, customizable, and easy-to-use families of model architectures such as Neural Operators, Graph Neural Networks, Diffusion models, Transformer models, and many more |
| physicsnemo.datapipes | Optimized and scalable built-in data pipelines fine-tuned to handle engineering and scientific data structures like point clouds, meshes, etc. |
| physicsnemo.distributed | A distributed computing sub-module built on top of torch.distributed to enable parallel training with just a few steps |
| physicsnemo.curator | A sub-module to streamline and accelerate the process of data curation for engineering datasets |
| physicsnemo.sym.geometry | A sub-module to handle geometry for DL training using Constructive Solid Geometry modeling and CAD files in STL format |
| physicsnemo.sym.eq | A sub-module to use PDEs in your DL training, with several implementations of commonly used equations and easy ways to customize them |

<!-- markdownlint-enable -->

For a complete list, refer to the PhysicsNeMo API documentation.
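As a concrete (hedged) sketch of using one of these components: the snippet below instantiates a Fourier Neural Operator from `physicsnemo.models`. The import path and constructor arguments are assumptions based on the package layout, not verbatim from the docs; check the API reference for exact signatures. The `try`/`except` lets the sketch degrade gracefully when the package is not installed.

```python
# Hedged sketch: instantiating a model family from physicsnemo.models.
# The module path and constructor arguments are assumptions based on the
# package layout; verify them against the PhysicsNeMo API reference.
try:
    import torch
    from physicsnemo.models.fno import FNO  # assumed import path

    model = FNO(
        in_channels=1,        # e.g. an initial-condition field
        out_channels=1,       # e.g. the predicted solution field
        dimension=2,          # 2-D spatial problem
        latent_channels=32,
        num_fno_layers=4,
        num_fno_modes=12,
        padding=0,
        decoder_layers=1,
        decoder_layer_size=32,
    )
    # A PhysicsNeMo model is a torch.nn.Module: call it like any other.
    y = model(torch.randn(4, 1, 64, 64))  # (batch, channels, H, W)
except ImportError:
    y = None  # physicsnemo/torch not installed; sketch is illustrative only
```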

AI4Science Library

Usually, PhysicsNeMo is used either as:

  • A complementary tool to PyTorch when exploring AI for SciML and AI4Science applications.
  • A deep learning research platform that provides scale and optimal performance on NVIDIA GPUs.

Domain-Specific Packages

The following are packages dedicated to domain experts of specific communities, catering to their unique exploration needs:

  • PhysicsNeMo CFD: Inference sub-module of PhysicsNeMo to enable CFD domain experts to explore, experiment, and validate using pretrained AI models for CFD use cases.
  • PhysicsNeMo Curator: Data-curation sub-module of PhysicsNeMo to streamline and accelerate the process of data curation for engineering datasets.
  • Earth-2 Studio: Inference sub-module of PhysicsNeMo to enable climate researchers and scientists to explore and experiment with pretrained AI models for weather and climate.

Scalable GPU-Optimized Training Library

PhysicsNeMo provides a highly optimized and scalable training library for maximizing the power of NVIDIA GPUs. Distributed computing utilities allow for efficient scaling from a single GPU to multi-node GPU clusters with a few lines of code, ensuring that large-scale physics-informed machine learning (ML) models can be trained quickly and effectively. The framework includes support for advanced optimization utilities, tailor-made datapipes, and validation utilities to enhance end-to-end training speed.
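The "few lines of code" claim can be sketched as follows. `DistributedManager` is the entry point of `physicsnemo.distributed`; the attribute names used here (`.rank`, `.world_size`, `.device`) are assumptions about its API, so verify them against the reference docs before relying on them.

```python
# Hedged sketch of the physicsnemo.distributed utilities. Attribute names
# are assumptions about the DistributedManager API; verify against the
# PhysicsNeMo API reference.
try:
    import torch
    from physicsnemo.distributed import DistributedManager

    DistributedManager.initialize()  # reads env vars set by the launcher
    dist = DistributedManager()
    model = torch.nn.Linear(16, 1).to(dist.device)
    if dist.world_size > 1:
        # Standard PyTorch DDP wrapping, one process per device.
        model = torch.nn.parallel.DistributedDataParallel(model)
    launched = True
except (ImportError, RuntimeError):
    launched = False  # physicsnemo/torch missing, or no launcher environment
```

Launched with, e.g., `torchrun --nproc_per_node=8 train.py`, the same script would run unmodified on a single GPU or across a node.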

A Suite of Physics-Informed ML Models

PhysicsNeMo offers a library of state-of-the-art models specifically designed for Physics-ML applications. Users can build any model architecture by using the underlying PyTorch layers and combining them with curated PhysicsNeMo layers.

The Model Zoo includes optimized implementations of families of model architectures such as Neural Operators, among many others.

These models are optimized for various physics domains, such as computational fluid dynamics, structural mechanics, and electromagnetics. Users can download, customize, and build upon these models to suit their specific needs, significantly reducing the time required to develop high-fidelity simulations.

Seamless PyTorch Integration

PhysicsNeMo is built on top of PyTorch, providing a familiar and user-friendly experience for those already proficient with PyTorch. This includes a simple Python interface and modular design, making it easy to use PhysicsNeMo with existing PyTorch workflows. Users can leverage the extensive PyTorch ecosystem alongside PhysicsNeMo's models and utilities.
