
CRA5

A large compression model for weather and climate data, which compresses the 400+ TB ERA5 dataset into a new 0.8 TB CRA5 dataset.



<a href="url"><img src="assets/CRA5LOGO.svg" align="center"></a>


Paper: CRA5: Extreme Compression of ERA5 for Portable Global Climate and Weather Research via an Efficient Variational Transformer

Introduction and getting started

The CRA5 dataset is now available on OneDrive.

CRA5 is an extremely compressed version of the popular ERA5 reanalysis dataset. The repository also includes compression and forecasting models so researchers can conduct portable weather and climate research.

CRA5 currently provides:

  • A customized variational transformer (VAEformer) for climate data compression
  • The CRA5 dataset: less than 1 TiB, yet containing the same information as the 400+ TiB ERA5 dataset, covering hourly ERA5 from 1979 to 2023
  • An Auto-Encoder pre-trained on climate/weather data to support further weather research
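The core idea behind this kind of compression is classic transform coding: encode the data to a latent, quantize it, then entropy-code the quantized symbols. The toy sketch below (plain NumPy on a random stand-in latent, not the actual VAEformer) shows the basic trade-off: a coarser quantization step shrinks the symbol alphabet to entropy-code, at the cost of higher reconstruction error.

```python
import numpy as np

# Illustrative sketch of the transform-coding idea behind learned
# compression (NOT the actual VAEformer): quantizing a latent more
# coarsely trades reconstruction error for a smaller coded size.
rng = np.random.default_rng(0)
latent = rng.normal(size=(4, 16, 16))  # random stand-in for an encoder output

for step in (0.1, 1.0):
    q = np.round(latent / step)             # quantize
    recon = q * step                        # dequantize
    mse = float(np.mean((latent - recon) ** 2))
    n_symbols = len(np.unique(q))           # crude proxy for coded size
    print(f"step={step}: mse={mse:.4f}, symbols={n_symbols}")
```

The coarser step (1.0) yields far fewer distinct symbols but a larger MSE; the learned model makes this trade-off near-optimal for weather fields.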

Note: Multi-GPU support is now experimental.

Installation

CRA5 supports Python 3.8+ and PyTorch 1.7+.

conda create --name cra5 python=3.10 -y 
conda activate cra5

Please install cra5 from source. A C++17 compiler, a recent version of pip (19.0+), and common Python packages are also required (see setup.py for the full list).

To get started locally and install the development version of CRA5, run the following commands in a virtual environment:

git clone https://github.com/taohan10200/CRA5
cd CRA5

pip install -U pip && pip install -e .

Test

python test.py

Usages

Using with API:

Supported functions include compression, decompression, latent representation, feature visualization, and reconstruction visualization.

# We provide a downloader to help you download the original ERA5 netCDF files for testing.
# It fetches, e.g., data/ERA5/2024/2024-06-01T00:00:00_pressure.nc (513 MiB)
# and data/ERA5/2024/2024-06-01T00:00:00_single.nc (18 MiB).
from cra5.api.era5_downloader import era5_downloader

ERA5_data = era5_downloader('./cra5/api/era5_config.py')  # dataset config specifying what to download
data = ERA5_data.get_form_timestamp(time_stamp="2024-06-01T00:00:00",
                                    local_root='./data/ERA5')

# After getting the ERA5 data ready, you can explore the compression.
from cra5.api import cra5_api
cra5_API = cra5_api()

####=======================compression functions=====================
# Return a continuous latent y for ERA5 data at 2024-06-01T00:00:00
y = cra5_API.encode_to_latent(time_stamp="2024-06-01T00:00:00") 

# Return the arithmetic-coded binary stream of y
bin_stream = cra5_API.latent_to_bin(y=y)  

# Or if you want to directly compress and save the binary stream to a folder
cra5_API.encode_era5_as_bin(time_stamp="2024-06-01T00:00:00", save_root='./data/cra5')  


####=======================decompression functions=====================
# Starting from the bin_stream, you can decode the binary file back to the quantized latent.
y_hat = cra5_API.bin_to_latent(bin_path="./data/cra5/2024/2024-06-01T00:00:00.bin")  # decoding from binary only recovers the quantized latent

# Return the normalized CRA5 data
normalized_x_hat = cra5_API.latent_to_reconstruction(y_hat=y_hat)


# If you have saved or downloaded the binary file, you can directly restore it into a reconstruction.
normalized_x_hat = cra5_API.decode_from_bin("2024-06-01T00:00:00", return_format='normalized')     # normalized CRA5 data
x_hat = cra5_API.decode_from_bin("2024-06-01T00:00:00", return_format='de_normalized')             # de-normalized CRA5 data

# Show some channels of the latent
cra5_API.show_latent(
    latent=y_hat.squeeze(0).cpu().numpy(),
    time_stamp="2024-06-01T00:00:00",
    show_channels=[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150],
    save_path='./data/vis')
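If you want to inspect latent channels yourself before plotting, the array handling is straightforward. The sketch below uses a random stand-in latent with hypothetical dimensions (the real `y_hat` comes from `bin_to_latent` above) and min-max scales each selected channel for display:

```python
import numpy as np

# Random stand-in for a decoded latent of shape (C, H, W);
# the channel/spatial sizes here are hypothetical.
rng = np.random.default_rng(0)
latent = rng.normal(size=(160, 46, 90))

show_channels = [0, 10, 20, 30]
panels = latent[show_channels]            # fancy indexing selects the channels

# Min-max scale each panel to [0, 1] for plotting
lo = panels.min(axis=(1, 2), keepdims=True)
hi = panels.max(axis=(1, 2), keepdims=True)
scaled = (panels - lo) / (hi - lo)
print(panels.shape, scaled.min(), scaled.max())
```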


<a href="url"><img src="assets/2024-06-01T00_latent.png" align="center"></a>

# Show some variables of the reconstructed data
cra5_API.show_image(
    reconstruct_data=x_hat.cpu().numpy(),
    time_stamp="2024-06-01T00:00:00",
    show_variables=['z_500', 'q_500', 'u_500', 'v_500', 't_500', 'w_500'],
    save_path='./data/vis')

<a href="url"><img src="assets/2024-06-01T00.png" align="center"></a>

Or use the pre-trained model directly:

import torch
from cra5.models.compressai.zoo import vaeformer_pretrained

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)

net = vaeformer_pretrained(quality=268, pretrained=True).eval().to(device)

# Proxy weather data; it should actually be a normalized ERA5 tensor with 268 channels.
x = torch.rand(1, 268, 721, 1440).to(device)

print(x.shape)
with torch.no_grad():
    out_net = net.compress(x)

print(out_net)
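The model expects normalized input. A common way to prepare real ERA5 fields is per-channel z-score normalization; the sketch below illustrates this on random stand-in data with a reduced spatial size (the statistics here are computed on the fly, not the repository's precomputed channel means and stds):

```python
import numpy as np

# Per-channel z-score normalization, as is common for ERA5 pipelines.
# Spatial size is reduced for the sketch; the real input is (1, 268, 721, 1440).
rng = np.random.default_rng(0)
C, H, W = 268, 8, 16
x = rng.normal(loc=5.0, scale=3.0, size=(1, C, H, W))

mean = x.mean(axis=(0, 2, 3), keepdims=True)   # one statistic per channel
std = x.std(axis=(0, 2, 3), keepdims=True)
x_norm = (x - mean) / std

print(x_norm.mean(), x_norm.std())  # ≈ 0 and ≈ 1
```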

Features

1. The CRA5 dataset is a product of applying VAEformer to atmospheric science. We release it to facilitate research in weather and climate.

  • Train large data-driven numerical weather forecasting models with our CRA5

Note: For researchers who do not have enough disk space to store the 300+ TiB ERA5 dataset but are interested in training a large weather forecasting model, like FengWu-GHR, this work lets you store it in less than 1 TiB of disk space.

Our preliminary attempt has shown that the CRA5 dataset trains an NWP model very similar to one trained on the original ERA5 dataset. With this dataset, you can also easily train a Nature-published forecasting model, like Pangu-Weather.


<a href="url"><img src="assets/rmse_acc_bias_activity.png" align="center"></a>
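Comparisons like the RMSE/ACC plot above are typically computed with latitude-weighted metrics, since grid cells shrink toward the poles on a regular lat-lon grid. The sketch below implements latitude-weighted RMSE on random stand-in fields at the ERA5 grid size; it is a generic illustration, not the repository's evaluation code:

```python
import numpy as np

def lat_weighted_rmse(pred, target, lats_deg):
    """RMSE weighted by cos(latitude), the standard global forecast metric."""
    w = np.cos(np.deg2rad(lats_deg))
    w = w / w.mean()                          # normalize weights to mean 1
    sq = (pred - target) ** 2
    return float(np.sqrt((sq * w[:, None]).mean()))

# Random stand-in fields on the 721 x 1440 ERA5 grid
lats = np.linspace(-90, 90, 721)
rng = np.random.default_rng(0)
t = rng.normal(size=(721, 1440))
p = t + 0.1 * rng.normal(size=(721, 1440))   # "forecast" with 0.1-sigma error
print(lat_weighted_rmse(p, t, lats))          # ≈ 0.1
```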

2. VAEformer is a powerful compression model; we hope it can be extended to other domains, like image and video compression.


<a href="url"><img src="assets/MSE_supp_new.png" align="center"></a>

3. VAEformer is based on an Auto-Encoder-Decoder architecture. We provide a pretrained VAE for weather research; you can use our VAEformer to obtain latents for downstream research, like diffusion-based or other generation-based forecasting methods.

  • Using it as an Auto-Encoder-Decoder

Note: For people interested in diffusion-based or other generation-based forecasting methods, we provide an Auto-Encoder and Decoder for weather research; you can use our VAEformer to obtain latents for downstream research.
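As a sketch of what "using the latents" looks like downstream: a VAE encoder outputs a mean and log-variance, and generative methods sample a latent via the reparameterization trick. The shapes below are hypothetical stand-ins, not the actual VAEformer latent dimensions:

```python
import numpy as np

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
# mu/logvar stand in for a VAE encoder's outputs; shapes are hypothetical.
rng = np.random.default_rng(0)
mu = rng.normal(size=(1, 160, 46, 90))
logvar = rng.normal(size=(1, 160, 46, 90))

eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * logvar) * eps      # a sample from N(mu, sigma^2)
print(z.shape)
```

The sampled `z` (or just `mu`, for a deterministic latent) is what a diffusion or other generative forecasting model would consume.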
