
SeisCLIP

Code for the paper 'SeisCLIP: A seismology foundation model pre-trained by multimodal data for multipurpose seismic feature extraction'

Install / Use

/learn @sixu0/SeisCLIP

README

<p align="center" width="100%"> <img src="assets/SeisCLIP.png" width="80%" height="80%"> </p>
<div align="center">
  <a href='https://sixu0.github.io/' target='_blank'>Xu Si<sup>1</sup></a>&emsp;
  <a href='http://cig.ustc.edu.cn/people/list.htm' target='_blank'>Xinming Wu<sup>1,†,‡</sup></a>&emsp;
  <a href='http://cig.ustc.edu.cn/people/list.htm' target='_blank'>Hanlin Sheng<sup>1</sup></a>&emsp; <br>
  <a href='https://dams.ustc.edu.cn/main.htm' target='_blank'>Jun Zhu<sup>1</sup></a>&emsp;
  <a href='https://dams.ustc.edu.cn/main.htm' target='_blank'>Zefeng Li<sup>1</sup></a>&emsp;
</div>
<div align="center">
  <sup>1</sup> University of Science and Technology of China&emsp; <br>
  <sup>†</sup> Corresponding Author&emsp;
  <sup>‡</sup> Project Lead&emsp;
</div>


🌟 Spec-based Foundation Model Supporting a Wide Range of Seismology Tasks

As shown in this figure, SeisCLIP can serve downstream tasks including event classification 💥, location 🌍, and focal mechanism analysis ⛰, among others.
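The CLIP-style matching behind such downstream use can be sketched as follows. The embeddings below are random stand-ins for the outputs of SeisCLIP's spectrum and profile encoders; the embedding dimension, number of labels, and scoring procedure are illustrative assumptions, not the released API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: in a real pipeline, SeisCLIP's spectrum encoder and
# its paired information encoder would produce these. The 256-d size and the
# three candidate labels are assumptions for illustration only.
spec_embedding = rng.normal(size=(1, 256))    # one seismic spectrogram
label_embeddings = rng.normal(size=(3, 256))  # e.g. three candidate event classes

def l2_normalize(x, axis=-1):
    """Scale vectors to unit length so the dot product equals cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# CLIP-style scoring: cosine similarity between the query embedding and each
# candidate embedding, converted to probabilities with a softmax.
sims = l2_normalize(spec_embedding) @ l2_normalize(label_embeddings).T  # shape (1, 3)
probs = np.exp(sims) / np.exp(sims).sum(axis=-1, keepdims=True)
predicted = int(probs.argmax())
```

The point of the sketch is only the contract: both modalities are mapped into one embedding space, and downstream tasks reduce to similarity comparisons (or to training a small head on the extracted features).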

Due to the limitations of Hi-net data transmission, we have not made the location and focal mechanism analysis datasets publicly available. They can be accessed through Baidu Netdisk: Links (password: SEIS)

🌟 News

  • 2025.11.20: Update Social Circle examples.
  • 2024.2.2: 🌟🌟🌟 Congratulations! The paper has been published in IEEE Transactions on Geoscience and Remote Sensing (IEEE TGRS). Links.
  • 2023.9.14: 🌟🌟🌟 The pretrained weights and a simple usage demo for our SeisCLIP have been released. The implementation of SeisCLIP for event classification has also been released. Because the location and focal mechanism analysis code depends on the 'PyTorch Geometric' library, it may be challenging for beginners; we will release it later with more detailed documentation. (Python 3.9.0 is recommended.)
  • 2023.9.8: The paper is released on arXiv, and the code will be gradually released.
  • 2023.8.7: GitHub repository initialization. (README template copied from Meta-Transformer.)

🔓 Model Zoo

Open-source Modality-Agnostic Models

| Model | Pretraining | Spec Size | #Param | Download | Download (China mirror) |
| :------: | :--------: | :-------: | :----: | :------: | :---------------------: |
| SeisCLIP | STEAD-1M | 50 × 120 | - | ckpt | [ckpt] |
| SeisCLIP | STEAD-1M | 50 × 600 | - | ckpt | [ckpt] |
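As a rough illustration of how a raw trace might be turned into the 50 × 120 spectrogram input listed above: assuming 60 s STEAD traces sampled at 100 Hz (6000 samples), an STFT with a 98-sample window yields exactly 50 frequency bins, and a 50-sample hop yields slightly more than 120 frames, which can then be cropped. These STFT parameters are our own guesses for the sketch, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import stft

# Stand-in single-component trace: 60 s at 100 Hz, as in STEAD.
fs = 100.0
waveform = np.random.default_rng(0).normal(size=6000)

# nperseg=98 -> 98 // 2 + 1 = 50 frequency bins;
# hop = nperseg - noverlap = 50 samples -> about 121 time frames with default padding.
f, t, Zxx = stft(waveform, fs=fs, nperseg=98, noverlap=48)

# Magnitude spectrogram, cropped to the 50 x 120 shape expected by the model above.
spec = np.abs(Zxx)[:, :120]
assert spec.shape == (50, 120)
```

In practice each of the three components would be transformed this way (and the spectrogram normalized) before being fed to the spectrum encoder; consult the released demo notebook for the exact preprocessing.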

Citation

If the code and paper help your research, please kindly cite:

@ARTICLE{si2024seisclip,
  author={Si, Xu and Wu, Xinming and Sheng, Hanlin and Zhu, Jun and Li, Zefeng},
  journal={IEEE Transactions on Geoscience and Remote Sensing}, 
  title={SeisCLIP: A Seismology Foundation Model Pre-Trained by Multimodal Data for Multipurpose Seismic Feature Extraction}, 
  year={2024},
  volume={62},
  pages={1-13},
  doi={10.1109/TGRS.2024.3354456}}

License

This project is released under the MIT license.

Acknowledgement

This code is developed based on excellent open-source projects including CLIP, OpenCLIP, AST, MetaTransformer, ViT-Adapter, SeisBench, STEAD and PNW.
