SeisCLIP
Official code for the paper "SeisCLIP: A seismology foundation model pre-trained by multimodal data for multipurpose seismic feature extraction".
🌟 Spec-based Foundation Model Supporting a Wide Range of Seismology Tasks
As shown in the figure, SeisCLIP serves downstream tasks including event classification 💥, location 🌍, and focal mechanism ⛰ analysis.
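SeisCLIP follows the CLIP recipe: a spectrogram encoder and a metadata encoder are trained so that embeddings of paired inputs align under a contrastive objective. The sketch below illustrates that alignment idea only; the embedding dimension, batch size, noise level, and temperature are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for encoder outputs: 4 spectrogram embeddings and their
# 4 paired metadata embeddings (8-dim here purely for illustration).
spec_emb = rng.normal(size=(4, 8))
info_emb = spec_emb + 0.01 * rng.normal(size=(4, 8))  # paired, slightly noisy

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

spec_emb = l2_normalize(spec_emb)
info_emb = l2_normalize(info_emb)

# CLIP-style similarity matrix: entry (i, j) compares spectrogram i with
# metadata j; training pushes the diagonal (true pairs) to dominate.
logits = spec_emb @ info_emb.T / 0.07  # 0.07 is an assumed temperature

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Symmetric InfoNCE loss: cross-entropy over rows (spec -> info)
# and over columns (info -> spec), averaged.
labels = np.arange(4)
loss = -0.5 * (np.log(softmax(logits)[labels, labels]).mean()
               + np.log(softmax(logits.T)[labels, labels]).mean())
print(float(loss))
```

Because the toy metadata embeddings are near-copies of the spectrogram embeddings, each row and column of `logits` is maximized on the diagonal, which is the state contrastive pretraining drives toward.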
Because of Hi-net's restrictions on data transmission, we have not made the location and focal mechanism analysis datasets publicly available here; they can be accessed via the Baidu Netdisk links (password: SEIS).
🌟 News
- 2024.2.2: 🌟🌟🌟 Congratulations! The paper has been published in IEEE Transactions on Geoscience and Remote Sensing (IEEE TGRS). Links.
- 2023.9.14: 🌟🌟🌟 Pretrained weights and a simple usage demo for SeisCLIP have been released, along with the SeisCLIP implementation for event classification. Because the location and focal mechanism analysis code depends on the PyTorch Geometric library, it may be challenging for beginners; we will release it later with more detailed documentation. (Python 3.9.0 is recommended.)
- 2023.9.8: The paper is released on arXiv, and the code will be released gradually.
- 2023.8.7: GitHub repository initialization. (README template copied from Meta-Transformer.)
🔓 Model Zoo
Open-source Modality-Agnostic Models

| Model | Pretraining | Spec Size | #Param | Download | Download (China mirror) |
| :------: | :--------: | :------: | :----: | :------: | :---------------------: |
| SeisCLIP | STEAD-1M | 50 × 120 | - | ckpt | ckpt |
| SeisCLIP | STEAD-1M | 50 × 600 | - | ckpt | ckpt |
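The "Spec Size" column is the time-frequency spectrogram fed to the model, e.g. 50 frequency bins × 120 time frames. A self-contained sketch of producing an input of that shape is below; the sampling rate, trace length, window, and hop are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

# Illustrative assumptions (not the paper's exact preprocessing):
# a single-component trace sampled at 100 Hz, a 100-sample Hann
# window, and a 50-sample hop (50% overlap).
fs, win, hop = 100, 100, 50
trace = np.random.default_rng(1).normal(size=6050)  # stand-in waveform

# Frame the trace, window each frame, and take the real FFT.
n_frames = 1 + (len(trace) - win) // hop            # -> 120 frames
frames = np.stack([trace[i * hop: i * hop + win] for i in range(n_frames)])
spectra = np.fft.rfft(frames * np.hanning(win), axis=1)  # 51 frequency bins

# Log-magnitude spectrogram; dropping the DC bin yields 50 x 120,
# matching the smaller "Spec Size" in the table above.
spec = np.log1p(np.abs(spectra)).T[1:, :]
print(spec.shape)  # (50, 120)
```

The same framing with a longer trace (or a smaller hop) would produce the 50 × 600 variant listed in the second row.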
Citation
If the code and paper help your research, please kindly cite:
@ARTICLE{si2024seisclip,
  author={Si, Xu and Wu, Xinming and Sheng, Hanlin and Zhu, Jun and Li, Zefeng},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={SeisCLIP: A Seismology Foundation Model Pre-Trained by Multimodal Data for Multipurpose Seismic Feature Extraction},
  year={2024},
  volume={62},
  pages={1-13},
  doi={10.1109/TGRS.2024.3354456}}
License
This project is released under the MIT license.
Acknowledgement
This code builds on excellent open-source projects including CLIP, OpenCLIP, AST, MetaTransformer, ViT-Adapter, SeisBench, STEAD, and PNW.
