81 skills found · Page 1 of 3
USEPA / CMAQ
Code for U.S. EPA’s Community Multiscale Air Quality Model (CMAQ) for estimating ozone, particulates, toxics, and deposition of acids and nutrients at neighborhood to global scales.
Siyou-Li / U2Tokenizer
A multiscale multimodal large language model for radiology report generation (RRG) tasks.
locuslab / mdeq
[NeurIPS'20] Multiscale Deep Equilibrium Models
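The core idea behind deep equilibrium models is to replace a deep stack of explicit layers with the fixed point of a single weight-tied transformation. A minimal toy sketch of that forward pass, using naive fixed-point iteration on a hypothetical contractive layer (the actual MDEQ code uses Broyden's method and multiresolution feature streams):

```python
import numpy as np

def f(z, x, W, U):
    # one weight-tied "layer": tanh(W z + U x); W is scaled small so the map contracts
    return np.tanh(W @ z + U @ x)

def deq_forward(x, W, U, tol=1e-8, max_iter=500):
    # solve z* = f(z*, x) by plain fixed-point iteration (illustrative only)
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = f(z, x, W, U)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((8, 8)) / np.sqrt(8)  # small spectral norm -> contraction
U = rng.standard_normal((8, 4))
x = rng.standard_normal(4)
z_star = deq_forward(x, W, U)
residual = np.linalg.norm(z_star - f(z_star, x, W, U))  # should be near zero
```

The output `z_star` approximately satisfies the equilibrium condition, which is the quantity an implicit-function-theorem backward pass would then differentiate through.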
octree-nn / OctGPT
OctGPT: Octree-based Multiscale Autoregressive Models for 3D Shape Generation [SIGGRAPH 2025]
genbio-ai / AIDO
AI-Driven Digital Organism (AIDO) is a system of multiscale foundation models for predicting, simulating, and programming biology at all levels.
vavrines / Kinetic.jl
Universal modeling and simulation of fluid mechanics with machine learning. From the Boltzmann equation, heading towards multiscale and multiphysics flows.
HungVu307 / Few-Shot-Via-Ensembling-Transformer-With-Mahalanobis-Distance
[IEEE TIM] Official code for the paper "Few-Shot Bearing Fault Diagnosis via Ensembling Transformer-based Model with Mahalanobis Distance Metric Learning from Multiscale Features", IEEE Transactions on Instrumentation and Measurement (accepted).
openworm / c302
The c302 framework for generating multiscale network models of C. elegans.
Open-Systems-Pharmacology / MoBi
MoBi® is a software tool for multiscale physiological modeling and simulation.
JLnorthwestern / GO-MELT
GO-MELT: GPU-Optimized Multilevel Execution of LPBF Thermal simulations.
atruszkowska / LBM_MATLAB
MPI-style parallelized Shan and Chen LBM with a multiscale modeling extension.
chemle / emle-engine
An engine for electrostatic ML embedding for multiscale modelling.
rajanil / msCentipede
A hierarchical multiscale model for inferring transcription factor binding from chromatin accessibility data.
ECCC-ASTD-MRD / Gem
The Global Environmental Multiscale (GEM) model is a numerical weather prediction model developed by the Meteorological Research Division of Environment and Climate Change Canada.
LopezGroup-ICIQ / Amuse
Code repository of the Automated MUltiscale Simulation Environment (AMUSE) for multiscale modeling of heterogeneous catalytic reactions.
DiODeProject / MuMoT
Multiscale Modelling Tool - mathematical modelling without the maths.
zhang201882 / MTF-CRNN
Inspired by the convolutional recurrent neural network (CRNN) and Inception, we propose a multiscale time-frequency convolutional recurrent neural network (MTF-CRNN) for audio event detection. Our goal is to improve audio event detection performance and to recognize target audio events that vary in length and are accompanied by complex audio backgrounds. We exploit multiple groups of parallel and serial convolutional kernels to learn high-level shift-invariant features from the time and frequency domains of acoustic samples. A two-layer bidirectional gated recurrent unit (GRU) then captures the temporal context from the extracted high-level features. The proposed method is evaluated on the DCASE2017 challenge dataset. Compared to other methods, MTF-CRNN achieves one of the best test performances for a single model, without pre-training and without a multi-model ensemble.
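The "parallel multiscale kernels" idea in the description above can be sketched in a few lines: run the same signal through convolutions of several kernel sizes and stack the results, so that both short and long audio events produce a response at some scale. This is a hypothetical toy illustration with random kernels, not the paper's actual configuration:

```python
import numpy as np

def conv1d_same(x, k):
    # zero-padded 'same' 1-D convolution for an odd-length kernel
    pad = len(k) // 2
    xp = np.pad(x, (pad, pad))
    return np.array([xp[i:i + len(k)] @ k for i in range(len(x))])

def multiscale_features(x, kernel_sizes=(3, 5, 7), seed=0):
    # one parallel branch per kernel size; outputs stacked along a new scale axis
    rng = np.random.default_rng(seed)
    feats = [conv1d_same(x, rng.standard_normal(ks)) for ks in kernel_sizes]
    return np.stack(feats)  # shape: (num_scales, time)

x = np.sin(np.linspace(0, 20, 128))  # stand-in for one spectrogram band
F = multiscale_features(x)           # F.shape == (3, 128)
```

In the full model these per-scale feature maps would be learned and then fed to the bidirectional GRU layers for temporal context.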
dattnguyenx / MSVoxelDNN
PyTorch implementation of the paper "Multiscale deep context modeling for lossless point cloud geometry compression".
wedeling / EasySurrogate
The VECMA toolkit for creating surrogate models of multiscale systems.
Greak-1124 / LMFFNet
Real-time semantic segmentation is widely used in autonomous driving and robotics. Most previous networks achieve high accuracy with complicated, computation-heavy models, while existing lightweight networks generally shrink parameter counts by sacrificing segmentation accuracy; balancing parameters and accuracy is therefore critical for real-time semantic segmentation tasks. In this paper, we introduce a Lightweight Multiscale-Feature-Fusion Network (LMFFNet) mainly composed of three types of components: the Split-Extract-Merge Bottleneck (SEM-B) block, the Feature Fusion Module (FFM), and the Multiscale Attention Decoder (MAD). The SEM-B block extracts sufficient features with fewer parameters, the FFMs fuse multiscale semantic features to effectively improve segmentation accuracy, and the MAD recovers the details of the input images through an attention mechanism. Two networks combining different components are built on the LMFFNet model. Without pretraining, the smaller LMFFNet-S achieves 72.7% mIoU on the Cityscapes test set at 512×1024 resolution, with only 1.1 M parameters at an inference speed of 98.9 fps on a GTX 1080 Ti GPU, while the larger LMFFNet-L achieves 74.7% mIoU with 1.4 M parameters at 89.6 fps. On the CamVid test set, LMFFNet-S achieves 67.7% mIoU at 208.9 fps (360×480) and 70.3% mIoU at 72.4 fps (720×960), while LMFFNet-L achieves 68.1% mIoU at 182.9 fps and 71.0% mIoU at 66.5 fps, respectively. The proposed LMFFNets make an adequate trade-off between accuracy and parameter size for real-time semantic segmentation.
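The multiscale feature fusion described above rests on a common pattern: upsample a coarse, semantically rich feature map to the resolution of a fine one and combine the two. A minimal NumPy sketch of that pattern (function names and shapes are illustrative, not LMFFNet's actual FFM, which is learned and uses attention):

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor 2x upsampling of a (C, H, W) feature map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_scales(fine, coarse):
    # bring the coarse map to the fine map's resolution, then
    # concatenate along the channel axis for a following conv layer
    return np.concatenate([fine, upsample2x(coarse)], axis=0)

fine = np.ones((16, 64, 64))    # high-resolution, low-level features
coarse = np.ones((32, 32, 32))  # low-resolution, high-level features
fused = fuse_scales(fine, coarse)  # fused.shape == (48, 64, 64)
```

Fusing at full resolution is what lets a decoder such as the MAD recover spatial detail that the downsampled semantic features alone have lost.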