MultiDampGen
This repository contains the datasets and code accompanying the article "MultiDampGen: A Self-supervised Latent Diffusion Framework for Multiscale Energy-dissipating Microstructure Generation".
<!-- Inverse design -->I am pleased to report that the article has been published: https://doi.org/10.1016/j.asoc.2025.114194
---
🧭 Overview of the workflow
---
⚛️ Datasets & Pre-trained models
The multiscale microstructure dataset encompasses a total of 50,000 samples. The dataset utilized in this study, along with the pre-trained weights of MultiDampGen, can be accessed through the links below.
🔗The damping microstructure dataset
🔗The checkpoints of the MultiDampGen
🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟
<!-- T2C -->
---
🧱 TXT2CAE
The TXT2CAE plugin has been developed based on the ABAQUS-Python API, enabling the generation of three-dimensional finite element models from arbitrary patterns, along with automated mesh generation. Testing has demonstrated successful operation on ABAQUS versions 2018 to 2023. 🔗TXT2CAE
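The core preprocessing step such a plugin performs can be sketched outside of ABAQUS: a text pattern of 0/1 characters is parsed into an occupancy grid, and each solid pixel is extruded through the thickness into a hexahedral cell. This is a minimal stand-in, not the plugin's actual code; the function names `parse_pattern` and `solid_cells` are hypothetical, and the real plugin builds geometry and meshes through the ABAQUS-Python API.

```python
# Hypothetical sketch of the TXT-to-voxel step behind a TXT2CAE-style tool:
# rows of '0'/'1' characters become a 2D occupancy grid, and each solid
# pixel is extruded through the thickness into one element cell per layer.
def parse_pattern(lines):
    """Parse rows of '0'/'1' characters into a list-of-lists occupancy grid."""
    return [[int(c) for c in row.strip()] for row in lines if row.strip()]

def solid_cells(grid, depth):
    """Enumerate (ix, iy, iz) indices of solid cells after extrusion."""
    cells = []
    for iy, row in enumerate(grid):
        for ix, solid in enumerate(row):
            if solid:
                cells.extend((ix, iy, iz) for iz in range(depth))
    return cells

pattern = ["0110",
           "1111"]
grid = parse_pattern(pattern)        # 6 solid pixels
cells = solid_cells(grid, depth=2)   # 12 cells after extrusion
```

In the real plugin, each such cell would be realized as geometry in a `Part` object and meshed automatically; the sketch only shows the pattern-to-cells bookkeeping.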
---
🏗️ Dataset
Mechanical properties were extracted for all 50,000 sets of microstructural data, including yield strength, yield displacement, first stiffness, and second stiffness. The distributions of these properties were then plotted against volume fraction and physical scale.
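A loader for the HDF5 files in the `Dataset` folder might look like the sketch below. The dataset key names `"images"` and `"labels"` are assumptions for illustration; check the actual keys with `list(f.keys())` before use.

```python
import h5py

# Hedged sketch of reading Dataset_Train.h5 / Dataset_Test.h5 with h5py.
# ASSUMPTION: keys "images" (microstructure bitmaps) and "labels" (the four
# mechanical properties: yield strength, yield displacement, first stiffness,
# second stiffness) are illustrative names, not confirmed by the repository.
def load_samples(path, start=0, stop=None):
    with h5py.File(path, "r") as f:
        imgs = f["images"][start:stop]   # e.g. (N, 128, 128) bitmaps
        props = f["labels"][start:stop]  # e.g. (N, 4) property vectors
    return imgs, props
```

Slicing the dataset objects before closing the file (as above) reads only the requested rows into memory, which matters for a 50,000-sample file.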
---
🏦 Architecture of MultiDampGen
The network architecture of MultiDampGen is detailed as follows, with TopoFormer serving as a Variational Autoencoder (VAE) structure, RSV representing a residual network structure, and LDPM designed as a UNet structure with conditional inputs. Both the VAE and LDPM incorporate self-attention mechanisms to enhance their functionality.
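The self-attention used in these blocks treats each spatial position of the latent map as a token. A minimal single-head version is sketched below in NumPy; the real `VAE_AttentionBlock` layers add learned projections, GroupNorm, and a residual connection (see the TopoFormer summary), so this is only the attention arithmetic, with random stand-in weights.

```python
import numpy as np

# Minimal single-head self-attention over flattened spatial positions,
# sketching the computation inside an attention block. Weights are random
# stand-ins; the real layers are learned and wrapped in norm + residual.
def self_attention(x, wq, wk, wv):
    """x: (n_tokens, d). Returns softmax(Q K^T / sqrt(d)) V."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # rows sum to 1
    return attn @ v

rng = np.random.default_rng(0)
d = 8
tokens = rng.normal(size=(32 * 32, d))           # a flattened 32x32 latent map
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(tokens, wq, wk, wv)         # same shape as the input
```

Because every token attends to every other, attention at the 32×32 latent resolution is far cheaper than at the 128×128 pixel resolution, which is one reason the attention blocks sit after downsampling.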
---
🔬 Generation process
The generation process of multiscale microstructures is illustrated in the figure, with the red line representing the specified mechanical performance demands. The scales of the microstructures are randomly determined, and the generation results at each timestep are evaluated through finite element analysis. It can be observed that the hysteretic performance, indicated by the blue line, progressively approaches the target demands.
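Each timestep of that process applies one reverse (denoising) step in latent space. The sketch below shows a standard DDPM-style reverse step in NumPy, not MultiDampGen's exact update: in the real pipeline the predicted noise `eps_pred` comes from the conditional UNet (LDPM) given the target demands, whereas here it is simply passed in as an array.

```python
import numpy as np

# Hedged sketch of one DDPM-style reverse step in latent space.
# eps_pred stands in for the conditional UNet's noise prediction.
def ddpm_reverse_step(x_t, t, eps_pred, betas, rng):
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    # Posterior mean: remove the predicted noise component at step t.
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bar[t])
    x_prev = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:  # add stochastic noise at every step except the last
        x_prev += np.sqrt(betas[t]) * rng.normal(size=x_t.shape)
    return x_prev

betas = np.linspace(1e-4, 0.02, 10)     # toy 10-step noise schedule
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 32, 32))        # a noisy 3-channel 32x32 latent
x = ddpm_reverse_step(x, 5, np.zeros_like(x), betas, rng)
```

Decoding the intermediate latent through the VAE decoder and running finite element analysis on the result is what produces the per-timestep hysteresis curves shown in the figure.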
---
🚀 Generation results
Regardless of how extreme the specified mechanical properties or scales may be, it is possible to generate microstructures that meet the demands. Additionally, by employing a latent diffusion approach, generation efficiency improves by roughly the square of the spatial downsampling factor compared to a Denoising Diffusion Probabilistic Model (DDPM) operating directly in pixel space.
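The quadratic ("square factor") speedup follows from per-step cost scaling with the number of spatial positions processed. With the resolutions in the TopoFormer summary (128×128 pixels compressed to a 32×32 latent), the arithmetic is:

```python
# Per-step cost scales with the number of spatial positions, so diffusing in
# a latent downsampled by a factor r per spatial dimension cuts the position
# count by r**2. Resolutions taken from the TopoFormer summary below.
pixel_res, latent_res = 128, 32
r = pixel_res // latent_res      # spatial downsampling factor, r = 4
speedup = r ** 2                 # -> 16x fewer positions per denoising step
```

This counts spatial positions only; the exact wall-clock gain also depends on channel widths and the relative cost of the VAE decode, so 16x is an order-of-magnitude estimate rather than a benchmark.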
---
🔶 Notes
The structure of the folder is as follows:
```diff
|--Main folder
    |--MultiDampGen
        |--Dataset.py
        |--VAE.py
        |--Discriminator.py
        |--UNet.py
        |--MultiDampGen.py
+       |--TopoFormer.pt
+       |--RSV.pt
+       |--LDPM.pt
!       |--imgs.txt            <<<-------Read---------
    |--Dataset
        |--Test
            |--Dataset_Test.h5
        |--Train
            |--Dataset_Train.h5
!   |--ABAQUS2018              >>>--------Call--------
        |--Documentation
        |--SimulationServices
        |--SolidSQUAD_License_Servers
        |--temp
        |--SIMULIA
            |--Commands
            |--CAE
                |--2018
                    |--plugins
                        |--2018
+                           |--ABQ_TXT2CAE_v1
                                |--Example
                                    |--1.png
                                    |--2.png
                                    |--ASHEN.png
                                    |--icon.png
                                |--ABQ_TXT2CAE.pyc
                                |--aBQ_TXT2CAE_plugin.pyc
                                |--aBQ_TXT2CAEDB.pyc
                                |--aBQ_TXT2CAEDB.py
```
<details>
<summary> Architecture of TopoFormer【Click to expand】 </summary>
<pre><code class="language-python">
============================================================================================================================================
Layer (type:depth-idx) Input Shape Output Shape Param # Kernel Shape
============================================================================================================================================
VAE_M [28, 1, 128, 128] [28, 3, 32, 32] -- --
├─VAE_Encoder_M: 1-1 [28, 1, 128, 128] [28, 3, 32, 32] -- --
│ └─ModuleList: 2-1 -- -- -- --
│ │ └─Conv2d: 3-1 [28, 1, 128, 128] [28, 64, 128, 128] 640 [3, 3]
│ │ └─VAE_ResidualBlock: 3-2 [28, 64, 128, 128] [28, 64, 128, 128] 74,112 --
│ │ └─Conv2d: 3-3 [28, 64, 129, 129] [28, 128, 64, 64] 73,856 [3, 3]
│ │ └─VAE_ResidualBlock: 3-4 [28, 128, 64, 64] [28, 128, 64, 64] 295,680 --
│ │ └─Conv2d: 3-5 [28, 128, 65, 65] [28, 256, 32, 32] 295,168 [3, 3]
│ │ └─VAE_AttentionBlock: 3-6 [28, 256, 32, 32] [28, 256, 32, 32] 263,680 --
│ │ └─VAE_ResidualBlock: 3-7 [28, 256, 32, 32] [28, 256, 32, 32] 1,181,184 --
│ │ └─GroupNorm: 3-8 [28, 256, 32, 32] [28, 256, 32, 32] 512 --
│ │ └─SiLU: 3-9 [28, 256, 32, 32] [28, 256, 32, 32] -- --
│ │ └─Conv2d: 3-10 [28, 256, 32, 32] [28, 64, 32, 32] 147,520 [3, 3]
│ │ └─Conv2d: 3-11 [28, 64, 32, 32] [28, 6, 32, 32] 390 [1, 1]
├─VAE_Decoder_M: 1-2 [28, 3, 32, 32] [28, 1, 128, 128] -- --
│ └─ModuleList: 2-2 -- -- -- --
│ │ └─Conv2d: 3-12 [28, 3, 32, 32] [28, 64, 32, 32] 256 [1, 1]
│ │ └─Conv2d: 3-13 [28, 64, 32, 32] [28, 256, 32, 32] 147,712 [3, 3]
│ │ └─VAE_AttentionBlock: 3-14 [28, 256, 32, 32] [28, 256, 32, 32] 263,680 --
│ │ └─VAE_ResidualBlock: 3-15 [28, 256, 32, 32] [28, 256, 32, 32] 1,181,184 --
│ │ └─Upsample: 3-16 [28, 256, 32, 32] [28, 256, 64, 64] -- --
│ │ └─Conv2d: 3-17 [28, 256, 64, 64] [28, 128, 64, 64] 295,040 [3, 3]
│ │ └─VAE_ResidualBlock: 3-18 [28, 128, 64, 64] [28, 128, 64, 64] 295,680 --
│ │ └─Upsample: 3-19 [28, 128, 64, 64] [28, 128, 128, 128] -- --
│ │ └─Conv2d: 3-20 [28, 128, 128, 128] [28, 128, 128, 128] 147,584 [3, 3]
│ │ └─VAE_ResidualBlock: 3-21 [28, 128, 128, 128] [28, 128, 128, 128] 295,680 --
│ 