
User Manual of SASQUATCH

Sensitivity Analysis and Uncertainty QUAnTification in Cardiac Hemodynamics

A framework for sensitivity analysis and uncertainty quantification in cardiac hemodynamics

Date: June 2024
Mail: thiel@ame.rwth-aachen.de or geno.jayadi@rwth-aachen.de

A) General Idea

This project consists of three major parts:

  1. Data Analysis
  2. Surrogate Model Comparison
  3. Sensitivity Analysis and Uncertainty Quantification

All output of the project is stored in the local output_data directory, which the package generates.

Data Analysis (da)

Performs data analysis to gain insight into the data, which helps to find outliers and to check whether the program interprets the data correctly. Different plots are produced; some are plotted by default:

  • Distribution of output parameters
  • Correlation Matrix of data
  • Reduced Pairplot of data

and other toggleable ones:

  • Boxplot of data distribution of the output parameters
  • Non-reduced Pairplot of data
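As a rough illustration of what the correlation-matrix check looks for, here is a numpy-only sketch with synthetic data (SASQUATCH itself presumably builds these plots with pandas/seaborn; the columns below are made up):

```python
import numpy as np

# Synthetic stand-in for an input/output data set: one input, one strongly
# correlated output, one unrelated column. A correlation matrix like the one
# SASQUATCH plots makes the first relationship obvious and flags the second
# column as uninformative.
rng = np.random.default_rng(42)
x = rng.normal(size=200)
data = np.column_stack([
    x,                                        # input parameter
    2 * x + rng.normal(scale=0.1, size=200),  # output driven by the input
    rng.normal(size=200),                     # unrelated column
])
corr = np.corrcoef(data, rowvar=False)        # 3x3 correlation matrix
```

In a real run the same information comes out of the default correlation-matrix plot rather than a raw numpy array.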

Surrogate Model Comparison (sc)

Surrogate Model Comparison shows how well different surrogate models predict the data. To assess them, the mean $R^2$-score is plotted by default, alongside some optional plots:

  • mean timings (training and testing) over a given number of folds
  • $R^2$-score, Mean Absolute Error, and Root Mean Squared Error for each model and output parameter
  • actual vs. predicted values for each output parameter and model

All models are saved using pickle. Use pkl.load() and model.predict() to reuse them in another project.
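A minimal sketch of that reuse pattern, with an in-memory buffer standing in for the saved .pkl file and a toy class standing in for a fitted surrogate (both are illustrative, not part of the package):

```python
import io
import pickle as pkl
import numpy as np

class TinyModel:
    """Hypothetical stand-in for a fitted surrogate saved by the framework."""
    def __init__(self, slope):
        self.slope = slope

    def predict(self, X):
        return self.slope * np.asarray(X, dtype=float)

buf = io.BytesIO()                # stands in for the .pkl file on disk
pkl.dump(TinyModel(2.0), buf)     # what the framework does when saving
buf.seek(0)

model = pkl.load(buf)             # what your other project does
pred = model.predict([1.0, 2.0])  # reuse the surrogate without retraining
```

With a real saved model, replace the buffer with `open("path/to/model.pkl", "rb")`; the actual file names and locations are framework-specific.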

Sensitivity Analysis (sa)

The Sensitivity Analysis provides further insight into the dependencies between the input and output parameters. A set of plots is produced by default for the chosen models.

Uncertainty Quantification (uq)

Performs the sensitivity analysis over a varying bounds range. It additionally plots the input variation once for each input parameter.

Project Specific (ps)

It is also possible to add a project-specific program.

B) Using for Your Own Project

General usage

  1. Provide the input/output data in the input_data folder
  2. Set your preferences in config.txt (see chapter C) Settings and Preferences for details)
  3. Run main.py
  4. See the results in the output_data folder

Adding project-specific features #TODO

Adding surrogate models:

  • go to models.py and add a new model class, using the NIPCE class as a template
  • the class needs init(), fit(), predict(), get_params() and set_params() functions
  • register the new model in creatingModels()
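The required interface can be sketched as follows. MeanSurrogate is a hypothetical toy model that always predicts the training mean; it exists only to illustrate the five methods, not to be a useful surrogate:

```python
import numpy as np

class MeanSurrogate:
    """Toy surrogate implementing the interface SASQUATCH's model classes
    expose (scikit-learn style): init, fit, predict, get_params, set_params."""

    def __init__(self, **kwargs):
        self.mean_ = None

    def fit(self, X, y):
        # "Training": store the column-wise mean of the training outputs.
        self.mean_ = np.asarray(y, dtype=float).mean(axis=0)
        return self

    def predict(self, X):
        # Return the stored mean for every query point.
        X = np.asarray(X)
        return np.tile(self.mean_, (X.shape[0], 1))

    def get_params(self, deep=True):
        # No hyperparameters in this toy example.
        return {}

    def set_params(self, **params):
        return self
```

A real model would of course learn something from X; what matters for the framework is that the five methods exist with these signatures.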

Adding preprocessing function:

  • define a new function project_specific_preprocessing() in preprocessing.py
  • add it in read_data() in initialization.py
  • see mv_uq_project_preprocessing() for an example

Adding hyperparameters:

  • define your parameter in config.txt
  • read the parameter in main.py like the others in the section Initialize Hyperparameter with: your_param = parameter[your_param]
  • your_param can now be used in the main class
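In miniature, the pattern looks like this (here `parameter` is a hypothetical stand-in for the dict main.py builds from config.txt; the key name is made up):

```python
# Stand-in for the dict of parsed config values; values read from a text
# config typically arrive as strings.
parameter = {"my_threshold": "0.5"}

# The pattern from the step above, plus an explicit type conversion.
my_threshold = float(parameter["my_threshold"])
```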

C) Settings and Preferences

There are a couple of preferences you can set. Write the name of the variable and its value(s) separated by spaces. The order of the names is arbitrary, but the order of multiple values for one name is not. Comments can be added to the config file using #.
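The format can be illustrated with a toy parser (a sketch only; SASQUATCH's real parser may behave differently, e.g. in how it types values):

```python
def parse_config(text):
    """Toy parser for the config format described above: one
    `name value [value ...]` per line, `#` starts a comment."""
    params = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        name, *values = line.split()
        # Keep a list for multiple values, a bare string for a single one.
        params[name] = values if len(values) != 1 else values[0]
    return params

cfg = parse_config("""
run_type sc               # surrogate model comparison
input_parameter y z alpha # order of values matters
n_splits 10
""")
```

Note that `input_parameter` keeps its values in order, matching the rule that the order of several inputs is significant.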

Here is a short description:

Data

| Name | Example input | Note |
| ---- | ---- | ---- |
| run_type | su | use any of da, sc, sa, uq, or ps |
| data_path | data_df.csv or ../03_Results | .xlsx, .csv, and Ansys .out files |
| input_parameter | y z alpha | use the exact column names of the .csv/.xlsx |
| input_units | mm mm ° | units of the input parameters; must match the number of entries in input_parameter |
| input_parameter_label | $y_d$ $z_d$ $\alpha$ $R_L$ | specify if you want labels that differ from the input parameter names in the .csv |
| output_parameter | energy-loss wss | use the exact column names of the .csv/.xlsx to determine which outputs to consider |
| output_units | Pa m^3 Pa | units of the output parameters; must match the number of entries in output_parameter |
| output_parameter_label | Eloss WSS | specify if you want custom labels; if not specified, the exact column names are used |
| output_name | example | name of the folder where the output will be stored |
| is_transient | True | whether the data is transient; reduced data is saved in test_after_prep.csv |
| normalize | True | normalize the data |
| scaler | none | scale the data; available options: none, minmax, standard |
| save_data | True | whether to save the data to a .csv file (saved_data.csv) |
| get_mean | True | mean over e.g. timesteps in the data set; averaged data is saved in reduced_data.csv |

Models

| Name | Input | Explanation |
| ---- | ---- | ---- |
| models | Svr-Rbf | Support Vector Regression with radial basis function kernel |
| models | Svr-Linear | Support Vector Regression with linear kernel |
| models | Svr-poly | Support Vector Regression with polynomial kernel |
| models | Svr-Sigmoid | Support Vector Regression with sigmoid kernel |
| models | RF | Random Forest |
| models | KNN | K-Nearest Neighbors |
| models | LR | Linear Regression |
| models | Bayesian-Ridge | Bayesian Ridge |
| models | NIPCE | Non-intrusive polynomial chaos expansion (using chaospy) |
| models | GP | Gaussian Process |
| models | DecisionTree | Decision Tree |
| NIPCE_order | 1 2 3 4 | specify one or multiple orders for the NIPCE model |

Training and Testing

| Name | Example input | Explanation |
| ---- | ---- | ---- |
| n_splits | 10 | number of splits for k-fold cross-validation |
| shuffle | True | random order of data points |
| random_state | 42 | the random seed to be used |
| metrics | r2_score | metric used for testing |
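The split implied by these settings can be sketched in plain numpy (the package itself most likely delegates to scikit-learn's cross-validation utilities, so treat this as an illustration of the settings, not the implementation):

```python
import numpy as np

# Settings from the table above.
n_splits, random_state = 10, 42

idx = np.arange(100)                     # indices of 100 data points
rng = np.random.default_rng(random_state)
rng.shuffle(idx)                         # shuffle=True: randomize the order
folds = np.array_split(idx, n_splits)    # each fold serves once as test set
```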

Plotting

| Name | Example input | Note |
| ---- | ---- | ---- |
| plot_data | True | pair plot (scatter) of the data frame (currently not used) |
| is_plotting_... | True | specify whether the respective plot should be produced |
| number_of_top_models | 3 | how many models are plotted, in descending order |
| plot_type | pdf png | format(s) in which all plots are saved; multiple formats may be specified |

Sensitivity Analysis

| Name | Example input | Explanation |
| ---- | ---- | ---- |
| sa_models | NIPCE GP | defines which model(s) to use for the SA |
| sa_sobol_indice | ST or S1 | total-order or first-order SA |
| sa_17_segment_model | NIPCE | defines the model for the segment plot |
| sa_sample_size | 512 | sample size for the SA |
| sa_output_parameter | WSS Eloss ... | defines the output parameters for the SA calculation |
| input_start | average or median | the base point used as the starting point for the perturbation; use specific for user-defined starting points |
| input_start_point | 1 2 3 4 | the starting points to be used if input_start is set to specific |
| input_start_perturbation | 10 | percentage of the perturbation from the starting point of the input; either a single value or the same number of values as start points |
| output_parameter_sa_plot | WSS Eloss ... | defines the output parameters for plotting in the GSA |
| output_units_sa_plot | Pa m^3 Pa | units of the output parameters for plotting in the GSA |
| output_parameter_sa_plot_label | WSS Eloss ... | specify if you want labels that differ from the output parameter names in the .csv |

Uncertainty Quantification

| Name | Example input | Explanation |
| ---- | ---- | ---- |
| uq_output_parameter | [a,b] [c,d] | grouped parameters for plotting the output variation in the UQ; parameters are grouped in brackets, separated by commas within a group and by spaces between groups; if left empty, all parameters are used in a single group |
| uq_output_parameter_label | [e,f] [g,h] | labels corresponding to the parameters in uq_output_parameter; same format as the input |
| uq_output_units | AB CD | output units corresponding to the output groups; same number of groups as in uq_output_parameter |

Project specific

Here you can add your project-specific settings. In the case of the Mitral Valve Uncertainty Quantification, they are the following:

| Name | Example input | Explanation |
| ---- | ---- | ---- |
| sa_17_segment_model | Lin-Reg | which model is used for the 17-segment plot |

D) Requirements

It might be necessary to install the following dependencies for the current stable build of this project:
Python: Version 3.9.13

| Library | Version |
| ---- | ---- |
| numpy | 1.26.4 |
| pandas | 2.2.3 |
| matplotlib | 3.10.2 |
| seaborn | 0.13.2 |
| scipy | 1.15.3 |
| scikit-learn | 1.6.1 |
| chaospy | 4.2.1 |
| SALib | 1.5.1 |
| statsmodels | 0.14.4 |

E) Application

This tool was used in the following publication:

Quantifying the Impact of Mitral Valve Anatomy on Clinical Markers Using Surrogate Models and Sensitivity Analysis

The input/output pairs used for training the surrogate models were created using Ansys Fluent CFD simulations. More details on this automated CFD model and the corresponding setup files can be found here:

https://doi.org/10.5281/zenodo.12519189

https://www.youtube.com/watch?v=gO0ZYzpblLA
