Nomad
A Genetic Algorithm (GA) / Discrete Particle Swarm Optimization / Hybrid (GA-PSO) for nuclear fuel optimization using ML surrogates (DNN, KNN, Random Forest, Ridge) and OpenMC. Optimizes fuel loading patterns for a target k-eff and minimal Power Peaking Factor (PPF).
NOMAD: Nuclear Optimization with Machine-learning-Accelerated Design (a Genetic Algorithm (GA) / Discrete Particle Swarm Optimization / Hybrid (GA-PSO) with Deep Neural Network (DNN)/KNN/Random Forest/Ridge/Gradient Boosting for fuel pattern optimization)
NOMAD is a sophisticated tool for optimizing nuclear reactor core fuel loading patterns. It leverages a Genetic Algorithm (GA) / Discrete Particle Swarm Optimization / Hybrid (GA-PSO) coupled with machine learning (ML) models to efficiently determine fuel assembly enrichment arrangements that achieve a target multiplication factor (k_eff) while minimizing the Power Peaking Factor (PPF). This ensures safe, efficient, and compliant reactor operation.
By integrating ML models as high-speed surrogates for computationally expensive neutron transport simulations (e.g., via OpenMC), NOMAD significantly accelerates the optimization process while maintaining accuracy.

Table of Contents
- Overview
- How It Works
- Requirements
- Installation
- Usage Guide
- Results
- DPSO & Hybrid mode
- Example Configuration
- Disclaimer
- Contributing
- License
Overview
NOMAD optimizes nuclear reactor core designs by:
- Target: Achieving a specific k_eff while minimizing PPF.
- Method: Combining a Genetic Algorithm with ML-based surrogates for fast fitness evaluation.
- Simulation: Using OpenMC for high-fidelity neutron transport calculations.
- Iterative Improvement: Continuously refining ML models with new simulation data.
This hybrid approach enables rapid exploration of fuel enrichment configurations, making it a powerful tool for nuclear reactor core design.
How It Works
- Initial Data Generation: Run OpenMC simulations for a diverse set of fuel enrichment configurations to create a baseline dataset.
- ML Model Training:
  - $k_{eff}$ Interpolator: A K-Nearest Neighbors (KNN) regressor predicts $k_{eff}$ for a given fuel pattern.
  - PPF Interpolator: Predicts the Power Peaking Factor (PPF) using KNN, Random Forest, Ridge regression, or a Deep Neural Network (DNN) (configurable). The DNN is a more advanced option capable of capturing complex non-linear relationships.
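The $k_{eff}$ interpolator described above can be sketched with scikit-learn. The four-assembly patterns and $k_{eff}$ values below are toy data for illustration only, not NOMAD's actual dataset format:

```python
# Minimal sketch of a KNN k_eff surrogate: each training sample is a
# per-assembly enrichment vector paired with its simulated k_eff.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy dataset: 6 fuel patterns (4 assemblies each) with their simulated k_eff
patterns = np.array([
    [2.0, 2.0, 3.0, 3.0],
    [2.5, 2.5, 2.5, 2.5],
    [3.0, 3.0, 2.0, 2.0],
    [3.5, 2.0, 2.0, 3.5],
    [2.0, 3.5, 3.5, 2.0],
    [3.0, 2.5, 2.5, 3.0],
])
keff = np.array([0.98, 1.00, 1.01, 1.02, 0.99, 1.01])

model = KNeighborsRegressor(n_neighbors=3, weights='distance')
model.fit(patterns, keff)

# Predict k_eff for a new candidate pattern without running OpenMC
candidate = np.array([[2.5, 2.5, 3.0, 2.5]])
print(model.predict(candidate))
```

Because the surrogate answers in microseconds rather than the minutes an OpenMC run takes, the GA can evaluate thousands of candidate patterns per generation.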
- Choosing the PPF Predictor (Experimental): The optimal choice of PPF predictor is not fixed. During testing, Random Forest sometimes performs better than KNN, and sometimes the opposite is true. For best results, run the full optimization process with both models and use the superior result.
  Pro Tip:
  - First, run the entire optimization with `knn` set as the PPF regression model.
  - Once complete, rename the final checkpoint file in the `data/` directory (e.g., from `ga_checkpoint.json` to `ga_checkpoint_knn.json`).
  - Next, change the model in your configuration file to `random_forest` and run the optimization again.
  - The Random Forest model will benefit from the large dataset (`keff_interp_data.json` and `ppf_interp_data.json`) already generated, potentially yielding more accurate predictions and a different, sometimes better, outcome.
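The checkpoint-renaming step can be scripted. The helper below is a convenience sketch (not part of NOMAD) that assumes the `data/ga_checkpoint.json` path used above:

```python
from pathlib import Path

def archive_checkpoint(data_dir, model_tag):
    """Rename data/ga_checkpoint.json to ga_checkpoint_<model_tag>.json.

    Returns the new path, or None if no checkpoint exists yet.
    """
    checkpoint = Path(data_dir) / "ga_checkpoint.json"
    if not checkpoint.exists():
        return None
    target = checkpoint.with_name(f"ga_checkpoint_{model_tag}.json")
    checkpoint.rename(target)
    return target

# Typical use between runs: archive the KNN result, then switch the
# config to random_forest and restart the optimizer.
# archive_checkpoint("data", "knn")
```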
- Genetic Algorithm Cycle: The GA evolves a population of fuel loading patterns over thousands of generations, evaluating fitness with the ML predictors for speed.
- Verification: The best fuel pattern found by the GA is verified with a full, high-fidelity OpenMC simulation.
- Iterative Improvement: The results from the verification simulation are added back into the dataset, and the ML models are retrained. This makes the predictors more accurate for all subsequent GA cycles.
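The cycle above can be condensed into a toy surrogate-driven GA. Everything here is illustrative, not NOMAD's implementation: the function names, the simple one-point crossover, and the fitness weighting that trades off distance from the target $k_{eff}$ against PPF are all assumptions for the sketch:

```python
# Illustrative surrogate-accelerated GA: evolve enrichment patterns using
# fast ML stand-ins for k_eff and PPF, then the winner would be verified
# with a full OpenMC run.
import random

def fitness(pattern, predict_keff, predict_ppf, target_keff=1.0):
    # Reward closeness to the target k_eff; penalize a high peaking factor
    return -abs(predict_keff(pattern) - target_keff) - 0.1 * predict_ppf(pattern)

def ga_cycle(enrichments, n_assemblies, predict_keff, predict_ppf,
             pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(enrichments) for _ in range(n_assemblies)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, predict_keff, predict_ppf),
                 reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_assemblies)
            child = a[:cut] + b[cut:]            # one-point crossover
            j = rng.randrange(n_assemblies)
            child[j] = rng.choice(enrichments)   # single-gene mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, predict_keff, predict_ppf))

# Toy surrogates standing in for the trained ML predictors
best = ga_cycle([2.0, 2.5, 3.0, 3.5], 8,
                predict_keff=lambda p: 0.9 + 0.01 * sum(p),
                predict_ppf=lambda p: max(p) / (sum(p) / len(p)))
print(best)
```

In NOMAD the returned pattern would then go through the verification and retraining steps described above before the next cycle begins.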
Requirements
Software Dependencies
- Python 3.8+ with the following packages:

```bash
pip install numpy scipy pandas matplotlib scikit-learn torch
```

- OpenMC: A working installation is required for physics simulations. See the OpenMC documentation for installation instructions.
Input Files
Ensure the following OpenMC input files are in the same directory as RunOptimizer.ipynb:
`geometry.xml`, `materials.xml`, `settings.xml`, `tallies.xml`
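A quick pre-flight check can catch a missing input file before a run starts. This is a convenience sketch, not part of NOMAD:

```python
# Verify the four OpenMC input files are present in the working directory
# before launching RunOptimizer.ipynb.
from pathlib import Path

required = ["geometry.xml", "materials.xml", "settings.xml", "tallies.xml"]
missing = [name for name in required if not Path(name).exists()]
if missing:
    print(f"Missing OpenMC input files: {missing}")
else:
    print("All OpenMC input files found.")
```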
Installation
- Clone this repository:

```bash
git clone https://github.com/XxNILOYxX/nomad.git
cd nomad
```

- Install Python dependencies:

```bash
pip install -r requirements.txt
```

- Install OpenMC following the official instructions.
- Ensure all OpenMC input files are correctly configured and placed in the root directory.
Step 1: Define Fuel Materials and Assemblies
This is the most critical step in setting up your model for NOMAD. The optimizer works by individually adjusting the enrichment of every single fuel assembly. For this to work, your OpenMC model must be built with a specific structure:
Each fuel assembly in your core must be represented by its own unique material and its own unique cell (or universe).
Think of it like giving each assembly a unique ID that the program can find and modify. If you define one material and use it for multiple assemblies, the optimizer will not be able to assign different enrichment values to them.
How to Structure Your Model
- Unique Materials: If your core has 150 fuel assemblies, you must create 150 distinct `<material>` blocks in your `materials.xml` file. It's essential that their `id` attributes are sequential (e.g., 3, 4, 5, ..., 152).
- Unique Cells/Universes: Similarly, in your `geometry.xml`, each of these unique materials must fill a unique cell that represents the fuel region of an assembly.
Example Scenario (150 Assemblies)
Imagine your model's material IDs start at 3. Your materials.xml must be structured as follows:
```xml
<material depletable="true" id="3" name="Fuel for Assembly 1">
</material>
<material depletable="true" id="4" name="Fuel for Assembly 2">
</material>
...
<material depletable="true" id="152" name="Fuel for Assembly 150">
</material>
```
In your `config.ini`, you would then set:

```ini
num_assemblies = 150
start_id = 3
```
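These two settings determine which material IDs the optimizer will touch. The sketch below reads them with the standard library; the `[core]` section name is an assumption for illustration, so match it to your actual `config.ini` layout:

```python
# Read num_assemblies and start_id, then derive the range of material IDs
# the optimizer targets (e.g., 3..152 for 150 assemblies starting at ID 3).
import configparser

config = configparser.ConfigParser()
config.read_string("""
[core]
num_assemblies = 150
start_id = 3
""")

num_assemblies = config.getint("core", "num_assemblies")
start_id = config.getint("core", "start_id")

material_ids = range(start_id, start_id + num_assemblies)
print(min(material_ids), max(material_ids))  # → 3 152
```

In a real run you would call `config.read("config.ini")` instead of `read_string`.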
Pro-Tip: When generating your model files programmatically (e.g., in a Jupyter Notebook), always use the "Restart Kernel and Clear All Outputs" command before running your script. This prevents old data from being cached and ensures your material and cell IDs are created fresh and correctly, avoiding hard-to-debug errors.
Example Code for Creating Individual Fissile Materials
Use the following code as inspiration and modify it for your own reactor core:
```python
import openmc

all_materials_list = []
# You can adjust this number as needed
num_assemblies = 150
fuel_temperature = 900.0  # placeholder value in K; use your fuel temperature

print("Creating unique fuel materials...")
# This loop creates variables fuel_1, fuel_2, ... fuel_150
for i in range(1, num_assemblies + 1):
    # Define the material object
    fuel_material = openmc.Material(name=f'Fissile fuel Assembly {i}')
    # The weight fractions and density below are placeholders;
    # replace every value with your own design data
    fuel_material.add_nuclide('U235', 0.020, 'wo')
    fuel_material.add_nuclide('U238', 0.700, 'wo')
    fuel_material.add_nuclide('Pu238', 0.002, 'wo')
    fuel_material.add_nuclide('Pu239', 0.050, 'wo')
    fuel_material.add_nuclide('Pu240', 0.020, 'wo')
    fuel_material.add_nuclide('Pu241', 0.005, 'wo')
    fuel_material.add_nuclide('Pu242', 0.003, 'wo')
    fuel_material.add_element('Zr', 0.200, 'wo')
    fuel_material.set_density('g/cm3', 10.0)  # placeholder; use your density
    fuel_material.depletable = True
    fuel_material.temperature = fuel_temperature
    # This line dynamically creates a variable named fuel_1, fuel_2, etc.
    globals()[f'fuel_{i}'] = fuel_material
    # Add the new material to the list used to build the openmc.Materials file
    all_materials_list.append(fuel_material)
```
