222 skills found · Page 6 of 8
StevenShaw98 / Artificial Lemming Algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems
venkat-0706 / Titanic Survival Prediction: A machine learning project predicting Titanic passenger survival using data preprocessing, feature engineering, and model optimization with Logistic Regression, Random Forest, and XGBoost.
Shihong-Yin / MSCMO MUCP: Yin S, Xu N, Shi Z, Xiang Z*. Collaborative path planning of multi-unmanned surface vehicles via multi-stage constrained multi-objective optimization. Advanced Engineering Informatics, 2025, 65: 103115.
pngts / Nonlinear Parameter Estimation In Thermodynamic Models: The reliable solution of nonlinear parameter estimation problems is an essential computational and mathematical problem in process systems engineering, in both on-line and off-line applications. Parameter estimation in semi-empirical models for vapor-liquid equilibrium (VLE) data modelling plays an important role in the design, optimization, and control of separation units. Conventional optimization methods may not be reliable, since they do not guarantee convergence to the global optimum sought in the parameter estimation problem. In this work we demonstrate a technique, based on genetic algorithms (GA), that can solve the nonlinear parameter estimation problem with high reliability, providing a high probability that the global optimum is found. Two versions of stochastic optimization techniques are evaluated and compared on nine vapor-liquid equilibrium problems: our base genetic algorithm and a hybrid algorithm. Reliable experimental data from the literature on vapor-liquid equilibrium systems were correlated using the UNIQUAC equation for activity coefficients. Our results indicate that this method, when properly implemented, is a robust procedure for nonlinear parameter estimation in thermodynamic models. Given that new globally optimal parameter values are found with the proposed method, our results suggest that several sets of parameter values published in the DECHEMA VLE Data Collection correspond to local rather than global minima.
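The GA-based parameter estimation described above can be sketched in a few lines. The following is a minimal, illustrative sketch only: the model form (y = a·exp(b·x)), the synthetic data, and all GA settings are assumptions for demonstration, not taken from the repository.

```python
import numpy as np

# Hypothetical sketch of GA-based nonlinear parameter estimation.
# Model y = a * exp(b * x) and all data are illustrative.
rng = np.random.default_rng(0)

def model(params, x):
    a, b = params
    return a * np.exp(b * x)

# Synthetic "experimental" data generated from known parameters a=2.0, b=0.5
x_data = np.linspace(0.0, 2.0, 20)
y_data = model((2.0, 0.5), x_data)

def fitness(params):
    # Sum of squared residuals: lower is better
    return np.sum((model(params, x_data) - y_data) ** 2)

def run_ga(pop_size=60, generations=200, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]  # truncation selection
        # Crossover: average random pairs of elite parents
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1)
        # Mutation: small Gaussian perturbation keeps the search stochastic
        children += rng.normal(0.0, 0.1, size=children.shape)
        pop = np.clip(children, lo, hi)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]

best = run_ga()
print(best)
```

Because the population explores the whole bounded parameter space rather than following a single gradient path, a GA of this kind avoids the convergence-to-local-minima failure mode the description attributes to conventional optimizers.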
wchen459 / MO PaDGAN Optimization: Reparameterizing Engineering Designs for Augmented Multi-objective Optimization
Hyhello / Geo Cli: An engineering tool for GeoJSON performance optimization.
DeriZSY / Hybrid Mopso: Versions of hybrid PSO algorithms for engineering optimization
john-data-chen / Next Dnd Starter Kit: A production-grade Kanban board application. Showcases engineering practices, decision-making, and AI-assisted optimization for senior full-stack roles.
tanvibhayani / Tanvi Bhayani: Earned a Machine Learning certificate covering data handling, feature engineering, model building, and accuracy optimization. Worked on real datasets using Python, Pandas, NumPy, Matplotlib, and ML algorithms such as Linear Regression, Decision Trees, and KNN.
birukG09 / Calpyt1: Calpyt1 is a next-level Python framework for symbolic, numerical, and applied calculus, designed for engineering, robotics, physics, finance, and AI/ML applications. It integrates symbolic math, numerical solvers, optimization, simulation, and visualization in a single modular package.
Bribak / SURFY2: This repository constitutes SURFY2 and corresponds to the bioRxiv preprint 'Updating the in silico human surfaceome with meta-ensemble learning and feature engineering' by Daniel Bojar. SURFY2 is a machine learning classifier that predicts whether a human transmembrane protein is located at the cell surface (the plasma membrane) or in one of the intracellular membranes, based on the sequence characteristics of the protein. Using the data described in the recent publication by Bausch-Fluck et al. (https://doi.org/10.1073/pnas.1808790115), SURFY2 considerably improves on their reported classifier SURFY in terms of accuracy (95.5%), precision (94.3%), recall (97.6%), and area under the ROC curve (0.954) on a test set never seen by the classifier before. SURFY2 consists of a layer of 12 base estimators generating 24 new engineered features (class probabilities for both classes), which are appended to the original 253 features. A soft-voting classifier with three optimized base estimators (Random Forest, Gradient Boosting, and Logistic Regression) and optimized voting weights is then trained on this expanded dataset, yielding the final prediction. The motivation behind SURFY2 is to provide an updated and improved version of the in silico human surfaceome to facilitate research and drug development on human surface-exposed transmembrane proteins. Additionally, SURFY2 enabled insights into the biological properties of these proteins and generated several new hypotheses and ideas for experiments. The workflow is as follows:
1) dataPrep: Gets training data from data.xlsx, labels it according to surface class, and outputs 'train_data.csv'.
2) split: Gets train_data.csv, splits it into training, validation, and test data, and outputs 'train.csv', 'val.csv', 'test.csv'.
3) main_val: Used for optimizing hyperparameters of the base estimators and the estimators & weights of the voting classifier. Stores all estimators. Evaluates the meta-ensemble classifier SURFY2 on the validation set.
4) classifier_selection: All base estimators and meta-ensemble approaches are tested on the initial dataset as well as the expanded dataset including the engineered features, and compared by cross-validation score.
5) main_test: Evaluates SURFY2 on the separate test set (trained on training + validation set).
6) testing_SURFY: Evaluates the original SURFY through cross-validation and on the validation and test sets.
7) pred_unlabeled: Uses SURFY2 to predict the surface label (plus prediction score) for unlabeled proteins in data.xlsx. Also gets the feature importances of the voting classifier estimators.
8) getting_discrepancies: Compares predictions with those made by SURFY ('surfy.xlsx') and stores mismatches, including the 10 most confident mismatches (by SURFY2 classification score) from each class.
9) feature_importances: Plots the 10 most important features for the voting classifier estimators (Random Forest, Gradient Boosting, Logistic Regression) to interpret predictions.
10) base_estimator_importances: Plots the 10 most important features for the two most important base estimators (XGBClassifier and Gradient Boosting).
11) comparing_mismatches: Separates datasets into shared & discrepant predictions (between SURFY and SURFY2), compares feature means, and selects features with the highest class feature-mean differences between prediction datasets. Statistically analyzes differences in feature means between classes in both prediction datasets, and plots 9 representative features with their means grouped by class and prediction dataset to rationalize discrepant predictions.
12) tSNE_surfy2: Performs nonlinear dimensionality reduction with t-SNE on proteins with predictions from both SURFY and SURFY2, and plots the two t-SNE dimensions with proteins labeled by prediction class to show where discrepant predictions reside in the landscape. Also plots surface proteins with the most prevalent annotated functional subclasses, labeled by subclass, to enable comparison with class predictions. Functional annotations came from 'surfy.xlsx'.
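The layered design described above (base-estimator class probabilities appended as engineered features, then a weighted soft-voting ensemble) can be sketched with scikit-learn. This is an illustrative sketch only: the synthetic dataset, the single probability-generating base estimator, and the voting weights are assumptions, not SURFY2's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic data standing in for the 253 sequence features
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Layer 1: a base estimator contributes its class probabilities
# as engineered features appended to the originals
base = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
X_train_exp = np.hstack([X_train, base.predict_proba(X_train)])
X_test_exp = np.hstack([X_test, base.predict_proba(X_test)])

# Layer 2: weighted soft voting over the three estimator families
# named in the description; weights here are arbitrary placeholders
vote = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
    weights=[2, 2, 1],
).fit(X_train_exp, y_train)

acc = vote.score(X_test_exp, y_test)
print(f"held-out accuracy: {acc:.3f}")
```

In a production setting the layer-1 probabilities would normally be generated out-of-fold to avoid leaking training labels into the engineered features; the sketch omits this for brevity.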
kmatveev / Zx Fred Reveng: Reverse engineering and optimizations for the game Fred for the ZX Spectrum
arthurmrodriguez / Advanced Metaheuristics LSGO: This repo contains my Computer Engineering degree final project, completed at the University of Granada, Spain. The main focus is applying state-of-the-art metaheuristic algorithms to a Big Optimization problem with thousands of variables. Our task is to find out how accurately theoretical benchmark results compare to real EEG (electroencephalography) data.
ara3d / Ara3d Studio: A Free Large Model 3D Viewer for Windows optimized for construction and large engineering models.
lintool / OptTrees: Source code for: Nima Asadi, Jimmy Lin, and Arjen P. de Vries. Runtime Optimizations for Tree-Based Machine Learning Models. IEEE Transactions on Knowledge and Data Engineering, 26(9):2281-2292, 2014.
HarshChaudhary1312 / SQL Walmart Sales Data Analysis: Analyzed Walmart sales data from three branches using SQL and Python, focusing on product performance, sales trends, and customer behavior to optimize sales strategies and identify key performance factors. Performed data wrangling, feature engineering, and exploratory data analysis to uncover insights.
AaravMehta-07 / LSTM Random Forest XGBoost Stock Predictor With Optuna: A hybrid AI-based stock market prediction system using LSTM, Random Forest, and XGBoost, built for real-world deployment with Optuna-powered tuning, feature-rich engineering, and ensemble prediction logic. Designed to optimize F1 score and accuracy, this system aims to generate reliable buy/sell signals on stocks.
suzuki1969 / Python Based SMB Optimizer: This provides the Python-based simulated moving bed (SMB) optimizer developed by the Process Information Engineering lab at the Department of Materials Process Engineering, Nagoya University.
Corning-AI / PCBai: Intelligent PCB design agent that generates and optimizes circuit boards through conversational engineering
marcosjimenez / PCompiler: A declarative prompt engineering framework that transforms high-level DSL definitions into optimized, model-specific LLM prompts.