228 skills found · Page 4 of 8
Digitalized-Energy-Systems / Opfgym: A gymnasium-compatible framework for creating reinforcement learning (RL) environments for solving the optimal power flow (OPF) problem. Contains five OPF benchmark environments for comparable research.
joshwadd / Deep Traffic Sign Classification: Using DenseNet- and AlexNet-style architectures applied to the German Traffic Sign Recognition Benchmark (GTSRB) problem.
cline / Cline Bench: Real-world coding benchmarks derived from actual Cline user sessions. Tasks are challenging, verified, and represent genuine engineering problems solved in production.
tip-org / Benchmarks: Tons of Inductive Problems: The Benchmarks.
jakobbossek / Tspgen: TSP benchmark problem generator written in pure R.
power-grid-lib / Pglib Opf Hvdc: Benchmarks for the Optimal Power Flow Problem with HVDC Lines.
tonyzyl / Semisupervised VAE For Regression Application On Soft Sensor: A semi-supervised extension of the VAE for regression, demonstrating its performance on two soft sensor benchmark problems.
kshitija2 / Interactive Multi Objective Reinforcement Learning: Multi-objective reinforcement learning deals with finding policies for tasks with multiple distinct criteria to optimize. Since there may be trade-offs between the criteria, a globally best policy does not necessarily exist; instead, the goal is to find Pareto-optimal policies that are best for particular preference functions. The Pareto Q-learning algorithm searches for all Pareto-optimal policies at once. This project introduces a variant of Pareto Q-learning that poses queries to a user, who is assumed to have an underlying preference function, as well as a scalarized Q-learning algorithm that reduces the dimensionality of the multi-objective space with a scalarization function and elicits user preferences as scalarization weights. The goal is to find the optimal policy for that user's preference function as quickly as possible. Two benchmark problems, Deep Sea Treasure and Resource Collection, were used for the experiments.
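The scalarization step this entry describes can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the Q-values and weights below are hypothetical, and only linear scalarization is shown (the simplest common choice).

```python
def scalarize(q_vector, weights):
    """Linear scalarization: collapse a multi-objective Q-vector into a
    single scalar using user-supplied preference weights."""
    return sum(q * w for q, w in zip(q_vector, weights))

# Hypothetical 2-objective Q-values for three actions, in the spirit of
# Deep Sea Treasure: (treasure value, time penalty).
q_values = [(1.0, -1.0), (5.0, -3.0), (8.0, -14.0)]

# A user whose underlying preference weights treasure 0.7 and time 0.3.
weights = (0.7, 0.3)

scores = [scalarize(q, weights) for q in q_values]
best_action = max(range(len(scores)), key=scores.__getitem__)
```

With these made-up numbers the middle action wins: its high treasure value outweighs its moderate time penalty under this user's weights, which is exactly the kind of trade-off a single-objective Q-learner could not express without the weights.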
Zhangyong-Tang / MoETrack: TIP'2025, "Revisiting RGBT Tracking Benchmarks from the Perspective of Modality Validity: A New Benchmark, Problem, and Solution".
kevin-thankyou-lin / Active 3d Gym: Active3DGym is a set of benchmark environments for the active view planning problem in robotics.
xuefeng-zhu5 / EDTC: Code and dataset for the paper "Evidential Detection and Tracking Collaboration: New Problem, Benchmark and Algorithm for Robust Anti-UAV System".
Maor-Oz / Medical Segmentation Decathlon U Net CNN With Generalized Dice Coefficient: With recent advances in machine learning, semantic segmentation algorithms are becoming increasingly general-purpose and translatable to unseen tasks. Many key algorithmic advances in medical imaging are validated on only a small number of tasks, limiting our understanding of the generalizability of the proposed contributions. A model that works out of the box on many tasks, in the spirit of AutoML (Automated Machine Learning), would have a tremendous impact on healthcare. The field of medical imaging is also missing a fully open-source, comprehensive benchmark for general-purpose algorithmic validation and testing that covers a large span of challenges, such as small data, unbalanced labels, large-ranging object scales, multi-class labels, and multimodal imaging. To address these problems, as part of the MSD challenge, this project proposes a generic machine learning algorithm applied to two organ tasks: liver (and tumors) and spleen. The generic model implements a U-Net CNN architecture with the Generalized Dice Coefficient as both the loss function and the evaluation metric. The MSD dataset consists of dozens of 3D medical examinations per organ; the 3-dimensional data is transformed into 2D slices as input to the U-Net. Experimental results show that this generic model leads to high segmentation accuracy for each organ separately, without human interaction, and with a relatively short run time compared to traditional segmentation methods.
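The Generalized Dice Coefficient this entry uses as a loss can be written compactly. The sketch below is a plain-Python illustration of the standard formulation (per-class weights equal to the inverse squared label volume, which counteracts class imbalance); it is not taken from this repository, and the toy mask is invented for the example.

```python
def generalized_dice_loss(y_true, y_pred, eps=1e-7):
    """Generalized Dice loss over a flattened mask.

    y_true, y_pred: lists of per-pixel one-hot rows, shape (pixels, classes).
    Each class is weighted by 1 / (label volume)^2, so small structures
    (e.g. tumors) contribute as much as large ones (e.g. the liver).
    """
    n_classes = len(y_true[0])
    numer = denom = 0.0
    for c in range(n_classes):
        g = [row[c] for row in y_true]
        p = [row[c] for row in y_pred]
        w = 1.0 / (sum(g) ** 2 + eps)  # inverse squared label volume
        numer += w * sum(gi * pi for gi, pi in zip(g, p))
        denom += w * sum(gi + pi for gi, pi in zip(g, p))
    return 1.0 - 2.0 * numer / denom

# Toy 2-class mask: a perfect prediction drives the loss to zero.
mask = [[1, 0], [1, 0], [0, 1], [0, 1]]
loss_perfect = generalized_dice_loss(mask, mask)
```

Using the same quantity as both loss and metric, as the entry describes, works because the loss is exactly one minus the (weighted) Dice overlap score.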
kazuho / Manymanythreads: A synthetic benchmark of the C10K problem using pthreads or epoll.
ewang26 / HorizonMath: A benchmark to measure AI progress on unsolved research problems in mathematics.
MarcToussaint / Optimization Course: Python bindings to some optimization benchmarks (robotics problems), intended for testing constrained optimization solvers. Also includes an interface to the solvers within rai. See the Jupyter notebooks in 'tutorials'.
paras2612 / CauseBox: Causal inference is a critical task in various fields such as healthcare, economics, marketing, and education. Recently, there have been significant advances through the application of machine learning techniques, especially deep neural networks. Unfortunately, to date many of the proposed methods are evaluated on different (data, software/hardware, hyperparameter) setups, and consequently it is nearly impossible to compare the efficacy of the available methods or reproduce results presented in original research manuscripts. In this paper, we propose a causal inference toolbox (CauseBox) that addresses the aforementioned problems. At the time of writing, the toolbox includes seven state-of-the-art causal inference methods and two benchmark datasets. By providing convenient command-line and GUI-based interfaces, the CauseBox toolbox helps researchers fairly compare the state-of-the-art methods in their chosen application context against benchmark datasets.
ideas4u / Trading Platform: This is a long-awaited open source project: a trading platform that users involved in stock trading can develop and run themselves. It has been developed specifically for Indian stock market trading. It encompasses the end-to-end trading cycle for intraday trading, but the design is such that it can be easily extended to delivery trading. During the lifecycle of this project we will use modern technologies, but the base code will always be C/C++.

Development Methodology:
========================
We use the "Incremental Life Cycle Model" along with cross-platform (portable) development.

Project Priorities and Assumptions:
===================================
1) Low latency and high performance at all times.
2) Wherever a choice must be made between memory and execution speed, we give preference to speed.
3) Every module developed will be exhaustively tested.

How the Work Proceeds:
======================
Before beginning any new project, we should know the problem statement, so here it is:

"Problem Statement"
-------------------
To build a high-performance, low-latency, end-to-end trading platform, focused on the Indian stock market but not limited to it, which home users can use for intraday trading and which guarantees a profit 99% of the time (but does not guarantee maximized profit).

First Step:
-----------
The key to providing the optimal solution to any problem is understanding the problem. To understand the above problem statement, you need to extract its explicit and implicit requirements. Here is the list of requirements:

Explicit:
---------
1) High performance
2) Low latency
3) End-to-end trading platform
4) Focus on the Indian stock market, but not limited to it
5) Guarantees a profit 99% of the time, but does not guarantee maximized profit
6) Only for intraday trading

Implicit:
---------
1) Bookkeeping of orders and trades (Order Management System)
2) Availability of market data to end users on demand, for identifying stocks and placing orders
3) User account management

I may have missed something; please make suggestions, and after review we will add them here.

Second Step:
------------
To understand the above explicit/implicit requirements, you need knowledge of the various technologies and an in-depth understanding of the problem domain, i.e. the stock market. Once this is achieved, we architect the solution in terms of software and hardware nodes and their integration.

Third Step:
-----------
To solve the problem statement, the above requirements should be decomposed into modules and mapped to the technologies/software/hardware used. Below is the list of modules we have identified:

Modules Included:
=================
Core Modules:
-------------
1) Core Libraries
2) Manual Order Entry System
3) Auto Order Entry System
4) Artificial Exchange
5) Algorithmic Trading Platform
6) Smart Order Router
7) Direct Trading Platform (optional)

Utility Modules:
----------------
8) Logger Server
9) HeartBeat Server

Technologies Used:
==================
Software:
---------
We always use freeware, open source software, or APIs released under GPL/LGPL licences. Any special requirements for building or using a module will be detailed in that module's section.

For development, we generally use:
----------------------------------
Windows 7 as the operating system, but any other OS can be used; our code is platform-independent. Visual Studio 2013's built-in compiler for builds, or Intel compilers, which can be easily integrated with the Visual Studio IDE.

For real time, we generally use:
--------------------------------
Linux (SUSE 10 or above) with real-time extensions, gcc 4.4.1 for builds, and the vi editor.

Hardware:
---------
No special requirements for development purposes. For real-time use, it depends on how many stocks you are interested in and on the configuration of the various modules. We generally prefer the following configuration for any number of traded stocks:
256 GB RAM
16-core processor
1 TB of HDD/SSD

Programming Languages and Other Technologies:
---------------------------------------------
C, C++98/C++11, Lua, ZeroMQ, nanodbc, lock-free data structures, Intel TBB, Boost, Google Protobuf, MySQL, Python.

Fourth Step:
------------
Decompose each module until each entity provides a useful piece of functionality. This is explained in each module's detailed section.

Fifth Step:
-----------
We design, develop, benchmark, unit test, and integration test the above modules.

Sixth Step:
-----------
We deploy the delivered software on the various hardware nodes as per the deployment architecture and integrate them.

Seventh Step:
-------------
Observe the behaviour of the deployed software on live traffic and cut two branches at this point: the first continues incremental development, and the second fixes reported issues; the latter can later be merged back into the first for the next release.

Any suggestions for improvement are most welcome.
PyVRP / Instances: Vehicle routing problem instances and best-known solutions for benchmarking.
FeatEng / FeatEng: A benchmark for LLMs designed to tackle one of the most knowledge-intensive tasks in data science: writing feature engineering code, which requires domain knowledge in addition to a deep understanding of the underlying problem and data structure.
jamestrimble / Max Weight Clique Instances: Benchmark instances for the maximum weight clique problem.