417 skills found · Page 9 of 14
rishab-partha / Quantum Optical ConvNet: A repository for the 10th-grade research project "Constructing a Quantum Optical Convolutional Neural Network (QOCNN)", with scripts that evaluate future feasibility.
WebNLG / Challenge 2020: Submissions, baselines, and evaluation scripts for the second edition of the WebNLG+ Challenge 2020.
hlt-mt / Mcif: Code used for the MCIF dataset and the IWSLT 2025 Instruction Following shared task, including the scripts used to create test sets and their references, as well as the scripts used in the evaluation.
VLR-CVC / DocVQA2026: Official evaluation scripts and baseline prompts for the DocVQA 2026 (ICDAR 2026) Competition on Multimodal Reasoning over Documents.
altasoft / Simpra: AltaSoft.Simpra is a lightweight expression language for .NET, enabling dynamic rule evaluation with a safe, extensible, and script-like syntax. 🚀
sergioramos / Dangerously Set Inner Html: A `dangerouslySetInnerHTML` that evaluates script tags.
deep-spin / Qe Evaluation: Evaluation scripts for the 2019 machine translation quality estimation shared task.
oswaldoludwig / Sensitivity To Occlusion Keras: A script that evaluates the sensitivity of VGG-16 to occlusion using Keras.
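The core idea behind an occlusion-sensitivity analysis like the one this repository describes can be sketched in a few lines: slide a gray patch across the image and record how much the model's confidence in the target class drops at each position. Below is a minimal, NumPy-only sketch; the `predict` callable is a hypothetical stand-in for the actual VGG-16 forward pass, not this repository's API.

```python
import numpy as np

def occlusion_map(image, predict, target_class, patch=32, stride=16, fill=0.5):
    """Slide a gray patch over the image and record, per position, how much
    the predicted probability of `target_class` drops relative to baseline."""
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    base = predict(image)[target_class]  # unoccluded baseline score
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch] = fill  # gray patch
            heat[i, j] = base - predict(occluded)[target_class]
    return heat
```

High values in the returned heatmap mark regions the model depends on most; in the repository's setting, `predict` would wrap a softmax output of VGG-16.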
kr3t3n / Smolagents Video Script Generator: A system that uses multiple AI agents to research, create, and polish video scripts for social media platforms. Specialized agents handle research, script writing, polishing, and evaluation to ensure high-quality, engaging content.
centerforaisafety / Simple Evals: Simple evaluation scripts for AI benchmarks with minimal dependencies.
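A "minimal dependencies" benchmark evaluation usually reduces to a scoring loop like the one below: normalized exact-match accuracy over model outputs versus references. This is a generic sketch of the pattern, not code from the repository; the function name and normalization choices are assumptions.

```python
def exact_match_accuracy(predictions, references, normalize=True):
    """Fraction of predictions that exactly match their reference,
    optionally after stripping whitespace and lowercasing."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align")
    def norm(s):
        return s.strip().lower() if normalize else s
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)
```

Keeping the metric dependency-free like this is what lets such a harness run with nothing beyond the standard library.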
quishqa / WRF Chem SP: Scripts to perform WRF model evaluation, tailored for Sao Paulo State.
cucapra / Gem5 Mesh: Fork of gem5 with support for manycore architectures; includes models and scripts to evaluate a software-defined-vector architecture.
pr0me / Safirefuzz Experiments: Experiment data and scripts for the artifact evaluation of "Forming Faster Firmware Fuzzers".
shihono / Evaluate Japanese W2v: A script to evaluate a pre-trained Japanese word2vec model on a Japanese similarity dataset.
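The standard recipe for this kind of evaluation is to compute cosine similarity between each word pair's vectors and correlate those scores with human similarity judgments via Spearman's rank correlation. The sketch below is a generic, NumPy-only illustration of that recipe (in practice one would load vectors with a library such as gensim); `vectors` here is just a hypothetical word-to-array mapping.

```python
import numpy as np

def spearman(xs, ys):
    """Spearman rank correlation (no tie correction, for illustration)."""
    rx = np.argsort(np.argsort(xs)).astype(float)
    ry = np.argsort(np.argsort(ys)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

def evaluate_similarity(vectors, pairs):
    """pairs: iterable of (word1, word2, human_score). Skips out-of-vocabulary
    pairs and correlates cosine similarity with the human scores."""
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        if w1 in vectors and w2 in vectors:
            v1, v2 = vectors[w1], vectors[w2]
            cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            model_scores.append(cos)
            human_scores.append(gold)
    return spearman(np.array(model_scores), np.array(human_scores))
```

A correlation near 1.0 means the model ranks word pairs the same way the human annotators did.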
idealclover / Fxxk NJU Class Evaluator: 🎄 An automation script for NJU course evaluations / A script to help auto-evaluate NJU classes.
LOBYXLYX / Javascript Interpreter: A JavaScript interpreter with a browser API included, for evaluating JavaScript scripts from Python (contains known bugs).
marcgarnica13 / Ml Interpretability European Football: Understanding gender differences in professional European football through machine learning interpretability and match-action data.

This repository contains the full data pipeline implemented for the study *Understanding gender differences in professional European football through Machine Learning interpretability and match actions data*. We evaluated the main differential features of European male and female football players in match-action data, under the assumption of finding significant differences and established patterns between genders. A methodology for unbiased feature extraction and objective analysis is presented, based on data integration and machine-learning explainability algorithms. Female (1,511) and male (2,700) data points were collected from event data, categorized by game period and player position. Each data point included the main tactical variables used by research and industry to evaluate and classify football styles and performance.

We set up a supervised classification pipeline to predict the gender of each player from their in-game actions. The comparison methodology did not include any qualitative enrichment or subjective analysis, to prevent biased data enhancement or gender-related processing. The pipeline comprised three representative binary classification models: a logic-based decision tree, a probabilistic logistic regression, and a multilayer perceptron neural network. Each model tried to draw out the differences between male and female data points, and we extracted the results using machine-learning explainability methods to understand the underlying mechanics of the models implemented. Good predictive accuracy was consistent across the different models deployed.

## Installation

Install the required Python packages:

```
pip install -r requirements.txt
```

To handle heterogeneity and performance efficiently, we use PySpark from [Apache Spark](https://spark.apache.org/). PySpark provides an end-user API for Spark jobs; see [the Spark documentation](https://spark.apache.org/docs/latest/api/python/index.html) for how to set up a local or remote Spark cluster.

## Repository structure

This repository is organized as follows:

- Preprocessed data from the two data streams is collected in [the data folder](data/). For the Opta files, it contains the event-based metrics computed from each match of the 2017 Women's Championship and a single file with the event-based metrics from the 2016 Men's Championship published [here](https://figshare.com/collections/Soccer_match_event_dataset/4415000/5). Although we cannot publish the original data source, the two Python scripts implemented to homogenize and integrate both data streams into event-based metrics are included in [the data gathering folder](data_gathering/).
- A separate folder contains the graphical images and media used for the report.
- The [data cleaning folder](data_cleaning/) contains descriptor scripts for both data streams and [the final integration](data_cleaning/merger.py).
- [Classification](classification/) contains all the Jupyter notebooks for each model in the experiment, as well as some persisted models for testing.
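The explainability step described above, extracting feature importances from an already-fitted classifier, is often done with a model-agnostic technique such as permutation importance: shuffle one feature column at a time and measure the resulting drop in a metric. The sketch below is a generic NumPy illustration of that technique, not code from this repository; the `predict` callable and `accuracy` metric are hypothetical stand-ins for a fitted model and scorer.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic feature importance: shuffle one feature column at a
    time and average the drop in `metric` relative to the baseline score."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-label association
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))
```

Features whose shuffling barely moves the metric contribute little to the model's decision, which is the kind of signal the study uses to compare male and female match-action profiles across its three classifiers.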
erickrf / Assin: Evaluation and baseline scripts for the ASSIN shared task.
funkey / Mala: Training and evaluation scripts for MALA (https://arxiv.org/abs/1709.02974).
psudowe / Parse27k Tools: Tools for the Parse-27k Dataset, including evaluation routines and some simple scripts to get started.