16 repositories found
Derek-Jones / ESEUR Code Data: Code and data used to create the examples in "Evidence-based Software Engineering based on the publicly available data"
neverworkintheory / Neverworkintheory.github.io: Reviews of empirical software engineering research papers
emsejournal / Openscience: Empirical Software Engineering journal (EMSE) open science and reproducible research initiative
codegrits / CodeGRITS: A Research Toolkit for Developer Behavior and Eye Tracking in IDE
S2-group / Robot Runner: Tool for automatically executing experiments on robotics software
seart-group / DL4SE: Building training datasets for deep learning models in software engineering and empirical software engineering research
OussamaSghaier / CuREV: Harnessing Large Language Models for Curated Code Reviews
S2-group / Experiment Runner: Tool for the automatic orchestration of experiments targeting software systems
garghub / TROVON: "Learning from what we know: How to perform vulnerability prediction using noisy historical data", Empirical Software Engineering (EMSE)
mendezfe / Temse: Teaching Empirical Research Methods in Software Engineering
h1alexbel / Sr Detection: Identifying GitHub "sample repositories" (SRs), which mostly contain educational or demonstration materials intended to be copied rather than reused as a dependency
Copilot-Eval-Replication-Package / CopilotEvaluation: Replication package for the paper "GitHub Copilot AI pair programmer: Asset or Liability?", submitted to the Journal of Systems and Software in June 2022
staslev / CodeDistillery: A highly parallel software repository mining framework
lhmtriet / LLM4Vul: Reproduction package for the paper "Software Vulnerability Prediction in Low Resource Languages: An Empirical Study of CodeBERT and ChatGPT", International Conference on Evaluation and Assessment in Software Engineering (EASE) 2024
aaghamohammadi / PUMT: Source code for the paper "An ensemble-based predictive mutation testing approach that considers the impact of unreached mutants"
M3SOulu / Measuring LDA Topic Stability: Mika V. Mäntylä, Maëlick Claes, and Umar Farooq. 2018. Measuring LDA Topic Stability from Clusters of Replicated Runs. In ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM '18), October 11–12, 2018, Oulu, Finland. ACM, New York, NY, USA, Article 4, 4 pages. https://doi.org/10.1145/3239235.3267435