TROVON
Learning from what we know: How to perform vulnerability prediction using noisy historical data, Empirical Software Engineering (EMSE)
This repository contains the source code and dataset for the paper Learning from what we know: How to perform vulnerability prediction using noisy historical data, published in Empirical Software Engineering (EMSE).
The BibTeX entry for citing the paper is available here:
In addition to the source code of our proposed approach, TROVON, this repository contains our implementations of the existing approaches we compare TROVON with, which we re-implemented because the authors' original implementations are unavailable. Please refer to the details below.
Dataset
The dataset is composed of the following:
- We gathered vulnerabilities (i.e., the vulnerable and the corresponding fixed components) of the 36 releases of Linux Kernel, 10 releases of OpenSSL, and 10 releases of Wireshark. For this task, we use VulData7, a vulnerability-patch gathering tool that uses the commit IDs provided by the National Vulnerability Database (NVD) to collect them. These are available in the vulnerabilities directory.
- We also gathered the codebases of the aforementioned releases. For this task, we use FrameVPM, a framework built to evaluate and compare vulnerability prediction models. The framework is available here.
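As a rough illustration of the commit-ID-based gathering step above, the sketch below extracts GitHub fixing-commit URLs from a mocked NVD-style JSON entry. The file name, JSON shape, and commit hash are assumptions for illustration only, not VulData7's actual input format.

```shell
# Sketch only: a mocked, minimal NVD-style JSON entry (real NVD feeds differ).
cat > nvd_sample.json <<'EOF'
{"references": [
  {"url": "https://github.com/torvalds/linux/commit/abc1230"},
  {"url": "https://example.com/advisory/CVE-XXXX"}
]}
EOF

# Keep only reference URLs that point at a GitHub commit (a candidate fixing commit).
grep -oE 'https://github\.com/[^"]+/commit/[0-9a-f]+' nvd_sample.json
```

Filtering on `/commit/` URLs is how one can separate fixing-commit references from ordinary advisory links in an NVD entry.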
Source code
The source code of the vulnerability prediction approaches, TROVON and the existing approaches we compare it with, is organized as follows:
- Source code of our proposed approach TROVON is available in the code directory.
- Source code to replicate the following approaches: Software Metrics, Text Mining, Imports, and Function Calls, is available in the FrameVPM repository.
- Source code of our implementation of the approach Devign is available in the devign directory.
- Source code of our implementation of the approaches LSTM and LSTM-RF is available in the lstm-rf directory.
Required tools and dependencies
Model training
Please refer to the script train.sh
./train.sh [dirpath] [training-samples-num * epoch-num] [dirpath]/model [config] 1 [training-samples-num] [training-samples-num] 0
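The placeholders above can be filled in as in the dry-run sketch below. The directory path and the sample and epoch counts are illustrative assumptions; the second positional argument is the product training-samples-num * epoch-num.

```shell
# Illustrative values only; substitute your own paths and counts.
DIRPATH=data/linux-kernel
SAMPLES=10000   # training-samples-num (assumed)
EPOCHS=10       # epoch-num (assumed)
STEPS=$((SAMPLES * EPOCHS))   # second positional argument: samples * epochs

# Dry run: print the command instead of executing it.
echo "./train.sh $DIRPATH $STEPS $DIRPATH/model length_50-l-1-2.yml 1 $SAMPLES $SAMPLES 0"
```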
For the model configuration, please refer to length_50-l-1-2.yml. It is configured to train on sequences of length 50, which can be changed to suit your requirements.
Model testing
Please refer to the script test.sh
./test.sh [dirpath]/test [dirpath]/model [desired-generated-sequences-file-name]
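The testing placeholders can be filled in analogously; the directory path and output file name below are assumptions, shown as a dry run.

```shell
# Illustrative values only (dry run; prints the command rather than running it).
DIRPATH=data/linux-kernel
SEQ_FILE=generated_sequences.txt   # desired-generated-sequences-file-name (assumed)
CMD="./test.sh $DIRPATH/test $DIRPATH/model $SEQ_FILE"
echo "$CMD"
```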
