LineVul
A Transformer-based Line-Level Vulnerability Prediction
<a href="https://www.researchgate.net/publication/359402890_LineVul_A_Transformer-based_Line-Level_Vulnerability_Prediction">LineVul</a> Replication Package
<!-- LOGO -->
<br />
<p align="center">
  <img src="logo/linevul_logo.png" width="200" height="200">
  <h3 align="center">LineVul</h3>
  <p align="center">
    A Transformer-based Line-Level Vulnerability Prediction Approach
  </p>
</p>

Predict Real-World Software Vulnerabilities
<div align="center"> <h3> <b> LineVul Performance on <a href="https://cwe.mitre.org/top25/archive/2021/2021_cwe_top25.html">Top-25 Most Dangerous CWEs in 2021</a> </b> </h3>| Rank | CWE Type | TPR | Proportion | |:----:|:--------:|:----:|:----------:| | 1 | CWE-787 | 75% | 18/24 | | 2 | CWE-79 | - | - | | 3 | CWE-125 | - | - | | 4 | CWE-20 | 86% | 98/114 | | 5 | CWE-78 | - | - | | 6 | CWE-89 | - | - | | 7 | CWE-416 | - | - | | 8 | CWE-22 | 100% | 4/4 | | 9 | CWE-352 | - | - | | 10 | CWE-434 | - | - | | 11 | CWE-306 | - | - | | 12 | CWE-190 | 90% | 27/30 | | 13 | CWE-502 | - | - | | 14 | CWE-287 | - | - | | 15 | CWE-476 | - | - | | 16 | CWE-798 | - | - | | 17 | CWE-119 | 88% | 173/197 | | 18 | CWE-862 | - | - | | 19 | CWE-276 | - | - | | 20 | CWE-200 | 85% | 45/53 | | 21 | CWE-522 | - | - | | 22 | CWE-732 | - | - | | 23 | CWE-611 | - | - | | 24 | CWE-918 | - | - | | 25 | CWE-77 | 100% | 2/2 |
<h3>
<b>
Top-10 Most Accurately Predicted CWE Types of LineVul
</b>
</h3>

| Rank | CWE Type | TPR | Proportion |
|:----:|:--------:|:----:|:----------:|
| 1 | CWE-284 | 100% | 11/11 |
| 2 | CWE-269 | 100% | 8/8 |
| 3 | CWE-254 | 100% | 6/6 |
| 4 | CWE-415 | 100% | 6/6 |
| 5 | CWE-311 | 100% | 4/4 |
| 6 | CWE-22 | 100% | 4/4 |
| 7 | CWE-17 | 100% | 4/4 |
| 8 | CWE-617 | 100% | 4/4 |
| 9 | CWE-358 | 100% | 3/3 |
| 10 | CWE-285 | 100% | 3/3 |
</div> <div align="center"> <h3> <b> [MSR 2022 Technical track] [Paper #166] [7 mins talk] LineVul: Line-Level Vulnerability Prediction </b> </h3> <a href="https://www.youtube.com/watch?v=m9bWIiDe-fU"><img src="./logo/msr_cover.png" alt="" style="width:480px;height:270px;"></a> </div> <!-- Table of contents --> <details open="open"> <summary>Table of Contents</summary> <ol> <li> <a href="#how-to-replicate">How to replicate</a> <ul> <li><a href="#about-the-environment-setup">About the Environment Setup</a></li> <li><a href="#about-the-datasets">About the Datasets</a></li> <li><a href="#about-the-models">About the Models</a></li> <li><a href="#about-the-experiment-replication">About the Experiment Replication</a></li> </ul> </li> <li> <a href="#appendix">Appendix</a> </li> <li> <a href="#acknowledgements">Acknowledgements</a> </li> <li> <a href="#license">License</a> </li> <li> <a href="#citation">Citation</a> </li> </ol> </details>How to replicate
### About the Environment Setup
First, clone this repository to your local machine and enter the main directory with the following commands:
```
git clone https://github.com/anon-ai-research/LineVul.git
cd LineVul
```
Then, install the Python dependencies via the following command:
```
pip install -r requirements.txt
```
### About the Datasets
All of the datasets have the same 39 columns. We focus on the following three columns in our experiments:
- processed_func (str): The original function written in C/C++
- target (int): The function-level label indicating whether the function is vulnerable or not
- vul_func_with_fix (str): The fixed function, with the added and deleted lines labeled
| processed_func | target | vul_func_with_fix |
| :---: | :---: | :---: |
| ... | ... | ... |

For more information about our dataset, please refer to <a href="https://dl.acm.org/doi/10.1145/3379597.3387501">this paper</a> and <a href="https://github.com/ZeoVan/MSR_20_Code_vulnerability_CSV_Dataset">this repository</a>.
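The three columns above can be read with the standard library alone. The sketch below uses a tiny in-memory sample (the file paths and row contents are illustrative, not taken from the real Big-Vul CSVs, which have 39 columns) to show how the vulnerable functions would be selected by the target label:

```python
import csv
import io

# Synthetic two-row sample mimicking the three columns we use.
# Real usage would open e.g. data/big-vul_dataset/test.csv instead.
sample_csv = io.StringIO(
    '"processed_func","target","vul_func_with_fix"\n'
    '"int add(int a, int b) { return a + b; }","0",""\n'
    '"void copy(char *d, char *s) { strcpy(d, s); }","1",'
    '"void copy(char *d, char *s) { strncpy(d, s, LEN); }"\n'
)

rows = list(csv.DictReader(sample_csv))
# Keep only the functions labeled vulnerable (target == 1).
vulnerable = [r for r in rows if int(r["target"]) == 1]
print(len(rows), len(vulnerable))  # -> 2 1
```

In practice the project loads these CSVs with pandas inside linevul_main.py; this snippet only illustrates the column semantics.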
### About the Models
#### Model Naming Convention
All of the models in the Google Drive are named based on the convention described in the following table:
| Model Name | Model Specification |
| :---: | :---: |
| LineVul | BPE Tokenizer + Pre-training (CodeSearchNet) + BERT |
| BPEBERT | BPE Tokenizer + No Pre-training + BERT |
| WordlevelPretrainedBERT | Word-level Tokenizer + Pre-training (CodeSearchNet) + BERT |
| WordlevelBERT | Word-level Tokenizer + No Pre-training + BERT |
#### How to access the models
- All of the models included in our experiments can be downloaded from a public Google Drive.
### About the Experiment Replication
We provide a CSV file containing all of the raw function-level predictions made by LineVul. Run the following commands to download it:
```
cd linevul
cd results
gdown https://drive.google.com/uc?id=1WqvMoALIbL3V1KNQpGvvTIuc3TL5v5Q8
cd ../..
```
We recommend using a GPU with at least 8 GB of memory for training, since the BERT architecture is compute-intensive.
Note: if the specified batch size does not fit your device, modify --eval_batch_size and --train_batch_size to fit your GPU memory.
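Finding a batch size that fits is usually a matter of halving until out-of-memory errors stop. A minimal sketch of that loop, where `fits_in_memory` is a hypothetical caller-supplied probe (in a real run it would attempt one forward/backward pass and catch the CUDA out-of-memory error; this helper is not part of linevul_main.py):

```python
def find_batch_size(desired, fits_in_memory, minimum=1):
    """Halve the batch size until it fits on the device.

    `fits_in_memory` is a hypothetical probe supplied by the caller;
    this function only implements the halving strategy.
    """
    size = desired
    while size > minimum and not fits_in_memory(size):
        size //= 2
    return size

# Example: pretend only batches of at most 64 samples fit on the GPU.
print(find_batch_size(512, lambda s: s <= 64))  # -> 64
```

Whatever value you land on is then passed via --train_batch_size and --eval_batch_size.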
Before replicating the experiment results, please download the datasets as described below. If you want to retrain the model, you need to download the training, validation, and testing datasets. If you only need to reproduce the results (inference only), downloading the testing dataset alone is enough.
To download the testing dataset used for evaluation in our experiments, run the following commands:
```
cd data
cd big-vul_dataset
gdown https://drive.google.com/uc?id=1h0iFJbc5DGXCXXvvR6dru_Dms_b2zW4V
cd ../..
```
To download the training and validation datasets used in our experiments, run the following commands:
```
cd data
cd big-vul_dataset
gdown https://drive.google.com/uc?id=1ldXyFvHG41VMrm260cK_JEPYqeb6e6Yw
gdown https://drive.google.com/uc?id=1yggncqivMcP0tzbh8-8Eu02Edwcs44WZ
cd ../..
```
To download the whole (i.e., train+val+test) unsplit dataset, run the following commands:
```
cd data
cd big-vul_dataset
gdown https://drive.google.com/uc?id=10-kjbsA806Zdk54Ax8J3WvLKGTzN8CMX
cd ../..
```
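If you start from the unsplit dataset, you need to partition it yourself. A minimal sketch of a shuffled split, where the 80/10/10 ratio and the seed are illustrative assumptions (use the released train.csv/val.csv/test.csv if you need to reproduce the paper's numbers exactly):

```python
import random

def split_dataset(rows, seed=123456, train_frac=0.8, val_frac=0.1):
    """Shuffle rows and split them into train/val/test partitions.

    The 80/10/10 ratio and seed here are assumptions for illustration,
    not necessarily the split encoded in the released CSV files.
    """
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # -> 80 10 10
```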
#### How to replicate RQ1
Please first download the model "12heads_linevul_model.bin" through the following commands:
```
cd linevul
cd saved_models
cd checkpoint-best-f1
gdown https://drive.google.com/uc?id=1oodyQqRb9jEcvLMVVKILmu8qHyNwd-zH
cd ../../..
```
To reproduce the RQ1 result, run the following commands (Inference only):
```
cd linevul
python linevul_main.py \
  --model_name=12heads_linevul_model.bin \
  --output_dir=./saved_models \
  --model_type=roberta \
  --tokenizer_name=microsoft/codebert-base \
  --model_name_or_path=microsoft/codebert-base \
  --do_test \
  --train_data_file=../data/big-vul_dataset/train.csv \
  --eval_data_file=../data/big-vul_dataset/val.csv \
  --test_data_file=../data/big-vul_dataset/test.csv \
  --block_size 512 \
  --eval_batch_size 512
```
To retrain the RQ1 model, run the following commands (Training + Inference):
```
cd linevul
python linevul_main.py \
  --output_dir=./saved_models \
  --model_type=roberta \
  --tokenizer_name=microsoft/codebert-base \
  --model_name_or_path=microsoft/codebert-base \
  --do_train \
  --do_test \
  --train_data_file=../data/big-vul_dataset/train.csv \
  --eval_data_file=../data/big-vul_dataset/val.csv \
  --test_data_file=../data/big-vul_dataset/test.csv \
  --epochs 10 \
  --block_size 512 \
  --train_batch_size 16 \
  --eval_batch_size 16 \
  --learning_rate 2e-5 \
  --max_grad_norm 1.0 \
  --evaluate_during_training \
  --seed 123456 2>&1 | tee train.log
```
To reproduce the RQ1 result of BoW+RF, run the following commands:
```
cd bow_rf
mkdir saved_models
python rf_main.py
```
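The exact pipeline in rf_main.py is not shown here, but the core idea of a bag-of-words (BoW) baseline can be sketched with the standard library: tokenize each function, build a shared vocabulary, and turn each function into a fixed-length token-count vector. The tokenizer regex and helper names below are illustrative assumptions, not rf_main.py's implementation:

```python
import re
from collections import Counter

def tokenize(code):
    # Split C/C++ source into identifier and number tokens; punctuation
    # is dropped. An illustrative choice, not rf_main.py's tokenizer.
    return re.findall(r"[A-Za-z_]\w*|\d+", code)

def bow_vectors(functions):
    """Turn source functions into fixed-length token-count vectors."""
    counts = [Counter(tokenize(f)) for f in functions]
    vocab = sorted(set().union(*counts))
    return vocab, [[c[t] for t in vocab] for c in counts]

funcs = ["int f(int a) { return a; }",
         "void g(char *p) { free(p); free(p); }"]
vocab, X = bow_vectors(funcs)
# Each row of X, paired with the target labels, can then be fed to a
# random-forest classifier (e.g. scikit-learn's RandomForestClassifier).
```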
#### How to replicate RQ2
Please first download the model "12heads_linevul_model.bin" through the following commands:
```
cd linevul
cd saved_models
cd checkpoint-best-f1
gdown https://drive.google.com/uc?id=1oodyQqRb9jEcvLMVVKILmu8qHyNwd-zH
cd ../../..
```
To reproduce the RQ2 result of Top-10 Accuracy and IFA, run the following commands:
```
cd linevul
python linevul_main.py \
  --model_name=12heads_linevul_model.bin \
  --output_dir=./saved_models \
  --model_type=roberta \
  --tokenizer_name=microsoft/codebert-base \
  --model_name_or_path=microsoft/codebert-base \
  --do_test \
  --do_local_explanation \
  --top_k_constant=10 \
  --reasoning_method=all \
  --train_data_file=../data/big-vul_dataset/train.csv \
  --eval_data_file=../data/big-vul_dataset/val.csv \
  --test_data_file=../data/big-vul_dataset/test.csv \
  --block_size 512 \
  --eval_batch_size 512
```
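The line-level explanation step ranks the lines of a predicted-vulnerable function by aggregating per-token importance scores (e.g. the transformer's self-attention) into per-line scores. A heavily simplified sketch of that aggregation, where the scores are supplied directly instead of being extracted from the model (the real logic lives in linevul_main.py; names here are illustrative):

```python
def rank_lines(function_source, token_scores):
    """Aggregate per-token scores into per-line scores and rank lines.

    `token_scores` maps whitespace-separated tokens to importance scores;
    supplying them directly is a simplification of the real pipeline,
    which derives them from the model's attention.
    """
    line_scores = []
    for idx, line in enumerate(function_source.splitlines()):
        score = sum(token_scores.get(tok, 0.0) for tok in line.split())
        line_scores.append((idx, score))
    # Highest-scoring lines first: the predicted vulnerable lines.
    return sorted(line_scores, key=lambda p: p[1], reverse=True)

src = "char buf[8];\nstrcpy(buf, input);\nreturn 0;"
ranking = rank_lines(src, {"strcpy(buf,": 0.9, "input);": 0.4, "char": 0.1})
print(ranking[0][0])  # -> 1 (the strcpy line is ranked most suspicious)
```

Top-10 Accuracy then asks whether a truly vulnerable line appears in the first ten entries of this ranking, and IFA counts the clean lines inspected before the first real vulnerable line is reached.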
To reproduce the RQ2 result of Top-10 Accuracy and IFA of CppCheck, run the following commands:
```
cd cppcheck
python run.py
```
Note: to install CppCheck, run the following command:
```
sudo apt-get install cppcheck
```
For more information about CppCheck, click <a href="https://cppcheck.sourceforge.io/">here</a>.
#### How to replicate RQ3
Please first download the model "12heads_linevul_model.bin" through the following commands:
