# SsdNet

Intersecting machining feature localisation and recognition via single shot multibox detector.
Created by Peizhi Shi at the University of Huddersfield.
Please note that the code is NOT intended for use in military, nuclear, missile, weaponry applications, or in activities involving animal slaughter, meat production, or any other scenarios where human or animal life, or property, could be at risk. We kindly ask you to refrain from applying the code in such contexts.
## Introduction

SsdNet is a novel learning-based method for recognising and localising intersecting machining features. At the time of its release, SsdNet achieved state-of-the-art performance on intersecting feature recognition and localisation. This repository provides the source code of SsdNet.

If this project is useful to you, please consider citing our paper:
```bibtex
@ARTICLE{shi2020intersecting,
  author={Shi, Peizhi and Qi, Qunfen and Qin, Yuchu and Scott, Paul and Jiang, Xiangqian},
  journal={IEEE Transactions on Industrial Informatics},
  title={Intersecting machining feature localization and recognition via single shot multibox detector},
  year={2021},
  volume={17},
  number={5},
  pages={3292--3302}
}
```
The peer-reviewed paper is available online.
## Experimental configuration
- CUDA (10.0.130)
- cupy-cuda100 (6.2.0)
- numpy (1.17.4)
- python (3.6.8)
- scikit-image (0.16.2)
- scipy (1.3.3)
- torch (1.1.0)
- torchvision (0.3.0)
- matplotlib (3.1.2)
All the experiments mentioned in our paper were conducted on Ubuntu 18.04 under the above configuration, on an Intel i9-9900X PC with 128 GB of memory and an NVIDIA RTX 2080 Ti GPU. If you run the code on Windows or under a different configuration, slightly different results might be obtained.
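One way to reproduce the Python side of this environment is with pip (a sketch, not part of the repository; it assumes Python 3.6 and CUDA 10.0 are already installed, and note that wheels this old may need to be fetched from the PyTorch download archive rather than PyPI):

```shell
# Pinned versions taken from the configuration list above.
pip install numpy==1.17.4 scipy==1.3.3 scikit-image==0.16.2 \
    matplotlib==3.1.2 cupy-cuda100==6.2.0 \
    torch==1.1.0 torchvision==0.3.0
```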
## Training (optional)
- Get the SsdNet source code by cloning the repository: `git clone https://github.com/PeizhiShi/SsdNet.git`.
- Create the following folders: `data/TrSet`, `data/ValSet`, `data/FNSet`, `weights` and `weights/base`.
- Download the single feature dataset (originally from FeatureNet), and convert the unzipped STL models into voxel models via binvox. The filename format is `label_index.binvox`. Then put all the `*.binvox` files in the same folder `data/FNSet`. This folder should contain 24,000 `*.binvox` files. Please note that there are some unlabelled/mislabelled files in categories 8 (rectangular_blind_slot) and 12 (triangular_blind_step); correct these filenames before moving the files into the folder.
- Run `python create_tr_set.py` and `python create_val_set.py` to create the training and validation sets respectively. Please note that the training set creation process is time-consuming.
- Download the pretrained SSD300 basenet, and put it in the folder `weights/base`. This pretrained model is used for transfer learning.
- Run `python train.py` to train the neural network.
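Because mislabelled files in categories 8 and 12 must be fixed by hand, it may help to sanity-check `data/FNSet` before training. The following sketch is not part of SsdNet; it only assumes the `label_index.binvox` naming convention described above, with numeric labels (FeatureNet has 24 single-feature categories):

```python
import os
import re

def check_fnset(folder="data/FNSet"):
    """Count .binvox files per category, flagging any filename that
    does not match the expected label_index.binvox pattern."""
    pattern = re.compile(r"^(\d+)_(\d+)\.binvox$")
    counts = {}
    for name in sorted(os.listdir(folder)):
        m = pattern.match(name)
        if m is None:
            print("unexpected filename:", name)
            continue
        label = int(m.group(1))
        counts[label] = counts.get(label, 0) + 1
    total = sum(counts.values())
    print(f"{total} .binvox files across {len(counts)} categories")
    return counts

if __name__ == "__main__":
    check_fnset()
```

If the dataset is complete, the script should report 24,000 files spread across 24 categories.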
## Intersecting feature recognition and localisation
- Get the SsdNet source code by cloning the repository: `git clone https://github.com/PeizhiShi/SsdNet.git`.
- Create a folder named `data/MulSet`.
- Download the benchmark multi-feature dataset, and put the files in the folder `data/MulSet`.
- Download our pretrained SsdNet model, and put the unzipped file into the folder `weights`. This model reproduces the experimental results reported in our IEEE TII paper. You can skip this step if you have trained the neural network yourself.
- Run `python test.py` to test the performance of SsdNet on intersecting feature recognition and localisation.
- Run `python visualize.py` to visualize the predicted feature boxes.
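The steps above can be sketched as a single shell session (a non-authoritative outline; the dataset and pretrained-model download links are described in the list above and are not filled in here):

```shell
# Clone the repository and prepare the expected folder layout.
git clone https://github.com/PeizhiShi/SsdNet.git
cd SsdNet
mkdir -p data/MulSet weights

# Manually: place the benchmark multi-feature dataset in data/MulSet
# and the unzipped pretrained SsdNet model in weights/ (or train your
# own model first via train.py).

python test.py        # recognition and localisation performance
python visualize.py   # display the predicted feature boxes
```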
If you have any questions about the code, please feel free to contact me (p.shi@leeds.ac.uk).
