# FgSegNet: Foreground Segmentation Network
This repository contains the source code and training sets for the following paper:

"Foreground Segmentation Using Convolutional Neural Networks for Multiscale Feature Encoding" by Long Ang Lim and Hacer Yalim Keles

The preprint version of the paper is available at: https://arxiv.org/abs/1801.02225
## Citation
If you find FgSegNet useful in your research, please consider citing:
```
@article{LIM2018256,
  title    = "Foreground segmentation using convolutional neural networks for multiscale feature encoding",
  journal  = "Pattern Recognition Letters",
  volume   = "112",
  pages    = "256 - 262",
  year     = "2018",
  issn     = "0167-8655",
  doi      = "https://doi.org/10.1016/j.patrec.2018.08.002",
  url      = "http://www.sciencedirect.com/science/article/pii/S0167865518303702",
  author   = "Long Ang Lim and Hacer Yalim Keles",
  keywords = "Foreground segmentation, Background subtraction, Deep learning, Convolutional neural networks, Video surveillance, Pixel classification"
}
```
## Requirements
This work was implemented with the following frameworks:
- Spyder 3.2.x (recommended)
- Python 3.6.3
- Keras 2.0.6
- Tensorflow-gpu 1.1.0
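Assuming these packages are available from PyPI under their usual names, a matching environment can be pinned with pip (a sketch only; CUDA/driver compatibility for this older tensorflow-gpu build depends on your system):

```shell
# Pin the framework versions the code was implemented with.
# scikit-image is also needed for the pyramids.py modification in Usage step 2.
pip install keras==2.0.6 tensorflow-gpu==1.1.0 scikit-image
```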
## Usage
Easy to train! Just a single click, go!
1. Clone this repo:

   ```
   git clone https://github.com/lim-anggun/FgSegNet.git
   ```

2. Modify the following file: `<Your PYTHON 3.6>\site-packages\skimage\transform\pyramids.py`. In the `pyramid_reduce` function, replace the following two lines

   ```
   out_rows = math.ceil(rows / float(downscale))
   out_cols = math.ceil(cols / float(downscale))
   ```

   with

   ```
   out_rows = math.floor(rows / float(downscale))
   out_cols = math.floor(cols / float(downscale))
   ```

3. Download the VGG16 weights from Here and put the file in the `FgSegNet/FgSegNet/` dir, or it will be downloaded and stored in `/.keras/models/` automatically.

4. Download the CDnet2014 dataset, then arrange it in the following directory structure:

   ```
   FgSegNet/
       FgSegNet/
           FgSegNet_M_S_CDnet.py
           FgSegNet_M_S_SBI.py
           FgSegNet_M_S_UCSD.py
           FgSegNet_M_S_module.py
       SBI2015_dataset/
       SBI2015_train/
       UCSD_dataset/
       UCSD_train20/
       UCSD_train50/
       FgSegNet_dataset2014/
           baseline/
               highway50
               highway200
               pedestrians50
               pedestrians200
               ...
           badWeather/
               skating50
               skating200
               ...
           ...
       CDnet2014_dataset/
           baseline/
               highway
               pedestrians
               ...
           badWeather/
               skating
               ...
           ...
   ```

5. There are two methods, i.e. `FgSegNet_M` and `FgSegNet_S`. Choose the method you want to train by setting `method_name='FgSegNet_M'` or `method_name='FgSegNet_S'`.

6. Run the code with the Spyder IDE. Note that all trained models are automatically saved (in the current working directory) for you.
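The `pyramid_reduce` modification above swaps `math.ceil` for `math.floor` when computing the downscaled dimensions; for frame sizes that are not an exact multiple of the downscale factor the two differ by one pixel, which matters when the multiscale inputs must stay aligned. A quick stdlib-only check of the difference (the helper below merely mirrors that dimension computation; it is not part of the FgSegNet code):

```python
import math

def reduced_shape(rows, cols, downscale=2.0, use_floor=True):
    # Mirrors the output-shape computation in skimage's pyramid_reduce;
    # the FgSegNet instructions require floor instead of the default ceil.
    op = math.floor if use_floor else math.ceil
    return op(rows / float(downscale)), op(cols / float(downscale))

print(reduced_shape(241, 321, use_floor=False))  # ceil  -> (121, 161)
print(reduced_shape(241, 321, use_floor=True))   # floor -> (120, 160)
```

For odd frame sizes, ceil and floor disagree by one pixel per dimension, which is exactly the mismatch the patch avoids.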
## Evaluation
### On the CDnet2014 dataset
We perform two separate evaluations and report our results on two test splits (test dev & test challenge):
- We compute our results locally (on the test dev dataset).
- We upload our results to the Change Detection 2014 Challenge (on the test challenge dataset, whose ground-truth labels are not shared with the public dataset).

Both results are reported in our paper; please refer to it for details.

Metrics can be computed locally with the tools from changedetection.net (UTILITIES tab).

Note:
- test dev: only the range of frames that contain ground-truth labels is considered, excluding the training frames (50 or 200 frames)
- test challenge: the dataset on the server side (http://changedetection.net)
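For reference, PWC (Percentage of Wrong Classifications) and F-Measure follow the standard CDnet definitions and can be computed from per-pixel counts. A minimal sketch (not taken from the FgSegNet code):

```python
def cdnet_metrics(tp, fp, fn, tn):
    # PWC: percentage of misclassified pixels over all pixels.
    pwc = 100.0 * (fn + fp) / (tp + fp + fn + tn)
    # F-Measure: harmonic mean of precision and recall.
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return pwc, f_measure

# Toy counts for illustration only.
pwc, f = cdnet_metrics(tp=950, fp=20, fn=30, tn=9000)
print(round(pwc, 3), round(f, 4))
```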
### On the SBI2015 dataset
We use 20% of the frames for training (denoted by n frames, where n ∈ [2, 148]) and 80% for testing.
### On the UCSD Background Subtraction dataset
We perform two sets of experiments: first, we use 20% of the frames for training (denoted by n frames, where n ∈ [3, 23]) and 80% for testing; second, we use 50% for training (where n ∈ [7, 56]) and the remaining 50% for testing.
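Either split above can be sketched with a simple helper (illustrative only; the exact procedure for selecting the n training frames is not specified here and may differ in the paper):

```python
def split_frames(frames, train_fraction):
    # Take the first round(fraction * len) frames for training, the rest for testing.
    n = max(1, round(len(frames) * train_fraction))
    return frames[:n], frames[n:]

frames = list(range(100))            # e.g. a hypothetical 100-frame sequence
train, test = split_frames(frames, 0.2)
print(len(train), len(test))         # 20 80
```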
## Results
### Results on the CDnet2014 dataset
The table below shows the overall results across 11 categories, obtained from the Change Detection 2014 Challenge.
| Methods | PWC | F-Measure | Speed (320x240, batch size 1) on an NVIDIA GTX 970 GPU |
| ------------- | ------------- | ------------- | ------------- |
| FgSegNet_M | 0.0559 | 0.9770 | 18 fps |
| FgSegNet_S | 0.0461 | 0.9804 | 21 fps |
### Results on the SBI2015 dataset
The table below shows overall test results across 14 video sequences.
| Methods | PWC | F-Measure |
| ------------- | ------------- | ------------- |
| FgSegNet_M | 0.9431 | 0.9794 |
| FgSegNet_S | 0.8524 | 0.9831 |
### Results on the UCSD Background Subtraction dataset
The tables below show overall test results across 18 video sequences.
#### For the 20% split
| Methods | PWC (th=0.4) | F-Measure (th=0.4) | PWC (th=0.7) | F-Measure (th=0.7) |
| ------------- | ------------- | ------------- | ------------- | ------------- |
| FgSegNet_M | 0.6260 | 0.8948 | 0.6381 | 0.8912 |
| FgSegNet_S | 0.7052 | 0.8822 | 0.6273 | 0.8905 |
#### For the 50% split
| Methods | PWC (th=0.4) | F-Measure (th=0.4) | PWC (th=0.7) | F-Measure (th=0.7) |
| ------------- | ------------- | ------------- | ------------- | ------------- |
| FgSegNet_M | 0.4637 | 0.9203 | 0.4878 | 0.9151 |
| FgSegNet_S | 0.5024 | 0.9139 | 0.4676 | 0.9149 |
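The th values in the tables above are the decision thresholds applied to the network's per-pixel foreground probabilities before scoring. A minimal sketch of that binarization step (illustrative, pure Python):

```python
def binarize(prob_map, th=0.4):
    # Convert per-pixel foreground probabilities into a 0/1 foreground mask.
    return [[1 if p > th else 0 for p in row] for row in prob_map]

probs = [[0.1, 0.5], [0.8, 0.3]]
print(binarize(probs, th=0.4))  # [[0, 1], [1, 0]]
print(binarize(probs, th=0.7))  # [[0, 0], [1, 0]]
```

A higher threshold keeps only the most confident foreground pixels, which is why the two threshold columns trade off PWC against F-Measure differently for each model.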
## Updates
07/08/2018:
- combine `FgSegNet_S` with `FgSegNet_M` and more
- add the SBI2015 and UCSD datasets and more results
- add the published paper and a new citation
- integrate the custom loss and other code in a single file, for ease of use
- other improvements

09/06/2018:
- add quantitative results on changedetection.net (CDnet2014 dataset)
- add supporting code

29/04/2018:
- add a Jupyter notebook and a model for testing (FgSegNet_M)

27/01/2018:
- add FgSegNet_M (a triplet network) source code and training frames
## Contact
lim.longang at gmail.com

Any issues/discussions are welcome.