# VHRV: Very High-Resolution Benchmark Dataset for Vessel Detection
Welcome to the official repository for the VHRV dataset, associated with our research article "VHRV: Very High-Resolution Benchmark Dataset for Vessel Detection", published in the Remote Sensing Applications: Society and Environment (RSASE) journal. The paper is available at https://doi.org/10.1016/j.rsase.2025.101641.
## Dataset Details

The VHRV (Very High-Resolution Vessels) dataset is a contribution to the field of computer vision, specifically addressing vessel detection in remote sensing imagery. It was built to support research and development in object detection algorithms, particularly in the maritime context. Its purpose is to provide a versatile alternative with consistent and rich content, covering diverse vessel types at different scales, so that deep learning models can detect a wide variety of ships under a single vessel class in high-resolution remote sensing images.
- Number of Total Images: 1,502
- Number of Total Vessel Instances: 10,158
- Spatial Resolution: Ranges from 0.1 m to 0.25 m
- Image Resolution: 4800x2886 pixels
- Annotation Format: YOLO
- Annotation Style: HBB (Horizontal Bounding Box)
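Each YOLO-format label line stores a class index followed by a normalized horizontal box (`class cx cy w h`). As a minimal sketch (the function name is ours, and the default image size assumes the full 4800x2886 VHRV frame), the normalized values can be converted back to pixel corner coordinates like this:

```python
# Sketch: convert one YOLO-format HBB label line to pixel coordinates.
# Default image size assumes the full 4800x2886 VHRV frame.

def yolo_to_pixel_box(line, img_w=4800, img_h=2886):
    """Convert 'class cx cy w h' (normalized) to (x_min, y_min, x_max, y_max)."""
    cls, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    return int(cls), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# Example: a vessel centred in the image, spanning 10% of width and height.
cls_id, box = yolo_to_pixel_box("0 0.5 0.5 0.1 0.1")
```

Since VHRV uses a single vessel class, the class index is always 0.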
Use of the Google Earth images must comply with the Google Earth terms of use. All images and their associated annotations in the VHRV dataset may be used for academic purposes only; commercial use is prohibited.
## Download

The VHRV dataset can be downloaded here: Download VHRV.
## Deep Learning
To evaluate the effectiveness of the VHRV dataset, we conducted comprehensive experiments using both two-stage (R-CNN-based) and one-stage (YOLO-based) deep learning models. The results, as presented in our RSASE journal article, demonstrate the robustness of the dataset across various architectures. For reproducibility and further exploration, we provide the trained model weights used in these experiments.
### Two-stage experiments (R-CNN models)
| Model | Backbone<br><sup>Type/Depth</sup> | Size<br><sup>(pixels)</sup> | mAP<sup>test</sup><br>0.50 | mAP<sup>test</sup><br>0.50:0.95 | mAP<sup>val</sup><br>0.50 | mAP<sup>val</sup><br>0.50:0.95 |
| ------------- | --------------------------------- | --------------------------- | -------------------------- | ------------------------------- | ------------------------- | ------------------------------ |
| Faster R-CNN  | ResNet-50                         | 1333x800                    | 0.921                      | 0.631                           | 0.924                     | 0.653                          |
| Faster R-CNN  | ResNet-101                        | 1333x800                    | 0.933                      | 0.631                           | 0.925                     | 0.648                          |
| Libra R-CNN   | ResNet-50                         | 1333x800                    | 0.928                      | 0.643                           | 0.919                     | 0.659                          |
| Libra R-CNN   | ResNet-101                        | 1333x800                    | 0.929                      | 0.634                           | 0.930                     | 0.661                          |
| Cascade R-CNN | ResNet-50                         | 1333x800                    | 0.931                      | 0.668                           | 0.926                     | 0.683                          |
| Cascade R-CNN | ResNet-101                        | 1333x800                    | 0.925                      | 0.657                           | 0.925                     | 0.677                          |
The R-CNN-based algorithms were implemented and evaluated in the unified MMDetection code library.
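The mAP columns above are computed at an IoU threshold of 0.50, or averaged over thresholds from 0.50 to 0.95. As a minimal sketch of the overlap measure behind these thresholds (the function name is ours):

```python
# Minimal sketch of the IoU (intersection over union) computation that
# underlies the mAP@0.50 and mAP@0.50:0.95 columns: a detection counts as
# a true positive at mAP@0.50 when its IoU with a ground-truth box >= 0.50.

def iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted by a quarter of the box width:
print(iou((0, 0, 100, 100), (25, 0, 125, 100)))  # 0.6
```

At mAP@0.50 this shifted prediction would still count as a hit; at the stricter thresholds within 0.50:0.95 it would not.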
### One-stage experiments (YOLO models)
| Model    | Size<br><sup>(pixels)</sup> | mAP<sup>test</sup><br>0.50 | mAP<sup>test</sup><br>0.50:0.95 | mAP<sup>val</sup><br>0.50 | mAP<sup>val</sup><br>0.50:0.95 | Params<br><sup>(M)</sup> |
| -------- | --------------------------- | -------------------------- | ------------------------------- | ------------------------- | ------------------------------ | ------------------------ |
| YOLOv5x  | 1024                        | 0.985                      | 0.835                           | 0.971                     | 0.848                          | 56.9                     |
| YOLOv6l  | 1024                        | 0.982                      | 0.812                           | 0.975                     | 0.823                          | 56.9                     |
| YOLOv7x  | 1024                        | 0.988                      | 0.832                           | 0.979                     | 0.846                          | 56.9                     |
| YOLOv8x  | 1024                        | 0.978                      | 0.828                           | 0.975                     | 0.844                          | 56.9                     |
| YOLOv9c  | 1024                        | 0.981                      | 0.845                           | 0.973                     | 0.856                          | 56.9                     |
| YOLOv10x | 1024                        | 0.978                      | 0.817                           | 0.967                     | 0.824                          | 56.9                     |
| YOLO11x  | 1024                        | 0.981                      | 0.835                           | 0.972                     | 0.852                          | 56.9                     |
| YOLOv12x | 1024                        | 0.984                      | 0.844                           | 0.974                     | 0.854                          | 56.9                     |
The YOLO models were run from their original source code libraries, with the exception of YOLOv12, which was implemented with the Ultralytics adaptation.
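For readers who want to train one of these YOLO variants on VHRV, the models listed above expect a small dataset description file. The sketch below shows a hypothetical Ultralytics-style config; the directory layout and file name are illustrative, not part of the official release, and should be adjusted to your local split:

```yaml
# vhrv.yaml -- hypothetical dataset config (paths are illustrative).
# VHRV annotates all ships under a single 'vessel' class.
path: datasets/vhrv     # dataset root
train: images/train     # training images, with labels/ alongside
val: images/val         # validation images
test: images/test       # test images
names:
  0: vessel
```

With such a file in place, the standard training entry point of the chosen YOLO repository can be pointed at it via its data argument.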
## Citation

If you make use of the VHRV dataset, please cite our paper: https://doi.org/10.1016/j.rsase.2025.101641

We make this dataset available for academic purposes only. You may not use or distribute this dataset for commercial purposes.
```bibtex
@article{BUYUKKANBER2025101641,
  title    = {VHRV: Very High-Resolution Benchmark Dataset for Vessel Detection},
  journal  = {Remote Sensing Applications: Society and Environment},
  pages    = {101641},
  year     = {2025},
  issn     = {2352-9385},
  doi      = {10.1016/j.rsase.2025.101641},
  url      = {https://www.sciencedirect.com/science/article/pii/S2352938525001946},
  author   = {Furkan Büyükkanber and Mustafa Yanalak and Nebiye Musaoğlu},
  keywords = {Vessel detection, Ship dataset, Remote sensing images, Deep learning, Convolutional neural networks}
}
```
## Contact

For further information or any questions, please use the issues tab: https://github.com/buyukkanber/vhrv/issues
