
CholecT50

A repository for surgical action triplet dataset. Data are videos of laparoscopic cholecystectomy that have been annotated with <instrument, verb, target> labels for every surgical fine-grained activity.


<div align="center"> <a href="http://camma.u-strasbg.fr/"> <img src="files/logo_cholect50.gif" width="100%"> </a> </div>

News

  • [ 17/09/2025 ]: Check out our CAMMA Dataset Overlaps repository for an analysis of video overlaps across Cholec80, CholecT50, and Endoscapes to ensure fair dataset splits.

<div align="right"> <a href="docs/README-Format.md" id="links">Data format</a> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <a href="docs/README-Splits.md" id="links">Data splits</a> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <a href="docs/README-Downloads.md" id="links">Downloads</a> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <a href="docs/README-Loader.md" id="links">Data loader</a> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <a href="docs/README-Challenges.md" id="links">Challenges</a> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <a href="docs/README-Leaderboards.md" id="links">Leaderboards</a> </div>
<br>

Highlights

The CholecT50 dataset can support the following research:

  1. Surgical action triplet recognition
  2. Surgical action triplet detection/localization
  3. Surgical tool presence detection
  4. Surgical tool detection/localization
  5. Surgical action/verb recognition
  6. Surgical target recognition
  7. Surgical phase recognition
  • Any combination of the above
<br>
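Because every triplet class is a fixed ⟨instrument, verb, target⟩ combination, the component tasks above (tool presence, verb, and target recognition) can be derived from the triplet labels. The sketch below illustrates the idea; the component map is a made-up toy excerpt, not the official map shipped with the dataset:

```python
# Hypothetical sketch: deriving component labels (tool presence, verb,
# target) from binary triplet labels. Class counts (100 triplets,
# 6 instruments, 10 verbs, 15 targets) follow the CholecT50 paper;
# the mapping below is an illustrative toy excerpt, NOT the official
# component map shipped with the dataset.

# toy triplet-id -> (instrument, verb, target) indices for 4 of the 100 classes
TRIPLET_TO_COMPONENTS = {
    0: (0, 2, 1),   # e.g. <grasper, retract, gallbladder> (illustrative only)
    1: (0, 2, 5),
    2: (3, 1, 1),
    3: (5, 9, 14),
}

def component_presence(triplet_vec, n_inst=6, n_verb=10, n_targ=15):
    """OR-reduce a binary triplet vector into per-component presence vectors."""
    inst, verb, targ = [0] * n_inst, [0] * n_verb, [0] * n_targ
    for t_id, (i, v, g) in TRIPLET_TO_COMPONENTS.items():
        if t_id < len(triplet_vec) and triplet_vec[t_id]:
            inst[i] = verb[v] = targ[g] = 1
    return inst, verb, targ

frame = [0] * 100
frame[0] = frame[2] = 1            # two triplets active in this frame
inst, verb, targ = component_presence(frame)
print(inst)   # [1, 0, 0, 1, 0, 0]
```

This is why a single triplet-recognition model can be evaluated on several of the tasks above at once.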

News

  • ☒ [ 20/02/2023 ]: CholecT50 dataset is released for public use under CC BY-NC-SA 4.0 Licence.
  • ☒ [ 30/11/2022 ]: CholecTriplet2021 challenge joint paper accepted at Medical Image Analysis journal.
  • ☒ [ 29/04/2022 ]: Added PyTorch dataloader for the dataset.
  • ☒ [ 02/05/2022 ]: Added TensorFlow v1 & v2 dataloaders for the dataset.
  • ☒ [ 12/02/2022 ]: Official splits of the dataset for developing deep learning models are published on arXiv.
  • ☒ [ 12/02/2022 ]: CholecT45 dataset is released for public use under CC BY-NC-SA 4.0 Licence.
<!-- - &#x2612; [ **18/09/2022** ]: CholecTriplet2022 challenge results announced. Check out the [results and winners](https://cholectriplet2022.grand-challenge.org/results). --> <br>

Cholecystectomy Action Triplet Dataset

<b>CholecT50</b> is a dataset of endoscopic videos of laparoscopic cholecystectomy surgery introduced to enable research on fine-grained action recognition in laparoscopic surgery. The videos are collected in Strasbourg, France. The images are extracted at 1 fps from the videos and annotated with triplet information about surgical actions in the format of <instrument, verb, target>. The phase labels are also provided. Spatial annotations in the form of bounding boxes over the instrument tips are provided for 5 videos. The box-triplet matching labels are also provided for all bounding box annotations. The dataset is a collection of 50 videos consisting of 45 videos from the Cholec80 [1] dataset and 5 videos from the superset in-house Cholec120 [6] dataset of the same surgical procedure.
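Since the frames are extracted at 1 fps and each frame carries a binary vector over the triplet classes, per-frame labels can be read with a few lines of parsing. The row layout below (frame id followed by a binary class vector) is an assumption for illustration; the authoritative format is documented in docs/README-Format.md:

```python
# Sketch of parsing per-frame triplet annotations. The row layout here
# (frame id, then a binary vector over the triplet classes) is an
# ASSUMPTION for illustration; see docs/README-Format.md in the repo
# for the authoritative annotation format.
import io
import csv

sample = io.StringIO(
    "0,1,0,0,0\n"    # frame 0: triplet class 0 active (toy 4-class vector)
    "1,0,0,1,1\n"    # frame 1: triplet classes 2 and 3 active
)

annotations = {}
for row in csv.reader(sample):
    frame_id = int(row[0])
    annotations[frame_id] = [int(x) for x in row[1:]]

print(annotations[1])   # [0, 0, 1, 1]
```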

<b>CholecT40</b> [2] is the first effort at creating a surgical action triplet dataset, consisting of 40 videos. CholecT50 [3] extends CholecT40 with 10 additional videos and standardized classes.

<b>CholecT45</b> [3] is a subset of CholecT50 consisting of the 45 videos drawn from the Cholec80 dataset, and is the first public release of CholecT50. CholecT50 is the superset of the CholecT45 and CholecT40 datasets.


<u>Dataset Examples</u>

Some example images with overlay of their labels.


<br>

<u>Dataset Variants</u>

The following are the official variants of the dataset:

  1. CholecT50 (cross-val): the official cross-validation split of CholecT50 [3] (recommended).
  2. CholecT50 (challenge): the variant used in the CholecTriplet challenges [4, 5] (recommended).
  3. CholecT50: the original version as used in the Rendezvous publication [3].
  4. CholecT45 (cross-val): the official cross-validation split of CholecT45 [3].
  5. CholecT40: the original version of CholecT40 as used in the Tripnet publication [2].

For research purposes, we recommend the CholecT50 (cross-val) variant because it is complete and supports the evaluation of all 100 triplet classes via k-fold cross-validation. Researchers can additionally use the CholecT50 (challenge) variant to compare with the results presented at the CholecTriplet challenges.
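The cross-validation protocol can be sketched as follows. The fold contents below are made-up placeholders, not the official CholecT50 splits (those are listed in docs/README-Splits.md); the pattern is what matters: train on k−1 folds, evaluate on the held-out fold, then average across folds:

```python
# Minimal k-fold cross-validation sketch over video ids. The folds are
# MADE-UP placeholders, not the official CholecT50 splits (see
# docs/README-Splits.md); evaluate() is a stand-in for real training
# and evaluation.

folds = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]   # hypothetical 5-fold split

def evaluate(train_videos, test_videos):
    """Stand-in for training on train_videos and scoring on test_videos."""
    return 0.5  # dummy mAP-style score

scores = []
for k, test_videos in enumerate(folds):
    # train on every fold except the held-out one
    train_videos = [v for i, fold in enumerate(folds) if i != k for v in fold]
    scores.append(evaluate(train_videos, test_videos))

mean_score = sum(scores) / len(scores)   # report the average over folds
print(round(mean_score, 3))
```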

We have provided benchmark results for baseline models, showing how they compare across the dataset variants listed above, in [6].

<br>

<u>Research Papers</u>

This dataset could only be generated thanks to the continuous support from our surgical partners. In order to properly credit the authors and clinicians for their efforts, you are kindly requested to cite the work that led to the generation of this dataset:

For CholecT45 and CholecT50:

  • [3] C.I. Nwoye, T. Yu, C. Gonzalez, B. Seeliger, P. Mascagni, D. Mutter, J. Marescaux, N. Padoy. Rendezvous: Attention Mechanisms for the Recognition of Surgical Action Triplets in Endoscopic Videos. Medical Image Analysis, 78 (2022) 102433.
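A BibTeX entry assembled from the reference above may be convenient (author names are kept as initialled in the citation; field values follow the journal reference given):

```bibtex
@article{nwoye2022rendezvous,
  title   = {Rendezvous: Attention Mechanisms for the Recognition of Surgical Action Triplets in Endoscopic Videos},
  author  = {Nwoye, C.I. and Yu, T. and Gonzalez, C. and Seeliger, B. and Mascagni, P. and Mutter, D. and Marescaux, J. and Padoy, N.},
  journal = {Medical Image Analysis},
  volume  = {78},
  pages   = {102433},
  year    = {2022}
}
```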
<br>

For CholecT40:

  • [2] C.I. Nwoye, C. Gonzalez, T. Yu, P. Mascagni, D. Mutter, J. Marescaux, N. Padoy. Recognition of instrument-tissue interactions in endoscopic videos via action triplets. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS 12263(2020) 364-374.
<br> <!--- ## Contributors - Chinedu Nwoye - Tong Yu - Cristians Gonzalez - Barbara Seeliger - Pietro Mascagni - Nicolas Padoy --> <br>

License

The CholecT50 dataset is publicly released under the Creative Commons CC BY-NC-SA 4.0 license. This implies that:

  • the dataset cannot be used for commercial purposes,
  • the dataset can be transformed (additional annotations, etc.),
  • the dataset can be redistributed as long as it is redistributed under the same license, with the obligation to cite the contributing work that led to the generation of the CholecT50 dataset (mentioned above).

By downloading and using this dataset, you agree to these terms and conditions.


Acknowledgement

This work was supported by French state funds managed by BPI France (project CONDOR, Project 5G-OR) and by the ANR (Labex CAMI, IHU Strasbourg, project DeepSurg, National AI Chair AI4ORSafety). We also thank the research teams of IHU and IRCAD for their help with the initial annotation of the dataset during the CONDOR project.

<br><br> <img src="https://github.com/CAMMA-public/rendezvous/blob/main/files/ihu.png" width="6%" align="left" > <img src="https://github.com/CAMMA-public/rendezvous/blob/main/files/ANR-logo-2021-sigle.jpg" width="14%" align="left"> <img src="https://github.com/CAMMA-public/rendezvous/blob/main/files/condor.png" width="14%" align="left"> <img src="files/unistra.png" width="14%" align="left"> <br>

<br><br>


Contact

This dataset is maintained by the research group CAMMA: http://camma.u-strasbg.fr

Any updates regarding this dataset can be found here: http://camma.u-strasbg.fr/datasets

Any questions regarding the dataset can be sent to: camma.dataset@gmail.com

<br>

References

<div id="cite-cholec80">
  • [1] A.P. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. de Mathelin, N. Padoy. EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos. IEEE Trans. on Medical Imaging 2016.

    @article{twinanda2016endonet,
      title={Endonet: a deep architecture for recognition tasks on laparoscopic videos},
      author={Twinanda, Andru P and Shehata, Sherif and Mutter, Didier and Marescaux, Jacques and De Mathelin, Michel and Padoy, Nicolas},
      journal={IEEE transactions on medical imaging},
      volume={36},
      number={1},
      pages={86--97},
      year={2016}
    }
    

No findings