borglab / GTSAM: GTSAM is a library of C++ classes that implements smoothing and mapping (SAM) in robotics and vision, using factor graphs and Bayes networks as the underlying computing paradigm rather than sparse matrices.
GeorgeDu / Vision Based Robotic Grasping: Related papers and code for vision-based robotic grasping.
andrewkirillov / AForge.NET: The AForge.NET Framework is a C# framework for developers and researchers in computer vision and artificial intelligence: image processing, neural networks, genetic algorithms, machine learning, robotics, etc.
JackieTseng / Conference Call For Paper: 2021-2022 international conferences in artificial intelligence, machine learning, computer vision, data mining, natural language processing, and robotics.
zubair-irshad / Awesome Robotics 3D: A curated list of 3D vision papers related to robotics in the era of large models (LLMs/VLMs), inspired by awesome-computer-vision; includes papers, code, and related websites.
zserge / Grayskull: A tiny, dependency-free computer vision library in C for embedded systems, drones, and robotics.
mathiasmantelli / Awesome Mobile Robotics: Useful links on AI, computer vision, and robotics.
SpatialVLA / SpatialVLA: A spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025.
google-research / Ravens: Train robotic agents to learn pick-and-place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
petercorke / RVC3 Python: Code examples for Robotics, Vision & Control, 3rd edition, in Python.
Denghaoyuan123 / Awesome RL VLA: A survey on reinforcement learning for vision-language-action models in robotic manipulation.
tobybreckon / Fire Detection CNN: Real-time fire detection in video imagery using a convolutional neural network (deep learning), from our ICIP 2018 paper (Dunnings/Breckon) and ICMLA 2019 paper (Samarth/Bhowmik/Breckon).
AnjieCheng / NaVILA: [RSS'25] The implementation of "NaVILA: Legged Robot Vision-Language-Action Model for Navigation".
YanjieZe / Paper List: A list of papers from my reading history: robotics, learning, vision.
Jiaaqiliu / Awesome VLA Robotics: A comprehensive list of research papers, models, datasets, and other resources on vision-language-action (VLA) models in robotics.
mint-lab / Awesome Robotics Datasets: A collection of useful datasets for robotics and computer vision.
visionworkbench / Visionworkbench: The NASA Vision Workbench is a general-purpose image processing and computer vision library developed by the Autonomous Systems and Robotics (ASR) Area in the Intelligent Systems Division at the NASA Ames Research Center.
microsoft / CogACT: A foundational vision-language-action model for synergizing cognition and action in robotic manipulation.
PhotonVision / PhotonVision: PhotonVision is the free, fast, and easy-to-use computer vision solution for the FIRST Robotics Competition.
InternRobotics / InternVLA M1: InternVLA-M1 is a spatially guided vision-language-action framework for a generalist robot policy.