AmeyaWagh / 3D Object Recognition: Recognize and localize an object in a 3D point cloud scene using a VFH + SVM-based method and a 3D-CNN method.
CatOneTwo / WSOD Paper List: A paper list of state-of-the-art weakly supervised object detection and localization.
shvdiwnkozbw / Multi Source Sound Localization: This repo performs sound localization in complex audiovisual scenes where there are multiple objects making sounds.
CaptainEven / DenseBox: A PyTorch implementation of Baidu's DenseBox for multi-task learning of object detection and landmark (keypoint) localization, optimized for targets with arbitrary aspect ratios (not just squares); it outputs both keypoint heatmaps and the keypoint coordinates for each bbox.
ZhouYanzhao / SPN: Soft Proposal Networks for Weakly Supervised Object Localization (ICCV 2017).
HaipengXiong / Weighted Hausdorff Loss: A loss function (Weighted Hausdorff Distance) for object localization in PyTorch.
microsoft / XLIFF2 Object Model: If you're looking to store localization data and propagate it through your localization pipeline while allowing tools to interoperate, you may want to use the XLIFF 2.0 object model. It implements the OASIS Standard for the XLIFF 2.0 specification as defined at http://docs.oasis-open.org/xliff/xliff-core/v2.0/xliff-core-v2.0.html.
AaronCIH / APGCC: Improving Point-based Crowd Counting and Localization Based on Auxiliary Point Guidance (ECCV 2024).
valeoai / FOUND: PyTorch code for "Unsupervised Object Localization: Observing the Background to Discover Objects".
wuyuebupt / Doubleheadsrcnn: Rethinking Classification and Localization for Object Detection.
valeoai / Awesome Unsupervised Object Localization: Curated list of awesome works on unsupervised object localization in 2D images.
facebookresearch / 3D Vision And Touch: When told to understand the shape of a new object, the most instinctual approach is to pick it up and inspect it with your hand and eyes in tandem. Here, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to fusing vision and touch, which leverages advances in graph convolutional networks. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves single-modality baselines, especially when the object is occluded by the hand touching it; (2) our approach outperforms alternative modality fusion methods and strongly benefits from the proposed chart-based structure; (3) reconstruction quality improves with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.
ShengkaiWu / IoU Aware Single Stage Object Detector: IoU-aware single-stage object detector for accurate localization.
tzzcl / PSOL: Code repository of "Rethinking the Route Towards Weakly Supervised Object Localization" (CVPR 2020).
rmariuzzo / Laravel Localization Loader: Laravel localization loader for webpack. Converts Laravel translation strings to JavaScript objects.
mingweihe / ImageNet: A trial of the Kaggle ImageNet object localization challenge using YOLOv3 on Google Cloud.
Tony607 / YOLO Object Localization Keras: Gentle guide on how YOLO object localization works with Keras (Part 2).
xuehaolan / DANet: Divergent Activation for Weakly Supervised Object Localization (ICCV 2019).
jbhuang0604 / WSL: Weakly Supervised Object Localization with Progressive Domain Adaptation (CVPR 2016).
xuannianz / Keras GaussianYOLOv3: Keras/TensorFlow implementation of Gaussian YOLOv3, "An Accurate and Fast Object Detector Using Localization Uncertainty for Autonomous Driving" (ICCV 2019).