80 skills found · Page 3 of 3
libxsmm / Libxsmm Dnn · Reference implementation of Deep Neural Network primitives using LIBXSMM's Tensor Processing Primitives (TPP)
thanhtbt / ROLCP · [IEEE ICASSP 2021] "A fast randomized adaptive CP decomposition for streaming tensors". In 46th IEEE International Conference on Acoustics, Speech, & Signal Processing, 2021.
xxTVOJAMAMAxx / SeedVR2 Memory Efficient Batch Processor · Solves ComfyUI's inherent RAM bottleneck where videos are converted to uncompressed 32-bit float tensors (~24 MB per frame for 1080p), causing memory usage to grow steadily with video length. This workflow loads and processes frames in configurable batches, saving processed frames directly to disk as compressed PNGs to prevent accumulation.
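The per-frame figure in this entry can be checked with simple arithmetic; a minimal sketch assuming 1080p RGB frames stored as uncompressed 32-bit floats (the frame rate and clip length below are illustrative, not from the repository):

```python
def frame_bytes(width=1920, height=1080, channels=3, bytes_per_value=4):
    """Size in bytes of one uncompressed float32 RGB frame."""
    return width * height * channels * bytes_per_value

per_frame = frame_bytes()          # 24_883_200 bytes, i.e. ~24 MB per frame
one_minute = per_frame * 30 * 60   # a hypothetical 30 fps, 60 s clip: ~44.8 GB
```

This is why holding a whole video in memory fails quickly, and why batching frames to disk as compressed PNGs avoids the accumulation.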
val-link / ITEBD TEMPO · Open quantum system dynamics using process tensor networks and iTEBD.
provostultrasoundlab / SparseTensorULM · Companion code for the paper: Rauby, B., Xing, P., Porée, J., Gasse, M., & Provost, J. (2025). Pruning Sparse Tensor Neural Networks Enables Deep Learning for 3D Ultrasound Localization Microscopy. IEEE Transactions on Image Processing, 34, 2367–2378. https://doi.org/10.1109/tip.2025.3552198
fatchur / Tensorflow And Image Processing Tutorial · Collection of Community Tutorial Materials
gpeyre / 2017 EJAM Quantum Ot · G. Peyré, L. Chizat, F-X. Vialard, J. Solomon, Quantum Optimal Transport for Tensor Field Processing, arXiv, 2016
whyb / FastChwHwcConverter · A high-performance, header-only C++ library for image tensor data format conversion, leveraging OpenMP parallel processing for high CPU performance and optimized CUDA/HIP implementations for AMD and NVIDIA GPUs, ensuring speed and scalability across diverse hardware platforms.
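The conversion such a library performs can be illustrated with a naive pure-Python reference; the real library uses OpenMP/CUDA/HIP kernels, and this sketch only shows the HWC→CHW index mapping:

```python
def hwc_to_chw(img, h, w, c):
    """Reorder a flat HWC buffer (pixel-interleaved channels) into CHW
    layout (one contiguous plane per channel)."""
    out = [0] * (h * w * c)
    for y in range(h):
        for x in range(w):
            for ch in range(c):
                out[ch * h * w + y * w + x] = img[(y * w + x) * c + ch]
    return out

# A 1x2 image with 2 channels: HWC [1, 2, 3, 4] becomes CHW [1, 3, 2, 4].
```

Deep-learning frameworks typically expect CHW tensors while image decoders produce HWC buffers, which is why this conversion sits on the hot path and benefits from parallelization.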
vikram-mm / Spiking Neural Network Theano Framework · Spiking neural networks are biologically plausible CNNs which learn through a temporally dependent learning method known as Spike-Timing-Dependent Plasticity (STDP), an alternative to gradient descent. This repository contains layers built on top of Lasagne layers for spiking neural networks. To the best of my knowledge, this is the first implementation of spiking neural networks in any tensor-based framework. The various layers can be found in snn.py (dense layer) and snn_conv.py (other layers). These layers are processed for each time step, which is done using Theano scan as a quick hack in the snn class. The results can be found in the ppt. Further details on how to use the code will be added later.
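The STDP rule this entry refers to can be sketched generically; this is the textbook exponential pair-based rule, not necessarily the exact update used in the repository, and a_plus, a_minus and tau are illustrative constants:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre
    (dt < 0) depresses it, each with an exponential falloff in |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

The temporal dependence is the key contrast with gradient descent: the update is driven by relative spike timing, not by a differentiable loss.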
mfkiwl / Bfloat16 Floating Point Arithmetic Unit · Design and implementation of a pipelined Bfloat16 Floating Point Arithmetic Unit using VHDL. This unit can perform addition, subtraction, multiplication, division and fused multiply-add/subtract operations. Bfloat16 is a 16-bit floating-point data type developed at Google and currently used in their Tensor Processing Units (TPUs). Thanks to its dynamic range, the Bfloat16 format can be useful for Machine Learning applications that work well with low-precision representations of data. This Bfloat16 unit will be used to add custom RISC-V floating-point instructions to a RISC-V processor that can potentially be used as a hardware accelerator for Machine Learning applications. The design will also be tested on an FPGA and possibly modified to achieve optimal performance. Work is still in progress.
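Bfloat16 keeps float32's 8-bit exponent and truncates the mantissa from 23 to 7 bits, which is what preserves the dynamic range the entry mentions. A minimal round-trip sketch (simple truncation; hardware units typically also implement rounding):

```python
import struct

def float32_to_bfloat16_bits(x):
    """Truncate a float32 to bfloat16 by keeping the top 16 bits:
    1 sign bit, 8 exponent bits, 7 mantissa bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b):
    """Re-expand bfloat16 bits to float32 by zero-padding the mantissa."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# 1.0 is exactly representable: float32 0x3F800000 -> bfloat16 0x3F80 -> 1.0
```

Because the exponent field is unchanged, any float32 magnitude survives the conversion; only mantissa precision is lost.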
ameyskulkarni / Detection And Localization Of Traffic Lights Using RCNNs On Bosch BSTLD Dataset · This repository presents code to detect the rear of cars using RCNNs. The dataset consists of road images in different conditions, such as daylight and night. The labels are given in .csv format: each row of the labels file consists of the name of the image, the bounding-box coordinates (x_min, x_max, y_min and y_max), and the label itself. Details are extracted from the csv file and stored in a dataframe. Only a subset of the data was trained on due to resource exhaustion. All the details are given below. Object detection has two parts: object classification and object localization. Bounding boxes are usually used for localization and the labels for classification. The two major techniques used in industry for object detection are RCNNs and YOLO; I have dedicated the time spent on these assignments to learning one of them: RCNNs. Region-Based Convolutional Neural Networks: the architecture of RCNN is extensive, with different blocks of layers for the purposes mentioned above, classification and localization. The code I have used takes VGG-16 as the first block of layers, which takes in the images as 3D tensors and gives out feature maps. To make use of transfer learning, I have used pre-trained weights for this model. This is the base network. The next network block is the Region Proposal Network, a Fully Convolutional Network that uses the concept of anchors. Anchors solve the problem of deciding what size of bounding boxes to use: the image is scaled down, each pixel works as an anchor, and each anchor defines a certain number of bounding-box primitives. The RPN predicts the score of an object being inside each of these bounding-box primitives. A Region of Interest pooling layer appears next.
This layer takes in ROIs of the feature map to compare and classify each bounding box. A post-processing technique, non-maximal suppression, is used to select the bounding box with the highest probability of containing the object. The image is scaled back up and this box is displayed. Hyperparameters used: training samples 2252, test samples 176, ROIs 4, epoch length 500, epochs 91, anchors 9. All results are visible in the ipynb files for training and testing. After running only 40 epochs, the mAP over the test data was 0.68, close to the expected 75%. I trained further and the accuracy visibly improved, judging from the loss graph and bounding-box accuracy, but sadly I cannot report the mAP after this training round because I increased the dataset size and always get a resource-exhaustion error. I am planning to make the code more modular so that I can allocate resources to different modules separately and overcome this issue. The accuracy can further be improved by training on a larger dataset for more epochs; I will try to do this.
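The non-maximal suppression step described in this entry can be sketched as the standard greedy procedure; this is a generic NMS, not the repository's exact code, and the box format and threshold are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring box, drop boxes that overlap it
    beyond the threshold, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Given two heavily overlapping detections of the same car and one distant box, NMS keeps the stronger of the overlapping pair plus the distant box.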
CPestka / Tensor FFT · An implementation of an FFT algorithm targeting fp16 data to accelerate its processing by utilizing tensor cores
LanlanFeng / MTTD · Code for the paper: Lanlan Feng, Ce Zhu, Zhen Long, Jiani Liu, Yipeng Liu, "Multiplex Transformed Tensor Decomposition for Multidimensional Image Recovery," IEEE Transactions on Image Processing, 2023. DOI: 10.1109/TIP.2023.3284673.
PacktPublishing / Natural Language Processing With TensorFlow 2 · No description available
monahatami1 / Coursera Natural Language Processing In TensorFlow · No description available
hkchengrex / Shared Memory Tensor Dataset · This repository provides an example of reading from a single shared memory tensor from multiple processes (e.g., with DDP).
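Attaching to one shared-memory block by name, as multiple DDP workers would, can be sketched with the standard library; raw bytes stand in for the tensor here, whereas the repository presumably shares an actual torch tensor:

```python
from multiprocessing import shared_memory

# Create a shared block, then attach to it a second time by name,
# the way a separate worker process would. Both handles see the same
# memory, so no per-reader copy of the data is made.
owner = shared_memory.SharedMemory(create=True, size=4)
owner.buf[:4] = bytes([1, 2, 3, 4])

view = shared_memory.SharedMemory(name=owner.name)  # attach, zero-copy
total = sum(view.buf[:4])                           # read through the view

view.close()
owner.close()
owner.unlink()  # free the block once all handles are closed
```

In a real DDP setup each rank would attach by the same name, which keeps dataset memory constant in the number of workers instead of duplicating it per process.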
spirit-man / Planetary LiDAR Odometry Modular · A modular LiDAR SLAM framework (currently only the odometry part) based on A-LOAM, with enhanced features for planetary environments. This project implements 3D IMLS-SLAM and introduces tensor voting for better point cloud processing in sparse, featureless environments.
ImperialCollegeLondon / INDI · INDI is a command-line tool to process in-vivo cardiac diffusion tensor imaging.
davidaknowles / Tensor Gp · Code for Tensor Gaussian Process Regression for predicting drug combination synergy, developed for the AstraZeneca-Sanger Drug Combination Prediction DREAM Challenge 2015: https://www.synapse.org/#!Synapse:syn4231880/wiki/235645
beatrizalbiero / MsResearch · I am attempting to reproduce Rumelhart and McClelland's (1986) connectionist experiment from the book Parallel Distributed Processing, chapter "On learning the past tense of English verbs", but applied to the Portuguese language.