16 repositories found
mehtadushy / SelecSLS Pytorch: Reference ImageNet implementation of the SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On Implicit Filter Level Sparsity in Convolutional Neural Networks".
taki0112 / AdaBound Tensorflow: Simple Tensorflow implementation of "Adaptive Gradient Methods with Dynamic Bound of Learning Rate" (ICLR 2019).
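For readers unfamiliar with the bounding rule, here is a minimal NumPy sketch of one AdaBound-style update, assuming the standard formulation from the paper; the hyperparameter names `final_lr` and `gamma` follow common open-source implementations, not necessarily this repository's code:

```python
import numpy as np

def adabound_step(param, grad, m, v, t, lr=1e-3, final_lr=0.1,
                  beta1=0.9, beta2=0.999, gamma=1e-3, eps=1e-8):
    """One AdaBound update (illustrative sketch, not the repo's code).

    The Adam-style step size lr / sqrt(v_hat) is clipped into a band
    [lower, upper] that tightens around final_lr as t grows, so the
    optimizer transitions from Adam-like to SGD-like behavior.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)            # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)            # bias-corrected second moment
    lower = final_lr * (1 - 1 / (gamma * t + 1))   # rises toward final_lr
    upper = final_lr * (1 + 1 / (gamma * t))       # falls toward final_lr
    step = np.clip(lr / (np.sqrt(v_hat) + eps), lower, upper)
    return param - step * m_hat, m, v
```

Early in training the band is wide (Adam-like steps); as `t` grows both bounds converge to `final_lr`, effectively recovering SGD with that rate.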
shirakawas / ASNG NAS: Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search.
juntang-zhuang / Torch ACA: Repository for the paper on the Adaptive Checkpoint Adjoint (ACA) method for gradient estimation in neural ODEs.
yashkant / Padam Tensorflow: Reproduction of the paper "Padam: Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" for the ICLR 2019 Reproducibility Challenge.
uclaml / Padam: Partially Adaptive Momentum Estimation method from the paper "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" (accepted by IJCAI 2020).
titu1994 / Keras Padam: Keras implementation of Padam from "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks".
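The Padam rule referenced by the three entries above is compact enough to sketch. The following is an illustrative NumPy version, not code from any of these repositories; `p = 1/8` is the partial exponent suggested in the paper's experiments:

```python
import numpy as np

def padam_step(param, grad, m, v, v_max, lr=0.1, p=0.125,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """One Padam update (illustrative sketch).

    Padam replaces Adam's v_hat ** 0.5 denominator with v_max ** p for a
    partial exponent p in (0, 0.5]: p = 0.5 recovers AMSGrad, and p -> 0
    approaches SGD with momentum, which is what closes the
    generalization gap in the paper's experiments.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_max = np.maximum(v_max, v)                 # AMSGrad-style running max
    param = param - lr * m / (v_max ** p + eps)  # partial exponent p
    return param, m, v, v_max
```

Note that a smaller `p` makes the denominator closer to 1, so Padam tolerates the larger base learning rates typical of SGD.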
Tarzanagh / DADAM: A Consensus-Based Distributed Adaptive Gradient Method for Online Optimization.
dkolom / GALA2D: Adaptive gradient-augmented level set method for a two-dimensional advection equation.
Wadaboa / Cpr Appropriation: Solutions to the Harvest CPR appropriation problem with policy gradient methods and social learning, for the Autonomous and Adaptive Systems class at UNIBO.
thanhdnh / ATVBH SIViP: Code for the paper "An adaptive method for image restoration based on high-order total variation and inverse gradient".
andrewjc / PyTorch AdamL: PyTorch implementation of the optimizer described in "AdamL: A fast adaptive gradient method incorporating loss function".
ChrisYZZ / CADA Master: Stochastic gradient descent (SGD) is the primary workhorse for large-scale machine learning, often used with adaptive variants such as AdaGrad, Adam, and AMSGrad. This paper proposes an adaptive stochastic gradient descent method for distributed machine learning that can be viewed as the communication-adaptive counterpart of the celebrated Adam method, justifying the name CADA. The key components of CADA are a set of new rules, tailored to adaptive stochastic gradients, that can be implemented to save on communication uploads. The new algorithms adaptively reuse stale Adam gradients, thus saving communication, while retaining convergence rates comparable to the original Adam. In numerical experiments, CADA achieves a substantial reduction in the total number of communication rounds.
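The "reuse stale gradients" idea can be illustrated with a toy communication-skipping rule. This is a simplification in the spirit of CADA, not the paper's exact condition or this repository's code; the threshold choice (a fraction `c` of the average recent gradient change) is an assumption made for illustration:

```python
import numpy as np

def maybe_upload(g_new, g_stale, recent_diffs, c=0.5):
    """Decide whether a worker uploads its fresh gradient (toy rule).

    The worker uploads only when the fresh gradient has drifted enough
    from the stale copy the server already holds; otherwise the server
    reuses the stale gradient and the upload round is skipped.
    """
    drift = np.sum((g_new - g_stale) ** 2)
    # Hypothetical threshold: fraction c of the mean recent squared change.
    threshold = c * np.mean(recent_diffs) if recent_diffs else 0.0
    if drift >= threshold:
        return g_new, True       # communicate: server gets fresh gradient
    return g_stale, False        # skip: server reuses the stale gradient
```

When many workers have slowly changing gradients, most rounds return `False`, which is exactly where the communication savings come from.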
dpoyyyy / Cgrpa Optimizer: CGRPA is a second-order optimization method that adaptively interpolates between first-order (gradient descent) and second-order (Newton-like) directions based on curvature reliability.
IST-DASLab / EFCP: Code to reproduce the experiments from the paper "Error Feedback Can Accurately Compress Preconditioners".
hk60906632 / Retinopathy Project Thesis: Diabetic Retinopathy (DR) is an eye disease that reduces the integrity of the blood vessels in the retinal layers, leading to retinal blood vessel leakage [2]. Sodium Fluorescein Angiography (FA), which images the back of the eye, is widely used to monitor vessel leakage and permeability and has important diagnostic value. Gamez [2], a PhD student at the University of Bristol, performed FA on mice, manually extracted fluorescent intensity data from the resulting FA videos, and plotted the fluorescence intensity ratio (FIR) against time to obtain its gradient, the solute flux (ΔIf/Δt). These data were then used to support the development of a Fick's-law-adapted equation, P = (ΔIf/Δt)/(ΔC × A), for the vessel permeability. The obstacle with this method was that manual data capture was too time-consuming. It also required many manual adjustments because of camera movement caused by the mice's heartbeat and eyeball motion; this movement also produced blurry, unsharp frames in the FA videos and hence inaccurate fluorescent intensities. This project developed a more intelligent data-capture pipeline using OpenCV with Python. It first experimented with K-means clustering to segment the exchange-vessel groups and the large-vessel group out of the FA frames to obtain the FIR for the FIR-vs-time graph. Two K-means settings were compared: one used random initial centers, and the other used the previous frame's centers found by K-means as the initial centers for the current frame ("reuse center" K-means). The experiments found that random-initial-center K-means produced a stable FIR when the maximum iteration count was 7 or above, with a best epsilon (required accuracy) of 0.1.
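The two clustering settings just described can be sketched in a few lines; this is a plain Lloyd's k-means on 1-D intensities in NumPy, illustrative only, since the project itself used OpenCV:

```python
import numpy as np

def kmeans(data, centers, max_iter=7, eps=0.1):
    """1-D Lloyd's k-means on pixel intensities.

    Stops after max_iter iterations or when no center moves more than
    eps. Passing the previous frame's converged centers as `centers`
    gives the "reuse center" variant; passing random centers gives the
    baseline. The defaults mirror the settings found to stabilize the
    FIR curve (max_iter=7, eps=0.1).
    """
    data = np.asarray(data, dtype=float)
    centers = np.asarray(centers, dtype=float).copy()
    labels = np.zeros(len(data), dtype=int)
    for _ in range(max_iter):
        # assign each sample to its nearest center
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([
            data[labels == k].mean() if np.any(labels == k) else centers[k]
            for k in range(len(centers))
        ])
        moved = np.max(np.abs(new_centers - centers))
        centers = new_centers
        if moved < eps:
            break
    return labels, centers
```

Warm-starting from the previous frame's centers means consecutive frames usually converge in one or two iterations, which explains the much shorter execution time reported below.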
Random-initial-center K-means could not be used with a maximum iteration count below 7, because the FIR-vs-time graph then showed a large amount of noise and severe deformation. Conversely, reuse-center K-means showed no deformation or noise on the FIR-vs-time graph even when the maximum iteration count was 7 or below, and had a much shorter execution time than random-initial-center K-means. The difference in the gradient of the FIR-vs-time graph between the two settings was then examined further. Random-initial-center K-means showed fluctuations in the gradient value for maximum iteration counts between 7 and 15. Reuse-center K-means showed either an ascending or a descending trend in the gradient value when the maximum iteration count was below 7, and the gradient stabilized for counts between 7 and 15. Reuse-center K-means was therefore chosen for the final software, with a default maximum iteration count of 7 to prioritize gradient accuracy over execution time, while allowing the user to lower the count to reduce execution time. The project then experimented with classifying blurry frames using Sobel edge detection. Each FA frame was convolved with the Sobel derivative operator to obtain an edge sharpness value. The edge-sharpness-versus-frame-number graphs were examined for all videos and showed a clear separation between sharp and blurry frames: sharp frames had higher edge sharpness, blurry frames lower. Code was then written to loop over all the data points in the edge-sharpness-vs-frame-number graph and classify each frame as sharp or blurry.
The code first checks whether the range of several neighboring data points (PtPbox) exceeds a specific value (the tolerance value); if so, the data point needs a sharpness check, which takes the mean of several neighboring data points (meanBox) and tests whether the current point's edge sharpness is lower or higher than that mean. Lower means a blurry frame; higher means a sharp frame. A series of experiments found the optimal values to be a PtPbox of 20, a meanBox of 10, and a tolerance of 0.1, with no histogram equalization required. The sharp-frame identification accuracy was above 80% and the blurry-frame identification accuracy above 96% for all tested FA videos. These experimental components were then connected by a graphical user interface written in Python with PyQt4. Finally, PyInstaller was used to package the Python code into a stand-alone Microsoft Windows executable for Gamez [2] to use.
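The Sobel sharpness measure and the PtPbox/meanBox rule described above can be sketched as follows. This is an illustrative NumPy reconstruction, assuming both windows are centered on the current frame; the thesis does not specify the exact windowing:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_sharpness(frame):
    """Mean Sobel gradient magnitude over the frame interior.

    Sharp frames score high; motion-blurred frames score low.
    """
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                       # 3x3 Sobel correlation
        for j in range(3):
            patch = frame[i:i + h - 2, j:j + w - 2]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    return float(np.mean(np.hypot(gx, gy)))

def classify_frames(sharpness, ptp_box=20, mean_box=10, tol=0.1):
    """Label each frame sharp (True) or blurry (False).

    Only when the local peak-to-peak spread over ptp_box neighbors
    exceeds tol is the frame compared against the mean of its mean_box
    neighbors; otherwise it is left labeled sharp. Defaults are the
    optimal values reported above.
    """
    s = np.asarray(sharpness, dtype=float)
    labels = np.ones(len(s), dtype=bool)
    for i in range(len(s)):
        lo, hi = max(0, i - ptp_box // 2), min(len(s), i + ptp_box // 2 + 1)
        if np.ptp(s[lo:hi]) > tol:           # spread check (PtPbox)
            mlo = max(0, i - mean_box // 2)
            mhi = min(len(s), i + mean_box // 2 + 1)
            labels[i] = s[i] >= s[mlo:mhi].mean()  # below local mean -> blurry
    return labels
```

The spread check keeps uniformly sharp (or uniformly dim) stretches of video from being misclassified by small fluctuations around the local mean.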