19 skills found
DanielStormApps / Fanny
Monitor your Mac's fan speed and CPU/GPU temperature from your Notification Center.
loneicewolf / Stuxnet Source
Stuxnet source & binaries (+ PLC rootkit). FOR ACADEMIC RESEARCH AND EDUCATIONAL PURPOSES ONLY! Includes source files, binaries, and PLC samples; Fanny added in another repo.
jxom / Fannypack
[UNMAINTAINED] An accessibility-focused, themeable, friendly React UI Kit.
LinusU / Fanny Pack
🗄 Fanny Pack is a non-fancy, but very practical, key/value store.
loneicewolf / Fanny.bmp
Cleaned fanny.bmp malware sample. FOR ACADEMIC RESEARCH AND EDUCATIONAL PURPOSES ONLY! (Includes a Metasploit detection module.)
lukeed / Fannypack
The tool belt for front-end developers.
loneicewolf / Gauss Src
GAUSS malware source [striking similarities with Duqu, Flame, Fanny, Stuxnet, and more]. Source coming soon! Plus binaries and a video showing it in live action: what it does, how to remove it, and, for those interested, how to change the source, compile it, and run it (only as an academic exercise, obviously).
p-lambda / Robust Tradeoff
Code for the ICML 2020 paper "Understanding and Mitigating the Tradeoff Between Robustness and Accuracy" by Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang. Paper available at https://arxiv.org/pdf/2002.10716.pdf.
smooshworx / Pepe Pack
Fanny/Funny Pack remix with updated basket.
elvezpablo / Fnm Parser
Fannie Mae file parser.
zipscene / Fanny
No description available.
brentyi / Fannypack
Tools for training PyTorch models.
mourner / Fanny
A simple and fast multilayer feedforward neural network implementation in JS, made for learning purposes.
sgandhi04 / Eye Track
This engineering design project focuses on creating wearable assistive technology that helps the visually impaired navigate their indoor surroundings. Around 285 million people in the world suffer from some kind of visual impairment, and it is essential that we do everything we can to improve their quality of life. Most individuals can detect physical obstacles and avoid them: if a table is blocking one's path, they simply walk around it to get to the other side. Obstacle avoidance is a motor skill that most of us take for granted, but visually impaired people have trouble with even this simple operation. "Eye-Track" is a solution to help the visually impaired navigate their immediate surroundings in indoor environments.

Background research was done on how others have tried to solve this problem with sensors, voice/vibration feedback, computer vision cameras, and RFID technology. The most appealing approach to me was the use of computer vision cameras, which can detect signatures of objects. I decided to build a device on the Arduino platform, using a cheap computer vision camera and vibration motors for vibrotactile feedback, installed in a fanny pack worn around the waist. After some research, I discovered the low-cost CMUcam5 Pixy camera, which is capable of recording signatures of objects and was able to detect pre-programmed obstacles by their hue. I then programmed Eye-Track to tell the user which direction to walk: the motors vibrate depending on where in the field of vision the signature was detected, guiding the user around the obstacle. An emergency button was also implemented using a low-cost SMS shield on the Arduino Uno board, connected to the user's Android phone, so that they can press a button to seek help in case of an emergency.

To test this product, I set up an obstacle course for the user to walk through. The test subject was blindfolded and asked to complete the course using the wearable device. The test criteria were whether the user was able to identify each obstacle, avoid it, reach the destination, and make it through the course without touching a single obstacle. Based on these actions, the success rate (%) was calculated for one, two, and three obstacles. The goal of avoiding obstacles was achieved: Eye-Track was 87.5% successful with one obstacle, 85% successful with two obstacles, and 72.5% successful with three obstacles; the error count increased as the number of obstacles increased. Eye-Track could be improved with a more sophisticated computer vision camera capable of determining the depth of objects. Additionally, there were numerous qualitative learnings related to walking pace, the distance between objects, and external light conditions. Improving these components could make Eye-Track safer and more apt for mass production.
polleverywhere / Fannypack
A simple set of base views to help develop Backbone applications.
SukunDev / Fanny And Angga
An online invitation template built with Next.js, PostgreSQL, and Vercel, featuring RSVP, a responsive design, and interactive animations.
david-castaneda / Fannypack
📦 Build configurations for node without the hassle.
FANNY-20 / The FANNY Protocol V0.1
A Fully ANoNYmous and decentralized pandemic witness protocol (FANNY).
FANNY-20 / FANNY Backend
A Laravel-based backend used in conjunction with the FANNY hybrid application (an alternative COVID-19 tracking system).