13 skills found
molyswu / Hand Detection: using Neural Networks (SSD) on Tensorflow.

This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN-based task, the most expensive (and riskiest) part of the process is finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric viewpoint). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit for my requirements.

The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views) and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial).

Here is the detector in action.

<img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%">

Realtime detection on a video stream from a webcam.

<img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%">

Detection on a YouTube video.

Both examples above were run on a MacBook Pro **CPU** (i7, 2.5GHz, 16GB). Some FPS numbers are:

| FPS | Image Size | Device | Comments |
| ------------- | ------------- | ------------- | ------------- |
| 21 | 320 * 240 | MacBook Pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | MacBook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | MacBook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |

> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`.
Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version.

**Content of this document**

- Motivation - Why Track/Detect hands with Neural Networks
- Data preparation and network training in Tensorflow (Dataset, Import, Training)
- Training the hand detection Model
- Using the Detector to Detect/Track hands
- Thoughts on Optimizations

> P.S. If you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work!

## Motivation - Why Track/Detect hands with Neural Networks?

There are several existing approaches to tracking hands in the computer vision domain. Incidentally, many of these approaches are rule-based (e.g. extracting background based on texture and boundary features, or distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust. For example, these algorithms might get confused if the background is unusual, if sharp changes in lighting conditions cause sharp changes in skin color, or if the tracked object becomes occluded (see [this review paper](https://www.cse.unr.edu/~bebis/handposerev.pdf) on hand pose estimation from the HCI perspective).

With sufficiently large datasets, neural networks provide the opportunity to train models that perform well and address the challenges of existing object tracking/detection algorithms: varied/poor lighting, noisy environments, diverse viewpoints and even occlusion.
The main drawbacks to using them for real-time tracking/detection are that they can be complex, are relatively slow compared to tracking-only algorithms, and that it can be quite expensive to assemble a good dataset. But things are changing with advances in fast neural networks.

Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like SSD, Faster R-CNN and R-FCN (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this.

> If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands).

Training a model is a multi-stage process (assembling the dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, a few other tutorials cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if you are interested in training a custom object detector from scratch.

## Data preparation and network training in Tensorflow (Dataset, Import, Training)

**The Egohands Dataset**

The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/).
This dataset works well for several reasons. It contains high-quality, pixel-level annotations (>15000 ground truth labels) of where hands are located across 4800 images. All images are captured from an egocentric view (Google Glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles, etc.).

<img src="images/egohandstrain.jpg" width="100%">

If you will be using the Egohands dataset, you can cite them as follows:

> Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015.

The Egohands dataset (zip file with labelled data) contains 48 folders of locations where video data was collected (100 images per folder).

```
-- LOCATION_X
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
```

**Converting data to Tensorflow Format**

Some initial work needs to be done on the Egohands dataset to transform it into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that will help you generate the needed csv files. It:

- Downloads the egohands dataset
- Renames all files to include their directory names, to ensure each filename is unique
- Splits the dataset into train (80%), test (10%) and eval (10%) folders
- Reads in `polygons.mat` for each folder, generates bounding boxes and visualizes them to ensure correctness (see image above)

Once the script is done running, you should have an images folder containing three folders: train, test and eval.
Each of these folders should also contain a csv label file (`train_labels.csv`, `test_labels.csv`) that can be used to generate `tfrecords`.

Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), for my purpose I am only interested in the general `hand` class, so I label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels.

Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process.

## Training the hand detection Model

Now that the dataset has been assembled (and your tfrecords), the next task is to train a model based on it. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model (that has been trained well on a related domain, here image classification) and retrain its final layer(s) to detect hands for us. Sweet!

Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours. Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)), and I chose to use the `ssd_mobilenet_v1_coco` model as my starting point given it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)).
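For transfer learning, the model's pipeline config is the main thing that changes. As a rough sketch (the paths and file names below are assumptions for illustration; check the sample configs shipped with the object detection api for the real fields), the entries you would typically edit for a single `hand` class are:

```
model {
  ssd {
    num_classes: 1  # only the generic "hand" class
  }
}
train_config {
  # start from the COCO-pretrained weights instead of training from scratch
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco/model.ckpt"
}
train_input_reader {
  tf_record_input_reader { input_path: "data/train.record" }
  label_map_path: "data/hand_label_map.pbtxt"
}
```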
The training process can be done locally on your CPU machine, which may take a while, or (better) on a cloud GPU machine, which is what I did. For reference, training on my macbook pro (tensorflow compiled from source to take advantage of the mac's cpu architecture), the maximum speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. It would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB) compared to ~5hrs on a GPU.

> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md).

As the training process progresses, the expectation is that total loss (error) gets reduced to its possible minimum (about a value of 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to get an idea of when the training process is complete (total loss does not decrease with further iterations/steps). I ran my training job for 200k steps (it took about 5 hours) and stopped at a total loss value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses your model to see how well it performs on the test data. A commonly used metric for performance is mean average precision (mAP), a single number used to summarize the area under the precision-recall curve.
mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **0.9686@0.5IOU**. mAP values range from 0 to 1; the higher, the better.

<img src="images/accuracy.jpg" width="100%">

Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection.

## Using the Detector to Detect/Track hands

If you have not done this yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This will walk you through setting up the tensorflow framework and cloning the tensorflow github repo. The detection steps are:

- Load the `frozen_inference_graph.pb` trained on the hands dataset, as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.

```python
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print("> ====== Hand Inference graph loaded.")
```

- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.

```python
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```

- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method.
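The visualization step is mostly bookkeeping: the object detection api returns boxes as normalized `(ymin, xmin, ymax, xmax)` coordinates, which have to be scaled to pixel space and filtered by score before drawing. A minimal sketch of that conversion (the function name and threshold here are illustrative, not the repo's exact code):

```python
def boxes_to_pixels(boxes, scores, im_width, im_height, score_thresh=0.2):
    """Scale normalized (ymin, xmin, ymax, xmax) boxes to pixel
    (left, right, top, bottom) tuples, keeping only confident detections."""
    kept = []
    for box, score in zip(boxes, scores):
        if score < score_thresh:
            continue  # drop low-confidence detections before drawing
        ymin, xmin, ymax, xmax = box
        kept.append((int(xmin * im_width), int(xmax * im_width),
                     int(ymin * im_height), int(ymax * im_height)))
    return kept

# e.g. a detection covering the left half of a 320x240 frame
print(boxes_to_pixels([(0.0, 0.0, 1.0, 0.5)], [0.9], 320, 240))
# → [(0, 160, 0, 240)]
```

The pixel tuples can then be handed straight to `cv2.rectangle` for drawing.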
This repo contains two scripts that tie all these steps together.

- `detect_multi_threaded.py`: A threaded implementation for reading camera video input and running detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), image parameters `--width` and `--height`, and the video `--source` (0 for camera).
- `detect_single_threaded.py`: Same as above, but single threaded. This script also works for video files, by setting the video `--source` parameter to the path of a video file.

```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```

> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model-checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/).

## Thoughts on Optimization

A few things led to noticeable performance increases.

- Threading: It turns out that reading images from a webcam is a heavy I/O operation, and if run on the main application thread it can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across multiple worker threads. This mostly led to an FPS increase of about 5 points.
- For those new to Opencv: images from the `cv2.read()` method are returned in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/).
Ensure you convert to RGB before detection (accuracy will be much reduced if you don't).

```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```

- Keeping your input image small will increase FPS without any significant accuracy drop. (I used about 320 x 240, compared to the 1280 x 720 which my webcam provides.)
- Model quantization: Moving from the current 32-bit to 8-bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore the use of [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd).

Performance can also be increased by a clever combination of tracking algorithms with the already decent detection, and this is something I am still experimenting with. Have ideas for optimizing this better? Please share!

<img src="images/general.jpg" width="100%">

Note: The detector does reflect some limitations associated with the training set. These include non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is an opportunity to improve these with additional data.

## Integrating Multiple DNNs

One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a consequence of how it was trained). To create a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.

> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave output from multiple pretrained models for various object classes and have them detect multiple objects on a single image.
An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image. More on this soon.
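As a rough illustration of that interleaving idea (the detector interface here is invented for the sketch, not this repo's or the object detection api's actual interface), running several single-class detectors on the same frame and pooling their outputs into one labeled list could look like:

```python
def merge_detections(frame, detectors, score_thresh=0.5):
    """Run several single-class detectors on one frame and pool their
    confident detections into a single (label, box, score) list."""
    merged = []
    for label, detect in detectors.items():
        # each detector returns (box, score) pairs for its one class
        for box, score in detect(frame):
            if score >= score_thresh:
                merged.append((label, box, score))
    # highest-confidence detections first
    return sorted(merged, key=lambda d: d[2], reverse=True)

# toy stand-ins for a hand detector and a second object detector
hand = lambda f: [((10, 10, 50, 50), 0.9)]
mug = lambda f: [((60, 20, 90, 40), 0.7), ((0, 0, 5, 5), 0.2)]
print(merge_detections(None, {"hand": hand, "mug": mug}))
```

In a real application each `detect` callable would wrap a `sess.run` on its own frozen graph, and the expensive part becomes scheduling those runs (e.g. across threads) so the per-frame latency stays acceptable.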
bert2 / DtmfDetection: C# implementation of the Goertzel algorithm for DTMF tone (a.k.a. Touch-Tone) detection and localization in audio data. Includes wrappers and extensions for NAudio.
lazerwalker / Tinsel: Choose Your Own Adventure-style interactive fiction for touch-tone phones
unframework / Dtmf Detect: WebAudio demo to detect touch-tone DTMF codes (phone keys)
JNDreviews / Best Smartphones Under Rs.15000: Which are the best smartphones under Rs.15,000? The best smartphone models of 2021 under Rs.15,000, and how to find them.

Smartphones have become a central part of our lives; we can't imagine our existence without them. If you are hoping to buy a smartphone under ₹15,000, take a look at our list. There are many phones available across segments, but smartphones under Rs.15,000 are the most crowded segment in the Indian market, and here we get phones that offer excellent value, advanced features and good performance. The factors that should be considered when buying a smartphone under Rs.15,000 are battery performance, fast charging, a good display, decent performance and gaming experience, RAM, processor, camera and operating system, and all of these are covered in the list below. Manufacturers focus on making quality technology that is accessible to everyone. If you are looking for a smartphone within your budget, check out the list below.

Here is the current list of the best smartphones under Rs.15,000:

- Redmi Note 10
- Realme 8
- Realme Narzo 30
- Samsung Galaxy M32
- Motorola Moto G30

Redmi Note 10:

The Redmi Note 10 is one of the best smartphones under Rs.15,000. Redmi has recently refreshed its Note series. This device comes with a bright 6.43-inch full-HD display and offers good performance. In terms of battery life, it has a 5,000mAh battery which can easily last a day and charges from 0 to 50% within 30 minutes. Its super AMOLED display gives a smooth and immersive viewing experience.
The Redmi Note 10 is powered by the Qualcomm Snapdragon 678 SoC, which is capable enough for casual gaming as well as everyday tasks. Photography is handled by a 48MP quad rear camera with an 8MP ultra-wide lens, a 2MP macro lens and a portrait lens, plus a 13MP selfie camera on the front. It can record 4K@30fps and supports beauty mode, slow motion and other features. The Redmi Note 10 has dual stereo speakers with Hi-Res certified audio for an immersive sound experience. The side-mounted fingerprint sensor comes with a flush design, so you can unlock the device with just a touch. Corning Gorilla Glass shields the device from unexpected falls and unwanted scratches. The Redmi Note 10 comes in 3 different stylish colors: Aqua Green, Shadow Black and Frost White. It also has a 3.5mm audio jack: just plug and play for nonstop entertainment.

Technical specifications:

- Dimensions (mm): 160.46 x 74.50 x 8.30
- Weight (g): 178.80
- Battery capacity (mAh): 5000
- Fast charging: Proprietary
- Colors: Aqua Green, Frost White, Shadow Black
- Screen size (inches): 6.43, touchscreen
- Resolution: 1080x2400 pixels
- Protection type: Gorilla Glass
- Processor: octa-core Qualcomm Snapdragon 678
- RAM: 4GB
- Internal storage: 64GB
- Expandable storage: Yes, microSD up to 512GB, dedicated microSD slot
- Rear cameras: 4 (48-megapixel + 8-megapixel + 2-megapixel + 2-megapixel), autofocus, flash
- Front camera: 13-megapixel
- Operating system: Android 11 with MIUI 12 skin
- Sensors: fingerprint, compass/magnetometer, proximity, accelerometer, ambient light, gyroscope

Pros: Eye-catching design. Good camera output from the primary camera. Good display and great battery life.
Cons: Disappointing gaming performance.
Realme 8:

The Realme 8 is a good device for media consumption with an attractive, striking design. Experience bright, vivid colors on its 6.4" super AMOLED full-screen display with a touch sampling rate of 180Hz. The fast in-display fingerprint scanner gives an easier unlock experience. It comes with a 5000mAh battery compatible with 30W fast charging technology (the device charges to 100% in only 65 minutes), Hi-Res certified audio for an immersive sound experience, an ultra-thin 7.99mm and 177g design, and 6GB RAM with 128GB built-in storage. The Neon Portrait feature helps highlight your beauty, and the Dynamic Bokeh feature helps you take more stylish and dynamic pictures; the front and back cameras help you make the most of your creativity. Using tilt-shift mode you can add miniature effects to your photos to make them look cute and beautiful. If you are looking for a smartphone under Rs.15,000, you can go for the Realme 8.

Let's look at some technical specs:

- Dimensions (mm): 160.60 x 73.90 x 7.99
- Weight (g): 177.00
- Battery capacity (mAh): 5000
- Fast charging: Proprietary
- Colors: Cyber Black, Cyber Silver
- Screen size (inches): 6.40, touchscreen
- Resolution: 1080x2400 pixels
- Processor: octa-core MediaTek Helio G95
- RAM: 8GB
- Internal storage: 128GB
- Expandable storage: Yes, microSD
- Rear cameras: 4 (64-megapixel + 8-megapixel + 2-megapixel + 2-megapixel), autofocus, flash
- Front camera: 16-megapixel
- Operating system: Android 11 with Realme UI 2.0 skin
- Face unlock and in-display fingerprint sensor
- Sensors: compass/magnetometer, proximity, accelerometer, ambient light, gyroscope

Pros: Dependable performance. 90Hz refresh rate display. Great battery life.
Cons: Disappointing camera experience. Bloatware-cluttered UI. Slow charging.

Realme Narzo 30:

If you are looking for the best smartphones under Rs.15,000, check out the Realme Narzo 30, a recently launched phone with excellent features. Realme is one of the fastest growing brands in the Indian market. Coming to its specifications, the new device has a bright 6.5" display that offers a smooth scrolling experience, a large 5000mAh battery and a MediaTek Helio G85 octa-core processor. The Narzo 30 ships with 64GB of storage that is further expandable up to 256GB using a microSD card. It comes with a 48MP AI triple camera and a 16MP front camera. It offers connectivity options like Mobile Hotspot, Bluetooth v5.0, A-GPS, GLONASS, WiFi 802.11, USB Type-C and USB charging, along with support for 4G VoLTE networks. The Narzo 30 features a racetrack-inspired V-speed design for a thrilling, edgy look, and runs Android 11, which is smooth and easy to use. The Realme Narzo 30 is one of the best smartphones under Rs.15,000.

Let's look at some technical specs:

- Screen size (inches): 6.5
- Display technology: IPS LCD
- Screen resolution (pixels): 1080 x 2400
- Pixel density (ppi): 270
- Refresh rate: 90 Hz
- Cameras: triple rear camera, 48 + 2 + 2 megapixels; 16-megapixel front camera; face detection; HDR
- Battery capacity (mAh): 5000
- Fast charging: 30W, Type-C port
- CPU: MediaTek Helio G95, 2x2.05 GHz + 6x2.0 GHz, octa-core
- RAM: 4 GB
- GPU: Mali-G76 MC4
- Dimensions (LxBxH in mm): 162.3 x 75.4 x 9.4
- Weight (grams): 192
- Storage: 64 GB

Pros: Great display for watching videos. Decent primary camera in daytime.
Cons: Poor low-light camera performance.
Samsung Galaxy F22:

Samsung presents the Samsung Galaxy F22, a great smartphone under Rs.15,000. If you are a moderate user who browses social media, watches some videos and plays games for fun, then this phone is designed for you. Keeping the entry-level mid-range segment in view, Samsung has made its presence felt among the masses. A remarkable phone with a splendid look and very good performance, the Samsung Galaxy F22 comes with a 16.23cm (6.4") sAMOLED Infinity-U display; the super AMOLED HD panel is well designed and pleasing to the eye for long viewing sessions. Glam up your feed with a true 42MP quad camera. Expect seamless multitasking, massive storage and power from the MediaTek G80 processor. It is available in two cool colors: Denim Black and Denim Blue. The Samsung Galaxy F22 comes with a 6000mAh battery, so you can go a whole day without having to constantly recharge. Every photo that you capture on this Samsung Galaxy F22 will be clear and detailed, and you can make quick and fast payments using Samsung Pay Mini.

Let's look at some technical specs:

- Dimensions (mm): 159.90 x 74.00 x 9.30
- Weight (g): 203.00
- Battery capacity (mAh): 6000
- Screen size (inches): 6.40, touchscreen
- Resolution: 720x1600 pixels
- Protection type: Gorilla Glass
- Processor: octa-core MediaTek Helio G80
- RAM: 4GB
- Internal storage: 64GB
- Operating system: Android 11
- Rear cameras: 4 (48-megapixel + 8-megapixel + 2-megapixel + 2-megapixel), autofocus
- Front camera: 13-megapixel

Pros: 90 Hz refresh rate. Samsung Pay Mini. Up-to-date design.

Motorola Moto G30:

Motorola has launched the Moto G30, one of the best smartphones under Rs.15,000 in India. The phone runs Android 11 with a near-stock interface.
The Moto G30 comes with a quad camera that includes a 64MP primary sensor, plus a 13MP camera at the front. It comes in two different colors, Dark Pearl and Pastel Sky. The Moto G30 has a 6.5-inch HD display with a 20:9 aspect ratio, a 90Hz refresh rate and a 720x1600-pixel resolution, and it runs Android 11. The phone is loaded with features like Night Vision, shot optimization, auto smile capture, HDR and RAW photo output. It is powered by a Qualcomm Snapdragon 662 octa-core processor along with 4GB of RAM, and comes with 64GB of onboard storage that is expandable up to 512GB via a microSD card. The Moto G30 has a 5,000mAh battery that can go more than 2 days on a single charge. Comprehensive hardware and software security ensures your personal data is better protected, and NFC technology helps you make smooth, fast and secure payments when you hold the phone near an NFC terminal. Connectivity options include Wi-Fi 802.11 a/b/g/n/ac, GPS, Bluetooth v5.0, NFC and USB Type-C. It measures 169.60 x 75.90 x 9.80mm and weighs 225.00g.

Let's look at some technical specs:

- Manufacturer: Moto
- Model: G30
- Launch date (global): 09-03-2021
- Operating system: Android 11
- Display: 6.50-inch, 720x1600 pixels
- Processor: Qualcomm Snapdragon 662
- RAM: 4GB
- Battery capacity: 5000mAh
- Rear camera: 64MP + 8MP + 2MP
- Front camera: 13MP
- CPU speed: 4x2.0 GHz + 4x1.8 GHz, octa-core
- GPU: Adreno 610
- Dimensions (LxBxH in mm): 165.2 x 75.7 x 9.1
- Weight (grams): 200
- Storage: 128 GB
- Fast charging: 20W, Type-C port

Pros: High refresh rate display. Clean Android 11 UI. Good battery performance. Good cameras.
Cons: Big and bulky. Aggressive night mode.

For more cool stuff like this, visit our site (JUSTNEWSDAY.COM).
ThibaultDucray / TonexPedal TouchOSC Template: Complete template to configure the ToneX pedal from your computer using MIDI (standard MIDI or MIDI over USB). Switch presets, activate/deactivate effects (Reverb, Compressor, NoiseGate), and tune effect parameters.
TT3D / DivaSlide: An open source touch slider for Project DIVA Future Tone (PS4) controllers
poprhythm / Touch Tone Midi: Arduino sketch to operate a landline phone that sends MIDI notes
andreas-yuji-fujiki-dev / Material GTK3 Purple MOD With Borders: A customized Material-GTK theme with a purple color scheme and sharp, non-rounded edges for a sleek, modern look. This theme brings a unique touch to your GNOME desktop, blending the elegance of Material Design with vibrant purple tones, perfect for users seeking a distinctive and clean aesthetic.
codecaffeine / TouchTones: More playing around with audio on iOS
vanja-san / Comfort Edition: A touch more comfy! A skin in standard Steam tones, but with a drop of comfort.
cheehieu / Touch Tone Recognition: A MATLAB algorithm to decode computer- and user-generated DTMF touch tones.
AlphaTechnolog / Duskbloom: [WIP] A darker, moodier tone with a touch of elegance