159 skills found · Page 1 of 6
oxnr / Awesome Analytics: A curated list of analytics frameworks, software, and other tools.
dart-code-checker / Dart Code Metrics: Software analytics tool that helps developers analyse and improve software quality.
chaoss / Grimoirelab: GrimoireLab, a platform for software development analytics and insights.
change-metrics / Monocle: Monocle helps teams and individuals better organize daily duties and detect anomalies in the way changes are produced and reviewed.
bavc / Qctools: QCTools (Quality Control Tools for Video Preservation) is a free and open-source software tool that helps users analyze and understand their digitized video files through audiovisual analytics and filtering. QCTools is funded by the National Endowment for the Humanities and the Knight Foundation, and is developed by the Bay Area Video Coalition.
chaoss / Grimoirelab Perceval: Send Sir Perceval on a quest to retrieve and gather data from software repositories.
introlab / OpenIMU: Open Source Analytics & Visualisation Software for Inertial Measurement Units.
WMJonssen / Centcount Analytics: Open-source web analytics software. Built with PHP + MySQL + Redis, it can be easily deployed on your own server, with 100% data ownership.
microsoft / Microsoft Rocket Video Analytics Platform: A highly extensible software stack to empower everyone to build practical real-world live video analytics applications for object detection and counting with cutting-edge machine learning algorithms.
git-insights / Git Insights: Git Insights is an analytics tool that gives you insights into your software projects and teams.
Datamart / Komito: 🔖 Komito Analytics is a free, open-source enhancement for the most popular web analytics software.
wa0x6e / ResqueBoard: ResqueBoard is analytics software for PHP Resque. Monitor your workers' health and job activity in real time.
Masudbro94 / Python Hacked Mobile Phone: "How You Can Control Your Android Device with Python" (Kush, ITNEXT, Apr 15, 2021).

Introduction
A while back I was thinking of ways to annoy my friends by spamming them with messages for a few minutes, and while doing some research I came across the Android Debug Bridge. In this quick guide I will show you how to interface with it using Python and how to create two quick scripts. The ADB (Android Debug Bridge) is a command-line tool (CLI) which can be used to control and communicate with an Android device. You can do many things, such as install apps, debug apps, find hidden features, and use a shell to interface with the device directly. To enable the ADB, your device must first have Developer Options unlocked and USB debugging enabled. To unlock Developer Options, go to your device's settings, scroll down to the About section, and find the build number of the software currently on the device. Tap the build number 7 times and Developer Options will be enabled. Then go to the Developer Options panel in the settings and enable USB debugging from there. The only other thing you need is a USB cable to connect your device to your computer. Here is what today's journey will look like:
1. Installing the requirements
2. Getting started
3. The basics of writing scripts
4. Creating a selfie timer
5. Creating a definition searcher

Installing the requirements
The first of the two things we need to install is the ADB tool on our computer. It comes bundled with Android Studio, so if you already have that, do not worry. Otherwise, head over to the official docs; at the top of the page there are instructions on how to install it. Once you have installed the ADB tool, you need the Python library we will use to interface with the ADB and our device. You can install the pure-python-adb library with pip install pure-python-adb. Optional: to make things easier while developing our scripts, we can install an open-source program called scrcpy, which allows us to display and control our Android device from our computer using a mouse and keyboard. To install it, head over to the GitHub repo and download the correct version for your operating system (Windows, macOS, or Linux). On Windows, extract the zip file into a directory and add this directory to your path, so you can launch the program from anywhere on your system just by typing scrcpy into a terminal window.

Getting started
Now that all the dependencies are installed, we can start up the ADB and connect our device. First, connect your device to your PC with the USB cable. If USB debugging is enabled, a message should pop up asking whether it is okay for your PC to control the device; simply answer yes. Then, on your PC, open a terminal window and start the ADB server by typing adb start-server. This should print the following messages:

* daemon not running; starting now at tcp:5037
* daemon started successfully

If you also installed scrcpy, you can start it by typing scrcpy into the terminal. However, this will only work if you added it to your path; otherwise, change your terminal directory to the directory where you installed scrcpy and run scrcpy.exe. If everything works out, you should see your device on your PC and be able to control it using your mouse and keyboard. Now we can create a new Python file and check that we can find our connected device using the library: we import the AdbClient class, create a client object with it, get the list of connected devices, and take the first device from the list (generally the only one there when a single device is connected).
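A minimal sketch of that connection step using pure-python-adb's AdbClient class; the default local ADB server address (127.0.0.1:5037) is assumed:

from ppadb.client import AdbClient  # pip install pure-python-adb

# Connect to the local ADB server started with `adb start-server`.
client = AdbClient(host="127.0.0.1", port=5037)

devices = client.devices()  # all devices the ADB server can see
if not devices:
    raise RuntimeError("No device found - is USB debugging enabled?")

device = devices[0]  # the first (and usually only) connected device
print("Connected to:", device.serial)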
The basics of writing scripts
The main way we are going to interface with our device is the shell; through it we can send commands to simulate a touch at a specific location or a swipe from A to B. To simulate screen touches (taps), we first need to work out how the screen coordinates work. To help with this, activate the pointer location setting in the Developer Options. Once it is activated, wherever you touch on the screen, the coordinates of that point appear at the top. [Diagram: how the coordinate system works.] The top-left corner of the display has the coordinates (0, 0), and the bottom-right corner's coordinates are the largest possible values of x and y. Now that we know how the coordinate system works, we need to check out the different commands we can run. Here they are for quick reference:

input tap x y
input text "hello world!"
input keyevent eventID

Here is a list of some common event IDs:
3: home button
4: back button
5: call
6: end call
24: volume up
25: volume down
26: turn device on or off
27: open camera
64: open browser
66: enter
67: backspace
207: contacts
220: brightness down
221: brightness up
277: cut
278: copy
279: paste

If you want more, a longer list of key codes is available in the Android documentation.

Creating a selfie timer
Now that we know what we can do, let's start doing it. In this first example I will show you how to create a quick selfie timer. To get started, we import our libraries and create a connect function to connect to our device. The connect function is identical to the previous example of how to connect to your device, except that here we return the device and client objects for later use. In our main code, we call the connect function to retrieve the device and client objects. From there we can open up the camera app, wait 5 seconds, and take a photo, as sketched below. It's really that simple! As I said before, this simply replicates what you would usually do manually, so the best way to work out the steps for a script is to do them yourself first and write them down.
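A sketch of that selfie timer under the event IDs listed above. Key event 27 opens the camera; the shutter key varies by phone, so using volume-up (24) as the shutter below is an assumption to adjust for your device:

import time
from ppadb.client import AdbClient

def connect(host="127.0.0.1", port=5037):
    # Same as the earlier example, but returning both objects for later use.
    client = AdbClient(host=host, port=port)
    device = client.devices()[0]
    return device, client

device, client = connect()
device.shell("input keyevent 27")  # open the camera app
time.sleep(5)                      # the 5-second timer
device.shell("input keyevent 24")  # volume-up triggers the shutter on many phones (assumption)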
Creating a definition searcher
We can do something a bit more complex now: ask the browser to find the definition of a particular word, then take a screenshot and save it on our computer. The basic flow of the program will be as such:
1. Open the browser
2. Click the search bar
3. Enter the search query
4. Wait a few seconds
5. Take a screenshot and save it

Before we get started, you need to find the coordinates of the search bar in your default browser; you can use the pointer location method suggested earlier to find them easily. For me they were (440, 200). To start, we import the same libraries as before, and we reuse the same connect method. In our main function we call connect, and we assign a variable to the x and y coordinates of our search bar. Notice that this is a string and not a list or tuple, so that we can easily incorporate the coordinates into our shell command. We also take an input from the user for the word they want the definition of. We add that query to a full sentence which will then be searched, so that we always get the definition. After that, we open the browser and type our search query into the search bar. We use event ID 66 to simulate the press of the enter key and execute our search; if you want, you can change the wait timings to suit your needs. Lastly, we take a screenshot using the screencap method on our device object and save it as a .png file. We must open the file in write-bytes mode because the screencap method returns bytes representing the image. If all went according to plan, you should have a quick script which searches for a specific word, as sketched below. [GIF: the definition searcher running on my phone.]
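A sketch of the definition searcher under the article's values (search bar at (440, 200), event 64 to open the browser); the exact search sentence is an assumption. The %s replacement is an ADB quirk: input text does not accept literal spaces:

import time
from ppadb.client import AdbClient

def connect(host="127.0.0.1", port=5037):
    client = AdbClient(host=host, port=port)
    device = client.devices()[0]
    return device, client

device, client = connect()
search_bar = "440 200"  # x and y of the search bar, kept as a string for the shell command

word = input("Which word would you like defined? ")
query = f"definition of {word}".replace(" ", "%s")  # `input text` needs %s instead of spaces

device.shell("input keyevent 64")        # open the browser
time.sleep(3)
device.shell(f"input tap {search_bar}")  # focus the search bar
time.sleep(1)
device.shell(f"input text {query}")      # type the full search sentence
device.shell("input keyevent 66")        # press enter to search
time.sleep(5)                            # wait for the results to load

screenshot = device.screencap()          # returns the screen as PNG bytes
with open(f"{word}.png", "wb") as f:     # write-bytes mode, as noted above
    f.write(screenshot)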
Final thoughts
Hopefully you have learned something new today; personally, I never even knew this was a thing before I did some research into it. The cool thing is that you can do anything you would normally be able to do, and more, since it just simulates your own touches and actions! I hope you enjoyed the article, and thank you for reading! 💖
dkazanc / TomoPhantom: Software to generate 2D/3D/4D analytical phantoms and their Radon transforms (parallel beam) for image processing.
JanVanHaaren / Soccer Analytics Resources: Collection of references, datasets, and software libraries for soccer analytics.
OpenVisualCloud / Dockerfiles: Optimized media, analytics, and graphics software stack images. Use the Dockerfile(s) in your project or as a recipe book for bare-metal installation.
Scthe / Frostbitten Hair Webgpu: Software rasterizing hair strands with analytical AA and OIT. Inspired by Frostbite's hair system: "Every Strand Counts: Physics and Rendering Behind Frostbite's Hair".
feststelltaste / Awesome Software Analytics: Curated list of awesome resources and links about software analytics.
bancolombia / Dart Code Linter: Dart Code Linter is a software analytics tool that helps developers analyse and improve software quality. It is based on a fork of Dart Code Metrics. We welcome contributions from other developers; please feel free to submit pull requests and bug reports to this GitHub repository.
Aryia-Behroziuan / Neurons: An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds; the weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68]

Decision trees
Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making.
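As an illustrative sketch (assuming scikit-learn; the dataset and depth are arbitrary choices, not from the source), a classification tree mapping observations to class labels:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)           # observations and their class labels
tree = DecisionTreeClassifier(max_depth=3)  # branches: feature tests; leaves: class labels
tree.fit(X, y)
print(tree.predict(X[:5]))                  # predicted class labels for the first five items

A regression tree for continuous targets would use DecisionTreeRegressor instead.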
Support vector machines
Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[69] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVMs in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.

Regression analysis
[Figure: illustration of linear regression on a data set.] Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification), or kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space.

Bayesian networks
[Figure: a simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet.] A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms: given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
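As a sketch of inference in the rain/sprinkler/wet-grass network from the figure (the conditional probability values below are illustrative assumptions), the posterior P(rain | grass wet) can be computed by enumerating the joint distribution:

P_rain = 0.2
P_sprinkler = {True: 0.01, False: 0.4}  # P(sprinkler on | rain)
P_wet = {(True, True): 0.99, (True, False): 0.80,
         (False, True): 0.90, (False, False): 0.0}  # P(grass wet | sprinkler, rain)

def joint(rain, sprinkler, wet):
    # P(rain, sprinkler, wet), factorized along the DAG's edges.
    p = P_rain if rain else 1 - P_rain
    p *= P_sprinkler[rain] if sprinkler else 1 - P_sprinkler[rain]
    p *= P_wet[(sprinkler, rain)] if wet else 1 - P_wet[(sprinkler, rain)]
    return p

# P(rain | wet) = P(rain, wet) / P(wet), summing the sprinkler out.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(rain | grass wet) = {num / den:.3f}")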
Genetic algorithms
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[73]

Training models
Machine learning models usually require a lot of data in order to perform well. When training a machine learning model, one needs to collect a large, representative sample of data from a training set. Data from the training set can be as varied as a corpus of text, a collection of images, or data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model.

Federated learning
Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[74]

Applications
There are many applications for machine learning, including agriculture, anatomy, adaptive websites, affective computing, banking, bioinformatics, brain–machine interfaces, cheminformatics, citizen science, computer networks, computer vision, credit-card fraud detection, data quality, DNA sequence classification, economics, financial market analysis,[75] general game playing, handwriting recognition, information retrieval, insurance, internet fraud detection, linguistics, machine learning control, machine perception, machine translation, marketing, medical diagnosis, natural language processing, natural language understanding, online advertising, optimization, recommender systems, robot locomotion, search engines, sentiment analysis, sequence mining, software engineering, speech recognition, structural health monitoring, syntactic pattern recognition, telecommunication, theorem proving, time series forecasting, and user behavior analytics.

In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and changed its recommendation engine accordingly.[77] In 2010, The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis.[78] In 2012, Sun Microsystems co-founder Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings, and that it may have revealed previously unrecognized influences among artists.[80] In 2019, Springer Nature published the first research book created using machine learning.[81]

Limitations
Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.[87][88]

Bias
Machine learning approaches in particular can suffer from different data biases. A machine learning system trained only on current customers may not be able to predict the needs of new customer groups that are not represented in the training data.
When trained on man-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society.[89] Language models learned from data have been shown to contain human-like biases.[90][91] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[92][93] In 2015, Google Photos would often tag black people as gorillas,[94] and in 2018 this still was not well resolved: Google was reportedly still using the workaround of removing all gorillas from the training data, and thus could not recognize real gorillas at all.[95] Similar issues with recognizing non-white people have been found in many other systems.[96] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[97] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[98] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There's nothing artificial about AI... It's inspired by people, it's created by people, and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[99]

Model assessments
Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training set and a test set (conventionally a 2/3 training and 1/3 test designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each using 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[100] In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and true negative rate (TNR) respectively; similarly, investigators sometimes report the false positive rate (FPR) and the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability: TOC shows the numerators and denominators of the previously mentioned rates, and thus provides more information than the commonly used receiver operating characteristic (ROC) and its associated area under the curve (AUC).[101]
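A brief sketch of the holdout and K-fold methods (scikit-learn assumed; the dataset and classifier are arbitrary illustrations):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Holdout: the conventional 2/3 training, 1/3 test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
model = DecisionTreeClassifier().fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))

# K-fold cross-validation: K experiments, each evaluating on 1 subset
# and training on the remaining K-1.
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)
print("5-fold mean accuracy:", scores.mean())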
Ethics
Machine learning poses a host of ethical questions. Systems trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[102] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[103][104] Responsible collection of data and documentation of the algorithmic rules used by a system are thus a critical part of machine learning. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[105][106] Other forms of ethical challenges, not related to personal biases, are seen more in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma of improving health care while also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals with a great tool to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until the personal biases mentioned previously, and these "greed" biases, are addressed.[107]

Hardware
Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units.[108] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[109] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[110][111]

Software
Software suites containing a variety of machine learning algorithms include the following: Free and open-source so