1,092 skills found · Page 19 of 37
roannav / 30DayChartChallenge2022: Data visualization charts made mostly with Python, matplotlib, pandas, and numpy, with some scipy, plotly, seaborn, holoviews, bokeh, streamlit, and R. Dataviz on Twitter. #30DayChartChallenge
MarieHofmann / Working With Climate Data In Python: A recipe book on how to handle climate data programmatically: how to work with Python and the climatological datasets of ECMWF, and how to generate different plots from them.
nitinkaushik01 / Plotly Dash Web App DataScience: A repository containing source code for building Python-based Plotly Dash web apps. Follow the tutorial to build beautiful, interactive, and dynamic analytics and data science dashboards. Various HTML components such as sliders, drop-downs, and check boxes can be used to alter the graphs, and the output can be viewed in the web browser.
dostonshernazarov / AnimalsClassificationModel: AnimalsClassificationModel is a Python app that classifies animals from images using a ResNet34 model. It features a Streamlit interface and utilizes Plotly for visualization.
emmanueladeniyi / Machine Learning For Materials Science Projects: Querying databases, organizing, and plotting data: query Pymatgen for properties such as Young's modulus and melting temperature, organize the data into pandas DataFrames and Python dictionaries, and plot with Plotly. Linear regression to predict material properties: fit a linear model with scikit-learn to predict Young's modulus, and visualize trends in the data and the goodness of fit of the linear model. Neural-network regression to predict material properties: use neural networks to perform non-linear, higher-order regression, visualize trends, and compare the non-linear model to the linear regression. Neural-network classification to predict crystal structures: use neural networks to classify elements according to their crystal structures.
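The linear-regression step described in the entry above can be sketched with scikit-learn. The melting-point and Young's-modulus values below are illustrative ballpark numbers for a few elements, not data queried from Pymatgen:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Illustrative stand-in data; in the project these rows would come from a
# Pymatgen query (approximate values for Al, Cu, Si, Fe, V, W).
df = pd.DataFrame({
    "melting_temp_K": [933, 1358, 1687, 1811, 2183, 3695],
    "youngs_modulus_GPa": [70, 130, 165, 211, 128, 411],
})

X = df[["melting_temp_K"]].to_numpy()
y = df["youngs_modulus_GPa"].to_numpy()

# Fit a simple one-feature linear model and report its goodness of fit (R^2).
model = LinearRegression().fit(X, y)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("R^2:", model.score(X, y))
```

The same pattern extends to the neural-network steps by swapping the estimator (e.g. `sklearn.neural_network.MLPRegressor`) while keeping the DataFrame plumbing unchanged.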
simorxb / SMC Pendulum: Example of how to implement SMC control on a pendulum model using the Super-Twisting Algorithm. This Python code plots the results of a model implemented using Collimator (https://collimator.ai).
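The Super-Twisting controller mentioned above can be sketched on a normalized pendulum. The plant model, sliding-surface slope, gains, and Euler integration below are illustrative assumptions, not taken from the Collimator model:

```python
import math

# Plant (assumed, normalized inertia): theta'' = -(g/L)*sin(theta) + u
g_over_L = 9.81
c = 2.0              # sliding-surface slope: s = dtheta + c*theta
k1, k2 = 1.5, 1.1    # Super-Twisting gains (illustrative values)
dt, steps = 1e-3, 5000

theta, dtheta, v = 1.0, 0.0, 0.0   # start 1 rad from rest; v is the ST integral term

for _ in range(steps):
    s = dtheta + c * theta
    sgn = math.copysign(1.0, s)
    # Feedback-linearizing term plus the continuous Super-Twisting law:
    # u = (g/L)sin(theta) - c*dtheta - k1*sqrt(|s|)*sgn(s) + v,  v' = -k2*sgn(s)
    u = g_over_L * math.sin(theta) - c * dtheta - k1 * math.sqrt(abs(s)) * sgn + v
    v += -k2 * sgn * dt
    ddtheta = -g_over_L * math.sin(theta) + u
    theta += dtheta * dt
    dtheta += ddtheta * dt

print("final theta:", theta, "final s:", dtheta + c * theta)
```

With these gains the sliding variable reaches a neighborhood of zero in finite time and the angle then decays along the surface; in a real study the gains would be tuned against the modeled disturbances.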
ajaybhatiya1234 / DEEP FACE Dectection01: Read the technical deep dive: https://www.dessa.com/post/deepfake-detection-that-actually-works

# Visual DeepFake Detection

In our recent [article](https://www.dessa.com/post/deepfake-detection-that-actually-works), we make the following contributions:

* We show that the model proposed in the current state of the art in video manipulation detection (FaceForensics++) does not generalize to real-life videos randomly collected from YouTube.
* We show the need for the detector to be constantly updated with real-world data, and propose an initial solution in hopes of solving deepfake video detection.

Our PyTorch implementation conducts extensive experiments to demonstrate that the datasets produced by Google and detailed in the FaceForensics++ paper are not sufficient to make neural networks generalize to detecting real-life face manipulation techniques. It also provides a current solution to this behavior, which relies on adding more data. Our PyTorch model is based on a ResNet18 pre-trained on ImageNet, which we finetune to solve the deepfake detection problem. We also conduct large-scale experiments using Dessa's open-source scheduler + experiment manager, [Atlas](https://github.com/dessa-research/atlas).

## Setup

## Prerequisites

To run the code, your system should meet the following requirements: RAM >= 32 GB, GPUs >= 1.

## Steps

1. Install [nvidia-docker](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).
2. Install [ffmpeg](https://www.ffmpeg.org/download.html) or `sudo apt install ffmpeg`.
3. Git clone this repository.
4. If you haven't already, install [Atlas](https://github.com/dessa-research/atlas).
5. Once you've installed Atlas, activate your environment if you haven't already, and navigate to your project folder.

That's it, you're ready to go!

## Datasets

Half of the dataset used in this project is from the [FaceForensics](https://github.com/ondyari/FaceForensics/tree/master/dataset) deepfake detection dataset.
To download this data, please make sure to fill out the [Google form](https://github.com/ondyari/FaceForensics/#access) to request access. The dataset that we collected from YouTube is available on [S3](https://deepfake-detection.s3.amazonaws.com/augment_deepfake.tar.gz) for download.

To automatically download and restructure both datasets, please execute:

```bash
bash restructure_data.sh faceforensics_download.py
```

Note: You need to have received the download script from the FaceForensics++ authors before executing the restructure script.

Note 2: We created `restructure_data.sh` to do a split that replicates our exact experiments, available in the UI above; please feel free to change the splits as you wish.

## Walkthrough

Before starting to train/evaluate models, we should first create the Docker image that we will run our experiments with. We have already prepared a dockerfile for this inside `custom_docker_image`. To create the Docker image, execute the following commands in a terminal:

```bash
cd custom_docker_image
nvidia-docker build . -t atlas_ff
```

Note: if you change the image name, please make sure you also modify line 16 of `job.config.yaml` to match the new docker image name. Inside `job.config.yaml`, please also modify the data path on the host from `/media/biggie2/FaceForensics/datasets/` to the absolute path of your `datasets` folder.
The folder containing your datasets should have the following structure:

```
datasets
├── augment_deepfake (2)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── base_deepfake (1)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── both_deepfake (3)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── precomputed (4)
└── T_deepfake (0)
    ├── manipulated_sequences
    │   ├── DeepFakeDetection
    │   ├── Deepfakes
    │   ├── Face2Face
    │   ├── FaceSwap
    │   └── NeuralTextures
    └── original_sequences
        ├── actors
        └── youtube
```

Notes:

* (0) is the dataset downloaded using the FaceForensics repo scripts.
* (1) is a reshaped version of the FaceForensics data matching the structure expected by the codebase; subfolders called `frames` contain frames collected using `ffmpeg`.
* (2) is the augmented dataset, collected from YouTube, available on S3.
* (3) is the combination of the base and augmented datasets.
* (4) `precomputed` is created automatically during training; it holds cached cropped frames.

Then, to run all the experiments we will show in the article to come, launch the script `hparams_search.py`:

```bash
python hparams_search.py
```

## Results

In the following pictures, the title of each subplot has the form `real_prob, fake_prob | prediction | label`.

#### Model trained on the FaceForensics++ dataset

For models trained on the paper dataset alone, we notice that the model only learns to detect the manipulation techniques mentioned in the paper and misses all the manipulations in real-world data.

#### Model trained on the YouTube dataset

Models trained on the YouTube data alone learn to detect real-world deepfakes, and also learn to detect the easy deepfakes in the paper dataset; these models, however, fail to detect any other type of manipulation (such as NeuralTextures).
#### Model trained on the Paper + YouTube datasets

Finally, models trained on the combination of both datasets learn to detect both real-world manipulation techniques and the other methods mentioned in the FaceForensics++ paper.

For a more in-depth explanation of these results, please refer to the [article](https://www.dessa.com/post/deepfake-detection-that-actually-works) we published. More results can be seen in the [interactive UI](http://deepfake-detection.dessa.com/projects).

## Help improve this technology

Please feel free to fork this work and keep pushing on it. If you also want to help improve the deepfake detection datasets, please share your real/forged samples at foundations@dessa.com.

## LICENSE

© 2020 Square, Inc. ATLAS, DESSA, the Dessa Logo, and others are trademarks of Square, Inc. All third-party names and trademarks are properties of their respective owners and are used for identification purposes only.
yirogue / Choropleth Maps In Python Using Plotly: The easiest way to build a choropleth map in Python using Plotly.
MeteoSwiss-APN / Pyflexplot: Python FLEXPART plotting.
LSEG-API-Samples / Example.RDP.Python.ESGGraphPlot: This example demonstrates how to retrieve ESG data from Refinitiv Data Platform (RDP) using Python with the RDP API in a Jupyter Notebook. The notebook lets the user create and share documents containing live code, narrative text, and visualizations, and plot graphs inline.
chlewissoil / TernaryPlotPy: A ternary plot in Python, written originally for soil texture representation.
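The core of a ternary plot is mapping a three-component composition (e.g. sand/silt/clay percentages) onto 2D triangle coordinates. A minimal sketch, assuming the common unit-triangle convention (this function is illustrative, not the repo's actual API):

```python
import math

def ternary_to_xy(sand, silt, clay):
    """Map a (sand, silt, clay) composition to 2D coordinates on a unit
    triangle with vertices sand=(0, 0), silt=(1, 0), clay=(0.5, sqrt(3)/2)."""
    total = sand + silt + clay          # normalize so components sum to 1
    a, b, c = sand / total, silt / total, clay / total
    # Barycentric-to-Cartesian: point = a*A + b*B + c*C with the vertices above
    x = b + 0.5 * c
    y = (math.sqrt(3) / 2) * c
    return x, y

print(ternary_to_xy(40, 40, 20))  # a loam-like composition, plotted inside the triangle
```

Scattering these (x, y) pairs with matplotlib, plus the triangle outline and grid lines, gives the classic soil-texture diagram.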
TeeOhh / TRECS: NLP text recommendation system built in Python using Gensim, spaCy, and Plotly Dash.
SuhaneeP / Wireless Communications: Contains MATLAB and Python code and plots for deriving inferences about various concepts in wireless communications.
oKermorgant / Log2plot: A module to generate data logs from C++ and plot them from Python.
bmrb-io / PyBMRB: BMRB data visualization tools using Python Plotly.
ckflight / RADAR 24GHZ NEURAL NETWORK: Python code to parse radar recordings and produce FFT video and waterfall plots, plus MATLAB object-type detection using a neural-network classification algorithm.
sadol / Brylog: A simple Linux program with a GUI, written in Python, for logging and plotting Brymen 257 multimeter output.
cnborja / Floquet Fourier Approach: Python scripts to calculate and plot the quasienergy spectra from the time-independent Floquet Hamiltonian of physical systems.
nickderobertis / Sensitivity: Sensitivity analysis in Python - gradient DataFrames and hex-bin plots.
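Sensitivity analysis of the kind this package automates can be sketched by evaluating a model over a grid of inputs and tabulating the results in a DataFrame. The toy NPV model and input grids below are illustrative assumptions, not the package's actual API:

```python
import itertools
import pandas as pd

def npv_toy(growth, discount):
    # Hypothetical model under test: 5 years of a growing 100-unit cash flow
    return sum(100 * (1 + growth) ** t / (1 + discount) ** t for t in range(1, 6))

# Evaluate the model over every combination of the two inputs
grid = itertools.product([0.01, 0.03, 0.05], [0.05, 0.08, 0.11])
df = pd.DataFrame(
    [(g, d, npv_toy(g, d)) for g, d in grid],
    columns=["growth", "discount", "npv"],
)

# Pivot into a two-way sensitivity table: rows = growth, columns = discount
table = df.pivot(index="growth", columns="discount", values="npv")
print(table.round(1))
```

The long-form `df` is what a "gradient DataFrame" style of output resembles, and the pivoted `table` is the usual starting point for styled or hex-bin sensitivity plots.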
paulgavrikov / Parallel Matplotlib Grid: This Python 3 module helps you speed up subplot generation in pseudo-parallel mode using matplotlib and multiprocessing. This can be useful when dealing with expensive preprocessing or plotting tasks, such as one violin plot per subplot.
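The pattern this module implements can be sketched with `multiprocessing.Pool`, where each worker renders one expensive figure off-screen and returns the encoded image. This is a minimal sketch of the idea, not the module's actual API; the worker function and data are illustrative:

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend so worker processes can render off-screen
import matplotlib.pyplot as plt
import numpy as np
from multiprocessing import Pool

def render_violin(seed):
    """Render one violin plot to PNG bytes -- the expensive per-plot task."""
    rng = np.random.default_rng(seed)
    fig, ax = plt.subplots()
    ax.violinplot([rng.normal(size=500) for _ in range(4)])
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)  # free the figure so workers don't leak memory
    return buf.getvalue()

if __name__ == "__main__":
    # Farm the plots out to worker processes; results come back in order
    with Pool(2) as pool:
        pngs = pool.map(render_violin, [0, 1, 2])
    print([len(p) for p in pngs])
```

Returning encoded bytes (rather than figure objects, which don't pickle) is what makes the process-based parallelism work; the parent can then stitch the images into a grid.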