59 skills found · Page 1 of 2
zju3dv / GVHMR: Code for "GVHMR: World-Grounded Human Motion Recovery via Gravity-View Coordinates", SIGGRAPH Asia 2024
xeokit / Xeokit SDK: 3D BIM IFC Viewer SDK for AEC engineering applications. Open Source JavaScript toolkit based on pure WebGL for top performance, real-world coordinates, and full double precision
TUMFTM / Racetrack Database: This repository contains center lines (x- and y-coordinates), track widths, and race lines for over 20 race tracks (F1 and DTM) all over the world
StefanJAuer / RaySAR: RaySAR is a 3D synthetic aperture radar (SAR) simulator which enables the generation of SAR image layers related to detailed 3D object models. Moreover, it can localize the 3D positions and surface intersection points related to reflected radar signals. In particular, RaySAR helps in understanding the nature of multiple signal reflections at man-made objects (e.g. building structures) or artificial shapes. Scene models with different levels of detail can be processed - from digital surface models (DSMs) to high-end 3D structures - which can be defined in relative or absolute world coordinates. RaySAR runs on Windows and Linux and is based on an adapted version of the open-source ray tracer POV-Ray.
JayabharathP / The Python Mega Course: Build 10 Real World Applications

The Python Mega Course is one of the top online Python courses, with over 100,000 enrolled students, and is targeted toward people with little or no previous programming experience. The course follows a modern teaching approach where students learn by doing. You will start Python from scratch by first creating simple programs. Once you learn the basics, you will be guided through creating 10 complex, real-world applications in Python 3, with easy video explanations and support from the course instructor. Some of the applications you will build during the course are database web apps, desktop apps, web scraping scripts, webcam object detectors, web maps, and more. These programs are not only great examples for mastering Python; you can also use any of them in a portfolio once you have built them. By buying the course you gain lifetime access to all its videos, coding exercises, quizzes, code notebooks, and the Q&A inside the course, where you can ask your questions and get an answer the same day. On top of that you are covered by the Udemy 30-day money-back guarantee, so you can easily return the course if you don't like it.

If you don't know anything about Python, do not worry! In the first two sections, you will learn Python basics such as functions, loops, and conditionals. If you already know the basics, then the first two sections can serve as a refresher. The other 22 sections focus entirely on building real-world applications. The applications you will build cover a wide range of interesting topics: web applications, desktop applications, database applications, web scraping, web mapping, data analysis, data visualization, computer vision, and object-oriented programming.

Specifically, the 10 Python applications you will build are:
- A program that returns English-word definitions
- A program that blocks access to distracting websites
- A web map visualizing volcanoes and population data
- A portfolio website
- A graphical desktop program with a database backend
- A webcam motion detector
- A web scraper of real estate data
- An interactive web graph
- A database web application
- A web service that converts addresses to geographic coordinates

To consider yourself a professional programmer you need to know how to make professional programs, and there's no other course that teaches you that, so join thousands of other students who have successfully applied their Python skills in the real world. Sign up and start learning Python today!

What you'll learn:
- Go from a total beginner to an advanced Python programmer
- Create 10 real-world Python programs (no useless programs)
- Solidify your skills with bonus practice activities throughout the course
- Create an app that translates English words
- Create a web-mapping app
- Create a portfolio website
- Create a desktop app for storing book information
- Create a webcam video app that detects objects
- Create a web scraper
- Create a data visualization app
- Create a database app
- Create a geocoding web app
- Create a website blocker
- Send automated emails
- Analyze and visualize data
- Use Python to schedule programs based on computer events
- Learn OOP (Object-Oriented Programming)
- Learn GUIs (Graphical User Interfaces)

Are there any course requirements or prerequisites?
- A computer (Windows, Mac, or Linux)
- No prior knowledge of Python is required
- No previous programming experience needed

Who this course is for:
- Those with no prior knowledge of Python
- Those who know Python basics and want to master Python
Masudbro94 / Python Hacked Mobile Phone

How you can Control your Android Device with Python (Kush, ITNEXT, Apr 15, 2021)

Introduction

A while back I was thinking of ways in which I could annoy my friends by spamming them with messages for a few minutes, and while doing some research I came across the Android Debug Bridge. In this quick guide I will show you how to interface with it using Python and how to create two quick scripts.

The ADB (Android Debug Bridge) is a command-line tool (CLI) which can be used to control and communicate with an Android device. You can do many things such as install apps, debug apps, find hidden features, and use a shell to interface with the device directly. To enable the ADB, your device must first have Developer Options unlocked and USB debugging enabled. To unlock Developer Options, go to your device's settings, scroll down to the About section, and find the build number of the current software on the device. Tap the build number 7 times and Developer Options will be enabled. Then you can go to the Developer Options panel in the settings and enable USB debugging from there. Now the only other thing you need is a USB cable to connect your device to your computer.

Here is what today's journey will look like:
- Installing the requirements
- Getting started
- The basics of writing scripts
- Creating a selfie timer
- Creating a definition searcher

Installing the requirements

The first of the two things we need to install is the ADB tool on our computer. It comes bundled with Android Studio, so if you already have that, do not worry. Otherwise, you can head over to the official docs; at the top of the page there should be instructions on how to install it. Once you have installed the ADB tool, you need to get the Python library we will use to interface with the ADB and our device. You can install the pure-python-adb library using pip install pure-python-adb.

Optional: to make things easier while developing our scripts, we can install an open-source program called scrcpy, which allows us to display and control our Android device from our computer using a mouse and keyboard. To install it, head over to the GitHub repo and download the correct version for your operating system (Windows, macOS, or Linux). If you are on Windows, extract the zip file into a directory and add that directory to your path, so you can launch the program from anywhere on your system just by typing scrcpy into a terminal window.

Getting started

Now that all the dependencies are installed, we can start up the ADB server and connect our device. First, connect your device to your PC with the USB cable; if USB debugging is enabled, a message should pop up asking whether it is okay for your PC to control the device, so simply answer yes. Then on your PC, open a terminal window and start the ADB server by typing adb start-server. This should print out the following messages:

* daemon not running; starting now at tcp:5037
* daemon started successfully

If you also installed scrcpy, you can start it by typing scrcpy into the terminal. However, this will only work if you added it to your path; otherwise you can open the executable by changing your terminal directory to the directory where you installed scrcpy and typing scrcpy.exe. Hopefully, if everything works out, you should be able to see your device on your PC and control it using your mouse and keyboard.

Now we can create a new Python file and check if we can find our connected device using the library. We import the AdbClient class and create a client object with it, then get a list of connected devices, and finally take the first device from the list (it is generally the only one there if only one device is connected).
The basics of writing scripts

The main way we are going to interface with our device is through the shell; with it we can send commands to simulate a touch at a specific location or a swipe from point A to point B. To simulate screen touches (taps), we first need to work out how the screen coordinates work. To help with this we can activate the pointer location setting in the Developer Options. Once it is activated, wherever you touch the screen, the coordinates for that point appear at the top. The coordinate system works like this: the top-left corner of the display has the coordinates (0, 0), and the bottom-right corner's coordinates are the largest possible values of x and y. [Diagram: how the coordinate system works]

Now that we know how the coordinate system works, we need to check out the different commands we can run. Here is a quick reference:

input tap x y
input text "hello world!"
input keyevent eventID

Here is a list of some common eventIDs:

3: home button
4: back button
5: call
6: end call
24: volume up
25: volume down
26: turn device on or off
27: open camera
64: open browser
66: enter
67: backspace
207: contacts
220: brightness down
221: brightness up
277: cut
278: copy
279: paste

If you want more, there is a much longer list of key event codes available online.

Creating a selfie timer

Now that we know what we can do, let's start doing it. In this first example I will show you how to create a quick selfie timer. To get started, we need to import our libraries and create a connect function to connect to our device. The connect function is identical to the previous example of how to connect to your device, except that here we return the device and client objects for later use. In our main code, we call the connect function to retrieve the device and client objects. From there we can open the camera app, wait 5 seconds, and take a photo. It's really that simple! As I said before, this simply replicates what you would usually do by hand, so the best way to plan a script is to perform the steps manually first and write them down. A sketch of the script follows.
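A minimal sketch of the selfie timer, assuming pure-python-adb; keyevent 27 (KEYCODE_CAMERA) opens the camera here, and volume-up (keyevent 24) is used as the shutter, although the exact shutter mapping varies between devices and camera apps:

```python
import time

from ppadb.client import Client as AdbClient


def connect(host="127.0.0.1", port=5037):
    """Connect to the ADB server and return the first attached device."""
    client = AdbClient(host=host, port=port)
    device = client.devices()[0]
    return device, client


if __name__ == "__main__":
    device, client = connect()

    # Open the camera app (keyevent 27 = KEYCODE_CAMERA)
    device.shell("input keyevent 27")

    # Give yourself 5 seconds to pose
    time.sleep(5)

    # Take the photo; volume-up acts as the shutter on many devices,
    # but the mapping depends on the camera app
    device.shell("input keyevent 24")
```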
Creating a definition searcher

We can do something a bit more complex now: ask the browser to find the definition of a particular word and take a screenshot so we can save it on our computer. The basic flow of this program is:

1. Open the browser
2. Click the search bar
3. Enter the search query
4. Wait a few seconds
5. Take a screenshot and save it

But before we get started, you need to find the coordinates of the search bar in your default browser; you can use the pointer location method I suggested earlier to find them easily. For me they were (440, 200).

To start, we import the same libraries as before and reuse the same connect function. In our main function we call connect, and we assign a variable to the x and y coordinates of our search bar. Notice that this is a string rather than a list or tuple, so we can easily incorporate the coordinates into our shell command. We also take an input from the user for the word they want defined, and add that query to a full sentence which will then be searched, so that we always get the definition. After that we open the browser and type our search query into the search bar, using eventID 66 to simulate a press of the enter key and execute the search. If you want, you can change the wait timings to suit your needs. Lastly, we take a screenshot using the screencap method on our device object and save it as a .png file; we must open the file in write-bytes mode because screencap returns bytes representing the image. If all went according to plan, you should now have a quick script that searches for the definition of a specific word. A sketch of the full script follows.
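A minimal sketch of that flow, again assuming pure-python-adb, the (440, 200) search-bar coordinates found above, and keyevent 64 to open the default browser; note that input text does not accept literal spaces, so they are escaped as %s:

```python
import time

from ppadb.client import Client as AdbClient


def connect(host="127.0.0.1", port=5037):
    """Connect to the ADB server and return the first attached device."""
    client = AdbClient(host=host, port=port)
    device = client.devices()[0]
    return device, client


if __name__ == "__main__":
    device, client = connect()
    search_bar = "440 200"  # x y of the browser search bar, kept as a string

    word = input("Which word do you want defined? ")
    # Build a full sentence so the top result is always a definition;
    # `input text` treats %s as a space
    query = f"definition of {word}".replace(" ", "%s")

    device.shell("input keyevent 64")        # open the browser
    time.sleep(2)
    device.shell(f"input tap {search_bar}")  # focus the search bar
    device.shell(f"input text {query}")      # type the query
    device.shell("input keyevent 66")        # press enter to search
    time.sleep(4)                            # wait for the page to load

    # screencap() returns the raw screenshot bytes, so write in binary mode
    with open(f"{word}.png", "wb") as f:
        f.write(device.screencap())
```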
Here it is working on my phone: [GIF: the definition searcher example running on the author's phone]

Final thoughts

Hopefully you have learned something new today; personally, I never even knew this was possible before I did some research into it. The cool thing is that you can do anything you would normally be able to do, and more, since it just simulates your own touches and actions! I hope you enjoyed the article, and thank you for reading!
cong / 2Dto3D: Using camera calibration and PnP to translate a 2D point (u, v) in the pixel coordinate system into a 3D point (X, Y, Z) in the world coordinate system.
EYDS-CA / Arcorelocation: A lightweight iOS framework for displaying AR content at real-world coordinates
drakh / Slovakia Gps Data: GeoJSON database of Slovak cities, district boundaries, and country boundaries. All coordinates are in World Mercator (EPSG:3395)
a-nasikun / FastSpectrum: We approximate the lowest eigenvalues and eigenfunctions of the Laplace-Beltrami operator for faster computation, lower storage, and quicker mapping to world coordinates. This program requires Eigen, libigl, and an eigensolver (either CUDA's cuSOLVER or MATLAB's eigs function).
1024jp / LensCalibrator: Convert coordinates in a picture to the real world based on multiple reference points in the picture.
zacbarton / Node Mercator Projection: Translate latitude and longitude values into 'world' coordinates as used by the Google Maps API.
savnani5 / Motion Planning Of A Differential Drive Robot: The differential-drive robot has an ESP32 board for wireless connectivity; a client-server network is established between the server laptop and the client ESP32 to transmit coordinates to the robot. An overhead camera is used to visually survey the obstacle course, and image processing is used to segment the obstacles and the robot from the captured images. The obstacle course is then used to build a visibility graph, and Dijkstra's shortest-path algorithm is used to find the shortest path from the robot's position to the goal position. The kinematic equations of the differential drive are used to drive the robot along the obtained path. Finally, a pygame simulation of the robot's movement is made to predict its behavior in the real world, and the robot is driven using this simulation.
nhz2 / XYZgeomag: Lightweight C++ header-only library for calculating the magnetic field on earth given geocentric cartesian coordinates, using the World Magnetic Model (WMM). Compatible with Arduino.
user29A / JPFITS: FITS file interaction written in Visual Studio C# .NET. JPFITS is not based on any other implementation and is written from the ground up, consistent with the FITS standard, and designed to interact with FITS files as object-oriented structures. See the github Wiki link below for more info.
Pixpipe / Quickvoxelcore: Toolkit to display brain volumes (NIfTI, MINC2) with WebGL2, featuring obliques, colormaps, overlay, world coordinates, multiple cameras, etc.
CogitoNTNU / Geoguessr AI: A CV-based AI model that predicts the location (coordinates) of a picture in the world.
ajaybhatiya1234 / DEEP FACE Dectection01

Read the technical deep dive: https://www.dessa.com/post/deepfake-detection-that-actually-works

# Visual DeepFake Detection

In our recent [article](https://www.dessa.com/post/deepfake-detection-that-actually-works), we make the following contributions:

* We show that the model proposed in the current state of the art in video manipulation detection (FaceForensics++) does not generalize to real-life videos randomly collected from YouTube.
* We show the need for the detector to be constantly updated with real-world data, and propose an initial solution in hopes of solving deepfake video detection.

Our PyTorch implementation conducts extensive experiments to demonstrate that the datasets produced by Google and detailed in the FaceForensics++ paper are not sufficient for making neural networks generalize to detect real-life face manipulation techniques. It also provides a current solution for such behavior, which relies on adding more data. Our PyTorch model is based on a ResNet18 pre-trained on ImageNet, which we fine-tune to solve the deepfake detection problem. We also conduct large-scale experiments using Dessa's open source scheduler + experiment manager [Atlas](https://github.com/dessa-research/atlas).

## Setup

## Prerequisites

To run the code, your system should meet the following requirements: RAM >= 32GB, GPUs >= 1

## Steps

0. Install [nvidia-docker](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
00. Install [ffmpeg](https://www.ffmpeg.org/download.html) or `sudo apt install ffmpeg`
1. Git clone this repository.
2. If you haven't already, install [Atlas](https://github.com/dessa-research/atlas).
3. Once you've installed Atlas, activate your environment if you haven't already, and navigate to your project folder.

That's it, you're ready to go!

## Datasets

Half of the dataset used in this project is from the [FaceForensics](https://github.com/ondyari/FaceForensics/tree/master/dataset) deepfake detection dataset. To download this data, please make sure to fill out the [google form](https://github.com/ondyari/FaceForensics/#access) to request access to the data. The dataset that we collected from YouTube is accessible on [S3](https://deepfake-detection.s3.amazonaws.com/augment_deepfake.tar.gz) for download.

To automatically download and restructure both datasets, please execute:

```
bash restructure_data.sh faceforensics_download.py
```

Note: You need to have received the download script from the FaceForensics++ people before executing the restructure script.

Note 2: We created `restructure_data.sh` to do a split that replicates our exact experiments available in the UI above; please feel free to change the splits as you wish.

## Walkthrough

Before starting to train/evaluate models, we should first create the docker image that we will be running our experiments with. To do so, we already prepared a dockerfile inside `custom_docker_image`. To create the docker image, execute the following commands in a terminal:

```
cd custom_docker_image
nvidia-docker build . -t atlas_ff
```

Note: if you change the image name, please make sure you also modify line 16 of `job.config.yaml` to match the docker image name.

Inside `job.config.yaml`, please modify the data path on host from `/media/biggie2/FaceForensics/datasets/` to the absolute path of your `datasets` folder.
The folder containing your datasets should have the following structure:

```
datasets
├── augment_deepfake (2)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── base_deepfake (1)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── both_deepfake (3)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── precomputed (4)
└── T_deepfake (0)
    ├── manipulated_sequences
    │   ├── DeepFakeDetection
    │   ├── Deepfakes
    │   ├── Face2Face
    │   ├── FaceSwap
    │   └── NeuralTextures
    └── original_sequences
        ├── actors
        └── youtube
```

Notes:

* (0) is the dataset downloaded using the FaceForensics repo scripts.
* (1) is a reshaped version of the FaceForensics data to match the structure expected by the codebase; subfolders called `frames` contain frames collected using `ffmpeg`.
* (2) is the augmented dataset, collected from YouTube, available on S3.
* (3) is the combination of both the base and augmented datasets.
* (4) `precomputed` will be automatically created during training. It holds cached cropped frames.

Then, to run all the experiments we will show in the article to come, you can launch the script `hparams_search.py` using:

```bash
python hparams_search.py
```

## Results

In the following pictures, the title for each subplot is in the form `real_prob, fake_prob | prediction | label`.

#### Model trained on FaceForensics++ dataset

For models trained on the paper dataset alone, we notice that the model only learns to detect the manipulation techniques mentioned in the paper and misses all the manipulations in real-world data (from YouTube).

#### Model trained on YouTube dataset

Models trained on the YouTube data alone learn to detect real-world deepfakes, and also learn to detect the easy deepfakes in the paper dataset. These models however fail to detect any other type of manipulation (such as NeuralTextures).

#### Model trained on Paper + YouTube dataset

Finally, models trained on the combination of both datasets learn to detect both real-world manipulation techniques and the other methods mentioned in the FaceForensics++ paper.

For a more in-depth explanation of these results, please refer to the [article](https://www.dessa.com/post/deepfake-detection-that-actually-works) we published. More results can be seen in the [interactive UI](http://deepfake-detection.dessa.com/projects).

## Help improve this technology

Please feel free to fork this work and keep pushing on it. If you also want to help improve the deepfake detection datasets, please share your real/forged samples at foundations@dessa.com.

## LICENSE

© 2020 Square, Inc. ATLAS, DESSA, the Dessa Logo, and others are trademarks of Square, Inc. All third party names and trademarks are properties of their respective owners and are used for identification purposes only.
gkw0010 / GTAV Head Dataset: GTA_Head is a large-scale virtual-world dataset for crowd counting and head detection, including 5,096 images labeled with 1,732,043 head bounding boxes. The pictures and target center coordinates are taken from the GCC Dataset. We provide information for each visible head, including xmin, ymin, length, and width, for training and evaluation of object detection models. There are 35 scenes in the dataset, with 24 scenes in the training set and 11 scenes in the test set. Compared with other datasets, GTA_Head provides pedestrian head annotations for a large number of complex scenes, including indoor shopping malls, subways, and outdoor stadiums and squares. Our dataset follows the standard of the MOTChallenge CVPR19 benchmark.
duboviy / Dist: :world_map: Python/C API extension module that computes the distance between two coordinates on the world map