MCV
Minecraft Computer Vision (MCV) is a tool that can recreate player locations, hitboxes and trajectories from Minecraft gameplay footage and screenshots with high precision, using PnP and other techniques.
Install / Use
/learn @rubyswolf/MCVREADME
What does MCV do?
Minecraft Computer Vision (MCV) is a tool for recreating Minecraft player poses from images and video. It lets you label points on an image of a Minecraft world by drawing lines, with optical edge detection/refinement assistance, pair those points with in-game world coordinates, and then solve the PnL problem (estimating camera pose and focal length from 2D-3D correspondences). This tool was created to assist in collecting evidence for the Dream MCC 11 Parkour cheating controversy, but it can be used for many other applications. For those reasons, it is open source and free for anyone to use. Learn how to use it by reading Controls.md.
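MCV's own solver is not reproduced here, but the core idea of estimating camera pose and focal length from 2D-3D correspondences can be sketched with a Direct Linear Transform in plain NumPy. All point values below are synthetic (a known camera projects known points so the recovered matrix can be checked); real usage would substitute labeled pixel points and measured block-corner coordinates.

```python
import numpy as np

def dlt_projection(world, pixels):
    """Estimate a 3x4 projection matrix P from >= 6 2D-3D correspondences
    with the Direct Linear Transform (homogeneous least squares via SVD)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, pixels):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)  # singular vector of the smallest singular value

# Synthetic check: project known non-coplanar points with a known camera
# (focal length 800 px, principal point (640, 360)), then recover P.
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[-2.0], [-1.0], [10.0]])])
world = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
homog = np.hstack([world, np.ones((len(world), 1))])
pixels = (P_true @ homog.T).T
pixels = pixels[:, :2] / pixels[:, 2:3]

P = dlt_projection(world, pixels)
reproj = (P @ homog.T).T
reproj = reproj[:, :2] / reproj[:, 2:3]
print("max reprojection error (px):", np.abs(reproj - pixels).max())
```

The recovered P can then be split back into intrinsics (including focal length) and extrinsics with an RQ decomposition; a production solver would also add nonlinear refinement on top of this linear estimate.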

Where does it run?
MCV targets both Python and the web, providing maximum flexibility: you can run it entirely online or entirely offline. The web version is a React component that can be embedded in any website, and the Python version is a standalone script that hosts a local Flask server. Both versions share the same core functionality and can be used to label points, solve PnL, and process the data to find positions on tick boundaries and export your findings. I have a live copy on my website https://dartjs.com/mcv
Applications
- Detecting cheating from gameplay footage by accurately tracking player movements, mainly useful for parkour
- Recreating maps more accurately by using an already recreated section to align camera pose and focal length, then using that alignment to construct the rest of the map very precisely without manually lining up your view.
- Map recreations are also helpful for seed cracking, and have been used before to find the seeds of PewDiePie's survival world as well as the world from the pack.png placeholder texture.
But what about tick alignment?
Minecraft linearly interpolates player movement between ticks, meaning a single screenshot does not have enough information to determine the location a player actually was on a tick boundary, just a sample along the line they were travelling on. However, if you have video and the frame rate is at least 40 FPS, this problem can be solved.
To solve this problem, you first need frame-level tick alignment. The best way to extract this data is through animated textures (like fire) and particle effects (like running particles), because those animations swap frames on each tick boundary; sudden changes in movement direction also mark tick boundaries.
From this, you can isolate which frames lie on a line segment between tick locations. By sampling all available points along a line (2 for 40 FPS+, 3 for 60 FPS+, etc.) for two consecutive line segments, you can find the intersection of those segments to estimate the tick-boundary location with subframe accuracy using a least-squares regression solver. Ideally, you solve many frames in one go, using the constraints that consecutive segments share endpoints and that the tick phase is constant.
I have not made a tool for this, so implementing it is up to you. If you want some help, you can contact me through my website https://dartjs.com
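If you do roll your own, the central geometric step can be sketched as follows: fit a line through the per-frame positions of each of two consecutive segments, then take their least-squares "intersection" (the midpoint of the common perpendicular) as the tick-boundary estimate. Everything here, including the sample positions, is illustrative, and it assumes you already know which frames belong to which segment.

```python
import numpy as np

def fit_line(points):
    """Fit an infinite 3D line through sample points: returns (centroid,
    unit direction) via total least squares (PCA on the centered points)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def segment_corner(c1, d1, c2, d2):
    """Least-squares 'intersection' of two 3D lines: solve for the closest
    points on each line, then return the midpoint between them."""
    A = np.stack([d1, -d2], axis=1)          # 3x2 system in parameters s, t
    (s, t), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return (c1 + s * d1 + c2 + t * d2) / 2

# Illustrative data: two runs of per-frame positions meeting at a corner.
corner = np.array([10.0, 64.0, 5.0])
seg_a = [corner - k * np.array([0.3, 0.0, 0.0]) for k in (1, 2, 3)]
seg_b = [corner + k * np.array([0.0, 0.0, 0.25]) for k in (1, 2, 3)]
est = segment_corner(*fit_line(seg_a), *fit_line(seg_b))
print(est)  # -> approximately [10, 64, 5]
```

A full solver would instead stack all segments into one system with a shared tick phase, as described above, rather than intersecting pairs independently.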
Finding the world coordinates
To find the world coordinates of a point, use debug mode (F3), stand on top of the corner, look straight down, and crouch to align your cursor with the corner. Your player coordinates are then the world coordinates of that corner. If you cannot stand directly on top of a block because something is above it, either break the obstruction and replace it later, or measure a different corner and shift it by the appropriate amount to get the coordinates you want. Remember that a reading like 24.9 really means 25; always round to the nearest integer. Note that the debug crosshair is red, green and blue (just like MCV), and each line points in the positive direction of its axis from their intersection.
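The rounding step can be captured in a tiny helper (the function name is mine, not part of MCV):

```python
def corner_from_f3(x, y, z):
    """Snap measured F3 player coordinates to the integer block-corner
    lattice; imperfect cursor alignment gives readings like 24.9 for 25."""
    return round(x), round(y), round(z)

print(corner_from_f3(24.9, 64.0, -102.97))  # -> (25, 64, -103)
```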
What's included in the repo?
- /examples
- /labeled: Some real examples of labeled points and their corresponding world coordinates, along with the original images and ground truths for testing the PnL system.
- /images: Some random images that can be used for testing the corner selector tool and edge refinement.
- /python_prototype: My first working prototype of the corner selector tool and PnP solver, written in Python using OpenCV.
Build
Build uses Node + esbuild.
Install dependencies:
npm install
Build targets:
npm run build:python
npm run build:web
The Python target is a standalone script that hosts a local Flask server using the official OpenCV Python library.
The web target is an embeddable React component that can be used in websites.
The web target depends on OpenCV.js, which is not bundled with the app. You need to build it yourself using build_opencv_js_single.py and host it on your website, or use a CDN version.
build:web outputs:
- dist/common/app.bundle.js
- dist/common/index.inline.html
- dist/web/index.html
- dist/web/MCV.tsx
build:python outputs:
- dist/common/app.bundle.js
- dist/common/index.inline.html
- dist/python/mcv_standalone.py
Build config is in build/config.json:
common:
- media_api: relative media API path (example: /mcv/media.json)
- data_api: relative data API path (example: /mcv/data)
- website_url: absolute site base URL for media/data APIs (example: https://dartjs.com)
web:
- site_root: root of your website project; all other web paths are relative to this
- component_dest: destination folder for MCV.tsx
- opencv_dest: destination folder for opencv.js (optional if you don't build it yourself)
- opencv_url: runtime URL used by the app to load OpenCV.js (example: /opencv.js)
python:
- reserved for Python-target settings
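Putting those keys together, a build/config.json might look like this. The API paths and site URL are the examples given above; the web paths are placeholders for your own project layout:

```json
{
  "common": {
    "media_api": "/mcv/media.json",
    "data_api": "/mcv/data",
    "website_url": "https://dartjs.com"
  },
  "web": {
    "site_root": "../my-site",
    "component_dest": "src/components",
    "opencv_dest": "public",
    "opencv_url": "/opencv.js"
  },
  "python": {}
}
```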
Media API behavior:
- Web target uses common.media_api and common.data_api directly (relative paths, works on localhost/dev).
- Python uses absolute paths constructed by combining common.website_url with common.media_api and common.data_api.
- If either API path is omitted, that API is disabled by default.
Media API schema docs are in APIs.md.
The Data API is not yet implemented; the project is being left here. You can, however, import (upload) and export (Ctrl+S) data to SVG files, which contain the image, JSON data in a script tag, and each collected piece of data as a vector-graphics layer.
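A minimal sketch of reading such an export back in Python. The assumption that the JSON payload is the text of the first script element is mine; inspect a real export before relying on this.

```python
import json
import xml.etree.ElementTree as ET

def load_mcv_svg(path):
    """Pull the embedded JSON out of an MCV SVG export. Assumes the first
    <script> element's text is the JSON payload (tag layout is a guess)."""
    root = ET.parse(path).getroot()
    script = root.find(".//{http://www.w3.org/2000/svg}script")
    if script is None:  # fall back for exports without a namespace
        script = root.find(".//script")
    return json.loads(script.text)
```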
Build OpenCV.js
OpenCV.js is built with the helper script in build/build_opencv_js_single.py.
Prereqs:
- Emscripten SDK installed and activated
- CMake
- Ninja
Typical Windows flow:
C:\dev\build\emsdk\emsdk_env.bat
python build\build_opencv_js_single.py --run
Useful flags:
- --clean to clean the OpenCV.js build dir
- --simd / --no-simd to toggle WASM SIMD
- --cmake-option=... to pass extra CMake options
Expected output (automatically picked up by the web build):
build/opencv_js_mcv_single/bin/opencv.js
Whitelist used for exported JS bindings:
build/opencv_js_mcv_whitelist.py
