CorridorKey
Perfect Green Screen Keys
Install / Use
/learn @nikopueringer/CorridorKeyREADME
https://github.com/user-attachments/assets/1fb27ea8-bc91-4ebc-818f-5a3b5585af08
When you film something against a green screen, the edges of your subject inevitably blend with the green background. This creates pixels that are a mix of your subject's color and the green screen's color. Traditional keyers struggle to untangle these colors, forcing you to spend hours building complex edge mattes or manually rotoscoping. Even modern "AI Roto" solutions typically output a harsh binary mask, completely destroying the delicate, semi-transparent pixels needed for a realistic composite.
I built CorridorKey to solve this unmixing problem.
You input a raw green screen frame, and the neural network completely separates the foreground object from the green screen. For every single pixel, even the highly transparent ones like motion blur or out-of-focus edges, the model predicts the true, un-multiplied straight color of the foreground element, alongside a clean, linear alpha channel. It doesn't just guess what is opaque and what is transparent; it actively reconstructs the color of the foreground object as if the green screen was never there.
No more fighting with garbage mattes or agonizing over "core" vs "edge" keys. Give CorridorKey a hint of what you want, and it separates the light for you.
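The unmixing described above can be written as a simple per-pixel equation. The sketch below is purely illustrative of the math (it is not CorridorKey's model code): an observed green screen pixel is modeled as `alpha * fg_straight + (1 - alpha) * screen`, and given an alpha and screen color, the straight foreground color can be solved for directly.

```python
# Illustrative per-channel unmixing math (NOT CorridorKey's actual code).
# Model: observed = alpha * fg_straight + (1 - alpha) * screen

def unmix(observed, screen, alpha):
    """Recover the straight (un-premultiplied) foreground value of one channel."""
    if alpha == 0.0:
        return 0.0  # fully transparent: no foreground contribution to recover
    return (observed - (1.0 - alpha) * screen) / alpha

def composite(fg_straight, alpha, bg):
    """Re-composite the straight foreground over a new background value."""
    return alpha * fg_straight + (1.0 - alpha) * bg

# A 50% transparent red edge pixel filmed over a zero-red green screen:
observed_r = 0.5 * 1.0 + 0.5 * 0.0        # observed red = 0.5
fg_r = unmix(observed_r, screen=0.0, alpha=0.5)  # recovers true red = 1.0
```

This is why a neural network is needed at all: in practice neither `alpha` nor the exact screen color is known per pixel, so the model has to predict both sides of this equation at once.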
Alert!
This is a brand-new release, and I'm sure you will discover many ways it can be improved! I invite everyone to help. Join us on the "Corridor Creates" Discord to share ideas, work, forks, etc! https://discord.gg/zvwUrdWXJm
If you want an easy-install, artist-friendly user interface version of CorridorKey, check out EZ-CorridorKey
This project uses uv to manage dependencies — it handles Python installation, virtual environments, and packages all in one step, so you don't need to worry about any of that. Just run the appropriate install script for your OS.
Naturally, I have not tested everything. If you encounter errors, please consider patching the code as needed and submitting a pull request.
Features
- Physically Accurate Unmixing: Clean extraction of straight color foreground and linear alpha channels, preserving hair, motion blur, and translucency.
- Resolution Independent: The engine dynamically scales inference to handle 4K plates while predicting with its native 2048x2048 high-fidelity backbone.
- VFX Standard Outputs: Natively reads and writes 16-bit and 32-bit Linear float EXR files, preserving true color math for integration in Nuke, Fusion, or Resolve.
- Auto-Cleanup: Includes a morphological cleanup system to automatically prune any tracking markers or tiny background features that slip through CorridorKey's detection.
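The auto-cleanup feature can be understood as pruning small isolated regions from the matte. The toy sketch below shows the idea with a pure-Python connected-component pass; the repository's actual implementation is morphological and this code, including the `min_area` parameter, is only an assumed illustration.

```python
# Toy matte cleanup sketch (NOT CorridorKey's implementation): drop white
# regions smaller than min_area pixels so stray tracking markers don't
# survive the key.
def prune_small_regions(mask, min_area):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one 4-connected component.
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_area:  # keep only large regions
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

plate = [
    [1, 1, 0, 0, 1],  # a 2x2 subject blob and a lone 1-pixel speck
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
cleaned = prune_small_regions(plate, min_area=3)
# The 2x2 blob survives; the lone speck at the top-right is pruned.
```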
Hardware Requirements
This project was designed and built on a Linux workstation (Puget Systems PC) equipped with an NVIDIA RTX Pro 6000 with 96GB of VRAM. The community is ACTIVELY optimizing it for consumer GPUs.
The most recent build should work on computers with 6-8 GB of VRAM, and it can run on most M1+ Mac systems with unified memory. Yes, it might even work on your old MacBook Pro. Let us know on the Discord!
- Windows Users (NVIDIA): To run GPU acceleration natively on Windows, your system MUST have NVIDIA drivers that support CUDA 12.8 or higher installed. If your drivers only support older CUDA versions, the installer will likely fall back to the CPU.
- AMD GPU Users (ROCm): AMD Radeon RX 7000 series (RDNA3) and RX 9000 series (RDNA4) are supported via ROCm on Linux. Windows ROCm support is experimental (torch.compile is not yet functional). See the AMD ROCm Setup section below.
- GVM (Optional): Requires approximately 80 GB of VRAM and utilizes massive Stable Video Diffusion models.
- VideoMaMa (Optional): Natively requires a massive chunk of VRAM as well (originally 80GB+). While the community has tweaked the architecture to run at less than 24GB, those extreme memory optimizations have not yet been fully implemented in this repository.
- BiRefNet (Optional): Lightweight AlphaHint generator option.
Because GVM and VideoMaMa have huge model file sizes and extreme hardware requirements, installing their modules is completely optional. You can always provide your own Alpha Hints generated from your editing program, BiRefNet, or any other method. The better the AlphaHint, the better the result.
Getting Started
1. Installation
This project uses uv to manage Python and all dependencies. uv is a fast, modern replacement for pip that automatically handles Python versions, virtual environments, and package installation in a single step. You do not need to install Python yourself — uv does it for you.
For Windows Users (Automated):
- Clone or download this repository to your local machine.
- Double-click `Install_CorridorKey_Windows.bat`. This will automatically install uv (if needed), set up your Python environment, install all dependencies, and download the CorridorKey model.
  Note: If this is the first time installing uv, any terminal windows you already had open won't see it. The installer script handles the current window automatically, but if you open a new terminal and get "'uv' is not recognized", just close and reopen that terminal.
- (Optional) Double-click `Install_GVM_Windows.bat` and `Install_VideoMaMa_Windows.bat` to download the heavy optional Alpha Hint generator weights.
For Linux / Mac Users (Automated):
- Clone or download this repository to your local machine.
- Open a terminal and type `bash` followed by a space.
- Drag and drop `Install_CorridorKey_Linux_Mac.sh` into the terminal, then press Enter.
- (Optional) Repeat the previous step, but drag and drop `Install_GVM_Linux_Mac.sh` and `Install_VideoMaMa_Linux_Mac.sh` to download the heavy optional Alpha Hint generator weights.
For Linux / Mac Users (Manual):
- Clone or download this repository to your local machine.
- Install uv if you don't have it:
curl -LsSf https://astral.sh/uv/install.sh | sh - Install all dependencies (uv will download Python 3.10+ automatically if needed):
For AMD ROCm setup, see the AMD ROCm Setup section below.uv sync # CPU/MPS (default — works everywhere) uv sync --extra cuda # CUDA GPU acceleration (Linux/Windows) uv sync --extra mlx # Apple Silicon MLX acceleration - Download the Models:
  - CorridorKey v1.0 Model (~300MB): Downloads automatically on first run. If no `.pth` file is found in `CorridorKeyModule/checkpoints/`, the engine fetches it from CorridorKey's HuggingFace and saves it as `CorridorKey.pth`. No manual download needed.
  - GVM Weights (Optional): HuggingFace: geyongtao/gvm
    - Download using the CLI:
      ```
      uv run hf download geyongtao/gvm --local-dir gvm_core/weights
      ```
  - VideoMaMa Weights (Optional): HuggingFace: SammyLim/VideoMaMa
    - Download the VideoMaMa fine-tuned weights:
      ```
      uv run hf download SammyLim/VideoMaMa --local-dir VideoMaMaInferenceModule/checkpoints/VideoMaMa
      ```
    - VideoMaMa also requires the Stable Video Diffusion base model (VAE + image encoder only, ~2.5GB). Accept the license at stabilityai/stable-video-diffusion-img2vid-xt, then:
      ```
      uv run hf download stabilityai/stable-video-diffusion-img2vid-xt \
        --local-dir VideoMaMaInferenceModule/checkpoints/stable-video-diffusion-img2vid-xt \
        --include "feature_extractor/*" "image_encoder/*" "vae/*" "model_index.json"
      ```
    - VideoMaMa is an amazing project, please go star their repo and show them some support!
2. How it Works
CorridorKey requires two inputs to process a frame:
- The Original RGB Image: The green screen footage to be processed. The engine expects the sRGB color gamut (interchangeable with the Rec. 709 gamut) and can ingest either an sRGB or linear gamma curve.
- A Coarse Alpha Hint: A rough black-and-white mask that generally isolates the subject. This does not need to be precise. It can be generated by you with a rough chroma key or AI roto.
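For reference, "sRGB gamma" versus "linear gamma" refers to the standard sRGB transfer curve (IEC 61966-2-1). The decode for a single channel value in [0, 1] is:

```python
# Standard sRGB-to-linear decode (IEC 61966-2-1 piecewise curve).
# Shown only to clarify what "linear gamma" means for input footage;
# CorridorKey handles this conversion for you when ingesting frames.
def srgb_to_linear(c):
    if c <= 0.04045:
        return c / 12.92                     # linear toe segment
    return ((c + 0.055) / 1.055) ** 2.4      # power-law segment

mid_gray = srgb_to_linear(0.5)  # ~0.214 in linear light
```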
I've had the best results using GVM or VideoMaMa to create the AlphaHint, so I've repackaged those projects and integrated them here as optional modules inside clip_manager.py. Here is how they compare:
- GVM: Completely automatic and requires no additional input. It works exceptionally well for people, but can struggle with inanimate objects.
- VideoMaMa: Requires you to provide a rough VideoMamaMaskHint (often drawn by hand or AI) telling it what you want to key. If you choose to use this, place your mask hint in the `VideoMamaMaskHint/` folder that the wizard creates for your shot. VideoMaMa results are spectacular and can be controlled more easily than GVM thanks to this mask hint.
- Please go show the creators of these projects some love and star their repos: VideoMaMa and GVM
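If you'd rather generate your own coarse hint, even a trivial greenness threshold can work as a starting point. The toy function below flags a pixel as screen when green clearly dominates red and blue; the threshold value is an illustrative choice, not a project default.

```python
# Toy chroma-key hint generator: output is a coarse 0/1 hint, not a
# finished matte -- exactly the kind of rough input CorridorKey expects.
def coarse_alpha_hint(pixel, threshold=0.2):
    r, g, b = pixel
    greenness = g - max(r, b)        # how much green dominates the pixel
    return 0.0 if greenness > threshold else 1.0

screen  = coarse_alpha_hint((0.1, 0.9, 0.1))  # green screen pixel -> 0.0
subject = coarse_alpha_hint((0.8, 0.4, 0.3))  # subject pixel -> 1.0
```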
Perhaps in the future, I will implement other generators for the AlphaHint! In the meantime, the better your Alpha Hint, the better CorridorKey's final result will be. Experiment with different amounts of mask erosion or feathering. The model was trained on coarse, blurry, eroded masks, and is exceptional at filling in details from the hint. However, it is generally less effective at subtracting unwanted mask details if your Alpha Hint is too generous and covers areas that should remain background.
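The erosion worth experimenting with can be sketched as a single morphological pass. This toy version uses a 4-connected neighborhood on a binary mask (real pipelines would use an image library with a configurable kernel); note that in this sketch, pixels touching the frame edge are always eroded.

```python
# Toy binary erosion: a pixel survives only if it and all four of its
# neighbors are foreground. Applying this to an Alpha Hint pulls the mask
# inward, which the model was trained to tolerate (it fills detail back in).
def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbors = [(y, x), (y-1, x), (y+1, x), (y, x-1), (y, x+1)]
            if all(0 <= ny < h and 0 <= nx < w and mask[ny][nx]
                   for ny, nx in neighbors):
                out[y][x] = 1
    return out

solid = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
shrunk = erode(solid)  # only the center pixel survives one pass
```

Running `erode` more than once erodes the hint further; a blur applied afterward gives the feathering mentioned above.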
