Lunar
Lunar is a neural network aim assist that uses real-time object detection accelerated with CUDA on Nvidia GPUs.
About
Lunar can be modified to work with a variety of FPS games; however, it is currently configured for Fortnite. Aside from being general-purpose, the main advantage of Lunar is that it does not tamper with the memory of other processes.
The basis of Lunar's player detection is the YOLOv5 architecture written in PyTorch.
A demo video (outdated) can be found here.
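YOLOv5 returns bounding boxes for any detected players; an aim assist then has to choose one to track. A minimal, hypothetical sketch of that selection step — picking the box whose center is nearest the crosshair — follows; the function name and box format are assumptions for illustration, not Lunar's actual code:

```python
import math

def select_target(detections, screen_center):
    """Pick the detected bounding box whose center is closest to the crosshair.

    detections: list of (x1, y1, x2, y2) boxes in screen coordinates.
    screen_center: (cx, cy) crosshair position.
    Returns the chosen box center, or None if nothing was detected.
    """
    cx, cy = screen_center
    best, best_dist = None, float("inf")
    for x1, y1, x2, y2 in detections:
        bx, by = (x1 + x2) / 2, (y1 + y2) / 2  # box center
        dist = math.hypot(bx - cx, by - cy)    # distance to crosshair
        if dist < best_dist:
            best, best_dist = (bx, by), dist
    return best
```

In a real loop this would run on every inference pass, feeding the chosen center into the mouse-movement code.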

Installation
- Install Python 3.8 or later.
- Navigate to the root directory and use the package manager pip to install the necessary dependencies:
pip install -r requirements.txt
Usage
python lunar.py
To update sensitivity settings:
python lunar.py setup
To collect image data for annotating and training:
python lunar.py collect_data
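The three invocations above suggest a simple subcommand dispatch on the first argument. A hypothetical sketch of that entry point (the returned strings are placeholders, not Lunar's actual behavior):

```python
import sys

def main(argv):
    """Dispatch lunar.py subcommands; mode names mirror the usage above."""
    mode = argv[1] if len(argv) > 1 else "run"
    if mode == "run":
        return "starting aim assist"
    elif mode == "setup":
        return "updating sensitivity settings"
    elif mode == "collect_data":
        return "collecting screenshots for annotation"
    else:
        return f"unknown mode: {mode}"

if __name__ == "__main__":
    print(main(sys.argv))
```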
Issues
- The method of mouse movement (SendInput) is slow. For this reason, the crosshair often lags behind a moving detection. This problem can be lessened by increasing the pixel_increment (e.g. to 4) so fewer calls to that function are made.
- False positives can also happen under certain lighting conditions.
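The first trade-off above can be quantified: if each SendInput call moves the cursor by pixel_increment pixels, then for a fixed distance the number of calls scales inversely with the increment. A small illustrative sketch (moves_needed is a hypothetical helper, not part of Lunar):

```python
def moves_needed(distance_px, pixel_increment):
    """Number of SendInput-style calls needed to cover distance_px pixels
    when each call moves the cursor by pixel_increment pixels."""
    return -(-distance_px // pixel_increment)  # ceiling division

# Raising pixel_increment from 1 to 4 cuts the call count for the
# same distance to a quarter, at the cost of coarser movement.
```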
Contributing
Pull requests are welcome. If you have any suggestions, questions, or find any issues, please open an issue and provide some detail. If you find this project interesting or helpful, please star the repository.
License
This project is distributed under the GNU General Public License v3.0.