AimAide
Object Detection-based external aim aiding for CSGO
AimAide for CSGO ~~and soon CS2~~
*UPDATE!* <br>As long as raw input cannot be disabled in CS2, the mouse mover does not work.<br>The models themselves still seem to work, though.
External real-time object-detection-based aim aiding powered by <b>YOLOv8</b>, <b>CUDA</b> and <b>TensorRT</b><br> Follow on Twitter for further development updates
<img src="/docs/header_cts.jpg"><br> Video Demo here <br>
<h3>Latest changes/additions</h3>
<b>13/09/23</b><br>
-fixed initial TensorRT engine building when starting for the first time from the webui launcher<br>
-pinned ultralytics to the required version 8.0.58, because there are some issues with newer versions<br>
<br>
<b>25/07/23</b><br>
-added a webui launcher for comfortably changing settings<br>
just run launcher.py from your command line<br>
<hr>
<img src="/docs/launcher.png"><br>
<hr>
<b>16/05/23</b><br>
-new model optimized for heads on Mirage (high headshot rate, --minconf 0.6 recommended)<br>
models/yolov8s_csgo_mirage-320-v62-pal-gen-bg-head.pt<br>
-added flickieness argument to control how fast the mouse mover flicks to the target<br>
<br>
<b>09/05/23</b><br>
-fixed a bug in the d3d_np grabber (mixed-up color channels), code improvements, removed engines from the repo (engines are built locally)<br>
-d3d_gpu is disabled and needs to be rewritten<br>
<br>
<b>16/04/23</b><br>
-engine builder added to circumvent TensorRT incompatibilities<br>
(by https://github.com/triple-Mu/YOLOv8-TensorRT)<br>
<br>
<b>15/04/23</b><br>
-introduced 320x320 input models which drastically increase fps with YOLO and TensorRT<br>

<h3>Supported Maps</h3>
* Mirage

<h3>Road Map</h3>
Models for CS2 and support for additional maps<br>
Human-like aim methods (like WindMouse or AI-based)

<h3>Features</h3>
YOLOv8 models trained on Mirage with various CT and T agents (body and head).<br>
A simple, smooth linear mouse mover that locks onto the target closest to the crosshair.<br>

<h3>Hardware Requirements</h3>
To get this to work, the detector has to run at 30 fps at least.<br>
An NVIDIA GTX 1070 runs at 30 fps on a 640x640 model or 60 fps on a 320x320 model with TensorRT.<br>
An NVIDIA RTX 4090 should max out at ~120 fps on a 640x640 model.
(also with TensorRT)<br>

<h3>Installation</h3>
1) NVIDIA CUDA Toolkit >= 11.7<br>
2) Python 3.10.6 environment<br>
3) Corresponding PyTorch CUDA package -> https://pytorch.org/get-started/locally/<br>
4) pip install -r requirements.txt<br>
<br>
<b>Optional but recommended:</b><br>
5) NVIDIA TensorRT >= 8.4 -> https://developer.nvidia.com/tensorrt -> https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html<br>
<br>
The speedup for bigger models with TensorRT is significant.<br>
<s>That's why all models bigger than medium will only be released as TensorRT engines</s>.<br>
Models are released as YOLO weights and built locally into TensorRT engines on first startup.<br>
This is due to TensorRT version incompatibilities.<br>
<img src="/docs/TensorRT_Speedup.png">

<h3>Usage</h3>
I) Disable Windows mouse acceleration<br>
II) Disable raw input in CSGO<br>
III) Cap max_fps in CSGO at your native display refresh rate<br>
<br>
1) Run either run_tensorrt.py or run_yolo.py<br>
2) Selective detection can be activated by running with the argument <b>-side 'your side'</b> (t, ct or dm for detecting all)<br>
If you want to change the detection mode while the script is running, simply type 't', 'ct' or 'dm' into the console and hit enter<br>
<br>
<img src="/docs/side_switch.png"><br>
3) Depending on your hardware, choose from 3 different models (nano, small, medium):<br>
nano (highest framerate, lowest detection performance),<br>
medium (lowest framerate, best detection performance)<br>
4) Run in benchmark mode first to see what framerate you get (over 60 fps: increase sensitivity mode)<br>
5) Adjust mouse sensitivity in CS and/or the sensitivity mode of AimAide

<h3>Benchmark mode</h3>
Run run_tensorrt.py or run_yolo.py with the argument <b>--benchmark</b> to start in benchmark mode.<br>
This runs the detector in view-only and detect-all mode for 300 iterations.<br>
Switch to CSGO and run/look around. At the end, the average fps of the detector during that time is displayed.
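The "lock onto the target closest to the crosshair" behaviour described under Features, together with the linear mouse mover controlled by --flickieness, can be sketched roughly as follows. Function and variable names here are illustrative, not the project's actual API:

```python
import math

def closest_target(detections, screen_center):
    """Pick the detection whose box center is nearest the crosshair.

    detections: list of (x1, y1, x2, y2) boxes in screen coordinates.
    Returns the center point of the closest box, or None if empty.
    """
    best, best_dist = None, float("inf")
    for x1, y1, x2, y2 in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        d = math.hypot(cx - screen_center[0], cy - screen_center[1])
        if d < best_dist:
            best, best_dist = (cx, cy), d
    return best

def linear_step(current, target, flickieness=4):
    """One linear mouse step toward the target.

    Higher flickieness covers a larger fraction of the remaining
    distance per frame (4 = slow, 16 = very flicky).
    """
    frac = min(flickieness / 16, 1.0)
    return (current[0] + (target[0] - current[0]) * frac,
            current[1] + (target[1] - current[1]) * frac)
```

Calling linear_step once per detector frame yields the smooth linear movement; at flickieness 16 the cursor snaps to the target in a single step.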
<br><br>
<img src="/docs/benchmark_mode1.png">

<h3>Arguments</h3>

| arg | default | Description |
| ---- | --- | --- |
| <sub>--input_size</sub> | <sub>320</sub> | <sub>dimension of the input image for the detector</sub> |
| <sub>--grabber</sub> | <sub>'win32'</sub> | <sub>select screen grabber (win32, d3d_gpu, d3d_np) </sub> |
| <sub>--model</sub> | <sub>models/yolov8s_csgo_mirage-320-v41-al-gen-bg</sub>| <sub>selected engine (TensorRT) or weights (YOLOv8)</sub>|
| <sub>--side </sub> | <sub>'dm'</sub> | <sub>which side you are on, 'ct', 't' or 'dm' (deathmatch)</sub> |
| <sub>--minconf </sub> | <sub>0.75</sub> | <sub>minimum detection confidence</sub> |
| <sub>--sensitivity</sub> | <sub>1</sub> | <sub>sensitivity mode, increase when having a high framerate or chaotic aim</sub> |
| <sub>--flickieness</sub> |<sub>4</sub> | <sub>how flicky the mouse mover behaves (4 is slow, 16 is very flicky)</sub> |
| <sub>--visualize</sub> |<sub>False</sub> | <sub>show live detector output in a new window</sub> |
| <sub>--view_only </sub> |<sub>False</sub> | <sub>run in view only mode (disarmed)</sub> |
| <sub>--benchmark</sub> | <sub>False</sub> | <sub>launch benchmark mode</sub> |
| <sub>--no_engine_check</sub> | <sub>False</sub> | <sub>skips engine checking and building (run_tensorrt.py only)</sub> |
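A minimal sketch of how the flags above might be wired up with argparse. Flag names and defaults are taken from the table; the parser itself is an illustration, not the project's actual code:

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="AimAide runtime options (sketch)")
    p.add_argument("--input_size", type=int, default=320,
                   help="dimension of the detector input image")
    p.add_argument("--grabber", default="win32",
                   choices=["win32", "d3d_gpu", "d3d_np"],
                   help="screen grabber backend")
    p.add_argument("--model",
                   default="models/yolov8s_csgo_mirage-320-v41-al-gen-bg",
                   help="TensorRT engine or YOLOv8 weights")
    p.add_argument("--side", default="dm", choices=["ct", "t", "dm"],
                   help="which side you are on")
    p.add_argument("--minconf", type=float, default=0.75,
                   help="minimum detection confidence")
    p.add_argument("--sensitivity", type=int, default=1,
                   help="sensitivity mode")
    p.add_argument("--flickieness", type=int, default=4,
                   help="4 is slow, 16 is very flicky")
    p.add_argument("--visualize", action="store_true",
                   help="show live detector output in a new window")
    p.add_argument("--view_only", action="store_true",
                   help="view-only mode (disarmed)")
    p.add_argument("--benchmark", action="store_true",
                   help="launch benchmark mode")
    p.add_argument("--no_engine_check", action="store_true",
                   help="skip engine checking/building (run_tensorrt.py only)")
    return p

args = build_parser().parse_args(["--side", "ct", "--minconf", "0.6"])
```

Flags not given on the command line fall back to the defaults in the table, so e.g. `args.input_size` is 320 here.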
