- This project is actively developed thanks to the people who support it on Boosty and Patreon. By providing active support, you receive enhanced AI models.
⚠️ WARNING: TensorRT version 10 does not support the Pascal architecture (10 series graphics cards). Use only with GPUs of at least the 20 series.
-
Download CUDA
- Download and install CUDA 12.4.
-
Download the Latest Release
- Download the latest release from here (v2.6).
-
Unpack Aimbot
- Extract the contents of the Aimbot archive.
-
First Launch and Model Export
- Run `ai.exe` and wait until the `standard.onnx` model is exported; this usually takes no more than two minutes.
- To export another model, simply place it in `.onnx` format in the `models` folder. Then, in the AI tab, select this model, and it will be exported automatically (see the example below).
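A minimal sketch of adding a custom model, assuming `ai.exe` and the `models` folder sit in the extracted Aimbot directory; the model file name here is hypothetical:

```powershell
# Hypothetical model file; any detector in .onnx format works the same way.
copy C:\Downloads\custom_model.onnx .\models\
# Start the aimbot, then select custom_model.onnx in the AI tab; it will be exported automatically.
.\ai.exe
```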
-
Settings
- After successfully exporting the model, you can configure the program.
- All settings are available in the overlay (default key is `Home`).
- A list of settings can be found in the config documentation.
-
Controls
- Right Mouse Button: Aim at the detected target.
- F2: Exit the program.
- F3: Toggle the aiming pause.
- F4: Reload config.
- Home: Show overlay.
ℹ️ NOTE: This guide is intended for advanced users. If you encounter errors while building the modules, please report them on the Discord server.
-
Install Visual Studio 2019 Community
- Download and install from the official website.
-
Install Windows SDK
- Ensure you have Windows SDK version 10.0.26100.0 installed.
-
Install CUDA and cuDNN
- CUDA 12.4: download from the NVIDIA CUDA Toolkit page.
- cuDNN 9.1: available on the NVIDIA cuDNN archive website.
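To confirm the CUDA toolkit is installed and on your PATH, you can check the reported version from a terminal:

```powershell
# Should report release 12.4; if the command is not found, re-check the CUDA installation and PATH.
nvcc --version
```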
-
Set Up Project Structure
- Create a folder named `modules` at `sunone_aimbot_cpp/sunone_aimbot_cpp/modules`.
-
Build OpenCV with CUDA Support
- Download and install CMake and CUDA 12.4.
- Download OpenCV.
- Download OpenCV Contrib.
- Create new directories: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/` and `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build`.
- Extract `opencv-4.10.0` to `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv-4.10.0` and `opencv_contrib-4.10.0` to `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.10.0`.
- Extract cuDNN to `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn`.
- Open CMake and set the source code location to `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv-4.10.0`.
- Set the build directory to `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build`.
- Click `Configure`. (Some options only appear after another configuration pass. For example, to set the `CUDNN_LIBRARY` paths, you first need to enable the `WITH_CUDA` option and click `Configure` again.)
- Check or configure (a command-line equivalent is sketched after this step):
  - `WITH_CUDA`
  - `WITH_CUBLAS`
  - `ENABLE_FAST_MATH`
  - `CUDA_FAST_MATH`
  - `WITH_CUDNN`
  - `CUDNN_LIBRARY` = `<full path>sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/lib/x64/cudnn.lib`
  - `CUDNN_INCLUDE_DIR` = `<full path>sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/include`
  - `CUDA_ARCH_BIN` = visit the CUDA Wiki to find your NVIDIA GPU architecture. For example, for an RTX 3080 Ti, enter `8.6`.
  - `OPENCV_DNN_CUDA`
  - `OPENCV_EXTRA_MODULES_PATH` = `<full path>sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.10.0/modules`
  - `BUILD_opencv_world`
- Uncheck:
  - `WITH_NVCUVENC`
  - `WITH_NVCUVID`
- Click `Configure` again and ensure that the flags for `CUDA_FAST_MATH` and `ENABLE_FAST_MATH` are not reset.
- Click `Generate` to generate the C++ solution.
- Close CMake and open `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/OpenCV.sln`, or click `Open Project` in CMake.
- Switch the build configuration to `x64` and `Release`.
- Open the `CMakeTargets` folder in the solution.
- Right-click on `ALL_BUILD` and select `Build`. (Building the project can take up to two hours.)
- After building, right-click on `INSTALL` and select `Build`.
- Verify that the built files exist in the following folders:
  - `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/include/opencv2` - contains `.hpp` and `.h` files.
  - `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/x64/vc16/bin` - contains `.dll` files.
  - `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/x64/vc16/lib` - contains `.lib` files.
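If you prefer the command line over the CMake GUI, the configuration above can be approximated with a single `cmake` invocation. This is only a sketch under the directory layout from this guide: `C:/path/to` is a placeholder for your repository location, the generator assumes Visual Studio 2019, and `CUDA_ARCH_BIN=8.6` assumes an RTX 3080 Ti; adjust paths and architecture for your setup.

```powershell
# Run from sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv (paths per this guide; adjust as needed).
cmake -S opencv-4.10.0 -B build -G "Visual Studio 16 2019" -A x64 `
  -D WITH_CUDA=ON `
  -D WITH_CUBLAS=ON `
  -D ENABLE_FAST_MATH=ON `
  -D CUDA_FAST_MATH=ON `
  -D WITH_CUDNN=ON `
  -D CUDNN_LIBRARY="C:/path/to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/lib/x64/cudnn.lib" `
  -D CUDNN_INCLUDE_DIR="C:/path/to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/include" `
  -D CUDA_ARCH_BIN=8.6 `
  -D OPENCV_DNN_CUDA=ON `
  -D OPENCV_EXTRA_MODULES_PATH="C:/path/to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.10.0/modules" `
  -D BUILD_opencv_world=ON `
  -D WITH_NVCUVENC=OFF `
  -D WITH_NVCUVID=OFF

# Build and install the Release configuration (roughly equivalent to building ALL_BUILD, then INSTALL).
cmake --build build --config Release --target INSTALL
```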
-
Download Required Libraries
- Boost
- TensorRT from Yandex or NVIDIA Developer
-
Extract Libraries
- Extract the downloaded libraries into the respective directories:
  - `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/boost_1_82_0`
  - `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/TensorRT-10.6.0.26`
-
Compile Boost Libraries
- Navigate to the Boost directory:
  `cd sunone_aimbot_cpp/sunone_aimbot_cpp/modules/boost_1_82_0`
- Run the bootstrap script (from PowerShell):
  `bootstrap.bat vc142`
- After successful bootstrapping, build Boost:
  `b2.exe --build-type=complete link=static runtime-link=static threading=multi variant=release`
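After `b2.exe` finishes, the static libraries should land in Boost's default staging folder; a quick check, assuming the default `stage\lib` output location:

```powershell
# Expect a set of libboost_*.lib files; if the folder is empty, re-run b2.exe and check its output for errors.
dir .\stage\lib\*.lib
```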
-
Download GLFW binaries (v3.4)
- Download the GLFW Windows pre-compiled binaries.
- Extract the downloaded binaries into:
  `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/glfw-3.4.bin.WIN64`
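At this point the `modules` folder should roughly match the layout below (folder names taken from the steps above; your exact versions may differ):

```
sunone_aimbot_cpp/sunone_aimbot_cpp/modules/
├── boost_1_82_0/
├── cudnn/
├── glfw-3.4.bin.WIN64/
├── opencv/
│   ├── build/
│   ├── opencv-4.10.0/
│   └── opencv_contrib-4.10.0/
└── TensorRT-10.6.0.26/
```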
-
Configure Project Settings
- Open the project in Visual Studio.
- Ensure all library paths are correctly set in Project Properties under Library Directories.
- Go to NuGet packages and install `Microsoft.Windows.CppWinRT`.
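If you prefer the Package Manager Console over the NuGet UI, the same package can be installed with the standard NuGet command (with the aimbot project selected as the default project):

```powershell
Install-Package Microsoft.Windows.CppWinRT
```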
-
Verify CUDA Integration
- Right-click on the project in Visual Studio.
- Navigate to Build Dependencies > Build Customizations.
- Ensure that CUDA 12.4 (.targets, .props) is included.
-
Build the Project
- Switch the build configuration to Release.
- Build the project by selecting Build > Build Solution.
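Alternatively, the solution can be built from a Developer Command Prompt. This is a sketch that assumes the solution file is named `sunone_aimbot_cpp.sln` in the repository root; check the actual file name in your checkout:

```powershell
# Release | x64 build of the whole solution, equivalent to Build > Build Solution in the IDE.
msbuild sunone_aimbot_cpp.sln /m /p:Configuration=Release /p:Platform=x64
```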
- Stored here.
- The config documentation is available in a separate repository.
- TensorRT Documentation
- OpenCV Documentation
- Windows SDK
- Boost
- ImGui
- CppWinRT
- Python AI AIMBOT
- Snowflake.cpp
- GLFW
- License: Boost Software License 1.0
- License: Apache License 2.0
- License: MIT License