ethanh3514 / al_yolo
👺 A Yolov5-based AI aim-assist cheat for the game Apex Legends
License: Apache License 2.0
If I get banned, will the ban be tied to my machine code (hardware ID)?
In practice, screen detection does not need to recognize many object classes — only Person and Head. Using DAMO-YOLO might let the program take up a smaller share of hardware resources, and a light model detecting a 416x416 pixel region is already enough for tracking a target while aiming down sights. For a long-TTK game like Apex, stable tracking matters more than flick aiming.
Detection and aiming while hip-firing are also unnecessary: characters move too much in that state and leave the 640-pixel detection region, so enabling mouse movement then causes jittery, erratic aim from false detections.
It is recommended to enable the program's features only while aiming down sights with the right mouse button.
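For the ADS-only setup described above, the capture region is just a box centered on the screen. A minimal sketch (the function name and sizes are illustrative, not from the project):

```python
def center_region(screen_w, screen_h, size=416):
    """Return (left, top, right, bottom) for a size x size box
    centered on a screen of screen_w x screen_h pixels."""
    left = (screen_w - size) // 2
    top = (screen_h - size) // 2
    return (left, top, left + size, top + size)

# For a 1920x1080 screen this yields (752, 332, 1168, 748).
print(center_region(1920, 1080))
```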
> Bro, can I ask something? This CUDA 11 environment is an NVIDIA thing — does that mean my AMD GPU can't run this AI program?
Yes. For inference alone, you could consider using ncnn.
Originally posted by @Chikage0o0 in #4 (comment)
That's too hard for me. I'll just plan on buying an NVIDIA card during the Double 11 sale!
There is a library called pydirectinput that works directly in DirectX applications. It is based on pyautogui and can move the mouse without Logitech's GHUB — much simpler.
Could the author say which version of GHUB they are using? I tried several old GHUB versions and none of them could use this dll
("could not use" meaning that calling .mouse_open() returns 0).
Also: I searched GitHub trying to find the origin of this dll and couldn't seem to find it — am I searching the wrong way? I'd also like to ask where this driver comes from. Thanks!
I built something similar before. With the game running, an RTX 2060 using the Intel OpenVINO inference framework was twice as fast as TensorRT and ONNX.
jsmpeg-vnc — screen capture with this method is very fast for me (grabbing a 320x320 image from the screen center even beats DXGI; I'm not sure whether it is capturing duplicate frames).
pixels to move for one radian of rotation = pixels moved for one full horizontal in-game turn * (2π / 360) * in-game sensitivity * ADS multiplier
You can measure "pixels moved for one full horizontal in-game turn" manually with a binary-search script.
With the (3D) diagram in your README.md, once you compute the angle you can compute the number of pixels to move.
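As a sketch of the idea (note the formula above mixes radians and degrees; this version works per degree, and all numbers are hypothetical, not measured from the game):

```python
def pixels_for_angle(angle_deg, pixels_per_turn, sensitivity=1.0, ads=1.0):
    """Mouse pixels needed to rotate the view by angle_deg degrees.

    pixels_per_turn: mouse pixels for one full 360-degree horizontal turn
    in-game (measured by hand, e.g. with the binary-search script above).
    sensitivity / ads: in-game sensitivity and ADS multipliers.
    """
    return angle_deg * (pixels_per_turn / 360.0) * sensitivity * ads

# Hypothetical numbers: 16364 px per full turn, sensitivity 1.0, no ADS scaling.
print(pixels_for_angle(90, 16364))
```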
Just compute the distance between the target and the current mouse position and move the mouse directly — no need to work out all that 3D stuff.
I recommend looking at this:
https://github.com/JiaqinKang/AI-Aimbot/blob/main/main.py
Hello:
First of all, thanks for sharing this. I have no real background in computers, but with your answers in the Issues and ChatGPT's help I managed to get the program to open.
However, clicking "Start target detection" raises errors:
_ctypes.COMError: (-2005270524, 'The specified device interface or feature level is not supported on this system.', (None, None, None, 0, None)) Exception ignored in: <compiled_function DXCamera.__del__ at 0x0000028C3A5EC5E0>
and
AttributeError: 'DXCamera' object has no attribute 'is_capturing'
I have asked ChatGPT about both problems but couldn't solve them. Could you give me some help? Many thanks.
My versions are:
torch 2.1.2+cu121
torchvision 0.16.2+cpu
torchaudio 2.1.2+cpu
cuda == 12.1
Launched PowerShell as administrator.
The error log is below:
Thanks for sharing, bro.
I tried searching Baidu for Apex images, and then my mouse started locking on even though I hadn't pressed any key. Which key is the lock-on triggered by?
Traceback (most recent call last):
File "E:\codespace\AL_Yolo\apex.py", line 2, in <module>
from detect import YOLOv5Detector
File "E:\codespace\AL_Yolo\detect.py", line 11, in <module>
from Capture import LoadScreen
File "E:\codespace\AL_Yolo\Capture.py", line 1, in <module>
import dxshot
ImportError: DLL load failed while importing dxshot: The specified module could not be found.
So where is the dxshot module?
After I run apex.py, the UI appears; after clicking "Start target detection" the following error occurs:
`Target detection enabled
'gbk' codec can't decode byte 0xff in position 0: illegal multibyte sequence
YOLOv5 v1.0-1-g0844685 Python-3.11.5 torch-2.0.1 CUDA:0 (NVIDIA GeForce RTX 4070 Laptop GPU, 8188MiB)
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients
Exception in thread Thread-2 (work):
Traceback (most recent call last):
File "D:\software\Anaconda3\envs\gym\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "D:\software\Anaconda3\envs\gym\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\detect.py", line 149, in work
self.run()
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\detect.py", line 92, in run
dataset = LoadScreen()
^^^^^^^^^^^^
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\Capture.py", line 19, in __init__
self.camera = dxshot.create(region=self.region, output_color="RGB")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxshot.py", line 112, in create
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxshot.py", line 71, in create
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 51, in __init__
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 294, in _validate_region
ValueError: Invalid Region: Region should be in 1707x1067
Exception ignored in: <compiled_function DXCamera.__del__ at 0x000001EA91972840>
Traceback (most recent call last):
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 310, in __del__
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 305, in release
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 199, in stop
AttributeError: 'DXCamera' object has no attribute 'is_capturing'`
I am using the dxshot build for Python 3.11 — what causes this error?
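For what it's worth, 1707x1067 is exactly what a 2560x1600 desktop reports at 150% DPI scaling, so a region computed from physical pixels can fail validation against the scaled desktop size. A hedged sketch (the helper name is hypothetical, not from the project) of clamping the requested region to whatever size the capture library reports:

```python
def clamp_region(region, screen_w, screen_h):
    """Clamp a (left, top, right, bottom) capture region to the screen size
    the capture library reports. With Windows DPI scaling the reported
    desktop may be smaller than the physical one (e.g. 2560x1600 at 150%
    scaling is reported as 1707x1067), pushing a naive region out of bounds."""
    left, top, right, bottom = region
    left = max(0, min(left, screen_w - 1))
    top = max(0, min(top, screen_h - 1))
    right = max(left + 1, min(right, screen_w))
    bottom = max(top + 1, min(bottom, screen_h))
    return (left, top, right, bottom)

# A region built for 1920x1080 physical pixels, clamped to the scaled desktop.
print(clamp_region((960, 440, 1600, 1080), 1707, 1067))
```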
The dependencies in requirements.txt all point to local directories.
As the title says.
For example, pressing some key on the keyboard, or a mouse button — thanks a lot!
The dxshot demo can reach 400+ FPS.
A memory-based aimbot can read 3D coordinates and aim by directly modifying the view angles, while a computer-vision-based one only gets the target's projection on the screen — a 2D coordinate. Solving for the movement vector depends heavily on low-level game parameters (field of view, etc.). I haven't figured out how to lock on within a single frame yet; maybe I'll implement it in the future.
I don't quite follow. Since this is already straight-line movement between two points on a 2D plane [e.g. from A(1,1) to B(1,2) you just move in a straight line], why compute a sliding angle in 3D space?
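The purely 2D view the commenter describes can be sketched like this: take the detected box, pick an aim point inside it, subtract the screen-center crosshair, and feed the offset to a relative mouse move. All names and the head-bias ratio are illustrative; the mouse-move call itself is left as a placeholder for whatever driver is in use:

```python
def aim_offset(box, screen_w, screen_h, head_ratio=0.25):
    """Offset (dx, dy) from the screen-center crosshair to an aim point
    inside a detected box (x1, y1, x2, y2). head_ratio biases the aim
    point toward the top of the box (roughly head height)."""
    x1, y1, x2, y2 = box
    target_x = (x1 + x2) / 2
    target_y = y1 + (y2 - y1) * head_ratio
    return target_x - screen_w / 2, target_y - screen_h / 2

dx, dy = aim_offset((900, 400, 1000, 700), 1920, 1080)
print(dx, dy)  # a driver call like move_mouse(dx, dy) would then be issued
```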
Add PID smoothing for mouse control
import time

# get_current_pos() and set_mouse_pos() are assumed to be provided elsewhere
# (they read and set the mouse position via whatever input driver is in use).

class PID:
    def __init__(self, Kp, Ki, Kd):
        self.Kp = Kp
        self.Ki = Ki
        self.Kd = Kd
        self.P = 0
        self.I = 0
        self.D = 0
        self.last_error = 0
        self.last_time = time.time()
        self.first_time = None
        self.first_pos = None
        self.last_pos = None
        self.Kc = None
        self.Tu = None
        self.auto_tune = False

    def pid_control(self, error, dt):
        self.P = error
        self.I += error * dt
        self.D = (error - self.last_error) / dt
        self.last_error = error
        control = self.Kp * self.P + self.Ki * self.I + self.Kd * self.D
        return control

    def auto_tune_pid(self):
        if not self.auto_tune:
            return
        if self.first_time is None:
            self.first_time = time.time()
            self.first_pos = get_current_pos()
            return
        self.last_pos = get_current_pos()
        if self.Kc is None:
            self._find_critical_gain()
        else:
            self._calculate_pid_parameters()
        # Update the mouse position
        dt = time.time() - self.last_time
        error = self.first_pos - self.last_pos
        control = self.pid_control(error, dt)
        new_pos = get_current_pos() + control
        set_mouse_pos(new_pos)
        self.last_time = time.time()

    def _find_critical_gain(self):
        # Raise Kp until the output starts oscillating (error changes sign).
        Kp = 0.1
        while True:
            self.Kp = Kp
            error = self.first_pos - get_current_pos()
            if self.last_error * error < 0:
                self.Kc = self.Kp
                self.Tu = time.time() - self.first_time
                break
            self.last_error = error
            time.sleep(0.01)
            Kp += 0.1

    def _calculate_pid_parameters(self):
        # Classic Ziegler-Nichols tuning from critical gain Kc and period Tu.
        self.Kp = 0.6 * self.Kc
        self.Ki = 1.2 * self.Kp / self.Tu
        self.Kd = 0.075 * self.Kp * self.Tu
This uses the Ziegler-Nichols method to auto-tune the PID parameters: Kp is increased gradually until the system oscillates continuously; once the critical gain Kc and the oscillation period Tu are found, suitable PID parameters are computed from the standard formulas. The time.sleep call lowers the computation frequency to avoid excessive CPU usage.

File "D:\python\AL_Yolo-master\apex.py", line 34, in <module>
detector.work()
File "D:\python\AL_Yolo-master\detect.py", line 168, in work
self.run(self)
File "D:\python\AL_Yolo-master\detect.py", line 67, in run
dataset = LoadScreen(stride=stride, auto=pt)
File "D:\python\AL_Yolo-master\Capture.py", line 28, in __init__
self.camera = dxshot.create(region=REGION, output_color="BGR")
File "D:\python\AL_Yolo-master\dxshot.py", line 115, in create
File "D:\python\AL_Yolo-master\dxshot.py", line 73, in create
File "D:\python\AL_Yolo-master\dxcam\dxcam.py", line 34, in __init__
File "<string>", line 6, in __init__
File "D:\python\AL_Yolo-master\dxcam\core\duplicator.py", line 20, in __post_init__
_ctypes.COMError: (-2005270524, 'The specified device interface or feature level is not supported on this system.', (None, None, None, 0, None))
As the title asks: kmbox B Pro or kmbox Net?
Hi, I'm a complete beginner. After about three hours of fumbling around I managed to open apex.py with PowerShell, but I don't really understand what to do next.
Do I need to download the contents of https://github.com/ultralytics/yolov5 and https://github.com/goldjee/AL-YOLO-dataset ?
Already starred.
Please give me some pointers, or tell me what I should go study. Thanks a lot!
https://www.bilibili.com/read/cv24981304 I saw this post saying DirectML can also let other GPUs run AI.
Requesting the dataset's directory tree and contents.
By default it aims at the center of the body; I've been tweaking it forever without success and I'm losing it.
As the title asks. The dependencies initially mention needing the Logitech driver, but the later roadmap says it won't rely on a driver.
Python's GIL problem can be solved with multiprocessing, but then how to coordinate between the processes becomes the issue. Does the author have any ideas?
Also, what is the overall FPS at the moment?
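One common pattern (a minimal sketch, not the project's actual design) is to split capture and inference into separate processes that hand frames over a multiprocessing.Queue:

```python
import multiprocessing as mp

def inference_worker(in_q, out_q):
    """Consume 'frames' from in_q and emit 'detections' to out_q.
    The work here is faked as squaring a number; a real worker
    would run the YOLO model on each frame."""
    while True:
        frame = in_q.get()
        if frame is None:  # sentinel: shut down
            break
        out_q.put(frame * frame)

def run_demo():
    in_q, out_q = mp.Queue(), mp.Queue()
    p = mp.Process(target=inference_worker, args=(in_q, out_q))
    p.start()
    for frame in [1, 2, 3]:   # a capture process would put frames here
        in_q.put(frame)
    in_q.put(None)            # tell the worker to exit
    results = [out_q.get() for _ in range(3)]
    p.join()
    return results

if __name__ == "__main__":
    print(run_demo())
```

Queues serialize the hand-off, so each process runs with its own GIL; the cost is pickling each frame, which shared memory (multiprocessing.shared_memory) can avoid if it becomes a bottleneck.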
import dxshot.pyd
ImportError: DLL load failed while importing dxshot: The specified module could not be found.
Bro, are you sure you actually use this many libraries? Did you package every library on your entire machine for me? You could use a virtual environment to solve this — installing all these dependencies is killing me; they just won't install.
Changed detect.py:
def run(self):
    # Load model
    device = torch_directml.device()
The program starts normally, but target detection reports an error:
`(apex) C:\Users\Lumi.conda\envs\apex\ai>python apex.py
Target detection enabled
Expected package name at the start of dependency specifier
rotli==1.0.9
^
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Detected an old-version YOLOv5 model, attempting original loading...
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Exception in thread Thread-4 (work):
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 169, in work
self.run()
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 128, in run
pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, self.classes, self.agnostic_nms, max_det=self.max_det)
File "C:\Users\Lumi.conda\envs\apex\ai\utils\general.py", line 980, in non_max_suppression
x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres]
RuntimeError
Closing the program
(apex) C:\Users\Lumi.conda\envs\apex\ai>python apex.py
Target detection enabled
Expected package name at the start of dependency specifier
rotli==1.0.9
^
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Detected an old-version YOLOv5 model, attempting original loading...
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Exception in thread Thread-4 (work):
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 168, in work
self.run()
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 127, in run
pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, self.classes, self.agnostic_nms, max_det=self.max_det)
File "C:\Users\Lumi.conda\envs\apex\ai\utils\general.py", line 980, in non_max_suppression
x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres]
RuntimeError
Target detection enabled
Expected package name at the start of dependency specifier
rotli==1.0.9
^
You already created a DXCamera Instance for Device 0--Output 0!
Returning the existed instance...
To change capture parameters you can manually delete the old object using del obj
.
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 237, in __capture
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 126, in _grab
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\core\duplicator.py", line 26, in update_frame
_ctypes.COMError: (-2005270527, 'The application made an invalid call. A parameter or the state of an object was incorrect.\r\nEnable the D3D debug layer to see details via debug messages.', (None, None, None, 0, None))
Exception in thread DXCamera:
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 269, in __capture
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 203, in stop
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1093, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
Screen Capture FPS: 1643148
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Detected an old-version YOLOv5 model, attempting original loading...
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Mouse lock enabled
Mouse lock enabled`