
ethanh3514 / al_yolo

260 stars · 4 watchers · 44 forks · 130.85 MB

👺 A YOLOv5-based AI aim-assist cheat for the game Apex Legends

License: Apache License 2.0

Python 98.85% Shell 0.42% Dockerfile 0.72%
apex apex-legends cheat yolov5 aimbot cv game yolo

al_yolo's People

Contributors

ethanh3514, fatinghenji, t-atlas


al_yolo's Issues

Replace YOLOv5 with DAMO-YOLO and optimize the detection toggle

In practice, the screen detector does not need to recognize many object classes, only Person and Head. DAMO-YOLO would likely let the program use a smaller share of hardware resources, and its light model detecting a 416x416 pixel region is already enough for tracking a target while aiming down sights. For a long-TTK game like Apex, stable tracking matters more than flick aiming.

Detection and aiming while hip-firing are also unnecessary: targets then move so much that they leave the 640-pixel detection region, and enabling mouse movement in that state makes the aim twitch erratically because of misdetections.

I suggest enabling the program's features only while the right mouse button (ADS) is held.
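
A minimal sketch of such an ADS gate, assuming a Windows environment with pywin32 installed (the helper name is illustrative, not project code):

import win32api

VK_RBUTTON = 0x02  # virtual-key code for the right mouse button

def ads_held() -> bool:
    """Return True while the right mouse button (aim-down-sights) is held down."""
    # GetAsyncKeyState sets the most significant bit while the button is down,
    # which Python sees as a negative value.
    return win32api.GetAsyncKeyState(VK_RBUTTON) < 0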

A proposal for mouse movement

There is a library called pydirectinput that works directly with DirectX applications. It is based on pyautogui and can move the mouse without Logitech G HUB, which is simpler.
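
A minimal sketch of that suggestion, assuming pydirectinput is installed (the offsets below are illustrative only; depending on the game's input mode, the library's relative-movement option may also be needed):

import pydirectinput

pydirectinput.FAILSAFE = False   # disable the pyautogui-style corner fail-safe for in-game use
pydirectinput.moveRel(50, -20)   # nudge the cursor 50 px right and 20 px up from its current position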

Logitech driver not working

Which version of G HUB are you using? I tried several older versions of G HUB and none of them can use this DLL
(by "cannot use" I mean that calling .mouse_open() returns 0).

Another question: I tried searching on GitHub for the source of this DLL but couldn't seem to find it. Am I searching the wrong way? I'd also like to ask where this driver comes from. Thanks!

Suggestions for performance optimization

Inference framework

I built something similar before. With the game running, on an RTX 2060 the Intel OpenVINO inference framework was about twice as fast as TensorRT and ONNX.
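
For reference, a minimal sketch of running a YOLOv5 ONNX export through OpenVINO; the model path, target device, and input size are assumptions, and the raw output still needs the usual NMS post-processing:

import numpy as np
from openvino.runtime import Core   # OpenVINO >= 2022.1

core = Core()
model = core.read_model("best.onnx")                      # assumed path to an exported YOLOv5 model
compiled = core.compile_model(model, device_name="CPU")   # "GPU" would target the Intel iGPU instead

frame = np.zeros((1, 3, 640, 640), dtype=np.float32)      # placeholder for a preprocessed screen grab
pred = compiled([frame])[compiled.output(0)]              # raw predictions, still to be run through NMS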

Screen capture

jsmpeg-vnc: screen capture with this method is very fast for me (grabbing a 320x320 image at the screen centre even beats DXGI, though I'm not sure whether it captures duplicate frames).
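
For comparison, the DXGI path the project already uses (via dxcam, which dxshot wraps) looks roughly like this; the desktop resolution and region size are assumptions:

import dxcam

W, H = 2560, 1440                                                   # assumed desktop resolution
region = (W // 2 - 160, H // 2 - 160, W // 2 + 160, H // 2 + 160)   # centred 320x320 box
camera = dxcam.create(output_color="BGR")
frame = camera.grab(region=region)   # HxWx3 numpy array, or None if no new frame has arrived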

On locking onto a target in a single frame

pixels to move per radian of rotation = pixels moved for one full horizontal in-game rotation * (2π / 360) * in-game sensitivity * ADS multiplier

You can write a script to measure the pixels for one full horizontal in-game rotation by bisection.

Using the figure (the 3D version) in your README.md, once the angle is computed you can work out how many pixels to move.
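
A rough sketch of that conversion under a simple pinhole-camera assumption; every constant below must be measured for your own setup, and how the sensitivity/ADS factors enter depends on how the counts-per-rotation value was measured:

import math

COUNTS_PER_360 = 16364   # mouse counts for one full horizontal rotation, measured by bisection (assumed value)
ADS_MULTIPLIER = 1.0     # in-game ADS sensitivity multiplier (assumed)
HFOV_DEG = 110.0         # in-game horizontal field of view (assumed)
SCREEN_W = 2560          # screen width in pixels (assumed)

def counts_to_target(target_x: float) -> int:
    """Mouse counts needed to rotate the crosshair onto a target at screen x-coordinate target_x."""
    offset_px = target_x - SCREEN_W / 2                               # horizontal offset from the crosshair
    focal_px = (SCREEN_W / 2) / math.tan(math.radians(HFOV_DEG / 2))  # pinhole focal length in pixels
    angle = math.atan2(offset_px, focal_px)                           # view angle to the target, in radians
    counts_per_rad = COUNTS_PER_360 / (2 * math.pi)                   # measured at the base sensitivity
    return round(angle * counts_per_rad / ADS_MULTIPLIER)             # scale for ADS when scoped (assumption)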

Some problems after running python apex.py

Hello:
First of all, thanks for sharing this. I haven't actually studied this area of computing, but by combining your answers in the Issues with help from ChatGPT I managed to get the GUI open (screenshot omitted).
However, clicking "开启目标检测" (enable target detection) raises an error:

_ctypes.COMError: (-2005270524, '指定的设备接口或功能级别在此系统上不受支持。', (None, None, None, 0, None))
Exception ignored in: <compiled_function DXCamera.__del__ at 0x0000028C3A5EC5E0>

AttributeError: 'DXCamera' object has no attribute 'is_capturing'
I've asked ChatGPT about these two problems but couldn't resolve them. Could you help? Many thanks.

Fails to start normally?

(base) PS G:\AI\AL_Yolo> conda activate AL_Yolo
(AL_Yolo) PS G:\AI\AL_Yolo> python apex.py
[1280.0, 720.0] (2560, 1440)

Logitech driver version: (screenshot omitted)
Python version: 3.10
Besides installing the PyPI packages with pip install -r requirements.txt, I also manually installed pynput, comtypes, and pyautogui as prompted by the errors. Is something misconfigured somewhere?

Startup failure: DLL load failed while importing dxshot: 找不到指定的模块 (the specified module could not be found)

Traceback (most recent call last):
File "E:\codespace\AL_Yolo\apex.py", line 2, in <module>
from detect import YOLOv5Detector
File "E:\codespace\AL_Yolo\detect.py", line 11, in <module>
from Capture import LoadScreen
File "E:\codespace\AL_Yolo\Capture.py", line 1, in <module>
import dxshot
ImportError: DLL load failed while importing dxshot: 找不到指定的模块。
Where is the dxshot module?

'DXCamera' object has no attribute 'is_capturing'

After I run apex.py, the user interface pops up; after clicking "开启目标检测" (enable target detection), the following error appears:
`目标检测已开启
'gbk' codec can't decode byte 0xff in position 0: illegal multibyte sequence
YOLOv5 v1.0-1-g0844685 Python-3.11.5 torch-2.0.1 CUDA:0 (NVIDIA GeForce RTX 4070 Laptop GPU, 8188MiB)

Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients
Exception in thread Thread-2 (work):
Traceback (most recent call last):
File "D:\software\Anaconda3\envs\gym\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "D:\software\Anaconda3\envs\gym\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\detect.py", line 149, in work
self.run()
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\detect.py", line 92, in run
dataset = LoadScreen()
^^^^^^^^^^^^
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\Capture.py", line 19, in init
self.camera = dxshot.create(region=self.region, output_color="RGB")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxshot.py", line 112, in create
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxshot.py", line 71, in create
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 51, in init
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 294, in _validate_region
ValueError: Invalid Region: Region should be in 1707x1067
Exception ignored in: <compiled_function DXCamera.__del__ at 0x000001EA91972840>
Traceback (most recent call last):
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 310, in __del__
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 305, in release
File "C:\Users\zhanghd\Desktop\download\AL_Yolo\dxcam\dxcam.py", line 199, in stop
AttributeError: 'DXCamera' object has no attribute 'is_capturing'`
I am using the dxshot build for Python 3.11. What causes this error?

Some questions and a little example code

"A memory-based aimbot can read the 3D coordinates and aim by directly modifying the view angles, whereas a computer-vision-based one only gets the target's projection onto the screen, which is a 2D coordinate. Solving for the movement vector depends heavily on the game's underlying parameters (field of view and so on). I haven't yet figured out how to lock onto a target in a single frame; maybe I'll implement it in the future."

I don't quite understand this passage. Since it is already a straight-line move between two points on a 2D plane (for example, going from A(1,1) to B(1,2) only requires a straight move), why compute a rotation angle in 3D space?

Add PID smoothing for mouse control

import time

# Note: get_current_pos() and set_mouse_pos() below are assumed helpers for reading
# and setting the mouse position; they are not defined in this snippet.

class PID:
    def __init__(self, Kp, Ki, Kd):
        self.Kp = Kp
        self.Ki = Ki
        self.Kd = Kd
        self.P = 0
        self.I = 0
        self.D = 0
        self.last_error = 0
        self.last_time = time.time()
        self.first_time = None
        self.first_pos = None
        self.last_pos = None
        self.Kc = None   # critical gain found by the auto-tuner
        self.Tu = None   # oscillation period at the critical gain
        self.auto_tune = False

    def pid_control(self, error, dt):
        # Standard PID update: proportional, accumulated integral, and derivative terms.
        self.P = error
        self.I += error * dt
        self.D = (error - self.last_error) / dt
        self.last_error = error
        control = self.Kp * self.P + self.Ki * self.I + self.Kd * self.D
        return control

    def auto_tune_pid(self):
        if not self.auto_tune:
            return
        if self.first_time is None:
            self.first_time = time.time()
            self.first_pos = get_current_pos()
            return
        self.last_pos = get_current_pos()
        if self.Kc is None:
            self._find_critical_gain()
        else:
            self._calculate_pid_parameters()
        # Update the mouse position
        dt = time.time() - self.last_time
        error = self.first_pos - self.last_pos
        control = self.pid_control(error, dt)
        new_pos = get_current_pos() + control
        set_mouse_pos(new_pos)
        self.last_time = time.time()

    def _find_critical_gain(self):
        # Increase Kp until the error changes sign (the system starts to oscillate).
        Kp = 0.1
        while True:
            self.Kp = Kp
            error = self.first_pos - get_current_pos()
            if self.last_error * error < 0:
                self.Kc = self.Kp
                self.Tu = time.time() - self.first_time
                break
            self.last_error = error
            time.sleep(0.01)
            Kp += 0.1

    def _calculate_pid_parameters(self):
        # Classic Ziegler-Nichols tuning rules based on the critical gain and period.
        self.Kp = 0.6 * self.Kc
        self.Ki = 1.2 * self.Kp / self.Tu
        self.Kd = 0.075 * self.Kp * self.Tu

  • Uses the Ziegler-Nichols method to auto-tune the PID parameters.
    • First it checks whether auto-tuning is enabled; if not, it returns immediately.
    • If auto-tuning is enabled, it records the current time and mouse position, then gradually increases the proportional gain Kp until the system oscillates continuously.
    • Once the critical gain Kc and oscillation period Tu are recorded, it computes suitable PID parameters from the standard formulas.
    • time.sleep is used to lower the update rate and avoid excessive CPU usage.
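
A sketch of how the controller above might be driven each frame; the gains are illustrative and the target/crosshair/mouse helpers are hypothetical, not part of the project:

import time

pid = PID(Kp=0.4, Ki=0.0, Kd=0.05)            # illustrative gains only
last = time.time()
while tracking():                              # hypothetical: loop while a target is being tracked
    error = target_x() - crosshair_x()         # hypothetical helpers returning horizontal screen coordinates
    now = time.time()
    dt = max(now - last, 1e-3)                 # guard against a zero time step
    move_mouse_x(pid.pid_control(error, dt))   # hypothetical relative mouse-move helper
    last = now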

Another problem (2)

File "D:\python\AL_Yolo-master\apex.py", line 34, in
detector.work()
File "D:\python\AL_Yolo-master\detect.py", line 168, in work
self.run(self)
File "D:\python\AL_Yolo-master\detect.py", line 67, in run
dataset = LoadScreen(stride=stride, auto=pt)
File "D:\python\AL_Yolo-master\Capture.py", line 28, in init
self.camera = dxshot.create(region=REGION, output_color="BGR")
File "D:\python\AL_Yolo-master\dxshot.py", line 115, in create
File "D:\python\AL_Yolo-master\dxshot.py", line 73, in create
File "D:\python\AL_Yolo-master\dxcam\dxcam.py", line 34, in init
File "", line 6, in init
File "D:\python\AL_Yolo-master\dxcam\core\duplicator.py", line 20, in post_init
_ctypes.COMError: (-2005270524, '指定的设备接口或功能级别在此系统上不受支持。', (None, None, None, 0, None))

About view-angle positioning

While using it I noticed that the mouse positioning stutters and drifts. What causes this?
Also, where are the FOV and DPI parameters actually used? I couldn't find them used anywhere in the mouse control code.
My environment: Python 3.11, using the Logitech driver (not pyautogui, which doesn't seem to work).

(screen recording omitted)

Python multithreading doesn't help; what about multiprocessing?

The Python GIL issue can be solved with multiprocessing, but then how to coordinate between the processes becomes the problem. Does the author have any ideas?
Also, what overall frame rate does it currently run at?
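
One common pattern for that coordination is a producer/consumer pair of processes connected by a small multiprocessing.Queue used as a "latest frame" mailbox; a minimal sketch, with the actual screen capture and model inference stubbed out:

import multiprocessing as mp
import numpy as np

def capture_proc(frame_q):
    # Producer: grab frames and keep only the newest one in the queue.
    while True:
        frame = np.zeros((320, 320, 3), dtype=np.uint8)   # stand-in for a real screen grab
        if frame_q.full():
            try:
                frame_q.get_nowait()                      # drop the stale frame
            except Exception:
                pass
        frame_q.put(frame)

def infer_proc(frame_q, result_q):
    # Consumer: run detection on each frame and publish the boxes.
    while True:
        frame = frame_q.get()
        boxes = []                                        # stand-in for model inference on `frame`
        result_q.put(boxes)

if __name__ == "__main__":
    frame_q = mp.Queue(maxsize=1)    # maxsize=1 keeps latency low: only the newest frame waits
    result_q = mp.Queue()
    mp.Process(target=capture_proc, args=(frame_q,), daemon=True).start()
    mp.Process(target=infer_proc, args=(frame_q, result_q), daemon=True).start()
    print(result_q.get())            # the main process would use these detections to drive the mouse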

A problem with requirements

Are you sure you actually use this many libraries? It feels like you packaged every library on your machine. A virtual environment would solve this. Installing these libraries is killing me; some of them just won't install.

Problems when using DML (DirectML)

After changing detect.py as follows:

def run(self):
    # Load model
    device = torch_directml.device()

the program starts normally, but target detection raises the error below:

`(apex) C:\Users\Lumi.conda\envs\apex\ai>python apex.py
目标检测已开启
Expected package name at the start of dependency specifier
rotli==1.0.9
^
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
检测到旧版本的YOLOv5模型,正在尝试使用原始加载...
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Exception in thread Thread-4 (work):
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 169, in work
self.run()
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 128, in run
pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, self.classes, self.agnostic_nms, max_det=self.max_det)
File "C:\Users\Lumi.conda\envs\apex\ai\utils\general.py", line 980, in non_max_suppression
x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres]
RuntimeError
关闭程序

(apex) C:\Users\Lumi.conda\envs\apex\ai>python apex.py
目标检测已开启
Expected package name at the start of dependency specifier
rotli==1.0.9
^
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
检测到旧版本的YOLOv5模型,正在尝试使用原始加载...
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Exception in thread Thread-4 (work):
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 168, in work
self.run()
File "C:\Users\Lumi.conda\envs\apex\ai\detect.py", line 127, in run
pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, self.classes, self.agnostic_nms, max_det=self.max_det)
File "C:\Users\Lumi.conda\envs\apex\ai\utils\general.py", line 980, in non_max_suppression
x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres]
RuntimeError

目标检测已开启
Expected package name at the start of dependency specifier
rotli==1.0.9
^
You already created a DXCamera Instance for Device 0--Output 0!
Returning the existed instance...
To change capture parameters you can manually delete the old object using del obj.
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 237, in __capture
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 126, in _grab
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\core\duplicator.py", line 26, in update_frame
_ctypes.COMError: (-2005270527, '应用程序进行了无效的调用。调用的参数或某对象的状态不正确。\r\n启用 D3D 调试层以便通过调试消息查看详细信息。', (None, None, None, 0, None))

Exception in thread DXCamera:
Traceback (most recent call last):
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 269, in __capture
File "C:\Users\Lumi.conda\envs\apex\ai\dxcam\dxcam.py", line 203, in stop
File "C:\Users\Lumi.conda\envs\apex\lib\threading.py", line 1093, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
Screen Capture FPS: 1643148
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
检测到旧版本的YOLOv5模型,正在尝试使用原始加载...
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
鼠标锁定已开启
鼠标锁定已开启`
