Comments (16)
Hi @luocheng610, replace {path/to/your/best.pt} with the actual path to your model (and drop the curly braces); either a relative or an absolute path works. Also, are you targeting ONNX Runtime or TensorRT?
from yolort.
Thanks for the reply, but it still doesn't seem to work.
I've already trained my own model with yolov5 (the v5 or v7 releases), and both inference and testing with PyTorch work fine.
Now I want to deploy it with libtorch, but I'm stuck on converting the model.
I want to save the model via torch.jit.script; I don't want trace.
Is there a complete example?
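The preference for torch.jit.script over trace matters for detection models because their post-processing contains data-dependent control flow. A minimal sketch with a toy module (not yolort itself) showing the difference:

```python
# Toy illustration (assumed, not yolort code): script preserves
# data-dependent branches, trace bakes in one execution path.
import torch
import torch.nn as nn

class Postprocess(nn.Module):
    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # Data-dependent branch: trace freezes whichever path the
        # example input happened to take at tracing time.
        if scores.max() > 0.5:
            return scores * 2
        return scores

m = Postprocess().eval()
scripted = torch.jit.script(m)               # keeps the if/else
traced = torch.jit.trace(m, torch.zeros(3))  # records only the "else" path

high = torch.tensor([0.9, 0.2, 0.3])
print(scripted(high))  # takes the "if" branch correctly
print(traced(high))    # returns the input unchanged: wrong branch was recorded
```

This is why detection heads with thresholding and NMS generally need scripting rather than tracing.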
The complete example is here: https://github.com/zhiqwang/yolov5-rt-stack/tree/main/deployment/libtorch. The latest PyTorch code seems to contain some backward-incompatible changes, so I'd suggest testing on torch 1.10 first.
I'd also suggest posting a minimal reproduction of your workflow; otherwise it's hard to tell which part is causing the problem.
My environment is `conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=10.2 -c pytorch`, so the versions should already match.
I'm using convert_yolov5_to_yolort.py and export_model.py. Can either of these two do the conversion?
I know a little libtorch; I just don't know how to convert a yolo-trained model into something libtorch can use.
Either the load fails, or the forward fails.
export_model.py cannot export to the torchscript format; for libtorch, export with the following:

import torch
from yolort.models import YOLOv5

# Assume this is the checkpoint you trained with YOLOv5
ckpt_path_from_ultralytics = "yolov5s.pt"
model = YOLOv5.load_from_yolov5(ckpt_path_from_ultralytics, score_thresh=0.25)
model.eval()
scripted_model = torch.jit.script(model)
scripted_model.save("yolov5n.torchscript.pt")
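Before moving to the C++ side, it is worth reloading the saved archive in Python to rule out problems with the export itself. A minimal sketch using a stand-in toy module (with yolort you would torch.jit.load your exported .torchscript.pt file instead):

```python
# Hypothetical round-trip check: script, save, reload, compare outputs.
# The toy nn.Sequential stands in for the real yolort model.
import os
import tempfile
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2), nn.ReLU()).eval()
scripted = torch.jit.script(model)

path = os.path.join(tempfile.mkdtemp(), "model.torchscript.pt")
scripted.save(path)

# If this load fails in Python, the problem is the archive itself,
# not the libtorch/C++ environment.
reloaded = torch.jit.load(path, map_location="cpu")
x = torch.randn(1, 4)
assert torch.allclose(scripted(x), reloaded(x))
print("round-trip OK")
```

If the reload succeeds here but torch::jit::load fails in C++, the mismatch is almost certainly on the libtorch side.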
Got it, got it!
But I hit an error during the conversion; it looks like you're using the yolov5 v6 release, right?
I'm on v5 and get an error about conv.
I'll retrain with a different version in a bit.
Thank you, thank you, I can see some hope now.
Our main branch supports the YOLOv5 v3.1 - v6.2 releases. If you mean the v5.0 release, you need to add the version parameter as follows:

model = YOLOv5.load_from_yolov5(ckpt_path_from_ultralytics, version="r4.0", score_thresh=0.25)

The torchscript export is simple enough that I didn't write a dedicated script for it.
With version="r4.0" added, the export succeeded!
But, but...

torch::jit::script::ExtraFilesMap extra_files;
m_module = torch::jit::load(m_FileName.GetBuffer(0), g_CUDADev); // load failed here
m_module.eval();
torch::set_num_threads(2); // CPU threads?

Are there more parameters I should be passing, e.g. CUDA vs. CPU?
You should check your libtorch environment for this; try the CPU build first.
LibTorchVision also needs to be compiled. Make sure to check out https://github.com/pytorch/vision/releases/tag/v0.11.0 (it may also be v0.11.1 or v0.11.2, so double-check your own version), and use ldd to verify that the compiled torchvision.so actually links against your CUDA.
If the problem persists, please follow https://github.com/zhiqwang/yolov5-rt-stack/tree/main/deployment/libtorch and provide a correct reproduction of your workflow so the cause can be analyzed.
I also suspected a version problem, but I checked mine:

pytorch==1.10.0
torchvision==0.11.0
torchaudio==0.10.0
cudatoolkit=10.2

and the matching libtorch build is libtorch-win-shared-with-deps-1.10.0+cu102, so they should line up.
I'll keep looking for the cause.
Thanks for your patient replies.
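On the Python side, the versions the export environment actually uses can be printed directly and compared against the libtorch build string (the 1.10.0 + cu102 pairing here comes from this thread, not from anything verified):

```python
# Print the exporting environment's versions; the libtorch download
# (e.g. libtorch-win-shared-with-deps-1.10.0+cu102) should match these.
import torch

print("torch:", torch.__version__)                 # expected 1.10.0 in this thread
print("built against CUDA:", torch.version.cuda)   # expected 10.2 (None on CPU builds)
print("CUDA available at runtime:", torch.cuda.is_available())
```

A mismatch in any of these between the export side and the libtorch side is a common cause of load-time failures.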
torch::jit::ErrorReport at memory location 0x00000019D08E6A20.
It's not the usual c10::Error with a memory location; catch can't even trap it, the program just dies on that line.
Make sure it runs on the CPU first. I've covered this in CI, so you can check there whether your workflow is correct.
test_torchscript.zip
I've uploaded the original model.
It was trained on GPU with the yolov5 v5 release (starting from the yolov5s.pt pretrained weights), in the environment I described above.
If you have time, could you try converting and testing it?