Comments (10)
Something ended up being strange with my conda env. I realized it when I went back to use the native ultralytics toolchain: same thing, just a quiet exit.
Here's a from-scratch fresh env that might help people:
# 1. Update conda
conda update -yn base -c defaults conda
# 2. Create a virtual environment named 'p37'
conda create -y -n p37 python=3.7 anaconda
# 3. Activate it
conda activate p37  # conda deactivate to leave
# 4. Install dependencies
conda install -yc anaconda opencv matplotlib tqdm pillow
conda install -yc conda-forge scikit-image pycocotools protobuf numpy
conda install -yc pytorch pytorch torchvision
pip install onnx onnxruntime
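Not from the thread itself, but a quick sanity check I'd suggest after building the env: it prints the installed version of each distribution, so a broken install shows up as MISSING here instead of as a quiet exit later. The checklist of distribution names is my own guess at what the export path needs; adjust to taste.

```python
from importlib import metadata  # Python 3.8+; on 3.7 use the importlib_metadata backport

def version_or_none(dist):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

# Distribution names (pip/conda names, which can differ from import names).
for dist in ("numpy", "pillow", "onnx", "onnxruntime", "torch", "torchvision"):
    v = version_or_none(dist)
    print(f"{dist}: {v if v else 'MISSING'}")
```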
from yolort.
Hi @rlewkowicz, we support the master branch of YOLOv5. Could you post your original log from exporting with export_model.py?
And please read https://zhiqwang.com/yolov5-rt-stack/notebooks/onnx-graphsurgeon-inference-tensorrt.html and https://github.com/zhiqwang/yolov5-rt-stack/tree/main/deployment/tensorrt#usage first.
Sure! @zhiqwang
(p37) PS C:\Repos\screen\yolov5-rt-stack\tools> python .\export_model.py --checkpoint_path ..\..\yolov5\runs\train\exp9\weights\best.pt --include engine --trt_path "."
Command Line Args: Namespace(batch_size=1, checkpoint_path='..\\..\\yolov5\\runs\\train\\exp9\\weights\\best.pt', image_size=[640, 640], include=['engine'], nms_thresh=0.45, onnx_path=None, opset=11, score_thresh=0.25, simplify=False, size_divisible=32, skip_preprocess=False, trt_path='.', version='r6.0')
Loaded saved model from ..\..\yolov5\runs\train\exp9\weights\best.pt
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\torch\functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\models\anchor_utils.py:46: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
anchors = torch.as_tensor(self.anchor_grids, dtype=torch.float32, device=device).to(dtype=dtype)
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\models\anchor_utils.py:47: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
strides = torch.as_tensor(self.strides, dtype=torch.float32, device=device).to(dtype=dtype)
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\relay\logits_decoder.py:45: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
strides = torch.as_tensor(self.strides, dtype=torch.float32, device=device).to(dtype=dtype)
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\models\box_head.py:337: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
for head_output, grid, shift, stride in zip(head_outputs, grids, shifts, strides):
I'll take a look at those docs
It seems those are only warnings there; I guess you didn't install the onnx-graphsurgeon tool?
And it seems that we should pass a file path as trt_path:
python .\export_model.py --checkpoint_path ..\..\yolov5\runs\train\exp9\weights\best.pt --include engine --trt_path "model.engine"
Following those docs, I get the same outcome:
(p37) PS C:\Repos\screen\yolov5-rt-stack\tools> python .\convert.py
We're using TensorRT: 8.4.1.5 on cuda device: 0.
Loaded saved model from C:\Repos\screen\yolov5\runs\train\exp9\weights\best.pt
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\torch\functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\models\anchor_utils.py:46: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
anchors = torch.as_tensor(self.anchor_grids, dtype=torch.float32, device=device).to(dtype=dtype)
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\models\anchor_utils.py:47: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
strides = torch.as_tensor(self.strides, dtype=torch.float32, device=device).to(dtype=dtype)
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\relay\logits_decoder.py:45: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
strides = torch.as_tensor(self.strides, dtype=torch.float32, device=device).to(dtype=dtype)
C:\ProgramData\Anaconda3\envs\p37\lib\site-packages\yolort\models\box_head.py:337: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
for head_output, grid, shift, stride in zip(head_outputs, grids, shifts, strides):
from yolort.runtime.trt_helper import export_tensorrt_engine
import os
import torch
import cv2
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
cuda_visible = "0"
os.environ["CUDA_VISIBLE_DEVICES"] = cuda_visible
assert torch.cuda.is_available()
device = torch.device('cuda')
import tensorrt as trt
print(f"We're using TensorRT: {trt.__version__} on {device} device: {cuda_visible}.")
img_raw = cv2.imread("C:\\Users\\Administrator\\Pictures\\0a1df875-out00900.jpg")
model_path = "C:\\Repos\\screen\\yolov5\\runs\\train\\exp9\\weights\\best.pt"
engine_path = "C:\\best.engine"
batch_size = 1
img_size = 640
size_divisible = 32
fixed_shape = True
score_thresh = 0.80
nms_thresh = 0.45
detections_per_img = 1
precision = "fp16"
export_tensorrt_engine(
    model_path,
    score_thresh=score_thresh,
    nms_thresh=nms_thresh,
    engine_path=engine_path,
    detections_per_img=detections_per_img,
    precision=precision,
)
This is the only YOLOv5 C++ TensorRT project on the internet that actually seems to compile on Windows. If this conversion worked, it would be the end of the nightmarish hellscape that is trying to get YOLOv5, TensorRT, and C++ to work in a native Windows project.
I've tried running it as admin etc., in case it was a pathing issue. It's odd that it exits quietly. I'll dig more at this tomorrow.
Did you get the intermediate ONNX models? If the *.trt.onnx file was generated, you can use trtexec, the CLI tool shipped with TensorRT, to build the plan (engine) from it.
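For reference, a trtexec invocation along those lines; the filenames follow this thread, and --onnx, --saveEngine, and --fp16 are standard trtexec flags:

```shell
# Build a serialized engine from the intermediate ONNX model
trtexec --onnx=best.trt.onnx --saveEngine=best.engine --fp16
```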
Well, not so fast. I did have to reinstall the whole env to get the ultralytics toolchain to work, but I believe installing yolort is what breaks the env.
There are so many moving parts, YMMV. I did a
pip install yolort --no-deps
and this allowed it to install without breaking the env. From there the only missing dep was onnxruntime, which I've added to the commands above.
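Since --no-deps skips yolort's dependencies entirely, a small check like this (my own sketch, not part of yolort) can list which import names are still missing before you hit another quiet exit. The list of names is an assumption based on what the export path in this thread touches:

```python
import importlib.util

def find_missing(module_names):
    """Return the module names that cannot be imported in this env."""
    return [m for m in module_names if importlib.util.find_spec(m) is None]

# Import names (not pip names); the exact set may differ across yolort versions.
needed = ["torch", "torchvision", "onnx", "onnxruntime", "cv2", "yolort"]
print("missing:", find_missing(needed))
```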
And it works:
Serialize engine success, saved as best.engine
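Given how quietly the export can fail, a cheap post-export check (my own sketch; the 1 KB floor is an arbitrary heuristic) is to confirm the engine file exists and isn't a truncated or empty file before handing it to the C++ side:

```python
from pathlib import Path

def engine_looks_written(path, min_bytes=1024):
    """True if the serialized engine exists and isn't empty/truncated.

    This doesn't validate the engine; deserializing it with the
    TensorRT runtime is the real test.
    """
    p = Path(path)
    return p.is_file() and p.stat().st_size >= min_bytes

print(engine_looks_written("best.engine"))
```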
Hi @rlewkowicz,
Very glad that you've resolved this environment problem. It's really not easy to install TensorRT properly, and using TensorRT from Python on Windows is even harder.
I guess some dependencies in the following requirements.txt are not compatible with Windows. Could you open a new ticket and post the error message there, so we can check whether we can eliminate the use of that dependency?