
TensorRT is slower than PyTorch · yolov5 · 7 comments · Closed

namogg commented on June 2, 2024
TensorRT is slower than PyTorch


Comments (7)

github-actions commented on June 2, 2024

👋 Hello @namogg, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
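
Once installed, a minimal inference sketch via PyTorch Hub (the model name and image URL below are illustrative):

import torch

# Load a pretrained YOLOv5s model from PyTorch Hub (weights download on first use)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Run inference on a sample image and print a summary of the detections
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()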

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Notebooks with free GPU (Gradient, Colab, Kaggle)
  • Google Cloud Deep Learning VM (see the GCP Quickstart Guide)
  • Amazon Deep Learning AMI (see the AWS Quickstart Guide)
  • Docker Image (see the Docker Quickstart Guide)

Status

[YOLOv5 CI status badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics
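
After installing, a minimal usage sketch (the weights file and test image are illustrative):

from ultralytics import YOLO

# Load a pretrained YOLOv8 nano model (weights download on first use)
model = YOLO("yolov8n.pt")

# Run inference on a sample image and display the annotated result
results = model("https://ultralytics.com/images/bus.jpg")
results[0].show()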


glenn-jocher commented on June 2, 2024

Hey there! Thanks for sharing the details of your YOLOv8n pose estimation model deployment.

Based on your log, it looks like several warnings appeared during the ONNX conversion, which might impact performance when using TensorRT. These warnings point to operations that did not export cleanly and could be causing inefficiencies when the model runs under TensorRT. Also, keep in mind that TensorRT and PyTorch may handle certain operations and optimizations differently.

Here are a couple of suggestions:

  1. Look into the warnings thrown during the ONNX export. Resolving these might improve TensorRT performance.
  2. Experiment with different settings during the export process (like changing dynamic-shape settings or tensor precision); a sketch of such an export call follows this list.
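
For example, a rough export sketch along those lines, assuming the ultralytics Python API on a recent release (the model path is illustrative):

from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")  # your trained pose model (illustrative path)

# Export to a TensorRT engine; toggle these options and compare latency:
#   half=True     -> build the engine in FP16
#   dynamic=False -> fix the input shape, which often lets TensorRT pick faster kernels
#   imgsz=640     -> the inference resolution the engine is optimized for
model.export(format="engine", half=True, dynamic=False, imgsz=640)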

Since model optimization can be quite specific to the operations used and hardware architecture, sometimes it may require a bit of fine-tuning to get the best performance out of TensorRT.

I hope this helps! If you need more detailed guidance, feel free to ask! 🚀


namogg commented on June 2, 2024

Thanks for your response. The warnings don't happen when I convert to ONNX separately and then use trtexec to convert to TensorRT, but then I can't run inference. Is there any solution to this?

Loading detection/model/yolo.engine for TensorRT inference...
Traceback (most recent call last):
  File "/home/namogg/Grab And Go/main.py", line 14, in <module>
    main()
  File "/home/namogg/Grab And Go/main.py", line 11, in main
    engine.run()
  File "/home/namogg/Grab And Go/engine.py", line 45, in run
    self.run_predict()
  File "/home/namogg/Grab And Go/engine.py", line 86, in run_predict
    pose_list = self.pose_estimation_model.extract_keypoints(combined_frames,sources = camera_ids)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/namogg/Grab And Go/detection/pose.py", line 45, in extract_keypoints
    results = self.model.predict(frames,show = False, save = False,verbose = False)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/ultralytics/engine/model.py", line 445, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 297, in setup_model
    self.model = AutoBackend(
                 ^^^^^^^^^^^^
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/ultralytics/nn/autobackend.py", line 235, in __init__
    metadata = json.loads(f.read(meta_len).decode("utf-8"))  # read metadata
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8c in position 12: invalid start byte
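
For context, the loader in autobackend.py shown in this traceback expects a small JSON metadata header at the start of the .engine file, which the ultralytics exporter writes but a plain trtexec build does not, so the metadata read lands in raw engine bytes and fails to decode. A rough sketch, assuming the header layout implied by the traceback (a 4-byte little-endian length prefix followed by UTF-8 JSON), of prepending such a header to a trtexec-built engine; the paths and metadata fields below are illustrative and must match your model:

import json
from pathlib import Path

src = Path("detection/model/yolo.engine")       # engine built with trtexec
dst = Path("detection/model/yolo_meta.engine")  # engine with prepended metadata

# Assumed metadata fields; real values (stride, class names, image size, task)
# must match the exported model.
meta = json.dumps({"stride": 32, "names": {0: "person"}, "imgsz": [640, 640], "task": "pose"}).encode("utf-8")

with open(dst, "wb") as f:
    f.write(len(meta).to_bytes(4, byteorder="little"))  # length prefix
    f.write(meta)                                       # UTF-8 JSON metadata
    f.write(src.read_bytes())                           # serialized TensorRT engine

Alternatively, exporting straight to format="engine" with the ultralytics exporter embeds this metadata automatically.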


namogg commented on June 2, 2024

I have set simplify=False and the model converts successfully without any warnings. The result is still slower than PyTorch. Can you suggest any solution?


glenn-jocher commented on June 2, 2024

@namogg hello! If converting without warnings still results in slower TensorRT performance compared to PyTorch, you might want to consider the following adjustments:

  1. Verify that TensorRT is utilizing all available optimizations, such as layer fusion, precision calibration (using FP16 or INT8 where possible), and optimal kernel selection for your specific GPU; an FP16 export-and-timing sketch follows this list.

  2. Ensure your GPU driver and TensorRT are updated to their latest versions, as improvements in newer versions might enhance performance.

  3. Experiment with different batch sizes to determine the optimal throughput for TensorRT on your hardware setup.
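
As a concrete starting point, a rough benchmark sketch along those lines, assuming the ultralytics Python API and a CUDA GPU (model and image paths are illustrative):

import time
from ultralytics import YOLO

# Build an FP16 TensorRT engine from the PyTorch pose model
pt_model = YOLO("yolov8n-pose.pt")
pt_model.export(format="engine", half=True, imgsz=640)
trt_model = YOLO("yolov8n-pose.engine")

img = "bus.jpg"  # any representative test image
for name, model in [("PyTorch", pt_model), ("TensorRT FP16", trt_model)]:
    model.predict(img, verbose=False)  # warm-up: CUDA context and kernel caching
    t0 = time.perf_counter()
    for _ in range(100):
        model.predict(img, verbose=False)
    print(f"{name}: {(time.perf_counter() - t0) * 1000 / 100:.1f} ms per inference")

Comparing the two numbers only after a proper warm-up avoids counting one-time engine deserialization and CUDA context setup as inference time.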

Each model and hardware combination might require unique tweaks to fully optimize, so these steps could help pinpoint more effective configurations. Keep experimenting! 🚀


namogg commented on June 2, 2024

I haven't solved the problem yet, but thanks for your support.


glenn-jocher commented on June 2, 2024

You're welcome! Keep experimenting with the settings, and if there's anything more we can help with, don't hesitate to reach out. Best of luck with your project! 😊


