Comments (3)
Hello! 👋 It sounds like you're deep into exploring YOLOv5's output formats; I'll do my best to clarify.
The output discrepancy you see mainly stems from different export configurations. The (1, 25200, 133) output from export.py is the concatenated prediction tensor: the grid cells and anchor boxes from all three scales are flattened into a single dimension. This format is useful for direct deployment, where a consolidated list of predictions is more practical.
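As a quick arithmetic sketch (not the actual export code), the 25200 dimension can be reproduced for a 640x640 input with the three standard strides and 3 anchors per grid cell:

```python
# Sketch: how the flat 25200 dimension arises for a 640x640 input,
# assuming strides 8, 16, 32 and 3 anchors per grid cell.
anchors_per_cell = 3
grid_sizes = [640 // s for s in (8, 16, 32)]  # [80, 40, 20]
total = anchors_per_cell * sum(g * g for g in grid_sizes)
print(total)  # 25200
```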
In contrast, when you modify the forward function, you get the outputs arranged by detection layer (i.e., (1, 399, 80, 80), (1, 256, 40, 40), and (1, 512, 20, 20)). Each shape reflects:
- The batch size
- The number of anchors multiplied by the number of classes plus the 5 box attributes (x, y, width, height, objectness confidence)
- The grid height and width
Each detection layer's output corresponds to different scales at which the model detects objects. The smaller grid sizes detect larger objects, and the larger grid sizes detect smaller objects due to the receptive field of the convolutional layers.
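For the first per-layer shape above, the channel dimension can be checked arithmetically. The class count of 128 here is an assumption inferred from 133 = 128 + 5 in your shapes:

```python
num_classes = 128   # inferred from 133 = num_classes + 5 in the shapes above
box_attrs = 5       # x, y, width, height, objectness confidence
anchors_per_cell = 3
channels = anchors_per_cell * (num_classes + box_attrs)
print(channels)  # 399, matching the (1, 399, 80, 80) output
```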
If you're converting to another platform like RKNN and encountering shape issues, it's essential to ensure that the output dimensions expected by your RKNN implementation match the output shapes provided by the YOLOv5 model. You may need to adjust either the export script or your RKNN input processing accordingly to match these dimensions.
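As a rough, hypothetical sketch of how the two layouts relate (using NumPy stand-ins, not the actual RKNN or YOLOv5 API), a per-layer tensor of shape (batch, anchors*attributes, grid_h, grid_w) can be rearranged into the flat (batch, predictions, attributes) form:

```python
import numpy as np

# Hypothetical post-processing sketch: reshape one detection layer's raw
# output (bs, na*no, ny, nx) into flat predictions (bs, na*ny*nx, no).
bs, na, no, ny, nx = 1, 3, 133, 80, 80
raw = np.zeros((bs, na * no, ny, nx), dtype=np.float32)  # e.g. (1, 399, 80, 80)
x = raw.reshape(bs, na, no, ny, nx)   # split anchor and attribute dimensions
x = x.transpose(0, 1, 3, 4, 2)        # -> (bs, na, ny, nx, no)
flat = x.reshape(bs, -1, no)          # -> (1, 19200, 133) for this layer
print(flat.shape)
```

Repeating this per layer and concatenating along the prediction axis yields the consolidated export-style output.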
I hope this clarification helps! If there are more questions or a need for further help, feel free to ask. Happy coding! 😊
from yolov5.
Thank you very much for your reply. Your answer has helped me resolve part of my confusion, but there is still something I don't understand. After encountering this issue with the output shape, I also printed the shape in detect.py. Why is the shape of its prediction torch.Size([1, 15120, 133]), torch.Size([1, 3, 48, 80, 133]), torch.Size([1, 3, 24, 40, 133]), torch.Size([1, 3, 12, 20, 133])? Was there any additional processing done to these outputs? Why is the output (48, 80) instead of (80, 80)?
@lllittleX hello again! Glad to hear the previous response was somewhat helpful!
The shapes you're seeing in detect.py come from YOLOv5's architecture, which uses feature maps of different sizes to detect objects at various scales. torch.Size([1, 15120, 133]) is the flattened form of the predictions, combining all scales. torch.Size([1, 3, 48, 80, 133]), torch.Size([1, 3, 24, 40, 133]), and torch.Size([1, 3, 12, 20, 133]) are the predictions from the three scales. The grid sizes 48x80, 24x40, and 12x20 reflect the different resolutions at which each layer captures features of various extents.
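The flattened count can be verified from the three grids directly, with 3 anchors per cell:

```python
# Check: the flat prediction count is the sum over the three grids,
# times 3 anchors per grid cell.
grids = [(48, 80), (24, 40), (12, 20)]
total = 3 * sum(h * w for h, w in grids)
print(total)  # 15120, matching torch.Size([1, 15120, 133])
```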
The reason the grids are rectangular (48x80) rather than square (e.g., 80x80) is the input image's aspect ratio. YOLOv5 letterboxes the image to a rectangle whose sides are multiples of the maximum stride rather than forcing a square, and each detection layer then downsamples that input by a fixed stride (8, 16, or 32). A 384x640 input therefore produces 48x80, 24x40, and 12x20 grids, preserving the original aspect ratio while keeping inference efficient.
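The grid shapes follow from simple stride division. The 384x640 input size is an assumption inferred from your printed shapes:

```python
# Derive the three grid shapes from a letterboxed input and the
# detection strides. 384x640 is inferred from the 48x80 grid.
input_h, input_w = 384, 640
grids = [(input_h // s, input_w // s) for s in (8, 16, 32)]
print(grids)  # [(48, 80), (24, 40), (12, 20)]
```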
Keep diving into the details; that's how you get the best out of these models! 😉 Happy coding!