Comments (5)
👋 Hello @courtneywhelan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Requirements
Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Notebooks with free GPU:
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Introducing YOLOv8 🚀
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
Hello,
Thank you for reaching out and for your detailed question! It sounds like you're on the right track using the `--class` flag to filter detections. However, if the output video still shows all detections, the issue is likely in how the results are processed and saved.

To ensure that only the filtered classes appear in the output video, you can modify the `detect.py` script slightly. Here's a step-by-step guide:
1. Ensure you're using the latest versions:
   - Make sure you have the latest version of YOLOv5 from the repository.
   - Update your `torch` package to the latest version.

2. Modify `detect.py`:
   - Open the `detect.py` script and locate the section where the results are processed and saved.
   - Add a filter so that only the desired classes are included in the output.

Here's an example modification you can make to the `detect.py` script:
```python
# Assuming you have already parsed the --class argument and have a list of class indices to filter
filtered_classes = [0, 1, 2]  # example class indices you want to keep

# Inside the loop where detections are processed
for i, det in enumerate(pred):  # detections per image
    if len(det):
        # Rescale boxes from the letterboxed input size (im) to the original image size (im0)
        det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()

        # Filter detections by class (Tensor.isin expects a tensor, not a Python list)
        det = det[torch.isin(det[:, 5], torch.tensor(filtered_classes, device=det.device))]

        # Proceed with saving the filtered detections
        for *xyxy, conf, cls in reversed(det):
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                with open(txt_path + '.txt', 'a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')
            if save_img or view_img:  # Add bbox to image
                label = f'{names[int(cls)]} {conf:.2f}'
                plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
```
3. Run the script:
   - Use the `--class` flag to specify the classes you want to keep.
   - Run the modified `detect.py` script to process your video and save the output with only the filtered classes.
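The filtering step itself can be sketched independently of YOLOv5. NumPy's `isin` mirrors the tensor version (note that `torch.isin` specifically requires its second argument to be a tensor, not a plain list); the detection values below are hypothetical:

```python
import numpy as np

# Hypothetical detections: one row per box, columns (x1, y1, x2, y2, conf, cls)
det = np.array([
    [10.0, 10.0, 50.0, 50.0, 0.9, 0.0],  # person
    [20.0, 20.0, 60.0, 60.0, 0.8, 3.0],  # motorcycle
    [30.0, 30.0, 70.0, 70.0, 0.7, 2.0],  # car
])
keep_classes = [0, 2, 5, 7]  # class indices to keep

mask = np.isin(det[:, 5], keep_classes)  # boolean mask, one entry per row
filtered = det[mask]
print(filtered[:, 5].tolist())  # [0.0, 2.0] -- the motorcycle row is dropped
```

The same boolean-mask indexing pattern works on a PyTorch detection tensor once the class list is converted with `torch.tensor(...)`.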
If you encounter any issues or need further assistance, please provide a minimum reproducible example of your code and the exact command you are using to run the script. This will help us better understand the problem and provide a more accurate solution. You can refer to our minimum reproducible example guide for more details.
Thank you for your cooperation, and I hope this helps! 😊
Thanks! I am getting the error `IndexError: tuple index out of range`.

I run the command `python detect_filter.py --weights yolov5x.pt --source ~/file --class 0 2 5 7`

Here is the modified code. Any thoughts on why I am getting this error?
```python
# Process predictions
for i, det in enumerate(pred):  # per image
    seen += 1
    if webcam:  # batch_size >= 1
        p, im0, frame = path[i], im0s[i].copy(), dataset.count
        s += f"{i}: "
    else:
        p, im0, frame = path, im0s.copy(), getattr(dataset, "frame", 0)

    if len(det):
        # Apply NMS
        det[:, :4] = scale_boxes(im0.shape[2:], det[:, :4], im0.shape).round()
        det = det[det[:, 5].isin(opt.classes)]

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # im.jpg
    txt_path = str(save_dir / "labels" / p.stem) + ("" if dataset.mode == "image" else f"_{frame}")  # im.txt
    s += "%gx%g " % im.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    imc = im0.copy() if save_crop else im0  # for save_crop
    annotator = Annotator(im0, line_width=line_thickness, example=str(names))
    if len(det):
        # Rescale boxes from img_size to im0 size
        det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()

        # Print results
        for c in det[:, 5].unique():
            n = (det[:, 5] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Write results
        for *xyxy, conf, cls in reversed(det):
            c = int(cls)  # integer class
            label = names[c] if hide_conf else f"{names[c]}"
            confidence = float(conf)
            confidence_str = f"{confidence:.2f}"
            if save_csv:
                write_to_csv(p.name, label, confidence_str)
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                with open(f"{txt_path}.txt", "a") as f:
                    f.write(("%g " * len(line)).rstrip() % line + "\n")
            if save_img or view_img:  # Add bbox to image
                label = f'{names[int(cls)]} {conf:.2f}'
                plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
```
@courtneywhelan hello,
Thank you for sharing the details of your issue and the modified code. The `IndexError: tuple index out of range` typically indicates that the code is trying to access an index that doesn't exist in a tuple or list. Let's address this step-by-step.
Steps to Troubleshoot:

1. Ensure Latest Versions:
   - Make sure you're using the latest versions of YOLOv5 and `torch`. You can update YOLOv5 by pulling the latest changes from the repository and update `torch` using `pip install --upgrade torch`.

2. Minimum Reproducible Example:
   - To help us investigate further, could you please provide a minimum reproducible example? This should be a snippet that can be run independently and reproduces the error. You can refer to our minimum reproducible example guide for more details.

3. Code Review:
   - The most likely culprit is the first rescaling line, `det[:, :4] = scale_boxes(im0.shape[2:], det[:, :4], im0.shape).round()`. `im0` is the original HWC image, so `im0.shape[2:]` is `(3,)`, a one-element tuple; `scale_boxes` then reads `img1_shape[1]` and raises `IndexError: tuple index out of range`. The first argument should be `im.shape[2:]` (the letterboxed input tensor), as in your second rescaling line, which also makes the first `if len(det):` block redundant.
   - In addition, `det[:, 5].isin(opt.classes)` can fail because `Tensor.isin` expects a tensor, not a Python list; convert the class list with `torch.tensor(opt.classes, device=det.device)` first.
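To see concretely how a `tuple index out of range` can come out of shape slicing, here is a standalone sketch with hypothetical shapes (no YOLOv5 required):

```python
# Shapes as they appear in detect.py: `im` is the letterboxed NCHW input tensor,
# `im0` the original HWC image loaded by OpenCV.
im_shape = (1, 3, 640, 640)   # im.shape  -> (batch, channels, height, width)
im0_shape = (480, 640, 3)     # im0.shape -> (height, width, channels)

print(im_shape[2:])   # (640, 640): a valid (height, width) pair for scale_boxes
print(im0_shape[2:])  # (3,): a one-element tuple

try:
    h, w = im0_shape[2:][0], im0_shape[2:][1]  # scale_boxes indexes img1_shape[1] like this
except IndexError as err:
    print(err)  # tuple index out of range
```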
Suggested Code Fix:
Here's a revised version of your code snippet with added checks and corrections:
```python
# Process predictions
for i, det in enumerate(pred):  # per image
    seen += 1
    if webcam:  # batch_size >= 1
        p, im0, frame = path[i], im0s[i].copy(), dataset.count
        s += f"{i}: "
    else:
        p, im0, frame = path, im0s.copy(), getattr(dataset, "frame", 0)

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # im.jpg
    txt_path = str(save_dir / "labels" / p.stem) + ("" if dataset.mode == "image" else f"_{frame}")  # im.txt
    s += "%gx%g " % im.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    imc = im0.copy() if save_crop else im0  # for save_crop
    annotator = Annotator(im0, line_width=line_thickness, example=str(names))
    if len(det):
        # Rescale boxes from img_size to im0 size
        # (pass im.shape[2:], the input tensor's (h, w); im0.shape[2:] is (3,) for an
        # HWC image and raises IndexError: tuple index out of range inside scale_boxes)
        det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()

        # Filter detections by class (Tensor.isin needs a tensor, not a Python list)
        if opt.classes is not None:
            det = det[torch.isin(det[:, 5], torch.tensor(opt.classes, device=det.device))]

        # Print results
        for c in det[:, 5].unique():
            n = (det[:, 5] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Write results
        for *xyxy, conf, cls in reversed(det):
            c = int(cls)  # integer class
            label = names[c] if hide_conf else f"{names[c]} {conf:.2f}"
            confidence = float(conf)
            confidence_str = f"{confidence:.2f}"
            if save_csv:
                write_to_csv(p.name, label, confidence_str)
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                with open(f"{txt_path}.txt", "a") as f:
                    f.write(("%g " * len(line)).rstrip() % line + "\n")
            if save_img or view_img:  # Add bbox to image
                label = f'{names[int(cls)]} {conf:.2f}'
                plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
```
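As a sanity check on the `save_txt` branch, the normalized-xywh conversion that `xyxy2xywh` and the `gn` gain perform together can be reproduced with plain arithmetic. The helper name, box, and image size below are hypothetical:

```python
def xyxy_to_norm_xywh(x1, y1, x2, y2, img_w, img_h):
    """Corner coordinates -> normalized (center-x, center-y, width, height), the YOLO label format."""
    cx = (x1 + x2) / 2 / img_w  # box center x, as a fraction of image width
    cy = (y1 + y2) / 2 / img_h  # box center y, as a fraction of image height
    w = (x2 - x1) / img_w       # box width, normalized
    h = (y2 - y1) / img_h       # box height, normalized
    return cx, cy, w, h

# A 200x100 box with top-left corner (100, 100) in a 640x480 frame
print(xyxy_to_norm_xywh(100, 100, 300, 200, 640, 480))  # (0.3125, 0.3125, 0.3125, 0.20833...)
```

All four values lie in [0, 1], which is what the label files written by the loop above expect.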
Next Steps:
- Run the updated script and check if the issue persists.
- Provide a minimum reproducible example if the issue continues, so we can further investigate.
Thank you for your patience and cooperation. We're here to help! 😊
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
- Docs: https://docs.ultralytics.com
- HUB: https://hub.ultralytics.com
- Community: https://community.ultralytics.com
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐