
Filter Detections Error about supervision HOT 7 CLOSED

XiaJunfly commented on September 24, 2024
Filter Detections Error


Comments (7)

XiaJunfly commented on September 24, 2024

@rolson24 @SkalskiP
Thank you for your help. It works. However, the IDs of the objects inside each zone are chaotic, because IDs are assigned across the entire frame. As you said, the object becomes inactive (see here), which I don't understand, and so tracker_id comes back empty. I used the code below, with one ByteTrack instance per zone, to deal with these problems:

import time
from tqdm import tqdm
import cv2
import numpy as np
from ultralytics import YOLO
import supervision as sv
import matplotlib.pyplot as plt

model = YOLO('yolov8x.pt')

VIDEO_PATH = 'work_people.mp4'
# VIDEO_PATH = 'traffic_analysis.mov'

video_info = sv.VideoInfo.from_video_path(VIDEO_PATH)

polygons = [
   np.array([[100, 600], [600, 485], [480, 390], [80, 450]]),
   np.array([[950, 400], [1200, 340], [1100, 280], [800, 330]])
]
# polygons = [
#     np.array([[100, 600], [600, 485], [480, 390], [80, 450]])]

# polygons = [
#     np.array([[592, 282], [900, 282], [900, 82], [592, 82]]),
#     np.array([[1250, 282], [1250, 530], [1450, 530], [1450, 282]])
# ]

zones = [sv.PolygonZone(polygon=polygon) for polygon in polygons]
colors = sv.ColorPalette.DEFAULT
zone_annotators = [sv.PolygonZoneAnnotator(zone=zone, color=colors.by_idx(index), thickness=2)
                  for index, zone in enumerate(zones)]
box_annotators = [sv.BoundingBoxAnnotator(color=colors.by_idx(index), thickness=2)
                 for index in range(len(polygons))]
track1 = sv.ByteTrack()
track2 = sv.ByteTrack()
trackers = [track1, track2]
label_annotators = [sv.LabelAnnotator(color=colors.by_idx(index)) for index in range(len(polygons))]
trace_annotators = [sv.TraceAnnotator(color=colors.by_idx(index), trace_length=50) for index in range(len(polygons))]


def process_frame(frame: np.ndarray, i) -> np.ndarray:
   results = model(frame, imgsz=1280, verbose=False, show=False, device='cuda:0')[0]

   # supervision
   detections = sv.Detections.from_ultralytics(results)

   annotated_frame = frame.copy()
   for zone, zone_annotator, box_annotator, label_annotator, trace_annotator, tracker in (
           zip(zones, zone_annotators, box_annotators, label_annotators, trace_annotators, trackers)):
       mask = zone.trigger(detections=detections)

       # keep only the detections that fall inside this zone
       detections_filtered = detections[mask]

       # per-zone tracker, so IDs are local to this zone
       detections_filtered = tracker.update_with_detections(detections_filtered)

       labels = [
           f"#{tracker_id} {results.names[class_id]}"
           for class_id, tracker_id
           in zip(detections_filtered.class_id, detections_filtered.tracker_id)
       ]

       if len(labels) >= 1:
           # ID and labels
           annotated_frame = label_annotator.annotate(annotated_frame, detections=detections_filtered, labels=labels)

           # track
           annotated_frame = trace_annotator.annotate(annotated_frame, detections=detections_filtered)

           annotated_frame = zone_annotator.annotate(scene=annotated_frame)

       else:
           annotated_frame = zone_annotator.annotate(scene=annotated_frame)


   pbar.update(1)

   return annotated_frame


filehead = VIDEO_PATH.split('/')[-1]
OUT_PATH = "out-" + filehead

with tqdm(total=video_info.total_frames - 1) as pbar:
   sv.process_video(source_path=VIDEO_PATH, target_path=OUT_PATH, callback=process_frame)


SkalskiP commented on September 24, 2024

Hi @XiaJunfly 👋🏻 Try running this code:

  import time
  from tqdm import tqdm
  import cv2
  import numpy as np
  from ultralytics import YOLO
  import supervision as sv
  import matplotlib.pyplot as plt
  
  model = YOLO('yolov8x.pt')
  
  VIDEO_PATH = 'people-walking.mp4'
  
  video_info = sv.VideoInfo.from_video_path(VIDEO_PATH)
  
  polygons = [
      np.array([[100, 800], [500, 800],[500, 100], [100, 100]]),
      np.array([[600, 800],[800, 800], [800, 600], [600, 600]])
  ]
  
  zones = [sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh) for polygon in polygons]
  colors = sv.ColorPalette.default()
  zone_annotators = [sv.PolygonZoneAnnotator(zone=zone, color=colors.by_idx(index), thickness=6)
                     for index, zone in enumerate(zones)]
  box_annotators = [sv.BoundingBoxAnnotator(color=colors.by_idx(index), thickness=2)
                    for index in range(len(polygons))]
  tracker = sv.ByteTrack()
  label_annotators = [sv.LabelAnnotator(color=colors.by_idx(index)) for index in range(len(polygons))]
  trace_annotators = [sv.TraceAnnotator(color=colors.by_idx(index)) for index in range(len(polygons))]
  
  def process_frame(frame: np.ndarray, i) -> np.ndarray:
  
      results = model(frame, imgsz=1280, verbose=False, show=False, device='cuda:0')[0]
  
  
      # supervision 
      detections = sv.Detections.from_ultralytics(results)
  
      annotated_frame = frame.copy()
      for zone, zone_annotator, box_annotator, label_annotator,  trace_annotator in (
              zip(zones, zone_annotators, box_annotators, label_annotators, trace_annotators)):
          print('i', zone, zone_annotator, box_annotator, label_annotator,  trace_annotator)
  
          mask = zone.trigger(detections=detections)
  
          # get target
          detections_filtered = detections[mask]
          print('dection', detections_filtered)
  
  
          detections_filtered = tracker.update_with_detections(detections_filtered)
          print('detections_filtered.tracker_id', detections_filtered.class_id, detections_filtered.tracker_id)
  
          labels = [
              f"#{tracker_id} {results.names[class_id]}"
              for class_id, tracker_id
              in zip(detections_filtered.class_id, detections_filtered.tracker_id)
          ]
          print('label', labels)
  
          # box
          annotated_frame = box_annotator.annotate(annotated_frame, detections=detections_filtered)
  
          # ID and labels
          annotated_frame = label_annotator.annotate(annotated_frame, detections=detections_filtered, labels=labels)
  
          # track
          annotated_frame = trace_annotator.annotate(annotated_frame, detections=detections_filtered)
          cv2.imshow("trac", frame)
          cv2.waitKey(0)
  
          annotated_frame = zone_annotator.annotate(scene=annotated_frame)
  
      pbar.update(1)
  
      return annotated_frame
  
  filehead = VIDEO_PATH.split('/')[-1]
  OUT_PATH = "out-" + filehead
  
  with tqdm(total=video_info.total_frames - 1) as pbar:
      sv.process_video(source_path=VIDEO_PATH, target_path=OUT_PATH, callback=process_frame)


XiaJunfly commented on September 24, 2024

@skylargivens
Thank you for your reply. I tried your code, but it shows the same error.


SkalskiP commented on September 24, 2024

@XiaJunfly, can you reproduce this bug in Google Colab?


rolson24 commented on September 24, 2024

Hi @SkalskiP and @XiaJunfly
I know what the issue is.

In PR #1035 I changed the tracker to return the original detections only if they were tracked. It worked in all of my tests, but I forgot to test the case where none of the detections are tracked. After we merged the PR, I noticed that in that case the function returns the original detections with an empty tracker_id (see here). I fixed it in PR #1076, but that fix was not included in 0.20.0.

I realize now that I should have fixed the issue in a separate PR for transparency, and I'm sorry about that.

For now, one workaround is to check whether tracker_id is empty and, if it is, simply throw those detections away (hacky).
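That check could look like the sketch below. It assumes an `sv.Detections`-like object that supports `len()` and boolean-mask indexing; the helper name `keep_only_tracked` is made up for illustration:

```python
import numpy as np

def keep_only_tracked(detections):
    """Hypothetical guard for the 0.20.0 quirk: update_with_detections()
    may return detections whose tracker_id is empty. In that case, drop
    everything rather than annotating boxes that have no IDs."""
    ids = detections.tracker_id
    if ids is None or len(ids) == 0:
        # An all-False mask selects no detections.
        return detections[np.zeros(len(detections), dtype=bool)]
    return detections
```

In the loops above it would slot in right after `tracker.update_with_detections(...)`, so the label and trace annotators never see detections without IDs.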

Here is a colab notebook with the broken code and the possible fix.


SkalskiP commented on September 24, 2024

Hi @XiaJunfly 👋🏻 I am very sorry for the confusion. As @rolson24 has already explained, we are experiencing some issues with the ByteTrack version released in supervision==0.20.0. I apologize once again. For this reason, I encourage you to use the pre-release version supervision==0.21.0rc3.
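For reference, pip skips pre-releases by default, so a plain `pip install supervision` will not pick up the release candidate; either pin the exact version or opt in to pre-releases:

```shell
# Pin the release candidate explicitly:
pip install supervision==0.21.0rc3

# or allow pre-releases during resolution:
pip install --pre supervision
```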

https://pypi.org/project/supervision/0.21.0rc3/


SkalskiP commented on September 24, 2024

I'm closing this issue. Feel free to reopen it if you have more questions.

btw, @rolson24 you are becoming our tracking expert! 🙌🏻

