
rainbowluocs / diffusiontrack


[AAAI 2024] DiffusionTrack: Diffusion Model For Multi-Object Tracking. DiffusionTrack is the first work to employ a diffusion model for multi-object tracking, formulating tracking as a generative noise-to-tracking diffusion process.

License: Other

Python 95.58% C++ 4.42%
diffusion multi-object-tracking object-detection tracking

diffusiontrack's Introduction

Hi there 👋


diffusiontrack's People

Contributors

rainbowluocs


diffusiontrack's Issues

Question about diffusion_postprocess

Hi, author. I am very interested in your work, but I have a question about the code below:

    def diffusion_postprocess(self,diffusion_outputs,conf_scores,nms_thre=0.7,conf_thre=0.6):
        # input shape: (2B, num_proposal, 5), (B, num_proposal, 1)

        pre_prediction,cur_prediction=diffusion_outputs.split(len(diffusion_outputs)//2,dim=0)  
        # (B, num_proposal, 5), (B, num_proposal, 5)

        output = [None for _ in range(len(pre_prediction))]  # [None, None, ... ...] shape=(B)
        for i,(pre_image_pred,cur_image_pred,association_score) in enumerate(zip(pre_prediction,cur_prediction,conf_scores)):

            association_score=association_score.flatten()  # (num_proposal)
            # If none are remaining => process next image
            if not pre_image_pred.size(0):
                continue
            # _, conf_mask = torch.topk((image_pred[:, 4] * class_conf.squeeze()), 1000)
            # Detections ordered as (x1, y1, x2, y2, obj_conf, class_conf, class_pred)
            detections=torch.zeros((2,len(cur_image_pred),7),dtype=cur_image_pred.dtype,device=cur_image_pred.device)
            # detections.shape = (2, num_proposals, 7)
            # detections[0]
            detections[0,:,:4]=pre_image_pred[:,:4]
            detections[1,:,:4]=cur_image_pred[:,:4]
            detections[0,:,4]=association_score
            detections[1,:,4]=association_score
            detections[0,:,5]=torch.sqrt(torch.sigmoid(pre_image_pred[:,4])*association_score)
            detections[1,:,5]=torch.sqrt(torch.sigmoid(cur_image_pred[:,4])*association_score)

            score_out_index=association_score>conf_thre  

            # strategy=torch.mean
            # value=strategy(detections[:,:,5],dim=0,keepdim=False)
            # score_out_index=value>conf_thre

            detections=detections[:,score_out_index,:]  

            if not detections.size(1):
                output[i]=detections
                continue

            nms_out_index_3d = cluster_nms(
                                        detections[0,:,:4],
                                        detections[1,:,:4],
                                        # value[score_out_index],
                                        detections[0,:,4],
                                        iou_threshold=nms_thre)

            detections = detections[:,nms_out_index_3d,:]

            if output[i] is None:
                output[i] = detections
            else:
                output[i] = torch.cat((output[i], detections))

            # output[i] = (2, num_proposal, 7)

        return output[0][0],output[0][1],torch.cat([output[1][0],output[1][1]],dim=0) if len(output)>=2 else None

For the diffusion_postprocess function: given that it takes diffusion_outputs of shape (2*batch_size, num_proposal, 5) and conf_scores of shape (batch_size, num_proposal, 1) as inputs, the return values output[0][0] (num_proposal, 7), output[0][1] (num_proposal, 7), and so on represent the detection results of the previous image for the first/second sample in the batch. This seems to discard the majority of the information in the batch. Why is the process handled this way? If there is a misunderstanding on my part, please correct me. Much appreciated, thank you!
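
To make the shape bookkeeping behind this question concrete, below is a minimal, self-contained sketch (dummy random tensors and a simplified loop; the confidence filtering and cluster_nms of the real diffusion_postprocess are omitted, so this is only an illustration, not the repository's code). It mirrors the split of diffusion_outputs into a previous-frame half and a current-frame half, and the final indexing that only reads output[0] and output[1].

    # Shape-only sketch (assumptions: dummy tensors, no score filtering, no NMS).
    import torch

    B, N = 4, 500                                  # batch_size, num_proposal
    diffusion_outputs = torch.randn(2 * B, N, 5)   # (2B, N, 5): prev half + cur half
    conf_scores = torch.rand(B, N, 1)              # (B, N, 1) association scores

    pre, cur = diffusion_outputs.split(B, dim=0)   # each half is (B, N, 5)

    output = []
    for pre_i, cur_i, score_i in zip(pre, cur, conf_scores):
        det = torch.zeros(2, N, 7)                 # (prev/cur, N, 7)
        det[0, :, :4], det[1, :, :4] = pre_i[:, :4], cur_i[:, :4]
        det[:, :, 4] = score_i.flatten()           # broadcast scores to both frames
        output.append(det)

    # Mirrors the return statement: only the first two batch samples are read back;
    # output[2], output[3], ... are never used.
    prev_first, cur_first = output[0][0], output[0][1]                      # (N, 7) each
    rest = torch.cat([output[1][0], output[1][1]], dim=0) if len(output) >= 2 else None
    print(prev_first.shape, cur_first.shape, None if rest is None else rest.shape)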

ERROR: python3 tools/train.py -f exps/example/mot/yolox_x_diffusion_det_mot17_ablation.py -d 8 -b 16 -o -c pretrained/bytetrack_ablation.pth.tar

2023-11-01 20:36:21.464 | INFO | yolox.core.trainer:before_train:129 - args: Namespace(batch_size=2, ckpt='../pretrained/bytetrack_ablation.pth.tar', devices=1, dist_backend='nccl', dist_url=None, exp_file='../exps/example/mot/yolox_x_diffusion_det_mot17_ablation.py', experiment_name='yolox_x_diffusion_det_mot17_ablation', fp16=False, local_rank=0, machine_rank=0, name=None, num_machines=1, occupy=True, opts=[], resume=False, start_epoch=None)
2023-11-01 20:36:21.465 | INFO | yolox.core.trainer:before_train:130 - exp value:
╒══════════════════╤═════════════════════════════════════════╕
│ keys             │ values                                  │
╞══════════════════╪═════════════════════════════════════════╡
│ seed             │ 8823                                    │
│ output_dir       │ './DiffusionTrack_outputs'              │
│ print_interval   │ 20                                      │
│ eval_interval    │ 5                                       │
│ num_classes      │ 1                                       │
│ depth            │ 1.33                                    │
│ width            │ 1.25                                    │
│ data_num_workers │ 4                                       │
│ input_size       │ (800, 1440)                             │
│ random_size      │ (18, 32)                                │
│ train_ann        │ 'train_half.json'                       │
│ val_ann          │ 'val_half.json'                         │
│ degrees          │ 10.0                                    │
│ translate        │ 0.1                                     │
│ scale            │ (0.1, 2)                                │
│ mscale           │ (0.8, 1.6)                              │
│ shear            │ 2.0                                     │
│ perspective      │ 0.0                                     │
│ enable_mixup     │ True                                    │
│ warmup_epochs    │ 1                                       │
│ max_epoch        │ 30                                      │
│ warmup_lr        │ 0                                       │
│ basic_lr_per_img │ 1.5625e-05                              │
│ scheduler        │ 'yoloxwarmcos'                          │
│ no_aug_epochs    │ 10                                      │
│ min_lr_ratio     │ 0.05                                    │
│ ema              │ True                                    │
│ weight_decay     │ 0.0005                                  │
│ momentum         │ 0.9                                     │
│ exp_name         │ 'yolox_x_diffusion_det_mot17_ablation'  │
│ test_size        │ (800, 1440)                             │
│ random_flip      │ False                                   │
│ task             │ 'detection'                             │
│ conf_thresh      │ 0.4                                     │
│ det_thresh       │ 0.7                                     │
│ nms_thresh2d     │ 0.75                                    │
│ nms_thresh3d     │ 0.7                                     │
│ interval         │ 5                                       │
╘══════════════════╧═════════════════════════════════════════╛
2023-11-01 20:36:24.553 | INFO | yolox.data.datasets.mot:__init__:39 - loading annotations into memory...
2023-11-01 20:36:24.640 | INFO | yolox.data.datasets.mot:__init__:39 - Done (t=0.09s)
2023-11-01 20:36:24.641 | INFO | pycocotools.coco:__init__:86 - creating index...
2023-11-01 20:36:24.649 | INFO | pycocotools.coco:__init__:86 - index created!
2023-11-01 20:36:24.773 | INFO | yolox.core.trainer:before_train:153 - init prefetcher, this might take one minute or less...
2023-11-01 20:36:30.630 | INFO | yolox.data.datasets.mot:__init__:39 - loading annotations into memory...
2023-11-01 20:36:30.735 | INFO | yolox.data.datasets.mot:__init__:39 - Done (t=0.10s)
2023-11-01 20:36:30.735 | INFO | pycocotools.coco:__init__:86 - creating index...
2023-11-01 20:36:30.744 | INFO | pycocotools.coco:__init__:86 - index created!
2023-11-01 20:36:30.857 | INFO | yolox.core.trainer:before_train:181 - Training start...
2023-11-01 20:36:30.857 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch1
2023-11-01 20:36:31.669 | INFO | yolox.core.trainer:after_train:187 - Training of experiment is done and the best AP is 0.00
2023-11-01 20:36:31.670 | ERROR | yolox.core.launch:launch:90 - An error has been caught in function 'launch', process 'MainProcess' (38847), thread 'MainThread' (140253388433216):
Traceback (most recent call last):

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/tools/train.py", line 133, in
args=(exp, args),
โ”‚ โ”” Namespace(batch_size=2, ckpt='../pretrained/bytetrack_ablation.pth.tar', devices=1, dist_backend='nccl', dist_url=None, exp_f...
โ”” โ•’โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•คโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•...

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/yolox/core/launch.py", line 90, in launch
main_func(*args)
โ”‚ โ”” (โ•’โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•คโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•...
โ”” <function main at 0x7f8f34e2ec80>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/tools/train.py", line 110, in main
trainer.train()
โ”‚ โ”” <function Trainer.train at 0x7f8f44900840>
โ”” <yolox.core.trainer.Trainer object at 0x7f8f34e38a20>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/yolox/core/trainer.py", line 73, in train
self.train_in_epoch()
โ”‚ โ”” <function Trainer.train_in_epoch at 0x7f8f44905620>
โ”” <yolox.core.trainer.Trainer object at 0x7f8f34e38a20>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/yolox/core/trainer.py", line 82, in train_in_epoch
self.train_in_iter()
โ”‚ โ”” <function Trainer.train_in_iter at 0x7f8f4491ebf8>
โ”” <yolox.core.trainer.Trainer object at 0x7f8f34e38a20>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/yolox/core/trainer.py", line 88, in train_in_iter
self.train_one_iter()
โ”‚ โ”” <function Trainer.train_one_iter at 0x7f8f4491ec80>
โ”” <yolox.core.trainer.Trainer object at 0x7f8f34e38a20>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/yolox/core/trainer.py", line 105, in train_one_iter
outputs = self.model(inps,targets,self.random_flip,self.input_size)
โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”” (800, 1440)
โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”” <yolox.core.trainer.Trainer object at 0x7f8f34e38a20>
โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”” False
โ”‚ โ”‚ โ”‚ โ”‚ โ”” <yolox.core.trainer.Trainer object at 0x7f8f34e38a20>
โ”‚ โ”‚ โ”‚ โ”” (tensor([[[ 0.0000, 1009.0435, 306.2504, 15.8012, 31.7075],
โ”‚ โ”‚ โ”‚ [ 0.0000, 992.1570, 280.6044, 14.5298, 27...
โ”‚ โ”‚ โ”” (tensor([[[[-0.8335, -0.8335, -0.8335, ..., 1.2282, 1.1493, 1.1380],
โ”‚ โ”‚ [-0.8335, -0.8335, -0.8335, ..., 1.2282,...
โ”‚ โ”” DiffusionNet(
โ”‚ (backbone): YOLOPAFPN(
โ”‚ (backbone): CSPDarknet(
โ”‚ (stem): Focus(
โ”‚ (conv): BaseConv(
โ”‚ (...
โ”” <yolox.core.trainer.Trainer object at 0x7f8f34e38a20>

File "/home/wangtuo/anaconda3/envs/diffusionTrack/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
โ”‚ โ”‚ โ”” {}
โ”‚ โ”” ((tensor([[[[-0.8335, -0.8335, -0.8335, ..., 1.2282, 1.1493, 1.1380],
โ”‚ [-0.8335, -0.8335, -0.8335, ..., 1.2282...
โ”” <bound method DiffusionNet.forward of DiffusionNet(
(backbone): YOLOPAFPN(
(backbone): CSPDarknet(
(stem): Focus(...

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/diffusion/models/diffusionnet.py", line 77, in forward
features,mate_info,targets=torch.cat([pre_targets,cur_targets],dim=0))
โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”” tensor([[[ 0.0000, 1009.0435, 306.2504, 15.8012, 31.7075],
โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ [ 0.0000, 992.1570, 280.6044, 14.5298, 27....
โ”‚ โ”‚ โ”‚ โ”‚ โ”” tensor([[[ 0.0000, 1009.0435, 306.2504, 15.8012, 31.7075],
โ”‚ โ”‚ โ”‚ โ”‚ [ 0.0000, 992.1570, 280.6044, 14.5298, 27....
โ”‚ โ”‚ โ”‚ โ”” <built-in method cat of type object at 0x7f8f30351e80>
โ”‚ โ”‚ โ”” <module 'torch' from '/home/wangtuo/anaconda3/envs/diffusionTrack/lib/python3.7/site-packages/torch/init.py'>
โ”‚ โ”” (torch.Size([2, 3, 800, 1440]), device(type='cuda', index=0), torch.float32)
โ”” ([tensor([[[[-1.9567e-02, -9.6989e-02, -2.6508e-01, ..., -2.2024e-01,
-2.7844e-01, -2.7244e-01],
[ 4.84...

File "/home/wangtuo/anaconda3/envs/diffusionTrack/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
โ”‚ โ”‚ โ”” {'targets': tensor([[[ 0.0000, 1009.0435, 306.2504, 15.8012, 31.7075],
โ”‚ โ”‚ [ 0.0000, 992.1570, 280.6044, 14...
โ”‚ โ”” (([tensor([[[[-1.9567e-02, -9.6989e-02, -2.6508e-01, ..., -2.2024e-01,
โ”‚ -2.7844e-01, -2.7244e-01],
โ”‚ [ 4.8...
โ”” <bound method DiffusionHead.forward of DiffusionHead(
(head): DynamicHead(
(box_pooler): ROIPooler(
(level_pooler...

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/diffusion/models/diffusion_head.py", line 388, in forward
loss_dict = self.criterion(output, targets)
โ”‚ โ”‚ โ”” [{'labels': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
โ”‚ โ”‚ 0, 0, 0, 0, 0, 0, 0, 0, 0...
โ”‚ โ”” {'pred_logits': tensor([[[-2.6417],
โ”‚ [-3.3217],
โ”‚ [-3.2851],
โ”‚ ...,
โ”‚ [-3.0609],
โ”‚ [-2.90...
โ”” DiffusionHead(
(head): DynamicHead(
(box_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(o...

File "/home/wangtuo/anaconda3/envs/diffusionTrack/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
โ”‚ โ”‚ โ”” {}
โ”‚ โ”” ({'pred_logits': tensor([[[-2.6417],
โ”‚ [-3.3217],
โ”‚ [-3.2851],
โ”‚ ...,
โ”‚ [-3.0609],
โ”‚ [-2.9...
โ”” <bound method SetCriterionDynamicK.forward of SetCriterionDynamicK(
(matcher): HungarianMatcherDynamicK()
)>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/diffusion/models/diffusion_losses.py", line 233, in forward
indices, _ = self.matcher(outputs_without_aux, targets)
โ”‚ โ”‚ โ”” [{'labels': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
โ”‚ โ”‚ 0, 0, 0, 0, 0, 0, 0, 0, 0...
โ”‚ โ”” {'pred_logits': tensor([[[-2.6417],
โ”‚ [-3.3217],
โ”‚ [-3.2851],
โ”‚ ...,
โ”‚ [-3.0609],
โ”‚ [-2.90...
โ”” SetCriterionDynamicK(
(matcher): HungarianMatcherDynamicK()
)

File "/home/wangtuo/anaconda3/envs/diffusionTrack/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
โ”‚ โ”‚ โ”” {}
โ”‚ โ”” ({'pred_logits': tensor([[[-2.6417],
โ”‚ [-3.3217],
โ”‚ [-3.2851],
โ”‚ ...,
โ”‚ [-3.0609],
โ”‚ [-2.9...
โ”” <bound method HungarianMatcherDynamicK.forward of HungarianMatcherDynamicK()>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/diffusion/models/diffusion_losses.py", line 383, in forward
cost_giou = -generalized_box_iou(bz_boxes_pre,bz_boxes_curr,bz_gtboxs_abs_xyxy_pre,bz_gtboxs_abs_xyxy_curr)
โ”‚ โ”‚ โ”‚ โ”‚ โ”” tensor([[1001.1429, 290.3966, 1016.9442, 322.1042],
โ”‚ โ”‚ โ”‚ โ”‚ [ 984.8921, 266.8271, 999.4219, 294.3816],
โ”‚ โ”‚ โ”‚ โ”‚ [1068.397...
โ”‚ โ”‚ โ”‚ โ”” tensor([[1001.1429, 290.3966, 1016.9442, 322.1042],
โ”‚ โ”‚ โ”‚ [ 984.8921, 266.8271, 999.4219, 294.3816],
โ”‚ โ”‚ โ”‚ [1068.397...
โ”‚ โ”‚ โ”” tensor([[ 480.7337, -1106.8778, 480.7337, -1106.8778],
โ”‚ โ”‚ [ 817.8951, -325.3340, 817.8951, -325.3340],
โ”‚ โ”‚ [...
โ”‚ โ”” tensor([[ 905.1316, 297.8546, 905.1316, 297.8546],
โ”‚ [ 964.9771, 156.5497, 964.9771, 156.5497],
โ”‚ [ 727.037...
โ”” <function generalized_box_iou at 0x7f8f32cb07b8>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/yolox/utils/box_ops.py", line 77, in generalized_box_iou
return giou_3d(boxes1,boxes3,boxes2,boxes4)
โ”‚ โ”‚ โ”‚ โ”‚ โ”” tensor([[1001.1429, 290.3966, 1016.9442, 322.1042],
โ”‚ โ”‚ โ”‚ โ”‚ [ 984.8921, 266.8271, 999.4219, 294.3816],
โ”‚ โ”‚ โ”‚ โ”‚ [1068.397...
โ”‚ โ”‚ โ”‚ โ”” tensor([[ 480.7337, -1106.8778, 480.7337, -1106.8778],
โ”‚ โ”‚ โ”‚ [ 817.8951, -325.3340, 817.8951, -325.3340],
โ”‚ โ”‚ โ”‚ [...
โ”‚ โ”‚ โ”” tensor([[1001.1429, 290.3966, 1016.9442, 322.1042],
โ”‚ โ”‚ [ 984.8921, 266.8271, 999.4219, 294.3816],
โ”‚ โ”‚ [1068.397...
โ”‚ โ”” tensor([[ 905.1316, 297.8546, 905.1316, 297.8546],
โ”‚ [ 964.9771, 156.5497, 964.9771, 156.5497],
โ”‚ [ 727.037...
โ”” <function giou_3d at 0x7f8f40e481e0>

File "/home/wangtuo/Downloads/Multi-Object-Tracking/Paper/DiffusionTrack-main/yolox/utils/cluster_nms.py", line 70, in giou_3d
intercd = intersect(box_c,box_d)
โ”‚ โ”‚ โ”” tensor([[[1001.1429, 290.3966, 1016.9442, 322.1042],
โ”‚ โ”‚ [ 984.8921, 266.8271, 999.4219, 294.3816],
โ”‚ โ”‚ [1068....
โ”‚ โ”” tensor([[[ 480.7337, -1106.8778, 480.7337, -1106.8778],
โ”‚ [ 817.8951, -325.3340, 817.8951, -325.3340],
โ”‚ ...
โ”” <torch.jit.ScriptFunction object at 0x7f8f40e4c258>

RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

nvrtc compilation failed:

#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)

template<typename T>
__device__ T maximum(T a, T b) {
return isnan(a) ? a : (a > b ? a : b);
}

template<typename T>
__device__ T minimum(T a, T b) {
return isnan(a) ? a : (a < b ? a : b);
}

extern "C" __global__
void fused_min_max_sub_clamp(float* t_, float* t__, float* t___, float* t____, float* aten_clamp) {
{
if (512 * blockIdx.x + threadIdx.x<61000 ? 1 : 0) {
float t____1 = __ldg(t_ + (512 * blockIdx.x + threadIdx.x) % 2 + 4 * (((512 * blockIdx.x + threadIdx.x) / 122) % 500));
float t_____1 = __ldg(t__ + (512 * blockIdx.x + threadIdx.x) % 2 + 4 * (((512 * blockIdx.x + threadIdx.x) / 2) % 61));
float t__1 = __ldg(t___ + (512 * blockIdx.x + threadIdx.x) % 2 + 4 * (((512 * blockIdx.x + threadIdx.x) / 122) % 500));
float t___1 = __ldg(t____ + (512 * blockIdx.x + threadIdx.x) % 2 + 4 * (((512 * blockIdx.x + threadIdx.x) / 2) % 61));
aten_clamp[512 * blockIdx.x + threadIdx.x] = (minimum(t_____1,t____1)) - (maximum(t___1,t__1))<0.f ? 0.f : (minimum(t_____1,t____1)) - (maximum(t___1,t__1));
}
}
}
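
The crash happens inside intersect(box_c, box_d), which the traceback reports as a torch.jit ScriptFunction, so the kernel above is generated and compiled at runtime by PyTorch's fuser via NVRTC; the "invalid value for --gpu-architecture (-arch)" message usually indicates that the installed PyTorch/CUDA build does not recognize the local GPU architecture. As a hedged sketch (these are standard PyTorch JIT switches, not something from this repository, and not a substitute for installing a PyTorch build that matches the GPU), the fused-kernel path can be disabled while debugging:

    # Hypothetical workaround sketch (not from the repository): disable TensorExpr
    # fusion so torch.jit does not NVRTC-compile fused kernels such as
    # fused_min_max_sub_clamp. Could be placed before training starts, e.g. near the
    # top of tools/train.py. The real fix is a PyTorch/CUDA build matching the GPU.
    import torch

    torch._C._jit_set_texpr_fuser_enabled(False)   # turn off the TensorExpr fuser
    torch._C._jit_set_profiling_executor(False)    # fall back to the legacy JIT executor
    torch._C._jit_set_profiling_mode(False)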

'sampling_steps' during inference

When I read the code, I found that because 'self.sampling_steps=1' during tracking, the model never enters DDIM's 'def diffusion(sampling_times, bboxes, x_start, pred_noise)'. What is the author's intention in doing this?
Looking forward to your reply.
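
For reference, here is an illustrative DDIM-style sampling sketch (hypothetical names and a generic linear noise schedule, not the repository's exact code). It shows why a single sampling step never reaches the per-step refinement: with sampling_steps == 1 the schedule collapses to one (t, t_next) pair whose t_next is -1, so the head runs once and its prediction is returned directly, while the extra DDIM update is only exercised when sampling_steps > 1.

    # Illustrative DDIM-style loop (assumed names; only the control flow matters here).
    import torch

    def ddim_sample(denoise_fn, num_proposals, sampling_steps, total_timesteps=1000):
        """denoise_fn(boxes, t) -> (pred_x0, pred_noise); a stand-in for the detection head."""
        betas = torch.linspace(1e-4, 0.02, total_timesteps)        # generic linear schedule
        alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

        boxes = torch.randn(num_proposals, 4)                      # start from Gaussian noise
        times = torch.linspace(-1, total_timesteps - 1, sampling_steps + 1).long().flip(0)
        for t, t_next in zip(times[:-1], times[1:]):               # sampling_steps pairs
            pred_x0, pred_noise = denoise_fn(boxes, t)             # one denoising pass
            if t_next < 0:                                         # last pair: done
                boxes = pred_x0                                    # no refinement step
                continue
            a_next = alphas_cumprod[t_next]                        # DDIM (eta = 0) update
            boxes = a_next.sqrt() * pred_x0 + (1.0 - a_next).sqrt() * pred_noise
        return boxes

    # With sampling_steps == 1: times == [999, -1], the loop body runs once and the
    # refinement branch after the early exit is never executed.
    stub_head = lambda boxes, t: (torch.zeros_like(boxes), torch.zeros_like(boxes))
    print(ddim_sample(stub_head, num_proposals=8, sampling_steps=1).shape)  # torch.Size([8, 4])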

yolox error

ERROR | yolox.core.launch:90 - An error has been caught in function 'launch', process 'MainProcess' (44024), thread 'MainThread' (540):

Accuracy issue

Hello, I trained and tested on DanceTrack and the accuracy is extremely low: HOTA is only 23.7. When I run inference with the checkpoint you provided, the accuracy is also only 46.3.

File missing

Dear author, thank you very much for your excellent work! Are files such as /datasets missing from yolox/data/? While training, I ran into missing imports: 'DiffusionMosaicDetection' from 'yolox.data' and 'MosaicDetection' from 'yolox.data'.
