chris-hughes10 / yolov7-training

A clean, modular implementation of the Yolov7 model family, which uses the official pretrained weights, with utilities for training the model on custom (non-COCO) tasks.

License: GNU General Public License v3.0

yolov7-training's People

Contributors: bepuca, chris-hughes10, dariush-bahrami, lars-nieradzik, tkupek

yolov7-training's Issues

Request for yolov7-tiny weights

Hi, I do not see a release file for yolov7-tiny. Do you plan to release one? It is available from the official repo here: https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt

When I tried to load this ckpt myself, I encountered the following error:

(Pdb) torch.load('yolov7-tiny.pt', map_location="cpu")
*** AttributeError: Can't get attribute 'Model' on <module 'models.yolo' (namespace)>

Is there something special to be done to convert the official weights to the ones compatible with your repo?

Thanks in advance!
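For context (my own note, not from the maintainers): the official release files pickle the entire model object, so torch.load must be able to import the original models.yolo.Model class, which is exactly what the AttributeError above is complaining about. This repo instead distributes plain state dicts, which carry no such class dependency. A minimal stdlib sketch of the difference:

```python
import io
import pickle

class Model:  # stands in for models.yolo.Model in the official repo
    def __init__(self):
        self.weight = [1.0, 2.0]

# Pickling the object stores a *reference* to its class, not the class itself;
# unpickling in an environment that cannot import that class raises AttributeError.
buf = io.BytesIO()
pickle.dump(Model(), buf)

# A state dict is a plain dict of values, so it can be unpickled anywhere.
state_dict = {"weight": [1.0, 2.0]}
buf2 = io.BytesIO()
pickle.dump(state_dict, buf2)
restored = pickle.loads(buf2.getvalue())
```

So one possible workaround might be to load the official checkpoint inside a clone of WongKinYiu/yolov7 (where models.yolo is importable), call .state_dict() on the loaded model, and save that; whether the resulting keys line up with this repo's layer names is something I have not verified.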

Is there a bug in get_yolov7_e6e_config?

Lines 1152-1157:
[
[257, 258, 259, 260, 261, 262, 263, 264],
1,
detection_head,
[num_channels, anchor_sizes_per_layer, strides, True],
],

Should it be the following instead?
[
[257, 258, 259, 260, 261, 262, 263, 264],
1,
detection_head,
[num_classes, anchor_sizes_per_layer, strides, True],
],

annotated data format

I was looking into your blog post and the codebase here. Your annotation file is a .csv. Which format do these annotations use: COCO format or YOLO format?

AttributeError: 'DistributedDataParallel' object has no attribute 'postprocess'

When I run the code with 2 GPUs (torchrun --nproc_per_node=2 train_cars.py, or notebook_launcher(main, num_processes=2) in a .py file), the following error occurs:

File "/data/code/Yolov7-training/yolov7/trainer.py", line 132, in calculate_eval_batch_loss
    preds = self.model.postprocess(fpn_heads_outputs, conf_thres=0.001)
File "/home/xiaoxiong/anaconda3/envs/yolov7/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DistributedDataParallel' object has no attribute 'postprocess'
How can I fix it?
(I also asked in Chris-hughes10/pytorch-accelerated#56)
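One common fix pattern (a sketch on my part, not necessarily how the maintainers resolved it): DistributedDataParallel only proxies standard nn.Module attributes, so custom methods like postprocess must be called on the wrapped model, reachable via .module:

```python
import torch.nn as nn

def unwrap_model(model: nn.Module) -> nn.Module:
    """Return the underlying module when wrapped (e.g. by DistributedDataParallel)."""
    return model.module if hasattr(model, "module") else model

# usage sketch, with the names from the traceback above:
# preds = unwrap_model(self.model).postprocess(fpn_heads_outputs, conf_thres=0.001)
```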

Error during fine tuning

Hi,

Thank you for providing your code for YOLOv7 training. It's very intuitive. While trying to fine-tune on my own dataset, I ran into the error below, which I believe may not be related to my changes. I set up the env using your requirements file and I am running on a GCP machine with a Tesla T4. Below is a snippet of the error. Please let me know if this is a known issue or what a possible cause might be. Thanks in advance.

Transferred 555/566 items from https://github.com/Chris-hughes10/Yolov7-training/releases/download/0.1.0/yolov7_training_state_dict.pt

Starting training run

Starting epoch 1
  0%|                                                                                                   | 0/1013 [00:00<?, ?it/s]
torch.Size([9, 6]) 4.  <--- output of print(image_preds.shape, PredIdx.OBJ)
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:367: operator(): block: [0,0,0], thread: [0,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
Traceback (most recent call last):
    main()
  File "/opt/conda/envs/yolov7/lib/python3.9/site-packages/func_to_script/core.py", line 108, in scripted_function
    return func(**args)
  File "/home/varshanth/yolov7/fine_tune.py", line 77, in main
    trainer.train(
  File "/opt/conda/envs/yolov7/lib/python3.9/site-packages/pytorch_accelerated/trainer.py", line 467, in train
    self._run_training()
  File "/opt/conda/envs/yolov7/lib/python3.9/site-packages/pytorch_accelerated/trainer.py", line 676, in _run_training
    self._run_train_epoch(self._train_dataloader)
  File "/opt/conda/envs/yolov7/lib/python3.9/site-packages/pytorch_accelerated/trainer.py", line 749, in _run_train_epoch
    self._perform_forward_and_backward_passes(batch)
  File "/opt/conda/envs/yolov7/lib/python3.9/site-packages/pytorch_accelerated/trainer.py", line 776, in _perform_forward_and_backward_passes
    batch_output = self.calculate_train_batch_loss(batch)
  File "/home/varshanth/yolov7/yolov7/trainer.py", line 107, in calculate_train_batch_loss
    loss, _ = self.loss_func(
  File "/home/varshanth/yolov7/yolov7/loss.py", line 204, in __call__
    box_loss, obj_loss, cls_loss = self._compute_losses(
  File "/home/varshanth/yolov7/yolov7/loss.py", line 237, in _compute_losses_for_train
    anchor_boxes_per_layer, targets_per_layer = self.simOTA_assignment(
  File "/home/varshanth/yolov7/yolov7/loss.py", line 619, in simOTA_assignment
    pred_objectness = image_preds[:, [PredIdx.OBJ]]
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
  0%|                                                                                                   | 0/1013 [00:03<?, ?it/s
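Not a confirmed diagnosis, but device-side asserts in ScatterGatherKernel during loss computation are often triggered by class ids in the annotations falling outside [0, num_classes). A quick CPU-side sanity check over the dataset (my own helper, not part of the repo) can rule that out before re-running with CUDA_LAUNCH_BLOCKING=1:

```python
import torch

def check_class_ids(labels: torch.Tensor, num_classes: int) -> None:
    # Out-of-range ids produce an unreadable device-side assert on GPU;
    # checking on CPU turns it into a clear Python error instead.
    bad = labels[(labels < 0) | (labels >= num_classes)]
    if bad.numel() > 0:
        raise ValueError(f"out-of-range class ids: {sorted(bad.unique().tolist())}")

check_class_ids(torch.tensor([0, 1, 2]), num_classes=3)  # passes silently
```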

if bbox error

Hey Chris, excellent bit of work, both the blog post and this repo. I'm just borrowing your mixup and mosaic augmentation implementations for the time being, but I'm a little confused by this line:

For me it was causing this error:

 The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

i.e., NumPy has no idea what we're checking the truth of when we feed it the bbox array, which I would have thought is the default happy path?

For my use case I've just removed the condition.
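For anyone hitting the same thing, the error is NumPy's ambiguous-truth-value guard, and it appears as soon as the bbox value is a multi-element array rather than a plain list. A minimal reproduction of the error and the explicit checks that avoid it:

```python
import numpy as np

bbox = np.array([10, 20, 30, 40])
try:
    if bbox:  # multi-element array: NumPy refuses to choose any()/all() for you
        pass
except ValueError as err:
    message = str(err)  # "The truth value of an array ... is ambiguous ..."

# Stating the intent explicitly avoids the ambiguity:
assert bbox.size > 0        # "is there anything in it?"
assert (bbox >= 0).all()    # "are all values valid?"
```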

Resume training

My training stopped because my PC accidentally shut down. Is it possible to resume the training? If so, how do I do it?

Error when training a _w6 model

Hi,
Loading a w6 model config using the code below:

    model = create_yolov7_model(
        architecture="yolov7-w6", num_classes=num_classes, pretrained=pretrained
    )

raises an error in the constructor of the Yolov7DetectionHeadWithAux class. The cause is that use_implicit_modules is being passed to this class's constructor, which does not expect that arg. To fix this, you can remove the True from the config on line 470 of model_configs.py, but that exposes a second error: when use_implicit_modules is left at its default, the forward pass of the network throws an index-out-of-bounds error. I have patched this by defaulting the parameter to True when called from the Yolov7DetectionHeadWithAux constructor, but I'm not sure if this is the correct approach.
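A sketch of the workaround described above (class and argument names follow the issue; the real signatures in the repo may differ), defaulting the flag inside the aux head instead of threading it through the config:

```python
class Yolov7DetectionHead:
    def __init__(self, num_classes: int, use_implicit_modules: bool = True):
        self.num_classes = num_classes
        self.use_implicit_modules = use_implicit_modules

class Yolov7DetectionHeadWithAux(Yolov7DetectionHead):
    # The aux head does not accept use_implicit_modules as an argument;
    # it unconditionally defaults the flag when calling the base constructor.
    def __init__(self, num_classes: int):
        super().__init__(num_classes, use_implicit_modules=True)

head = Yolov7DetectionHeadWithAux(num_classes=2)
```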

List or Tuple as output?

PyTorch tracing prefers tuples as model outputs, but the model currently returns lists. Can this be changed without breaking things?
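To illustrate (my own minimal example, not repo code): torch.jit.trace handles tuple outputs cleanly under its default strict mode, so switching the model's list outputs to tuples is usually safe for callers that only index or unpack the result:

```python
import torch
import torch.nn as nn

class TupleOut(nn.Module):
    def forward(self, x):
        # a tuple output traces without strict-mode complaints
        return x, x * 2

x = torch.randn(1, 3)
traced = torch.jit.trace(TupleOut(), x)
a, b = traced(x)  # unpacking works exactly as it would with a list
```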

Testing the trained model

Thank you very much for publishing this code. It has helped us better understand YOLOv7's building blocks.
Recently I used your dataset to train the yolov7 model and it seems to be working fine; even the mAP on the cars dataset you released is around 0.98. However, after training, I wanted to run inference on test data with the trained model, using the following code.


image_size: int = 640
num_classes = 1
pretrained=False
DATA_PATH = Path("Yolov7-trainer/datasets/data")
ckpt_path = "./lightning_logs/checkpoints/epoch=4-step=205.ckpt"
batch_size: int = 8

data_path = Path(DATA_PATH)
images_path = data_path / "training_images"
annotations_file_path = data_path / "annotations.csv"


model = create_yolov7_model(
    architecture="yolov7", num_classes=num_classes, pretrained=True
)
loss_func = create_yolov7_loss(model, image_size=image_size)

optimizer = torch.optim.SGD(
    model.parameters(), lr=0.01, momentum=0.9, nesterov=True
)
loaded_model = Yolov7Trainer.load_from_checkpoint(ckpt_path, model=model, optimizer=optimizer, loss_func=loss_func)
loaded_model.eval()

#single image prediction
img = np.array(Image.open(f'{images_path}/vid_4_2160.jpg').convert("RGB"))
original_image_sizes = torch.tensor([img.shape[:2]])

transforms = create_yolov7_transforms()
img = transforms(image=img, bboxes=[], labels = [])

with torch.no_grad():
    img = torch.from_numpy(img["image"]).permute(2,0,1).unsqueeze(0).type('torch.FloatTensor')
    y_hat = loaded_model(img.cuda())
    preds = model.postprocess(y_hat, conf_thres=0.8) #threshold
    preds = filter_eval_predictions(preds)  #nms

After doing so, I tried to overlay the inference results on the image that was inferenced. But the results don't seem to align with the ground truth. Here is how I tried to draw the predicted bounding boxes.

img = np.array(Image.open(f'{images_path}/vid_4_2160.jpg').convert("RGB"))
img = transforms(image=img, bboxes=[], labels = [])
fig, axe = plt.subplots()
axe.imshow(img["image"])
bboxes = preds[0][:,:4]
for box in bboxes.cpu():
    box_cvt = torchvision.ops.box_convert(box, in_fmt="xyxy", out_fmt="xywh")
    rect = patches.Rectangle((box_cvt[0],box_cvt[1]), 
                             box_cvt[2], 
                             box_cvt[3],
                             edgecolor="b",
                             fill=False)
    axe.add_patch(rect)

Here is the result of the above code run (output plot omitted).

I am wondering if I have made a mistake translating the predicted bounding boxes?
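For what it's worth, the xyxy-to-xywh conversion in the snippet already produces exactly what Rectangle((x, y), w, h) expects, since torchvision's "xywh" means (x_min, y_min, width, height). A plain-Python check of the same arithmetic:

```python
# Equivalent to torchvision.ops.box_convert(box, in_fmt="xyxy", out_fmt="xywh")
def xyxy_to_xywh(box):
    x1, y1, x2, y2 = box
    return x1, y1, x2 - x1, y2 - y1

assert xyxy_to_xywh((10, 20, 110, 70)) == (10, 20, 100, 50)
```

If the conversion is right but boxes are still misaligned, the mismatch might instead come from drawing on an image in a different coordinate space than the one the model saw (e.g. before vs. after the resize/pad transform), though I have not verified that for this snippet.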

pair_wise_cls_loss incorrect?

Hi, thank you for rewriting YOLOv7. The original code is really a mess.

The line pair_wise_cls_loss = F.binary_cross_entropy(...) might be incorrect. At least mixed precision does not work when it is written this way.

The original code does: y = sqrt(sigmoid(x) * sigmoid(y)), then applies log(y/(1-y)) to get back to the logit. binary_cross_entropy_with_logits is needed for mixed precision. In the end, the original pair_wise_cls_loss simplifies to something like -0.5*(p*log(e^(x + y)/((1 + e^x)(1 + e^y))) + (1-p)*log(1 - e^(x + y)/((1 + e^x)(1 + e^y)))). It might be possible to simplify this part in a different way.

pred_class_probs = (
    (pred_class_scores.sigmoid_() * pred_objectness.sigmoid_())
    .unsqueeze(dim=0)
    .repeat(num_image_targets, 1, 1)
)
y = pred_class_probs.sqrt_()
pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
    torch.log(y/(1-y)), target_class_probs, reduction="none"
).sum(-1)

I was also wondering if you tested your code on bigger datasets. I have trouble with PASCAL VOC.
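On the rewrite proposed above: the log(y / (1 - y)) term is just the inverse sigmoid, and PyTorch exposes it directly as torch.logit (with an optional eps that clamps inputs away from 0 and 1), which might read more clearly than the hand-written form:

```python
import torch

y = torch.tensor([0.25, 0.5, 0.9])
manual = torch.log(y / (1 - y))           # hand-written inverse sigmoid
assert torch.allclose(torch.logit(y), manual)

# eps clamps inputs into [eps, 1 - eps] before inverting, avoiding infs:
safe = torch.logit(torch.tensor([0.0, 1.0]), eps=1e-6)
assert torch.isfinite(safe).all()
```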

Transforms throw errors on non-square images

Given two images with inverted aspect ratios (image 1: 1000x500, image 2: 500x1000) and a desired image size of 1000x500, the A.LongestMaxSize transform will not pad either image, producing images of mismatched shapes that cannot be collated into a batch. To resolve this, each image should be resized to the largest size that fits within the given dimensions and then padded.
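The "largest size that fits" resize suggested above can be expressed as a single scale factor (my sketch, not repo code); padding the remainder afterwards then makes every image in the batch the same shape:

```python
def fit_within(width: int, height: int, max_width: int, max_height: int):
    """Scale (width, height) by the largest factor that keeps the result
    inside (max_width, max_height), preserving aspect ratio."""
    scale = min(max_width / width, max_height / height)
    return round(width * scale), round(height * scale)

assert fit_within(1000, 500, 1000, 500) == (1000, 500)  # image 1: unchanged
assert fit_within(500, 1000, 1000, 500) == (250, 500)   # image 2: then pad width
```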

Hi, How do you resume from best_model.pt?

Hello,

For some reason my machine lost power during training.
Is it possible to resume the previous training from best_model.pt ?

I realized #13 (comment) didn't solve the problem completely...

My rough code:

if RESUME_LOCAL_PATH is not None:
    ckpoint = torch.load(RESUME_LOCAL_PATH)
    optimizer.state_dict().update(ckpoint['optimizer_state_dict'])

......

state_dict = torch.load(RESUME_LOCAL_PATH)
state_dict = intersect_dicts(
    state_dict,
    model.state_dict(),
    exclude=["anchor"],
)

model.load_state_dict(state_dict, strict=False)

I'm not sure if I'll be able to get back to my previous training.
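One detail worth flagging in the rough code above: optimizer.state_dict() returns a fresh copy, so calling .update(...) on it never touches the optimizer itself; optimizer.load_state_dict(...) is the call that actually restores state. A runnable contrast (key names here are illustrative):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

saved = opt.state_dict()
saved["param_groups"][0]["lr"] = 0.05   # pretend this came from a checkpoint

opt.state_dict().update(saved)          # updates a temporary copy: no effect
assert opt.param_groups[0]["lr"] == 0.1

opt.load_state_dict(saved)              # actually restores the saved state
assert opt.param_groups[0]["lr"] == 0.05
```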

Once a custom yolov7 is trained, how can I use it with test.py and detect.py?

Hello,
Thanks for your deep-dive implementation of a custom yolov7 model. I just finalized training and got best_model.pt and ema_model.pt. Then I copied these PyTorch files into the yolov7 repository I downloaded (your forked version, found here: https://github.com/Chris-hughes10/yolov7) in order to use the model with detect.py. But it does not work. Please find below the script I ran and the error I got. Can you help me fix this, or tell me the steps I have to follow to use the model?

Many thanks in advance for your help! Have a nice day!

The script:
!python detect.py --weights best_model.pt --conf 0.5 --img-size 640 --source {image_folder} --no-trace

The error:

Namespace(weights=['best_model.pt'], source='/content/gdrive/MyDrive/CookiePy/Script/yolov7/data/TestSet/images', img_size=640, conf_thres=0.5, iou_thres=0.45, device='', view_img=False, save_txt=False, save_conf=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project='runs/detect', name='exp', exist_ok=False, no_trace=True)
YOLOR 🚀 55b90e1 torch 2.0.1+cu118 CPU

Traceback (most recent call last):
  File "/content/gdrive/MyDrive/CookiePy/Script/yolov7/detect.py", line 195, in <module>
    detect()
  File "/content/gdrive/MyDrive/CookiePy/Script/yolov7/detect.py", line 34, in detect
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "/content/gdrive/MyDrive/CookiePy/Script/yolov7/models/experimental.py", line 253, in attempt_load
    model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model
KeyError: 'model'
