
yolov4-tf2's Introduction

YOLOV4: Implementation of the You Only Look Once object detection model in TensorFlow 2


Table of Contents

  1. Top News
  2. Related Repositories
  3. Performance
  4. Achievements
  5. Environment
  6. Downloads
  7. Training Steps (How2train)
  8. Prediction Steps (How2predict)
  9. Evaluation Steps (How2eval)
  10. Reference

Top News

2023-07: Added a seed setting so that every training run produces identical results.

2022-04: Added support for multi-GPU training, per-class object counting, and heatmaps.

2022-03: Major update: reworked the loss composition so that the classification, objectness, and regression losses are weighted appropriately; added step and cosine learning-rate decay schedules; added a choice between the Adam and SGD optimizers; added learning-rate scaling based on batch_size; added image cropping.
The original repository from the BiliBili video is at: https://github.com/bubbliiiing/yolov4-tf2/tree/bilibili

2021-10: Major update: added extensive comments and many adjustable parameters, reorganized the code modules, and added FPS measurement, video prediction, batch prediction, and other features.

Related Repositories

Model Repository
YoloV3 https://github.com/bubbliiiing/yolo3-tf2
Efficientnet-Yolo3 https://github.com/bubbliiiing/efficientnet-yolo3-tf2
YoloV4 https://github.com/bubbliiiing/yolov4-tf2
YoloV4-tiny https://github.com/bubbliiiing/yolov4-tiny-tf2
Mobilenet-Yolov4 https://github.com/bubbliiiing/mobilenet-yolov4-tf2
YoloV5-V5.0 https://github.com/bubbliiiing/yolov5-tf2
YoloV5-V6.1 https://github.com/bubbliiiing/yolov5-v6.1-tf2
YoloX https://github.com/bubbliiiing/yolox-tf2
Yolov7 https://github.com/bubbliiiing/yolov7-tf2
Yolov7-tiny https://github.com/bubbliiiing/yolov7-tiny-tf2

Performance

Training dataset  Weight file           Test dataset  Input size  mAP 0.5:0.95  mAP 0.5
VOC07+12+COCO     yolo4_voc_weights.h5  VOC-Test07    416x416     -             88.9
COCO-Train2017    yolo4_weight.h5       COCO-Val2017  416x416     46.4          70.5

Achievements

  • Backbone feature-extraction network: DarkNet53 => CSPDarkNet53
  • Feature pyramid: SPP, PAN
  • Training tricks: Mosaic data augmentation, label smoothing, CIoU loss, cosine-annealing learning-rate decay
  • Activation function: Mish
  • …and more

Environment

tensorflow-gpu==2.2.0
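
Before training, a quick sanity check can confirm the environment is set up. This is a minimal sketch assuming any TF 2.x GPU build; the version pinned above is 2.2.0:

import tensorflow as tf

# Minimal environment check: the installed version and the visible GPUs.
print(tf.__version__)                                        # expect 2.2.0
print(tf.config.experimental.list_physical_devices('GPU'))   # expect at least one GPU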

Downloads

The yolo4_weights.h5 needed for training can be downloaded from Baidu Netdisk.
Link: https://pan.baidu.com/s/1PULpnKyv6ZQ6NHqmS5qejw
Extraction code: 56fe
yolo4_weights.h5 holds weights trained on the COCO dataset.
yolo4_voc_weights.h5 holds weights trained on the VOC dataset.

The VOC dataset can be downloaded at the link below. It already includes the training, test, and validation sets (the validation set is identical to the test set), so no further splitting is needed:
Link: https://pan.baidu.com/s/19Mw2u_df_nBzsC2lg20fQA
Extraction code: j5ge

Training Steps

a. Training on the VOC07+12 dataset

  1. Dataset preparation
    Training uses the VOC format. Download the VOC07+12 dataset and extract it into the repository root before training.

  2. Dataset processing
    Set annotation_mode=2 in voc_annotation.py, then run voc_annotation.py to generate 2007_train.txt and 2007_val.txt in the root directory.

  3. Start network training
    The default parameters in train.py are set up for the VOC dataset, so simply run train.py to start training.

  4. Predicting with the training results
    Prediction uses two files: yolo.py and predict.py. First edit model_path and classes_path in yolo.py; both parameters must be changed.
    model_path points to the trained weight file in the logs folder.
    classes_path points to the txt file listing the detection classes.

    Once these changes are made, run predict.py to detect; enter an image path after it starts.

b. Training on your own dataset

  1. Dataset preparation
    Training uses the VOC format, so prepare your own dataset before training.
    Put the annotation files in the Annotation folder under VOCdevkit/VOC2007.
    Put the image files in the JPEGImages folder under VOCdevkit/VOC2007.

  2. Dataset processing
    Once the files are in place, use voc_annotation.py to generate the 2007_train.txt and 2007_val.txt needed for training.
    Edit the parameters in voc_annotation.py. For a first training run you only need to change classes_path, which points to the txt file listing the detection classes.
    When training your own dataset, create a cls_classes.txt containing the classes you want to distinguish.
    The contents of model_data/cls_classes.txt would be:

cat
dog
...

Change classes_path in voc_annotation.py so it points to cls_classes.txt, then run voc_annotation.py.
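
For illustration, a minimal sketch of how a classes_path txt like the one above is typically read. The repository ships its own helper for this; the function below is a hypothetical stand-in:

def get_classes(classes_path):
    # One class name per line, as in the model_data/cls_classes.txt example above.
    with open(classes_path, encoding='utf-8') as f:
        class_names = [line.strip() for line in f if line.strip()]
    return class_names, len(class_names)

class_names, num_classes = get_classes('model_data/cls_classes.txt')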

  3. Start network training
    There are many training parameters, all in train.py; read the comments carefully after downloading the repository. The most important one is still classes_path in train.py.
    classes_path points to the txt file listing the detection classes, the same txt used by voc_annotation.py! It must be changed when training your own dataset!
    After changing classes_path you can run train.py to start training; after several epochs, weights are saved to the logs folder (see the checkpoint sketch after this list).

  4. Predicting with the training results
    Prediction uses two files: yolo.py and predict.py. Edit model_path and classes_path in yolo.py.
    model_path points to the trained weight file in the logs folder.
    classes_path points to the txt file listing the detection classes.

    Once these changes are made, run predict.py to detect; enter an image path after it starts.
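
As referenced in step 3, weights land in the logs folder once per epoch. Below is a hedged sketch of how such checkpointing is typically wired up with Keras; train.py's actual callback setup may differ, and the filename pattern is an assumption:

from tensorflow.keras.callbacks import ModelCheckpoint

# Save weights into logs/ after every epoch; pass callbacks=[checkpoint] to model.fit.
checkpoint = ModelCheckpoint(
    'logs/ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
    monitor='val_loss',
    save_weights_only=True,
    save_best_only=False,
    save_freq='epoch',
)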

Prediction Steps

a. Using pretrained weights

  1. After downloading the repository and unzipping it, download yolo4_weight.h5 from Baidu Netdisk, place it in model_data, run predict.py, and enter
img/street.jpg
  2. Settings inside predict.py enable FPS testing and video detection (see the sketch below).
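
A minimal sketch of the single-image mode in predict.py, assuming this repo family's YOLO class in yolo.py; mode strings such as "video", "fps", and "dir_predict" switch to the other behaviors mentioned above (verify the exact names in predict.py's comments):

from PIL import Image
from yolo import YOLO   # the repository's yolo.py

yolo = YOLO()
mode = "predict"        # assumed alternatives: "video", "fps", "dir_predict"

if mode == "predict":
    while True:
        img = input('Input image filename:')
        try:
            image = Image.open(img)
        except Exception:
            print('Open Error! Try again!')
            continue
        # Draw detection boxes on the image and display the result.
        r_image = yolo.detect_image(image)
        r_image.show()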

b. Using your own trained weights

  1. Train following the training steps above.
  2. In yolo.py, modify model_path and classes_path in the section below so they correspond to your trained files; model_path points to the weight file under the logs folder, and classes_path points to the classes that model_path was trained on (a usage sketch follows after this list).
_defaults = {
    #--------------------------------------------------------------------------#
    #   When predicting with your own trained model, you MUST change
    #   model_path and classes_path!
    #   model_path points to the weight file under the logs folder;
    #   classes_path points to the txt under model_data.
    #   If a shape mismatch occurs, also check the model_path and
    #   classes_path that were set during training.
    #--------------------------------------------------------------------------#
    "model_path"        : 'model_data/yolo4_weight.h5',
    "classes_path"      : 'model_data/coco_classes.txt',
    #---------------------------------------------------------------------#
    #   anchors_path is the txt file holding the anchor boxes; usually left unchanged.
    #   anchors_mask helps the code find the matching anchors; usually left unchanged.
    #---------------------------------------------------------------------#
    "anchors_path"      : 'model_data/yolo_anchors.txt',
    "anchors_mask"      : [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
    #---------------------------------------------------------------------#
    #   Input image size; must be a multiple of 32.
    #---------------------------------------------------------------------#
    "input_shape"       : [416, 416],
    #---------------------------------------------------------------------#
    #   Only predicted boxes with a score above this confidence are kept.
    #---------------------------------------------------------------------#
    "confidence"        : 0.5,
    #---------------------------------------------------------------------#
    #   IoU threshold used by non-maximum suppression.
    #---------------------------------------------------------------------#
    "nms_iou"           : 0.3,
    #---------------------------------------------------------------------#
    #   Maximum number of boxes.
    #---------------------------------------------------------------------#
    "max_boxes"         : 100,
    #---------------------------------------------------------------------#
    #   Controls whether letterbox_image is used to resize the input image
    #   without distortion. Repeated testing found that plain resizing with
    #   letterbox_image disabled works better.
    #---------------------------------------------------------------------#
    "letterbox_image"   : True,
}
  3. Run predict.py and enter
img/street.jpg
  4. Settings inside predict.py enable FPS testing and video detection.
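
As noted in step 2, the two critical keys are model_path and classes_path. Assuming yolo.py follows this repo family's pattern of letting keyword arguments override the _defaults dict shown above (verify in YOLO.__init__), a hypothetical usage sketch:

from yolo import YOLO

yolo = YOLO(
    model_path='logs/your_trained_weights.h5',    # hypothetical filename under logs/
    classes_path='model_data/cls_classes.txt',    # the classes txt used for training
    confidence=0.5,
)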

Evaluation Steps

a. Evaluating the VOC07+12 test set

  1. Evaluation uses the VOC format. VOC07+12 already provides a test-set split, so there is no need to use voc_annotation.py to generate the txt files under the ImageSets folder.
  2. In yolo.py, modify model_path and classes_path. model_path points to the trained weight file in the logs folder; classes_path points to the txt file listing the detection classes.
  3. Run get_map.py to obtain the evaluation results, which are saved to the map_out folder.

b. Evaluating your own dataset

  1. Evaluation uses the VOC format.
  2. If you already ran voc_annotation.py before training, the code automatically splits the dataset into training, validation, and test sets. To change the test-set ratio, modify trainval_percent in voc_annotation.py. trainval_percent specifies the ratio of (training + validation) to test; by default (training + validation) : test = 9 : 1. train_percent specifies the ratio of training to validation within (training + validation); by default training : validation = 9 : 1. (A worked example follows after this list.)
  3. After splitting the test set with voc_annotation.py, go to get_map.py and modify classes_path, which points to the txt file listing the detection classes, the same txt used for training. It must be changed when evaluating your own dataset.
  4. In yolo.py, modify model_path and classes_path. model_path points to the trained weight file in the logs folder; classes_path points to the txt file listing the detection classes.
  5. Run get_map.py to obtain the evaluation results, which are saved to the map_out folder.
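
A worked example of the default split described in step 2 (the 1000-image total is hypothetical):

total    = 1000                   # hypothetical number of annotated images
trainval = int(total * 0.9)       # trainval_percent = 0.9 -> 900 images for train+val
train    = int(trainval * 0.9)    # train_percent    = 0.9 -> 810 images for training
val      = trainval - train       # 90 images for validation
test     = total - trainval       # 100 images for testing
print(train, val, test)           # 810 90 100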

Reference

https://github.com/qqwweee/keras-yolo3/
https://github.com/Cartucho/mAP
https://github.com/Ma-Dan/keras-yolo4

yolov4-tf2's People

Contributors

bubbliiiing


yolov4-tf2's Issues

How do I generate a weight file for the dataset I want to train?

The provided yolo4_weights.h5 and yolo4_voc_weights.h5 are both direct downloads, and I don't see a convert.py anywhere in the code. Is there another way to produce a weight file specific to my own training dataset?

Resource exhausted

I hit this problem at around epoch 30; does it mean I ran out of GPU memory?
I'm training on Colab with a Tesla T4 and a batch size of 16.
[error screenshots omitted]

Triton Inference Server deployment issue

Training and testing a model with your code went smoothly, but deploying the saved model on Triton Inference Server requires 4 inputs; I suspect this is related to the model structure.

Error at around epoch 20

Exception ignored in: <bound method Image.__del__ of <tkinter.PhotoImage object at 0x7fc45c92f0b8>>
Traceback (most recent call last):
File "/usr/lib/python3.6/tkinter/init.py", line 3507, in del
self.tk.call('image', 'delete', self.name)
RuntimeError: main thread is not in main loop

Model conversion problem: h5 to weights

Hello. I need to convert the yolov4_weights.h5 file you provided into yolov4.weights. With the provided pretrained model I can convert the .h5 to .weights, but the .h5 produced by training with your code cannot be converted. What could be the reason?

Batch cropping of images does not work

Running predict.py (already modified) with mode == "dir_predict":
All the images get boxes drawn on them.
I want to crop out all of the boxed regions.
In yolo.py I already changed the signature to 'def detect_image(self, image, crop=True):'

But only two cropped images appear in the img_crop folder. While the program runs, I can see these two images being overwritten again and again.

Question: I want to rewrite the code that generates the intermediate file from XML to use JSON instead. Could you give an example of the intermediate file 2007_train.txt? A single line is enough.

What does the generated intermediate file 2007_train.txt look like? Mine looks like this:
/media/cfs/xxxx/Projects/DetectionOCR_seal/seal_detectionOCR_v1/yolov4_tf2/VOCdevkit/VOC2007/JPEGImages/110.jpg 791.6666666666667,295.0,961.6666666666667,605.0,0 968.6046511627907,347.6744186046512,1086.046511627907,676.7441860465117,1
The first field is the image path, followed by two boxes, each in the order x1,y1,x2,y2,class_id. Is that correct?
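
For reference, a minimal sketch of parsing a line in the format the question describes (the image path followed by space-separated boxes of x1,y1,x2,y2,class_id; the path and values below are shortened and hypothetical):

line = ('VOCdevkit/VOC2007/JPEGImages/110.jpg '
        '791.7,295.0,961.7,605.0,0 968.6,347.7,1086.0,676.7,1')
parts = line.split()
image_path = parts[0]
# Each remaining token is one box: x1,y1,x2,y2,class_id
boxes = [list(map(float, box.split(','))) for box in parts[1:]]
for x1, y1, x2, y2, cls_id in boxes:
    print(image_path, x1, y1, x2, y2, int(cls_id))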

How to run detection on a video stream?

Boss, I tried modifying the program to pull from a network camera. A test script can open the camera, but calling it from the prediction script fails. Could you take a look when you have time?

Training accuracy problem

Hi, why is YOLOv4's mAP so low for me? Other networks reach 80+, but this one is especially low.

DIoU-NMS

Hello, and thanks for sharing. In the code,

nms_index = tf.image.non_max_suppression(

the NMS used does not appear to be the DIoU-NMS from the paper. What was the consideration behind this choice? Thanks!
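
For comparison with tf.image.non_max_suppression, here is a self-contained NumPy sketch of DIoU-NMS as described in the Distance-IoU paper; this is an illustration, not this repository's code. The suppression criterion subtracts a normalized center-distance penalty from the IoU:

import numpy as np

def diou_nms(boxes, scores, iou_threshold=0.5, max_boxes=100):
    # boxes: (N, 4) as x1, y1, x2, y2; scores: (N,)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0 and len(keep) < max_boxes:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Plain IoU between the kept box and the remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        # DIoU penalty: squared center distance over squared enclosing-box diagonal.
        cx_i, cy_i = (boxes[i, 0] + boxes[i, 2]) / 2, (boxes[i, 1] + boxes[i, 3]) / 2
        cx_r, cy_r = (boxes[rest, 0] + boxes[rest, 2]) / 2, (boxes[rest, 1] + boxes[rest, 3]) / 2
        rho2 = (cx_i - cx_r) ** 2 + (cy_i - cy_r) ** 2
        ex1 = np.minimum(boxes[i, 0], boxes[rest, 0])
        ey1 = np.minimum(boxes[i, 1], boxes[rest, 1])
        ex2 = np.maximum(boxes[i, 2], boxes[rest, 2])
        ey2 = np.maximum(boxes[i, 3], boxes[rest, 3])
        c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
        diou = iou - rho2 / c2
        # Suppress only boxes whose DIoU (not plain IoU) exceeds the threshold.
        order = rest[diou <= iou_threshold]
    return np.array(keep, dtype=np.int64)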

How high can the mAP get?

Hello. After loading the pretrained weights I trained further on the VOC dataset, and the mAP afterwards (an AP computed on a subset of the images used in training) reached nearly 96%. Is that right? Can the accuracy really be this high? And why is the mAP in the YOLOv4 paper only in the forties? I've never quite understood this; I hope you can explain. Thanks!
[mAP screenshot omitted]

How to use yolo4_voc_weights.h5

How do I predict with yolo4_voc_weights.h5? Directly substituting it for yolo4_weight.h5 makes predict.py fail:

(0) Not found: No algorithm worked!
[[{{node conv2d_93/Conv2D}}]]
[[boolean_mask_100/GatherV2/_3501]]
(1) Not found: No algorithm worked!
[[{{node conv2d_93/Conv2D}}]]

Runtime error

Bro, please help!!!!
The error occurs here:
[error screenshots omitted]

model.fit(data_generator(lines[:num_train], batch_size, input_shape, anchors, num_classes, mosaic=mosaic),
          steps_per_epoch=max(1, num_train//batch_size),
          validation_data=data_generator(lines[num_train:], batch_size, input_shape, anchors, num_classes, mosaic=False),
          validation_steps=max(1, num_val//batch_size),
          epochs=Freeze_epoch,
          initial_epoch=Init_epoch,
          max_queue_size=1,
          callbacks=[logging, checkpoint, reduce_lr, early_stopping]
          )

TypeError: int() argument must be a string, a bytes-like object or a number, not 'tuple'

Error when running predict.py

My environment:
Windows 10 + PyCharm
The conda environment for tensorflow-gpu 2.2 contains the following packages:
Name Version Build Channel
absl-py 0.9.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates 2020.6.24 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cachetools 4.1.1 pypi_0 pypi
certifi 2020.6.20 py37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
chardet 3.0.4 pypi_0 pypi
cudatoolkit 10.1.243 h74a9793_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudnn 7.6.5 cuda10.1_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cycler 0.10.0 py37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
freetype 2.10.2 hd328e21_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gast 0.3.3 pypi_0 pypi
google-auth 1.18.0 pypi_0 pypi
google-auth-oauthlib 0.4.1 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.30.0 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
hdf5 1.8.20 hac2f561_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
icc_rt 2019.0.0 h0cc432a_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
icu 58.2 ha925a31_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
idna 2.10 pypi_0 pypi
importlib-metadata 1.7.0 pypi_0 pypi
intel-openmp 2020.1 216 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
joblib 0.16.0 pypi_0 pypi
jpeg 9b hb83a4c4_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.2.0 py37h74a9793_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libopencv 3.4.2 h20b85fd_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng 1.6.37 h2a8f88b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtiff 4.1.0 h56a325e_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
lz4-c 1.9.2 h62dcd97_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
markdown 3.2.2 pypi_0 pypi
matplotlib 3.2.2 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
matplotlib-base 3.2.2 py37h64f37c6_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl 2020.1 216 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl-service 2.3.0 py37hb782905_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_fft 1.1.0 py37h45dec08_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_random 1.1.1 py37h47e9c7a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy 1.19.0 pypi_0 pypi
numpy-base 1.18.5 py37hc3f5095_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
opencv 3.4.2 py37h40b0b35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
openssl 1.1.1g he774522_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
opt-einsum 3.2.1 pypi_0 pypi
pandas 1.0.5 py37h47e9c7a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pillow 7.1.2 py37hcc1f983_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pip 20.1.1 py37_1 defaults
protobuf 3.12.2 pypi_0 pypi
py-opencv 3.4.2 py37hc319ecb_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 2.4.7 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyqt 5.9.2 py37h6538335_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python 3.7.7 h81c818b_4 defaults
python-dateutil 2.8.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pytz 2020.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
qt 5.9.7 vc14h73c81de_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
requests 2.24.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.6 pypi_0 pypi
scikit-learn 0.23.1 pypi_0 pypi
scipy 1.4.1 pypi_0 pypi
setuptools 47.3.1 py37_0 defaults
sip 4.19.8 py37h6538335_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
six 1.15.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sklearn 0.0 pypi_0 pypi
sqlite 3.32.3 h2a8f88b_0 defaults
tensorboard 2.2.2 pypi_0 pypi
tensorboard-plugin-wit 1.7.0 pypi_0 pypi
tensorflow-gpu 2.2.0 pypi_0 pypi
tensorflow-gpu-estimator 2.2.0 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
threadpoolctl 2.1.0 pypi_0 pypi
tk 8.6.10 he774522_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tornado 6.0.4 py37he774522_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
urllib3 1.25.9 pypi_0 pypi
vc 14.1 h0510ff6_4 defaults
vs2015_runtime 14.16.27012 hf0eaf9b_3 defaults
werkzeug 1.0.1 pypi_0 pypi
wheel 0.34.2 py37_0 defaults
wincertstore 0.2 py37_0 defaults
wrapt 1.12.1 pypi_0 pypi
xz 5.2.5 h62dcd97_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zipp 3.1.0 pypi_0 pypi
zlib 1.2.11 h62dcd97_4 defaults
zstd 1.4.4 ha9fde0e_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main

The problem:
After placing the weight file in the required folder as instructed and setting the relevant paths, I ran predict.py and entered an image path.
An error occurred; the traceback goes all the way to tf_stack.extract_stack() without indicating where the actual error is.
Can anyone help?

The full output is as follows: model_data/yolo4_weight.h5 model, anchors, and classes loaded.
Input image filename:people.jpg
2020-08-03 15:14:44.832473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-08-03 15:14:46.176765: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
2020-08-03 15:14:46.399655: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-08-03 15:14:47.508150: W tensorflow/core/common_runtime/bfc_allocator.cc:311] Garbage collection: deallocate free memory regions (i.e., allocations) so that we can re-allocate a larger region to avoid OOM due to memory fragmentation. If you see this message frequently, you are running near the threshold of the available device memory and re-allocation may incur great performance overhead. You may try smaller batch sizes to observe the performance impact. Set TF_ENABLE_GPU_GARBAGE_COLLECTION=false if you'd like to disable this feature.
2020-08-03 15:14:50.454673: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at conv_ops.cc:1110 : Not found: No algorithm worked!
2020-08-03 15:14:50.514161: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at conv_ops.cc:1110 : Not found: No algorithm worked!
2020-08-03 15:14:50.515266: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at conv_ops.cc:1110 : Not found: No algorithm worked!
Traceback (most recent call last):
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[{{node conv2d_93/Conv2D}}]]
[[boolean_mask_33/GatherV2/_3367]]
(1) Not found: No algorithm worked!
[[{{node conv2d_93/Conv2D}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:/studyfile/yolov4-tf2-master/predict.py", line 18, in
r_image = yolo.detect_image(image)
File "D:\studyfile\yolov4-tf2-master\yolo.py", line 141, in detect_image
K.learning_phase(): 0
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\client\session.py", line 958, in run
run_metadata_ptr)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\client\session.py", line 1181, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\client\session.py", line 1359, in _do_run
run_metadata)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[node conv2d_93/Conv2D (defined at D:\studyfile\yolov4-tf2-master\nets\yolo4.py:86) ]]
[[boolean_mask_33/GatherV2/_3367]]
(1) Not found: No algorithm worked!
[[node conv2d_93/Conv2D (defined at D:\studyfile\yolov4-tf2-master\nets\yolo4.py:86) ]]
0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node conv2d_93/Conv2D:
leaky_re_lu_20/LeakyRelu (defined at D:\studyfile\yolov4-tf2-master\utils\utils.py:15)

Input Source operations connected to node conv2d_93/Conv2D:
leaky_re_lu_20/LeakyRelu (defined at D:\studyfile\yolov4-tf2-master\utils\utils.py:15)

Original stack trace for 'conv2d_93/Conv2D':
File "D:/studyfile/yolov4-tf2-master/predict.py", line 8, in
yolo = YOLO()
File "D:\studyfile\yolov4-tf2-master\yolo.py", line 50, in init
self.generate()
File "D:\studyfile\yolov4-tf2-master\yolo.py", line 85, in generate
self.yolo_model = yolo_body(Input(shape=(None,None,3)), num_anchors//3, num_classes)
File "D:\studyfile\yolov4-tf2-master\nets\yolo4.py", line 86, in yolo_body
P3_output = DarknetConv2D(num_anchors * (num_classes + 5), (1, 1))(P3_output)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\keras\engine\base_layer_v1.py", line 778, in call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 207, in call
outputs = self._convolution_op(inputs, self.kernel)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1106, in call
return self.conv_op(inp, filter)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 638, in call
return self.call(inp, filter)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 237, in call
name=self.name)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2014, in conv2d
name=name)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 969, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 744, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\framework\ops.py", line 3327, in _create_op_internal
op_def=op_def)
File "C:\Users\Administrator.DESKTOP-5SSH0GF\anaconda3\envs\tensorflow2-2\lib\site-packages\tensorflow\python\framework\ops.py", line 1791, in init
self._traceback = tf_stack.extract_stack()

No algorithm worked! [Op:Conv2D]
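
A hedged note, not from the original thread: on TF 2.x this "Not found: No algorithm worked!" failure in Conv2D is frequently a cuDNN initialization or GPU-memory problem, and enabling memory growth before the model is built often resolves it:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # Allocate GPU memory on demand instead of grabbing it all at startup.
    tf.config.experimental.set_memory_growth(gpu, True)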

when I train yolov4 on my datasets:
Restoring weights from: /home/jm3090/wxl/Jm_code/tensorflow-yolov4-tflite-master/weights/yolov4.weights ...
2020-12-27 17:21:05.708877: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2020-12-27 17:21:08.769703: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2020-12-27 17:21:08.775555: E tensorflow/stream_executor/cuda/cuda_blas.cc:226] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-12-27 17:21:08.775785: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at conv_ops.cc:1094 : Not found: No algorithm worked!
Traceback (most recent call last):
File "/home/jm3090/wxl/Jm_code/tensorflow-yolov4-tflite-master/train.py", line 165, in
app.run(main)
File "/home/jm3090/.local/lib/python3.7/site-packages/absl/app.py", line 303, in run
_run_main(main, args)
File "/home/jm3090/.local/lib/python3.7/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/home/jm3090/wxl/Jm_code/tensorflow-yolov4-tflite-master/train.py", line 157, in main
train_step(image_data, target)
File "/home/jm3090/wxl/Jm_code/tensorflow-yolov4-tflite-master/train.py", line 86, in train_step
pred_result = model(image_data, training=True)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1007, in call
outputs = call_fn(inputs, *args, **kwargs)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py", line 425, in call
inputs, training=training, mask=mask)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py", line 560, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1007, in call
outputs = call_fn(inputs, *args, **kwargs)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 248, in call
outputs = self._convolution_op(inputs, self.kernel)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1020, in convolution_v2
name=name)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1150, in convolution_internal
name=name)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 2604, in _conv2d_expanded_batch
name=name)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 932, in conv2d
_ops.raise_from_not_ok_status(e, name)
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 6862, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: No algorithm worked! [Op:Conv2D]
Exception ignored in: <function Buckets.__del__ at 0x7fc081bb2488>
Traceback (most recent call last):
File "/home/jm3090/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/monitoring.py", line 407, in del
AttributeError: 'NoneType' object has no attribute 'TFE_MonitoringDeleteBuckets'

Model conversion

Is there a way to convert the trained .h5 file into the .weights format, or to change the save logic so it saves directly as .weights? Thanks!

After retraining, the model detects nothing

Hello. Has anyone else reported this? Using the weights obtained by retraining with the downloaded code as the prediction model, absolutely nothing is detected. Predicting with yolo4_voc_weights.h5/voc_classes.txt or yolo4_weight.h5/voc_classes.txt works fine. Where did I go wrong? Thanks!

What are the hardware requirements?

Thanks for providing these resources.
1. With image size (416,416) and batch_size=2 I hit OOM; it only runs after changing to (256,256) and batch_size=1.
[OS: Windows 10; GPU: RTX 2060 with 6 GB of VRAM]
2. To work with a larger image_size, should I modify freeze_layer in train.py, or where is this configured?

Thank you very much for any reply!

Does a lower batch size cause overfitting?

When training, GPU limitations forced me to reduce the batch size from 8 to 1.
With batch size 8, both the training loss and the validation loss decreased normally.
With batch size 1, the training loss still decreases normally, but the validation loss jumps around.

Is this an overfitting phenomenon?
And is there any way to correct it?

VOC conversion

Hello. After converting with voc_annotation, why do some entries with coordinates in the resulting 2007 txt files fail to convert completely, and why do some runs stop partway through without finishing?

Program errors out after adding anchors

Because my dataset contains objects of many different sizes, the default mIoU is very low, so I tried adding anchors to yolo_anchors.txt for a total of 20 anchors. But then the program errors out at runtime. How can I solve this? Thanks!
[error screenshots omitted]

Error during training after setting save_weights_only = False

CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override '

Training on the COCO dataset

How can I train on the COCO dataset? Where do I need to make changes? Thanks!

Question about the training set

Hello. When training, can each image's annotations be a txt file instead of an XML file? If XML is required, what fields does it need to contain?

About freeze training

Great to see you sharing such a nice project. Is there a rationale behind the number of layers frozen during freeze training, and how should it be set appropriately?

Converting h5 to ONNX

Hello, how can the .h5 file produced by training be converted into a .onnx file? I've followed many guides but none of the conversions succeeded.
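
A hedged sketch for this conversion using the tf2onnx package (an assumption; it is not part of this repository, and custom layers such as Mish may need extra handling). If the .h5 file holds weights only, rebuild the network with the repo's yolo_body and call load_weights before converting:

import tensorflow as tf
import tf2onnx   # pip install tf2onnx

# Load the full trained Keras model and export it to ONNX.
model = tf.keras.models.load_model('logs/your_trained_model.h5', compile=False)  # hypothetical path
tf2onnx.convert.from_keras(model, opset=13, output_path='model.onnx')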
