
stronger-yolo's Introduction

train dataset: VOC 2012 + VOC 2007 trainval
test input size: 544
evaluation code: from Faster R-CNN (does not use the VOC 07 metric)
test GPU: 2080 Ti (12 GB)
test CPU: E5-2678 v3 @ 2.50GHz

| Version | Network | Backbone | Initial weight | VOC2007 Test (mAP) | Inference (GPU) | Inference (CPU) | Params |
|---------|---------|----------|----------------|--------------------|-----------------|-----------------|--------|
| V1 | YOLOV3 | Darknet53 | YOLOV3-608.weights | 88.8 | 30.0 ms | 255.8 ms | 248 M |
| V2 | YOLOV3 | Darknet53 | Darknet53_448.weights | 83.3 | 30.0 ms | 255.8 ms | 248 M |
| V3 | YOLOV3-Lite | MobilenetV2 | MobilenetV2_1.0_224.ckpt | 79.4 | 18.9 ms | 80.9 ms | 27.3 M |

Check Strongeryolo-pytorch for a PyTorch version with channel pruning.
There is also an MNN demo for Version V3.

stronger-yolo's People

Contributors

apxlwl, jamiechoi1995, Stinky-Tofu, supersuperegg


stronger-yolo's Issues

help me with validation data please

@Stinky-Tofu Hi! Thanks for your excellent work here. I'm new to YOLO, so I don't understand the data split: you merged the VOC 2007 and 2012 trainval sets and fed them straight into the network for training. Where is the validation dataset? And if we don't need one, why not? Please help me, thanks!
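For context: the repo trains on the merged VOC 2007 + 2012 trainval sets and evaluates on the VOC 2007 test set, so there is no separate validation split. If you want one, a minimal sketch of holding out part of the merged annotation list (the function name and the sample file names are hypothetical, not from the repo):

```python
import random

def split_annotations(lines, val_ratio=0.1, seed=0):
    """Shuffle annotation lines and hold out a fraction for validation."""
    rng = random.Random(seed)
    lines = list(lines)
    rng.shuffle(lines)
    n_val = int(len(lines) * val_ratio)
    return lines[n_val:], lines[:n_val]  # (train, val)

# hypothetical annotation lines, one per image
train, val = split_annotations(["img_%d.jpg ..." % i for i in range(100)])
print(len(train), len(val))  # prints "90 10"
```

Fixing the seed keeps the split reproducible across runs, which matters if you resume training.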

Key conv52/batch_normalization/scale/ExponentialMovingAverage not found in checkpoint

Hello Stinky-Tofu, I followed your README and got this error. I don't know how to fix it; I need your help. Thanks.

    Caused by op 'save/RestoreV2', defined at:
      File "test.py", line 297, in <module>
        T = YoloTest()
      File "test.py", line 49, in __init__
        self.__saver = tf.train.Saver(ema_obj.variables_to_restore())
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 832, in __init__
        self.build()
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 844, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 881, in _build
        build_save=build_save, build_restore=build_restore)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 513, in _build_internal
        restore_sequentially, reshape)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 332, in _AddRestoreOps
        restore_sequentially)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 580, in bulk_restore
        return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1572, in restore_v2
        name=name)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
        op_def=op_def)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
        op_def=op_def)
      File "/home/tienduchoang/miniconda3/envs/miniconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
        self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key conv52/batch_normalization/scale/ExponentialMovingAverage not found in checkpoint
[[node save/RestoreV2 (defined at test.py:49) ]]

How to convert YOLOV3 labels to this format?

My current label structure is:

    Class, x1, y1, w, h
    0, 0.05, 0.03, 0.2, 0.2

How would I go about converting that to this format?
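Assuming the labels are normalized with `(x1, y1)` as the top-left corner, as written in the question, converting one label to the absolute pixel box `(class, xmin, ymin, xmax, ymax)` that VOC-style annotation files use could look like this sketch (function name hypothetical):

```python
def yolo_to_voc(label, img_w, img_h):
    """Convert one normalized (class, x1, y1, w, h) label, with (x1, y1)
    taken as the top-left corner, to absolute pixel coordinates
    (class, xmin, ymin, xmax, ymax)."""
    cls, x1, y1, w, h = label
    xmin = int(round(x1 * img_w))
    ymin = int(round(y1 * img_h))
    xmax = int(round((x1 + w) * img_w))
    ymax = int(round((y1 + h) * img_h))
    return int(cls), xmin, ymin, xmax, ymax

print(yolo_to_voc((0, 0.05, 0.03, 0.2, 0.2), 640, 480))
# prints "(0, 32, 14, 160, 110)" for a 640x480 image
```

Note that the more common YOLO convention stores the box *center*, not the top-left corner; if that is what your files contain, subtract `w / 2` and `h / 2` from x and y first.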

Question about __create_label in data.py

Hi, while reading the __create_label function in data.py I have a question: why do you first take all anchors with IoU greater than 0.3, and only fall back to the best-IoU anchor when exist_positive is false, instead of directly using the best-IoU anchor? Is this to guard against several ground-truth boxes mapping to the same anchor at the same location, so that one gt box would get squeezed out? But this way several different anchors can each have IoU greater than 0.3 with the same gt box; doesn't that cause problems?
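The assignment logic being asked about can be sketched in pure Python (names are hypothetical; the repo's actual implementation operates on arrays inside data.py):

```python
def assign_anchors(ious, thresh=0.3):
    """Indices of anchors assigned to one ground-truth box: every anchor
    whose IoU exceeds the threshold; if none does (exist_positive is
    false), fall back to the single best-IoU anchor."""
    positive = [i for i, iou in enumerate(ious) if iou > thresh]
    if positive:
        return positive
    return [max(range(len(ious)), key=lambda i: ious[i])]

print(assign_anchors([0.1, 0.45, 0.35]))  # prints "[1, 2]": two anchors over threshold
print(assign_anchors([0.1, 0.2, 0.25]))   # prints "[2]": fallback to best anchor
```

The threshold-first rule lets one gt box claim several anchors, which is exactly the behavior the question observes.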

xy_loss problem

Thanks for sharing the code! While reading the loss function I have a question about xy_loss: why use tf.nn.sigmoid_cross_entropy_with_logits rather than a squared-error loss to compute the xy loss? The TensorFlow documentation for sigmoid_cross_entropy_with_logits says:

    For brevity, let x = logits, z = labels. The logistic loss is
    z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))

I don't quite understand what the (1 - z) * -log(1 - sigmoid(x)) part contributes.
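The quoted formula can be checked numerically in plain Python (no TensorFlow needed). The second term is what penalizes the logit when the target z is small: with z = 0 it is the entire loss, and it grows as sigmoid(x) overshoots the target:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_ce(x, z):
    """The quoted logistic loss:
    z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))."""
    return z * -math.log(sigmoid(x)) + (1 - z) * -math.log(1 - sigmoid(x))

# With label z = 0, only the (1 - z) term is active:
print(round(sigmoid_ce(3.0, 0.0), 4))   # large loss: logit confidently wrong
print(round(sigmoid_ce(-3.0, 0.0), 4))  # small loss: logit confidently right
```

Since the xy offsets are in [0, 1], they can serve as soft targets z, so the loss stays well defined for fractional labels too.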

ValueError: need more than 0 values to unpack

Hello, the code ran successfully for me yesterday, but after today's update it throws an error ><. Please help me find the problem.
I checked the annotations, images, and storage paths; the code ran fine before, so those should be okay.
The error is:

    Traceback (most recent call last):
      File "train.py", line 202, in <module>
        YoloTrain().train()
      File "train.py", line 166, in train
        in self.__test_data:
      File "/home/ds1/Downloads/YOLO_V3-master/data.py", line 102, in __next__
        image, bboxes = self.__parse_annotation(annotation)
      File "/home/ds1/Downloads/YOLO_V3-master/data.py", line 136, in __parse_annotation
        image, bboxes = data_aug.random_horizontal_flip(np.copy(image), np.copy(bboxes))
      File "/home/ds1/Downloads/YOLO_V3-master/utils/data_aug.py", line 53, in random_horizontal_flip
        _, w_img, _ = img.shape
    ValueError: need more than 0 values to unpack
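"need more than 0 values to unpack" means img.shape was an empty tuple, which typically happens when the image failed to load (a wrong path or an unreadable file can leave an empty array, and cv2.imread silently returns None for missing files). A hedged sketch of a guard you could add before the unpack (helper name hypothetical):

```python
def unpack_shape(shape):
    """Fail with a clear message, instead of the opaque
    'need more than 0 values to unpack', when an image failed to load
    and its shape is not the expected H x W x C."""
    if len(shape) < 3:
        raise IOError("image failed to load or is not HxWxC: shape=%r" % (shape,))
    h, w, c = shape[:3]
    return h, w, c

print(unpack_shape((480, 640, 3)))  # prints "(480, 640, 3)"
```

Logging the offending annotation line next to this error usually identifies the bad path immediately.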

An error in random_crop

    crop_xmax = max(w_img, int(max_bbox[2] + random.uniform(0, max_r_trans)))
    crop_ymax = max(h_img, int(max_bbox[3] + random.uniform(0, max_d_trans)))

In context, it seems you want the minimum of xmax and ymax here, so this looks like a mistake. It should be:

    crop_xmax = min(w_img, int(max_bbox[2] + random.uniform(0, max_r_trans)))
    crop_ymax = min(h_img, int(max_bbox[3] + random.uniform(0, max_d_trans)))
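The difference is easy to check with toy numbers: with max the crop boundary escapes the image, while min clamps it to the image edge (the variable values below are made up for illustration):

```python
w_img = 100       # image width in pixels
bbox_xmax = 80    # right edge of the widest ground-truth box
trans = 30        # sampled random translation amount

crop_xmax_wrong = max(w_img, bbox_xmax + trans)  # 110: outside the image
crop_xmax_right = min(w_img, bbox_xmax + trans)  # 100: clamped to the edge
print(crop_xmax_wrong, crop_xmax_right)  # prints "110 100"
```

With max, the expression can never shrink the crop below the image size, so the intended random cropping would silently do nothing (or index out of bounds).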

hi, about keras-yolo3

Hi, I am running detection with keras-yolo3. I understand that the ground-truth annotation files (in keras-yolo3 format) are produced by labelImg, but what about the annotation files for the predictions? The trained model can detect objects in an image, but it can't produce an annotation file of its predictions. Could you explain how to compute mAP with the keras-yolo3 version? Thank you very much.
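Most standalone mAP tools expect one text file of detections per image. A minimal sketch of formatting predictions in the plain `class confidence xmin ymin xmax ymax` convention used by common mAP evaluation scripts (this convention is an assumption about your tooling; keras-yolo3 itself does not emit these files):

```python
def detection_lines(dets):
    """Format detections as 'class confidence xmin ymin xmax ymax' lines,
    one detection per line, ready to write to <image_id>.txt."""
    return ["%s %.4f %d %d %d %d" % d for d in dets]

# hypothetical detections for one image: (class, confidence, box corners)
lines = detection_lines([("car", 0.95, 10, 20, 110, 220)])
print(lines)  # prints "['car 0.9500 10 20 110 220']"
```

Writing one such file per test image, plus matching ground-truth files, is all those evaluation scripts need to compute per-class AP and mAP.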

Loss becomes NaN when training on my own dataset

Hi, I'm training on my own dataset (a bit over 1000 images, five classes). I changed the classes and data path in config.py and trained from the VOC pre-trained weights, but I get a gradient explosion. What could be the cause?

    Traceback (most recent call last):
      File "train.py", line 197, in <module>
        YoloTrain().train()
      File "train.py", line 143, in train
        raise ArithmeticError('The gradient is exploded')
    ArithmeticError: The gradient is exploded
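Common causes of this error are a too-high learning rate for the new dataset or malformed boxes in the annotations (zero-size or out-of-image coordinates). One standard mitigation is clipping gradients by their global norm; here is a minimal pure-Python sketch of the idea (the repo's training loop is TensorFlow, so this is illustrative only):

```python
import math

def clip_by_global_norm(grads, clip_norm):
    """Scale all gradients so their combined L2 norm is at most clip_norm;
    leave them untouched when they are already within bounds."""
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= clip_norm:
        return grads
    scale = clip_norm / global_norm
    return [g * scale for g in grads]

print(clip_by_global_norm([3.0, 4.0], 1.0))  # norm 5.0 scaled down to 1.0
```

Checking the annotation file for degenerate boxes before blaming the optimizer is usually the faster fix, though.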

Restoring a model saved by the v2 version raises an error. Why does this happen?

INFO:tensorflow:Restoring parameters from weights/yolo.ckpt-49-0.5021
[]
[[u'yolov3/darknet53/stage0/conv0/weight', TensorShape([Dimension(3), Dimension(3), Dimension(3), Dimension(32)])], [u'yolov3/darknet53/stage0/conv0/batch_normalization/scale', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage0/conv0/batch_normalization/shift', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage0/conv0/batch_normalization/moving_mean', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage0/conv0/batch_normalization/moving_var', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage0/conv1/weight', TensorShape([Dimension(3), Dimension(3), Dimension(32), Dimension(64)])], [u'yolov3/darknet53/stage0/conv1/batch_normalization/scale', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage0/conv1/batch_normalization/shift', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage0/conv1/batch_normalization/moving_mean', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage0/conv1/batch_normalization/moving_var', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage1/residual0/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(64), Dimension(32)])], [u'yolov3/darknet53/stage1/residual0/conv1/batch_normalization/scale', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage1/residual0/conv1/batch_normalization/shift', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage1/residual0/conv1/batch_normalization/moving_mean', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage1/residual0/conv1/batch_normalization/moving_var', TensorShape([Dimension(32)])], [u'yolov3/darknet53/stage1/residual0/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(32), Dimension(64)])], [u'yolov3/darknet53/stage1/residual0/conv2/batch_normalization/scale', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage1/residual0/conv2/batch_normalization/shift', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage1/residual0/conv2/batch_normalization/moving_mean', TensorShape([Dimension(64)])], 
[u'yolov3/darknet53/stage1/residual0/conv2/batch_normalization/moving_var', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage1/conv0/weight', TensorShape([Dimension(3), Dimension(3), Dimension(64), Dimension(128)])], [u'yolov3/darknet53/stage1/conv0/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage1/conv0/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage1/conv0/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage1/conv0/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual0/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(128), Dimension(64)])], [u'yolov3/darknet53/stage2/residual0/conv1/batch_normalization/scale', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage2/residual0/conv1/batch_normalization/shift', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage2/residual0/conv1/batch_normalization/moving_mean', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage2/residual0/conv1/batch_normalization/moving_var', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage2/residual0/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(64), Dimension(128)])], [u'yolov3/darknet53/stage2/residual0/conv2/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual0/conv2/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual0/conv2/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual0/conv2/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual1/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(128), Dimension(64)])], [u'yolov3/darknet53/stage2/residual1/conv1/batch_normalization/scale', TensorShape([Dimension(64)])], 
[u'yolov3/darknet53/stage2/residual1/conv1/batch_normalization/shift', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage2/residual1/conv1/batch_normalization/moving_mean', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage2/residual1/conv1/batch_normalization/moving_var', TensorShape([Dimension(64)])], [u'yolov3/darknet53/stage2/residual1/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(64), Dimension(128)])], [u'yolov3/darknet53/stage2/residual1/conv2/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual1/conv2/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual1/conv2/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/residual1/conv2/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage2/conv0/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage2/conv0/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage2/conv0/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage2/conv0/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage2/conv0/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual0/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual0/conv1/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual0/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual0/conv1/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual0/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual0/conv2/weight', TensorShape([Dimension(3), 
Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual0/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual0/conv2/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual0/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual0/conv2/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual1/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual1/conv1/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual1/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual1/conv1/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual1/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual1/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual1/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual1/conv2/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual1/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual1/conv2/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual2/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual2/conv1/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual2/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual2/conv1/batch_normalization/moving_mean', 
TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual2/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual2/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual2/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual2/conv2/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual2/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual2/conv2/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual3/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual3/conv1/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual3/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual3/conv1/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual3/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual3/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual3/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual3/conv2/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual3/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual3/conv2/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual4/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual4/conv1/batch_normalization/scale', 
TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual4/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual4/conv1/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual4/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual4/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual4/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual4/conv2/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual4/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual4/conv2/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual5/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual5/conv1/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual5/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual5/conv1/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual5/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual5/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual5/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual5/conv2/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual5/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual5/conv2/batch_normalization/moving_var', 
TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual6/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual6/conv1/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual6/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual6/conv1/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual6/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual6/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual6/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual6/conv2/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual6/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual6/conv2/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual7/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(256), Dimension(128)])], [u'yolov3/darknet53/stage3/residual7/conv1/batch_normalization/scale', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual7/conv1/batch_normalization/shift', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual7/conv1/batch_normalization/moving_mean', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual7/conv1/batch_normalization/moving_var', TensorShape([Dimension(128)])], [u'yolov3/darknet53/stage3/residual7/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(128), Dimension(256)])], [u'yolov3/darknet53/stage3/residual7/conv2/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual7/conv2/batch_normalization/shift', 
TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual7/conv2/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/residual7/conv2/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage3/conv0/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage3/conv0/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage3/conv0/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage3/conv0/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage3/conv0/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual0/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], [u'yolov3/darknet53/stage4/residual0/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual0/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual0/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual0/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual0/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual0/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual0/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual0/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual0/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual1/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], 
[u'yolov3/darknet53/stage4/residual1/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual1/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual1/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual1/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual1/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual1/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual1/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual1/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual1/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual2/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], [u'yolov3/darknet53/stage4/residual2/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual2/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual2/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual2/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual2/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual2/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual2/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual2/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], 
[u'yolov3/darknet53/stage4/residual2/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual3/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], [u'yolov3/darknet53/stage4/residual3/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual3/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual3/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual3/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual3/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual3/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual3/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual3/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual3/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual4/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], [u'yolov3/darknet53/stage4/residual4/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual4/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual4/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual4/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual4/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual4/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], 
[u'yolov3/darknet53/stage4/residual4/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual4/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual4/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual5/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], [u'yolov3/darknet53/stage4/residual5/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual5/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual5/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual5/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual5/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual5/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual5/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual5/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual5/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual6/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], [u'yolov3/darknet53/stage4/residual6/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual6/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual6/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual6/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], 
[u'yolov3/darknet53/stage4/residual6/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual6/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual6/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual6/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual6/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual7/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(512), Dimension(256)])], [u'yolov3/darknet53/stage4/residual7/conv1/batch_normalization/scale', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual7/conv1/batch_normalization/shift', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual7/conv1/batch_normalization/moving_mean', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual7/conv1/batch_normalization/moving_var', TensorShape([Dimension(256)])], [u'yolov3/darknet53/stage4/residual7/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(256), Dimension(512)])], [u'yolov3/darknet53/stage4/residual7/conv2/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual7/conv2/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual7/conv2/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/residual7/conv2/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage4/conv0/weight', TensorShape([Dimension(3), Dimension(3), Dimension(512), Dimension(1024)])], [u'yolov3/darknet53/stage4/conv0/batch_normalization/scale', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage4/conv0/batch_normalization/shift', TensorShape([Dimension(1024)])], 
[u'yolov3/darknet53/stage4/conv0/batch_normalization/moving_mean', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage4/conv0/batch_normalization/moving_var', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual0/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(1024), Dimension(512)])], [u'yolov3/darknet53/stage5/residual0/conv1/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual0/conv1/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual0/conv1/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual0/conv1/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual0/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(512), Dimension(1024)])], [u'yolov3/darknet53/stage5/residual0/conv2/batch_normalization/scale', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual0/conv2/batch_normalization/shift', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual0/conv2/batch_normalization/moving_mean', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual0/conv2/batch_normalization/moving_var', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual1/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(1024), Dimension(512)])], [u'yolov3/darknet53/stage5/residual1/conv1/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual1/conv1/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual1/conv1/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual1/conv1/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual1/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(512), Dimension(1024)])], 
[u'yolov3/darknet53/stage5/residual1/conv2/batch_normalization/scale', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual1/conv2/batch_normalization/shift', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual1/conv2/batch_normalization/moving_mean', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual1/conv2/batch_normalization/moving_var', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual2/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(1024), Dimension(512)])], [u'yolov3/darknet53/stage5/residual2/conv1/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual2/conv1/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual2/conv1/batch_normalization/moving_mean', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual2/conv1/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual2/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(512), Dimension(1024)])], [u'yolov3/darknet53/stage5/residual2/conv2/batch_normalization/scale', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual2/conv2/batch_normalization/shift', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual2/conv2/batch_normalization/moving_mean', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual2/conv2/batch_normalization/moving_var', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual3/conv1/weight', TensorShape([Dimension(1), Dimension(1), Dimension(1024), Dimension(512)])], [u'yolov3/darknet53/stage5/residual3/conv1/batch_normalization/scale', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual3/conv1/batch_normalization/shift', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual3/conv1/batch_normalization/moving_mean', TensorShape([Dimension(512)])], 
[u'yolov3/darknet53/stage5/residual3/conv1/batch_normalization/moving_var', TensorShape([Dimension(512)])], [u'yolov3/darknet53/stage5/residual3/conv2/weight', TensorShape([Dimension(3), Dimension(3), Dimension(512), Dimension(1024)])], [u'yolov3/darknet53/stage5/residual3/conv2/batch_normalization/scale', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual3/conv2/batch_normalization/shift', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual3/conv2/batch_normalization/moving_mean', TensorShape([Dimension(1024)])], [u'yolov3/darknet53/stage5/residual3/conv2/batch_normalization/moving_var', TensorShape([Dimension(1024)])]]
(0, 260)
Traceback (most recent call last):
File "/home/mzx/Stronger-yolo/v2/train.py", line 202, in <module>
Yolo_train().train()
File "/home/mzx/Stronger-yolo/v2/train.py", line 57, in __init__
restore_dict = self.__get_restore_dict(load_var)
File "/home/mzx/Stronger-yolo/v2/train.py", line 121, in __get_restore_dict
assert cur_weights_num == org_weights_num
AssertionError

Process finished with exit code 1

When I try to resume training from a previously saved model it raises this error, but loading the pretrained darknet weights works fine.

Why does mixup cause bad performance?

Hi, your work is awesome, but I have a question: as Mu Li's paper reports, the mixup trick improves mAP considerably on the MS COCO dataset, yet the results in your README show that mixup hurts performance on the VOC2007 dataset. Can you share your experience from adding mixup to TensorFlow YOLOv3?
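For reference, image-level mixup for detection (as described in the "Bag of Freebies" paper) can be sketched as below. This is a minimal numpy version of my own, not code from this repo; the function name and the extra per-box weight column are assumptions:

```python
import numpy as np

def mixup(image1, bboxes1, image2, bboxes2, alpha=1.5):
    # Sample the mixing ratio from a Beta distribution
    # (alpha = beta = 1.5 is the value used for detection in the paper).
    lam = np.random.beta(alpha, alpha)
    h = max(image1.shape[0], image2.shape[0])
    w = max(image1.shape[1], image2.shape[1])
    # Paste both images onto a shared canvas, weighted by lam.
    mixed = np.zeros((h, w, 3), dtype=np.float32)
    mixed[:image1.shape[0], :image1.shape[1]] += lam * image1
    mixed[:image2.shape[0], :image2.shape[1]] += (1.0 - lam) * image2
    # Keep all boxes from both images; append each box's mixup weight
    # as an extra column so the loss can down-weight its objectness target.
    b1 = np.hstack([bboxes1, np.full((len(bboxes1), 1), lam)])
    b2 = np.hstack([bboxes2, np.full((len(bboxes2), 1), 1.0 - lam)])
    return mixed, np.vstack([b1, b2])
```

One frequently cited detail is that mixup for detection keeps the union of boxes with soft weights rather than discarding low-weight boxes; dropping that weighting is one plausible reason a port would underperform.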

puzzled about the anchors

Hi, the ANCHORS in config.py look like k-means statistics from the COCO dataset, not from VOC. Since your training data is VOC, I wonder what the reason for this inconsistency is, or maybe there is an idea I haven't grasped. Thanks.

ANCHORS = [[(1.25, 1.625), (2.0, 3.75), (4.125, 2.875)],           # Anchors for small obj
           [(1.875, 3.8125), (3.875, 2.8125), (3.6875, 7.4375)],   # Anchors for medium obj
           [(3.625, 2.8125), (4.875, 6.1875), (11.65625, 10.1875)]]  # Anchors for big obj
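For what it's worth, those values appear to be exactly the original YOLOv3 COCO anchors (in pixels at a 416x416 input) divided by the stride of each detection scale, i.e. COCO anchors expressed in grid-cell units. A quick check:

```python
# YOLOv3's nine COCO anchors in pixels, grouped by detection scale.
COCO_ANCHORS_PX = [[(10, 13), (16, 30), (33, 23)],      # stride-8 scale
                   [(30, 61), (62, 45), (59, 119)],     # stride-16 scale
                   [(116, 90), (156, 198), (373, 326)]]  # stride-32 scale
STRIDES = [8, 16, 32]

# Divide each (w, h) by its scale's stride to get grid-cell units.
grid_anchors = [[(w / s, h / s) for (w, h) in scale]
                for scale, s in zip(COCO_ANCHORS_PX, STRIDES)]
```

This reproduces the config values, which would explain the inconsistency: the repo simply reuses the COCO-clustered anchors rather than re-clustering on VOC.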

Inference time

Why does running test.py take more than 5 seconds to predict a single image? My GPU is a GTX 1080 Ti and the image size is 608*608.

How fast is your v2?

What FPS does it reach, and how much GPU memory does it use? With the previous version of the code, and the conf threshold set to 0.2, the accuracy is indeed high, but I only get about 10 FPS and it uses 10 GB of GPU memory, which is a bit too much.

Key conv52/batch_normalization/moving_mean not found in checkpoint

2019-03-09 16:54:43.055737: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key conv52/batch_normalization/moving_mean not found in checkpoint
Traceback (most recent call last):
File "train.py", line 197, in <module>
YoloTrain().train()
File "train.py", line 116, in train
self.__load.restore(self.__sess, ckpt_path)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1562, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key conv52/batch_normalization/moving_mean not found in checkpoint
[[node load_save/save/RestoreV2 (defined at train.py:99) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_load_save/save/Const_0_0, load_save/save/RestoreV2/tensor_names, load_save/save/RestoreV2/shape_and_slices)]]

Caused by op u'load_save/save/RestoreV2', defined at:
File "train.py", line 197, in <module>
YoloTrain().train()
File "train.py", line 99, in __init__
self.__load = tf.train.Saver(self.__net_var)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1102, in __init__
self.build()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 406, in _AddRestoreOps
restore_sequentially)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 862, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 1466, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key conv52/batch_normalization/moving_mean not found in checkpoint
[[node load_save/save/RestoreV2 (defined at train.py:99) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_load_save/save/Const_0_0, load_save/save/RestoreV2/tensor_names, load_save/save/RestoreV2/shape_and_slices)]]
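This error usually means the variable names built in the graph no longer match the keys stored in the checkpoint (a renamed scope, added layers, or EMA shadow names). Assuming you can list both name sets (in TF1, `tf.train.list_variables(ckpt_path)` yields the checkpoint keys and `tf.global_variables()` the graph variables), a plain set difference pinpoints the mismatch. The variable names below are hypothetical, just illustrating the pattern:

```python
def diff_names(graph_names, ckpt_names):
    """Report variables missing on either side of a checkpoint restore."""
    graph = set(graph_names)
    ckpt = set(ckpt_names)
    # Names the graph expects but the checkpoint lacks -> restore fails.
    # Names the checkpoint holds but the graph lacks -> silently unused.
    return sorted(graph - ckpt), sorted(ckpt - graph)

missing_in_ckpt, unused_in_ckpt = diff_names(
    ["conv52/batch_normalization/moving_mean", "conv52/weight"],
    ["conv52/weight", "darknet53/conv0/weight"])
```

Any name in the first list is one the Saver will fail on, exactly like `conv52/batch_normalization/moving_mean` here; restricting the Saver's var_list to the intersection is a common workaround.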

I changed data.py a little: first clip the invalid bboxes, then discard any that are still invalid after clipping.

def __parse_annotation(self, annotation):
    line = annotation.split()
    image_path = line[0]
    image = np.array(cv2.imread(image_path))
    bboxes = np.array([map(int, box.split(',')) for box in line[1:]])
    image, bboxes = data_aug.random_horizontal_flip(np.copy(image), np.copy(bboxes))
    image, bboxes = data_aug.random_crop(np.copy(image), np.copy(bboxes))
    image, bboxes = data_aug.random_translate(np.copy(image), np.copy(bboxes))
    image, bboxes = utils.img_preprocess2(np.copy(image), np.copy(bboxes),
                                          (self.__train_input_size, self.__train_input_size), True)
    bboxes_temp = []
    bboxes[:, 0:4] = np.maximum(bboxes[:, 0:4], 0)
    bboxes[:, 0:4] = np.minimum(bboxes[:, 0:4], image.shape[0])
    for bbox in bboxes:
        if bbox[0] >= bbox[2] or bbox[1] >= bbox[3]:
            continue
        if bbox[0] > image.shape[0] or bbox[1] > image.shape[1]:
            continue
        bboxes_temp.append(bbox)
    return image, np.array(bboxes_temp)
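One caveat with the snippet above: it clamps both x and y coordinates to image.shape[0], which is only correct for square inputs. A vectorized variant with per-axis bounds might look like this; the function name and the assumption that columns are xmin, ymin, xmax, ymax, class_id are mine:

```python
import numpy as np

def clip_and_filter(bboxes, img_h, img_w):
    """Clamp boxes to the image and drop degenerate ones.

    bboxes columns: xmin, ymin, xmax, ymax, class_id.
    """
    bboxes = bboxes.copy()
    # x coordinates are bounded by the image width, y by the height.
    bboxes[:, [0, 2]] = np.clip(bboxes[:, [0, 2]], 0, img_w - 1)
    bboxes[:, [1, 3]] = np.clip(bboxes[:, [1, 3]], 0, img_h - 1)
    # Keep only boxes that still have positive width and height.
    keep = (bboxes[:, 0] < bboxes[:, 2]) & (bboxes[:, 1] < bboxes[:, 3])
    return bboxes[keep]
```

After the preprocessing in this repo the image is square, so the original code happens to work, but the per-axis version is safer if that ever changes.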

When training my own dataset, how do I modify the anchors?

ANCHORS = [[(1.25, 1.625), (2.0, 3.75), (4.125, 2.875)],           # Anchors for small obj
           [(1.875, 3.8125), (3.875, 2.8125), (3.6875, 7.4375)],   # Anchors for medium obj
           [(3.625, 2.8125), (4.875, 6.1875), (11.65625, 10.1875)]]  # Anchors for big obj
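One common approach is to run k-means on the (width, height) pairs of your own labeled boxes and then divide each cluster center by the stride of the scale it is assigned to. A minimal sketch under those assumptions (plain Euclidean k-means here; the YOLO papers cluster with a 1-IoU distance, so this is illustrative, not a drop-in replacement):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=50, seed=0):
    """Cluster (width, height) pairs into k anchors in grid-cell units."""
    wh = np.asarray(wh, dtype=np.float64)
    rng = np.random.RandomState(seed)
    # Initialize centers from k distinct samples.
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest center (squared Euclidean).
        dist = ((wh[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = wh[assign == j].mean(axis=0)
    # Sort by area so small anchors land on the stride-8 scale,
    # then convert pixels to grid units by dividing by the stride.
    centers = centers[np.argsort(centers.prod(axis=1))]
    strides = np.repeat([8, 16, 32], k // 3)
    return centers / strides[:, None]
```

The resulting k/3 rows per scale can then be pasted into ANCHORS in config.py in place of the COCO-derived values.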

How to generate on annotation file more easily

Hi! Your instructions for training the model on your own dataset require generating your own annotation files, train_annotation.txt and test_annotation.txt, with one row per image.
Row format: image_path bbox0 bbox1 ...
Bbox format: xmin,ymin,xmax,ymax,class_id (no spaces).

I have my annotations in COCO, VOC, CSV and JSON. Is there a way to convert any of these formats to the format stated in your instructions?
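As far as I know there is no converter in this repo, but for VOC-style XML a few lines of the standard library's xml.etree suffice. A sketch, where the CLASSES list is a placeholder (extend it to your full class set) and the function name is mine:

```python
import xml.etree.ElementTree as ET

CLASSES = ["aeroplane", "bicycle", "bird"]  # extend to all 20 VOC classes

def voc_xml_to_line(xml_str, image_path):
    """Convert one VOC annotation to 'image_path xmin,ymin,xmax,ymax,id ...'."""
    root = ET.fromstring(xml_str)
    parts = [image_path]
    for obj in root.iter("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        coords = [box.find(t).text for t in ("xmin", "ymin", "xmax", "ymax")]
        # Join coordinates and the class index with commas, no spaces.
        parts.append(",".join(coords + [str(CLASSES.index(name))]))
    return " ".join(parts)
```

Writing one such line per XML file into train_annotation.txt should match the format stated above; COCO/CSV/JSON would need an analogous loop over their own box fields.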
