jmpap / yolov2-tensorflow-2.0

Just another YOLO V2 implementation. Train your own dataset in a Jupyter notebook!

License: MIT License

Languages: Jupyter Notebook 100.00%
Topics: tensorflow object-detection keras yolo eager-execution agriculture tensorflow-2

yolov2-tensorflow-2.0's People

Contributors

jmpap


yolov2-tensorflow-2.0's Issues

Permission error (errno 13)

Hello! Thank you so much for such wonderful code and for solving my previous error. I'm getting this error now and have been trying to solve it for the past few days. I'm running my Jupyter notebook via an Anaconda virtual environment as an admin, with Python 3.7.7 and TensorFlow 2.1.0 on Windows 10 x64 and no GPU. I would really appreciate your help. Thank you so much for your time.
(screenshots of the error attached)
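In case it helps while waiting for a reply: on Windows, errno 13 is also raised when open() or tf.io.read_file is pointed at a directory instead of a file, or when another process holds the file. A minimal check along those lines, with placeholder paths rather than the notebook's actual variables:

import os

# Placeholder paths -- substitute the folders configured in the notebook.
train_image_folder = "data/train/images/"
train_annot_folder = "data/train/annotations/"

for folder in (train_image_folder, train_annot_folder):
    # Opening a directory as if it were a file raises errno 13 on Windows,
    # so confirm the configured paths are readable directories and that the
    # code only opens files inside them.
    print(folder, "is a directory:", os.path.isdir(folder),
          "| readable:", os.access(folder, os.R_OK))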

Getting an error when generating the dataset


NotFoundError Traceback (most recent call last)
~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\context.py in execution_mode(mode)
1985 ctx.executor = executor_new
-> 1986 yield
1987 finally:

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\data\ops\iterator_ops.py in _next_internal(self)
654 output_types=self._flat_output_types,
--> 655 output_shapes=self._flat_output_shapes)
656

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\ops\gen_dataset_ops.py in iterator_get_next(iterator, output_types, output_shapes, name)
2362 except _core._NotOkStatusException as e:
-> 2363 _ops.raise_from_not_ok_status(e, name)
2364 # Add nodes to the TensorFlow graph.

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py in raise_from_not_ok_status(e, name)
6652 # pylint: disable=protected-access
-> 6653 six.raise_from(core._status_to_exception(e.code, message), None)
6654 # pylint: enable=protected-access

C:\ProgramData\Anaconda3\lib\site-packages\six.py in raise_from(value, from_value)

NotFoundError: NewRandomAccessFile failed to Create/Open: Opana-7.jpg : The system cannot find the file specified.
; No such file or directory
[[{{node ReadFile}}]] [Op:IteratorGetNext]

During handling of the above exception, another exception occurred:

NotFoundError Traceback (most recent call last)
in
24 break
25
---> 26 test_dataset(train_dataset)
27 train_dataset

in test_dataset(dataset)
2
3 def test_dataset(dataset):
----> 4 for batch in dataset:
5 img = batch[0][0]
6 label = batch[1][0]

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\data\ops\iterator_ops.py in __next__(self)
629
630 def __next__(self): # For Python 3 compatibility
--> 631 return self.next()
632
633 def _next_internal(self):

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\data\ops\iterator_ops.py in next(self)
668 """Returns a nested structure of Tensors containing the next element."""
669 try:
--> 670 return self._next_internal()
671 except errors.OutOfRangeError:
672 raise StopIteration

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\data\ops\iterator_ops.py in _next_internal(self)
659 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access
660 except AttributeError:
--> 661 return structure.from_compatible_tensor_list(self._element_spec, ret)
662
663 @property

C:\ProgramData\Anaconda3\lib\contextlib.py in __exit__(self, type, value, traceback)
128 value = type()
129 try:
--> 130 self.gen.throw(type, value, traceback)
131 except StopIteration as exc:
132 # Suppress StopIteration unless it's the same exception that

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\context.py in execution_mode(mode)
1987 finally:
1988 ctx.executor = executor_old
-> 1989 executor_new.wait()
1990
1991

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\executor.py in wait(self)
65 def wait(self):
66 """Waits for ops dispatched in this executor to finish."""
---> 67 pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
68
69 def clear_error(self):

NotFoundError: NewRandomAccessFile failed to Create/Open: Opana-7.jpg : The system cannot find the file specified.
; No such file or directory
[[{{node ReadFile}}]]
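In case it helps: the traceback says the pipeline asked for Opana-7.jpg and the file is not on disk, which usually means an annotation references an image that is missing (or named differently) in the image folder. A quick cross-check, assuming the PASCAL VOC layout this notebook expects and placeholder folder names:

import glob
import os
import xml.etree.ElementTree as ET

# Placeholder paths -- use the train_image_folder / train_annot_folder from the notebook.
train_image_folder = "data/train/images/"
train_annot_folder = "data/train/annotations/"

missing = []
for xml_path in glob.glob(os.path.join(train_annot_folder, "*.xml")):
    filename = ET.parse(xml_path).findtext("filename")  # image name stored in the annotation
    if not os.path.isfile(os.path.join(train_image_folder, filename)):
        missing.append(filename)

print("images referenced by annotations but missing on disk:", missing)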

use_bias=False or True?

Hello,

Is it normal that use_bias=False?

Ex:
x = Conv2D(32, (3,3), strides=(1,1), padding='same', name='conv_1', use_bias=False)(input_image)

Cheers

Damien
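For context: in Darknet-style networks every convolution is immediately followed by batch normalization, and the BatchNormalization offset (beta) plays the same role as a convolution bias, so use_bias=False simply avoids a redundant parameter. A minimal sketch of such a block (input size illustrative, not taken from the notebook):

from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, LeakyReLU

# Illustrative input size; the exact resolution is not the point here.
input_image = Input(shape=(512, 512, 3))

# Darknet-19 style block: the BatchNormalization offset (beta) acts as the bias,
# so the convolution itself can be created with use_bias=False.
x = Conv2D(32, (3, 3), strides=(1, 1), padding='same',
           name='conv_1', use_bias=False)(input_image)
x = BatchNormalization(name='norm_1')(x)
x = LeakyReLU(alpha=0.1)(x)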

train_annot_folder

Hello. I am new to this whole AI thing and am currently working on a project similar to this one. I would really like your help. I don't understand the part that says "contains annotations in PASCAL VOC format (one xml file for each image)". I have my own dataset of images. What is this PASCAL VOC format? Do I need to create one xml file for each of my own images? If yes, what should I put in it? I'm really clueless. Please help.
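For context: PASCAL VOC format means one small XML file per image, listing the image size and one bounding box per object. Annotation tools such as labelImg write these files for you. A minimal, made-up example written from Python (filename, class name and coordinates are purely illustrative):

# One PASCAL VOC annotation per image; values below are made up for illustration.
voc_example = """<annotation>
    <filename>image_001.jpg</filename>
    <size>
        <width>512</width>
        <height>512</height>
        <depth>3</depth>
    </size>
    <object>
        <name>apple</name>
        <bndbox>
            <xmin>120</xmin>
            <ymin>80</ymin>
            <xmax>260</xmax>
            <ymax>210</ymax>
        </bndbox>
    </object>
</annotation>
"""

# Save it next to the image, with the same base name, in the annotation folder.
with open("image_001.xml", "w") as f:
    f.write(voc_example)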

Error: StopIteration

Hi.

I'm wondering if you could help me with an issue. When running cell 21 I get the following error:
(screenshot of the error attached)

Thank you

ResourceExhaustedError: OOM when allocating tensor with shape[5,32,512,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:FusedBatchNormGradV3] error occurred

Epoch 0 :


ResourceExhaustedError Traceback (most recent call last)
in
----> 1 results = train(EPOCHS, model, train_gen, val_gen, 10, 2, 'training_1')
2
3 plt.plot(results[0])
4 plt.plot(results[1])

in train(epochs, model, train_dataset, val_dataset, steps_per_epoch_train, steps_per_epoch_val, train_name)
49 for batch_idx in range(steps_per_epoch_train):
50 img, detector_mask, matching_true_boxes, class_one_hot, true_boxes = next(train_dataset)
---> 51 loss, _, grads = grad(model, img, detector_mask, matching_true_boxes, class_one_hot, true_boxes)
52 optimizer.apply_gradients(zip(grads, model.trainable_variables))
53 epoch_loss.append(loss)

in grad(model, img, detector_mask, matching_true_boxes, class_one_hot, true_boxes, training)
4 y_pred = model(img, training)
5 loss, sub_loss = yolov2_loss(detector_mask, matching_true_boxes, class_one_hot, true_boxes, y_pred)
----> 6 return loss, sub_loss, tape.gradient(loss, model.trainable_variables)
7
8 # save weights

c:\users\muho\anaconda3\envs\traffic_light\lib\site-packages\tensorflow_core\python\eager\backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
1012 output_gradients=output_gradients,
1013 sources_raw=flat_sources_raw,
-> 1014 unconnected_gradients=unconnected_gradients)
1015
1016 if not self._persistent:

c:\users\muho\anaconda3\envs\traffic_light\lib\site-packages\tensorflow_core\python\eager\imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
74 output_gradients,
75 sources_raw,
---> 76 compat.as_str(unconnected_gradients.value))

c:\users\muho\anaconda3\envs\traffic_light\lib\site-packages\tensorflow_core\python\eager\backprop.py in _gradient_function(op_name, attr_tuple, num_inputs, inputs, outputs, out_grads, skip_input_indices)
136 return [None] * num_inputs
137
--> 138 return grad_fn(mock_op, *out_grads)
139
140

c:\users\muho\anaconda3\envs\traffic_light\lib\site-packages\tensorflow_core\python\ops\nn_grad.py in _FusedBatchNormV3Grad(op, *grad)
924 @ops.RegisterGradient("FusedBatchNormV3")
925 def _FusedBatchNormV3Grad(op, *grad):
--> 926 return _BaseFusedBatchNormGrad(op, 2, *grad)
927
928

c:\users\muho\anaconda3\envs\traffic_light\lib\site-packages\tensorflow_core\python\ops\nn_grad.py in _BaseFusedBatchNormGrad(op, version, *grad)
888 if version == 2:
889 args["reserve_space_3"] = op.outputs[5]
--> 890 return grad_fun(**args)
891 else:
892 pop_mean = op.inputs[3]

c:\users\muho\anaconda3\envs\traffic_light\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py in fused_batch_norm_grad_v3(y_backprop, x, scale, reserve_space_1, reserve_space_2, reserve_space_3, epsilon, data_format, is_training, name)
4341 else:
4342 message = e.message
-> 4343 _six.raise_from(_core._status_to_exception(e.code, message), None)
4344 # Add nodes to the TensorFlow graph.
4345 if epsilon is None:

c:\users\muho\anaconda3\envs\traffic_light\lib\site-packages\six.py in raise_from(value, from_value)

ResourceExhaustedError: OOM when allocating tensor with shape[5,32,512,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:FusedBatchNormGradV3]

I tried changing the batch size and the number of epochs, but the error still occurs...
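For anyone hitting the same OOM: the failing tensor has shape [5, 32, 512, 512], i.e. a batch of 5 at full 512x512 resolution, which may simply not fit in the available GPU memory. Beyond lowering the batch size, two common workarounds (a sketch, not a guaranteed fix) are enabling on-demand GPU memory growth and reducing the input resolution while keeping it a multiple of 32:

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all up front.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# If OOM persists, shrink the batch and/or the input resolution
# (it must stay a multiple of 32 so that GRID = IMAGE / 32 is an integer).
TRAIN_BATCH_SIZE = 2                            # instead of 5
IMAGE_H, IMAGE_W = 416, 416                     # instead of 512 x 512
GRID_H, GRID_W = IMAGE_H // 32, IMAGE_W // 32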

Training result is abnormal without pre-trained weights.

Hi,

Thanks for sharing your YOLO v2 implementation using TF 2.0, first of all.
I trained this neural network after commenting out the "Load Pre-trained weights" code lines.
But neither the training loss nor the validation loss decreases at all; the loss graph looks like a series of spikes.
I think training is not proceeding correctly. Would you please give me some advice on how to solve this problem?
For example, do I need to adjust hyperparameters, and so on?
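Not a definitive answer, but when training from random initialization a spiky, non-decreasing loss is often tamed by lowering the learning rate and clipping gradients before changing anything else; the values below are only a starting point to experiment with, not the author's settings:

from tensorflow.keras.optimizers import Adam

# Smaller learning rate plus gradient-norm clipping; purely illustrative values.
optimizer = Adam(learning_rate=1e-5, clipnorm=1.0)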

Using pictures that are not all the same size

Hey there! I'm attempting to use my own dataset where the images are both non-square and varying in dimensions. Is there a way to modify your code to allow for this?
Thanks so much in advance!
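One common approach (not something this repository does out of the box) is to letterbox every image to the fixed network input and rescale the box coordinates by the same factors. A minimal sketch of the image side, assuming the notebook's 512x512 input:

import tensorflow as tf

IMAGE_H, IMAGE_W = 512, 512  # fixed network input used by the notebook

def letterbox(image):
    """Resize an arbitrarily sized image to IMAGE_H x IMAGE_W while keeping the
    aspect ratio, padding the rest; box coordinates must be rescaled and offset
    with the same scale and padding."""
    return tf.image.resize_with_pad(image, IMAGE_H, IMAGE_W)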

Images with dimensions (H=1024, W=2048), unable to use the notebook to generate the dataset.

Hi,

Your repository is super useful and interesting, good work! I am trying to use it for object detection on Cityscapes images, which are 1024x2048 instead of 512x512. I have prepared the desired images and corresponding annotations (PASCAL VOC format). According to the information given in the README.md file, I made the following changes:

IMAGE_H, IMAGE_W = 1024, 2048
GRID_H, GRID_W = 32, 64 # GRID size = IMAGE size / 32
LABELS = ('car', 'pedestrian')

I have put my images and labels in the path that is shown in your repository. Everything seems all right. But when I run
train_dataset = None
train_dataset= get_dataset(train_image_folder, train_annot_folder, LABELS, TRAIN_BATCH_SIZE)

It shows "ParseError: no element found: line 1, column 0".

Then I changed to IMAGE_H, IMAGE_W = 2048, 1024 and this error did not appear. However, the images and annotations were not correctly matched. Could you please give me some hint about this kind of issue? Thank you. The notebook built on the basis of yours with Cityscapes data is here: https://colab.research.google.com/drive/19g62IznotKZEgtNEowgyKmLcXIjy4OTJ?usp=sharing.
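For what it's worth, "ParseError: no element found: line 1, column 0" typically comes from an empty (zero-byte) or truncated XML file rather than from the image dimensions. A quick scan can find the offending annotation (the folder name below is a placeholder for the notebook's train_annot_folder):

import glob
import os
import xml.etree.ElementTree as ET

# Placeholder -- use the train_annot_folder configured in the notebook.
train_annot_folder = "data/train/annotations/"

for xml_path in sorted(glob.glob(os.path.join(train_annot_folder, "*.xml"))):
    try:
        ET.parse(xml_path)
    except ET.ParseError as err:
        print("unparsable annotation:", xml_path,
              "| size (bytes):", os.path.getsize(xml_path), "|", err)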

Computing mAP

Hello
Thank you very much for your code
How can I calculate the overall accuracy (mAP) at the end?
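The repository does not appear to ship an evaluation script, but detection accuracy is normally reported as mAP, which boils down to matching predicted and ground-truth boxes by IoU and averaging precision over recall per class. The IoU building block could look like the following (a generic sketch, not code from this repository; boxes assumed to be [xmin, ymin, xmax, ymax]); a ready-made PASCAL VOC mAP evaluator can then be applied to the matched detections.

def iou(box_a, box_b):
    """Intersection over union of two boxes given as [xmin, ymin, xmax, ymax]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)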

Getting an error while running the test generator pipeline


IndexError Traceback (most recent call last)
in
4
5 # batch
----> 6 img, detector_mask, matching_true_boxes, class_one_hot, true_boxes = next(train_gen)
7
8 # y

in ground_truth_generator(dataset)
38 ANCHORS,
39 IMAGE_W,
---> 40 IMAGE_H)
41 batch_matching_true_boxes.append(one_matching_true_boxes)
42 batch_detector_mask.append(one_detector_mask)

in process_true_boxes(true_boxes, anchors, image_width, image_height)
65 x_coord = np.floor(x).astype('int')
66 y_coord = np.floor(y).astype('int')
---> 67 detector_mask[y_coord, x_coord, best_anchor] = 1
68 yolo_box = np.array([x, y, w, h, box[4]])
69 matching_true_boxes[y_coord, x_coord, best_anchor] = yolo_box

IndexError: index 31 is out of bounds for axis 0 with size 16

In my dataset the maximum number of boxes in a single image is 16, so the shape of boxes is (392, 17, 5) for the training set and (79, 17, 5) for the test set.
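For what it's worth: the first axis of detector_mask has size GRID_H = IMAGE_H / 32 (16 for a 512x512 input), not the maximum number of boxes, so an index of 31 means a box centre lands outside the configured grid, typically because the annotation coordinates exceed the configured IMAGE_W x IMAGE_H (e.g. the real images are larger than the configured input size). A sanity check over the VOC files, with a placeholder folder name:

import glob
import os
import xml.etree.ElementTree as ET

IMAGE_H, IMAGE_W = 512, 512                     # values configured in the notebook
train_annot_folder = "data/train/annotations/"  # placeholder path

for xml_path in glob.glob(os.path.join(train_annot_folder, "*.xml")):
    root = ET.parse(xml_path).getroot()
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        xmax = float(box.findtext("xmax"))
        ymax = float(box.findtext("ymax"))
        if xmax > IMAGE_W or ymax > IMAGE_H:
            print(f"{xml_path}: box reaches ({xmax}, {ymax}), "
                  f"outside the configured {IMAGE_W}x{IMAGE_H} input")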

Error while using custom dataset

Hi
I got this error in section 3.3 (Process data to YOLO prediction). I use my own dataset, in which all images have size 720 x 1280, with only one class, but I got this error. I can't understand the issue, so please help me in this regard.

Waiting for your kind response.

This is the error:
ValueError: Dimension size must be evenly divisible by 225280 but is 460800 for '{{node model_2/space_to_depth_2/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](model_2/leaky_re_lu_64/LeakyRelu, model_2/space_to_depth_2/Reshape/shape)' with input shapes: [2,45,80,64], [6] and with input tensors computed as partial shapes: input[1] = [?,22,2,40,2,64].

(screenshot attached)
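For what it's worth: the reorg (space_to_depth) layer splits the stride-16 feature map into 2x2 blocks, so both input dimensions must be multiples of 32. 1280 is, but 720 is not (720 / 16 = 45, which cannot be halved), which matches the [2,45,80,64] shape in the error. Padding or resizing the images (and their boxes) to a multiple of 32 should avoid it; a minimal config sketch, where 736 is just the next multiple of 32 above 720:

# 720 is not a multiple of 32, which breaks the space_to_depth reorg layer.
# Pad or resize the images (and shift/scale their boxes) to a multiple of 32:
IMAGE_H, IMAGE_W = 736, 1280                    # 736 = next multiple of 32 above 720
GRID_H, GRID_W = IMAGE_H // 32, IMAGE_W // 32   # = 23, 40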
