pierluigiferrari / fcn8s_tensorflow


A TensorFlow implementation of FCN-8s

License: GNU General Public License v3.0

Languages: Python 99.93%, C 0.07%
Topics: cityscapes, fcn-8s, semantic-segmentation, tensorflow

fcn8s_tensorflow's People

Contributors: pierluigiferrari


fcn8s_tensorflow's Issues

Batch Generation Index problem

Hi,

I am getting the error below when running the following code; batch generation seems to be failing somewhere. I am running on the Cityscapes dataset, following your tutorial. Could you please take a look?

Thanks,
Taz

# Print out some diagnostics to make sure that our batches aren't empty
# and that it doesn't take forever to generate them.
start_time = time.time()
images, gt_images = next(train_generator)
print('Time to generate one batch: {:.3f} seconds'.format(time.time() - start_time))
print('Number of images generated:', len(images))
print('Number of ground truth images generated:', len(gt_images))

IndexError                                Traceback (most recent call last)
in ()
      1 # Print out some diagnostics to make sure that our batches aren't empty and it doesn't take
      2 start_time = time.time()
----> 3 images, gt_images = next(train_generator)
      4 print('Time to generate one batch: {:.3f} seconds'.format(time.time() - start_time))
      5 print('Number of images generated:' , len(images))

~/fcn8s_tensorflow/data_generator/batch_generator.py in generate(self, batch_size, convert_colors_to_ids, convert_ids_to_ids, convert_to_one_hot, void_class_id, random_crop, crop, resize, brightness, flip, translate, scale, gray, to_disk, shuffle)
    389 # Maybe convert ground truth IDs to one-hot.
    390 if convert_to_one_hot:
--> 391     gt_image = convert_IDs_to_one_hot(gt_image, self.num_classes)
    392
    393 if to_disk: # If the processed data is to be written to disk instead of yieled.

~/fcn8s_tensorflow/helpers/ground_truth_conversion_utils.py in convert_IDs_to_one_hot(image, num_classes)
     86 unity_vectors = np.eye(num_classes, dtype=np.bool)
     87
---> 88 return unity_vectors[image]

IndexError: index 23 is out of bounds for axis 0 with size 20
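
For context on why the index goes out of bounds: convert_IDs_to_one_hot() builds an identity matrix of size num_classes and indexes it with the raw label IDs, so every pixel value in the ground truth must be smaller than num_classes. Raw Cityscapes label images contain IDs up to 33 (ID 23 is 'sky'), so with num_classes = 20 they need to be remapped to training IDs before the one-hot conversion. Below is a minimal sketch of such a remap in plain NumPy, with an illustrative (hypothetical) lookup table rather than the repository's own conversion helpers; the generate() signature above also has a convert_ids_to_ids argument, which looks like the intended hook for this kind of remapping.

import numpy as np

# Hypothetical lookup table mapping raw Cityscapes IDs (0-33) to training IDs (0-19).
# The concrete mapping depends on your class setup; 19 is used here as the void/background ID.
id_map = np.full(256, 19, dtype=np.uint8)
id_map[7] = 0     # 'road' -> training ID 0 (illustrative)
id_map[23] = 10   # 'sky'  -> training ID 10 (illustrative)
# ... fill in the remaining classes ...

def remap_ids(gt_image):
    # NumPy fancy indexing applies the lookup table to every pixel at once.
    return id_map[gt_image]

# After remapping, np.eye(num_classes)[remapped_gt] stays within bounds.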

index out of bounds

start_time = time.time()
images, gt_images = next(train_generator)
print('Time to generate one batch: {:.3f} seconds'.format(time.time() - start_time))
print('Number of images generated:' , len(images))
print('Number of ground truth images generated:' , len(gt_images))

While running this, I am getting:
IndexError: index 23 is out of bounds for axis 0 with size 20
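
In case it helps to narrow this down, a quick generic check (not part of this repository) is to look at which label IDs actually occur in one of the ground-truth images; any ID >= 20 will trip the one-hot conversion:

import numpy as np
from PIL import Image

# Hypothetical path; point this at one of your ground-truth label images.
gt = np.array(Image.open('path/to/some_gtFine_labelIds.png'))
print('Unique label IDs in this image:', np.unique(gt))
# IDs >= num_classes (20 here) must be remapped to training IDs before one-hot encoding.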

input shape

First off, great work.

I'm fiddling around with different input shapes on the pre-trained VGG-16, but can't really figure out what shape it expects...

Doesn't VGG-16 need a 224 x 224 input shape?

What happens when the input shape is different?
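
For what it's worth, since the fully connected layers of VGG-16 are convolutionalized in this implementation, the encoder itself is not tied to 224 x 224; the practical constraint is that the height and width survive the five 2x2 poolings and still line up with the pool3/pool4 skip connections in the decoder, which works out cleanly when both dimensions are multiples of 32. A small sketch of that arithmetic (plain Python, only to illustrate the constraint):

import math

def check_input_shape(height, width):
    # Illustrative check: FCN-8s shapes line up most cleanly when H and W are multiples of 32.
    for name, dim in [('height', height), ('width', width)]:
        if dim % 32 == 0:
            print('{} {} is fine (feature map size at pool5: {})'.format(name, dim, dim // 32))
        else:
            padded = 32 * math.ceil(dim / 32)
            print('{} {} is not a multiple of 32; consider padding/resizing to {}'.format(name, dim, padded))

check_input_shape(1024, 2048)  # Cityscapes full resolution -> 32 x 64 at pool5
check_input_shape(300, 300)    # not divisible by 32 -> skip-connection shapes can mismatch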

Incompatible shapes: [4,20,20,3] vs. [4,19,19,3]

This happens when training on a custom dataset using the .ipynb file. The custom dataset consists of roughly 7,000 images of size 300 x 300:

InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1326 try:
-> 1327 return fn(*args)
1328 except errors.OpError as e:

/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
1311 return self._call_tf_sessionrun(
-> 1312 options, feed_dict, fetch_list, target_list, run_metadata)
1313

/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
1419 self._session, options, feed_dict, fetch_list, target_list,
-> 1420 status, run_metadata)
1421

/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
515 compat.as_text(c_api.TF_Message(self.status.status)),
--> 516 c_api.TF_GetCode(self.status.status))
517 # Delete the underlying status object from memory otherwise it stays alive

InvalidArgumentError: Incompatible shapes: [4,20,20,3] vs. [4,19,19,3]
[[Node: optimizer/gradients/decoder/add_fc7_pool4_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](optimizer/gradients/decoder/add_fc7_pool4_grad/Shape, optimizer/gradients/decoder/add_fc7_pool4_grad/Shape_1)]]

During handling of the above exception, another exception occurred:

InvalidArgumentError Traceback (most recent call last)
in ()
31 summaries_dir='tensorboard_log/cityscapes',
32 summaries_name='configuration_01',
---> 33 training_loss_display_averaging=3)

~/Desktop/ai skin cancer/AI algorithms/fcn8s_tensorflow-master/fcn8s_tensorflow.py in train(self, train_generator, epochs, steps_per_epoch, learning_rate_schedule, keep_prob, l2_regularization, eval_dataset, eval_frequency, val_generator, val_steps, metrics, save_during_training, save_dir, save_best_only, save_tags, save_name, save_frequency, saver, monitor, record_summaries, summaries_frequency, summaries_dir, summaries_name, training_loss_display_averaging)
560 self.learning_rate: learning_rate,
561 self.keep_prob: keep_prob,
--> 562 self.l2_regularization_rate: l2_regularization})
563 training_writer.add_summary(summary=training_summary, global_step=self.g_step)
564 else:

/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
903 try:
904 result = self._run(None, fetches, feed_dict, options_ptr,
--> 905 run_metadata_ptr)
906 if run_metadata:
907 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1138 if final_fetches or final_targets or (handle and feed_dict_tensor):
1139 results = self._do_run(handle, final_targets, final_fetches,
-> 1140 feed_dict_tensor, options, run_metadata)
1141 else:
1142 results = []

/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1319 if handle is None:
1320 return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1321 run_metadata)
1322 else:
1323 return self._do_call(_prun_fn, handle, feeds, fetches)

/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1338 except KeyError:
1339 pass
-> 1340 raise type(e)(node_def, op, message)
1341
1342 def _extend_graph(self):

InvalidArgumentError: Incompatible shapes: [4,20,20,3] vs. [4,19,19,3]
[[Node: optimizer/gradients/decoder/add_fc7_pool4_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](optimizer/gradients/decoder/add_fc7_pool4_grad/Shape, optimizer/gradients/decoder/add_fc7_pool4_grad/Shape_1)]]

Caused by op 'optimizer/gradients/decoder/add_fc7_pool4_grad/BroadcastGradientArgs', defined at:
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/usr/local/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 486, in start
self.io_loop.start()
File "/usr/local/lib/python3.6/site-packages/tornado/platform/asyncio.py", line 127, in start
self.asyncio_loop.run_forever()
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 422, in run_forever
self._run_once()
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 1432, in _run_once
handle._run()
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/usr/local/lib/python3.6/site-packages/tornado/platform/asyncio.py", line 117, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.6/site-packages/tornado/stack_context.py", line 276, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/tornado/stack_context.py", line 276, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2662, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/usr/local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2785, in _run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2903, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 5, in
variables_load_dir=None)
File "/Users/mattfrohman/Desktop/ai skin cancer/AI algorithms/fcn8s_tensorflow-master/fcn8s_tensorflow.py", line 111, in init
self.total_loss, self.train_op, self.learning_rate, self.global_step = self._build_optimizer()
File "/Users/mattfrohman/Desktop/ai skin cancer/AI algorithms/fcn8s_tensorflow-master/fcn8s_tensorflow.py", line 257, in _build_optimizer
train_op = optimizer.minimize(total_loss, global_step=global_step, name='train_op')
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 399, in minimize
grad_loss=grad_loss)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 492, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 488, in gradients
gate_gradients, aggregation_method, stop_gradients)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 625, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 379, in _MaybeCompile
return grad_fn() # Exit early
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 625, in
lambda: grad_fn(op, *out_grads))
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/math_grad.py", line 845, in _AddGrad
rx, ry = gen_array_ops._broadcast_gradient_args(sx, sy)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 671, in _broadcast_gradient_args
"BroadcastGradientArgs", s0=s0, s1=s1, name=name)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3290, in create_op
op_def=op_def)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1654, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

...which was originally created as op 'decoder/add_fc7_pool4', defined at:
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
[elided 23 identical lines from previous traceback]
File "", line 5, in
variables_load_dir=None)
File "/Users/mattfrohman/Desktop/ai skin cancer/AI algorithms/fcn8s_tensorflow-master/fcn8s_tensorflow.py", line 108, in init
self.fcn8s_output, self.l2_regularization_rate = self._build_decoder()
File "/Users/mattfrohman/Desktop/ai skin cancer/AI algorithms/fcn8s_tensorflow-master/fcn8s_tensorflow.py", line 213, in _build_decoder
add_fc7_pool4 = tf.add(fc7_conv2d_trans, pool4_1x1, name='add_fc7_pool4')
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 296, in add
"Add", x=x, y=y, name=name)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3290, in create_op
op_def=op_def)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1654, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Incompatible shapes: [4,20,20,3] vs. [4,19,19,3]
[[Node: optimizer/gradients/decoder/add_fc7_pool4_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](optimizer/gradients/decoder/add_fc7_pool4_grad/Shape, optimizer/gradients/decoder/add_fc7_pool4_grad/Shape_1)]]
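
The mismatch is consistent with the 300 x 300 input: with 'SAME' pooling the spatial size goes 300 -> 150 -> 75 -> 38 -> 19 -> 10, so pool4 comes out at 19 x 19 while fc7 (10 x 10) upsampled by 2 comes out at 20 x 20, and the add_fc7_pool4 skip connection fails on the incompatible shapes. A hedged workaround (not taken from the repository) is to pad or resize the images, and their ground truth, to the nearest multiple of 32 before feeding them in, for example:

import numpy as np

def pad_to_multiple_of_32(image):
    # Illustrative sketch: zero-pad an H x W x C array so H and W become multiples of 32.
    h, w = image.shape[:2]
    pad_h = (32 - h % 32) % 32
    pad_w = (32 - w % 32) % 32
    return np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode='constant')

padded = pad_to_multiple_of_32(np.zeros((300, 300, 3), dtype=np.uint8))
print(padded.shape)  # (320, 320, 3): at 320 x 320, pool4 and the upsampled fc7 output both land on 20 x 20

Alternatively, the batch generator's resize / random_crop / crop arguments (visible in its signature in the other issue above) could be used to bring the images to a 32-divisible size.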

Is there any way to count pixels by class?

Hello!
I have a question about your code.
I have done segmentation on my image dataset, and I want to count the pixels by class.

For example, the total number of pixels is 131,072 (512 x 256), and in the whole set of images there are 200 vegetation pixels and 400 car pixels.

Could you let me know if you have any information about this?
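
For what it's worth, once predicted (or ground-truth) label ID maps are available, per-class pixel counts can be computed with plain NumPy; a small generic sketch with hypothetical variable names (not part of this repository):

import numpy as np

# `predictions` is assumed to hold predicted label IDs, e.g. with shape
# (num_images, height, width) after taking an argmax over the class dimension.
predictions = np.random.randint(0, 20, size=(10, 256, 512))  # dummy data for illustration

counts = np.bincount(predictions.ravel(), minlength=20)
for class_id, count in enumerate(counts):
    print('class {:2d}: {:8d} pixels'.format(class_id, count))

# Each 512 x 256 image contributes 131,072 pixels, so the counts sum to num_images * 131,072.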

GraphDef cannot be larger than 2GB.

TensorFlow Version: 1.3.0
INFO:tensorflow:Restoring parameters from b'C:/Users/uidk4301/Desktop/fcn8s_tensorflow-master/VGG-16_mod2FCN_ImageNet-Classification/variables\variables'


ValueError Traceback (most recent call last)
in ()
3 vgg16_dir='C:/Users/uidk4301/Desktop/fcn8s_tensorflow-master/VGG-16_mod2FCN_ImageNet-Classification/',
4 num_classes=num_classes,
----> 5 variables_load_dir=None)

C:\Users\uidk4301\Desktop\fcn8s_tensorflow-master\fcn8s_tensorflow.py in __init__(self, model_load_dir, tags, vgg16_dir, num_classes, variables_load_dir)
104
105 # Load the pretrained convolutionalized VGG-16 model as our encoder.
--> 106 self.image_input, self.keep_prob, self.pool3_out, self.pool4_out, self.fc7_out = self._load_vgg16()
107 # Build the decoder on top of the VGG-16 encoder.
108 self.fcn8s_output, self.l2_regularization_rate = self._build_decoder()

C:\Users\uidk4301\Desktop\fcn8s_tensorflow-master\fcn8s_tensorflow.py in _load_vgg16(self)
132 # 1: Load the model
133
--> 134 tf.saved_model.loader.load(sess=self.sess, tags=[self.vgg16_tag], export_dir=self.vgg16_dir)
135
136 # 2: Return the tensors of interest

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\saved_model\loader_impl.py in load(sess, tags, export_dir, **saver_kwargs)
224
225 # Restore the variables using the built saver in the provided session.
--> 226 saver.restore(sess, variables_path)
227 else:
228 tf_logging.info("The specified SavedModel has no variables; no "

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py in restore(self, sess, save_path)
1558 logging.info("Restoring parameters from %s", save_path)
1559 sess.run(self.saver_def.restore_op_name,
-> 1560 {self.saver_def.filename_tensor_name: save_path})
1561
1562 @staticmethod

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
893 try:
894 result = self._run(None, fetches, feed_dict, options_ptr,
--> 895 run_metadata_ptr)
896 if run_metadata:
897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1122 if final_fetches or final_targets or (handle and feed_dict_tensor):
1123 results = self._do_run(handle, final_targets, final_fetches,
-> 1124 feed_dict_tensor, options, run_metadata)
1125 else:
1126 results = []

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1319 if handle is None:
1320 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1321 options, run_metadata)
1322 else:
1323 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
1325 def _do_call(self, fn, *args):
1326 try:
-> 1327 return fn(*args)
1328 except errors.OpError as e:
1329 message = compat.as_text(e.message)

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1295 run_metadata):
1296 # Ensure any changes to the graph are reflected in the runtime.
-> 1297 self._extend_graph()
1298 with errors.raise_exception_on_not_ok_status() as status:
1299 if self._created_with_new_api:

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _extend_graph(self)
1351 graph_def, self._current_version = self._graph._as_graph_def(
1352 from_version=self._current_version,
-> 1353 add_shapes=self._add_shapes)
1354 # pylint: enable=protected-access
1355

C:\Users\uidk4301\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in _as_graph_def(self, from_version, add_shapes)
2444 bytesize += op.node_def.ByteSize()
2445 if bytesize >= (1 << 31) or bytesize < 0:
-> 2446 raise ValueError("GraphDef cannot be larger than 2GB.")
2447 if self._functions:
2448 for f in self._functions.values():

ValueError: GraphDef cannot be larger than 2GB.

I am getting this while loading the model.
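
One common cause of this error in a notebook setting is running the graph-construction cell more than once in the same kernel: each run loads another copy of the VGG-16 SavedModel into the default graph until the serialized GraphDef exceeds protobuf's 2 GB limit. A hedged sketch of a workaround, assuming that is what is happening here (TF 1.x API; restarting the kernel has the same effect):

import tensorflow as tf

# Make sure the default graph is empty before (re)building the model, so repeated runs
# of the construction cell do not keep accumulating nodes and weights.
tf.reset_default_graph()

# ...then construct the model exactly as in the notebook (the constructor call shown in
# the traceback above), so that only a single copy of the VGG-16 graph is loaded.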
