Fully Connected DenseNet for Image Segmentation (https://arxiv.org/pdf/1611.09326v1.pdf)
My stack: Keras 1.2.2 + TensorFlow on Python 3.6; keras.json is configured correctly with dim_ordering='tf'.
ValueError Traceback (most recent call last)
in ()
----> 1 dn.fit(x=mini_trn, y=y_trn, batch_size=32, validation_data=[mini_val, y_val])
/home/zwerg/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch)
1114 class_weight=class_weight,
1115 check_batch_axis=False,
-> 1116 batch_size=batch_size)
1117 # prepare validation data
1118 if validation_data:
/home/zwerg/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
1031 output_shapes,
1032 check_batch_axis=False,
-> 1033 exception_prefix='model target')
1034 sample_weights = standardize_sample_weights(sample_weight,
1035 self.output_names)
/home/zwerg/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
110 ' to have ' + str(len(shapes[i])) +
111 ' dimensions, but got array with shape ' +
--> 112 str(array.shape))
113 for j, (dim, ref_dim) in enumerate(zip(array.shape, shapes[i])):
114 if not j and not check_batch_axis:
ValueError: Error when checking model target: expected merge_384 to have 4 dimensions, but got array with shape (432, 8)
I created a DenseNet using
dn = DenseNetFCN(IMAGE_SIZE + (3,), classes=NUM_CLASSES, include_top=False, input_tensor=Input(IMAGE_SIZE + (3,)))
with IMAGE_SIZE = (224, 224). The image tensor shape is (432, 224, 224, 3).
The merge layer in question, merge_384, is configured as follows:
m384 = dn.get_layer('merge_384').get_config()
{'arguments': {},
'concat_axis': -1,
'dot_axes': -1,
'mode': 'concat',
'mode_type': 'raw',
'name': 'merge_384',
'output_mask': None,
'output_mask_type': 'raw',
'output_shape': None,
'output_shape_type': 'raw'}
Do you have any pointers as to why I might be getting this? I suspect some issue with the backend, but Keras is correctly configured to use TensorFlow.
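The traceback itself points at the likely cause rather than the backend: DenseNetFCN is a per-pixel segmentation model, so its target must be 4-D (e.g. (batch, height, width, classes) with 'tf' dim_ordering), while y_trn here is 2-D class labels of shape (432, 8). A minimal sketch of converting integer label masks to per-pixel one-hot targets (masks_to_targets and the toy shapes below are my own illustration, not from the repo):

```python
import numpy as np

def masks_to_targets(masks, num_classes):
    """masks: (batch, H, W) integer class ids -> (batch, H, W, num_classes) one-hot."""
    return np.eye(num_classes, dtype=np.float32)[masks]

# Toy example: 4 masks of 224x224 pixels, 8 classes
masks = np.random.randint(0, 8, size=(4, 224, 224))
y = masks_to_targets(masks, num_classes=8)
print(y.shape)  # (4, 224, 224, 8)
```

If the labels really are one class per whole image, a classification DenseNet rather than the FCN variant would be the appropriate model.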
The depth_to_scale_tf function does not split the channels appropriately. Line 48 of layers.py:
Xc = tf.split(3, 3, input)
should be replaced by:
Xc = tf.split(3, channels, input)
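For context, the subpixel (pixel-shuffle) upscaling layer holds channels * r**2 feature maps and must split the channel axis into `channels` groups of r**2 maps each, so the hard-coded 3 only works when channels == 3. A NumPy analogue of the corrected split (illustrative only, not the repo's TF code):

```python
import numpy as np

# 16 feature maps = 4 output channels * r**2 with r = 2
x = np.zeros((1, 8, 8, 16))
channels, r = 4, 2

# Split the channel axis into `channels` groups of r**2 maps each,
# one group per output channel of the upscaled image.
groups = np.split(x, channels, axis=3)
print(len(groups), groups[0].shape)  # 4 (1, 8, 8, 4)
```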
It looks like transition_down_block uses AveragePooling2D when it should use MaxPooling2D, as specified in section 3.3 (Semantic Segmentation Architecture) of the paper.
Though perhaps that's worth making configurable with a flag.
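A flag along those lines could look like the following sketch, which uses a plain NumPy 2x2 pooling stand-in (pool_2x2 and its pooling argument are hypothetical, not the repo's Keras code) to show how max pooling (as in section 3.3 of the paper) or average pooling would be selected:

```python
import numpy as np

def pool_2x2(x, pooling='max'):
    """x: (H, W) array with even H, W; returns the (H//2, W//2) pooled map."""
    h, w = x.shape
    # Group the array into non-overlapping 2x2 blocks, then reduce each block
    blocks = x.reshape(h // 2, 2, w // 2, 2)
    reduce_fn = np.max if pooling == 'max' else np.mean
    return reduce_fn(blocks, axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool_2x2(x, 'max'))  # [[ 5.  7.] [13. 15.]]
print(pool_2x2(x, 'avg'))  # [[ 2.5  4.5] [10.5 12.5]]
```

In the Keras block itself the same flag would simply choose between MaxPooling2D and AveragePooling2D.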
I tried training this in TF and ran into the following bug:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0)
Running with tf training, initializing batches...
creating densenet model
Traceback (most recent call last):
File "densenet_fcn.py", line 134, in <module>
model = get_model(image_train_size=image_train_size,model_type=model_type,tensor=image_batch)
File "densenet_fcn.py", line 50, in get_model
model = densenet_fc.create_fc_dense_net(number_of_classes,image_train_size)
File "/home/ahundt/src/tf-image-segmentation/tf_image_segmentation/models/densenet_fcn/densenet_fc.py", line 260, in create_fc_dense_net
x = transition_up_block(x, nb_filters=upsampling_conv, type=upscaling_type, output_shape=out_shape)
File "/home/ahundt/src/tf-image-segmentation/tf_image_segmentation/models/densenet_fcn/densenet_fc.py", line 97, in transition_up_block
x = SubPixelUpscaling(r=2, channels=int(nb_filters // 4))(x)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.2.2-py2.7.egg/keras/engine/topology.py", line 572, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.2.2-py2.7.egg/keras/engine/topology.py", line 635, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.2.2-py2.7.egg/keras/engine/topology.py", line 166, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/home/ahundt/src/tf-image-segmentation/tf_image_segmentation/models/densenet_fcn/layers.py", line 70, in call
y = depth_to_scale_tf(x, self.r, self.channels)
File "/home/ahundt/src/tf-image-segmentation/tf_image_segmentation/models/densenet_fcn/layers.py", line 48, in depth_to_scale_tf
Xc = tf.split(3, channels, input)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1216, in split
split_dim=axis, num_split=num_or_size_splits, value=value, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 3426, in _split
num_split=num_split, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 509, in apply_op
(prefix, dtypes.as_dtype(input_arg.type).name))
TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.
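This TypeError is the classic symptom of the tf.split signature change: TensorFlow < 1.0 used tf.split(split_dim, num_split, value), while TensorFlow >= 1.0 uses tf.split(value, num_or_size_splits, axis), so an old-style call passes the float32 feature tensor where the new API expects an integer. A sketch of the call under the new signature (toy shapes, assuming channels-last ordering):

```python
import tensorflow as tf

# TF >= 1.0 signature: tf.split(value, num_or_size_splits, axis)
x = tf.zeros([1, 8, 8, 16])                        # toy feature map, 16 channels
parts = tf.split(x, num_or_size_splits=4, axis=3)  # 4 groups along the channel axis
print(len(parts), parts[0].shape)
```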
The code successfully trains for a while on Pascal VOC with a simpler U-Net:
https://github.com/ahundt/tf-image-segmentation/blob/ahundt-keras/tf_image_segmentation/recipes/pascal_voc/FCNs/densenet_fcn.py
Have you trained and tested with this? I don't see anything to load a semantic segmentation dataset.
When using newer versions of Keras, the DenseNet implementation here raises deprecation warnings for some of the Keras calls it uses. It still works for now, but the warning states that, from 08/2017, it won't.
Do you have any dataset and/or working example for using the given model? I am trying to use it on the CamVid dataset from the original paper, without success.
Hi,
Thanks for your code.
Have you trained the model yourself? Could you release the corresponding weight file, please? :)
I ran the code as given in the README (import and just define the model), but it gives a dimension error saying the input dimension should be at least 32x32.
I reversed the input order from (32, 32, 1) to (1, 32, 32).
That error went away, but the following one occurred:
ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 64, 2, 2), (None, 368, 3, 3)]
What might the problem be? Could you help?
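Rather than reversing the input shape by hand, the keras.json dimension ordering usually needs to match the backend: the "at least 32x32" error followed by the concat-shape mismatch is the typical symptom of a TensorFlow backend running with Theano-style 'th' (channels-first) ordering. A keras.json (in ~/.keras/keras.json) matching a TensorFlow backend under Keras 1.x would look like:

```json
{
    "image_dim_ordering": "tf",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
```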
I get the following error when I try to run the script.
`NameError Traceback (most recent call last)
in ()
1 import densenet_fc as dc
2
----> 3 model = DenseNetFCN((64, 64, 3), nb_dense_block=5, growth_rate=16,
4 nb_layers_per_block=4, upsampling_type='upsampling', classes=1)
NameError: name 'DenseNetFCN' is not defined`
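The NameError is not about the model itself: the script imports the module under the alias dc but then calls DenseNetFCN unqualified, so the fix is either dc.DenseNetFCN(...) or `from densenet_fc import DenseNetFCN`. The same failure mode, illustrated with a stdlib module so it is runnable anywhere:

```python
import math as m  # only the alias `m` is bound, not the module's names

try:
    sqrt(16.0)             # NameError: name 'sqrt' is not defined
except NameError:
    result = m.sqrt(16.0)  # qualified access through the alias works
print(result)  # 4.0
```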
Once #7 is committed and verified to work with Theano, I'm hoping to look at extending your keras-contrib densenet.py submission to support DenseNet-FCN as well.
Would that be okay with you?
I'll plan to also include a conversion script for pascal_voc.