charlesshang / fastmaskrcnn

Mask RCNN in TensorFlow

License: Apache License 2.0

Python 93.46% Makefile 0.06% C++ 2.97% C 2.35% Shell 0.02% Cuda 1.14%
maskrcnn rcnn fasterrcnn tensorflow detection segmentation pfn coco

fastmaskrcnn's Introduction

Mask RCNN

Mask RCNN in TensorFlow

This repo attempts to reproduce this amazing work by Kaiming He et al.: Mask R-CNN

Requirements

How-to

  1. Go to ./libs/datasets/pycocotools and run make.
  2. Download the COCO dataset, place it in ./data, then run python download_and_convert_data.py to build the tf-records. It takes a while.
  3. Download the pretrained ResNet-50 model with wget http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz, extract it, and place it in ./data/pretrained_models/.
  4. Go to ./libs and run make.
  5. Run python train/train.py to start training.
  6. There are certainly some bugs; please report them back, and let's solve them together.

TODO:

  • ROIAlign
  • COCO Data Provider
  • Resnet50
  • Feature Pyramid Network
  • Anchor and ROI layer
  • Mask layer
  • Speed up the anchor layer with Cython
  • Combining all modules together.
  • Testing and debugging (in progress)
  • Training / evaluation on COCO
  • Add image summary to show some results
  • Converting ResNeXt
  • Training >2 images

Call for contributions

  • Anything that helps this repo: discussion, testing, promotion, and of course your awesome code.

Acknowledgment

This repo borrows tons of code from

License

See LICENSE for details.

fastmaskrcnn's People

Contributors

amirbar, anner-dejong, charlesshang, jjgarrett0, kongsea, santisy, souryuu, thefoxofsky, ycjing

fastmaskrcnn's Issues

ROIAlign wrong??

When I run data_test.py, it goes wrong as follows:

Traceback (most recent call last):
File "unit_test/data_test.py", line 48, in
feat = ROIAlign(image, boxes, False, 16, 7, 7)
File "unit_test/../libs/layers/crop.py", line 42, in crop
with tf.control_dependencies([assert_op, images, batch_inds]):
File "/home/tuku/.jumbo/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3595, in control_dependencies
return get_default_graph().control_dependencies(control_inputs)
File "/home/tuku/.jumbo/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3324, in control_dependencies
c = self.as_graph_element(c)
File "/home/tuku/.jumbo/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2414, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/home/tuku/.jumbo/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2503, in _as_graph_element_locked
% (type(obj).name, types_str))
TypeError: Can not convert a bool into a Tensor or Operation.

Can anyone tell me why? Thanks.
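A likely cause, sketched below (an assumption drawn from the traceback, not a verified fix for this repo): tf.control_dependencies only accepts Tensors and Operations, so the error appears when crop.py puts a plain Python bool (e.g. a disabled assertion flag) into the dependency list. The names verify and crop_safely below are illustrative, not the repo's.

    import tensorflow as tf

    def crop_safely(images, batch_inds, verify=False):
        # Only Tensors/Operations may go into tf.control_dependencies;
        # a Python bool in this list raises the TypeError shown above.
        deps = [images, batch_inds]
        if verify:
            assert_op = tf.Assert(tf.greater(tf.shape(images)[0], 0), [tf.shape(images)])
            deps.append(assert_op)
        with tf.control_dependencies(deps):
            return tf.identity(images)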

Code stops running after arbitrary number of iterations in train .py

This is the entire error that the code throws; the number of iterations after which it stops changes every time I run the code.

iter 608: image-id:0022482, time:22.124(sec), regular_loss: 0.248815, total-loss 4425.7246(94.0396, 4331.6851, 0.000000, 0.0000, 0.0000), instances: 13, batch:(32|136, 0|64, 0|0)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/script_ops.py", line 82, in call
ret = func(*args)
File "train/../libs/layers/sample.py", line 99, in sample_rpn_outputs_wrt_gt_boxes
gt_argmax_overlaps = overlaps.argmax(axis=0) # G
ValueError: attempt to get argmax of an empty sequence
2017-05-02 20:46:23.686108: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_5: see error log.
Traceback (most recent call last):
File "train/train.py", line 221, in
train()
File "train/train.py", line 173, in train
batch_info )
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Failed to run py callback pyfunc_5: see error log.
[[Node: pyramid_1/SampleBoxesWithGT/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_BOOL], Tout=[DT_FLOAT, DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT, DT_INT32], token="pyfunc_5", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/AnchorDecoder/Reshape, pyramid_1/strided_slice_8, concat, pyramid_1/SampleBoxesWithGT/PyFunc/input_3)]]

Caused by op u'pyramid_1/SampleBoxesWithGT/PyFunc', defined at:
File "train/train.py", line 221, in
train()
File "train/train.py", line 120, in train
loss_weights=[0.2, 0.2, 1.0, 0.2, 1.0])
File "train/../libs/nets/pyramid_network.py", line 536, in build
is_training=is_training, gt_boxes=gt_boxes)
File "train/../libs/nets/pyramid_network.py", line 246, in build_heads
sample_rpn_outputs_with_gt(rois, rpn_probs[:, 1], gt_boxes, is_training=is_training)
File "train/../libs/layers/wrapper.py", line 132, in sample_with_gt_wrapper
[tf.float32, tf.float32, tf.int32, tf.float32, tf.float32, tf.int32])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/script_ops.py", line 189, in py_func
input=inp, token=token, Tout=Tout, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_script_ops.py", line 40, in _py_func
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in init
self._traceback = _extract_stack()

InternalError (see above for traceback): Failed to run py callback pyfunc_5: see error log.
[[Node: pyramid_1/SampleBoxesWithGT/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_BOOL], Tout=[DT_FLOAT, DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT, DT_INT32], token="pyfunc_5", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/AnchorDecoder/Reshape, pyramid_1/strided_slice_8, concat, pyramid_1/SampleBoxesWithGT/PyFunc/input_3)]]

Can someone help me out here?
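A hedged guard sketch (safe_gt_argmax is my name, not the repo's): the ValueError comes from calling argmax on an empty overlaps array, which happens when no proposals survive filtering for an image. Checking for the empty case before the argmax avoids the crash, though skipping such images entirely is probably the better long-term fix.

    import numpy as np

    def safe_gt_argmax(overlaps):
        # overlaps has shape (num_rois, num_gt); it is empty when no proposals survive.
        if overlaps.shape[0] == 0:
            return np.array([], dtype=np.int64)
        return overlaps.argmax(axis=0)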

Issue about train.py: how long will the training take?

So far I have trained for 170k iterations (~70 h), but it seems that FLAGS.max_iters is set to 2500k in the code. That is to say, I would need more than a month to train the model.
Has anyone finished the training and then tested the model? I would like to know your training speed or process.
Thank you!

When I run unit_test/data_test.py, it shows an error: TypeError: Can not convert a bool into a Tensor or Operation

File "/home/JKe/sunshihua/FastMaskRCNN-master/unit_test/data_test.py", line 64, in
feat = ROIAlign(image, boxes, False, 16, 7, 7)
File "/home/JKe/sunshihua/FastMaskRCNN-master/libs/layers/crop.py", line 59, in crop
with tf.control_dependencies([assert_op, images, batch_inds]):
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3776, in control_dependencies
return get_default_graph().control_dependencies(control_inputs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3505, in control_dependencies
c = self.as_graph_element(c)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2578, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2667, in _as_graph_element_locked
% (type(obj).name, types_str))
TypeError: Can not convert a bool into a Tensor or Operation.

Testing data

Hi, can anyone please guide me on how to test the network on a test dataset? I have been trying for hours and still have no results.

About im_batch

Hello,

In your code, im_batch is always set to 1, but the original Mask R-CNN uses im_batch=2. Can you fix it?

Invalid argument error: 'module' object not callable in test/resnet50_test.py

While running python test/resnet50_test.py
I get the following error

`exceptions.TypeError: 'module' object is not callable
[[Node: pyramid_1/SampleBoxes/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_BOOL], Tout=[DT_FLOAT, DT_FLOAT], token="pyfunc_2", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/AnchorDecoder/Reshape, pyramid_1/AnchorDecoder/Reshape_2, pyramid_1/SampleBoxes/PyFunc/input_2)]]

Caused by op u'pyramid_1/SampleBoxes/PyFunc', defined at:
File "test/resnet50_test2.py", line 57, in
outputs = pyramid_network.build_heads(pyramid, ih, iw, num_classes=81, base_anchors=15, is_training=True)
File "test/../libs/nets/pyramid_network.py", line 161, in build_heads
rois, scores = sample_rpn_outputs(rois, scores)
File "test/../libs/layers/wrapper.py", line 116, in sample_wrapper
[tf.float32, tf.float32])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/script_ops.py", line 198, in py_func
input=inp, token=token, Tout=Tout, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_script_ops.py", line 40, in _py_func
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in init
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): exceptions.TypeError: 'module' object is not callable
[[Node: pyramid_1/SampleBoxes/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_BOOL], Tout=[DT_FLOAT, DT_FLOAT], token="pyfunc_2", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/AnchorDecoder/Reshape, pyramid_1/AnchorDecoder/Reshape_2, pyramid_1/SampleBoxes/PyFunc/input_2)]] `

in the following line
sess.run([update_op, total_loss], options = run_options, run_metadata= run_metadata)

Can you please help me with what the issue is?
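A minimal reproduction of this error pattern, as a hedged guess at the cause (not verified against the repo): tf.py_func was handed a module object instead of a function, e.g. an import of libs/layers/sample as a module rather than the sample_rpn_outputs function defined inside it.

    import tensorflow as tf
    import json  # any module works for the demonstration

    x = tf.constant([1.0, 2.0])
    # tf.py_func(json, [x], tf.float32)                    # TypeError: 'module' object is not callable
    out = tf.py_func(lambda a: a * 2.0, [x], tf.float32)   # a function (callable) works

    with tf.Session() as sess:
        print(sess.run(out))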

About Multi-GPU version

Hello Everyone,
I am trying to write a multi-GPU training version of this repository, referring to this code: cifar-multi-gpu.
However, it always ends up with None gradients from opt.compute_gradients.

    with tf.variable_scope(tf.get_variable_scope()) as tower_graph:
      for i in xrange(FLAGS.num_gpus):
        with tf.device('/gpu:%d' % i):

          # should this be outside the loop or not? (input pipeline work)

          with tf.name_scope('%s_%d' % ('tower', i)) as scope:

            input_list = coco.read(file_name_list) #image, ih, iw, gt_boxes, gt_masks, num_instances, img_id 
            input_list = list(input_list)
            input_list[0], input_list[3], input_list[4] = coco_preprocess.preprocess_image(input_list[0], input_list[3], input_list[4], is_training=True)

            with slim.arg_scope(resnet_v1.resnet_arg_scope()):
              logits, end_points = resnet50(input_list[0], 1000, is_training=False)
              
            loss = tower_loss(scope, input_list, end_points)

            # Reuse variables for the next tower.
            tf.get_variable_scope().reuse_variables()

            # Retain the summaries from the final tower.
            summaries = tf.get_collection(tf.GraphKeys.SUMMARIES, scope)

            # Calculate the gradients for the batch of data on this CIFAR tower.
            grads = opt.compute_gradients(loss)

            # Keep track of the gradients across all towers.
            tower_grads.append(grads)


    # We must calculate the mean of each gradient. Note that this is the
    # synchronization point across all towers.
    grads = average_gradients(tower_grads)

My understanding of the mechanism is that name_scope distinguishes the towers so their gradients are computed separately, while all variables are reused across towers and updated at once with the averaged gradients. I think the main problem is the resnet50: because of the different name scopes, the end_points names change in every tower, so I updated the dictionary by passing the scope name. However, I still cannot get valid gradients. Does anyone have any idea?
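For reference, a minimal sketch of the averaging step from the cifar10 multi-GPU pattern the post refers to (assuming every tower reports gradients for the same variables in the same order; a single None gradient in any tower breaks this, which matches the symptom described above):

    import tensorflow as tf

    def average_gradients(tower_grads):
        average_grads = []
        for grad_and_vars in zip(*tower_grads):
            # grad_and_vars is ((grad_tower0, v), (grad_tower1, v), ...) for one variable v.
            grads = [tf.expand_dims(g, 0) for g, _ in grad_and_vars]
            grad = tf.reduce_mean(tf.concat(grads, axis=0), axis=0)
            average_grads.append((grad, grad_and_vars[0][1]))
        return average_grads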

problems about FPN

I'm trying to implement FPN in Caffe. However, I find that a large image input size (800*1200 rather than 600*1000) leads to a large training loss and the RPN performance drops (about 5% mAP on the VOC2007 test set, using a single res4 layer for the RPN). Do you encounter similar problems?
Best!
guangxing.

HELP download_and_convert_data.py

There are some errors when I run download_and_convert_data.py.
..................
File "libs/datasets/pycocotools/coco.py", line 79, in init
dataset = json.load(open(annotation_file, 'r'))
IOError: [Errno 2] No such file or directory: 'data/coco/annotations/instances_train2014.json'

I am new to Python. I need help, thanks very much. By the way, I also do not know how to organize the COCO data.
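A hedged sanity check of the layout the converter appears to expect (inferred from the missing-file error above and step 2 of the README; the exact paths are my assumption, not official documentation):

    import os

    expected = [
        'data/coco/annotations/instances_train2014.json',
        'data/coco/annotations/instances_val2014.json',
        'data/coco/train2014',
        'data/coco/val2014',
    ]
    for p in expected:
        print(p, '->', 'OK' if os.path.exists(p) else 'MISSING')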

assign_boxes (in wrapper.py) wrong?

According to the usage of the function assign_boxes shown below, I'm wondering if the two loops (over tensors and over layers) are in the wrong order:

    [assigned_rois, assigned_batch_inds, assigned_layer_inds] = \
                 assign_boxes(rois, [rois, batch_inds], [2, 3, 4, 5])

    def assign_boxes(gt_boxes, tensors, layers, scope='AssignGTBoxes'):
        with tf.name_scope(scope) as sc:
            min_k = layers[0]
            max_k = layers[-1]
            assigned_layers = tf.py_func(assign.assign_boxes,
                                         [gt_boxes, min_k, max_k],
                                         tf.int32)
            assigned_layers = tf.reshape(assigned_layers, [-1])

            assigned_tensors = []
            for t in tensors:       ########## changed to: for l in layers:
                split_tensors = []
                for l in layers:    ########## for t in tensors:
                    tf.cast(l, tf.int32)
                    inds = tf.where(tf.equal(assigned_layers, l))
                    inds = tf.reshape(inds, [-1])
                    split_tensors.append(tf.gather(t, inds))
                assigned_tensors.append(split_tensors)

            return assigned_tensors + [assigned_layers]
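For what it's worth, a toy NumPy check of what the existing loop order returns (my reading of the snippet above, not a verified statement about the repo): the outer loop over tensors yields one per-layer split list per tensor, plus assigned_layers, which matches the three-way unpacking in the usage at the top of the snippet.

    import numpy as np

    layers = [2, 3, 4, 5]
    assigned_layers = np.array([2, 3, 5, 2])            # layer assignment per box
    rois = np.arange(16).reshape(4, 4).astype(np.float32)
    batch_inds = np.zeros(4, dtype=np.int32)

    assigned_tensors = []
    for t in [rois, batch_inds]:                        # outer loop over tensors
        split_tensors = [t[assigned_layers == l] for l in layers]
        assigned_tensors.append(split_tensors)

    outputs = assigned_tensors + [assigned_layers]
    print(len(outputs))                                 # 3: rois splits, batch_inds splits, assigned_layers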

What's my issue when I run train.py?

What my issue is:

  • 1. My 'make' in ./libs gave the following output:

    python setup.py build_ext --inplace
    running build_ext
    skipping 'boxes/bbox.c' Cython extension (up-to-date)
    skipping 'boxes/cython_anchor.c' Cython extension (up-to-date)
    skipping 'boxes/cython_bbox_transform.c' Cython extension (up-to-date)
    skipping 'boxes/nms.c' Cython extension (up-to-date)
    skipping 'nms/cpu_nms.c' Cython extension (up-to-date)
    skipping 'nms/gpu_nms.cpp' Cython extension (up-to-date)
    rm -rf build
    sh make.sh
    make[1]: Entering directory /home/king/Documents/macqueen/FastMaskRCNN/libs/datasets/pycocotools
    python setup.py build_ext --inplace
    running build_ext
    rm -rf build
    make[1]: Leaving directory /home/king/Documents/macqueen/FastMaskRCNN/libs/datasets/pycocotools
    /home/king/Documents/macqueen/FastMaskRCNN/libs

  • I don't know whether this is right.

  • And when I run train.py I get this error:

`
Traceback (most recent call last):

File "train/train.py", line 187, in
train()
File "train/train.py", line 100, in train
is_training=True)
File "train/../libs/datasets/dataset_factory.py", line 18, in get_dataset
image, gt_boxes, gt_masks = coco_preprocess.preprocess_image(image, gt_boxes, gt_masks, is_training)
File "train/../libs/preprocessings/coco_v1.py", line 23, in preprocess_image
return preprocess_for_training(image, gt_boxes, gt_masks)
File "train/../libs/preprocessings/coco_v1.py", line 38, in preprocess_for_training
lambda: (image, gt_boxes, gt_masks))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1710, in cond
orig_res, res_t = context_t.BuildCondBranch(fn1)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1613, in BuildCondBranch
r = fn()
File "train/../libs/preprocessings/coco_v1.py", line 35, in
lambda: (preprocess_utils.flip_image(image),
File "train/../libs/preprocessings/utils.py", line 170, in flip_image
return tf.reverse(image, axis=[1])
TypeError: reverse() got an unexpected keyword argument 'axis'
`
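A hedged workaround sketch: tf.reverse(image, axis=[1]) is the TensorFlow >= 1.0 signature, while older 0.x releases expect a boolean dims argument instead, which produces the "unexpected keyword argument 'axis'" error above. Upgrading TensorFlow is the cleanest fix; alternatively, flip_image can use the image op, which accepts a 3-D image in both old and new APIs:

    import tensorflow as tf

    def flip_image(image):
        # Horizontal flip without relying on the changed tf.reverse signature.
        return tf.image.flip_left_right(image)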

RuntimeWarning: overflow encountered in exp

Running test.resnet50_test, I encounter this issue.

test/../libs/boxes/bbox_transform.py:61: RuntimeWarning: overflow encountered in exp pred_w = np.exp(dw) * widths[:, np.newaxis]

test/../libs/boxes/bbox_transform.py:61: RuntimeWarning: overflow encountered in multiply pred_w = np.exp(dw) * widths[:, np.newaxis]
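This warning is usually harmless early in training, when the regression outputs are still badly scaled. A common guard used in other Faster R-CNN ports (an assumption that it applies here too, not this repo's own fix) is to clip dw/dh before the exponential in bbox_transform.py:

    import numpy as np

    BBOX_XFORM_CLIP = np.log(1000.0 / 16.0)  # cap the predicted size ratio

    def safe_pred_sizes(dw, dh, widths, heights):
        dw = np.minimum(dw, BBOX_XFORM_CLIP)
        dh = np.minimum(dh, BBOX_XFORM_CLIP)
        pred_w = np.exp(dw) * widths[:, np.newaxis]
        pred_h = np.exp(dh) * heights[:, np.newaxis]
        return pred_w, pred_h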

Does anyone successfully train the model yet?

With a GTX 1080, it takes 24 hours to train 22k iterations, but it seems to need 1500k iterations, which means it will take more than TWO MONTHS to train the model!!!

INTOLERABLE!!!

argparse.ArgumentError: argument --dataset_name: conflicting option string(s): --dataset_name

I got this error when I run "python download_and_convert_data.py". Can anyone help me out? Thanks in advance.

File "./download_and_convert_data.py", line 15, in
'The name of the dataset to convert, one of "cifar10", "flowers", "mnist".')
File "/home/xli47/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/flags.py", line 80, in DEFINE_string
_define_helper(flag_name, default_value, docstring, str)
File "/home/xli47/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/flags.py", line 65, in _define_helper
type=flagtype)
File "/home/xli47/anaconda2/lib/python2.7/argparse.py", line 1308, in add_argument
return self._add_action(action)
File "/home/xli47/anaconda2/lib/python2.7/argparse.py", line 1682, in _add_action
self._optionals._add_action(action)
File "/home/xli47/anaconda2/lib/python2.7/argparse.py", line 1509, in _add_action
action = super(_ArgumentGroup, self)._add_action(action)
File "/home/xli47/anaconda2/lib/python2.7/argparse.py", line 1322, in _add_action
self._check_conflict(action)
File "/home/xli47/anaconda2/lib/python2.7/argparse.py", line 1460, in _check_conflict
conflict_handler(action, confl_optionals)
File "/home/xli47/anaconda2/lib/python2.7/argparse.py", line 1467, in _handle_conflict_error
raise ArgumentError(action, message % conflict_string)
argparse.ArgumentError: argument --dataset_name: conflicting option string(s): --dataset_name
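A minimal reproduction of the pattern behind this traceback (my assumption about the cause, not a confirmed diagnosis of this repo): tf.app.flags keeps a single global argparse parser, so defining the same flag twice, for example once in download_and_convert_data.py and again in an imported module, or by running the script twice in the same interpreter session, raises exactly this conflicting-option error.

    import tensorflow as tf

    tf.app.flags.DEFINE_string('dataset_name', 'coco', 'Defined once: fine.')
    tf.app.flags.DEFINE_string('dataset_name', 'coco', 'Defined again: raises argparse.ArgumentError.')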

Finetuning

Hi,

I don't want to retrain the model from scratch; I want to fine-tune it on a different dataset and most likely a different number of classes. Is there any way to do so?
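A hedged sketch of one way to do this with slim (the scope name comes from the restore log elsewhere on this page and the checkpoint path from the README; not a verified recipe): restore only the resnet_v1_50 backbone from the pretrained checkpoint and let the class-dependent heads, whose shapes change with the number of classes, be re-initialized and trained on the new dataset.

    import tensorflow as tf
    slim = tf.contrib.slim

    # Assumes the Mask R-CNN graph (resnet_v1_50/... variables) has already been built,
    # as in train/train.py.
    variables_to_restore = slim.get_variables_to_restore(include=['resnet_v1_50'])
    if variables_to_restore:
        restorer = tf.train.Saver(variables_to_restore)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            restorer.restore(sess, './data/pretrained_models/resnet_v1_50.ckpt')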

"None Annotations" when running python download_and_convert_data.py

I extracted the coco files and annotations in data/coco in the following manner:
Running from FastMaskRCNN
ls data/coco

shows:
annotations test2014 test2015 train2014 val2014
I merged all annotations into one folder

Running from FastMaskRCNN:
python download_and_convert_data.py

delivers:

I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
loading annotations into memory...
Done (t=12.21s)
creating index...
index created!
train2014 has 82783 images
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GT 730
major: 3 minor: 5 memoryClockRate (GHz) 0.9015
pciBusID 0000:01:00.0
Total memory: 1.95GiB
Free memory: 1.44GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 730, pci bus id: 0000:01:00.0)
>> Converting image 1/82783 shard 0
>> Converting image 2/82783 shard 0
>> Converting image 3/82783 shard 0
>> Converting image 4/82783 shard 0
>> Converting image 5/82783 shard 0
>> Converting image 6/82783 shard 0
>> Converting image 7/82783 shard 0
>> Converting image 8/82783 shard 0
>> Converting image 9/82783 shard 0
>> Converting image 10/82783 shard 0
>> Converting image 11/82783 shard 0
>> Converting image 12/82783 shard 0
>> Converting image 13/82783 shard 0
>> Converting image 14/82783 shard 0
>> Converting image 15/82783 shard 0
>> Converting image 16/82783 shard 0
>> Converting image 17/82783 shard 0
>> Converting image 18/82783 shard 0
>> Converting image 19/82783 shard 0
>> Converting image 20/82783 shard 0
>> Converting image 21/82783 shard 0
>> Converting image 22/82783 shard 0
>> Converting image 23/82783 shard 0
>> Converting image 24/82783 shard 0
>> Converting image 25/82783 shard 0
>> Converting image 26/82783 shard 0
>> Converting image 27/82783 shard 0
>> Converting image 28/82783 shard 0
>> Converting image 29/82783 shard 0
>> Converting image 30/82783 shard 0
None Annotations data/coco/train2014/COCO_train2014_000000262184.jpg
Traceback (most recent call last):
  File "download_and_convert_data.py", line 35, in <module>
    tf.app.run()
  File "/home/schaal/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "download_and_convert_data.py", line 29, in main
    download_and_convert_coco.run(FLAGS.dataset_dir2)
  File "/home/schaal/Tensorflow/gits/FastMaskRCNN_org/libs/datasets/download_and_convert_coco.py", line 330, in run
    'train2014')
  File "/home/schaal/Tensorflow/gits/FastMaskRCNN_org/libs/datasets/download_and_convert_coco.py", line 290, in _add_to_tfrecord
    gt_boxes, masks, mask = _get_coco_masks(coco, img_id, height, width, img_name)
  File "/home/schaal/Tensorflow/gits/FastMaskRCNN_org/libs/datasets/download_and_convert_coco.py", line 221, in _get_coco_masks
    LOG('None Annotations %s' % img_name)
  File "/home/schaal/Tensorflow/gits/FastMaskRCNN_org/libs/logs/log.py", line 11, in LOG
    datefmt='%m/%d/%Y %I:%M:%S %p', format='%(asctime)s %(message)s')
  File "/usr/lib/python2.7/logging/__init__.py", line 1547, in basicConfig
    hdlr = FileHandler(filename, mode)
  File "/usr/lib/python2.7/logging/__init__.py", line 913, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib/python2.7/logging/__init__.py", line 943, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/home/schaal/Tensorflow/gits/FastMaskRCNN_org/train/maskrcnn/maskrcnn.log'

Are you suspecting some issues with the download of the coco zip files?
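A hedged reading of the traceback (a guess, not a confirmed diagnosis): the crash is not caused by the COCO download or the "None Annotations" message itself, but by libs/logs/log.py trying to open train/maskrcnn/maskrcnn.log in a directory that does not exist yet. Creating the log directory first avoids the IOError; a minimal sketch (LOG_FILE path taken from the traceback):

    import os
    import logging

    LOG_FILE = os.path.join('train', 'maskrcnn', 'maskrcnn.log')

    def LOG(message):
        log_dir = os.path.dirname(LOG_FILE)
        if not os.path.isdir(log_dir):
            os.makedirs(log_dir)
        logging.basicConfig(filename=LOG_FILE, level=logging.INFO,
                            datefmt='%m/%d/%Y %I:%M:%S %p', format='%(asctime)s %(message)s')
        logging.info(message)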

Version of tf_record?

Hi Charles,
I noticed that you changed the file libs/datasets/coco.py in recent commits.
Changes are below:
Original:
[123] gt_masks = tf.decode_raw(features['label/gt_masks'], tf.int32)
New:
[125] gt_masks = tf.decode_raw(features['label/gt_masks'], tf.uint8)
[126] gt_masks = tf.cast(gt_masks, tf.int32)

But now I can't serialize the record correctly, because the size of gt_masks is 4 times bigger than num_instances * ih * iw.

So I guess there must be something different between our platforms?
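A hedged numerical illustration of the 4x factor (my reading, not a verified diagnosis of the records): tf.decode_raw simply reinterprets the stored bytes, so the dtype passed to it must match the dtype the converter wrote. int32 data read back as uint8 yields exactly 4 times as many elements, which is the mismatch reported above, so the tf-records probably need to be regenerated with the converter version that matches the new coco.py.

    import numpy as np

    num_instances, ih, iw = 2, 3, 4
    masks_int32 = np.ones((num_instances, ih, iw), dtype=np.int32)

    as_bytes = masks_int32.tobytes()                   # what ends up in the tfrecord
    decoded_uint8 = np.frombuffer(as_bytes, dtype=np.uint8)
    decoded_int32 = np.frombuffer(as_bytes, dtype=np.int32)

    print(decoded_uint8.size, decoded_int32.size)      # 96 vs 24 (= num_instances * ih * iw)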

cpu only compile

When I compile ./libs as in step 4 of the How-to, there is an error: "EnvironmentError: The nvcc binary could not be located in your $PATH. Either add it to your path, or set $CUDAHOME". I know I don't have a CUDA card; can I build in CPU-only mode? Can anyone give some advice? Thanks.

clarification

Hi Charles,

Thank you for your interest and implementing Mask R-CNN!

I would like to clarify some descriptions in your Readme (which may suggest misunderstanding of our work):

"The original work involves two stages, a pyramid Faster-RCNN for object detection and another network (with the same structure) for instance level segmentation."

This is not true. In our original work, object detection and instance segmentation are in one stage. They are in parallel, and they are two tasks of a multi-task learning network.

I hope this will ease your effort of a correct reproduction.

What's the problem here when I run 'python train/train.py'?

2017-05-25 13:23:12.879749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-05-25 13:23:12.879754: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-05-25 13:23:12.879760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:06:00.0)
--restore_previous_if_exists is set, but failed to restore in ./output/mask_rcnn/ None
restoring resnet_v1_50/conv1/weights:0
restoring resnet_v1_50/conv1/BatchNorm/beta:0
restoring resnet_v1_50/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/weights:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/weights:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/weights:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/weights:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/weights:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/weights:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/weights:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/logits/weights:0
restoring resnet_v1_50/logits/biases:0
Restored 267(544) vars from ./data/pretrained_models/resnet_v1_50.ckpt
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/script_ops.py", line 82, in __call__
ret = func(*args)
File "train/../libs/boxes/anchor.py", line 23, in anchors_plane
anc = anchors(scales, ratios, base)
File "train/../libs/boxes/anchor.py", line 10, in anchors
return generate_anchors(base_size=base, scales=np.asarray(scales, np.int32), ratios=ratios)
File "train/../libs/boxes/anchor.py", line 38, in generate_anchors
for i in xrange(ratio_anchors.shape[0])])
NameError: name 'xrange' is not defined
2017-05-25 13:25:32.432683: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_1: see error log.
2017-05-25 13:25:32.432796: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
2017-05-25 13:25:32.456233: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.456326: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.456379: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.456397: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.456511: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.555291: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.555329: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.555346: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
2017-05-25 13:25:32.555377: W tensorflow/core/framework/op_kernel.cc:1152] Internal: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1039, in _do_call
return fn(*args)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1021, in _run_fn
status, run_metadata)
File "/opt/anaconda3/lib/python3.5/contextlib.py", line 66, in exit
next(self.gen)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train/train.py", line 222, in
train()
File "train/train.py", line 190, in train
batch_info )
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]

Caused by op 'pyramid_1/GenAnchors/PyFunc', defined at:
File "train/train.py", line 222, in
train()
File "train/train.py", line 136, in train
loss_weights=[0.2, 0.2, 1.0, 0.2, 1.0])
File "train/../libs/nets/pyramid_network.py", line 526, in build
is_training=is_training, gt_boxes=gt_boxes)
File "train/../libs/nets/pyramid_network.py", line 225, in build_heads
all_anchors = gen_all_anchors(height, width, stride, anchor_scales)
File "train/../libs/layers/wrapper.py", line 149, in gen_all_anchors
[tf.float64]
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/script_ops.py", line 189, in py_func
input=inp, token=token, Tout=Tout, name=name)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/gen_script_ops.py", line 40, in _py_func
name=name)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1228, in init
self._traceback = _extract_stack()

InternalError (see above for traceback): Failed to run py callback pyfunc_0: see error log.
[[Node: pyramid_1/GenAnchors/PyFunc = PyFunc[Tin=[DT_INT32, DT_INT32, DT_INT32, DT_INT32], Tout=[DT_DOUBLE], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](pyramid_1/strided_slice/_847, pyramid_1/strided_slice_1/_849, pyramid_1/GenAnchors/PyFunc/input_2, pyramid_1/GenAnchors/PyFunc/input_3)]]

My Python version is 3.5.2 and TensorFlow is 1.1.0. How can I solve it?
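
For context, "Failed to run py callback" means the Python function wrapped by tf.py_func raised an exception, and TensorFlow only surfaces it as this generic message. Below is a minimal, self-contained sketch of the same pattern; the anchor function is purely illustrative, not the repo's actual gen_all_anchors:

import numpy as np
import tensorflow as tf

def gen_anchors_np(height, width, stride, scale):
    # Illustrative anchor-center grid; the repo's real implementation differs.
    ys, xs = np.meshgrid(np.arange(height) * stride, np.arange(width) * stride)
    return np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)

h, w = tf.constant(32), tf.constant(32)
# tf.py_func runs gen_anchors_np inside the Python interpreter; if that function
# raises (for example a Python-2-only construct under Python 3.5), the graph only
# reports "Failed to run py callback pyfunc_0: see error log".
anchors = tf.py_func(gen_anchors_np,
                     [h, w, tf.constant(16), tf.constant(8)],
                     tf.float64)

with tf.Session() as sess:
    print(sess.run(anchors).shape)
# Calling gen_anchors_np(32, 32, 16, 8) directly, outside the graph, reproduces
# any Python-level error eagerly with a full traceback.

If the direct call works under Python 2 but fails under Python 3.5, the fix belongs in the wrapped NumPy function itself, not in the graph code.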

Inconsistent bounding box comparison in roi.py

I just found a bug that might be crucial to the training performance.

When the gt_boxes and rois are compared for overlapping bboxes in roi.py, the gt_boxes coordinates always fall outside the image_width and image_height, as the logs below show.

restored previous model ./output/mask_rcnn/coco_resnet50_model.ckpt-10000 from ./output/mask_rcnn/
gt_boxes, all_rois_boxes, image_height, image_width
[ 323.63354492 620.6907959 519.75689697 853. ]
[ 471.59197998 637.71154785 479. 639. ]
640
480
iter 0: image-id:0511854, time:2.907(sec), regular_loss: 0.045890, total-loss 0.3424(0.0201, 0.1604, 0.013625, 0.1309, 0.0174), instances: 6, batch:(32|143, 29|116, 29|29)
gt_boxes, all_rois_boxes, image_height, image_width
[ 84.42687988 156.66087341 743.39923096 437.59051514]
[ 579.50476074 501.31628418 639. 505. ]
506
640
gt_boxes, all_rois_boxes, image_height, image_width
[ 204.19114685 302.1496582 639.75 853. ]
[ 371.91244507 496.81021118 374. 499. ]
500
375
iter 1: image-id:0118642, time:0.503(sec), regular_loss: 0.045887, total-loss 0.1679(0.0218, 0.0755, 0.003574, 0.0446, 0.0224), instances: 3, batch:(2|34, 5|69, 5|5)
gt_boxes, all_rois_boxes, image_height, image_width
[ 777.29266357 414.71099854 959.25054932 584.87304688]
[ 635.87683105 421.24139404 639. 426. ]
427
640
iter 2: image-id:0369299, time:1.459(sec), regular_loss: 0.045884, total-loss 0.2647(0.0216, 0.2318, 0.000000, 0.0113, 0.0000), instances: 11, batch:(44|186, 0|64, 0|0)
gt_boxes, all_rois_boxes, image_height, image_width
[ 419.12524414 254.65420532 744.14953613 595.51403809]
[ 627.88568115 425.22784424 639. 427. ]
428
640
iter 3: image-id:0118644, time:0.584(sec), regular_loss: 0.045881, total-loss 0.2346(0.0158, 0.1414, 0.004386, 0.0530, 0.0200), instances: 2, batch:(2|34, 7|71, 7|7)
gt_boxes, all_rois_boxes, image_height, image_width
[ 510.07662964 413.48086548 579.43835449 428.32342529]
[ 635.88970947 373.29919434 639. 375. ]
376
640
iter 4: image-id:0191625, time:1.247(sec), regular_loss: 0.045879, total-loss 0.1773(0.0076, 0.1624, 0.000000, 0.0074, 0.0000), instances: 6, batch:(8|56, 0|64, 0|0)
gt_boxes, all_rois_boxes, image_height, image_width
[ 673.07409668 325.83309937 870.11651611 601.65960693]
[ 635.88812256 423.82373047 639. 428. ]
429
640
iter 5: image-id:0511866, time:0.901(sec), regular_loss: 0.045877, total-loss 0.1991(0.0065, 0.1242, 0.005354, 0.0449, 0.0182), instances: 5, batch:(20|104, 7|71, 7|7)
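
The gt_boxes printed above exceed the logged image_height/image_width, which suggests they are still in the original-image scale after the image has been resized. A hedged sketch of one possible fix; the function name and the (x1, y1, x2, y2) box layout are assumptions, not the repo's exact API:

import numpy as np

def rescale_and_clip_gt_boxes(gt_boxes, orig_h, orig_w, new_h, new_w):
    """Scale ground-truth boxes (x1, y1, x2, y2) from the original image size
    to the resized image, then clip them to the image bounds.

    This is a sketch of one possible fix for gt_boxes landing outside
    image_height/image_width; names and box layout are assumptions.
    """
    boxes = gt_boxes.astype(np.float32).copy()
    boxes[:, [0, 2]] *= float(new_w) / orig_w   # x coordinates
    boxes[:, [1, 3]] *= float(new_h) / orig_h   # y coordinates
    boxes[:, [0, 2]] = np.clip(boxes[:, [0, 2]], 0, new_w - 1)
    boxes[:, [1, 3]] = np.clip(boxes[:, [1, 3]], 0, new_h - 1)
    return boxes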

TypeError: long() argument must be a string or a number, not 'JpegImageFile'

When I run python download_and_convert_data.py:
>> Converting image 23751/82783 shard 11

Converting image 23801/82783 shard 11
Converting image 23851/82783 shard 11
None Annotations data/coco/train2014/COCO_train2014_000000167118.jpg
Traceback (most recent call last):
File "download_and_convert_data.py", line 36, in
tf.app.run()
File "/mnt/data1/daniel/tensorflow/_python_build/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "download_and_convert_data.py", line 30, in main
download_and_convert_coco.run(FLAGS.dataset_dir)
File "/mnt/data1/daniel/codes/FastMaskRCNN/libs/datasets/download_and_convert_coco.py", line 338, in run
'train2014')
File "/mnt/data1/daniel/codes/FastMaskRCNN/libs/datasets/download_and_convert_coco.py", line 299, in _add_to_tfrecord
img = img.astype(np.uint8)
TypeError: long() argument must be a string or a number, not 'JpegImageFile'

Why did this happen?
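
The traceback shows that img is still a PIL JpegImageFile when img.astype(np.uint8) is called, i.e. it was never converted to a NumPy array; the "None Annotations" line just before the crash suggests this happens on images without annotations, where a different code path loads the image. A minimal sketch of the kind of guard that would avoid it (this is an assumption about the fix, not the repo's actual code):

import numpy as np
from PIL import Image

def to_uint8_array(img):
    """Ensure the decoded image is an HxWx3 uint8 NumPy array.

    `img` may be a PIL JpegImageFile (e.g. for images with no annotations)
    or an already-decoded NumPy array.
    """
    if isinstance(img, Image.Image):
        img = np.array(img.convert('RGB'))   # PIL image -> NumPy, forcing 3 channels
    img = np.asarray(img)
    if img.ndim == 2:                        # grayscale -> replicate to 3 channels
        img = np.stack([img] * 3, axis=-1)
    return img.astype(np.uint8)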

About the initial learning rate of RPN in FPN

In the paper, the initial learning rate of RPN is 0.02. I've implemented the RPN following the instructions in the paper using Caffe. However, when I set the lr to 0.02, the network just cannot converge. And when I set the lr to 0.001, the learning process seems to be too slow to catch up with the performance of the original RPN.

So, how about this version in tf? Have you ever tried a larger lr than 0.001?
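
For what it's worth, one common remedy when a large initial learning rate like 0.02 diverges is a short linear warmup before switching to the full rate. A hedged sketch in TF; the step counts and rates below are illustrative, not values from the paper or this repo:

import tensorflow as tf

global_step = tf.Variable(0, trainable=False, name='global_step')
warmup_steps = 500
start_lr, base_lr = 0.001, 0.02

# Linearly ramp the learning rate from start_lr to base_lr over warmup_steps,
# then hold it at base_lr (until the usual step decays kick in).
progress = tf.cast(global_step, tf.float32) / float(warmup_steps)
warmup_lr = start_lr + (base_lr - start_lr) * progress
learning_rate = tf.cond(global_step < warmup_steps,
                        lambda: warmup_lr,
                        lambda: tf.constant(base_lr))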

FPN

Have you tried your FPN with Faster R-CNN?

Checkpoint saver not defined in "resnet50_test.py"

The saver does not seem to be defined before it is used:

NameError: name 'saver' is not defined

Adding these before "print ('Starting training...')" makes it good

# Create a saver
saver = tf.train.Saver(...variables...)

But I think "...variables..." should be "variables_to_train"(including params of heads) rather than "vars_to_restore"(Only Resnet params)

Just a suggestion,
Thanks!

Hello everyone! When I run python train.py (TF version 1.0.1), I meet this issue:

(tensorflow)he_su@ljz-Ubuntu:~/work/FastMaskRCNN/train$ python train.py
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Traceback (most recent call last):
File "train.py", line 222, in
train()
File "train.py", line 114, in train
is_training=True)
File "../libs/datasets/dataset_factory.py", line 16, in get_dataset
image, ih, iw, gt_boxes, gt_masks, num_instances, img_id = coco.read(tfrecords)
File "../libs/datasets/coco.py", line 98, in read
tfrecords_filename, num_epochs=100)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 211, in string_input_producer
raise ValueError(not_null_err)
ValueError: string_input_producer requires a non-null input tensor
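
That ValueError usually just means the list of tfrecord files handed to tf.train.string_input_producer is empty, i.e. the records were never generated or the glob pattern does not match. A hedged sanity-check sketch; the glob pattern below is an assumption based on the tfrecord path mentioned elsewhere in these issues:

import glob
import tensorflow as tf

# Adjust the pattern to wherever your records actually live.
tfrecords = glob.glob('./data/coco/record/coco_train2014_*.tfrecord')
assert len(tfrecords) > 0, (
    'No tfrecords found; run python download_and_convert_data.py first')

filename_queue = tf.train.string_input_producer(tfrecords, num_epochs=100)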

error when download_and_convert_coco

Traceback (most recent call last):
File "download_and_convert_data.py", line 9, in
from libs.datasets import download_and_convert_coco
File "/home/liuming/python_projects/FastMaskRCNN/libs/datasets/download_and_convert_coco.py", line 17, in
from libs.datasets.pycocotools.coco import COCO
File "/home/liuming/python_projects/FastMaskRCNN/libs/datasets/pycocotools/coco.py", line 55, in
from libs.datasets.pycocotools import mask as maskUtils # change importing
File "/home/liuming/python_projects/FastMaskRCNN/libs/datasets/pycocotools/mask.py", line 3, in
import libs.datasets.pycocotools._mask as _mask
ImportError: No module named _mask

About the 0 bytes bug

I checked the mask_targets' shape; it seems that after being processed by _filter_negative_samples, its first dimension becomes 0, which corresponds to the problem reported in the community.
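
If that is the case, one way to make the pipeline robust (not a root-cause fix) is to skip the mask loss whenever no positive samples survive the filtering. A hedged sketch; the tensor names are assumptions, not the repo's actual variables:

import tensorflow as tf

def safe_mask_loss(mask_logits, mask_targets):
    # If _filter_negative_samples leaves zero rows, downstream ops receive a tensor
    # with a 0-sized first dimension (and may even try to allocate 0 bytes),
    # so return a constant zero loss in that case instead.
    num_rois = tf.shape(mask_targets)[0]
    return tf.cond(
        num_rois > 0,
        lambda: tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(labels=mask_targets,
                                                     logits=mask_logits)),
        lambda: tf.constant(0.0))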

Number of BaseAnchors equals 15?

In the FPN paper, there are 15 anchors in total across the pyramid. But in this code, there are 15 anchors in each layer of the pyramid? Do I understand this correctly?
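
For reference, in the FPN paper each pyramid level gets a single anchor scale and three aspect ratios, so the 15 anchor shapes are spread across the five levels rather than repeated at every level. A quick back-of-the-envelope check:

levels = 5            # P2 ... P6
scales_per_level = 1  # one anchor area per level in the FPN paper
ratios = 3            # {1:2, 1:1, 2:1}

anchors_paper_total = levels * scales_per_level * ratios   # 15 across the whole pyramid
anchors_if_15_per_level = 15 * levels                       # 75 shapes if every level had all 15
print(anchors_paper_total, anchors_if_15_per_level)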

Invalid argument: Could not parse example input

When the training step reaches about 1140, there is a parsing problem which seems to be related to a .tfrecord file.

Error type: UnicodeDecodeError: 'utf8' codec can't decode byte 0xaf in position 40: invalid start byte

The corresponding tfrecord file is ./data/coco/record/coco_train2014_00000-of-00040.tfrecord

Do you encounter the same problem? @CharlesShang

W tensorflow/core/kernels/queue_base.cc:294] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
  File "test/resnet50_test.py", line 137, in <module>
    run_metadata=run_metadata)
  File "/home/santis/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/santis/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/santis/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/santis/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1022, in _do_call
    return fn(*args)
  File "/home/santis/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1004, in _run_fn
    status, run_metadata)
  File "/home/santis/anaconda2/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/santis/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 465, in raise_exception_on_not_ok_status
    compat.as_text(pywrap_tensorflow.TF_Message(status)),
  File "/home/santis/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/compat.py", line 84, in as_text
    return bytes_or_text.decode(encoding)
  File "/home/santis/anaconda2/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xaf in position 40: invalid start byte

When I run train.py: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Input to reshape is a tensor with 1 values, but the requested shape has 0

When I run train.py, an error occurs as follows:

iter 143: image-id:0347794, time:0.350(sec), regular_loss: 0.202597, total-loss 0.7763(0.0430, 0.3557, 0.000000, 0.3776, 0.0000), instances: 2, batch:(2|34, 0|64, 0|0)
iter 144: image-id:0478967, time:0.364(sec), regular_loss: 0.202596, total-loss 1.5152(0.0312, 0.3461, 0.007214, 0.4630, 0.6678), instances: 6, batch:(18|88, 13|77, 13|13)
iter 145: image-id:0478903, time:0.338(sec), regular_loss: 0.202596, total-loss 1.4753(0.0226, 0.3851, 0.002755, 0.3975, 0.6674), instances: 1, batch:(1|33, 4|68, 4|4)
2017-05-23 14:28:53.840796: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Input to reshape is a tensor with 1 values, but the requested shape has 0
[[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](DecodeRaw_1/_555, Reshape/shape)]]
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, Input to reshape is a tensor with 1 values, but the requested shape has 0
[[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](DecodeRaw_1/_555, Reshape/shape)]]
[[Node: ReverseV2/_599 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_277_ReverseV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]]
iter 146: image-id:0522233, time:0.361(sec), regular_loss: 0.202596, total-loss 0.7871(0.0609, 0.3661, 0.000000, 0.3601, 0.0000), instances: 5, batch:(22|101, 0|64, 0|0)
Traceback (most recent call last):
File "/home/JKe/sunshihua/Program/pycharm/helpers/pydev/pydev_run_in_console.py", line 78, in
globals = run_file(file, None, None)
File "/home/JKe/sunshihua/Program/pycharm/helpers/pydev/pydev_run_in_console.py", line 35, in run_file
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/JKe/sunshihua/FastMaskRCNN-master/train/train.py", line 225, in
train()
File "/home/JKe/sunshihua/FastMaskRCNN-master/train/train.py", line 221, in train
coord.join(threads)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/queue_runner_impl.py", line 237, in _run
enqueue_callable()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1063, in _single_operation_run
target_list_as_strings, status, None)
File "/usr/lib/python2.7/contextlib.py", line 24, in exit
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1 values, but the requested shape has 0
[[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](DecodeRaw_1/_555, Reshape/shape)]]
[[Node: ReverseV2/_599 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_277_ReverseV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]]
Process finished with exit code 1

Does anyone have any idea?

Missing licence

Under what license is the code distributed?

I propose creating a LICENSE file specifying it (probably MIT?).

Thanks for the implementation - great work!

Some inconsistent details compared with FPN paper

Hi,

I've found that there are several details which are inconsistent with the original FPN paper.

  1. The authors claim that the RPN head is shared across all levels of the feature pyramid (see the sketch at the end of this issue).
  2. The RoIs are cropped from a particular feature level based on their height and width.

Did I misread your code, or did I misread the paper? Or did you just decide not to implement it this way for now?

Best,
Jason
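
On point 1, for reference, a minimal sketch of how a single RPN head could be shared across pyramid levels with variable-scope reuse; the layer sizes and names are illustrative, not this repo's code:

import tensorflow as tf
slim = tf.contrib.slim

def shared_rpn_head(feature, num_anchors, reuse):
    # The same 'rpn_head' variables are reused for every pyramid level.
    with tf.variable_scope('rpn_head', reuse=reuse):
        net = slim.conv2d(feature, 256, [3, 3], scope='conv3x3')
        cls = slim.conv2d(net, num_anchors * 2, [1, 1], activation_fn=None, scope='cls')
        box = slim.conv2d(net, num_anchors * 4, [1, 1], activation_fn=None, scope='box')
    return cls, box

# P2 ... P6 stand-ins; in the real network these come from the FPN.
pyramid = [tf.placeholder(tf.float32, [1, None, None, 256]) for _ in range(5)]
heads = [shared_rpn_head(p, num_anchors=3, reuse=(i > 0)) for i, p in enumerate(pyramid)]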

When I run make in './libs/', there is a problem as follows.

''''
Traceback (most recent call last):
File "setup.py", line 55, in
CUDA = locate_cuda()
File "setup.py", line 50, in locate_cuda
for k, v in cudaconfig.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'
''''
My CUDA version is 7.0.
Is this a problem with my CUDA?
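
This is a Python 3 incompatibility rather than a CUDA problem: dict.iteritems() only exists in Python 2. A minimal sketch of the fix inside setup.py's locate_cuda(); the loop body is assumed to follow the usual py-faster-rcnn-style setup.py:

import os

# setup.py, inside locate_cuda(); `cudaconfig` is the dict built earlier in that
# function. items() works on both Python 2 and 3, whereas iteritems() is Python-2 only.
for k, v in cudaconfig.items():
    if not os.path.exists(v):
        raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))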

ROIAlign

It's a nice job, and I wonder where the ROIAlign layer is, or is it not implemented yet?
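
For reference, a common way to approximate RoIAlign in plain TensorFlow is tf.image.crop_and_resize, which samples the feature map bilinearly at continuous box coordinates and so avoids RoIPool's quantization. A hedged sketch, not this repo's implementation; the function name and box layout are assumptions:

import tensorflow as tf

def roi_align(features, boxes, box_indices, crop_size=7):
    """Approximate RoIAlign via bilinear crop-and-resize.

    features:     [N, H, W, C] feature map
    boxes:        [num_boxes, 4] as (y1, x1, y2, x2), normalized to [0, 1]
    box_indices:  [num_boxes] batch index of each box
    """
    # crop_and_resize interpolates bilinearly at sub-pixel box coordinates.
    crops = tf.image.crop_and_resize(
        features, boxes, box_indices, crop_size=[crop_size * 2, crop_size * 2])
    # Average-pool 2x2 bins down to the final crop_size, mimicking RoIAlign's
    # multi-point sampling per output cell.
    return tf.nn.avg_pool(crops, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')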

pycocotools/make

I cloned your repo and started running it as you describe in the instructions. Whenever I run the make command in the pycocotools directory, the following error is raised:

python setup.py build_ext --inplace
Warning: Extension name '_mask' does not match fully qualified name 'libs.datasets.pycocotools._mask' of '_mask.pyx'
running build_ext
rm -rf build

What is wrong with my setup?

Thanks.
Best.

EigenAllocator for GPU ran out of memory when allocating 0

Has anyone encountered this error?

Starting training...
: I tensorflow/stream_executor/dso_loader.cc:139] successfully opened CUDA library libcupti.so.8.0 locally
: E tensorflow/core/common_runtime/bfc_allocator.cc:244] tried to allocate 0 bytes
: W tensorflow/core/common_runtime/allocator_retry.cc:32] Request to allocate 0 bytes
: F tensorflow/core/common_runtime/gpu/gpu_device.cc:103] EigenAllocator for GPU ran out of memory when allocating 0. See error logs for more detailed info.
Aborted (core dumped)

I found the error after running python test/resnet50_test.py in a Docker container with the following setup.

platform : Ubuntu 16.04.2 LTS
Cuda: 8.0, V8.0.61
nvidia driver version : 367.48
tf version : 1.0.1
GPU : GTX1080
