
cihp_pgn's Issues

Terrible output

Hi,
I am running test_pgn.py on the example images in the dataset folder of this repo, and it produces very poor results. Where is the problem? Note: I am just using the given pre-trained weights.
The first one is input, then next two are outputs.

[attached images: pgan1 (input), pgan2 and pgan3 (outputs)]

purpose of tail_list_rev in test_pgn.py?

Hi, can anyone explain this code in test_pgn.py? I'm really confused. Why pick output channels 14-19 in this particular order and then reverse them?

# Channels 14-19 are the left/right paired parts in the CIHP label order
# (left-arm/right-arm, left-leg/right-leg, left-shoe/right-shoe).
for xx in xrange(14):
    tail_list_rev[xx] = tail_list[xx]
tail_list_rev[14] = tail_list[15]
tail_list_rev[15] = tail_list[14]
tail_list_rev[16] = tail_list[17]
tail_list_rev[17] = tail_list[16]
tail_list_rev[18] = tail_list[19]
tail_list_rev[19] = tail_list[18]
tail_output_rev = tf.stack(tail_list_rev, axis=2)
# These scores come from the horizontally flipped input, so the left/right
# classes are swapped back and the map is mirrored along the width axis
# to line up with the prediction on the original image.
tail_output_rev = tf.reverse(tail_output_rev, tf.stack([1]))

Thanks

The inference time

Thank you for this great work! Is it normal that inference takes a very long time for an image? It takes more than 7 minutes, or even much more.

And if I am right, the inference step doesn't use the GPU, right?

OS influence with the code (Windows/Ubuntu)

Hi, thanks for sharing your work. I have run this project on my machine with Windows 10. It fails when a CUDA-capable GPU is selected, but runs without any issues on the CPU.

On Ubuntu 18.04.4 LTS, I got stuck with a "Segmentation fault (core dumped)" error. I have installed all the necessary packages. Please help me clear this error and get it running on Ubuntu.
Thanks in advance.

What are the output node names?

I want to convert the PGN model to a .pb model file, but I need to know its output node names. How can I get them?
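A sketch of one way to inspect the node names, assuming the checkpoint ships with a .meta graph file (the path below is illustrative, adjust it to your checkpoint):

import tensorflow as tf

# Load the saved graph definition (path is an example, not the repo's exact layout).
tf.train.import_meta_graph('./checkpoint/CIHP_pgn/model.ckpt.meta')
graph = tf.get_default_graph()

# Print every operation name; the output nodes are usually the last ops
# that feed the parsing and edge results.
for op in graph.get_operations():
    print(op.name)

Once the output names are known, tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_node_names) inside a session with the restored weights gives a frozen GraphDef that can be serialized to a .pb file.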

Error: undefined name 'basestring'

Hello! I was lucky to see this article in the news and found the source code here. Sorry to disturb you, but I am running into a problem while debugging the program. Have you ever had this problem, and if so, how should it be solved? Thank you.

The problem is:

File "C:\Users\Administrator\Desktop\CIHP_PGN-master\kaffe\tensorflow\network.py", line 78, in feed
    if isinstance(fed_layer, basestring):

NameError: name 'basestring' is not defined.
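basestring exists only in Python 2. A minimal fix if you are running Python 3 (my own workaround, not an official patch) is to change the check to isinstance(fed_layer, str), or to add a small compatibility shim near the top of kaffe/tensorflow/network.py:

# Python 2/3 compatibility shim: on Python 3, treat str as basestring.
try:
    basestring
except NameError:
    basestring = str

# The original check then works unchanged:
# if isinstance(fed_layer, basestring):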

How to get instance level human id during inference?

Hi, thanks for sharing this great work!

I notice that test_pgn.py generates the parsing maps and edge maps. I wonder how these can be used to generate instance-level human ids like the ones in the datasets you have released.

Thanks in advance!

License for CIHP dataset

Hello, thanks for the amazing work. I was just wondering whether the license for the CIHP dataset is the same as for the LIP dataset.

Image, label, edge preparation

I have 7 pictures that I need to create segmentations for.
I put them in the image folder, but I don't know how to prepare the rest of the files (labels, ...).
Please let me know how to prepare my images.
Thank you

The code deviates from the paper

In your code, the refinement branch combines the features after the PPM with the remapped features, while in the paper you combine the features before the PPM with the remapped features. Which features do you really use?

How to use CIHP datasets

instance_level_human_parsing
TrainVal_images

How do you generate train_rev.txt?

Can you explain in detail how to train with this dataset?
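A guess about train_rev.txt, based only on the tail_list_rev logic in test_pgn.py (so please verify it against the data reader in train_pgn.py): the "rev" entries appear to be horizontally flipped label maps in which the left/right part ids are swapped. A rough sketch of producing such flipped labels with OpenCV, where the folder names are assumptions:

import os
import cv2

# Left/right paired CIHP ids, mirroring the swap done in test_pgn.py.
SWAP = {14: 15, 15: 14, 16: 17, 17: 16, 18: 19, 19: 18}

src_dir, dst_dir = 'labels', 'labels_rev'   # assumed folder names
os.makedirs(dst_dir, exist_ok=True)
for name in os.listdir(src_dir):
    label = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_GRAYSCALE)
    flipped = label[:, ::-1].copy()          # horizontal flip
    swapped = flipped.copy()
    for a, b in SWAP.items():
        swapped[flipped == a] = b            # swap left/right class ids
    cv2.imwrite(os.path.join(dst_dir, name), swapped)

The list file would then pair each image with its reversed label, but the exact line format should be checked against train_pgn.py.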

class definition

Hi, it works well for me, but could you tell me about the class definitions of your pretrained model? In utils.py you only comment seven classes, and it is not the same as what is mentioned on http://www.sysu-hcp.net. So please tell us about the rest of the classes. Thanks.

@LogWell found a solution for how to prepare the datasets. You can use the following file structure:

datasets/CIHP/images/0002190.png
datasets/CIHP/list/img_list.txt
datasets/CIHP/images/tool/logwell_script.m
datasets/CIHP/images/edges
datasets/CIHP/images/labels

where
datasets/CIHP/images/0002190.png is the source image,

datasets/CIHP/list/img_list.txt contains the following data:
0002190.png

and datasets/CIHP/images/tool/logwell_script.m is a MATLAB script that prepares the edges and labels:

clear;
close all;
fclose all;
%%
imglist = '../list/img_list.txt';  % 00000.png
list = textread(imglist, '%s');

for i = 1:length(list);
    imname = list{i};
    instance_map = imread(fullfile('../images', imname));
    instance_contour = uint8(imgradient(rgb2gray(instance_map)) > 0);
    % write the same binary contour into both edges/ and labels/ as placeholders
    imwrite(instance_contour, fullfile('../edges', imname));
    imwrite(instance_contour, fullfile('../labels', imname));
    
end

Just run this script with a command like:
/opt/MATLAB/R2018b/bin/matlab -nodisplay -nojvm -nosplash -nodesktop -r "try, run('tool/logwell_script.m'), catch, exit(1), end, exit(0);"

and after that you can run
python test_pgn.py
to get the segmented images.

Originally posted by @rcrvano in #26 (comment)
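For anyone without MATLAB, here is a rough Python equivalent of the script above (my own sketch, not from the repo), assuming OpenCV and NumPy are installed. It writes the same gradient-based contour into both edges/ and labels/ as placeholders so test_pgn.py can run:

import os
import cv2
import numpy as np

with open('datasets/CIHP/list/img_list.txt') as f:
    names = [line.strip() for line in f if line.strip()]

for name in names:
    img = cv2.imread(os.path.join('datasets/CIHP/images', name))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    # binary {0, 1} contour, like imgradient(...) > 0 in the MATLAB script
    contour = (np.hypot(gx, gy) > 0).astype(np.uint8)
    cv2.imwrite(os.path.join('datasets/CIHP/edges', name), contour)
    cv2.imwrite(os.path.join('datasets/CIHP/labels', name), contour)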

The result on PASCAL is poor

Hi, thanks for your work. When I train PGN on PASCAL, I get a poor result: the mIoU is 0.38. Besides, in your code you use tf.reduce_sum to calculate the edge loss. Why not tf.reduce_mean? This makes the edge loss very large, above 10000. But when I use tf.reduce_mean, I get an even worse result. Why?
Could you explain the reason and share how you trained on PASCAL? Thank you.

There is no PASCAL pre-trained model

Hi, I know that you load a PASCAL-VOC pre-trained model in your thesis, but you only provide the 'CIHP_PGN' model. Could you provide a TensorFlow version of the PASCAL pre-trained model? I don't know whether your PASCAL model is converted from Caffe. If so, can you provide the conversion method?

How to adjust parameters?

Hi, Professor. I'm sorry to disturb you. There are some problems when running the code. Would you mind telling me the parameter settings of the convolution layer? Please take a look:

[screenshot of the layer definition]

What do the latter two parameters mean? conv(7,7,64,2,2)
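For what it's worth, in the caffe-tensorflow style wrapper that kaffe/tensorflow/network.py follows, the conv arguments are usually ordered as kernel height, kernel width, output channels, stride height, stride width (please verify against the layer decorator in that file), so:

conv(7, 7, 64, 2, 2)   # 7x7 kernel, 64 output channels, stride 2 in height and 2 in width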

INST_PART_GT_DIR = './Instance_part_val'

Hello, thanks for the code.

First: how do you get the data in the "Instance_part_val" directory? In the code, you use
both the data in that directory and cihp_instance_part_maps for the evaluation scripts in $HOME/evaluation.

Second: I would like to train the model on PASCAL, but PASCAL does not have labels like those in CIHP. How do you
run train_pgn.py to train PGN.model on the PASCAL dataset?

Thank you, and I look forward to your reply!

Edge level has no edges in example

Why do the edge maps in the examples not show any edges?
I am training on my own dataset with edges obtained from cv2.Canny(), and my losses are going to NaN.
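One thing worth checking (an assumption on my part, not something confirmed by the authors): cv2.Canny returns values in {0, 255}, while a binary edge label in {0, 1} is what a sigmoid cross-entropy edge loss normally expects, and feeding 255s can easily blow the loss up. A minimal normalization sketch:

import cv2
import numpy as np

gray = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
edge = cv2.Canny(gray, 100, 200)            # values are 0 or 255
edge = (edge > 0).astype(np.uint8)          # convert to binary {0, 1} labels
cv2.imwrite('edge_label.png', edge)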

Out of memory while testing

Hey,
while I was running test_pgn.py on the sample images provided, I sometimes get warnings that memory cannot be allocated, but it still runs, just slowly.
Can I change anything, such as the batch size, while running test_pgn.py?

I use a GTX 1050 GPU.
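Not an official fix, but two things that often help on small GPUs: let TensorFlow grow GPU memory on demand instead of pre-allocating it, and/or resize large inputs before inference. A sketch of the session config in TF 1.x:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True      # allocate GPU memory as needed
sess = tf.Session(config=config)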

batch size for inference

Can I send a batch of images as the input to the network for inference? At inference time it only takes a 3-channel image, not a batch.

About the time

I found that the running time was a little bit long. The image I used was 1080x1080. Is there any way to accelerate it? Thank you.

Some question?

Is there a test file that lets us get results on our own data, without needing the ground truth?

What are the environment requirements for this project?

When I run this project, I encounter a "Segmentation fault (core dumped)", so I guess there is a problem with my environment.
My environment is as follows:
system: Ubuntu 14.04
CUDA: 9.0
TensorFlow: 1.11.0
Is there any problem with my environment? Can you provide the versions you use?

How to predict for just one image?

Hi,
I am trying to see the result on an in-the-wild image, but there seems to be no setting for that.
Is there a script which takes an image and outputs the edge mask? I tried test_pgn.py, but I think it is not meant for single-image prediction.

Thanks in advance...

What do the colors represent?

I'm not quite sure which labels these colors represent. Where can I get a description of this?

label_colours = [(0,0,0)
                , (128,0,0), (255,0,0), (0,85,0), (170,0,51), (255,85,0), (0,0,85), (0,119,221), (85,85,0), (0,85,85), (85,51,0), (52,86,128), (0,128,0)
                , (0,0,255), (51,170,221), (0,255,255), (85,255,170), (170,255,85), (255,255,0), (255,170,0)]
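The 20 colours line up with the 20 CIHP categories. As a reference, the ordering usually cited for CIHP is below, but treat it as my best understanding and verify it against the official dataset documentation (index 10 in particular is listed differently in some sources):

# Assumed CIHP category order, index-aligned with label_colours above.
CIHP_CLASSES = ['Background', 'Hat', 'Hair', 'Glove', 'Sunglasses',
                'Upper-clothes', 'Dress', 'Coat', 'Socks', 'Pants',
                'Torso-skin', 'Scarf', 'Skirt', 'Face', 'Left-arm',
                'Right-arm', 'Left-leg', 'Right-leg', 'Left-shoe', 'Right-shoe']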

Getting bad segmentation results in output folder even after running test_pgn.py

I downloaded the pre-trained checkpoints and stored them inside the checkpoint folder. After that I ran test_pgn.py, and it created an output folder with two subfolders, cihp_edge_maps and cihp_parsing_maps. The attached 0002190_vis.png was created inside cihp_parsing_maps, and it is clearly not the output I was expecting.
[attached image: 0002190_vis.png]
What went wrong? What have I missed?
I have completed all the steps mentioned in the README file.

  • Download the pre-trained model and store in $HOME/checkpoint.
  • Prepare the images and store in $HOME/datasets.
  • Run test_pgn.py.
  • The results are saved in $HOME/output

How does this work compare to ATEN?

Hi, I'm looking at both your work and ATEN. The conclusions from both your papers are very similar.

Your paper

In this paper, we presented a novel detection-free Part Grouping Network to investigate instance-level human parsing, which is a more pioneering and challenging work in analyzing human in the wild. To push the research boundary of human parsing to match real-world scenarios much better, we further introduce a new large-scale (...)
Experimental results on PASCAL-Person-Part [6] and our CIHP dataset demonstrate the superiority of our proposed approach, which surpasses previous methods for both semantic part segmentation and edge detection tasks, and achieves state-of-the-art performance for instance-level human parsing.

ATEN Paper

In this work, we investigate video instance-level human parsing that is a more pioneering and realistic task in analyzing human in the wild. To fill the blank of video human parsing data resources, we further introduce a large-scale (...)
Experimental results on DAVIS [36] and our VIP dataset demonstrate the superiority of our proposed approach, which achieves state-of-the-art performance on both video instance-level human parsing and video segmentation tasks.

I'm wondering: which produces better accuracy, this work or ATEN? Considering that both claim to be "more pioneering", to "demonstrate the superiority of our proposed approach", and to "achieve state-of-the-art", can you help explain the differences? I'm not clear on which I should use. Thanks!

Instance partition process

Hi,
I have downloaded your code and read it carefully. I really enjoy and admire your work. The only thing is that I couldn't find the instance partition process that you describe in the paper. Could you tell me in which of the scripts, and in what part, the instance partition process is done?

Thank you

error: imgradientxy: IMG must be a gray-scale image

Hello, thanks for the code and the paper.

When I run generate_instance_part.m (after running generate_instance_human for all images) on some of the images, I get the error:

error: imgradientxy: IMG must be a gray-scale image

Interestingly, the order of the images in val_id.txt matters. For example, an image x may throw an error when it is 5th in val_id.txt, but be processed successfully when it is 3rd, and vice versa.

Which image throws the error, and at what position, seems random, but the position never exceeds 5.

I use Octave 4.2.2 with Image package 2.8.0 on Ubuntu 16.04.

Did you encounter any similar problem?

How to prepare datasets/CIHP/labels and datasets/CIHP/edges

Hello!

Can you please explain how to prepare the files which should be located in these directories:
datasets/CIHP/labels and datasets/CIHP/edges?

I've prepared the image file datasets/CIHP/images/image.jpg,

a file datasets/CIHP/list/val_id.txt with the content:

image

and a file datasets/CIHP/list/val.txt with the content:

images/image.jpg /labels/image.png

But I don't understand how I can generate datasets/CIHP/labels/image.png
and datasets/CIHP/edges/image.png.

I need them for test_pgn.py.
You answered here that any PNG image can be placed in this directory, but if I just convert image.jpg to image.png and put the converted file into the labels and edges directories, the execution of test_pgn.py fails with the error:

InvalidArgumentError (see above for traceback): assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:] [x (mean_iou/confusion_matrix/control_dependency:0) = ] [255 255 255...] [y (mean_iou/ToInt64_1:0) = ] [20]

Please explain how to prepare the image files to start the segmentation process,
because your README.md doesn't explain how to do it, just "Prepare the images and store in $HOME/datasets". What steps need to be done to prepare the images in the datasets directory?

Inference

  • Download the pre-trained model and store in $HOME/checkpoint.
  • Prepare the images and store in $HOME/datasets.
  • Run test_pgn.py.
  • The results are saved in $HOME/output
  • Evaluation scripts are in $HOME/evaluation. Copy the ground-truth files (in the Instance_ids folder) into $HOME/evaluation/Instance_part_val before you run the script.
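A workaround sketch for the "labels out of bound" error quoted above (my own hack, not an official recipe): the label PNGs fed to the mIoU op must be single-channel with values below 20, which is not the case if you simply copy an RGB image. Writing all-zero single-channel PNGs of the same size lets test_pgn.py run when you only care about the predicted maps and not the evaluation numbers:

import cv2
import numpy as np

img = cv2.imread('datasets/CIHP/images/image.jpg')
placeholder = np.zeros(img.shape[:2], dtype=np.uint8)   # values stay inside [0, 19]
cv2.imwrite('datasets/CIHP/labels/image.png', placeholder)
cv2.imwrite('datasets/CIHP/edges/image.png', placeholder)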

How to use the CIHP Human or Human-ids annotations to train semantic segmentation?

Hi, thanks for sharing the CIHP dataset; it is the best human dataset I have seen.
Currently, I want to use the data to train semantic segmentation, but the CIHP Human (and Human-ids) annotations have many colors, with one color representing one class, and I am not clear about the number of classes. So can you give the number of colors in the CIHP training annotations and the value of each color? Thank you.

I have an idea: can I use OpenCV to change the pixels in the CIHP human images to 255 (or some value > 0) wherever the value != 0,
so that the image only has (0,0,0) for the background and (255,255,255) for humans?

Looking forward to your feedback, thanks again.
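That binarization is straightforward with OpenCV. A sketch, assuming the "*_ids" annotation is a single-channel PNG of class indices (the file name below is just an example):

import cv2
import numpy as np

label = cv2.imread('Human_ids/0002190.png', cv2.IMREAD_GRAYSCALE)
mask = np.where(label > 0, 255, 0).astype(np.uint8)   # 0 = background, 255 = any person pixel
cv2.imwrite('human_mask.png', mask)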

Support tf2.x?

Does PGN support TensorFlow 2.x? Thanks for your reply!

License for the trained model

Hello there, just wondering what the license is for your trained model, i.e., am I free to use the trained model for any purpose? Thank you.

Thanks!

Not working for 1080x1080 images

The pre-trained model works for smaller images (300x200), but with 1080x1080 images it gives the error:
OP_REQUIRES failed at concat_op.cc:153 : Resource exhausted: OOM when allocating tensor with shape[2,2560,169,169] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

Please suggest a solution, as I need to use 1080x1080 images for 3D reconstruction. I am working with 4 GPUs and enough RAM for the process.
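If downscaling is acceptable for your pipeline, one common workaround (my suggestion, not the authors') is to resize the image before inference and scale the predicted map back up with nearest-neighbour interpolation so the class ids stay intact:

import cv2
import numpy as np

img = cv2.imread('input_1080.jpg')                    # 1080x1080 source
small = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)

# run_pgn is a stand-in for whatever produces the 512x512 parsing id map;
# it is stubbed out here so the snippet runs end to end.
def run_pgn(image):
    return np.zeros(image.shape[:2], dtype=np.uint8)

parsing = run_pgn(small)
parsing_full = cv2.resize(parsing, (img.shape[1], img.shape[0]),
                          interpolation=cv2.INTER_NEAREST)  # nearest keeps discrete labels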

NameError: name 'xrange' is not defined

Hi, I am facing this problem while running test_pgn.py

File "test_pgn.py", line 211, in
main()
File "test_pgn.py", line 112, in main
for xx in xrange(14):
NameError: name 'xrange' is not defined

Note: I have made two changes to resolve other issues mentioned in the issues section by someone else:
#13 and #36
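xrange is Python 2 only. If you are running the code on Python 3, a minimal workaround (my own, not an official patch) is either to change the loop to range(14) or to add a shim near the top of test_pgn.py:

# Python 2/3 compatibility shim: on Python 3, map xrange to range.
try:
    xrange
except NameError:
    xrange = range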

Pascal dataset downloading issue

Hi, thanks for sharing the code. You have given a link to the PASCAL VOC part-based dataset, but that link is currently not working. How can I retrieve the data?

Thanks in advance

how to adjust the parameters

Dear professor:
Sorry to bother you again, and thanks for your code. I'm afraid I'll have to take up some of your time again. I would like to ask how to set the parameters at the beginning. I don't know how to set LIST_PATH when I am using the CIHP human parsing dataset, as shown in the following picture.
[screenshot of the parameter settings]
How do I set it up? Looking forward to your reply. Thank you very much!
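For orientation only: based on the directory layout and list formats discussed in the other issues above, the list-related paths usually end up looking something like the example below; the exact variable names and list-file formats should be checked against test_pgn.py / train_pgn.py:

DATA_DIR = './datasets/CIHP'                        # images/, labels/, edges/ live under here
LIST_PATH = './datasets/CIHP/list/val.txt'          # lines like: images/0002190.jpg /labels/0002190.png
DATA_ID_LIST = './datasets/CIHP/list/val_id.txt'    # lines like: 0002190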
