
panopticapi's Introduction

COCO 2018 Panoptic Segmentation Task API (Beta version)

This API is an experimental version of the COCO 2018 Panoptic Segmentation Task API.

To install panopticapi, run:

pip install git+https://github.com/cocodataset/panopticapi.git

Summary

Evaluation script

panopticapi/evaluation.py calculates PQ metrics. For more information about the script usage: python -m panopticapi.evaluation --help

Format converters

COCO panoptic segmentation is stored in a new format. Unlike the COCO detection format, which stores each segment independently, the COCO panoptic format stores all segmentations for an image in a single PNG file. This compact representation naturally maintains the non-overlapping property of panoptic segmentation.
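To make the PNG representation concrete: each pixel's RGB triple encodes a unique segment id. A minimal sketch of the encoding, mirroring the rgb2id/id2rgb helpers in panopticapi.utils:

```python
def rgb2id(color):
    """Convert an (R, G, B) pixel into a panoptic segment id."""
    r, g, b = color
    return r + 256 * g + 256 ** 2 * b

def id2rgb(segment_id):
    """Convert a segment id back into its (R, G, B) pixel."""
    rgb = []
    for _ in range(3):
        rgb.append(segment_id % 256)
        segment_id //= 256
    return tuple(rgb)
```

Because every segment has a distinct id and each pixel holds exactly one id, two segments can never claim the same pixel, which is how the PNG enforces the non-overlapping property.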

We provide several converters for COCO panoptic format. Full description and usage examples are available here.

Semantic and instance segmentation heuristic combination

We provide a simple script that heuristically combines semantic and instance segmentation predictions into panoptic segmentation prediction.

The merging logic of the script is described in the panoptic segmentation paper. In addition, the script can filter out predicted stuff segments whose area is below the threshold defined by the --stuff_area_limit parameter.

For more information about the script logic and usage: python -m panopticapi.combine_semantic_and_instance_predictions --help
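The merging heuristic can be illustrated with a toy sketch (an illustration only, not the actual panopticapi implementation, which operates on numpy masks): instance predictions claim pixels first, then stuff segments fill the remaining pixels, and stuff segments whose remaining area falls below the limit are dropped.

```python
def combine(instance_masks, stuff_masks, stuff_area_limit):
    """Toy merge: masks are (segment_id, set_of_pixel_coords) pairs."""
    assigned = {}  # pixel -> segment_id
    # Instance segments take priority; earlier (higher-confidence) ones win.
    for seg_id, pixels in instance_masks:
        for p in pixels:
            assigned.setdefault(p, seg_id)
    # Stuff segments only fill pixels left unclaimed, filtered by area.
    for seg_id, pixels in stuff_masks:
        free = [p for p in pixels if p not in assigned]
        if len(free) < stuff_area_limit:
            continue
        for p in free:
            assigned[p] = seg_id
    return assigned
```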

COCO panoptic segmentation challenge categories

The JSON file panoptic_coco_categories.json contains the list of all categories used in the COCO panoptic segmentation challenge 2018.

Visualization

visualization.py provides an example of generating a visually appealing representation of panoptic segmentation data.

Contact

If you have any questions regarding this API, please contact us at alexander.n.kirillov-at-gmail.com.

panopticapi's People

Contributors

alexander-kirillov, ppwwyyxx, yucornetto


panopticapi's Issues

Number of Classes in Panoptic Categories is 133, not 172 as mentioned on MS COCO Site

From http://cocodataset.org/#panoptic-2018:

"The panoptic task uses all the annotated COCO images and includes the 80 thing categories from the detection task and a subset of the 91 stuff categories from the stuff task, with any overlaps manually resolved."

Should the panoptic categories not contain 171 classes then? When I run the following code it prints out 133.

import json

pan_cats = "panoptic_coco_categories.json"
with open(pan_cats, 'r') as COCO_Pan:
    pan = json.loads(COCO_Pan.read())
    print(len(pan))

Error when extracting semantic segmentation json from panoptic segmentation json

Following these indications, I am trying to obtain the json file for semantic segmentation, starting from the json file for panoptic segmentation.

The command that I used:
python converters/panoptic2semantic_segmentation.py --input_json_file /path/to/json/panoptic_<train, val>2017.json --output_json_file /path/to/json/semantic_<train, val>2017.json

The error:

 File "converters/panoptic2semantic_segmentation.py", line 210, in <module>
   extract_semantic(args.input_json_file,
 File "converters/panoptic2semantic_segmentation.py", line 174, in extract_semantic
   json.dump(d_coco, out)
 File "/usr/lib/python3.8/json/__init__.py", line 179, in dump
   for chunk in iterable:
 File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode
   yield from _iterencode_dict(o, _current_indent_level)
 File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict
   yield from chunks
 File "/usr/lib/python3.8/json/encoder.py", line 325, in _iterencode_list
   yield from chunks
 File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict
   yield from chunks
 File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict
   yield from chunks
 File "/usr/lib/python3.8/json/encoder.py", line 438, in _iterencode
   o = _default(o)
 File "/usr/lib/python3.8/json/encoder.py", line 179, in default
   raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type bytes is not JSON serializable

Other information: I successfully used the converter for the images (from panoptic JSON to a folder with semantic segmentation PNGs) on both the training and validation sets.

Python version: 3.8.5

converting cityscapes test images

I am using cityscapes_panoptic_converter.py with the test images. I only changed original_format_folder to the test images, but I am getting blank black images. It works fine for train and val. How can I fix this?

Does this also work with polygons, or only with the RLE format?

In the code, every ann['segmentation'], both for the instance annotations and the semantic annotations, is read through the functions in pycocotools.mask:

from pycocotools import mask as COCOmask

but these functions work only with the RLE format, so my guess is that the code fails for segments saved as polygons, instead of RLE.

Am I correct?
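For intuition on what the RLE representation actually is, here is a toy decoder for COCO's uncompressed RLE (an illustration, not pycocotools code): the counts list alternating runs of 0s and 1s over the mask flattened in column-major order, starting with 0s.

```python
def rle_decode(counts, size):
    """Decode uncompressed COCO-style RLE into a list of rows (toy version)."""
    h, w = size
    flat, val = [], 0
    for c in counts:          # runs alternate between 0 and 1, starting at 0
        flat.extend([val] * c)
        val = 1 - val
    assert len(flat) == h * w
    # COCO flattens masks column-major (Fortran order)
    return [[flat[col * h + row] for col in range(w)] for row in range(h)]
```

A polygon, by contrast, is a flat list of (x, y) vertex coordinates, which is why code expecting RLE dicts cannot consume it directly.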

Getting semantic and instance segmentation from panoptic segmentation

Hi!

I am new to image segmentation and wanted to know if it's possible to extract the semantic and instance segmentation images using the visualization.py script. I understand that we have converter scripts, but they require pycocotools, which requires downloading a huge image dataset (more than 1 GB). Is it possible to modify the visualization script somehow to get the two segmentation results for the sample images?

Thanks in advance

different variable name

in line:
print('no prediction for the image with id: {}'.format(img_id))
I think the variable img_id should be image_id

Key error when running panoptic2detection_coco_format.py

When running the script with the following arguments:

--input_json_file "panoptic_train2017.json"
--segmentations_folder "stuff_train2017_pixelmaps"
--output_json_file "my_folder"
--categories_json_file "~/panopticapi/panoptic_coco_categories.json"
--things_only

I get the following error:

CONVERTING...
COCO panoptic format:
Segmentation folder: stuff_train2017_pixelmaps
JSON file: panoptic_train2017.json
TO
COCO detection format
JSON file: train_semantic_seg
Saving only segments of things classes.

Reading annotation information from panoptic_train2017.json
Number of cores: 8, images per core: 14786
Core: 0, 0 from 14786 images processed
Caught exception in worker thread:
Traceback (most recent call last):
File "/home/nicolas/panopticapi/converters/utils.py", line 14, in wrapper
return f(*args, **kwargs)
File "/home/nicolas/panopticapi/converters/panoptic2detection_coco_format.py", line 61, in convert_panoptic_to_detection_coco_format_single_core
segm_info['segmentation'] = COCOmask.encode(np.asfortranarray(mask))[0]
KeyError: 0
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/nicolas/.pyenv/versions/3.6.0/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/nicolas/panopticapi/converters/utils.py", line 18, in wrapper
raise e
File "/home/nicolas/panopticapi/converters/utils.py", line 14, in wrapper
return f(*args, **kwargs)
File "/home/nicolas/panopticapi/converters/panoptic2detection_coco_format.py", line 61, in convert_panoptic_to_detection_coco_format_single_core
segm_info['segmentation'] = COCOmask.encode(np.asfortranarray(mask))[0]
KeyError: 0
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/nicolas/panopticapi/converters/panoptic2detection_coco_format.py", line 153, in
args.things_only)
File "/home/nicolas/panopticapi/converters/panoptic2detection_coco_format.py", line 109, in convert_panoptic_to_detection_coco_format
annotations_coco_detection.extend(p.get())
File "/home/nicolas/.pyenv/versions/3.6.0/lib/python3.6/multiprocessing/pool.py", line 608, in get
raise self._value
KeyError: 0

I tried to remove the [0] in the line segm_info['segmentation'] = COCOmask.encode(np.asfortranarray(mask))[0] which resulted in

KeyError: 'color'

later on. Any help would be much appreciated.

Panoptic quality not symmetric?

In section 6 of "Panoptic Quality" paper, under "Human annotations", it is mentioned that PQ is symmetric, i.e. the order of ground truth and predictions is unimportant. In practice this does not seem to be the case, for example:

python panopticapi/evaluation.py --gt_json_file gt.json --pred_json_file pred.json
...
          |    PQ     SQ     RQ     N
--------------------------------------
All       |  30.1   45.2   33.3     2
Things    |  30.1   45.2   33.3     2
Stuff     |   0.0    0.0    0.0     0
...
python panopticapi/evaluation.py --gt_json_file pred.json --pred_json_file gt.json
...
          |    PQ     SQ     RQ     N
--------------------------------------
All       |  81.3   81.3  100.0     1
Things    |  81.3   81.3  100.0     1
Stuff     |   0.0    0.0    0.0     0
...

One of the reasons seems to be the special treatment of the VOID class: for example, predictions are not counted as false positives when their overlap with ground-truth VOID is bigger than 0.5 (section 4.2 in the paper, under "Void labels"). I think this enforces one interpretation of VOID, "not labeled", under which ignoring such areas makes sense. The alternative would be "class not known", in which case marking such an area with a known class would count as a false positive. The second interpretation is more common IMHO and would retain the symmetry of the metric.

I suggest changing the interpretation of VOID to make the metric symmetric, or adding it as a command-line option.
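For reference, the PQ definition under discussion can be sketched as follows (a simplified illustration: matched pairs are ground-truth/prediction pairs with IoU > 0.5, unmatched predictions are FP, unmatched ground-truth segments are FN):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Return (PQ, SQ, RQ) given the IoUs of matched (TP) pairs."""
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    if denom == 0:
        return 0.0, 0.0, 0.0
    sq = sum(matched_ious) / tp if tp else 0.0   # segmentation quality
    rq = tp / denom                              # recognition quality
    return sq * rq, sq, rq                       # PQ = SQ * RQ
```

Swapping ground truth and prediction changes which segments end up as FP vs FN, and the VOID rule additionally changes which predictions are excluded only in one direction, which is where the observed asymmetry comes from.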

Panoptic To Segmentation Converter Bug

Hi. Thank you for sharing this work and dataset :)

When trying to run the conversion script from panoptic to segmentation, I ran into a KeyError because of this line. I'm afraid I didn't save away the full stack trace.

Anyway, commenting that line out made the conversion go smoothly. I believe the intended reference was to the categories variable, which does have a color key.

panoptic_coco_categories.json not found

So I'm trying to prepare some data in the COCO panoptic format for Detectron2's PanopticFPN. I'm missing the stuff annotation PNGs. Apparently one can use the panoptic2semantic_segmentation converter, but there is a default file path:

parser.add_argument('--categories_json_file', type=str,
help="JSON file with Panoptic COCO categories information",
default='./panoptic_coco_categories.json')

I have no idea where to get this JSON file from... I have the PNG annotations and the overall .json file with all the instances etc. Has someone already done this?

Instance annotation

I am trying to visualize some annotation output from instance_data.py. Below are three images of class PERSON (in the validation set); each shows the mask of ONE particular instance of class PERSON (the foreground is WHITE). Are they supposed to look like this?

000000364884.jpg
000000364884_person
000000005586.jpg
000000005586_person
000000191845.jpg
000000191845_person

evaluation

Hi Team,
Thanks for the great work.
I was wondering if it is possible to evaluate panoptic segmentation without predictions in the COCO panoptic JSON format, i.e. with just a ground-truth image, a prediction image, and COCO panoptic format ground truth.
thanks,
best

convert custom dataset to panoptic coco format

Hi @alexander-kirillov, I have a custom dataset with annotations including instance segmentation masks and semantic segmentation masks. I generated this dataset in the 2channels format, but I found that 2channels2panoptic_coco_format.py doesn't give 'bbox' and 'area' in 'segments_info', which makes the panoptic segmentation model unable to train. So I converted it to the instance segmentation format and then tried to convert that to the panoptic segmentation format. But I got the following error:

line 68, in convert_detection_to_panoptic_coco_format_single_core
raise Exception("Segments for image {} overlap each other.".format(img_id))

Exception: Segments for image 1 overlap each other.

This problem has been bothering me for a long time. How can I get the correct data format for panoptic segmentation?
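The exception fires because the panoptic format requires all segments of an image to be disjoint. A toy check for finding the offending pixels (an illustration; the converter itself sums numpy masks and looks for values greater than 1):

```python
def find_overlaps(masks):
    """masks: list of sets of pixel coordinates; returns overlapping pixels."""
    seen, overlapping = set(), set()
    for pixels in masks:
        overlapping |= (seen & pixels)
        seen |= pixels
    return overlapping
```

If this returns a non-empty set for some image, the instance annotations overlap and must be resolved (for example, by ranking the instances and erasing the overlap from the lower-ranked ones) before conversion.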

TypeError: Object of type int64 is not JSON serializable

Running detection2panoptic_coco_format.py fails on a fresh, untouched unzip of coco-things:

ubuntu@XXXXX:~/GitRepos/panopticapi$ python3 converters/detection2panoptic_coco_format.py --input_json_file /home/ubuntu/coco-things/annotations/instances_val2017.json --output_json_file /home/ubuntu/coco-things/panoptic_things.json
Creating folder /home/ubuntu/coco-things/panoptic_things for panoptic segmentation PNGs
CONVERTING...
COCO detection format:
        JSON file: /home/ubuntu/coco-things/annotations/instances_val2017.json
TO
COCO panoptic format
        Segmentation folder: /home/ubuntu/coco-things/panoptic_things
        JSON file: /home/ubuntu/coco-things/panoptic_things.json


loading annotations into memory...
Done (t=0.47s)
creating index...
index created!
Number of cores: 6, images per core: 834
Core: 0, 0 from 834 images processed
Caught exception in worker thread:
Traceback (most recent call last):
  File "/home/ubuntu/miniconda3/lib/python3.9/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "/home/ubuntu/GitRepos/panopticapi/converters/detection2panoptic_coco_format.py", line 69, in convert_detection_to_panoptic_coco_format_single_core
    raise Exception("Segments for image {} overlap each other.".format(img_id))
Exception: Segments for image 397133 overlap each other.
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/ubuntu/miniconda3/lib/python3.9/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/home/ubuntu/miniconda3/lib/python3.9/site-packages/panopticapi/utils.py", line 20, in wrapper
    raise e
  File "/home/ubuntu/miniconda3/lib/python3.9/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "/home/ubuntu/GitRepos/panopticapi/converters/detection2panoptic_coco_format.py", line 69, in convert_detection_to_panoptic_coco_format_single_core
    raise Exception("Segments for image {} overlap each other.".format(img_id))
Exception: Segments for image 397133 overlap each other.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/GitRepos/panopticapi/converters/detection2panoptic_coco_format.py", line 148, in <module>
    convert_detection_to_panoptic_coco_format(args.input_json_file,
  File "/home/ubuntu/GitRepos/panopticapi/converters/detection2panoptic_coco_format.py", line 118, in convert_detection_to_panoptic_coco_format
    annotations_coco_panoptic.extend(p.get())
  File "/home/ubuntu/miniconda3/lib/python3.9/multiprocessing/pool.py", line 771, in get
    raise self._value
Exception: Segments for image 397133 overlap each other.
Core: 1, 0 from 834 images processed

I've modified the code to skip over anything with an overlap; the PNG maps then load, but the JSON conversion fails at saving.

# modification to skip overlaps
        if np.sum(overlaps_map > 1) != 0:
            bad_img_ids.append(img_id) #init this list as global
            continue

JSON serialization error: something about d_coco['annotations'] fails even when using Python ints.

Traceback (most recent call last):
  File "/home/ubuntu/GitRepos/panopticapi/converters/detection2panoptic_coco_format.py", line 151, in <module>
    convert_detection_to_panoptic_coco_format(args.input_json_file,
  File "/home/ubuntu/GitRepos/panopticapi/converters/detection2panoptic_coco_format.py", line 127, in convert_detection_to_panoptic_coco_format
    save_json(d_coco, output_json_file)
  File "/home/ubuntu/miniconda3/lib/python3.9/site-packages/panopticapi/utils.py", line 99, in save_json
    json.dump(d, f)
  File "/home/ubuntu/miniconda3/lib/python3.9/json/__init__.py", line 179, in dump
    for chunk in iterable:
  File "/home/ubuntu/miniconda3/lib/python3.9/json/encoder.py", line 431, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/home/ubuntu/miniconda3/lib/python3.9/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/home/ubuntu/miniconda3/lib/python3.9/json/encoder.py", line 325, in _iterencode_list
    yield from chunks
  File "/home/ubuntu/miniconda3/lib/python3.9/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/home/ubuntu/miniconda3/lib/python3.9/json/encoder.py", line 438, in _iterencode
    o = _default(o)
  File "/home/ubuntu/miniconda3/lib/python3.9/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type int64 is not JSON serializable
    with open(input_json_file, 'r') as f:
        d_coco = json.load(f)
    d_coco['annotations'] = annotations_coco_panoptic
    d_coco['categories'] = categories_list
    # modification: converting bounding boxes & area to python integers
    for item in d_coco['annotations']:
        for item2 in item.get('segments_info'):
            if item2.get('bbox'):
                temp_bboxes = [int(num) for num in item2.get('bbox')]
                item2['bbox'] = temp_bboxes
                item2['area'] = int(item2['area'])
    save_json(d_coco, output_json_file)

Even when I convert both the bounding boxes and the areas to Python integers, I still get the error. If I copy a sample to a Jupyter notebook, it has no problem saving to JSON:

test1 = {'image_id': 403385, 'file_name': '000000403385.png', 'segments_info': [{'area': 16555, 'iscrowd': 0, 'bbox': [411, 237, 93, 242], 'category_id': 70, 'id': 7906560}, {'area': 7561, 'iscrowd': 0, 'bbox': [9, 313, 142, 78], 'category_id': 81, 'id': 6538619}]}
save_json(test1,output_json_file) # success 

ZeroDivisionError: division by zero

I got ZeroDivisionError: division by zero from panopticapi/evaluation.py for an empty prediction.
With tp + fp + fn == 0, the increment of n is skipped.
When all of categories.items() are skipped because of the empty prediction, n stays zero and the error is raised.
Shouldn't n increase whenever the category matches isthing (right after the first continue)?

def pq_average(self, categories, isthing):
    pq, sq, rq, n = 0, 0, 0, 0
    per_class_results = {}
    for label, label_info in categories.items():
        if isthing is not None:
            cat_isthing = label_info['isthing'] == 1
            if isthing != cat_isthing:
                continue
        iou = self.pq_per_cat[label].iou
        tp = self.pq_per_cat[label].tp
        fp = self.pq_per_cat[label].fp
        fn = self.pq_per_cat[label].fn
>>        if tp + fp + fn == 0:
>>            per_class_results[label] = {'pq': 0.0, 'sq': 0.0, 'rq': 0.0}
>>            continue
>>        n += 1
        pq_class = iou / (tp + 0.5 * fp + 0.5 * fn)
        sq_class = iou / tp if tp != 0 else 0
        rq_class = tp / (tp + 0.5 * fp + 0.5 * fn)
        per_class_results[label] = {'pq': pq_class, 'sq': sq_class, 'rq': rq_class}
        pq += pq_class
        sq += sq_class
        rq += rq_class

    return {'pq': pq / n, 'sq': sq / n, 'rq': rq / n, 'n': n}, per_class_results
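One way to sketch the guard implied by this report (an assumption about a possible fix, not the library's actual code): average only over the counted classes, and return zeros when nothing was counted instead of dividing by zero.

```python
def pq_average_safe(per_class_results):
    """per_class_results: {label: {'pq': .., 'sq': .., 'rq': ..}} of counted classes."""
    n = len(per_class_results)
    if n == 0:  # empty prediction: nothing to average, report zeros
        return {'pq': 0.0, 'sq': 0.0, 'rq': 0.0, 'n': 0}
    return {
        'pq': sum(r['pq'] for r in per_class_results.values()) / n,
        'sq': sum(r['sq'] for r in per_class_results.values()) / n,
        'rq': sum(r['rq'] for r in per_class_results.values()) / n,
        'n': n,
    }
```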

Underestimation of SQ?

When averaging SQ over classes, even classes with TP=0, i.e. classes that were not recognized at all, are taken into account. For those classes SQ=0, so they reduce the average SQ considerably. At least for me this breaks the intuition of SQ as a metric that measures how well the segmentation matches the ground truth. Consider this example:

class     IOU    TP  FP  FN    PQ    SQ    RQ
class 1   0.88    1   0   1  0.59  0.88  0.67
class 2   0.69    1   0   0  0.69  0.69  1.00
class 3   0.63    1   0   0  0.63  0.63  1.00
class 4   0.00    0   1   1  0.00  0.00  0.00
class 5   0.00    0   1   0  0.00  0.00  0.00
average                      0.38  0.44  0.53

Even though the per-class segmentation results were decent for the first three classes, an average of 0.44 over all classes seems unfair. Following the SQ calculation algorithm, one could wonder how SQ can ever be smaller than 0.5. A simple solution would be to count only non-zero SQ results when averaging; in this case the average SQ would be 0.73.

At first I thought that this would break the nice PQ = SQ × RQ formula, but then I realized that averaging breaks it anyway. I understand that this brings up the question of how to average PQ as well, and it would be nice if the rules were consistent. That's why I'm posting this as an issue to discuss rather than a pull request.
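The two averaging conventions in this example can be reproduced directly (values taken from the table above; counting only recognized classes is the reporter's proposal, not current panopticapi behaviour):

```python
per_class_sq = [0.88, 0.69, 0.63, 0.0, 0.0]

# Current behaviour: average over all classes, including unrecognized ones.
avg_all = sum(per_class_sq) / len(per_class_sq)          # ~0.44

# Proposed: average only over classes with at least one true positive.
recognized = [sq for sq in per_class_sq if sq > 0]
avg_recognized = sum(recognized) / len(recognized)       # ~0.73
```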

Question regarding IDGenerator

Greetings,

I have a question regarding IDGenerator. Let's say I create panoptic labels for a dataset with 5 different colours for cars, viz. [0, 0, 255], [0, 0, 200], [0, 0, 150], [0, 0, 100], [0, 0, 50], and I assign different category ids to them, i.e. 5 different class ids for the same class "car".
Can I still use IDGenerator for all the car colours I have assigned? Will it try to generate more shades of blue, or use only the ones I have mentioned?

COCO Panoptic data does not have 'color' attribute

Hi,
It seems that the COCO Panoptic annotations do not have a 'color' attribute:
http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip

eg.
...{"supercategory": "building", "isthing": 0, "id": 197, "name": "building-other-merged"}, {"supercategory": "solid", "isthing": 0, "id": 198, "name": "rock-merged"}, {"supercategory": "wall", "isthing": 0, "id": 199, "name": "wall-other-merged"}, {"supercategory": "textile", "isthing": 0, "id": 200, "name": "rug-merged"}]}

When I use format_converter.py I get "KeyError: 'color'"

Thanks!

How is semantic_data.py different from OpenCV single-channel imread?

Hi,

semantic_data.py is described as extracting the semantic annotation for stuff and things from the ground truth and saving it as a single-channel PNG.

Loading the gt using OpenCV as grayscale:

>>> o = cv2.imread("../panoptic_val2017/000000000139.png", 0 ) 
>>> o.shape
(426, 640)
>>> o
array([[119, 119, 119, ..., 119, 119, 119],
       [119, 119, 119, ..., 119, 119, 119],
       [119, 119, 119, ..., 119, 119, 119],
       ..., 
       [ 78,  78,  78, ..., 129, 129, 129],
       [ 78,  78,  78, ..., 129, 129, 129],
       [ 78,  78,  78, ..., 129, 129, 129]], dtype=uint8)

After running semantic_data.py and generating the PNGs:

>>> i = cv2.imread("000000000139.png", 0 ) 
>>> i.shape
(426, 640)
>>> i
array([[119, 119, 119, ..., 119, 119, 119],
       [119, 119, 119, ..., 119, 119, 119],
       [119, 119, 119, ..., 119, 119, 119],
       ..., 
       [ 78,  78,  78, ..., 129, 129, 129],
       [ 78,  78,  78, ..., 129, 129, 129],
       [ 78,  78,  78, ..., 129, 129, 129]], dtype=uint8)

Comparing the two:

>>> a = i == o
>>> a
array([[ True,  True,  True, ...,  True,  True,  True],
       [ True,  True,  True, ...,  True,  True,  True],
       [ True,  True,  True, ...,  True,  True,  True],
       ..., 
       [ True,  True,  True, ...,  True,  True,  True],
       [ True,  True,  True, ...,  True,  True,  True],
       [ True,  True,  True, ...,  True,  True,  True]], dtype=bool)
>>> True in a
True
>>> False in a
False

Is there any difference between the two that I am missing? Can I simply use imread with grayscale loading, instead of the extra pre-processing step?

About the stuff categories annotation.

panoptic_semseg_train2017/000000000247.png
000000000247

I want to know why the grayscale value of the sky is 119; shouldn't sky-other in the labels be 157, or 146 (157 − 11, as some classes have been removed)?

I am confused about how to build the mapping between classes and the grayscale values in the .png.
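Assuming the semantic PNG stores a category id directly as each pixel's grayscale value (an assumption here), the mapping can be built from panoptic_coco_categories.json. The two entries below are illustrative stand-ins for the real file:

```python
# In practice, load the real category list, e.g.:
#   with open("panoptic_coco_categories.json") as f:
#       categories = json.load(f)
# Illustrative stand-in entries (hypothetical values):
categories = [
    {"id": 119, "name": "sky-other-merged", "isthing": 0},
    {"id": 184, "name": "tree-merged", "isthing": 0},
]

# Map a grayscale pixel value back to its category name.
id_to_name = {c["id"]: c["name"] for c in categories}
```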

how to visualize results for panoptic segmentation on test set

I believe that Cityscapes uses the COCO format for input generation via scripts.

After running the Panoptic-DeepLab model with COCO-format input on Cityscapes, I am getting one JSON file in the output with some results. How can I use that JSON file with the input test files to generate a predicted mask?

Is there any script available?

Detection to Panoptic Segmentation converter creates blank images and no segmentation coordinates

In an attempt to convert COCO detection annotations to Panoptic Segmentation, I am getting the following output:

  • The generated PNG images are completely black; no segments are visible
  • The generated COCO panoptic annotation JSON contains "segments_info": [] (an empty list) for all images

Command used to convert:

python converters/detection2panoptic_coco_format.py \
  --input_json_file /media/Data/Documents/Python-Codes/data/reading_order_dataset/complex/textline/via_export_coco.json \
  --output_json_file /media/Data/Documents/Python-Codes/data/reading_order_dataset/complex/textline/coco_panoptic.json

I have attached a sample of the files below:

  • Input COCO detection JSON with bbox and segmentation coordinates (screenshot attached)
  • Output panoptic segmentation JSON with empty "segments_info" (screenshot attached)
  • Output sample PNG, a completely blank black image (attached)

Please let me know what I am doing wrong and how to fix this.

KeyError: 'color' when running panoptic2detection_coco_format.py

Hello

I am trying to run the script panoptic2detection_coco_format.py

My full command is

python converters/panoptic2detection_coco_format.py   --input_json_file ~/data/coco_panoptic/annotations/panoptic_val2017.json --categories_json_file ./panoptic_coco_categories.json  --segmentations_folder ~/data/coco_panoptic/annotations/panoptic_val2017  --output_json_file ~/data/coco_panoptic/annotations/converted/panoptic2det_val2017.json

However, after a few seconds of processing the script errors out with

Traceback (most recent call last):
  File "converters/panoptic2detection_coco_format.py", line 153, in <module>
    args.things_only)
  File "converters/panoptic2detection_coco_format.py", line 120, in convert_panoptic_to_detection_coco_format
    category.pop('color')
KeyError: 'color'

I was wondering is this was a bug or a mistake in how I am invoking the converter. Any help would be appreciated.

Thanks
Vighnesh

KeyError when using detection2panoptic_coco_format

Trying to convert instance segmentations in COCO format to a panoptic format, but getting a KeyError:


Creating folder ../Playment/01-playment-1000-images/panoptic for panoptic segmentation PNGs
CONVERTING...
COCO detection format:
	JSON file: ../Playment/01-playment-1000-images/02-playment_1_class_poly.json
TO
COCO panoptic format
	Segmentation folder: ../Playment/01-playment-1000-images/panoptic
	JSON file: ../Playment/01-playment-1000-images/panoptic.json


loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Number of cores: 8, images per core: 13
Core: 0, 0 from 13 images processed
Core: 1, 0 from 13 images processed
Caught exception in worker thread:
Core: 2, 0 from 13 images processed
Caught exception in worker thread:
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 10643857408083048448
Core: 3, 0 from 13 images processed
Caught exception in worker thread:
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 3484711879296423936
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 10094264106565765120
Core: 4, 0 from 12 images processed
Caught exception in worker thread:
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
Core: 5, 0 from 12 images processed
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 382583937906978560
Caught exception in worker thread:
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 16482284176345886720
Core: 6, 0 from 12 images processed
Core: 7, 0 from 12 images processed
Caught exception in worker thread:
Caught exception in worker thread:
Traceback (most recent call last):
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 11941352026092914688
KeyError: 4354006610727533568
Caught exception in worker thread:
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 12128046965261295616
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/opt/conda/envs/oneformer2/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 20, in wrapper
    raise e
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/panopticapi/utils.py", line 16, in wrapper
    return f(*args, **kwargs)
  File "converters/detection2panoptic_coco_format.py", line 42, in convert_detection_to_panoptic_coco_format_single_core
    img = coco_detection.loadImgs(int(img_id))[0]
  File "/opt/conda/envs/oneformer2/lib/python3.8/site-packages/pycocotools/coco.py", line 229, in loadImgs
    return [self.imgs[ids]]
KeyError: 12128046965261295616
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "converters/detection2panoptic_coco_format.py", line 148, in <module>
    convert_detection_to_panoptic_coco_format(args.input_json_file,
  File "converters/detection2panoptic_coco_format.py", line 118, in convert_detection_to_panoptic_coco_format
    annotations_coco_panoptic.extend(p.get())
  File "/opt/conda/envs/oneformer2/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
KeyError: 12128046965261295616
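A KeyError like the ones above usually means the image_id values referenced by the annotations have no entry in the "images" list that pycocotools indexes. A minimal pre-flight check before running the converter (find_missing_image_ids is a hypothetical helper, not part of panopticapi):

```python
import json

def find_missing_image_ids(detection_json_path):
    """Return annotation image_ids that have no entry in the 'images' list."""
    with open(detection_json_path) as f:
        data = json.load(f)
    known = {img['id'] for img in data.get('images', [])}
    referenced = {ann['image_id'] for ann in data.get('annotations', [])}
    return sorted(referenced - known)
```

Any id this reports would raise exactly this KeyError inside coco.loadImgs.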

TypeError when running cityscapes_panoptic_converter.py

I get a TypeError when trying to run the cityscapes converter.

The problem seems to be that something is using a numpy type which can't be serialized by the json module.

Traceback (most recent call last):
  File "cityscapes_gt_converter/cityscapes_panoptic_converter.py", line 119, in <module>
    panoptic_converter(original_format_folder, out_folder, out_file)
  File "cityscapes_gt_converter/cityscapes_panoptic_converter.py", line 116, in panoptic_converter
    json.dump(d, f)
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/__init__.py", line 179, in dump
    for chunk in iterable:
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 430, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 325, in _iterencode_list
    yield from chunks
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 325, in _iterencode_list
    yield from chunks
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 437, in _iterencode
    o = _default(o)
  File "/export/home/yuyanli/.local/opt/miniconda3/envs/t4/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'int64' is not JSON serializable

P.S. I also had to change "from utils import IdGenerator" to "from panopticapi.utils import IdGenerator".
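One common fix for the int64 error, sketched here rather than taken from the converter itself, is a json.JSONEncoder subclass that downcasts numpy types before serialization:

```python
import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    """Fall back to plain Python types for numpy scalars and arrays."""
    def default(self, o):
        if isinstance(o, np.integer):
            return int(o)
        if isinstance(o, np.floating):
            return float(o)
        if isinstance(o, np.ndarray):
            return o.tolist()
        return super().default(o)
```

Passing cls=NumpyEncoder to the json.dump(d, f) call in panoptic_converter avoids the error; explicitly casting the offending fields with int(...) works too.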

Convert the annotations of thing to COCO instance format

Hi, thanks for the work!
I have a question about "instance_data.py", specifically this line:
pan = pan_format[:,:,0] + 256 * pan_format[:,:,1] + 256*256*pan_format[:,:,2]
Could you give some details about what it does?
When I print the results after converting to the COCO instance format, the "segmentation" fields all look binary-encoded rather than using the decimal code of the COCO instance format.
Thanks very much!
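That line packs the three uint8 PNG channels of each pixel into one integer segment id, treating (R, G, B) as base-256 digits: id = R + G*256 + B*256^2. A self-contained sketch of the encoding and its inverse (the names mirror panopticapi.utils, but the code here is written for illustration):

```python
import numpy as np

def rgb2id(color):
    # (R, G, B) are the three base-256 digits of a 24-bit segment id.
    color = color.astype(np.uint32)
    return color[:, :, 0] + 256 * color[:, :, 1] + 256 * 256 * color[:, :, 2]

def id2rgb(id_map):
    # Inverse: peel off the base-256 digits back into three channels.
    id_map = id_map.copy()
    rgb = np.zeros(id_map.shape + (3,), dtype=np.uint8)
    for i in range(3):
        rgb[..., i] = id_map % 256
        id_map //= 256
    return rgb
```

As for the converted "segmentation" looking binary: pycocotools stores compressed RLE 'counts' as a bytes object rather than a list of numbers, so that appearance is expected for compressed RLE, not an error.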

Questions about evaluation.py

Hi, I was reading evaluation.py and found some lines confusing:

panopticapi/evaluation.py, lines 141 to 147 at commit 8a32c19:

crowd_labels_dict = {}
for gt_label, gt_info in gt_segms.items():
    if gt_label in gt_matched:
        continue
    # crowd segments are ignored
    if gt_info['iscrowd'] == 1:
        crowd_labels_dict[gt_info['category_id']] = gt_label

For iscrowd segments, the category_id and segment id are stored in a dictionary, which implies that, for each category, COCO annotates at most one segment with iscrowd set to true. In other words, if there are two distant crowds of people, they are annotated with the same segment id. However, I can't find any support for this implication in the description of the panoptic dataset. Is this a feature of the COCO dataset?

close multiprocessing safely

I think

workers.close()
workers.join()

should be added right before the return statements in combine_to_panoptic_multi_core and pq_compute_multi_core to close the multiprocessing pools safely, as a colleague of mine found they might not release memory properly otherwise. I agreed and adopted the change, but I haven't tested it.
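The suggested pattern can be sketched in isolation like this (square and run_jobs are illustrative names, not panopticapi functions):

```python
from multiprocessing import Pool

def square(x):
    return x * x

def run_jobs(items):
    workers = Pool(processes=2)
    try:
        results = workers.map(square, items)
    finally:
        workers.close()  # no further tasks may be submitted
        workers.join()   # wait for worker processes to exit, releasing memory
    return results
```

The try/finally ensures the pool is joined even if a worker raises; note that `with Pool(...) as workers:` is similar but calls terminate() rather than close() on exit.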

How to specify instance mask in coco panoptic annotation format?

Here is the COCO panoptic segmentation annotation format from the COCO website:
(screenshot of the format specification attached)

Since each segment_info only has a category_id relating to the instance's color mask in the annotation mask image, how do you tell apart instances that share the same category_id?

Also, I noticed the panoptic annotation mask images in this project, e.g. sample_data/panoptic_examples/000000142238.png: the colors of the person category segments are all slightly different from each other.

Thanks!
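Instances are told apart by the segment id, not by category_id: each segment_info's id equals the integer encoded by that segment's (slightly jittered) color, which is why the persons in the example have slightly different colors. A sketch that recovers one boolean mask per segment (instance_masks is a hypothetical helper):

```python
import numpy as np

def instance_masks(pan_png, segments_info):
    # pan_png: HxWx3 uint8 panoptic PNG; each pixel's color encodes
    # id = R + G*256 + B*256^2, unique per segment (not per category).
    ids = (pan_png[:, :, 0].astype(np.uint32)
           + 256 * pan_png[:, :, 1].astype(np.uint32)
           + 256 * 256 * pan_png[:, :, 2].astype(np.uint32))
    return {seg['id']: ids == seg['id'] for seg in segments_info}
```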

UTF8 decoding error

Hello,

I am using Python 3 and I have some issues storing the JSON files produced by the panoptic-to-detection and panoptic-to-semantic segmentation conversions.

In the panoptic-to-semantic conversion, the 'counts' data from the RLE is not decoded to UTF-8, and saving it throws an error because the json module cannot serialize bytes.

Interestingly, the same code works in the panoptic-to-detection conversion, but there it writes the bytes object as-is into a string, as in the attached image.

has anyone seen the same issue?

thanks
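A workaround worth trying (make_json_safe is a hypothetical helper, not part of panopticapi): decode the RLE 'counts' bytes to str before calling json.dump.

```python
def make_json_safe(segmentation):
    """Return a copy of a COCO RLE dict whose 'counts' is JSON-serializable."""
    seg = dict(segmentation)
    if isinstance(seg.get('counts'), bytes):
        seg['counts'] = seg['counts'].decode('ascii')  # compressed RLE is ASCII
    return seg
```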

Instance ID encoding discrepancy

Hi, question:
Here I read:

The RGB encoding used is ID = R * 256 * G + 256 * 256 + B.

However, in the code, a different encoding is used:

return color[:, :, 0] + 256 * color[:, :, 1] + 256 * 256 * color[:, :, 2]

Which is correct? Thanks!

How can I change color back to category?

Hi,

I noticed that there is a folder called 'panoptic_train2017' in the annotation zip file, and this folder contains color png images corresponding to the training samples.
I wonder if there is any way to convert these png images back to the panoptic annotations, or do I have to incorporate other information from the json file?
Thank you.
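The PNG alone gives only per-pixel segment ids; the category comes from matching those ids against the per-image segments_info in the JSON, so you do need both files. A sketch (png_to_category_map is illustrative):

```python
import numpy as np

def png_to_category_map(pan_png, segments_info):
    # Decode per-pixel segment ids, then look up each id's category_id
    # in the segments_info list from panoptic_train2017.json.
    ids = (pan_png[:, :, 0].astype(np.uint32)
           + 256 * pan_png[:, :, 1].astype(np.uint32)
           + 256 * 256 * pan_png[:, :, 2].astype(np.uint32))
    categories = np.zeros(ids.shape, dtype=np.int64)
    for seg in segments_info:
        categories[ids == seg['id']] = seg['category_id']
    return categories
```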

How are the ground truth panoptic annotations generated?

The code provides the method for combining instance and semantic segmentation predictions described in the paper Panoptic Segmentation.

However, how do you generate the ground truth panoptic dataset? How do you deal with the conflicts of semantic and instance segmentation and the overlap of different instances when there are no confidence scores anymore?

overlapping between annotations

Hey!
I saw that using detection2panoptic is only possible if there are no overlaps in the annotations.
Since I have some overlaps in my annotations, how can I get rid of them?
Any suggestion? :)

I've been struggling with this problem for a while.
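One heuristic sketch (remove_overlaps is hypothetical, not a panopticapi function): pick a priority order, e.g. by detection score or area, and let each segment keep only the pixels that no higher-priority segment has already claimed.

```python
import numpy as np

def remove_overlaps(masks):
    """masks: list of (segment_id, HxW bool) in priority order.

    Each later mask loses any pixel already claimed by an earlier one,
    so the result satisfies detection2panoptic's non-overlap requirement.
    """
    claimed = np.zeros(masks[0][1].shape, dtype=bool)
    out = []
    for seg_id, mask in masks:
        mask = mask & ~claimed
        claimed |= mask
        out.append((seg_id, mask))
    return out
```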

How to distinguish different instance id?

  • Recently I have been studying panoptic segmentation, so I am learning from the panoptic_annotations_trainval2017 dataset.
  • I have understood some things from the dataset; I know that lots of information can be read from the json file.
  • We can get a category like this: "categories": [{"supercategory": "person", "color": [220, 20, 60], "isthing": 1, "id": 1, "name": "person"}...]; the person is a thing and its color is [220, 20, 60], so we can get the ids through ids=R+G*256+B*256^2. The ids are in "annotations": [{"segments_info": [{"area": 3528, "category_id": 1, "iscrowd": 0, "id": 3937500, "bbox": [282, 207, 48, 149]},...]; so I think the thing ids are unique.
  • But in the annotations:
{"segments_info": [{"area": 3528, "category_id": 1, "iscrowd": 0, "id": 3937500, "bbox": [282, 207, 48, 149]}, {"area": 3751, "category_id": 1, "iscrowd": 0, "id": 4260062, "bbox": [420, 183, 49, 173]}, {"area": 2301, "category_id": 1, "iscrowd": 0, "id": 2035955, "bbox": [271, 118, 53, 141]}, {"area": 4076, "category_id": 1, "iscrowd": 0, "id": 4325578, "bbox": [337, 208, 54, 149]}, {"area": 2931, "category_id": 1, "iscrowd": 0, "id": 2628072, "bbox": [47, 221, 36, 138]}...

The category_id 1 has more than one id (3937500, 2035955, 2628072, ...); then I found that when I call get_color, the thing color may be randomly changed based on the original color:

    def get_color(self, cat_id):
        def random_color(base, max_dist=30):
            new_color = base + np.random.randint(low=-max_dist,
                                                 high=max_dist+1,
                                                 size=3)
            return tuple(np.maximum(0, np.minimum(255, new_color)))

        category = self.categories[cat_id]
        if category['isthing'] == 0:
            return category['color']
        base_color_array = category['color']
        base_color = tuple(base_color_array)
        if base_color not in self.taken_colors:
            self.taken_colors.add(base_color)
            return base_color
        else:
            while True:
                color = random_color(base_color_array)
                if color not in self.taken_colors:
                    self.taken_colors.add(color)
                    return color
  • So different thing instances have slightly different colors, but the colors are not fixed.
  • So my question is: when I generate my own predicted panoptic result png, the thing instance ids differ from the ground truth because of this randomness, right?
  • How do you make sure the right segments are matched when evaluation.py builds matched_annotations_list?
  • I don't know if I understand this correctly, so please help me.
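On the matching question: evaluation.py does not compare segment ids across files at all; it pairs a predicted segment with a ground-truth segment of the same category when their intersection-over-union exceeds 0.5, so the random colors and ids are harmless. The criterion can be sketched as (iou here is illustrative, not the exact evaluation.py implementation):

```python
import numpy as np

def iou(mask_a, mask_b):
    # Intersection over union of two boolean masks; a predicted and a
    # ground-truth segment of the same category match when this > 0.5.
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0
```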

convert labels to coco panoptic format

Great initiative and awesome work. Thanks.

I have some data in the form of rgb images, with the panoptic labels also stored as images.
Sadly, I do not have the labels in any other format.
Is there a way to convert the label images to the COCO panoptic format?

thanks
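If each label image already encodes one integer id per segment, a sketch like the following builds the segments_info entries; segments_from_id_map and the category_of mapping are assumptions you would supply yourself, and the surrounding images/annotations JSON then follows the panoptic format description.

```python
import numpy as np

def segments_from_id_map(id_map, category_of):
    # id_map: HxW integer segment ids (0 = unlabeled pixels).
    # category_of: your own mapping from segment id to category_id.
    segments = []
    for seg_id in np.unique(id_map):
        if seg_id == 0:
            continue
        ys, xs = np.nonzero(id_map == seg_id)
        segments.append({
            'id': int(seg_id),
            'category_id': int(category_of[int(seg_id)]),
            'area': int(xs.size),
            'bbox': [int(xs.min()), int(ys.min()),
                     int(xs.max() - xs.min() + 1),
                     int(ys.max() - ys.min() + 1)],
            'iscrowd': 0,
        })
    return segments
```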
