
diux-xview / xview2_baseline


Baseline localization and classification models for the xView 2 challenge.

Home Page: https://xview2.org

License: Other

Languages: Python 80.47%, Jupyter Notebook 7.10%, Dockerfile 2.36%, Shell 10.08%

xview2_baseline's People

Contributors

deg4uss3r, ritwikgupta


xview2_baseline's Issues

xview_geotransforms.json

The script utils/png_to_geotiff.py loads a file called xview_geotransforms.json to convert all PNG files to GeoTIFFs, which would be really useful. Has xview_geotransforms.json been released, or can it be made available?
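For anyone in the same situation, here is a minimal sketch of the conversion the script presumably performs, assuming (hypothetically) that xview_geotransforms.json maps each PNG filename to a 6-element GDAL geotransform plus an EPSG code; the key names and the example filename below are invented for illustration.

    # Hypothetical sketch: convert one xBD PNG to a GeoTIFF with rasterio,
    # assuming xview_geotransforms.json maps "<filename>.png" to a dict with a
    # 6-element GDAL geotransform and an EPSG code (key names are invented).
    import json
    import rasterio
    from rasterio.transform import Affine

    with open("xview_geotransforms.json") as f:
        geo = json.load(f)

    png = "guatemala-volcano_00000000_pre_disaster.png"   # example file
    gt = geo[png]["transform"]                            # hypothetical key
    epsg = geo[png]["epsg"]                               # hypothetical key

    with rasterio.open(png) as src:
        data = src.read()
        profile = src.profile

    profile.update(driver="GTiff", crs=f"EPSG:{epsg}", transform=Affine.from_gdal(*gt))
    with rasterio.open(png.replace(".png", ".tif"), "w", **profile) as dst:
        dst.write(data)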

Format checking of train.csv

I would like to confirm whether the processing I am doing is correct. After running process_data.py on the xBD directory, it generates train.csv and the cropped polygons. The train.csv format and the error I get when passing it to python damage_classification.py --train_data are below:

,uuid,labels
0,9e483432-8681-4257-81db-335c382dbc32.png,0
1,99600a2d-588f-4980-98d8-325c1ea9c825.png,0
2,843424e5-f7c5-4af5-aff0-baddf6bc5836.png,0
3,0aa8779d-a857-435e-bc58-b55cc516971c.png,1
File "/damage_classifier/xView2_baseline/model/damage_classification.py", line 156, in train_model
    class_weights = compute_class_weight('balanced', np.unique(df['labels'].to_list()), df['labels'].to_list());
TypeError: compute_class_weight() takes 1 positional argument but 3 were given

Can you please confirm the expected format of train.csv and the cause of the error?
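The error looks like the keyword-only signature introduced in scikit-learn 0.24 and later, where compute_class_weight only accepts class_weight positionally. A minimal sketch of the adjusted call, assuming train.csv is the file shown above:

    # Sketch of the adjusted call for scikit-learn >= 0.24, where `classes`
    # and `y` are keyword-only arguments.
    import numpy as np
    import pandas as pd
    from sklearn.utils.class_weight import compute_class_weight

    df = pd.read_csv("train.csv")          # the file generated by process_data.py
    labels = df["labels"].to_list()
    class_weights = compute_class_weight(class_weight="balanced",
                                         classes=np.unique(labels),
                                         y=labels)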

Path in Dockerfile does not exist in the repository

When building the Docker image from the Dockerfile, it fails because there is no xview-2 folder to add:

ADD xview-2 /code/xview-2

If changed to

ADD / /code/xview-2

The build will work if the build context is set to the top-level repository folder (the default when building on Docker Hub).

Is there code or software that can label the training images?

Hi, I know the competition has ended, but I still want to train the model.
However, I don't know how to convert those label JSONs into building locations. If there were a ground-truth image (pixel values 0-4) for each training image, it would save me a lot of time. Is there something that can do that? Thank you.
Windows 10
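One way to do this yourself, as a rough sketch: rasterize the polygons from each post-disaster label JSON into a mask with pixel values 0-4. This assumes the usual xBD label layout (features, then xy, with a WKT polygon and a "subtype" damage property) and a 1024x1024 image; holes and multipolygons are ignored for brevity, and the example filename is only illustrative.

    # Rough sketch (not part of the baseline): rasterize an xBD post-disaster
    # label JSON into a mask with 0 = background, 1-4 = damage classes.
    import json
    from PIL import Image, ImageDraw
    from shapely import wkt

    DAMAGE = {"no-damage": 1, "minor-damage": 2, "major-damage": 3, "destroyed": 4}

    def json_to_mask(label_path, size=(1024, 1024)):
        mask = Image.new("L", size, 0)
        draw = ImageDraw.Draw(mask)
        with open(label_path) as f:
            label = json.load(f)
        for feat in label["features"]["xy"]:
            poly = wkt.loads(feat["wkt"])                 # assumes simple polygons
            value = DAMAGE.get(feat["properties"].get("subtype"), 1)
            draw.polygon(list(poly.exterior.coords), fill=value)
        return mask

    json_to_mask("palu-tsunami_00000019_post_disaster.json").save("mask.png")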

Error while loading weights for damage_inference.py

I am getting a shape-mismatch error while loading weights in damage_inference.py, which uses the 'classification.hdf5' file.
ValueError: Cannot assign to variable conv3_block1_0_conv/kernel:0 due to variable shape (1, 1, 256, 512) and value shape (1, 1, 128, 512) are incompatible
(The weights for 'localization.h5' are loaded fine.)

Below is the traceback for the error:

File "./damage_inference.py", line 93, in run_inference
    model.load_weights(model_weights)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 2234, in load_weights
    hdf5_format.load_weights_from_hdf5_group(f, self.layers)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 710, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/keras/backend.py", line 3745, in batch_set_value
    x.assign(np.asarray(value, dtype=dtype(x)))
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 888, in assign
    raise ValueError(
ValueError: Cannot assign to variable conv3_block1_0_conv/kernel:0 due to variable shape (1, 1, 256, 512) and value shape (1, 1, 128, 512) are incompatible

Unable to view images after downloading "Full datasets with GEOTIFF metadata"

Hello

I am unable to view the .tif files after downloading them from the "Full datasets with GEOTIFF metadata" menu. I confirmed the SHA1 checksum of each file after download and of the combined file, and I used the tar option to decompress as stated in the instructions, but all the images appear blank. Do I need to convert them to another format before I can use them?

Thanks!
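If the files are intact, one quick check is to read a tile with rasterio and write an 8-bit PNG preview with a simple contrast stretch; if the preview looks fine, the data are OK and only the image viewer is struggling. A minimal sketch (the filename is only an example):

    # Quick check (hypothetical helper): read the first three bands of a GeoTIFF,
    # apply a 2-98 percentile contrast stretch, and save an 8-bit PNG preview.
    import numpy as np
    import rasterio
    from PIL import Image

    with rasterio.open("guatemala-volcano_00000000_pre_disaster.tif") as src:  # example file
        img = src.read([1, 2, 3]).astype(np.float32)      # bands, H, W
    img = np.transpose(img, (1, 2, 0))                    # H, W, bands
    lo, hi = np.percentile(img, (2, 98))
    img = np.clip((img - lo) / max(hi - lo, 1e-6) * 255, 0, 255).astype(np.uint8)
    Image.fromarray(img).save("preview.png")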

Potential bug in scaling in damage_inference.py

I noticed in damage_inference.py that the testing images are all scaled by 1.4. Inside damage_classification.py, however, the images are scaled by 1/255, which makes sense to me.

I've noticed that the performance of the pretrained model changes greatly depending on the choice of scaling. When I change the scaling to 1/255, the model only predicts "no-damage", but when I keep the scaling at 1.4, the results are more similar to the paper.
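For context, these are the two preprocessing choices being compared, assuming both scripts feed the factor through Keras' ImageDataGenerator rescale argument (a sketch, not the exact baseline code):

    # Sketch of the two rescale settings being compared (assuming both scripts
    # use Keras' ImageDataGenerator; not copied verbatim from the baseline).
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    train_style_gen = ImageDataGenerator(rescale=1.0 / 255)  # damage_classification.py
    infer_style_gen = ImageDataGenerator(rescale=1.4)        # damage_inference.py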

Final output image issue in inference steps

I ran the inference steps individually by following the inference.sh script, using the trained weights from the releases section, but the final output is a plain black image. These are the steps I ran:

  1. Running the localization inference
    !python inference.py --input "/content/drive/My Drive/xview/xBD/palu-tsunami/images/palu-tsunami_00000019_pre_disaster.png" --weights "/content/drive/My Drive/xview/localization.h5" --mean "/content/drive/My Drive/xview/weights/mean.npy" --output "/content/drive/My Drive/xview/output/palu-tsunami_00000019_pre_disaster/labels/palu-tsunami_00000019_pre_disaster.json"

  2. Extracting polygon from post image
    !python process_data_inference.py --input_img "/content/drive/My Drive/xview/xBD/palu-tsunami/images/palu-tsunami_00000019_post_disaster.png" --label_path "/content/drive/My Drive/xview/output/palu-tsunami_00000019_pre_disaster/labels/palu-tsunami_00000019_pre_disaster.json" --output_dir "/content/drive/My Drive/xview/output/output_polygons" --output_csv "/content/drive/My Drive/xview/output/output.csv"

  3. Classifying extracted polygons
    !python damage_inference.py --test_data "/content/drive/My Drive/xview/output/output_polygons" --test_csv "/content/drive/My Drive/xview/output/output.csv" --model_weights "/content/drive/My Drive/xview/classification.hdf5" --output_json "/content/drive/My Drive/xview/output/classification_inference.json"

  4. Combining the predicted polygons with the predicted label
    !python combine_jsons.py --polys "/content/drive/My Drive/xview/output/palu-tsunami_00000019_pre_disaster/labels/palu-tsunami_00000019_pre_disaster.json" --classes "/content/drive/My Drive/xview/output/classification_inference.json" --output "/content/drive/My Drive/xview/output/inference.json"

  5. Transforming the inference json file to the image required
    !python inference_image_output.py --input "/content/drive/My Drive/xview/output/inference.json" --output "/content/drive/My Drive/xview/output/image_tsunami.png"

All of these scripts run without any warnings or errors, but the final output (image_tsunami.png) is a plain black image.

Can you suggest what is wrong in my workflow, or am I missing any steps?
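One quick diagnostic, assuming the combined inference.json follows the xBD label schema (features, then xy, with a "subtype" property): count the predicted polygons before rendering, to see whether the localization step found any buildings at all.

    # Hypothetical diagnostic: inspect the combined inference JSON before it is
    # turned into an image. An empty feature list would explain a black output.
    import json
    from collections import Counter

    with open("inference.json") as f:        # path is an example
        pred = json.load(f)

    feats = pred.get("features", {}).get("xy", [])
    print(len(feats), "polygons predicted")
    print(Counter(feat["properties"].get("subtype", "unknown") for feat in feats))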

Problem with process_data.py

I followed the README instructions and failed at the preprocessing step of the damage classification section.

(xview) /xview2-baseline-master/model$ python process_data.py --input_dir /data/xview19/xBD/ --output_dir /data/xview19/xBD/damage_out --output_dir_csv /data/xview19/xBD/damage_out.csv --val_split_pct 0.80
INFO:root:Started Processing for Data
 24%|█████████████████████████████████████████████▉      | 1790/7323 [01:20<03:58, 23.23it/s]
Traceback (most recent call last):
  File "process_data.py", line 183, in <module>
    main()
  File "process_data.py", line 178, in main
    process_data(args.input_dir, args.output_dir, args.output_dir_csv, float(args.val_split_pct))
  File "process_data.py", line 116, in process_data
    label_file = open(label_path)
FileNotFoundError: [Errno 2] No such file or directory: '/data/xview19/xBD//spacenet_gt/labels/hurricane-harvey_00000258_pre_disaster.json'

What should I do?
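A quick way to see how widespread the problem is (a hedged sketch, assuming spacenet_gt/images holds PNGs named like the labels): list every image whose matching label JSON is missing, which is exactly what triggers the FileNotFoundError.

    # Diagnostic sketch (not part of the baseline): find spacenet_gt images that
    # have no matching label JSON. Assumes images are PNGs with the same stem.
    from pathlib import Path

    root = Path("/data/xview19/xBD/spacenet_gt")
    label_stems = {p.stem for p in (root / "labels").glob("*.json")}
    missing = [p.name for p in (root / "images").glob("*.png")
               if p.stem not in label_stems]
    print(len(missing), "images without labels")
    print(missing[:10])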

Cannot run data_finalize.sh

I am trying to get the code and train it on my machine, following the README.

I am able to split the data into disasters using the split_into_disasters.py script. The next step in the README is localization training using SpaceNet. When I run the sample command given in the README, with the paths changed to suit my directory, Git Bash opens and then closes within a second.


The directory format mentioned in the README is missing the 'train', 'test' and 'hold' directories (process_data.py)

Hi guys,
Thanks for providing this wonderful baseline code to start working on.
While I was going through the steps mentioned in the README, specifically to run process_data.py (https://github.com/DIUx-xView/xview2-baseline#damage-classification-training), I had to slightly modify my directory structure compared to the format prescribed a few lines above in the README (https://github.com/DIUx-xView/xview2-baseline#other-methods), inserting train, test and hold subdirectories first.

To elaborate more, my directory structure looks like

xBD
├── train
    ├── guatemala-volcano
    ├── hurricane-florence
    ├── hurricane-harvey
    ├── hurricane-matthew
     ...
├── test
├── hold
└── spacenet_gt
    ├── dataSet
    ├── images
    └── labels

Can you please look into this, or correct me if I am wrong? Thanks again!

Cannot decompress xview2 dataset

I've downloaded all the data parts and verified them, then used the command cat xview2.geotiff.tgz.part-a? > xview2_geotiff.tgz to merge them. The merged file is about 103 GB, but when I use tar -xvzf xview2_geotiff.tgz or tar xzf xview2_geotiff.tgz (or other tools) to unpack the file, I get the error tar.exe: Error opening archive: Unrecognized archive format or file damaged. What am I doing wrong?

Issue in running train_model.py

When I run train_model.py inside spacenet/src/models using the provided sample command, the output is just Killed.

The command I am trying to run:
python train_model.py ~/data/xBD/spacenet_gt/dataSet/ ~/data/xBD/spacenet_gt/images/ ~/data/xBD/spacenet_gt/labels/ -e 100 -g -4 GPU: -4


Problem with inference.sh

I was following your README and ran inference.sh using the sample call you mentioned:

Sample Call: ./utils/inference.sh -x /path/to/xView2/ -i /path/to/$DISASTER_$IMAGEID_pre_disaster.png -p /path/to/$DISASTER_$IMAGEID_post_disaster.png -l /path/to/localization_weights.h5 -c /path/to/classification_weights.hdf5 -o /path/to/output/image.png -y

The terminal prints "Running localization" and then finishes, without continuing the script.
I was using a conda environment and the weights you've trained.
I wonder what's wrong with my setup.

Question about xBD dataset

The xBD dataset's paper says, "Furthermore, the dataset contains bounding boxes and labels for environmental factors such as fire, water, and smoke." But I can't find any annotations for these environmental factors in the .json files. Can anyone tell me where they are? Thanks very much.
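A quick way to check this locally (a sketch, assuming the standard xBD label schema with a feature_type property): scan every label JSON and tally the feature types; if only "building" shows up, the environmental annotations are absent from the release you have.

    # Hypothetical check: tally the feature_type values across all label JSONs.
    import json
    from collections import Counter
    from pathlib import Path

    types = Counter()
    for p in Path("xBD").rglob("labels/*.json"):          # dataset root is an example
        with p.open() as f:
            label = json.load(f)
        for feat in label.get("features", {}).get("xy", []):
            types[feat["properties"].get("feature_type", "unknown")] += 1
    print(types)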

Question about inference

Hi, I commented out rm -rf "$inference_base" in ./utils/inference.sh as mentioned in the README, but I still cannot find /tmp/inferences/inference.json for visual checking. Could you please show me how to find the final JSON file? Thanks!

Problem with train_model.py

I got a TypeError: convolution_forward() got an unexpected keyword argument 'd_layout' at line 167 when running train_model.py. Thanks!

Discrepancy in fire names in xBD dataset?

Firstly, a BIG thanks for providing this dataset for research.

I have a question: where exactly is the "Socal fire" data from? In Table 1 of your manuscript, I don't see any fire labeled "Socal fire", but in the xview2 dataset there are many images with names starting with "socal-fire".

Additionally, Table 1 of the manuscript has a row for "Carr fire", but I did not find any images with names containing "Carr". Can you please help me figure out:

  1. Which geographic location or which fire is "Socal fire" data from?
  2. Is Carr fire data indeed missing (since no files contain the word "Carr" in train, tier3, or test folders)?

Thanks again for all your efforts. I really appreciate it.
