
patricktum / uncrtaints


Home Page: https://patricktum.github.io/cloud_removal/

Languages: Dockerfile 0.36%, Python 94.72%, Shell 4.92%
Topics: cloud-removal, image-processing, image-reconstruction, inpainting, optical, radar, remote-sensing, sar, satellite, satellite-data, satellite-imagery, sentinel, sentinel-1, sentinel-2, uncertainty, uncertainty-estimation, uncertainty-quantification


uncrtaints's Issues

Some questions about the SEN12MSCRTS dataset class

Hi! This is truly a great paper. I have run into some issues while using the SEN12MSCRTS class in data/dataLoader.py for my own cloud-removal model. Apologies in advance for my English.

Here are my questions:

  1. I see that you have provided pre-computed cloud coverage statistics on pCloud. I downloaded generic_3_train_all_s2cloudless_mask.npy, generic_3_val_all_s2cloudless_mask.npy, and generic_3_test_all_s2cloudless_mask.npy, since I set input_t=3, region='all', sampler='fixed', and sample_type='cloudy_cloudfree' in my SEN12MSCRTS instance. While debugging, I discovered that in most cases, within the same area (i.e., the same pdx in the dataset), more than one of the 30 images has a cloud coverage of 0. The fixed_sampler method selects the image with the lowest coverage as the cloud-free target (a toy illustration of this follows at the end of this post). Under these circumstances, do we still need to perform cloud removal at all, given that another cloud-free image (coverage close to 0) already sits in the input list?

  2. I have also observed that even when two or more of the 30 images of the same area have zero coverage, they do not look the same; I found this by rendering the RGB channels of some images. For example, the two images below come from the same area but show obvious differences. Does this mean we cannot treat cloud removal directly as an inpainting task?

    [Fig. 1 and Fig. 2: RGB renderings of two zero-coverage images of the same area, showing obvious differences]

  3. Since more than one of the 30 images has zero coverage, does it matter which one we choose as the cloud-free target? That is, does the choice of target image affect the performance of our models?

  4. In a few cases, I have noticed that the cloud-free target image visibly contains clouds when I inspect the RGB channels by eye. For example, the four images below were fetched from the target S2 images via the __getitem__ method of the SEN12MSCRTS class and rendered as RGB by selecting channels [3, 2, 1] from the original TIFF files (the rendering sketch below shows how). They do not actually appear cloud-free, although this does not happen very often. Is this because of an error in my code, did S2PixelCloudDetector fail to detect the clouds, or is there some other reason?

    [Fig. 3: RGB renderings of supposedly cloud-free target images that still show visible clouds]
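
    For reference, this is roughly how I produced the RGB renderings; the array here is a toy stand-in for a 13-band Sentinel-2 patch read from a TIFF (e.g., via rasterio), and the brightness stretch is just a rough choice for display:

    import numpy as np

    # Toy stand-in for a 13-band Sentinel-2 patch: [13, H, W] raw digital numbers.
    bands = np.random.randint(0, 4000, size=(13, 64, 64)).astype(np.float32)

    rgb = bands[[3, 2, 1]]                 # channels [3, 2, 1] = B4 (R), B3 (G), B2 (B)
    rgb = np.clip(rgb / 3000.0, 0.0, 1.0)  # rough brightness stretch for display
    rgb = rgb.transpose(1, 2, 0)           # [H, W, 3], ready for plt.imshow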

Apologies again for my English; I sincerely look forward to your reply.
Many thanks!
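
P.S. To make questions 1 and 3 concrete, here is a toy illustration of the tie situation; the array is made up, and the layout of the real pre-computed .npy files may differ:

import numpy as np

# Toy stand-in: coverage[t] = cloud coverage of image t within one area (pdx).
coverage = np.array([0.0, 0.31, 0.0, 0.87, 0.05])

target_idx = int(coverage.argmin())           # what a lowest-coverage sampler picks
cloud_free = np.flatnonzero(coverage == 0.0)  # several images tie at zero coverage
print(target_idx, cloud_free)                 # 0 [0 2]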

Dataset preparation

Hi,
Thanks for the nice work and the accompanying article. I got your code running with the SEN12MS-CR-TS dataset, but ideally I would like to run inference on my own data, say from a different region. Do you have a script to prepare a dataset for other locations?

How is the aleatoric uncertainty map calculated?

Hello @PatrickTUM ,
Thank you for your great work!

In practical scenarios, how is the aleatoric uncertainty map calculated? According to the paper, the aleatoric uncertainty prediction comes from the diagonal of the covariance matrix.

[screenshot: the covariance parameterization from the paper]

So, is the aleatoric uncertainty map computed from the network's outputs? Or, if it is predicted by the network directly, how should the corresponding loss function be designed?
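
My own guess while asking: with a diagonal covariance, the network predicts a per-pixel mean and variance, the variance map itself being the aleatoric uncertainty, and the loss is a Gaussian negative log-likelihood. A generic sketch of that idea (my assumption, not your actual MGNLL implementation):

import torch
import torch.nn.functional as F

def diag_gaussian_nll(mean, raw_var, target):
    # Gaussian NLL with diagonal covariance, additive constants dropped.
    # softplus keeps the variance positive (cf. --var_nonLinearity softplus);
    # `var` doubles as the per-pixel aleatoric uncertainty map.
    var = F.softplus(raw_var) + 1e-6
    return 0.5 * (torch.log(var) + (target - mean) ** 2 / var).mean()

# Toy usage with random tensors of shape [B, C, H, W]
mean, raw_var, target = (torch.randn(1, 13, 8, 8) for _ in range(3))
print(diag_gaussian_nll(mean, raw_var, target))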

Can't download s1_america.tar.gz

When I try to download s1_america.tar.gz using the script provided by this project, I get the following error:

--2024-04-20 22:01:06--  https://dataserv.ub.tum.de/s/m1639953/download?path=/&files=s1_america.tar.gz
Connecting to 172.22.112.1:7890... connected.
WARNING: cannot verify dataserv.ub.tum.de's certificate, issued by ‘CN=R3,O=Let's Encrypt,C=US’:
  Unable to locally verify the issuer's authority.
Proxy request sent, awaiting response... 404 Not Found
2024-04-20 22:01:19 ERROR 404: Not Found.

I also tried rsync, but it fails as well:

$ rsync rsync://[email protected]/m1639953/s1_america.tar.gz
Password:
rsync: [sender] link_stat "/s1_america.tar.gz" (in m1639953) failed: Bad file descriptor (9)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1865) [Receiver=3.2.7]

Other archives download normally, for example:

$ rsync rsync://[email protected]/m1639953/s1_africa.tar.gz
Password:
-r-xr-xr-x 53,534,515,467 2022/01/28 01:11:02 s1_africa.tar.gz

How much GPU memory is needed?

Hello @PatrickTUM ,
Thank you for your great work!
I have an issue running your code to train the model on the SEN12MSCR dataset with the following command:

python train_reconstruct.py --experiment_name my_first_experiment --root3 /home/bada_za/data/uncrtaints/SEN12MSCR --model uncrtaints --input_t 3 --region all --epochs 20 --lr 0.001 --batch_size 4 --gamma 1.0 --scale_by 10.0 --trained_checkp "" --loss MGNLL --covmode diag --var_nonLinearity softplus --display_step 10 --use_sar --block_type mbconv --n_head 16 --device cuda --res_dir ./results --rdm_seed 1

I tried training on a 16 GB GPU but received a CUDA error, so I am wondering how much GPU memory this model needs for training on both datasets.
The error I received is:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 262.00 MiB. GPU 0 has a total capacity of 14.75 GiB of which 55.06 MiB is free. Process 1048438 has 1.42 GiB memory in use. Including non-PyTorch memory, this process has 13.27 GiB memory in use. Of the allocated memory 13.10 GiB is allocated by PyTorch, and 32.70 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
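
For now, I am working around the OOM by lowering --batch_size and accumulating gradients to keep the effective batch size, alongside the PYTORCH_CUDA_ALLOC_CONF suggestion from the error message. A toy sketch of the accumulation pattern (the model and data are stand-ins, not your training loop):

import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Conv2d(15, 13, kernel_size=3, padding=1).to(device)  # stand-in, not UnCRtainTS
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

accum_steps = 4  # micro-batches of 1 behave like --batch_size 4
optimizer.zero_grad()
for step in range(8):  # stand-in for iterating the real DataLoader
    x = torch.randn(1, 15, 64, 64, device=device)
    y = torch.randn(1, 13, 64, 64, device=device)
    loss = nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()  # scale so summed grads match the full batch
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()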

Simple example of cloud removal

Hello Patrick,

I'm interested in using your code to remove clouds from my own dataset.

Could you please provide a minimal example image and the code needed to run inference on it with one of the pre-trained models?

This would greatly help me understand the required data structure and the basics of how the code operates, without having to download the large dataset.

Thank you
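
To show what I mean, here is a purely illustrative sketch of the input layout I expect (Sentinel-1 with 2 bands, Sentinel-2 with 13 bands, stacked over 3 time steps); the network and tensor shapes are placeholders I made up, not your model or its real loading code:

import torch
from torch import nn

net = nn.Conv3d(15, 13, kernel_size=1)  # placeholder network, not UnCRtainTS

# One sample: T=3 time steps of co-registered S1 (2 bands) and S2 (13 bands).
s1 = torch.randn(1, 2, 3, 256, 256)
s2 = torch.randn(1, 13, 3, 256, 256)
x = torch.cat([s1, s2], dim=1)  # [B, 15, T, H, W]

with torch.no_grad():
    pred = net(x).mean(dim=2)   # collapse time into one cloud-free estimate
print(pred.shape)               # torch.Size([1, 13, 256, 256])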

Question about Figure 4 in the paper

Hi! Great paper! Just a quick question: which dataset do the qualitative examples in Figure 4 come from, SEN12MS-CR or SEN12MS-CR-TS? Thank you.
