carlos-gg / dl4ds

Deep Learning for empirical DownScaling. Python package with state-of-the-art and novel deep learning algorithms for empirical/statistical downscaling of gridded data

Home Page: https://carlos-gg.github.io/dl4ds/

License: Apache License 2.0

Languages: Jupyter Notebook 97.33%, Python 2.67%

Topics: deep-learning downscaling earth-observation earth-science python super-resolution tensorflow

dl4ds's People

Contributors: carlos-gg

dl4ds's Issues

Colab example fails

Hi @carlos-gg, I tried to replicate your Colab example https://github.com/carlos-gg/dl4ds/blob/master/notebooks/DL4DS_tutorial.ipynb, but in this part:

ARCH_PARAMS = dict(n_filters=8,
                   n_blocks=8,
                   normalization=None,
                   dropout_rate=0.0,
                   dropout_variant='spatial',
                   attention=False,
                   activation='relu',
                   localcon_layer=True)

trainer = dds.SupervisedTrainer(
    backbone='resnet',
    upsampling='spc', 
    data_train=y_train, 
    data_val=y_val,
    data_test=y_test,
    data_train_lr=None, # here you can pass the LR dataset for training with explicit paired samples
    data_val_lr=None, # here you can pass the LR dataset for training with explicit paired samples
    data_test_lr=None, # here you can pass the LR dataset for training with explicit paired samples
    scale=8,
    time_window=None, 
    static_vars=None,
    predictors_train=[y_z_train],
    predictors_val=[y_z_val],
    predictors_test=[y_z_test],
    interpolation='inter_area',
    patch_size=None, 
    batch_size=60, 
    loss='mae',
    epochs=100, 
    steps_per_epoch=None, 
    validation_steps=None, 
    test_steps=None, 
    learning_rate=(1e-3, 1e-4), lr_decay_after=1e4,
    early_stopping=False, patience=6, min_delta=0, 
    save=False, 
    save_path=None,
    show_plot=True, verbose=True, 
    device='GPU', 
    **ARCH_PARAMS)

trainer.run()

The following error appears:

ValueError: Exception encountered when calling layer "SubpixelConvolution" (type SubpixelConvolutionBlock).

in user code:

    File "/usr/local/lib/python3.10/dist-packages/dl4ds/models/blocks.py", line 453, in call  *
        x = self.upsample_conv(x, self.scale)
    File "/usr/local/lib/python3.10/dist-packages/dl4ds/models/blocks.py", line 427, in upsample_conv  *
        return tf.nn.depth_to_space(x, factor)

    ValueError: Attr 'block_size' of 'DepthToSpace' Op passed 1 less than minimum 2.


Call arguments received by layer "SubpixelConvolution" (type SubpixelConvolutionBlock):
  • x=tf.Tensor(shape=(None, 96, 128, 64), dtype=float32)

I am just wondering why, in your run, the input shape was [(None, 12, 16, 2)].
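For context, the underlying TensorFlow constraint can be reproduced in isolation: tf.nn.depth_to_space requires block_size >= 2, so an upsampling factor of 1 reaching SubpixelConvolutionBlock triggers exactly this ValueError. A minimal sketch in plain TensorFlow (not dl4ds code):

import tensorflow as tf

x = tf.zeros((1, 96, 128, 64))
y = tf.nn.depth_to_space(x, 2)  # OK: 2x spatial upsampling -> shape (1, 192, 256, 16)
z = tf.nn.depth_to_space(x, 1)  # raises: Attr 'block_size' ... passed 1 less than minimum 2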

Requirements/environment issue

I have trouble setting up my environment with tensorflow=2.6.0 and python=3.9.16, following the Colab instructions. I tried 'pip install dl4ds' directly after creating a new conda virtual environment, but it always installs tensorflow=2.11.0, with which I can't use my GPU. Please tell me how to create a complete environment on Anaconda.
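A quick way to verify which TensorFlow version actually got installed and whether the GPU is visible to it (a generic TensorFlow check, not specific to dl4ds):

import tensorflow as tf

print(tf.__version__)                          # the resolved TensorFlow version
print(tf.config.list_physical_devices('GPU'))  # an empty list means no usable GPU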

Problem in trainer.run() step in Colab notebook

I wanted to have a look at the Colab notebook and received an error at a reshape operation while running trainer.run() in the last chunk of the Training section of the notebook. I guess this has something to do with package versioning, but I'm not sure - any suggestions that quickly come to mind?

Running an example with data_module

Hi, I was trying to run an example with my own data, but when running app.py it asks me for a data_module flag. Reading the documentation and the code, I couldn't find any reference to it:
'data_module flag must be provided (path to the data preprocessing module)'

Could you please provide more info about this? Thank you!

import dl4ds as dss: Illegal instruction (core dumped)

After successfully installing DL4DS, I get the following core-dump message, which is surely due to some incompatibility:

Python 3.7.16 (default, May 24 2023, 16:22:32)
[GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import dl4ds as dss
Illegal instruction (core dumped)

I have tried with different versions of Python (from 3.7 to the latest) and TensorFlow, but the error persists.

More than troubleshooting my case in particular, could I ask you @carlos-gg to share the details of your configuration (OS, Python, and package versions) so I can try to reproduce your setup? Thanks in advance.
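One common cause of "Illegal instruction (core dumped)" at import time is a TensorFlow wheel built with CPU instructions (typically AVX) that the host CPU lacks; whether that applies here is only an assumption. A minimal check on Linux:

# Assumption: the crash comes from a missing CPU instruction set such as AVX,
# which TensorFlow binary wheels have required since TF 1.6.
with open('/proc/cpuinfo') as f:
    flags = f.read()
print('AVX supported:', 'avx' in flags)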

How to make predictions without the availability of future empirical data

Greetings, I have been using this code to downscale precipitation data, using historical GCM data to train and validate the model. However, I would like to produce downscaled forecasts for future GCM data with the already trained model, and I cannot figure out the procedure. When I set the parameters data_test, data_test_lr, and predictors_test to None, I get the error: "'DataGenerator' object has no attribute 'array'".
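For reference, one possible workaround (a sketch under assumptions, not a confirmed dl4ds API): extract the trained Keras model and run plain Keras inference on the preprocessed future fields, bypassing the test DataGenerator. The attribute name trainer.model, the file name, and the exact input layout are assumptions; check the dl4ds source for the real attribute and for any auxiliary inputs (predictors, static variables) the model expects.

import numpy as np

keras_model = trainer.model                  # hypothetical attribute on the fitted SupervisedTrainer
future_lr = np.load('future_gcm_lr.npy')     # hypothetical file; preprocess exactly like the training inputs
downscaled = keras_model.predict(future_lr)  # standard Keras inference, no test set required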

batch size issue in a multi-GPU environment

Training with Horovod in a multi-GPU environment will invoke setup_datagen on each replica. But the batch_size of setup_datagen is set to self.global_batch_size, which results in an effective batch size of self.global_batch_size * num_replicas. In both single- and multi-GPU settings, setup_datagen's batch_size should be set to self.batch_size, because the "global batch size" is handled implicitly by Horovod's distributed data parallelism.

Please also see a related discussion here.
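To illustrate the distinction, a short sketch using Horovod's public API (the setup_datagen and attribute names above follow the issue text, not verified dl4ds internals):

import horovod.tensorflow.keras as hvd

hvd.init()

per_replica_batch = 60                         # what each replica's data generator should yield
global_batch = per_replica_batch * hvd.size()  # arises implicitly from Horovod data parallelism

# The reported bug: passing the global batch size to every replica's data
# generator multiplies the effective batch by hvd.size() a second time.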

Error on dimension mismatch between static variable and predictand

Hi,

I am playing with the code, and when I input a static variable of higher resolution than the predictand, I get an error.

The code I run is:

ARCH_PARAMS = dict(n_filters=4,
                   n_blocks=4,
                   normalization=None,
                   dropout_rate=0.0,
                   dropout_variant='spatial',
                   attention=False,
                   activation='relu',
                   localcon_layer=True)

trainer = dds.SupervisedTrainer(
    backbone='resnet',
    upsampling='spc',
    data_train=y_train,
    data_val=y_val,
    data_test=y_test,
    data_train_lr=x_train, # here you can pass the LR dataset for training with explicit paired samples
    data_val_lr=x_val, # here you can pass the LR dataset for training with explicit paired samples
    data_test_lr=x_test, # here you can pass the LR dataset for training with explicit paired samples
    scale=2,
    time_window=None,
    static_vars=[elevation],
    predictors_train=[z_pr_train],
    predictors_val=[z_pr_val],
    predictors_test=[z_pr_test],
    interpolation='inter_area',
    patch_size=None,
    batch_size=8,
    loss='mae',
    epochs=50,
    steps_per_epoch=None,
    validation_steps=None,
    test_steps=None,
    learning_rate=(2e-3, 2e-4),
    lr_decay_after=2e4,
    early_stopping=True,
    patience=6,
    min_delta=0,
    save=False,
    save_path=None,
    show_plot=True, verbose=True,
    device='CPU',
    **ARCH_PARAMS)

trainer.run()

And the error I get is:

InvalidArgumentError: Graph execution error:

Node: 'gradient_tape/resnet_spc/concatenate_9/ConcatOffset'
All dimensions except 3 must match. Input 1 has shape [8 3600 7200 16] and doesn't match input 0 with shape [8 192 384 18].
[[{{node gradient_tape/resnet_spc/concatenate_9/ConcatOffset}}]] [Op:__inference_train_function_34190]

The error disappears if the static variable is of the same size as the HR predictand.
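A plausible workaround while the mismatch is unsupported (an illustration, not a dl4ds feature): regrid the static field onto the HR predictand grid before passing it as static_vars. The shapes below follow the error message above; scipy is assumed to be available.

import numpy as np
from scipy.ndimage import zoom

elevation_hr = np.random.rand(3600, 7200)         # placeholder for the high-res static field
target_shape = (192, 384)                         # HR predictand grid from the error message
factors = (target_shape[0] / elevation_hr.shape[0],
           target_shape[1] / elevation_hr.shape[1])
elevation = zoom(elevation_hr, factors, order=1)  # bilinear interpolation onto the target grid
print(elevation.shape)                            # (192, 384), matching the predictand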
