
freia's People

Contributors

alcrene, ardizzone, cytotoxicity8, ernstklrb, fdraxler, github-actions[bot], johnson-yue, ju-w, larskue, mattskiff, paulhfu, psorrenson, psteinb, robertfirmo, russellala, samuelsmal, soraxas, tbung, tianjiao-j, wapu, zimea


freia's Issues

question about <mnist_minimal_example>

Hi, thank you for sharing your work. I am learning your INN method, and I cannot understand the loss in the mnist_minimal_example demo. What does it mean? I have never seen it before.
[screenshot of the loss attached]

Negative Errors When Training

Hello there! I am training an INN on the CIFAR10 dataset, and everything works smoothly. However, I noticed that when I use the NLL from the examples, I get negative errors that continually decrease with each epoch. Here is the loss function that I used:

...
loss = torch.mean(z**2) / 2 - torch.mean(log_det) / 32*32
loss.backward()
...

Although training works fine, I get losses in the negative range. Even when I modified the function to give positive values, it converges back into the negative range. Is there something wrong with the code?
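
A note on the snippet above: in Python, / 32*32 divides by 32 and then multiplies by 32, so the log-determinant term is effectively only divided by 32. Below is a minimal sketch with explicit parentheses (assuming 3x32x32 CIFAR10 inputs; negative values remain normal, since a continuous density can exceed 1 and make the log-likelihood positive):

import torch

def nll_per_dim(z: torch.Tensor, log_det: torch.Tensor, ndim: int) -> torch.Tensor:
    # Per-dimension negative log-likelihood under a standard normal latent.
    # Note the parentheses: we divide by ndim rather than dividing by 32
    # and then multiplying by 32, as "/ 32*32" does.
    return torch.mean(z**2) / 2 - torch.mean(log_det) / ndim

# usage sketch: loss = nll_per_dim(z, log_det, ndim=3 * 32 * 32)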

Prescription for subnet constructor using classes:

Hello all,

I am trying to write a class for the neural nets in the coupling blocks. For instance:


import torch.nn as nn
import torch.nn.functional as F
import FrEIA.framework as Ff
import FrEIA.modules as Fm

class subnetDense(nn.Module):

    def __init__(self, input_dim, output_dim, hidden_dim):
        super().__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.hidden_dim = hidden_dim
        current_dim = input_dim
        self.layers = nn.ModuleList()
        for hdim in hidden_dim:
            self.layers.append(nn.Linear(current_dim, hdim))
            current_dim = hdim
        self.layers.append(nn.Linear(current_dim, output_dim))

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = F.relu(layer(x))
        out = F.softmax(self.layers[-1](x), dim=1)
        return out

layerlist = [128, 128, 128]

nodes = [Ff.InputNode(2, name='input')]
for k in range(3):
    nodes.append(Ff.Node(nodes[-1],
                         Fm.GLOWCouplingBlock,
                         {'subnet_constructor': subnetDense(2, 2, layerlist), 'clamp': 2.0},
                         name=f'coupling_{k}'))

nodes.append(Ff.OutputNode(nodes[-1], name='output'))
inn = Ff.ReversibleGraphNet(nodes)

However, this gives me an error:
forward() takes 2 positional arguments but 3 were given.

I note that creating the subnet with nn.Sequential works, but can I do it with classes? I am assuming I am somehow required to output a torch nn.Module. I essentially want to pass the number of hidden layers as a parameter, and there is no pretty way of doing that with nn.Sequential.

Edit:
I now see that the source of the error is that the coupling modules defined in the package require a subnet constructor with the signature (input_dim, output_dim). This means that when my class instance is invoked, it is the forward function that gets called, which only takes x.

Regards,
Debajyoti
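
For reference, a minimal sketch (not from the original post) of one way to keep the class while satisfying the (dims_in, dims_out) signature described in the edit above: bind the extra hidden_dim argument in a factory, so FrEIA can instantiate a fresh module per coupling block.

def make_dense_constructor(hidden_dim):
    # Returns a constructor with the (dims_in, dims_out) signature that
    # the coupling blocks expect; subnetDense is the class defined above.
    def constructor(dims_in, dims_out):
        return subnetDense(dims_in, dims_out, hidden_dim)
    return constructor

nodes = [Ff.InputNode(2, name='input')]
for k in range(3):
    nodes.append(Ff.Node(nodes[-1],
                         Fm.GLOWCouplingBlock,
                         {'subnet_constructor': make_dense_constructor([128, 128, 128]),
                          'clamp': 2.0},
                         name=f'coupling_{k}'))
nodes.append(Ff.OutputNode(nodes[-1], name='output'))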

Where is the "experiments folder" ?

Hi, the readme says to look inside the "experiments folder" for code showing how to train the model end-to-end. Where is the "experiments folder"?

Thanks.

Missing key tmp_var when loading

I used the MNIST example to train 10 models, each with the same architecture and different initializations.

I then loop over these models, wanting to load and benchmark each one like so:

model = inn_model.MNIST_cINN(0)
model.cuda()
with torch.no_grad():
    for model_name in model_list_names:
        
        state_dict = {k:v for k,v in torch.load('inn_model/{}'.format(model_name)).items() if 'tmp_var' not in k}
        
        model.load_state_dict(state_dict)
        
        # sampling and benchmarking code here

However, I get this error in the model.load_state_dict(state_dict) line:

RuntimeError: Error(s) in loading state_dict for MNIST_cINN:
	Unexpected key(s) in state_dict: "cinn.tmp_var_0", "cinn.tmp_var_1", "cinn.tmp_var_2", "cinn.tmp_var_3", "cinn.tmp_var_4", "cinn.tmp_var_5", "cinn.tmp_var_6", "cinn.tmp_var_7", "cinn.tmp_var_8", "cinn.tmp_var_9", "cinn.tmp_var_10", "cinn.tmp_var_11", "cinn.tmp_var_12", "cinn.tmp_var_13", "cinn.tmp_var_14", "cinn.tmp_var_15", "cinn.tmp_var_16", "cinn.tmp_var_17", "cinn.tmp_var_18", "cinn.tmp_var_19", "cinn.tmp_var_20", "cinn.tmp_var_21", "cinn.tmp_var_22", "cinn.tmp_var_23", "cinn.tmp_var_24", "cinn.tmp_var_25", "cinn.tmp_var_26", "cinn.tmp_var_27", "cinn.tmp_var_29", "cinn.tmp_var_30", "cinn.tmp_var_31", "cinn.tmp_var_32", "cinn.tmp_var_33", "cinn.tmp_var_34", "cinn.tmp_var_35", "cinn.tmp_var_36", "cinn.tmp_var_37", "cinn.tmp_var_38", "cinn.tmp_var_40", "cinn.tmp_var_41", "cinn.tmp_var_42", "cinn.tmp_var_43", "cinn.tmp_var_44". 

If I instead create a new model inside the loop, it works. However, sampling the model then gives me a CUDA out-of-memory error after benchmarking a few models, probably because references to the generated data are kept somewhere and never deleted.
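
A hedged sketch of an alternative to filtering the keys by hand (assuming the tmp_var buffers are not needed for sampling): load_state_dict accepts strict=False, which skips unexpected keys and returns the mismatch lists for inspection.

state_dict = torch.load('inn_model/{}'.format(model_name))
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(unexpected)  # should list only the cinn.tmp_var_* entries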

Can you tell me about Glow's splitting?

Hello!
I'm excited to find this wonderful library, because I spent too much time building nearly the same thing using Tensorflow 2.x.

I have a question about something I am struggling to build.

In Glow, the latent vectors z_L ... z_1 are not merged. The log-probability of each z_i is calculated at the split layer, with h_i being the other half of the split output.

So do we need a split layer with this per-split "coupling log probability"?

hidden `scipy` dependency

ImportError: Failed to import test module: test_base
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/unittest/loader.py", line 154, in loadTestsFromName
    module = __import__(module_name)
  File "/home/runner/work/FrEIA/FrEIA/tests/test_base.py", line 11, in <module>
    from FrEIA.modules import InvertibleModule
  File "/home/runner/work/FrEIA/FrEIA/FrEIA/__init__.py", line 5, in <module>
    from . import modules
  File "/home/runner/work/FrEIA/FrEIA/FrEIA/modules/__init__.py", line 54, in <module>
    from .all_in_one_block import *
  File "/home/runner/work/FrEIA/FrEIA/FrEIA/modules/all_in_one_block.py", line 8, in <module>
    from scipy.stats import special_ortho_group
ModuleNotFoundError: No module named 'scipy'

scipy is required by modules/all_in_one_block.py but is not listed in requirements.txt.

Missing cbn_layer file

Hi, when I tried to train the cINN on the LSUN dataset, I found that from cbn_layer import * raises an error, because cbn_layer is not in your repo.

Using `pytest` instead of builtin `unittest`

pytest has become a widely used standard for unit testing and continuous integration as a whole. It offers tons of plugins and handy features while not requiring as much boilerplate code as Python's built-in unittest module.

While the number of tests is still small, I suggest moving to pytest. Please drop a 👍 or 👎 to express your opinion, or comment on this idea directly.
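
For illustration, a sketch of the boilerplate difference (not tied to any specific test in this repo):

import unittest
import torch

# unittest style: a TestCase subclass and self.assert* helpers.
class RoundTripTest(unittest.TestCase):
    def test_roundtrip(self):
        x = torch.randn(4, 8)
        self.assertTrue(torch.allclose(x, x.clone()))

# pytest style: a plain function and a bare assert are enough.
def test_roundtrip():
    x = torch.randn(4, 8)
    assert torch.allclose(x, x.clone())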

RuntimeError: Error(s) in loading state_dict for Flow

Hi there! I have a problem with loading my model. Here is my training script.

All works well except when trying to load the model. I tried saving and loading a blank state_dict, which works. But it returns an error when I load the trained model.

Here is the script that I want to load the state_dict into:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models.squeezenet import Fire
import FrEIA.framework as Ff
import FrEIA.modules as Fm

class Flow(nn.Module):
    def __init__(self, shape, num_steps):
        super(Flow, self).__init__()
        self.C, self.H, self.W = shape
        self.num_steps = num_steps
        self.model = self.build_model()

    def build_model(self):
        def subnet(in_channels, out_channels):
            return nn.Sequential(
                Fire(in_channels, 16, 8, 8),
                nn.BatchNorm2d(16),
                Fire(16, 8, 16, 16),
                nn.BatchNorm2d(32),
                Fire(32, 16, out_channels // 2, out_channels // 2)
            )
        
        nodes = [Ff.InputNode(self.C, self.H, self.W, name='input')]

        nodes.append(Ff.Node(nodes[-1], Fm.HaarDownsampling, {}, name='downsample'))

        for i in range(self.num_steps):
            nodes.append(Ff.Node(nodes[-1], Fm.PermuteRandom, {'seed':i}, name='permute_{}'.format(i)))
            nodes.append(Ff.Node(nodes[-1], Fm.GLOWCouplingBlock, {'subnet_constructor':subnet}, name='couple_{}'.format(i)))

        nodes.append(Ff.OutputNode(nodes[-1], name='output'))
        
        return Ff.ReversibleGraphNet(nodes, verbose=False)

    def forward(self, x):
        z = self.model(x)
        log_det = self.model.log_jacobian(run_forward=False)
        return z, log_det

    def generate(self, x):
        return self.model(x, rev=True)

    def reconstruct(self, x):
        return self.model(self.model(x), rev=True)      

net = Flow((3, 32, 32), 3)
net.load_state_dict(torch.load('flow.pt', map_location=torch.device('cpu')))

And this is the error that comes up:

Traceback (most recent call last):
  File "c:/Users/AyazA/Desktop/project/squeezeglow.py", line 55, in <module>
    net.load_state_dict(torch.load('flow.pt', map_location=torch.device('cpu')))
  File "C:\Python37\lib\site-packages\torch\nn\modules\module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Flow:
        Unexpected key(s) in state_dict: "model.tmp_var_0", "model.tmp_var_1", "model.tmp_var_2", "model.tmp_var_5", "model.tmp_var_6", "model.tmp_var_7", "model.tmp_var_8", "model.tmp_var_9". 

a typo

In README.rst exp( 2c/pi * atan(x)) should be exp( 2c/pi * atan(x/c)).
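
For context, the corrected clamped scale in display form; since atan is bounded, the exponent stays in (-c, c) and the coupling scale is bounded no matter how large the subnet output x gets:

s(x) = \exp\left( \frac{2c}{\pi} \arctan\frac{x}{c} \right), \qquad s(x) \in \left( e^{-c},\, e^{c} \right)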

Predicting the forward direction with a conditional invertible neural network

Hi there!

I trained a conditional invertible neural network and am able to predict the inversion of a given label.
At the training step, I used the label as a condition for the network to get the jacobian and latent space z
z, log_j = net(x, l)

After that, I optimized the negative log-likelihood loss.

With the given network, I am able to get the inversion of a label by the function:
net(z, c=l, rev=True)

with an arbitrary randomized z.

As defined above, the network requires the label as a condition at the training step.
Is it possible to predict not only the inversion but also the forward direction without a given label within the conditional network, as with a non-conditional one?

Thanks in advance.

Best regards
Slewny

Error in your code

When I ran your code from: https://github.com/VLL-HD/FrEIA#tutorial
in1 = Ff.InputNode(100, name='Input 1')  # 1D vector
in2 = Ff.InputNode(20, name='Input 2')  # 1D vector
cond = Ff.ConditionNode(42, name='Condition')

def subnet(dims_in, dims_out):
    return nn.Sequential(nn.Linear(dims_in, 256), nn.ReLU(),
                         nn.Linear(256, dims_out))

perm = Ff.Node(in1, Fm.PermuteRandom, {}, name='Permutation')
split1 = Ff.Node(perm, Fm.Split, {}, name='Split 1')
split2 = Ff.Node(split1.out1, Fm.Split, {}, name='Split 2')
actnorm = Ff.Node(split2.out1, Fm.ActNorm, {}, name='ActNorm')
concat1 = Ff.Node([actnorm.out0, in2.out0], Fm.Concat, {}, name='Concat 1')
affine = Ff.Node(concat1, Fm.AffineCouplingOneSided, {'subnet_constructor': subnet},
                 conditions=cond, name='Affine Coupling')
concat2 = Ff.Node([split2.out0, affine.out0], Fm.Concat, {}, name='Concat 2')

output1 = Ff.OutputNode(split1.out0, name='Output 1')
output2 = Ff.OutputNode(concat2, name='Output 2')

example_INN = Ff.GraphINN([in1, in2, cond,
                           perm, split1, split2,
                           actnorm, concat1, affine, concat2,
                           output1, output2])

# dummy inputs:
x1, x2, c = torch.randn(1, 100), torch.randn(1, 20), torch.randn(1, 42)

# compute the outputs
(z1, z2), log_jac_det = example_INN([x1, x2], c=c)

# invert the network and check if we get the original inputs back:
(x1_inv, x2_inv), log_jac_det_inv = example_INN([z1, z2], c=c, rev=True)

# x2_inv is all NaN
print(x2_inv)

Invertible Encoder/Decoder?

We are trying to create a convolutional neural net such that the input is [batch_size, 259, 64, 64] and would like to output a size of [batch_size, 3, 64, 64]. Is there a way to build an encoder such that this is possible, and invertible, with the FrEIA architecture?

Beyond this specific example, does there exist an invertible encoder/decoder where the input and output have different shapes and a different number of parameters?

When using cINN, should the input channel count be bigger than 1?

Hi, I want to train MNIST with convolutional layers rather than fc layers, but I fail.

mnist_minimal_example and mnist_cINN both use fc layers: although the input is (1, 28, 28), the second node in both flattens it to 784 features.

The colorization_* examples, by contrast, use a (2, 64, 64) input.

In my tests, any input of shape (1, X, X) fails. So, does the input channel count need to be bigger than 1? Do you plan to support this?
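
A likely explanation (an assumption, but consistent with other issues on this page): the coupling blocks split their input along the channel dimension, so a single channel cannot be divided in half. A minimal sketch of the usual workaround, which appears in other snippets here: apply Fm.HaarDownsampling first to reshape (1, 28, 28) into (4, 14, 14).

import FrEIA.framework as Ff
import FrEIA.modules as Fm

# Trade spatial resolution for channels before the first coupling
# block, so the channel dimension becomes splittable.
nodes = [Ff.InputNode(1, 28, 28, name='input')]
nodes.append(Ff.Node(nodes[-1], Fm.HaarDownsampling, {}, name='downsample'))
# ...convolutional coupling blocks now operate on (4, 14, 14)...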

AssertionError: Dimensions of input and one or more conditions don't agree: [(10,)] vs [(4, 14, 14)].

Hi there! I am trying to build my own conditional INN for conditional image generation. Everything works fine except when conditioning the network on labels.

Here is the script:

import torch
import torch.nn as nn
import torch.nn.functional as F
import FrEIA.framework as Ff
import FrEIA.modules as Fm

class Flow(nn.Module):
    def __init__(self, shape, num_outputs, num_steps):
        super(Flow, self).__init__()
        self.C, self.H, self.W = shape
        self.num_outputs = num_outputs
        self.num_steps = num_steps
        self.model = self.build_model()

    def build_model(self):
        def subnet(in_channels, out_channels):
            return nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=1),
                nn.Conv2d(32, 64, kernel_size=1),
                nn.Conv2d(64, out_channels, kernel_size=1)
            )
        
        nodes = [Ff.InputNode(self.C, self.H, self.W, name='input')]
        condition = Ff.ConditionNode(self.num_outputs)

        nodes.append(Ff.Node(nodes[-1], Fm.HaarDownsampling, {}, name='downsample'))

        for i in range(self.num_steps):
            nodes.append(Ff.Node(nodes[-1], Fm.PermuteRandom, {'seed':i}, name='permute_{}'.format(i)))
            nodes.append(Ff.Node(nodes[-1], Fm.GLOWCouplingBlock, {'subnet_constructor':subnet}, name='couple_{}'.format(i), conditions=condition))
        
        return Ff.ReversibleGraphNet(nodes + [condition, Ff.OutputNode(nodes[-1], name='output')], verbose=False)

    def forward(self, x, condition):
        z = self.model(x, c=condition)
        log_det = self.model.log_jacobian(run_forward=False)
        return z, log_det


x = torch.rand([1, 1, 28, 28])
y = torch.rand([1, 10])

net = Flow((1, 28, 28), 10, 3)
z, log_det = net(x, y)
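
A hedged sketch of one workaround (the shapes are assumptions for illustration): the assertion in the title requires the condition's trailing dimensions to match the coupling input, which is (4, 14, 14) after the Haar downsampling node, so the 10-d label can be tiled into a spatial map and the condition node declared accordingly.

# Inside build_model: give the condition spatial dims matching the
# coupling input after Haar downsampling.
condition = Ff.ConditionNode(self.num_outputs, 14, 14)

# Before calling the net: tile the 10-d label over the 14x14 grid.
c = y.view(-1, 10, 1, 1).expand(-1, -1, 14, 14)  # (batch, 10, 14, 14)
z, log_det = net(x, c)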

How can I ensure that the model predicts the same result when I use it many times?

Hello!
I am using this library for a recipe prediction project. I have already trained a model: the input is a recipe (a 21*1 tensor) and the output is the spectral reflectance of the color (a 31*1 tensor).
Then, given a spectral reflectance, I can use the reverse model to predict the recipe, and I already get many different useful 21*1 tensors.
My problem is that if I input the same spectral reflectance and run the reverse model many times, the result is different each time. How can I ensure that the same input always produces the same output from the reverse model? Can you give me some suggestions?
Thank you very much!

Multidimensional condition for conditional invertible neural network

Hi there!

I am currently trying to build a conditional neural network regarding the MNIST example.
However, I do not want to use the standard procedure of training the MNIST cINN from a 28x28 picture/array to onehot(digit); I want to train the reverse direction, from onehot(digit) to a 28x28 picture/array.

Afterward, I want to be able to reverse-sample from a 28x28 picture/array to the onehot(digit).
I have a similar issue with larger dimensions and wanted to start simple with the MNIST minimal example.

If I swap the direction, I have an input of dimension 1 (one digit) or 10 (digit encoded as one-hot), and my condition should be (1, 28, 28). Is it possible to handle these dimensions with the FrEIA cINN architecture?

I already tried this but I am getting dimension problems similar to this issue:
#9

After taking a deeper look into the error/code I can see there is the following assertion defined:

assert all([tuple(dims_c[i][1:]) == tuple(dims_in[0][1:]) for i in range(len(dims_c))]), \
            F"Dimensions of input and one or more conditions don't agree: {dims_c} vs {dims_in}."

FrEIA\modules\coupling_layers.py", line 161

So that means if the condition has more than one dimension, the input dimensions from index 1 onward must match those of the condition. Regarding my example: if my condition is (1, 28, 28), my input must be (x, 28, 28), right?

What can I do to meet my requirements? I will try to bypass this by scaling my input up to (1, 28, 28) and concatenating all entries with my label, or by flattening my condition (1, 28, 28) to a 1-dimensional one of size (784).

Can you recommend other solutions? Would it be possible to split the conditional image into several conditions?

Thanks for your help in advance!

Best Regards
Pauliusinc
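
For reference, a sketch of the flattening workaround proposed above (the variable names are illustrative): declare the condition as a 784-d vector and flatten each image before passing it in.

cond = Ff.ConditionNode(784, name='image_condition')
c = images.view(images.shape[0], -1)  # (batch, 784)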

FrEIA forward compatibility with older code base

I am trying to run the IB-INN repo (https://github.com/VLL-HD/IB-INN) with the latest FrEIA framework, but I get the following error:

Traceback (most recent call last):
  File "/home/kaushikdas/aashish/FrEIA/FrEIA/framework/graph_inn.py", line 300, in forward
    mod_out = node.module(mod_in, rev=rev, jac=jac)
  File "/home/kaushikdas/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'jac'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "main.py", line 29, in <module>
    train.train(args)
  File "/home/kaushikdas/aashish/IB-INN/train.py", line 101, in train
    losses = inn(x, y)
  File "/home/kaushikdas/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/kaushikdas/aashish/IB-INN/model.py", line 141, in forward
    z = self.inn(x)
  File "/home/kaushikdas/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/kaushikdas/aashish/FrEIA/FrEIA/framework/reversible_graph_net.py", line 36, in forward
    return super().forward(x_or_z, c, rev, jac, intermediate_outputs)
  File "/home/kaushikdas/aashish/FrEIA/FrEIA/framework/graph_inn.py", line 302, in forward
    raise RuntimeError(f"{node} encountered an error.") from e
RuntimeError: Node 'CONV_1_0': [(4, 14, 14)] -> AIO_Block -> [(4, 14, 14)] encountered an error.

Any fix for this?
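
A common cause (an assumption based on the traceback): graph_inn.py now calls every node's module as module(mod_in, rev=rev, jac=jac), so custom modules written against the older API lack the jac keyword. A minimal sketch of the signature change such a module needs:

from FrEIA.modules import InvertibleModule

# Accept (and, if unused, ignore) the jac keyword that the newer
# GraphINN forward pass always supplies.
class PatchedBlock(InvertibleModule):
    def forward(self, x, rev=False, jac=True):
        ...  # original forward logic of the old module goes here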

Preventing NaN Issues with Large Networks

Hi There,

I was wondering if there are any recommended ways to prevent the max output size of values produced by a sequence of RNVP blocks from 'exploding'?

The limit on the number of RNVP blocks I can chain seems to be 9 (with a 4-layer-deep fully connected coefficient network with LeakyReLU and internal width 768) before I get numbers larger than PyTorch can represent, leading to inf/NaN issues.

I've tried adding BatchNorm layers to the coefficient network, and also to the RNVP input, but unfortunately the testing error no longer decreases in either case. I'm not sure whether this is an implementation error or fundamentally an inappropriate approach. It is worth noting that the losses decrease similarly to networks without the normalization layers; only the testing error differs, which is... unexpected.

Thanks,
Jeremy
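
One knob worth noting (a sketch grounded only in the snippets elsewhere on this page): the coupling blocks take a clamp argument that soft-limits the affine scale via the bounded-arctan form quoted in the typo issue above, so lowering it from the usual 2.0 tightens each block's output range.

# With clamp=1.0, each block's scale is bounded to (e^-1, e^1) instead
# of (e^-2, e^2), which can keep long chains from exploding.
nodes.append(Ff.Node(nodes[-1],
                     Fm.RNVPCouplingBlock,
                     {'subnet_constructor': subnet_fc, 'clamp': 1.0},
                     name='coupling_0'))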

Running the code for the paper "Analyzing inverse problems with invertible neural networks"

When I run the code I get this error :

Traceback (most recent call last):
  File "", line 1, in <module>
  File "/home/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/home/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/INN/Inn_8_modes mixture model.py", line 234, in <module>
    train(i_epoch)
  File "/home/INN/Inn_8_modes mixture model.py", line 155, in train
    output = model(x)
  File "/home/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/.local/lib/python3.7/site-packages/FrEIA/framework/reversible_graph_net.py", line 378, in forward
    results = self.module_list[o[0]](x, rev=rev)
  File "/home/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/.local/lib/python3.7/site-packages/FrEIA/modules/fixed_transforms.py", line 28, in forward
    return [x[0][:, self.perm]]
TypeError: tuple indices must be integers or slices, not tuple

Pretrained Model

Can a pretrained model trained on ImageNet be released as well?

Missing invertible 1x1 conv from GLOW

Hi, I want to rebuild a framework based on Glow. I have some doubts:

  1. Is 'OrthogonalTransform' equal to the 'invertible 1x1 conv' in glow?

  2. If I only want to use addition coupling rather than affine coupling, should I just use NICECouplingBlock instead of GLOWCouplingBlock?

Thanks for the help.
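
For point 2, a sketch of the swap (nodes and subnet are assumed to be defined as in the other examples on this page; NICE coupling is additive, so there is no scale to clamp):

nodes.append(Ff.Node(nodes[-1],
                     Fm.NICECouplingBlock,
                     {'subnet_constructor': subnet},
                     name='additive_coupling'))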

assert len(self.cond_vars) == 1 AssertionError

Hi,
I want to train a conditional INN on CIFAR-10, using the data x itself as the condition. Because I have three types of parts (1: a higher-resolution convolutional part, 2: a lower-resolution convolutional part, 3: a fully connected part), I defined the conditions like:

conditions = [Ff.ConditionNode(3, 32, 32),
              Ff.ConditionNode(x, y, z),
              Ff.ConditionNode(768)]

and added them to each part respectively, building the network with

return Ff.ReversibleGraphNet(nodes + conditions, verbose=False)

In the end, I get the error message:
"assert len(self.cond_vars) == 1 AssertionError"

I cannot solve this problem; please give me some help. Thank you a lot!

IndexError: index 0 is out of bounds for dimension 0 with size 0

Here is my code:

import torch
import torch.nn as nn
import FrEIA.framework as Ff
import FrEIA.modules as Fm

def subnet(in_channels, out_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, 256, kernel_size=1),
        nn.Conv2d(256, out_channels, kernel_size=1)
    )

input_node = Ff.InputNode(1, 28, 28, name='Input')
coupling = Ff.Node(input_node.out0, Fm.GLOWCouplingBlock, {'subnet_constructor': subnet}, name='Coupling')
output = Ff.OutputNode(coupling.out0, name='output')

inn = Ff.ReversibleGraphNet([input_node, coupling, output], verbose=False)
print(inn)

Tests failed

test_jacobian fails with pytorch=1.6.0=py3.7_cuda9.2.148_cudnn7.6.3_0

======================================================================
FAIL: test_jacobian (__main__.ConditioningTest)

Traceback (most recent call last):
  File "conditioning.py", line 92, in test_jacobian
    self.assertTrue(torch.allclose(logdet, logdet_num, atol=0.1, rtol=0.1))
AssertionError: False is not true

======================================================================
FAIL: test_jacobian (__main__.IResNetTest)

Traceback (most recent call last):
  File "invertible_resnet.py", line 149, in test_jacobian
    self.assertTrue(torch.allclose(logdet, logdet_num, atol=1.5, rtol=0.1))
AssertionError: False is not true

Any ideas here?

autoregressive flow?

Thanks for the great work.
I was wondering whether FrEIA also provides autoregressive-flow functionality. I have multivariate time-series speech data with audio features (12-d) as input and framewise one-hot coded labels (9-d) as output. Would it be possible to use FrEIA to model this type of data?

Requesting Advice on NF Methods

I am working on a project where I sample a set of n-dimensional points from a Gaussian distribution (with learnt parameters) as follows, and then evaluate those points with a loss function to update model parameters via gradient descent.

mu, std = self.lin_1(z), self.lin_2(z)
eps = torch.Tensor(*img_shape).normal_()
return self.act((eps.cuda() * std) + mu)

I would like to transform the Gaussian distribution so that I can sample those points from a more complex learnt distribution. In other words, the model needs to learn how to best transform points drawn from the Gaussian distribution.

I would be glad if you can suggest the best normalizing flows method (transform) to employ considering the following scalability requirements (whether or not it is available in this repo). Thank you very much in advance for your suggestion.

  • I am sampling 100K-dimensional points with a batch-size of 5K; hence, the scalability is crucial.
  • The method should be memory efficient and fast to train on a RTX series desktop Nvidia GPU.
  • There should not ideally be an additional regularization parameter to my current loss function.
  • Expressiveness of the method is not as important as scalability and robustness in the training.

Guidance on network sizes

Excellent library! I'm wondering if there are any tips or guidance from the community on how to choose the network architecture and depth. Currently I'm trying to do unconditional generation of ultrasound images (downsampled to 64x64), but I don't know where to start in terms of how big an INN to use (e.g., should I aim for ~30 coupling layers or ~300?).

The images used here: https://github.com/VLL-HD/conditional_invertible_neural_networks perhaps come from a simpler distribution, so I'm not sure how relevant those model architectures are for my task.

I just don't have the kind of working experience with INNs that comes with standard NNs, to at least get a starting point. Does anyone have experience they'd like to share so I can bootstrap off of it? E.g., strategies that lead to an appropriate INN architecture?

Source Change Warning

[screenshot attached]
Hello everyone, I am facing this issue. Does anyone have a solution? It affects my results. Thanks in advance.

README Example produces AssertionError

The following code from the README, slightly modified to have a batch size >1, produces an AssertionError. Checking the maximum absolute difference between the original and computed input shows it is on the order of 1, which is way too high.

in1 = Ff.InputNode(100, name='Input 1') # 1D vector
in2 = Ff.InputNode(20, name='Input 2') # 1D vector
cond = Ff.ConditionNode(42, name='Condition')

def subnet(dims_in, dims_out):
    return nn.Sequential(nn.Linear(dims_in, 256), nn.ReLU(),
                         nn.Linear(256, dims_out))

perm = Ff.Node(in1, Fm.PermuteRandom, {}, name='Permutation')
split1 =  Ff.Node(perm, Fm.Split, {}, name='Split 1')
split2 =  Ff.Node(split1.out1, Fm.Split, {}, name='Split 2')
actnorm = Ff.Node(split2.out1, Fm.ActNorm, {}, name='ActNorm')
concat1 =  Ff.Node([actnorm.out0, in2.out0], Fm.Concat, {}, name='Concat 1')
affine = Ff.Node(concat1, Fm.AffineCouplingOneSided, {'subnet_constructor': subnet},
                 conditions=cond, name='Affine Coupling')
concat2 =  Ff.Node([split2.out0, affine.out0], Fm.Concat, {}, name='Concat 2')

output1 = Ff.OutputNode(split1.out0, name='Output 1')
output2 = Ff.OutputNode(concat2, name='Output 2')

example_INN = Ff.GraphINN([in1, in2, cond,
                           perm, split1, split2,
                           actnorm, concat1, affine, concat2,
                           output1, output2])

# dummy inputs:
x1, x2, c = torch.randn(16, 100), torch.randn(16, 20), torch.randn(16, 42)

# compute the outputs
(z1, z2), log_jac_det = example_INN([x1, x2], c=c)

# invert the network and check if we get the original inputs back:
(x1_inv, x2_inv), log_jac_det_inv = example_INN([z1, z2], c=c, rev=True)
assert (torch.max(torch.abs(x1_inv - x1)) < 1e-5
       and torch.max(torch.abs(x2_inv - x2)) < 1e-5)

Seeing NaN in the output of the README example

Hi, I am following the example in the README (the conditional invertible NN model). My output z2 contains NaN values (as seen below). Would you know what I have to do to fix this?

import torch.nn as nn

# FrEIA imports
import FrEIA.framework as Ff
import FrEIA.modules as Fm

# ! set up model
in1 = Ff.InputNode(100, name='Input 1') # 1D vector
in2 = Ff.InputNode(20, name='Input 2') # 1D vector
cond = Ff.ConditionNode(42, name='Condition')

def subnet(dims_in, dims_out):
    return nn.Sequential(nn.Linear(dims_in, 256), nn.ReLU(),
                         nn.Linear(256, dims_out))

perm = Ff.Node(in1, Fm.PermuteRandom, {}, name='Permutation')
split1 =  Ff.Node(perm, Fm.Split, {}, name='Split 1')
split2 =  Ff.Node(split1.out1, Fm.Split, {}, name='Split 2')
actnorm = Ff.Node(split2.out1, Fm.ActNorm, {}, name='ActNorm')
concat1 =  Ff.Node([actnorm.out0, in2.out0], Fm.Concat, {}, name='Concat 1')
affine = Ff.Node(concat1, Fm.AffineCouplingOneSided, {'subnet_constructor': subnet},
                 conditions=cond, name='Affine Coupling')
concat2 =  Ff.Node([split2.out0, affine.out0], Fm.Concat, {}, name='Concat 2')

output1 = Ff.OutputNode(split1.out0, name='Output 1')
output2 = Ff.OutputNode(concat2, name='Output 2')

example_INN = Ff.GraphINN([in1, in2, cond,
                           perm, split1, split2,
                           actnorm, concat1, affine, concat2,
                           output1, output2])

# dummy inputs:
x1, x2, c = torch.randn(1, 100), torch.randn(1, 20), torch.randn(1, 42)

# compute the outputs
(z1, z2), log_jac_det = example_INN([x1, x2], c=c) # ! fail
z2
tensor([[-0.0435,  1.7810,  0.7086,  0.9377, -0.6783,  0.4945,  1.1104, -0.8096,
          0.3706,  2.4083,  2.2125, -0.2289, -1.6135,  0.9737, -0.5034,  1.4836,
          0.5349, -0.8587, -0.6704,  0.3350,  0.1832, -1.8121, -1.0111,  0.5252,
          1.5328,     nan,     nan,     nan,     nan,     nan,     nan,     nan,
             nan,     nan,     nan,     nan,     nan,     nan,     nan,     nan,
             nan,     nan,     nan,     nan,     nan,     nan,     nan,     nan,
             nan,     nan,     nan,     nan,     nan,     nan,     nan,     nan,
             nan,     nan,     nan,     nan,     nan,     nan,     nan,     nan,
             nan,     nan,     nan,     nan,     nan,     nan]],
       grad_fn=<CatBackward>)
       

improving setup.py and friends

While working on the unit tests, I had a look at setup.py, which is basically empty. This makes using the package, dependency handling, and much more a bit demanding going forward.

For now, I see two options:

• expand on the use of setup.py for the time being; however, as long as there is no C/C++ code being compiled during the build, I would discourage that.

• start using tools like flit or poetry. We have made good experiences with the latter so far, but it would mean that the people doing releases etc. would have to learn it.

Thoughts / wishes / decisions ?

Strange behavior of reverse and forward passes in conditional NF

I defined my normalizing flow like this:

def subnet_fc(c_in, c_out):
    return nn.Sequential(nn.Linear(c_in, 32), nn.ReLU(),
                         nn.Linear(32, c_out))

cond_t = Ff.ConditionNode(1, name='condition')  # need to condition the NF on x
nodes_t = [Ff.InputNode(2, name='input')]

for k in range(12):
    nodes_t.append(Ff.Node(nodes_t[-1],
                           Fm.RNVPCouplingBlock,  # RNVP has completely wrong output
                           {'subnet_constructor': subnet_fc, 'clamp': 2.0},
                           conditions=cond_t,
                           name=f'coupling_{k}'))
    nodes_t.append(Ff.Node(nodes_t[-1],
                           Fm.PermuteRandom,
                           {'seed': k},
                           name=f'permute_{k}'))
nodes_t.append(Ff.OutputNode(nodes_t[-1], name='output'))

cinn_t = Ff.ReversibleGraphNet(nodes_t + [cond_t])

When I pass data forward through the net, drawn from a normal base distribution basedist_t = td.Normal(torch.tensor(0.0), torch.tensor(1.0)), I find that the forward and reverse transformations don't agree. As in:

samp_t2 = basedist_t.sample((20,2)).float()

a_1 = cinn_t(samp_t2, c=w_samp, rev=False)
rev1 = cinn_t(a_1, c=w_samp, rev = True)

print(torch.all(torch.eq(rev1,samp_t2)))

This returns False. Maybe I am doing something incorrectly, but I assume rev=True is the reverse pass and rev=False is the forward pass.
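
One detail worth checking, independent of the library: torch.eq tests exact equality, and a forward pass followed by its inverse reproduces the input only up to floating-point round-off, so an exact comparison returns False even for a correctly invertible network. A tolerance-based check is the usual test:

# Compare up to floating-point tolerance instead of exactly.
print(torch.allclose(rev1, samp_t2, atol=1e-6))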

About dkfz_data

Hi.

I am trying to reproduce some experiments from your paper, especially the inverse problems in natural science.

In the FrEIA/experiments/inverse_problems_science folder, there is a symbolic link to '../datasets/dkfz', but I cannot find any *.npy data files in this repository.

Can you let me know how I can deal with this? Or are those data confidential and not to be released publicly?

Thanks in advance.

Best regards,
YJ Hong.
