tamarott / asapnet
License: Other
Since pix2pixHD needs to train both a global and a local model, how did you train the pix2pixHD model when comparing its speed with your method?
Did you train the pix2pixHD global model at 256x256?
Since nearest-neighbor interpolation is used for the parameter upsampling, discontinuities at the borders of the parameter patches seem inevitable.
However, the qualitative results reported in the paper show no such 16x16 grid artifacts.
Wasn't this a concern when you designed the network?
Thanks!
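To make the concern above concrete, here is a minimal sketch (toy shapes, not the paper's) of how nearest-neighbor upsampling of a low-resolution parameter grid makes the parameters piecewise constant, with jumps exactly at patch borders:

```python
import numpy as np

# Toy 4x4 "parameter grid" standing in for per-patch MLP weights;
# real ASAP-Nets grids and channel counts differ (this is illustrative only).
lowres = np.arange(16, dtype=float).reshape(4, 4)
scale = 4  # upsampling factor to "full resolution"

# Nearest-neighbor upsampling by an integer factor == repeating each cell
# `scale` times along each spatial axis.
full = np.repeat(np.repeat(lowres, scale, axis=0), scale, axis=1)

assert full.shape == (16, 16)
# Parameters are constant inside each 4x4 patch...
assert np.all(full[0:4, 0:4] == lowres[0, 0])
# ...and jump discontinuously across the patch border.
assert full[3, 0] != full[4, 0]
```

So the question stands: the per-pixel function changes abruptly every patch, yet the outputs show no visible seams.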
I tried the CelebAMask-HQ dataset (MaskGAN) with label_nc=19 and got a strange error:
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
After searching for a while, I found that someone posted the same kind of error for the pix2pixHD model, but I am still unable to solve it. Kindly help if anyone knows. BTW, the Facades dataset works fine, so I don't think the problem is with cuDNN or CUDA.
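A common cause of "device-side assert triggered" in SPADE/pix2pixHD-style pipelines is a label map containing ids greater than or equal to label_nc, so the one-hot scatter indexes out of range (CelebAMask-HQ uses ids 0..18; SPADE-style options may also reserve an extra class when a flag like --contain_dontcare_label is set — that flag name is an assumption). A quick sanity check, sketched here with NumPy in place of the repo's torch scatter_:

```python
import numpy as np

label_nc = 19
# Toy label map; in practice, load your real segmentation PNGs and check them.
label_map = np.array([[0, 5, 18],
                      [2, 7, 11]])

max_id = int(label_map.max())
# If this assertion fails on your data, label_nc is too small (or the
# label images contain unexpected values), which triggers the CUDA assert.
assert max_id < label_nc, f"label id {max_id} out of range for label_nc={label_nc}"

# Simulate the one-hot encoding the network performs before the first conv.
one_hot = np.eye(label_nc)[label_map]
assert one_hot.shape == (2, 3, 19)
```

Running with the environment variable CUDA_LAUNCH_BLOCKING=1 also makes the failing kernel report a usable stack trace.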
Looks like this file might be missing from the source: `ModuleNotFoundError: No module named 'models.networks.sync_batchnorm'`
The weights I trained on the facades dataset were 327 MB and predicted nothing, while the pre-trained weights provided by the authors were 328 MB and predicted successfully.
Do you have training and test code for ASAPNets? The current training and test code in the repo seems to be written for the pix2pix model only.
Thank you for the awesome work again.
This work is very inspiring.
I have a question about the ablation study on the spatially-variant operation (Figure 9 (c) in the paper).
Does this mean that f(x_p, p, phi_p; phi) is less effective than f(x_p, p; phi_p)? (Here phi is a spatially-invariant learnable parameter.)
If so, why?
Note 1: In the case of f(x_p, p, phi_p; phi), the dimension of phi_p should be much smaller, since it now works as an input to the network.
Note 2: If we use f(x_p, p, phi_p; phi), I think it would be possible to draw an analogy with the LIIF model (which tackles the arbitrary-scale SR problem). Conversely, it should also be possible to apply this paper's pixelwise-MLP method to arbitrary-scale SR, if directly predicting the MLP parameters is more efficient than feeding the feature as an input to the coordinate-based MLP.
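For readers following the question, the two formulations can be contrasted in a toy sketch (all shapes here are illustrative assumptions, not the paper's): in (a), phi_p *is* the layer, a different linear map at every pixel; in (b), phi_p is flattened and concatenated as an extra input to one shared network with global weights phi.

```python
import numpy as np

rng = np.random.default_rng(0)
x_p = rng.standard_normal(3)          # toy pixel features
p = rng.standard_normal(2)            # toy positional encoding
phi_p = rng.standard_normal((4, 5))   # per-pixel weight matrix, used in (a)

def f_varying(x_p, p, phi_p):
    # (a) f(x_p, p; phi_p): the predicted parameters act as the layer itself.
    return phi_p @ np.concatenate([x_p, p])

phi = rng.standard_normal((4, 3 + 2 + 20))  # shared global weights, used in (b)

def f_shared(x_p, p, phi_p, phi):
    # (b) f(x_p, p, phi_p; phi): phi_p is just another input; one global map.
    inp = np.concatenate([x_p, p, phi_p.ravel()])
    return phi @ inp

assert f_varying(x_p, p, phi_p).shape == (4,)
assert f_shared(x_p, p, phi_p, phi).shape == (4,)
```

In (b) the pixelwise information only enters additively through the input, whereas in (a) it multiplies the features directly, which is one plausible reading of why the ablation favors (a).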
Hello, I want to use your wonderful work for depth estimation.
But I could not start training. I tried this command:
python train.py --name depthEstimation --dataset_mode custom --label_dir [monocular_Images_dir] --image_dir [depth_Images_dir] --no_instance_edge --no_instance_dist --no_one_hot
and got this error:
RuntimeError: Given groups=1, weight of size 64 13 3 3, expected input[1, 3, 256, 256] to have 13 channels, but got 3 channels instead
The dataset images are 512x512.
Could you please tell me the options you used when training the depth-estimation model on the NYU dataset?
Thank you!
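The error message itself hints at the cause: the first convolution was built for a 13-channel (label_nc-derived one-hot) input, but an RGB image has 3 channels, so the layer's in_channels must match the data. A minimal reproduction of the mismatch, with a 1x1 "conv" written as a channel-axis matrix multiply (the 13/3 values are taken from the error; everything else is a toy assumption):

```python
import numpy as np

in_channels_model = 13   # channels the first conv was built for
in_channels_data = 3     # channels of an RGB monocular image

weight = np.zeros((64, in_channels_model))   # toy 1x1-conv weight
x = np.zeros((in_channels_data, 8, 8))       # toy input image tensor

try:
    # Contracting over the channel axis fails when 13 != 3, mirroring
    # the RuntimeError raised by the real convolution.
    np.einsum('oc,chw->ohw', weight, x)
    ok = True
except ValueError:
    ok = False

assert not ok  # channel mismatch reproduced
```

So the likely fix is to make the input pipeline and the network agree, e.g. by setting a label_nc-style option to 3 for raw RGB input (the exact option names in this repo are an assumption based on SPADE-style code).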