
pathak22 / context-encoder

Stars: 872 · Watchers: 37 · Forks: 205 · Size: 6.29 MB

[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs

Home Page: https://www.cs.cmu.edu/~dpathak/context_encoder/

License: Other

Languages: Lua 98.91% · Shell 1.09%
Topics: image-inpainting, context-encoders, unsupervised-learning, machine-learning, generative-adversarial-network, deep-learning, computer-vision, gan, dcgan, computer-graphics

context-encoder's People

Contributors

pathak22


context-encoder's Issues

don't need validation data

@pathak22
In the README file, you describe how to set up the dataset folders:
"mkdir -p /path_to_wherever_you_want/mydataset/train/images/

put all training images inside mydataset/train/images/

mkdir -p /path_to_wherever_you_want/mydataset/val/images/

put all val images inside mydataset/val/images/"

However, I only created the train folder and not the val folder, and training still ran without problems. Is the validation data unnecessary?

Segmentation results: What units are used?

In Table 2 of the paper some segmentation performances are reported, but I am not sure of the units. Is it accuracy, since there is a percentage sign? Or is it mean IoU, which is more standard (though I don't recall seeing it written with a percentage sign before)?

(screenshot of Table 2)

Loss_D goes to 0

Hi,

First of all, great and interesting work. Congratulations! I have recently started my Ph.D. and your paper was one of the few interesting and helpful baselines for me to grab some knowledge on the inpainting topic.

I tried Context Encoder (CE) on my lab's locally obtained dataset (~0.5 million images) and it outperformed many other inpainting methods that we experimented with.

But recently, I have been trying to use CE on a rather larger dataset (over 4 million images). While training, after the second or third epoch, the discriminator loss starts approaching 0, which apparently means the generator is no longer learning.

Could I get your expert opinion on the possible causes, and on what parameters would be suitable for training the network on such a large dataset?

Below is the screenshot showing learning progress:

screenshot from 2018-08-09 21-08-01

How to split the netG into two networks?

I am reproducing your work from "Context Encoders: Feature Learning by Inpainting" and I have trained the networks successfully. However, I now face a problem. The netG extracts features from a picture and then uses those features to fill in the missing region. I am very interested in the features (i.e., the 4000-unit bottleneck), and I would like to split netG into two networks, an encoder net and a decoder net, so that I can use the encoder to map a picture to a vector and the decoder to map a vector back to a picture. I have no idea how to do it.
I once tried using ":add" to assemble an encoder net and a decoder net into netG, but after training I found they were both nil while netG itself worked fine.
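Not the author, but since the printed netG is a plain nn.Sequential (module 1 is the convolutional encoder, modules 2-3 normalize the 4000-channel bottleneck, and modules 4-17 decode it), one way is to copy module references into two new containers instead of rebuilding with ":add". A minimal sketch, assuming the checkpoint path and the 17-module layout printed elsewhere in these issues:

require 'torch'
require 'nn'

local netG = torch.load('checkpoints/inpaintCenter_500_net_G.t7')  -- example path
netG:evaluate()  -- use stored batchnorm statistics for inference

-- modules 1-3: 128x128 context -> 4000-d bottleneck; modules 4-17: bottleneck -> 64x64 patch
local encoder, decoder = nn.Sequential(), nn.Sequential()
for i = 1, 3  do encoder:add(netG:get(i)) end  -- adds references, so weights stay shared
for i = 4, 17 do decoder:add(netG:get(i)) end

local x    = torch.randn(1, 3, 128, 128)
local code = encoder:forward(x)     -- 1 x 4000 x 1 x 1
local y    = decoder:forward(code)  -- 1 x 3 x 64 x 64

Saving the two containers with torch.save then gives standalone encode/decode nets.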

Changing patch location request

Hello @pathak22
First of all, thanks for sharing your code and research 👍. I got the demo and training working on Mac OS X 10.11.5 💃. However, I got stuck when trying to modify the algorithm.

I'm trying to change the code so I can specify the location of the region mask myself, instead of choosing either random or center (in the testing phase).
Can you give me pointers on how to go about doing this?
(I was thinking of manually putting white patches on the validation images and disabling the automatic masking, but I couldn't figure it out.)

Another question: would the algorithm work with bigger images, say 1024x1024?

P.S: I'm a newbie in NN so apologies if I'm asking stupid questions.
Cheers.
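Not an answer from the author, but as a starting point: the test scripts blank the masked region simply by overwriting a fixed index range with the dataset mean before the forward pass. A hedged sketch of hard-coding your own rectangle (y1/y2/x1/x2 are made-up names, not variables from the repo):

-- blank an arbitrary rectangle instead of the centered square;
-- image_ctx is the batch of context images scaled to [-1, 1]
local y1, y2, x1, x2 = 1, 64, 1, 64  -- e.g. the top-left quadrant
image_ctx[{{},{1},{y1,y2},{x1,x2}}] = 2*117.0/255.0 - 1.0
image_ctx[{{},{2},{y1,y2},{x1,x2}}] = 2*104.0/255.0 - 1.0
image_ctx[{{},{3},{y1,y2},{x1,x2}}] = 2*123.0/255.0 - 1.0

Caveat: the released center model was trained to fill only the image center, so moving the mask at test time generally also means retraining with the same mask layout.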

Illegal memory access?

I am trying to run your demo (Ubuntu 16.04, CUDA 8), but something goes wrong in the forward pass.
The line "net=models/inpaintCenter/paris_inpaintCenter.t7 name=paris_result imDir=images/paris overlapPred=4 manualSeed=222 batchSize=21 gpu=1 th demo.lua" gives me the following error.
I haven't changed anything in the code. Could it be GPU memory requirements? (I have 6 GB.)
Any help would be hugely appreciated!

net=models/inpaintCenter/paris_inpaintCenter.t7 name=paris_result imDir=images/paris overlapPred=4 manualSeed=222 batchSize=21 gpu=1 th demo.lua
{
gpu : 1
net : "models/inpaintCenter/paris_inpaintCenter.t7"
overlapPred : 4
manualSeed : 222
name : "paris_result"
nc : 3
imDir : "images/paris"
batchSize : 21
}
Seed: 222
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> (16) -> (17) -> output]
(1): nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> output]
(1): nn.SpatialConvolution(3 -> 64, 4x4, 2,2, 1,1)
(2): nn.LeakyReLU(0.2)
(3): nn.SpatialConvolution(64 -> 64, 4x4, 2,2, 1,1)
(4): nn.SpatialBatchNormalization (4D) (64)
(5): nn.LeakyReLU(0.2)
(6): nn.SpatialConvolution(64 -> 128, 4x4, 2,2, 1,1)
(7): nn.SpatialBatchNormalization (4D) (128)
(8): nn.LeakyReLU(0.2)
(9): nn.SpatialConvolution(128 -> 256, 4x4, 2,2, 1,1)
(10): nn.SpatialBatchNormalization (4D) (256)
(11): nn.LeakyReLU(0.2)
(12): nn.SpatialConvolution(256 -> 512, 4x4, 2,2, 1,1)
(13): nn.SpatialBatchNormalization (4D) (512)
(14): nn.LeakyReLU(0.2)
(15): nn.SpatialConvolution(512 -> 4000, 4x4)
}
(2): nn.SpatialBatchNormalization (4D) (4000)
(3): nn.LeakyReLU(0.2)
(4): nn.SpatialFullConvolution(4000 -> 512, 4x4)
(5): nn.SpatialBatchNormalization (4D) (512)
(6): nn.ReLU
(7): nn.SpatialFullConvolution(512 -> 256, 4x4, 2,2, 1,1)
(8): nn.SpatialBatchNormalization (4D) (256)
(9): nn.ReLU
(10): nn.SpatialFullConvolution(256 -> 128, 4x4, 2,2, 1,1)
(11): nn.SpatialBatchNormalization (4D) (128)
(12): nn.ReLU
(13): nn.SpatialFullConvolution(128 -> 64, 4x4, 2,2, 1,1)
(14): nn.SpatialBatchNormalization (4D) (64)
(15): nn.ReLU
(16): nn.SpatialFullConvolution(64 -> 3, 4x4, 2,2, 1,1)
(17): nn.Tanh
}
Loaded Image Block: 21 x 3 x 128 x 128
/home/user/torch/install/bin/luajit: /home/user/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 6 module of nn.Sequential:
/home/user/torch/install/share/lua/5.1/nn/THNN.lua:110: bad argument #2 to 'v' (3D or 4D (batch mode) tensor is expected at /home/user/torch/extra/cunn/lib/THCUNN/SpatialConvolutionMM.cu:12)
stack traceback:
[C]: in function 'v'
/home/user/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'SpatialConvolutionMM_updateOutput'
...er/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:79: in function <...er/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:76>
[C]: in function 'xpcall'
/home/user/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/user/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/user/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/user/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/user/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
demo.lua:68: in main chunk
[C]: in function 'dofile'
...user/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/user/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/user/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
demo.lua:68: in main chunk
[C]: in function 'dofile'
...user/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50
THCudaCheck FAIL file=/home/user/torch/extra/cutorch/lib/THC/generic/THCStorage.c line=147 error=77 : an illegal memory access was encountered

Loss for Inpainting

Section 3.2 of the paper (Joint Loss) says that adversarial loss only is used for inpainting experiments, but Figure 6 shows results with L2+Adv. Can you please clarify?

How to avoid blocky reconstruction

@pathak22

How can we remove the blocking artifact at the patch location after reconstructing the image?
What values should be set in this part of the code?

image_ctx[{{},{1},{1 + opt.fineSize/4 + opt.overlapPred, opt.fineSize/2 + opt.fineSize/4 - opt.overlapPred},{1 + opt.fineSize/4 + opt.overlapPred, opt.fineSize/2 + opt.fineSize/4 - opt.overlapPred}}] = 2*117.0/255.0 - 1.0
image_ctx[{{},{2},{1 + opt.fineSize/4 + opt.overlapPred, opt.fineSize/2 + opt.fineSize/4 - opt.overlapPred},{1 + opt.fineSize/4 + opt.overlapPred, opt.fineSize/2 + opt.fineSize/4 - opt.overlapPred}}] = 2*104.0/255.0 - 1.0
image_ctx[{{},{3},{1 + opt.fineSize/4 + opt.overlapPred, opt.fineSize/2 + opt.fineSize/4 - opt.overlapPred},{1 + opt.fineSize/4 + opt.overlapPred, opt.fineSize/2 + opt.fineSize/4 - opt.overlapPred}}] = 2*123.0/255.0 - 1.0

The above values set the mean value in R, G, B.
But we want this region to blend in seamlessly [no blocky artifact].
Can you please guide us in this regard?
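Not the author, but one common workaround: rather than tuning the fill color, composite the network's 64x64 prediction back into the untouched input, so that only hole pixels change and the mean-colored fill never appears in the final output. A rough sketch with hypothetical names (image_orig for the unmasked input batch, pred_center for netG's output):

-- paste the prediction into a copy of the original; context pixels stay exact
local p = opt.fineSize/4                  -- 32 when fineSize = 128
local lo, hi = 1 + p, p + opt.fineSize/2  -- rows/cols 33..96: the 64x64 hole
local recon = image_orig:clone()
recon[{{},{},{lo,hi},{lo,hi}}]:copy(pred_center)
-- optionally alpha-blend a few border rows/columns to soften any remaining seam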

Condition on Discriminator

Hi,

From page 5:
To customize GANs for this task, one could condition on the given context information; i.e., the mask M̂ ⊙ x. However, conditional GANs don't train easily for context prediction task as the adversarial discriminator D easily exploits the perceptual discontinuity in generated regions and the original context to easily classify predicted versus real samples.

Does this mean that we rely only on the L2 reconstruction loss to make the model take the (outside) context part into consideration?
Since the discriminator only sees the inpainted region, it has no additional information to work from.

Looking forward to your reply! Thanks.

what is the "conditionAdv" for?

Hi,
I find that if opt.conditionAdv is true, then the input of the adversarial discriminator net is 128x128,
but if it is set to false, the input changes to 64x64.
So what is "conditionAdv" for, and what is the real input size of the adversarial discriminator net?
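For anyone reading along, a hedged illustration of the difference implied by those sizes (full_image and pred_center are placeholder names, not variables from train.lua): with conditionAdv the discriminator is conditioned on the surrounding context, so it sees the whole 128x128 image with the prediction pasted in; without it, it judges the 64x64 predicted patch alone.

-- sketch only: the two discriminator inputs, matching the sizes reported above
local lo, hi = 1 + opt.fineSize/4, opt.fineSize/4 + opt.fineSize/2
local d_input
if opt.conditionAdv == 1 then
  d_input = full_image:clone()                        -- N x 3 x 128 x 128
  d_input[{{},{},{lo,hi},{lo,hi}}]:copy(pred_center)  -- context + prediction
else
  d_input = pred_center                               -- N x 3 x 64 x 64
end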

Download random region inpainting model

Hi
In your paper you mention that you implemented three strategies, namely centre inpainting, random block inpainting, and random region inpainting. However, there is currently no way to download an inpaintRandomRegion model in .t7 format. Can that be provided, or does it need to be trained with train_random.lua?

Regarding re-training a model

@pathak22
I understood how to train a centre-region inpainting model on the existing dataset with the command below, as mentioned in README.md:

DATA_ROOT=dataset/train display_id=11 name=inpaintCenter overlapPred=4 wtl2=0.999 nBottleneck=4000 niter=500 loadSize=350 fineSize=128 gpu=1 th train.lua

This gives me the trained model "inpaintCenter_500_net_G.t7".
However, I have two queries:

  1. Suppose I want to re-train the already trained model "inpaintCenter_500_net_G.t7" on a new dataset; how do I achieve that?
    From the link below,
    https://stats.stackexchange.com/questions/325803/retraining-cnn-model
    I understood that I need to load the weights from the old model into the new one.
    But how do we load the weights in the context-encoder code? (See the sketch after this list.)
    Kindly let me know the approach.

  2. Suppose I have trained a model "inpaintCenter_500_net_G.t7" with the centre patch location, and I want to retrain the same model with a different patch location; how do I achieve this?
    Since I am a beginner in machine learning, kindly advise.
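Not an official answer, but regarding query 1: the usual Torch pattern is to load the saved .t7 checkpoint in place of the freshly constructed generator and then run the normal training loop on the new data (for query 2, the same loading step applies before retraining with the new mask layout). A minimal sketch, assuming the checkpoint path:

require 'torch'
require 'nn'

-- load the previously trained generator instead of building netG from scratch
local netG = torch.load('checkpoints/inpaintCenter_500_net_G.t7')
netG:training()

-- optim works on a flat view of the (now pre-trained) weights; from here the
-- existing train.lua loop (getBatch / fGx / optim.adam) applies unchanged
local parametersG, gradParametersG = netG:getParameters()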

Cannot test random region inpainting

I am trying to run the provided trained net (imagenet_inpaintCenter) with random region inpainting (test_random.lua)

DATA_ROOT=dataset/val net=models/inpaintCenter/imagenet_inpaintCenter.t7 name=test_whatever useOverlapPred=0 manualSeed=222 batchSize=30 loadSize=129 gpu=1 th test_random.lua

but I get this error: bad argument #2 to '?' (sizes do not match at /<...>/torch/extra/cutorch/lib/THC/generated/../generic/THCTensorMasked.cu:122)

Any idea on how to solve this?

Thanks :)

Question about the paper

In part B (Feature Learning) of the supplementary material, you say, "Unfortunately, we couldn't train adversary with Alexnet Encoder." I wonder whether this is because you used the random region method?

Random missing areas give bad results

Hello! Thanks for your brilliant paper and code.

I used the train_random.lua script to train the net to complete images with randomly located missing areas, training on a GTX 1080 GPU. I found that training for 500 epochs would take very long (almost 150 hours), so I only ran 80 epochs. The results on the visualization web page are good, but when I test on validation data the results are very bad.

Below are some results of my training with random areas; the effect is bad:
screenshot from 2017-11-19 17-28-00

Below are results from the centre-missing weights that you provided; the results are good:
screenshot from 2017-11-19 17-33-34

Is the reason that my number of epochs was not enough? Any advice will be appreciated.

Where is the channel-wise fully-connected layer?

I want to ask whether the channel-wise fully-connected layer corresponds to nBottleneck in the code. My understanding is that it is the transition layer between the encoder and the decoder; is that right? And how is the channel-wise fully-connected layer expressed?
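Not the author, but for readers: per the paper, the channel-wise fully-connected layer gives each encoder feature map its own weights to the corresponding decoder map, with no parameters shared across channels (cross-channel mixing comes from a following stride-1 convolution). A naive, unoptimized sketch of that connectivity in Torch, with made-up sizes:

require 'nn'

local C, H, W = 512, 4, 4  -- example encoder output shape
local D = H * W

-- one independent Linear per channel; no weights shared across channels
local branches = nn.Concat(2)
for c = 1, C do
  local b = nn.Sequential()
  b:add(nn.Select(2, c))   -- (N, C, D) -> (N, D): pick channel c
  b:add(nn.Linear(D, D))   -- this channel's own D x D weights
  b:add(nn.Unsqueeze(2))   -- (N, D) -> (N, 1, D)
  branches:add(b)
end

local cwfc = nn.Sequential()
cwfc:add(nn.View(-1, C, D))     -- (N, C, H, W) -> (N, C, H*W)
cwfc:add(branches)              -- channels processed separately -> (N, C, D)
cwfc:add(nn.View(-1, C, H, W))  -- back to feature-map shape

In the released train.lua, the default bottleneck is instead a plain 4x4 convolution down to nBottleneck channels (see the netG printouts in other issues here), so the sketch above only shows the connectivity pattern described in the paper.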

About torch.setnumthreads(1)

In train.lua and data.lua, why is torch.setnumthreads(1) set?

I am a little confused about this setting. Can I set multiple threads, such as torch.setnumthreads(torch.getnumthreads())?

Thanks for your time.

About Paris StreetView Dataset

Hi,

I have read your paper and hope to run the code, but I cannot find the Paris StreetView dataset. Since ImageNet is too large and difficult to train on, I hope to use the Paris StreetView dataset instead. Could you please tell me how I can find the dataset?

Thanks so much for your attention!

THTensorCopy copy error

I am attempting to run your code, but I am encountering an error when running train.lua.

The only changes I made to the code were commenting out part of the trainHook function in donkey_folder.lua (since I did not want my images to be randomly cropped) and changing the number of channels to 1, since all of the images in my dataset are 128x128 grayscale.

I am running the code on Ubuntu 16.04 on CPU only. This is the latest version of torch. I have installed all required dependencies by running luarocks install torch, luarocks install nn, luarocks install optim, luarocks install threads, luarocks install argcheck, luarocks install sys, luarocks install xlua, and luarocks install image.

I've been going nuts for 3 days trying to resolve this error. I'd be grateful for any help I could get.

anne@Anne:~/Desktop/context-encoder$ DATA_ROOT=dataset/train th train.lua
{
ntrain : inf
nc : 1
noiseGen : 0
beta1 : 0.5
nThreads : 4
display_iter : 50
niter : 25
batchSize : 64
ndf : 64
fineSize : 0
nz : 100
wtl2 : 0
loadSize : 0
gpu : 0
ngf : 64
conditionAdv : 0
noisetype : "normal"
lr : 0.0002
manualSeed : 0
name : "train1"
overlapPred : 0
nBottleneck : 100
nef : 64
display_id : 10
display : 0
}
Seed: 5354
Starting donkey with id: 2 seed: 5356
table: 0x418bb2f8
Starting donkey with id: 1 seed: 5355
table: 0x412fd340
Starting donkey with id: 4 seed: 5358
table: 0x41b19aa8
Starting donkey with id: 3 seed: 5357
table: 0x41e394e0
Loading train metadata from cache
Loading train metadata from cache
Loading train metadata from cache
Loading train metadata from cache
Dataset Size: 200000
LR of Gen is 1 times Adv
NetG: nn.Sequential {
input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> (16) -> (17) -> output: nn.Sequential {
input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> output: nn.SpatialConvolution(1 -> 64, 4x4, 2,2, 1,1)
(2): nn.LeakyReLU(0.2)
(3): nn.SpatialConvolution(64 -> 64, 4x4, 2,2, 1,1)
(4): nn.SpatialBatchNormalization
(5): nn.LeakyReLU(0.2)
(6): nn.SpatialConvolution(64 -> 128, 4x4, 2,2, 1,1)
(7): nn.SpatialBatchNormalization
(8): nn.LeakyReLU(0.2)
(9): nn.SpatialConvolution(128 -> 256, 4x4, 2,2, 1,1)
(10): nn.SpatialBatchNormalization
(11): nn.LeakyReLU(0.2)
(12): nn.SpatialConvolution(256 -> 512, 4x4, 2,2, 1,1)
(13): nn.SpatialBatchNormalization
(14): nn.LeakyReLU(0.2)
(15): nn.SpatialConvolution(512 -> 100, 4x4)
}
(2): nn.SpatialBatchNormalization
(3): nn.LeakyReLU(0.2)
(4): nn.SpatialFullConvolution(100 -> 512, 4x4)
(5): nn.SpatialBatchNormalization
(6): nn.ReLU
(7): nn.SpatialFullConvolution(512 -> 256, 4x4, 2,2, 1,1)
(8): nn.SpatialBatchNormalization
(9): nn.ReLU
(10): nn.SpatialFullConvolution(256 -> 128, 4x4, 2,2, 1,1)
(11): nn.SpatialBatchNormalization
(12): nn.ReLU
(13): nn.SpatialFullConvolution(128 -> 64, 4x4, 2,2, 1,1)
(14): nn.SpatialBatchNormalization
(15): nn.ReLU
(16): nn.SpatialFullConvolution(64 -> 1, 4x4, 2,2, 1,1)
(17): nn.Tanh
}
NetD: nn.Sequential {
input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> output: nn.SpatialConvolution(1 -> 64, 4x4, 2,2, 1,1)
(2): nn.LeakyReLU(0.2)
(3): nn.SpatialConvolution(64 -> 128, 4x4, 2,2, 1,1)
(4): nn.SpatialBatchNormalization
(5): nn.LeakyReLU(0.2)
(6): nn.SpatialConvolution(128 -> 256, 4x4, 2,2, 1,1)
(7): nn.SpatialBatchNormalization
(8): nn.LeakyReLU(0.2)
(9): nn.SpatialConvolution(256 -> 512, 4x4, 2,2, 1,1)
(10): nn.SpatialBatchNormalization
(11): nn.LeakyReLU(0.2)
(12): nn.SpatialConvolution(512 -> 1, 4x4)
(13): nn.Sigmoid
(14): nn.View(1)
}
/home/anne/torch/install/bin/luajit: /home/anne/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 1 callback] /home/anne/Desktop/context-encoder/data/dataset.lua:328: inconsistent tensor size at /tmp/luarocks_torch-scm-1-667/torch7/lib/TH/generic/THTensorCopy.c:7
stack traceback:
[C]: in function 'copy'
/home/anne/Desktop/context-encoder/data/dataset.lua:328: in function 'tableToOutput'
/home/anne/Desktop/context-encoder/data/dataset.lua:345: in function </home/anne/Desktop/context-encoder/data/dataset.lua:335>
[C]: in function 'xpcall'
/home/anne/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
/home/anne/torch/install/share/lua/5.1/threads/queue.lua:65: in function </home/anne/torch/install/share/lua/5.1/threads/queue.lua:41>
[C]: in function 'pcall'
/home/anne/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
[string " local Queue = require 'threads.queue'..."]:15: in main chunk
stack traceback:
[C]: in function 'error'
/home/anne/torch/install/share/lua/5.1/threads/threads.lua:183: in function 'dojob'
/home/anne/Desktop/context-encoder/data/data.lua:81: in function 'getBatch'
train.lua:281: in function 'opfunc'
/home/anne/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'adam'
train.lua:413: in main chunk
[C]: in function 'dofile'
...anne/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50

Regarding image rescaling

@pathak22: As per our observation, the model rescales images to 128x128.
Since our input images are HD resolution, we expect the reconstructed output to be at the same resolution.
Please let us know whether this is possible.

Input already masked images instead of generating patches

Hi
Is it possible to input already-masked images, instead of having the code generate masks in the images first and then inpaint them? The problem is that the masks in my images will not all be in the same relative locations, so I need to inpaint different image regions, but I don't want to generate the masks myself on the images. If it is possible, should the masks satisfy some constraints (like specific pixel values)?
Thanks in advance.
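Not from the author: if the holes in your images are marked with a known color (say pure white), one hedged approach is to recover a binary mask from that color and then fill it with the mean values the code expects. A sketch, assuming img is a 3xHxW float tensor in [0, 1]:

-- treat (near-)pure-white pixels as the hole
local white = img:min(1):squeeze():ge(254/255)  -- HxW ByteTensor mask

-- rescale to [-1, 1] and fill the hole with the dataset mean, as the code does
local ctx = img:clone():mul(2):add(-1)
local means = {117.0, 104.0, 123.0}
for c = 1, 3 do
  ctx[c][white] = 2 * means[c] / 255.0 - 1.0
end

Caveat: the released center model only learned to fill the image center; arbitrary hole locations generally require retraining (e.g. with train_random.lua).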

Organization of the train and val dataset

When I want to train my own model on my dataset, I find no explanation of how the data should be organized. I just put images into the folders dataset/train and dataset/val, but received an error:

/home/**/torch/install/bin/luajit: /home/**/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 4 callback] /home/**/context-encoder/data/dataset.lua:202: Could not find any image file in the given input paths
stack traceback:
    [C]: in function 'assert'

@pathak22

how to change the mask location?

Hi, I'm very interested in your work. I want to place the mask at the top left of the image and be able to inpaint that region successfully. What has to be changed in the training and test code to achieve this?
Thanks.

Training time

Hi, can I ask how much training time inpainting on ImageNet took?
And what kind of GPU did you use? Thanks!

Implementation in pytorch

Any ideas on how to implement this paper in PyTorch would be a great help!
Also, what do these lines of code do, and how would I implement them in PyTorch?

-- overwrite the hole (mask_global) in each channel with the per-channel
-- dataset mean (117, 104, 123), rescaled from [0, 255] to the [-1, 1] range
real_ctx[{{},{1},{},{}}][mask_global] = 2 * 117.0/255.0 - 1.0
real_ctx[{{},{2},{},{}}][mask_global] = 2 * 104.0/255.0 - 1.0
real_ctx[{{},{3},{},{}}][mask_global] = 2 * 123.0/255.0 - 1.0
input_ctx:copy(real_ctx)   -- the masked context becomes the generator input

Questions about test code

Hi,
1. I have a question about loadSize: why should we use 129 and not just 128?
2. Why should we use overlapPred?
3. My images have a fineSize of 32x32. If I want to train at this fineSize, do I just switch to the new dataset and change fineSize in the code?
4. If I want to train on my own dataset, how should I adjust the parameters to get the best results?
Thank you so much!

is overlapPred used?

The paper mentions that the overlap is 7 pixels, but the parameter defaults to 0 in the code.
Is it used?
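For context, a quick worked example of the indexing that overlapPred controls (the expressions are the ones quoted in the blocky-reconstruction issue above), assuming fineSize = 128 and the demo's overlapPred = 4:

-- the mean-filled square shrinks by overlapPred pixels on each side
local fineSize, overlapPred = 128, 4
local lo = 1 + fineSize/4 + overlapPred           -- 37
local hi = fineSize/2 + fineSize/4 - overlapPred  -- 92
print(lo, hi, hi - lo + 1)                        -- 37  92  56

So with overlapPred = 4 the blanked area is 56x56 rather than the full 64x64 center: a 4-pixel ring of real pixels is left visible inside the hole as extra context.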

Dataset for paris and imagenet

@pathak22 Could you please share the dataset that you used to train your models?
My email id is [email protected]

Also, when I tried to train the model (train.lua) on a different dataset, I obtained the following error:
/home/ananya/torch/install/bin/luajit: /home/ananya/torch/install/share/lua/5.1/trepl/init.lua:389: module 'display' not found:No LuaRocks module found for display
no field package.preload['display']
no file '/home/ananya/.luarocks/share/lua/5.1/display.lua'
no file '/home/ananya/.luarocks/share/lua/5.1/display/init.lua'
no file '/home/ananya/torch/install/share/lua/5.1/display.lua'
no file '/home/ananya/torch/install/share/lua/5.1/display/init.lua'
no file './display.lua'
no file '/home/ananya/torch/install/share/luajit-2.1.0-beta1/display.lua'
no file '/usr/local/share/lua/5.1/display.lua'
no file '/usr/local/share/lua/5.1/display/init.lua'
no file '/home/ananya/.luarocks/lib/lua/5.1/display.so'
no file '/home/ananya/torch/install/lib/lua/5.1/display.so'
no file '/home/ananya/torch/install/lib/display.so'
no file './display.so'
no file '/usr/local/lib/lua/5.1/display.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/home/ananya/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
train.lua:260: in main chunk
[C]: in function 'dofile'
...anya/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
Could you please help me resolve this issue?

Decoder part for section 5.2, feature learning?

The prototxt and learned weights for the AlexNet encoder are available here, but I wonder if there is a Caffe/Torch implementation of the decoder part, which would include the channel-wise fully-connected layer.

I would greatly appreciate it if you could share the implementation.

Paris Street-View Dataset

Hi Pathak,
I am currently working on image inpainting algorithms and trying to train several model architectures. Could you share the Paris Street-View dataset through a private link to my email address? That would be really great. Thanks.
Email: [email protected]

About dataset

Hi Pathak,
I'm a student working on inpainting, and I'd like to know if I can get the dataset by email for private use.
My email is [email protected].

Train Context Encoders

Hi,
I'm trying to train the context encoder, but I get this error after epoch 20. I'm using CPU mode, as my laptop doesn't support CUDA.
Epoch: [20][ 3 / 3] Time: 29.860 DataTime: 0.002 Err_G_L2: 0.0748 Err_G: 1.6929 Err_D: 0.4178
/home/nermin/torch/install/bin/lua: ...me/nermin/torch/install/share/lua/5.1/torch/File.lua:210: write error: wrote 24686288 blocks instead of 32768000 at /home/nermin/torch/pkg/torch/lib/TH/THDiskFile.c:353
stack traceback:
[C]: in function 'write'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:210: in function <...me/nermin/torch/install/share/lua/5.1/torch/File.lua:107>
[C]: in function 'write'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:210: in function 'writeObject'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:235: in function 'writeObject'
...ome/nermin/torch/install/share/lua/5.1/nn/Module.lua:188: in function 'write'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:210: in function 'writeObject'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:235: in function 'writeObject'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:235: in function 'writeObject'
...ome/nermin/torch/install/share/lua/5.1/nn/Module.lua:188: in function 'write'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:210: in function 'writeObject'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:235: in function 'writeObject'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:235: in function 'writeObject'
...ome/nermin/torch/install/share/lua/5.1/nn/Module.lua:188: in function 'write'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:210: in function 'writeObject'
...me/nermin/torch/install/share/lua/5.1/torch/File.lua:388: in function 'save'
/home/nermin/context-encoder/util.lua:96: in function 'save'
train.lua:451: in main chunk
[C]: in function 'dofile'
.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: ?
Can you help?
Thanks.

Why do we need to reset the conv bias values in every iteration?

   -- zero the bias of every convolution module in netD and netG
   netD:apply(function(m) if torch.type(m):find('Convolution') then m.bias:zero() end end)
   netG:apply(function(m) if torch.type(m):find('Convolution') then m.bias:zero() end end)

These lines appear in fGx and fDx, and I can't understand them.

Is the bias value determined during the previous iteration's training phase?

AlexNet results

As I understand it, all images in the paper use the small (non-AlexNet) architecture, since the adversarial loss makes the images look much nicer. I was hoping you could share some of the images from the AlexNet implementation so that I can compare them with the results of my own implementation.

(If there are other smart ways of performing intermediate validation of the AlexNet implementation before recreating the Table 2 results, they are also welcome.)

error while testing my dataset

Hi,
I tried training using the Paris StreetView dataset, but I always get this error while testing, after the training phase finishes, and I can't figure out why.
Can you help? @pathak22
screenshot from 2018-01-24 17-25-41
