
samet-akcay / ganomaly

840.0 840.0 211.0 75 KB

GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training

License: MIT License

Python 98.57% Shell 1.43%
anomaly-detection gan pytorch semi-supervised-learning


ganomaly's People

Contributors

andreluizbvs, dependabot[bot], gerardmaggiolino, samet-akcay


ganomaly's Issues

Is Discriminator loss correct??

Hi, I'm trying to understand the mechanism of GANomaly.

I read model.py, and I think the discriminator loss may be incorrect, because the discriminator should maximize Ladv (or, equivalently, minimize the BCE loss).

Is Discriminator loss correct??

Something is wrong with the CIFAR experiments

Hi! I ran your code with PyTorch 0.3 + Python 3.5 and tried to run the demo in experiments/run_cifar.sh, but I hit an index error. The error is as follows:
Traceback (most recent call last):
File "train.py", line 47, in
main()
File "train.py", line 44, in main
model.train()
File "/media/yangbiao/disk1/ganomaly-master/lib/model.py", line 307, in train
self.train_epoch()
File "/media/yangbiao/disk1/ganomaly-master/lib/model.py", line 272, in train_epoch
for data in tqdm(self.dataloader['train'], leave=False, total=len(self.dataloader['train'])):
File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 719, in iter
for obj in iterable:
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 281, in next
return self._process_next_batch(batch)
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 301, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
IndexError: Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 55, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 55, in
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python3.5/dist-packages/torchvision/datasets/cifar.py", line 83, in getitem
img, target = self.train_data[index], self.train_labels[index]
IndexError: index 49369 is out of bounds for axis 0 with size 45000
Do you know why? I would appreciate your help.
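
The IndexError suggests the dataset was trimmed to 45000 images (for example, a train/validation split) while the sampler still draws indices from the full 50000. A minimal sketch, not the repo's lib/data.py, of trimming images and labels together so DataLoader indices stay in range (newer torchvision exposes .data/.targets; older versions use .train_data/.train_labels as in the traceback):

# Sketch: keep images and labels in sync when carving a subset out of CIFAR10,
# so DataLoader indices never exceed the reduced dataset size.
from torchvision.datasets import CIFAR10

train_ds = CIFAR10(root="./data", train=True, download=True)

keep = 45000                              # e.g. hold out 5000 images for validation
train_ds.data = train_ds.data[:keep]      # images: (45000, 32, 32, 3)
train_ds.targets = train_ds.targets[:keep]  # labels shrink with the images
print(len(train_ds))                      # 45000 -> sampled indices stay below 45000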

some questions about Custom Dataset

Hello, I am reading your code now, and I want to ask whether the custom dataset should have labels like the MNIST labels 1, 2, 3, 4, ...
I am trying to do something that classifies the normal class.
Hoping for your reply.
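
For a custom dataset, the original per-class labels (1, 2, 3, ...) are not needed; the anomaly-detection setup only needs a binary normal/abnormal split. A hypothetical folder layout and loader sketch (the folder names and the 32x32 size are illustrative, not a confirmed repo convention):

# Hypothetical layout: ImageFolder assigns label 0 to "0.normal" and label 1 to
# "1.abnormal" (alphabetical order), which is all that anomaly detection needs.
#
#   data/custom/train/0.normal/*.png      # only normal images for training
#   data/custom/test/0.normal/*.png
#   data/custom/test/1.abnormal/*.png
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

tf = T.Compose([T.Resize(32), T.CenterCrop(32), T.ToTensor(),
                T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

train_ds = ImageFolder("data/custom/train", transform=tf)  # all labels are 0 (normal)
test_ds = ImageFolder("data/custom/test", transform=tf)    # labels: 0 normal, 1 abnormal

train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=64, shuffle=False)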

Discriminator Loss Specification

Amazing work! Just a comment: in the paper it is not very clear whether, to train the discriminator, you were going to:

  1. Maximize Ladv for a (real, fake) pair
  2. Minimize the cross_entropy loss for D(x) = 1 and D(G(x)) = 0

In the repo I see you use cross entropy, but it would be good to clarify this in the paper; since you don't mention it, it is hard to understand the adversarial aspect of the training method.
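
For what it's worth, the two options coincide in the standard GAN formulation: maximizing Ladv for a (real, fake) pair is implemented by minimizing BCE with target 1 for real and 0 for fake. A minimal sketch of option 2 (illustrative only; the (prediction, features) return of netd and the tuple returned by netg are assumptions, not the repo's exact API):

# Sketch of a standard discriminator update: minimizing BCE with targets
# real=1, fake=0 is the usual way of "maximizing L_adv" for the discriminator.
import torch
import torch.nn as nn

def discriminator_step(netd, netg, x, optimizer_d):
    bce = nn.BCELoss()
    real_label = torch.ones(x.size(0), device=x.device)
    fake_label = torch.zeros(x.size(0), device=x.device)

    pred_real, _ = netd(x)                  # assumed: netd returns (prediction, features)
    fake = netg(x)[0].detach()              # assumed: netg returns (reconstruction, z, z_hat)
    pred_fake, _ = netd(fake)

    # real -> 1, fake -> 0: minimizing this BCE maximizes the adversarial objective for D
    loss_d = bce(pred_real.view(-1), real_label) + bce(pred_fake.view(-1), fake_label)

    optimizer_d.zero_grad()
    loss_d.backward()
    optimizer_d.step()
    return loss_d.item()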

What are the dimensionalities of the images in the UBD and FFOB datasets?

I see that many people have asked you to release the datasets, but unfortunately you cannot do that. That's fine.

I want to ask how big the images are. Are we talking about 500x500 pixels, or even 1024x1024? I haven't found this information in the paper.

Thank you.

error: expected input[64, 1, 32, 32] to have 3 channels, but got 1 channels instead

Hi, when I switched to Linux with an Nvidia GPU, I ran into a new problem. When I use commands like these:

python3 train.py --dataset mnist --niter 20 --anomaly_class 5
python3 train.py --dataset cifar10 --niter 20 --anomaly_class cat

I tried MNIST and CIFAR-10, and the result is the same in both cases:

root@z-Precision-Tower-3620:/home/z/桌面/git/ganomaly# python3 train.py --dataset mnist --niter 20 --anomaly_class 5
/usr/local/lib/python3.6/dist-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
>> Training model Ganomaly.
Traceback (most recent call last):                                              
  File "train.py", line 43, in <module>
    model.train()
  File "/home/z/桌面/git/ganomaly/lib/model.py", line 279, in train
    self.train_epoch()
  File "/home/z/桌面/git/ganomaly/lib/model.py", line 249, in train_epoch
    self.optimize()
  File "/home/z/桌面/git/ganomaly/lib/model.py", line 181, in optimize
    self.update_netd()
  File "/home/z/桌面/git/ganomaly/lib/model.py", line 134, in update_netd
    self.out_d_real, self.feat_real = self.netd(self.input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/z/桌面/git/ganomaly/lib/networks.py", line 153, in forward
    features = self.features(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 3, 4, 4], expected input[64, 1, 32, 32] to have 3 channels, but got 1 channels instead

Can you help me fix this? Thanks!
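
The error says the first convolution was built for 3-channel input while MNIST batches are 1-channel. A minimal reproduction and two generic ways out (a sketch only; whether the repo exposes a channel option is an assumption to check in options.py):

# Minimal reproduction of the channel mismatch, plus two generic fixes.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=4)  # weight [64, 3, 4, 4]
x = torch.randn(64, 1, 32, 32)                                   # MNIST-style 1-channel batch

# conv(x)  # -> RuntimeError: expected input ... to have 3 channels, but got 1 channels instead

# Fix A: build the network for 1-channel input (e.g. via a channel option, if available).
conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=4)
y = conv1(x)

# Fix B: replicate the grayscale channel to 3 channels before feeding the 3-channel model.
y3 = conv(x.repeat(1, 3, 1, 1))
print(y.shape, y3.shape)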

There are some mistakes in the GANomaly paper

In Section 3.2 (Model Training), in the Encoder Loss part:
1) The last sentence reads "Lcon is formally defined as Lenc = ...". I think this is a mistake: 'Lcon' should be corrected to 'Lenc'.
2) In the second sentence, "encoder loss Lcon to minimize the ...", 'Lcon' should also be corrected to 'Lenc'.
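
For reference, with z = G_E(x) (the latent code of the input) and z_hat = E(G(x)) (the re-encoded code of the reconstruction), the encoder loss should read, as I understand the paper's notation:

L_{\mathrm{enc}} = \lVert G_E(x) - E(G(x)) \rVert_2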

Prediction

Could you please include instructions for running prediction?

ValueError: multiclass format is not supported

Hi, I tried your newly updated code, but I got this error:

Traceback (most recent call last):
File "train.py", line 43, in
train()
File "train.py", line 40, in train
model.train()
File "/home/dl/VSST/dm/ganomaly-master/lib/model.py", line 168, in train
res = self.test()
File "/home/dl/VSST/dm/ganomaly-master/lib/model.py", line 259, in test
auc = evaluate(self.gt_labels, self.an_scores, metric=self.opt.metric)
File "/home/dl/VSST/dm/ganomaly-master/lib/evaluate.py", line 24, in evaluate
return roc(labels, scores)
File "/home/dl/VSST/dm/ganomaly-master/lib/evaluate.py", line 46, in roc
fpr, tpr, _ = roc_curve(labels, scores)
File "/home/dl/anaconda3/envs/pytorch0.4/lib/python3.7/site-packages/sklearn/metrics/ranking.py", line 622, in roc_curve
y_true, y_score, pos_label=pos_label, sample_weight=sample_weight)
File "/home/dl/anaconda3/envs/pytorch0.4/lib/python3.7/site-packages/sklearn/metrics/ranking.py", line 396, in _binary_clf_curve
raise ValueError("{0} format is not supported".format(y_type))
ValueError: multiclass format is not supported
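
sklearn's roc_curve only accepts binary ground truth, so this error usually means gt_labels still hold the original multi-class ids rather than 0/1 normal/abnormal flags. A minimal sketch of collapsing them before computing the ROC (names are illustrative, not the repo's evaluate.py):

# Sketch: map "anomaly class vs. rest" to 1/0 before calling roc_curve.
import numpy as np
from sklearn.metrics import roc_curve, auc

def binary_auc(gt_labels, an_scores, abnormal_class):
    y_true = (np.asarray(gt_labels) == abnormal_class).astype(int)  # 1 = abnormal, 0 = normal
    fpr, tpr, _ = roc_curve(y_true, np.asarray(an_scores))
    return auc(fpr, tpr)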

Issue in displaying

Hi,

Thanks for sharing this code. When I try to include the display option in train.py, I get the following error:

raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f77db544bd0>: Failed to establish a new connection: [Errno 111] Connection refused'))

Using the visdom server has not helped. Any suggestions to solve this would be greatly appreciated.
Also, please share whether you have added new resources for prediction/testing.

Thanks in advance!
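
One common cause (an assumption about this setup, not a confirmed diagnosis): the display path posts to a visdom server on localhost:8097, so connections are refused unless a server is already running, for example one started beforehand with python -m visdom.server. A quick connectivity check:

# Sketch: verify a visdom server is reachable before training with the display option.
import visdom

vis = visdom.Visdom(server="http://localhost", port=8097)
print(vis.check_connection())  # False -> start a server first (python -m visdom.server)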

How to test mnist and cifar10 datasets?

I don't see a test.py file in the folder. Do I need to write a test.py file to test the MNIST and CIFAR-10 datasets, or are there parameters I can modify to run a test?
Can anyone help me? Thank you!

where is the test code?

After training, how could we test a single image to classify whether it's fake or real without using test() in model.py?
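
There is no standalone test script, but a single image can be scored directly once the generator weights are loaded. A hedged sketch; the (reconstruction, z, z_hat) return signature of netg is an assumption based on the GANomaly design, not a confirmed repo API:

# Sketch: score one image by the distance between its latent code and the
# re-encoded code of its reconstruction.
import torch

def anomaly_score(netg, image):                 # image: tensor of shape (1, nc, isize, isize)
    netg.eval()
    with torch.no_grad():
        fake, latent_i, latent_o = netg(image)  # reconstruction, z = G_E(x), z_hat = E(G(x))
    return torch.mean((latent_i - latent_o) ** 2).item()  # higher score -> more anomalous

# A threshold chosen on a validation set then turns the score into a normal/abnormal decision.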

Can you upload the code of Skip-GANomaly?

I have read your Skip-GANomaly paper, and your results are very good. Could you release the code so that I can learn from it? Also, that paper uses a different loss compared with the loss in GANomaly.

Is the test() function correct?

I tested CIFAR-10 (abnormal class: bird) with the latest commit, which fixed the loss problem.

As a result, a maximum AUC of 0.579 was obtained, which differs from the GANomaly bird score of 0.510 reported in the Skip-GANomaly paper.

So while checking the source code, I found that self.netg.eval() is missing from the test() function in lib/model.py

So I added self.netg.eval() at line 197 and retested, getting an AUC of 0.410.
It was quite different from the results of the paper.

I think it's right to test by adding eval(). What do you think?
If adding eval() is right, what do you think about the evaluation results of the paper?
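
Adding eval() is the standard PyTorch practice: it switches BatchNorm to its running statistics (and disables Dropout), which is exactly why the AUC changes once it is added. A sketch of the usual evaluation-mode pattern (illustrative, not a patch to lib/model.py; the netg return signature is an assumption):

# Sketch: evaluation-mode pattern for the test pass.
import torch

def run_test(netg, dataloader):
    netg.eval()                                   # BatchNorm uses running stats, Dropout off
    scores = []
    with torch.no_grad():                         # no gradients needed at test time
        for x, _ in dataloader:
            _, z, z_hat = netg(x)                 # assumed: (reconstruction, z, z_hat)
            scores.append(((z - z_hat) ** 2).flatten(1).mean(1))
    netg.train()                                  # restore training mode for further epochs
    return torch.cat(scores)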

I met this error at training time

Hi, @samet-akcay

I met this error.

File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 169, in add_module
raise KeyError("module name can't contain \".\"")
KeyError: 'module name can't contain "."'

What am I doing wrong?

Thanks in advance.

from @bemoregt.
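
For context, nn.Module.add_module rejects names containing a dot, so this error usually means some layer name built in networks.py contains '.', and renaming (for example swapping '.' for '-') avoids it. A tiny generic illustration (not the repo's exact layer names):

# Illustration: add_module() raises KeyError when the name contains ".".
import torch.nn as nn

m = nn.Sequential()
try:
    m.add_module("initial.conv", nn.Conv2d(3, 64, 4))  # raises KeyError
except KeyError as e:
    print(e)

m.add_module("initial-conv", nn.Conv2d(3, 64, 4))      # fine: no "." in the name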

AttributeError: can't set attribute

Hi, something goes wrong when I try to run your code:
File "E:/software/installation/Anaconda3/envs/CleverHans_py3.5/Lib/site-packages/ganomaly/train.py", line 35, in
dataloader = load_data(opt)
File "E:\software\installation\Anaconda3\envs\CleverHans_py3.5\Lib\site-packages\ganomaly\lib\data.py", line 99, in load_data
abn_cls_idx=opt.anomaly_class
AttributeError: can't set attribute

I don't know the true reason; can you help me?
Thank you.
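
This usually happens with newer torchvision, where the datasets' train_data / train_labels became read-only properties aliasing data / targets, so assigning to them raises "can't set attribute". A version-tolerant sketch (illustrative, not a patch to lib/data.py):

# Sketch: write to .data/.targets on newer torchvision, .train_data/.train_labels on older.
from torchvision.datasets import MNIST

ds = MNIST(root="./data", train=True, download=True)

if hasattr(ds, "targets"):          # newer torchvision
    ds.data, ds.targets = ds.data[:1000], ds.targets[:1000]
else:                               # older torchvision
    ds.train_data, ds.train_labels = ds.train_data[:1000], ds.train_labels[:1000]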

Broken pipe for demo sh

Hi~
I'm a student and was impressed by your paper, so I decided to give it a try. But I'm facing an unknown problem when running the demo scripts "run_mnist.sh" and "run_cifar.sh". (I moved these two sh files to the folder containing train.py.)

results for cifar10:

Manual Seed: 0
Running CIFAR. Anomaly Class: plane
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
Files already downloaded and verified
Files already downloaded and verified

Training model Ganomaly.
0%| | 0/703 [00:00<?, ?it/s]C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
Files already downloaded and verified
Files already downloaded and verified
Training model Ganomaly.
0%| | 0/703 [00:00<?, ?it/s]Traceback (most recent call last):
File "", line 1, in
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\foo\Desktop\git\ganomaly\train.py", line 43, in
model.train()
File "C:\Users\foo\Desktop\git\ganomaly\lib\model.py", line 279, in train
self.train_epoch()
File "C:\Users\foo\Desktop\git\ganomaly\lib\model.py", line 244, in train_epoch
for data in tqdm(self.dataloader['train'], leave=False, total=len(self.dataloader['train'])):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\tqdm_tqdm.py", line 937, in iter
for obj in iterable:
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 501, in iter
return _DataLoaderIter(self)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 289, in init
w.start()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Traceback (most recent call last):
File "train.py", line 43, in
model.train()
File "C:\Users\foo\Desktop\git\ganomaly\lib\model.py", line 279, in train
self.train_epoch()
File "C:\Users\foo\Desktop\git\ganomaly\lib\model.py", line 244, in train_epoch
for data in tqdm(self.dataloader['train'], leave=False, total=len(self.dataloader['train'])):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\tqdm_tqdm.py", line 937, in iter
for obj in iterable:
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 501, in iter
return _DataLoaderIter(self)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 289, in init
w.start()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
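
(Aside on the RuntimeError above: on Windows, worker processes are spawned rather than forked, so the training entry point has to live under the __main__ guard, or the DataLoader can be run with zero workers. A generic sketch, not the repo's train.py:)

# Generic Windows-safe entry point: spawn-based multiprocessing re-imports the main
# module, so everything that starts training must sit under the __main__ guard.
def main():
    # build options, dataloader (consider zero workers on Windows) and model, then train
    pass

if __name__ == "__main__":
    main()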
results for mnist:
Manual Seed: 0
Running mnist_0
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
Exception in user code:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000002330D5E2320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002330D5E2320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002330D5E2320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

Training model Ganomaly.
0%| | 0/844 [00:00<?, ?it/s]C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
Exception in user code:


Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000001C91B44E320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001C91B44E320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001C91B44E320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))
Exception in user code:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x00000200002DE2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000200002DE2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000200002DE2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))
Exception in user code:

Exception in user code:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000002CCC845D2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002CCC845D2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002CCC845D2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))
Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x00000269802DC2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000269802DC2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000269802DC2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))
Exception in user code:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000001CE002DF2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001CE002DF2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001CE002DF2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))
Exception in user code:

Exception in user code:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
Exception in user code: File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)

File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
------------------------------------------------------------ File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)

File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x0000016A002DE278>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000016A002DE278>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000016A002DE278>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))
Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x0000014D34EBE320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000014D34EBE320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000014D34EBE320>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒',))
Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
raise err
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000001B98BF1C2E8>: Failed to establish a new connection: [WinError 10061] ▒▒▒▒Ŀ▒▒▒▒▒▒▒▒▒▒▒ܾ▒▒▒▒޷▒▒▒▒ӡ▒

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001B98BF1C2E8>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\visdom_init_.py", line 446, in _send
data=json.dumps(msg),
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001B98BF1C2E8>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))

Training model Ganomaly.
0%| | 0/844 [00:00<?, ?it/s]Traceback (most recent call last):
File "", line 1, in
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\foo\Desktop\git\ganomaly\train.py", line 43, in
model.train()
File "C:\Users\foo\Desktop\git\ganomaly\lib\model.py", line 279, in train
self.train_epoch()
File "C:\Users\foo\Desktop\git\ganomaly\lib\model.py", line 244, in train_epoch
for data in tqdm(self.dataloader['train'], leave=False, total=len(self.dataloader['train'])):
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\tqdm_tqdm.py", line 937, in iter
for obj in iterable:
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 501, in iter
return _DataLoaderIter(self)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 289, in init
w.start()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\foo\AppData\Local\Continuum\anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

(The same "Training model Ganomaly." line and identical RuntimeError traceback are printed several more times, apparently once for each spawned DataLoader worker process.)

(PS: the environment is Windows 10 64-bit with Python 3.)
Could you help me out with this problem? Thanks! :)
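
For reference, the RuntimeError above is the standard Windows multiprocessing message: with the spawn start method, the training entry point has to be protected by an if __name__ == '__main__': guard (or the DataLoader run with --workers 0). Below is a minimal sketch of a guarded entry point; the module and function names are assumptions for illustration, not the repository's verbatim code.

    from lib.data import load_data      # assumed import paths, for illustration only
    from lib.model import Ganomaly
    from options import Options

    def main():
        opt = Options().parse()
        dataloader = load_data(opt)
        model = Ganomaly(opt, dataloader)
        model.train()

    if __name__ == '__main__':
        # Without this guard, every worker process spawned by the DataLoader
        # re-executes the training code on import and raises the error above.
        main()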

Visualization on Custom Dataset

Hi,
Thanks for your work. I have a question: when I train on the mnist or cifar10 dataset, the visualization results appear in the "output" folder, but when I train on my own dataset there are no visualization results. Should I set something for this?
Thanks again for your kind reply.

doubt about encoder loss

Hi~ I have run your code and the result is good. I have also read your paper. I found that the L1 loss is used in the paper to define the encoder loss, but you actually use the L2 loss in the code to define it. Am I right? In addition, you have defined l2_loss in "loss.py", but you did not define a backward function for this loss, so I wonder how "self.err_g_enc" can be backpropagated in your code. These questions trouble me a lot, and I would be glad if you could resolve these doubts. Thanks and good luck!
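
On the backward question, a plain loss function needs no hand-written backward pass: PyTorch autograd records the operations and differentiates them when backward() is called on the resulting scalar. A minimal, generic sketch (not the repository's code):

    import torch

    def l2_loss(a, b):
        # mean squared error, written as an ordinary function
        return torch.mean(torch.pow(a - b, 2))

    z = torch.randn(8, 100, requires_grad=True)   # stand-in for the latent z
    z_hat = torch.randn(8, 100)                   # stand-in for the re-encoded z'
    err_g_enc = l2_loss(z, z_hat)
    err_g_enc.backward()                          # gradients flow automatically
    print(z.grad.shape)                           # torch.Size([8, 100])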

confused with the train process

Dear samet,

I think your GANomaly has no adversarial training.

You trained the Discriminator with |f(x) - f(x')|, not with the real/fake objective.

So I doubt that your D can judge whether its input is real or fake.

And your Generator uses the real/fake signal (which D knows nothing about) as the adversarial objective against the autoencoder.

I have read your Skip-GANomaly paper, where there is an actual adversarial training.
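
For readers following this thread, the snippet below contrasts the two objectives being discussed: a standard real/fake BCE discriminator loss versus a feature-matching loss |f(x) - f(x')|. It is a generic illustration with stand-in tensors, not the repository's update code.

    import torch
    import torch.nn as nn

    bce = nn.BCELoss()
    l2 = nn.MSELoss()

    pred_real = torch.rand(16, 1)      # D(x): discriminator scores on real images
    pred_fake = torch.rand(16, 1)      # D(x'): scores on reconstructions
    feat_real = torch.randn(16, 256)   # f(x): intermediate discriminator features
    feat_fake = torch.randn(16, 256)   # f(x'): features of the reconstructions

    # Standard adversarial objective: push D(x) toward 1 and D(x') toward 0.
    err_d_real = bce(pred_real, torch.ones_like(pred_real))
    err_d_fake = bce(pred_fake, torch.zeros_like(pred_fake))
    err_d = err_d_real + err_d_fake

    # Feature-matching objective: make the features of x and x' agree.
    err_g_adv = l2(feat_fake, feat_real)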

train on cifar10

When I run train.py, it throws an error:
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data\cifar-10-python.tar.gz
100%|█████████▉| 170483712/170498071 [3:27:33<00:01, 10548.55it/s]Files already downloaded and verified
Traceback (most recent call last):
File "D:/Anaconda/envs/tensorflow/ganomaly-master/train.py", line 35, in
dataloader = load_data(opt)
File "D:\Anaconda\envs\tensorflow\ganomaly-master\lib\data.py", line 60, in load_data
trn_img=dataset['train'].train_data,
AttributeError: 'CIFAR10' object has no attribute 'train_data'
Exception ignored in: <function tqdm.__del__ at 0x0000000005FE9620>
Traceback (most recent call last):
File "D:\Anaconda\lib\site-packages\tqdm\_tqdm.py", line 931, in __del__
self.close()
File "D:\Anaconda\lib\site-packages\tqdm\_tqdm.py", line 1133, in close
self._decr_instances(self)
File "D:\Anaconda\lib\site-packages\tqdm\_tqdm.py", line 496, in _decr_instances
cls.monitor.exit()
File "D:\Anaconda\lib\site-packages\tqdm\_monitor.py", line 52, in exit
self.join()
File "D:\Anaconda\lib\threading.py", line 1029, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread

My computer environment is Windows, CPU only.
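
(A hedged note, anticipating the refactoring issue further down this page: recent torchvision releases expose the MNIST/CIFAR10 arrays through data and targets instead of the deprecated train_data/train_labels that lib/data.py reads, so the fix is an attribute rename along these lines. This is a sketch, not the repository's patch.)

    import torchvision.datasets as datasets

    # Newer torchvision exposes .data / .targets instead of the removed
    # .train_data / .train_labels that raise the AttributeError above.
    train_ds = datasets.CIFAR10(root='./data', train=True, download=True)
    trn_img = train_ds.data              # was: train_ds.train_data
    trn_lbl = train_ds.targets           # was: train_ds.train_labels
    print(trn_img.shape, len(trn_lbl))   # (50000, 32, 32, 3) 50000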

train.py: error: argument --manualseed: invalid int value: '{0..2}'

Thanks for your great work!
I'm not familiar with the sh command.
When I run the command sh experiments/run_mnist.sh,
the following comes out:

Manual Seed: {0..2}
Running mnist_{0..9}
usage: train.py [-h] [--dataset DATASET] [--dataroot DATAROOT]
                [--batchsize BATCHSIZE] [--workers WORKERS] [--droplast]
                [--isize ISIZE] [--nc NC] [--nz NZ] [--ngf NGF] [--ndf NDF]
                [--extralayers EXTRALAYERS] [--gpu_ids GPU_IDS] [--ngpu NGPU]
                [--name NAME] [--model MODEL]
                [--display_server DISPLAY_SERVER]
                [--display_port DISPLAY_PORT] [--display_id DISPLAY_ID]
                [--display] [--outf OUTF] [--manualseed MANUALSEED]
                [--anomaly_class ANOMALY_CLASS] [--proportion PROPORTION]
                [--metric METRIC] [--print_freq PRINT_FREQ]
                [--save_image_freq SAVE_IMAGE_FREQ] [--save_test_images]
                [--load_weights] [--resume RESUME] [--phase PHASE]
                [--iter ITER] [--niter NITER] [--beta1 BETA1] [--lr LR]
                [--w_bce W_BCE] [--w_rec W_REC] [--w_enc W_ENC]
train.py: error: argument --manualseed: invalid int value: '{0..2}'

What should I do?
Thank you very much!

the results on MNIST

Hello, could you tell me whether the MNIST results in your paper were obtained without a validation set? Thank you.

Custom dataset

Hi,

Thank you for the code, I tested it on the mnist and cifar datasets and it works fine.

I admit I'm very new to PyTorch, so it is possible that I'm making a very basic mistake.

I tried to load my own dataset (RGB images, 80x80 pixels) following the same structure you suggested. I used the .png format and named the images exactly as you did.

The command I give is train.py --dataset mydata --isize 80 --niter 10

My problem is that the code runs but never starts working (I have just one core at 100% and no GPU usage).
data.py sees the files in the folders correctly. I also tried to debug the rest of the code without much success. The run enters model = Ganomaly(opt, dataloader) but never gets out.

I changed the size of the dataset (reducing the images), but it gives the same problem.
When I interrupt the code, it is always at the same point: File "/data/myPC/ganomaly/lib/networks.py", line 169, in __init__ self.decoder = Decoder(opt.isize, opt.nz, opt.nc, opt.ngf, opt.ngpu, opt.extralayers)

Thank you in advance for your answer!
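
A plausible explanation, offered as an assumption about the DCGAN-style decoder this code builds on rather than a verified diagnosis: the decoder constructs its layer stack by repeatedly doubling a 4x4 feature map until it reaches isize, so an isize that is not 4 times a power of two (e.g. 64 or 128) is never reached and __init__ loops forever. A minimal sketch of that pattern:

    # With isize=80 the loop below never terminates, because tisize takes the
    # values 4, 8, 16, 32, 64, 128, ... and skips over 80.
    isize, ngf = 64, 64          # try 64 or 128, or resize the 80x80 inputs
    cngf, tisize = ngf // 2, 4
    while tisize != isize:
        cngf = cngf * 2
        tisize = tisize * 2
    print(cngf, tisize)          # 512 64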

how to choose the means and stds in data.py

Hello, Samet
I have two questions I'd like your help with.

  • Why did you choose the following means and stds in data.py:
# for cifar10
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))

# for minist
transforms.Normalize((0.1307,), (0.3081,))
  • For my own dataset, how should I choose or calculate the mean and std?

Thank you very much!
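
(A general note rather than an authoritative answer: the CIFAR10 values above simply map pixel values from [0, 1] to [-1, 1], and the MNIST values are that dataset's global mean and standard deviation. For a custom dataset, the per-channel statistics can be estimated with a small script such as the sketch below, where the folder path is an assumption.)

    import torch
    from torchvision import datasets, transforms

    # Estimate per-channel mean/std of a custom ImageFolder dataset so the
    # values can be passed to transforms.Normalize(mean, std).
    dataset = datasets.ImageFolder('data/mydata/train', transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(dataset, batch_size=64)

    n, mean, sq_mean = 0, 0.0, 0.0
    for images, _ in loader:
        b = images.size(0)
        flat = images.view(b, images.size(1), -1)        # (batch, channels, pixels)
        mean = mean + flat.mean(2).sum(0)
        sq_mean = sq_mean + (flat ** 2).mean(2).sum(0)
        n += b
    mean = mean / n
    std = (sq_mean / n - mean ** 2).sqrt()
    print(mean, std)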

'CIFAR10' object has no attribute 'train_data'

When I run bash experiments/run_cifar.sh on Linux, the error is as follows:

Traceback (most recent call last):
File "train.py", line 35, in
dataloader = load_data(opt)
File "/home/z00377882/wangpengxiang/ganomaly/lib/data.py", line 60, in load_data
trn_img=dataset['train'].train_data,
AttributeError: 'CIFAR10' object has no attribute 'train_data'

About Evaluation

Hi, thanks for your paper and your codes.

I've just read the script evaluate.py. In evaluation, you chose three metrics to evaluate the classification performance.
When evaluating AUC or average_precision_score, you just use sklearn to calculate the metrics. However, when computing the f-score, you specifically set a threshold to decide whether a sample is positive or negative. I think the f1_score result can contradict the AUC or average-precision results. Could you please explain why you set a threshold for the f-score, and why the threshold is 0.2 (intuitively, different datasets should use different thresholds)?
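
For context, the sketch below (generic scikit-learn usage, not the repository's evaluate.py) illustrates the distinction: AUROC and average precision are computed over all thresholds, while the F1 score depends on the single operating threshold you pick.

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

    labels = np.array([0, 0, 0, 1, 1])
    scores = np.array([0.05, 0.15, 0.30, 0.60, 0.90])     # anomaly scores in [0, 1]

    print(roc_auc_score(labels, scores))                   # 1.0 (threshold-free)
    print(average_precision_score(labels, scores))         # 1.0 (threshold-free)
    print(f1_score(labels, (scores >= 0.2).astype(int)))   # 0.8, changes with the threshold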

Training on UCSD PED1

I've trained this network on "UCSD PED1" with the following options:

> ------------ Options -------------
> alpha: 50
> anomaly_class: abnormal
> batchsize: 16
> beta1: 0.5
> dataroot: 
> dataset: PED1
> display: True
> display_id: 0
> display_port: 8097
> display_server: http://localhost
> droplast: True
> extralayers: 0
> gpu_ids: [0]
> isTrain: True
> isize: 128
> iter: 0
> load_weights: False
> lr: 0.0002
> manualseed: None
> model: ganomaly
> name: ganomaly/PED1
> nc: 3
> ndf: 64
> ngf: 64
> ngpu: 1
> niter: 1200
> nz: 512
> outf: ./output
> phase: train
> print_freq: 100
> resume: 
> save_image_freq: 100
> save_test_images: False
> workers: 8
> -------------- End ----------------

and the result I've got is as follow:

> Loss: [0/1200] err_d: 13.113 err_g: 31.230 err_d_real: 0.000 err_d_fake: 13.113 err_g_bce: 27.631 err_g_l1l: 0.069 err_g_enc: 0.168 
>    Avg Run Time (ms/batch): 6.692 EER: 0.450 AUC: 0.558 max AUC: 0.558
>    Loss: [1/1200] err_d: 2.959 err_g: 30.650 err_d_real: 0.119 err_d_fake: 2.840 err_g_bce: 27.631 err_g_l1l: 0.057 err_g_enc: 0.154 
>    Avg Run Time (ms/batch): 5.997 EER: 0.519 AUC: 0.453 max AUC: 0.558
>    Loss: [2/1200] err_d: 1.915 err_g: 27.692 err_d_real: 0.825 err_d_fake: 1.090 err_g_bce: 24.081 err_g_l1l: 0.070 err_g_enc: 0.122 
>    .
>    .
>    .
>    .
>    Loss: [1086/1200] err_d: 24.558 err_g: 0.692 err_d_real: 24.558 err_d_fake: 0.000 err_g_bce: 0.000 err_g_l1l: 0.014 err_g_enc: 0.001 
>    Avg Run Time (ms/batch): 6.892 EER: 0.404 AUC: 0.642 max AUC: 0.670
>    Loss: [1087/1200] err_d: 1.565 err_g: 0.826 err_d_real: 0.965 err_d_fake: 0.599 err_g_bce: 0.001 err_g_l1l: 0.016 err_g_enc: 0.001 
>    Avg Run Time (ms/batch): 7.184 EER: 0.401 AUC: 0.641 max AUC: 0.670
>    Loss: [1088/1200] err_d: 1.789 err_g: 0.788 err_d_real: 1.307 err_d_fake: 0.481 err_g_bce: 0.000 err_g_l1l: 0.016 err_g_enc: 0.001 
>    Avg Run Time (ms/batch): 7.295 EER: 0.401 AUC: 0.644 max AUC: 0.670
>    Loss: [1089/1200] err_d: 18.843 err_g: 0.708 err_d_real: 18.843 err_d_fake: 0.000 err_g_bce: 0.000 err_g_l1l: 0.014 err_g_enc: 0.001 
>    Avg Run Time (ms/batch): 7.100 EER: 0.406 AUC: 0.639 max AUC: 0.670
>    Loss: [1090/1200] err_d: 13.253 err_g: 28.329 err_d_real: 0.000 err_d_fake: 13.253 err_g_bce: 27.631 err_g_l1l: 0.014 err_g_enc: 0.001 
>    Avg Run Time (ms/batch): 6.469 EER: 0.406 AUC: 0.647 max AUC: 0.670
>    Loss: [1091/1200] err_d: 3.089 err_g: 28.403 err_d_real: 0.060 err_d_fake: 3.029 err_g_bce: 27.631 err_g_l1l: 0.015 err_g_enc: 0.001 
>    Avg Run Time (ms/batch): 7.284 EER: 0.402 AUC: 0.644 max AUC: 0.670

and I believe that there's something wrong with the configuration I've applied.
Have you already tried this network on UCSD?
I was wondering if you'd mind letting me know what is missing in the training options, or anything else I should consider.

(Code Refactoring) Broken with PyTorch 1.1.0, No Seeding, Unnecessary Statements

This post will highlight multiple issues, which can be broken down at the author's request. In general, they all fall under small code changes to fix or update broken or missing functionality.

General usage breaks with PyTorch version 1.1.0.

  • In data.load_data, attributes train_data and train_labels are used rather than data and targets for loading MNIST and CIFAR10 data sets. These are deprecated, read-only properties. This throws an error when attempting to assign new values to the data sets.
  • In model.Ganomaly.set_input, resize_() is used on the data attribute of a tensor, causing a runtime error to be thrown. This can be fixed with a torch.no_grad() context manager.

Several lines of code are unnecessary and unused elsewhere.

  • In model.Ganomaly.update_netd, self.label is filled twice and then not used. This is only necessary in update_netg for the BCE loss. Note, this loss is not discussed in the current version of the paper on Arxiv, which may be of interest to the authors.
  • self.err_d_real and self.err_d_fake are identical to self.err_d, and unnecessary.

Seeding is nonfunctional.

  • opt.manualseed is not used to seed any aspect of the network instantiation or training procedure. This will require additional functionality.

I will make a pull request for some of these changes; @samet-akcay, please do let me know if I'm misinformed on any of these issues! I may be misinterpreting some of your work. Thank you for an excellent paper and open sourced implementation as well 🙂.
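
As illustrations of two of these points (sketches under assumptions, not the eventual pull request): wrapping the in-place resize_ calls in a torch.no_grad() context avoids the PyTorch 1.1.0 runtime error, and a small helper could apply opt.manualseed to Python, NumPy and PyTorch.

    import random
    import numpy as np
    import torch

    # Sketch: in-place resize_ of a pre-allocated input buffer inside no_grad(),
    # mirroring what set_input would need on PyTorch >= 1.1.0.
    input_buf = torch.empty(1)
    batch = torch.randn(16, 3, 32, 32)
    with torch.no_grad():
        input_buf.resize_(batch.size()).copy_(batch)

    # Sketch: make opt.manualseed actually seed the run.
    def seed_everything(seed):
        if seed is None:
            return
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    seed_everything(0)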

How to run with cpu? Actually CUDA is needed anyway?

Hi, I tried to run the code on Linux. But I don't have a computer with an NVIDIA device, so I tried to run the code on CPU only, with this command:

./python ~/ganomaly/train.py --dataset cifar10 --niter 100 --anomaly_class cat --gpu_ids -1 --ngpu 0

But it turns out that the code needs a CUDA device anyway... Are there any options I missed? Thanks!

The running result:
~/anaconda3/bin$ ./python ~/ganomaly/train.py --dataset cifar10 --niter 100 --anomaly_class cat --gpu_ids -1 --ngpu 0
/home/foo/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
Files already downloaded and verified
Files already downloaded and verified
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/THCGeneral.cpp line=74 error=35 : CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
File "/home/foo/ganomaly/train.py", line 39, in <module>
model = Ganomaly(opt, dataloader)
File "/home/foo/ganomaly/lib/model.py", line 71, in __init__
self.netg = NetG(self.opt).to(self.device)
File "/home/foo/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 379, in to
return self._apply(convert)
File "/home/foo/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 185, in _apply
module._apply(fn)
File "/home/foo/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 185, in _apply
module._apply(fn)
File "/home/foo/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 185, in _apply
module._apply(fn)
File "/home/foo/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 191, in _apply
param.data = fn(param.data)
File "/home/foo/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 377, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at /opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/THCGeneral.cpp:74
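
(A hedged workaround, an assumption about model.py rather than a documented option: select the device from torch.cuda.is_available() so the model falls back to CPU when no usable CUDA driver is present.)

    import torch

    # Sketch: CPU fallback for device selection.
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # netg = NetG(opt).to(device)   # hypothetical usage inside Ganomaly.__init__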

Model Selection Issue

In your code, you are using the test set and the AUC metric for model selection. What metric should I use if the test set labels are not reliable?

run_cifar.sh

experiments/run_cifar.sh: 5: experiments/run_cifar.sh: Syntax error: "(" unexpected

how to improve the result

Hi, after using your code to train on my custom dataset, the precision and recall are very low on the test set. How can I improve them during training? I hope you can give some advice.

About discriminator, generator and implementation in Tensorflow eager mode

Hi all, I'm currently working on the implementation of your work in Tensorflow/Keras with eager execution and I'm facing some problems and doubts.

Premise: I have successfully run your code with PyTorch, but I cannot reproduce it with Tensorflow.

In particular, I want to ask you the following.

About your code

  1. Inside the update_netd function (in model.py) you have separately called the backward() function first on self.err_d_real and then on self.err_d_fake.
    1.1. Why didn't you instead apply the backward() pass on self.err_d = self.err_d_real + self.err_d_fake? (See the sketch after this list.)
    1.2. Why is the above-mentioned self.err_d = self.err_d_real + self.err_d_fake there if it is not used?

  2. Inside the update_netg function (in the same model.py)
    2.1. Why didn't you use the feature matching loss as stated in the paper?
    2.2. What is the purpose of passing retain_graph=True to the backward() function here? I have tried retain_graph=False and everything seems to work fine as usual.

  3. Is the reinitialization of the Discriminator network mandatory? Do you have any further comments, instructions, past experience, or anything else on this particular practice?
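
Regarding 1.1, the generic sketch below (not the repository's code) shows that calling backward() on the two terms separately accumulates the same gradients as calling it once on their sum, since PyTorch accumulates gradients into .grad:

    import torch

    w = torch.randn(3, requires_grad=True)
    x = torch.randn(3)

    # backward() on the two terms separately ...
    (w * x).sum().backward()
    (w * x * 2).sum().backward()
    g_separate = w.grad.clone()

    # ... accumulates the same gradients as backward() on their sum.
    w.grad.zero_()
    ((w * x).sum() + (w * x * 2).sum()).backward()
    print(torch.allclose(g_separate, w.grad))   # True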

About my code
I have reproduced your network structure to try to reproduce your results with, as I said, Tensorflow/Keras with eager execution. I have checked the code multiple times and it seems correct; it mirrors your code exactly, but unfortunately I don't get your results. The generator collapses (note: I have not yet used the reinitialization of the discriminator as you do, because even without reinitialization your code works well) and I get very bad generated images. I would like to ask if you would kindly take a look at my code; I will be glad to share it with you, just to check whether you see some big glitch that I may be missing.

Thank you for your attention.

I have some small questions about MNIST dataset results?

Hello, author
First, when I run train.py on the mnist dataset with the anomaly class set to "1", the resulting AUC value is very low, about 0.1. When the other digits are treated as the anomaly class, the resulting AUC values are normal. Why is this?
Secondly, is the MNIST result in Figure 4a the average or the maximum value?
I really hope to hear from you to answer my doubts.

TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Thanks for an awesome repository.

I met the error of the issue title when executing run_mnist.sh in my GPU environment.

Traceback (most recent call last):
  File "train.py", line 43, in <module>
    model.train()
  File "/work/lib/model.py", line 280, in train
    res = self.test()
  File "/work/lib/model.py", line 354, in test
    auc = evaluate(self.gt_labels, self.an_scores, metric=self.opt.metric)
  File "/work/lib/evaluate.py", line 24, in evaluate
    return roc(labels, scores)
  File "/work/lib/evaluate.py", line 43, in roc
    fpr, tpr, _ = roc_curve(labels, scores)
  File "/root/anaconda3/lib/python3.7/site-packages/sklearn/metrics/ranking.py", line 618, in roc_curve
    y_true, y_score, pos_label=pos_label, sample_weight=sample_weight)
  File "/root/anaconda3/lib/python3.7/site-packages/sklearn/metrics/ranking.py", line 394, in _binary_clf_curve
    y_type = type_of_target(y_true)
  File "/root/anaconda3/lib/python3.7/site-packages/sklearn/utils/multiclass.py", line 249, in type_of_target
    if is_multilabel(y):
  File "/root/anaconda3/lib/python3.7/site-packages/sklearn/utils/multiclass.py", line 140, in is_multilabel
    y = np.asarray(y)
  File "/root/anaconda3/lib/python3.7/site-packages/numpy/core/numeric.py", line 501, in asarray
    return array(a, dtype, copy=False, order=order)
  File "/root/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 450, in __array__
    return self.numpy()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

My GPU environments:

  • nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
  • Anaconda3-2018.12-Linux-x86_64
  • PyTorch version: 1.0.0

The error was fixed by adding .cpu() method in https://github.com/yoheikikuta/ganomaly/blob/master/lib/evaluate.py#L22-L23 as below:

def evaluate(labels, scores, metric='roc'):
    labels = labels.cpu()
    scores = scores.cpu()
    if metric == 'roc':
...

Is this a known issue?
If not, I'll send a PR to fix it.

dataset

Can you provide the dataset (University Baggage Anomaly Dataset) used in your paper?

not Semi-Supervised

As you mentioned in your paper, this is semi-supervised anomaly detection. However, I can't agree with this point. In the training process, the input is normal data without any unlabeled data. Specifically, all of the input data is labelled as normal, rather than only part of it. From this perspective, the model is not a semi-supervised model. Can you explain the semi-supervision?
