Comments (7)
I am also facing the same issue during stage_1 training:
```
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
    freeze_support()
    ...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
from openlongtailrecognition-oltr.
@jchhuang @RamyaRaghuraman Actually, as written in the paper, we used a frozen ResNet-152 feature extractor for the Places experiments. The first reason is that finetuning ResNet-152 requires large GPU resources, which we don't have. Second, as I explained in #22 (comment), the self attention module and the meta embedding module require initialization. Since the feature extractor is already ImageNet pretrained, the self attention module won't be affected too much. However, in stage 1, we are still training the classifier module with an additional fc layer. The memory is initialized using the features after the fc, and the meta embedding is initialized using the stage 1 pretrained classifier. So the training of stage 1 is not meaningless.
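For illustration, here is a minimal sketch of what such a memory initialization could look like, assuming the centroids are per-class means of the stage-1 features (the function name and shapes below are my own for this sketch, not the repo's actual code):

```python
import numpy as np

def init_centroids(features, labels, num_classes):
    """Hypothetical sketch: one memory centroid per class, computed as
    the mean of that class's stage-1 features. Result has shape
    (num_classes, feat_dim)."""
    feat_dim = features.shape[1]
    centroids = np.zeros((num_classes, feat_dim), dtype=features.dtype)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():  # leave all-zero centroid for classes with no samples
            centroids[c] = features[mask].mean(axis=0)
    return centroids
```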
@zhmiao Thank you for your reply. I understand that the stage 1 training is required, but I am still not able to get past the stated "freeze_support()" error. Is there a workaround for this?
@RamyaRaghuraman Could you be a little bit more specific on the error?
@zhmiao I have attached the entire output for stage_1 training. I am using only 1 GPU but I do not know where the multiprocessing error comes from.
```
C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\python.exe C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/main.py
Loading dataset from: C:\Users\RAR7ABT\Documents\Meine empfangenen Dateien\Places_LT
{'criterions': {'PerformanceLoss': {'def_file': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/loss/SoftmaxLoss.py',
'loss_params': {},
'optim_params': None,
'weight': 1.0}},
'memory': {'centroids': False, 'init_centroids': False},
'networks': {'classifier': {'def_file': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/models/DotProductClassifier.py',
'optim_params': {'lr': 0.1,
'momentum': 0.9,
'weight_decay': 0.0005},
'params': {'dataset': 'Places_LT',
'in_dim': 512,
'num_classes': 365,
'stage1_weights': False}},
'feat_model': {'def_file': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/models/ResNet152Feature.py',
'fix': True,
'optim_params': {'lr': 0.01,
'momentum': 0.9,
'weight_decay': 0.0005},
'params': {'caffe': True,
'dataset': 'Places_LT',
'dropout': None,
'stage1_weights': False,
'use_fc': True,
'use_modulatedatt': False}}},
'training_opt': {'batch_size': 256,
'dataset': 'Places_LT',
'display_step': 10,
'feature_dim': 512,
'log_dir': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/logs/Places_LT/stage1',
'num_classes': 365,
'num_epochs': 30,
'num_workers': 4,
'open_threshold': 0.1,
'sampler': None,
'scheduler_params': {'gamma': 0.1, 'step_size': 10}}}
Loading data from C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_train.txt
Use data transformation: Compose(
RandomResizedCrop(size=(224, 224), scale=(0.08, 1.0), ratio=(0.75, 1.3333), interpolation=PIL.Image.BILINEAR)
RandomHorizontalFlip(p=0.5)
ColorJitter(brightness=[0.6, 1.4], contrast=[0.6, 1.4], saturation=[0.6, 1.4], hue=None)
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
txt C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_train.txt
No sampler.
Shuffle is True.
Loading data from C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_val.txt
Use data transformation: Compose(
Resize(size=256, interpolation=PIL.Image.BILINEAR)
CenterCrop(size=(224, 224))
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
txt C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_val.txt
No sampler.
Shuffle is True.
cuda:0
Using 1 GPUs.
Loading Scratch ResNet 152 Feature Model.
Using fc.
Loading Caffe Pretrained ResNet 152 Weights.
Pretrained feature model weights path: C:\Users\RAR7ABT\pj-val-ml\pjval_ml\OSR\OLTR\logs\resnet152.pth
Freezing feature weights except for self attention weights (if exist).
Loading Dot Product Classifier.
Random initialized classifier weights.
Using steps for training.
Initializing model optimizer.
Loading Softmax Loss.
Phase: train
Loading dataset from: C:\Users\RAR7ABT\Documents\Meine empfangenen Dateien\Places_LT
{'criterions': {'PerformanceLoss': {'def_file': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/loss/SoftmaxLoss.py',
'loss_params': {},
'optim_params': None,
'weight': 1.0}},
'memory': {'centroids': False, 'init_centroids': False},
'networks': {'classifier': {'def_file': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/models/DotProductClassifier.py',
'optim_params': {'lr': 0.1,
'momentum': 0.9,
'weight_decay': 0.0005},
'params': {'dataset': 'Places_LT',
'in_dim': 512,
'num_classes': 365,
'stage1_weights': False}},
'feat_model': {'def_file': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/models/ResNet152Feature.py',
'fix': True,
'optim_params': {'lr': 0.01,
'momentum': 0.9,
'weight_decay': 0.0005},
'params': {'caffe': True,
'dataset': 'Places_LT',
'dropout': None,
'stage1_weights': False,
'use_fc': True,
'use_modulatedatt': False}}},
'training_opt': {'batch_size': 256,
'dataset': 'Places_LT',
'display_step': 10,
'feature_dim': 512,
'log_dir': 'C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/logs/Places_LT/stage1',
'num_classes': 365,
'num_epochs': 30,
'num_workers': 4,
'open_threshold': 0.1,
'sampler': None,
'scheduler_params': {'gamma': 0.1, 'step_size': 10}}}
Loading data from C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_train.txt
Use data transformation: Compose(
RandomResizedCrop(size=(224, 224), scale=(0.08, 1.0), ratio=(0.75, 1.3333), interpolation=PIL.Image.BILINEAR)
RandomHorizontalFlip(p=0.5)
ColorJitter(brightness=[0.6, 1.4], contrast=[0.6, 1.4], saturation=[0.6, 1.4], hue=None)
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
txt C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_train.txt
No sampler.
Shuffle is True.
Loading data from C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_val.txt
Use data transformation: Compose(
Resize(size=256, interpolation=PIL.Image.BILINEAR)
CenterCrop(size=(224, 224))
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
txt C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/data/Places_LT/Places_LT_val.txt
No sampler.
Shuffle is True.
cuda:0
Using 1 GPUs.
Loading Scratch ResNet 152 Feature Model.
Using fc.
Loading Caffe Pretrained ResNet 152 Weights.
Pretrained feature model weights path: C:\Users\RAR7ABT\pj-val-ml\pjval_ml\OSR\OLTR\logs\resnet152.pth
Freezing feature weights except for self attention weights (if exist).
Loading Dot Product Classifier.
Random initialized classifier weights.
Using steps for training.
Initializing model optimizer.
Loading Softmax Loss.
Phase: train
Traceback (most recent call last):
File "", line 1, in
Traceback (most recent call last):
File "C:/Users/RAR7ABT/pj-val-ml/pjval_ml/OSR/OLTR/main.py", line 60, in
training_model.train()
File "C:\Users\RAR7ABT\pj-val-ml\pjval_ml\OSR\OLTR\run_networks.py", line 208, in train
for step, (inputs, labels, _) in enumerate(self.data['train']):
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\site-packages\torch\utils\data\dataloader.py", line 193, in iter
return _DataLoaderIter(self)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\site-packages\torch\utils\data\dataloader.py", line 469, in init
w.start()
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\RAR7ABT\pj-val-ml\pjval_ml\OSR\OLTR\main.py", line 60, in
training_model.train()
File "C:\Users\RAR7ABT\pj-val-ml\pjval_ml\OSR\OLTR\run_networks.py", line 208, in train
for step, (inputs, labels, _) in enumerate(self.data['train']):
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\site-packages\torch\utils\data\dataloader.py", line 193, in iter
return _DataLoaderIter(self)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\site-packages\torch\utils\data\dataloader.py", line 469, in init
w.start()
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\popen_spawn_win32.py", line 33, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\RAR7ABT\AppData\Local\conda\conda\envs\pjval\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Process finished with exit code 1
```
@RamyaRaghuraman Thanks for posting the error message. It seems to be a problem with running PyTorch on Windows rather than with the frozen weights. Here is a related issue from the official PyTorch repo: pytorch/pytorch#5858 (comment). I will close this issue and open a new one about this (#29 (comment)). We do not have a Windows machine to test on for now. I think we will try to get one and see if it is possible to reproduce. Thanks.
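For anyone hitting this on Windows: multiprocessing there uses spawn, which re-imports the main module in every DataLoader worker, so an unguarded script relaunches itself (which is also why the config dump appears twice in the log above). The usual workaround is to put all startup code behind the `__main__` guard (or set `num_workers` to 0). A minimal standalone sketch of the guard idiom, not the repo's actual main.py:

```python
from multiprocessing import Process, freeze_support

def worker():
    # Stand-in for a DataLoader worker process.
    print("worker running")

def main():
    # All startup code lives here, so a spawned child that re-imports
    # this module does not launch workers again.
    p = Process(target=worker)
    p.start()
    p.join()
    return p.exitcode

if __name__ == '__main__':
    freeze_support()  # only matters for frozen executables
    main()
```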
Hi, I don't agree with this explanation. In my understanding, the dimension of the centroids is num_class * num_feature; however, the output of the fc is a 1 * num_class vector. Their sizes don't match, so how can a vector initialize a matrix?