hirokatsukataoka16 / fractaldb-pretrained-resnet-pytorch
Pre-training without Natural Images (ACCV 2020 Best Paper Honorable Mention Award)
License: MIT License
Good Morning,
After finishing the Fractal Category Search, I proceeded to run fractal_renderer/make_fractaldb.py to construct FractalDB. However, in doing so I received an error:
$ python fractal_renderer/make_fractaldb.py
Traceback (most recent call last):
File "fractal_renderer/make_fractaldb.py", line 39, in <module>
weights = np.genfromtxt(args.weight_csv,dtype=np.str,delimiter=',')
File "/home/noahm/RSML/lib/python3.6/site-packages/numpy/lib/npyio.py", line 1772, in genfromtxt
fid = np.lib._datasource.open(fname, 'rt', encoding=encoding)
File "/home/noahm/RSML/lib/python3.6/site-packages/numpy/lib/_datasource.py", line 269, in open
return ds.open(path, mode, encoding=encoding, newline=newline)
File "/home/noahm/RSML/lib/python3.6/site-packages/numpy/lib/_datasource.py", line 623, in open
raise IOError("%s not found." % path)
OSError: ./weights/weights_0.1.csv not found.
I went to FractalDB-trained-ResNet-PyTorch/fractal_renderer/weights and confirmed that the file weights_0.1.csv is indeed there.
I am unsure how to resolve this problem.
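For context, one common cause of this symptom is that the default path ./weights/weights_0.1.csv is resolved against the current working directory rather than the script's own location, so launching the script from the repository root fails even though the file exists. Below is a minimal sketch of a possible workaround; the helper name resolve_weight_csv is hypothetical and not part of the repository:

```python
import os

def resolve_weight_csv(script_path, weight_csv="./weights/weights_0.1.csv"):
    """Resolve a relative weights path against the script's directory
    instead of the current working directory (sketch of a possible fix)."""
    if os.path.isabs(weight_csv):
        return weight_csv
    return os.path.normpath(
        os.path.join(os.path.dirname(script_path), weight_csv)
    )

# e.g. when the script is launched from the repository root:
print(resolve_weight_csv("fractal_renderer/make_fractaldb.py"))
# -> fractal_renderer/weights/weights_0.1.csv
```

Alternatively, simply cd into fractal_renderer/ before running make_fractaldb.py, so the relative default resolves correctly.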
Hi, I tried to load your ResNet-50 pretrained weights, but I got error messages like the ones below.
It seems that the weights file is corrupted. Is any preparation needed after downloading the weights? If the file really is corrupted, could you please fix it?
Thanks in advance.
Python 3.8.10 (default, Jun 26 2021, 12:40:55)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.2.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import torch
In [2]: a = torch.load('FractalDB-1000_res50.pth')
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [2], in <cell line: 1>()
----> 1 a = torch.load('FractalDB-1000_res50.pth')
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/torch/serialization.py:713, in load(f, map_location, pickle_module, **pickle_load_args)
711 return torch.jit.load(opened_file)
712 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 713 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/torch/serialization.py:938, in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
936 assert key in deserialized_objects
937 typed_storage = deserialized_objects[key]
--> 938 typed_storage._storage._set_from_file(
939 f, offset, f_should_read_directly,
940 torch._utils._element_size(typed_storage.dtype))
941 if offset is not None:
942 offset = f.tell()
RuntimeError: unexpected EOF, expected 2813703 more bytes. The file might be corrupted.
In [3]: torch.__version__
Out[3]: '1.11.0'
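An "unexpected EOF" from torch.load usually indicates a truncated download rather than a bad published file. Before concluding the checkpoint itself is corrupt, it may help to compare the size and checksum of two independent downloads. A stdlib-only sketch (the repository does not publish reference hashes, so this can only detect a mismatch between copies, not verify against an official value):

```python
import hashlib

def file_fingerprint(path, chunk=1 << 20):
    """Return (size_in_bytes, md5_hexdigest) of a file, read in chunks,
    so a partially downloaded checkpoint can be compared against a
    fresh download of the same file."""
    md5 = hashlib.md5()
    size = 0
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            md5.update(block)
            size += len(block)
    return size, md5.hexdigest()
```

If two downloads of FractalDB-1000_res50.pth disagree in size or digest, the transfer was interrupted and re-downloading should fix the load error.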
Hello,
How large is the dataset (in GB), and how long does it take for you to generate it? I'm trying to reproduce your results, and generation is taking a super long time, even on a machine with 64 CPUs.
Also, once you generated the dataset, how did you train your classification model?
Best,
Hello @hirokatsukataoka16,
I left this question on #1, but since that issue is closed I realized you may have missed it. My question is about fine-tuning: how did you prepare the PascalVOC and Omniglot datasets? My understanding is that PascalVOC is usually a multi-label classification setup and Omniglot is usually a few-shot learning setup. Did you convert them into single-label classification tasks, and if so, how exactly did you go about it? I don't believe there is any information about this in the paper.
Thank you so much again for the paper and repo! It's a really nice idea.
I am conducting a replication experiment to create a FractalDB pretrained VGG16 with TensorFlow, referencing this repository.
I have a concern regarding a hyperparameter: the default value for learning_rate in pretraining/args.py is set to 0.1, but shouldn't it be 0.01?
I checked the paper, and it mentions that the initial learning rate is 0.01. Just to be sure, I tried training for about 30 epochs with a learning rate of 0.1, but it was too high and the loss did not decrease at all.
Thanks in advance.
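For anyone hitting the same mismatch, the paper's value can be used without editing the repository by overriding the default on the command line, assuming pretraining/args.py exposes the learning rate as an argparse flag (the exact flag name --lr below is an assumption). A minimal sketch of the relevant fragment with the default aligned to the paper's reported initial learning rate of 0.01:

```python
import argparse

# Sketch of an args.py fragment with the default aligned to the
# paper's initial learning rate (0.01); --lr is an assumed flag name.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01,
                    help="initial learning rate")

args = parser.parse_args([])          # no override -> paper default
print(args.lr)                        # 0.01
```

Passing e.g. --lr 0.1 on the command line would still reproduce the repository's current default if needed for comparison.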
In your paper, you show the model's transfer results after pre-training on FractalDB-1k. I am curious about the model's top-1 test accuracy on the original pre-training data itself (FractalDB-1k). Could you share this result?
Could you also release the ResNet-50 checkpoints fine-tuned on ImageNet-1k from FractalDB-1k/10k?