Comments (4)
I believe this is due to the way my datasets are instantiated. For instance, when instantiating an EpisodicDataset, it creates a list of generators at
. The problem is that generator objects cannot be pickled, which is exactly what PyTorch seems to do on Windows when multiprocessing is enabled (i.e. num_workers > 0). I suspect the dataset is created in the main process and then pickled for the other worker processes to load. So the workaround would be to remove that line and find a way to create the generators in the __iter__ method (only when needed, of course) rather than in __init__. This should be doable with a try/except. Given that I do not have Windows 10, I will unfortunately be unable to reproduce this error, but I would be happy to help debug it further :)
from pytorch-meta-dataset.
Thanks for clarifying. I have a linux machine as well, so I am not blocked. I may try your suggestion.
I have tried to fix the issue by implementing the initial workaround I proposed earlier. Please let me know whether that fixes the issue on Windows! Thanks in advance :)
Thanks for working on this. Unfortunately, there is still an issue on Windows, with num_workers > 0
, there is a new error:
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'Reader.construct_class_datasets.<locals>.decode_image'
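For context on this new error: pickle serializes functions by their qualified module-level name, so a function defined inside another function (here, a decode_image defined inside Reader.construct_class_datasets) cannot be pickled at all. A minimal sketch, with hypothetical names rather than the library's actual code, reproducing the failure and the usual fix of hoisting the helper to module level:

```python
import pickle

def decode_image(record):
    # Module-level function: picklable by qualified name
    # (the body here is a hypothetical placeholder).
    return record["image"]

def construct_class_datasets_broken():
    # Nested function: NOT picklable, mirrors the reported error.
    def decode_image_local(record):
        return record["image"]
    return decode_image_local

# Pickling the nested closure fails, reproducing the Windows error:
try:
    pickle.dumps(construct_class_datasets_broken())
except (AttributeError, pickle.PicklingError) as err:
    print("fails:", err)

# The module-level version pickles fine:
pickle.dumps(decode_image)
```

If decode_image needs values from the enclosing scope, a module-level class with a __call__ method (or functools.partial over a module-level function) keeps it picklable while still carrying that state.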