continualai / clvision-challenge-2023
Development kit for the CLVISION @ CVPR 2023 Challenge
Hi,
I was wondering whether it is allowed within the competition to build an ensemble of multiple SlimResNet18 networks, as long as they fit within the memory budget. For example, having multiple frozen copies of SlimResNet18 that feed a single trainable head at the end?
Or is it intended that each sample passes through the backbone/SlimResNet18 only once?
Kind Regards
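For reference, the ensemble idea described above could be sketched as follows. This is only an illustration of the question, not a ruling on legality; TinyBackbone is a hypothetical stand-in for SlimResNet18, and all names are made up.

```python
# Sketch of the idea asked about above: several frozen backbones whose
# features are concatenated and fed to a single trainable head.
# "TinyBackbone" is a hypothetical stand-in for SlimResNet18.
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        return self.pool(torch.relu(self.conv(x))).flatten(1)

class FrozenEnsemble(nn.Module):
    def __init__(self, backbones, feat_dim=64, n_classes=100):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)
        for b in self.backbones:          # freeze every backbone
            for p in b.parameters():
                p.requires_grad = False
        # single trainable head over the concatenated features
        self.head = nn.Linear(feat_dim * len(backbones), n_classes)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.backbones], dim=1)
        return self.head(feats)

model = FrozenEnsemble([TinyBackbone(), TinyBackbone()])
out = model(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 100])
```

Whether each frozen copy counts against the parameter/memory budget is exactly the rules question being asked.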
The code gives a classifier with an output dimension of 100. Can it be enlarged? For example, can the output dimension be changed to 200?
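Mechanically, swapping the head is a one-line change. A minimal sketch, assuming the model exposes its final layer as a `linear` attribute (as SlimResNet18's forward shown later in this thread suggests); the 160 feature size and the placeholder model are assumptions for illustration only:

```python
# Minimal sketch of replacing a 100-way classifier head with a 200-way one.
# "model" is a placeholder; imagine SlimResNet18 here. The 160 input-feature
# size is an assumption for illustration.
import torch.nn as nn

model = nn.Sequential()             # placeholder module
model.linear = nn.Linear(160, 100)  # original 100-way head

in_features = model.linear.in_features      # keep the feature size
model.linear = nn.Linear(in_features, 200)  # new 200-way head

print(model.linear.out_features)  # 200
```

Whether the challenge rules permit a 200-way output is a separate question for the organizers.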
Hi, I tried to run the example file on Colab and it seems that the GPU memory allocation exceeded the maximum allowed amount.
Traceback (most recent call last):
  File "/content/clvision-challenge-2023/train.py", line 134, in <module>
    main(args)
  File "/content/clvision-challenge-2023/train.py", line 95, in main
    cl_strategy.train(experience, num_workers=args.num_workers)
  File "/content/clvision-challenge-2023/avalanche/avalanche/training/templates/base_sgd.py", line 146, in train
    super().train(experiences, eval_streams, **kwargs)
  File "/content/clvision-challenge-2023/avalanche/avalanche/training/templates/base.py", line 117, in train
    self._after_training_exp(**kwargs)
  File "/content/clvision-challenge-2023/avalanche/avalanche/training/templates/base.py", line 233, in _after_training_exp
    trigger_plugins(self, "after_training_exp", **kwargs)
  File "/content/clvision-challenge-2023/avalanche/avalanche/training/utils.py", line 35, in trigger_plugins
    getattr(p, event)(strategy, **kwargs)
  File "/content/clvision-challenge-2023/utils/competition_plugins.py", line 73, in after_training_exp
    raise MaxGPUAllocationExceeded(ram_allocated, self.max_allowed)
utils.competition_plugins.MaxGPUAllocationExceeded: GPU memory allocation (4297 MB) exceeded the maximum allowed amount which is 4000 MB.
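To see how close a run is to the limit before the plugin raises, peak allocation can be queried directly. A small sketch, assuming the 4000 MB budget from the error above; the helper names are made up:

```python
# Helper to watch peak GPU allocation against the 4000 MB budget that
# MaxGPUAllocationExceeded enforces (per the traceback above).
import torch

MAX_ALLOWED_MB = 4000

def bytes_to_mb(n_bytes):
    # integer MB, matching the "4297 MB" style of the error message
    return n_bytes // (1024 * 1024)

def check_gpu_budget():
    if not torch.cuda.is_available():
        return None  # nothing to measure on CPU-only runs
    used_mb = bytes_to_mb(torch.cuda.max_memory_allocated())
    if used_mb > MAX_ALLOWED_MB:
        print(f"over budget: {used_mb} MB > {MAX_ALLOWED_MB} MB")
    return used_mb

check_gpu_budget()
```

Calling `torch.cuda.reset_peak_memory_stats()` between experiences makes the peak per-experience rather than cumulative.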
Hi, I noticed that on the website, it is mentioned that
Replay Buffer: Replay buffers may not be used to store dataset samples. However, buffers may be used to store any form of data representation, such as the model's internal representations.
Does this mean that we are not allowed to use the naive ReplayPlugin, but we can use the GenerativeReplayPlugin?
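The distinction the rule draws, storing representations rather than raw samples, can be sketched as a small latent-replay buffer. This is an illustration of the concept only, not the competition's required mechanism; the class and the 160-dim feature size are hypothetical:

```python
# Sketch of a buffer that stores model representations (feature vectors)
# instead of raw dataset samples, per the rule quoted above.
# Reservoir sampling keeps a fixed-size, approximately uniform subset.
import random
import torch

class RepresentationBuffer:
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.items = []   # (feature, label) pairs
        self.seen = 0

    def add(self, feature, label):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((feature.detach().clone(), label))
        else:  # reservoir sampling: replace with probability capacity/seen
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (feature.detach().clone(), label)

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

buf = RepresentationBuffer(capacity=8)
for y in range(20):
    buf.add(torch.randn(160), y)   # hypothetical 160-dim features
print(len(buf.items))  # 8
```

Note this stores features, never input images, which is what the quoted rule appears to allow.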
The default resolution of the input image is 32×32. Can I scale the image to a larger resolution? To this end, the code of SlimResNet18 would be modified slightly (i.e. out = relu(self.bn1(self.conv1(x.view(bsz, 3, 32, 32))))). Is this allowed or not?
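Mechanically, the change amounts to upsampling the batch and removing the hard-coded reshape. A minimal sketch (the 64×64 target size is an arbitrary example); whether the rules permit it is the actual question:

```python
# Sketch of the resolution change asked about above: upsample the 32x32
# input, then make the backbone's forward shape-agnostic.
import torch
import torch.nn.functional as F

x = torch.randn(4, 3, 32, 32)   # original CIFAR-sized batch
x_big = F.interpolate(x, size=(64, 64), mode="bilinear",
                      align_corners=False)
print(x_big.shape)  # torch.Size([4, 3, 64, 64])

# In SlimResNet18.forward, x.view(bsz, 3, 32, 32) would become simply x:
#   out = relu(self.bn1(self.conv1(x)))   # no hard-coded spatial size
# and the avg_pool2d kernel would need to match the larger feature map.
```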
We found that the memory usage of the baseline differs across operating systems. It exceeds the limit (reaching about 3400 MB) on Ubuntu 20.04, but not on Ubuntu 22.04. Additionally, this seems to be caused by PyTorch itself.
Here is the result of memory-profiler's analysis of the model's forward function:
Line # Mem usage Increment Occurrences Line Contents
=============================================================
71 2240.7 MiB 2240.7 MiB 1 @profile
72 def forward(self, x):
73 2240.7 MiB 0.0 MiB 1 bsz = x.size(0)
74 3247.5 MiB 1006.9 MiB 1 out = relu(self.bn1(self.conv1(x.view(bsz, 3, 32, 32))))
75 3248.5 MiB 1.0 MiB 1 out = self.layer1(out)
76 3248.7 MiB 0.1 MiB 1 out = self.layer2(out)
77 3248.7 MiB 0.0 MiB 1 out = self.layer3(out)
78 3248.7 MiB 0.0 MiB 1 out = self.layer4(out)
79 3248.7 MiB 0.0 MiB 1 out = avg_pool2d(out, 4)
80 3248.7 MiB 0.0 MiB 1 out = out.view(out.size(0), -1)
81 3249.2 MiB 0.6 MiB 1 out = self.linear(out)
82 3249.2 MiB 0.0 MiB 1 return out
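For anyone without the memory-profiler package, the standard library's tracemalloc gives a rough cross-check for host-side numbers like those above. Note it tracks Python allocations only, not CUDA memory; the example function below is made up:

```python
# Rough stdlib alternative to memory-profiler for host-side allocations.
# tracemalloc sees Python-level allocations only, not GPU memory, so it
# can corroborate CPU-side figures but not the GPU budget.
import tracemalloc

def allocate_something():
    return [bytes(1024) for _ in range(2000)]   # ~2 MiB of buffers

tracemalloc.start()
data = allocate_something()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(peak > 1024 * 1024)  # True: peak exceeds 1 MiB
```

Comparing peak values across Ubuntu 20.04 and 22.04 this way could help isolate whether the difference really comes from PyTorch's allocator or from elsewhere in the stack.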
Hi, when I tried to figure out the default dataset settings, I found something confusing. In benchmark.train_stream, the variable experience.scenario.seen_classes is [34, 51, ...], which is small at the beginning and then grows to 100, just as the website shows.
But when I tried to write a plugin, I found that the strategy has the attribute mb_y, the current mini-batch target, and it contains many more classes than it should: around 85 to 95 classes throughout training, even in the first experience, which I think must be set wrongly.
Can anyone explain this to me? Or have I just misunderstood the dataset settings?
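One way to pin this down is to log the distinct targets per iteration. Since trigger_plugins dispatches via getattr (see the traceback earlier in this thread), a plain class with the right method name can serve as a quick probe; FakeStrategy below is a stand-in for illustration only:

```python
# Sketch of a probe that records how many distinct classes appear in the
# current mini-batch target (strategy.mb_y). If a replay-style strategy
# mixes buffer samples into the batch, mb_y can legitimately contain far
# more classes than the current experience alone.
import torch

class TargetInspectorPlugin:
    def __init__(self):
        self.history = []   # distinct-class count per iteration

    def after_training_iteration(self, strategy, **kwargs):
        n_classes = torch.unique(strategy.mb_y).numel()
        self.history.append(n_classes)

# Hypothetical stand-in for an Avalanche strategy, just for illustration:
class FakeStrategy:
    def __init__(self, targets):
        self.mb_y = targets

plugin = TargetInspectorPlugin()
plugin.after_training_iteration(FakeStrategy(torch.tensor([3, 3, 51, 34])))
print(plugin.history)  # [3]
```

If the counts drop once any replay component is disabled, the 85-95 classes were coming from the buffer, not from a mislabeled dataset.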
I just cloned this repo and ran the code, but there is an error:
Traceback (most recent call last):
  File "/codes/ConAI/train.py", line 131, in <module>
    main(args)
  File "/codes/ConAI/train.py", line 47, in main
    benchmark = get_cifar_based_benchmark(scenario_config=args.config_file, seed=args.seed)
  File "/codes/ConAI/benchmarks/__init__.py", line 27, in get_cifar_based_benchmark
    benchmark = generate_benchmark(seed=seed, train_set=train_set, test_set=test_set, **scenario_config)
  File "/codes/ConAI/benchmarks/cir_benchmark.py", line 47, in generate_benchmark
    stream_items = [create_dataset_exp_i(i) for i in range(n_e)]
  File "/codes/ConAI/benchmarks/cir_benchmark.py", line 47, in <listcomp>
    stream_items = [create_dataset_exp_i(i) for i in range(n_e)]
  File "/codes/ConAI/benchmarks/cir_benchmark.py", line 43, in create_dataset_exp_i
    ds_i = classification_subset(train_set, indices=all_indices_i)
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/classification_dataset.py", line 497, in classification_subset
    return dataset.subset(indices)
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/classification_dataset.py", line 102, in subset
    data = super().subset(indices)
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 123, in subset
    return self.__class__(datasets=[self], indices=indices)
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/data.py", line 244, in __init__
    dasub = da.subset(self._indices)
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/data_attribute.py", line 143, in subset
    self.data.subset(indices),
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 122, in subset
    return self.__class__(datasets=self._datasets, indices=new_indices)
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 85, in __init__
    self._datasets = _flatten_dataset_list(self._datasets)
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 338, in _flatten_dataset_list
    if len(dataset) == 0:
  File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 102, in __len__
    if self.slice_ids is not None:
AttributeError: 'SubSequence' object has no attribute 'slice_ids'
I want to know what caused this error. Could you please help solve it? Thanks in advance!