
clvision-challenge-2023's Introduction

ContinualAI - Jekyll Website now DEPRECATED

We moved to GitBook. If you want to help us maintain the new website, please send an email to [email protected].

Continual AI is the first hub on Continual / Lifelong Deep Learning in AI! :-) The project aims to provide a starting point for researchers, developers, and AI enthusiasts who want to learn more about, or contribute to, Continual / Lifelong Learning. We are building an open-source, collaborative wiki at continualai.org and growing a community of CL enthusiasts. Join us today on Slack! :D

How to contribute

  1. Star the project :-)

  2. Join our community on Slack: https://continualai.herokuapp.com/

  3. Start making changes to the *.md files from the browser (use the 'Preview' button)

  4. Commit the changes!

How to contribute (like a pro)

  1. Star the project :-)

  2. Join our community on Slack: https://continualai.herokuapp.com/

  3. Fork the repo on GitHub and clone it locally

  4. Enter the folder:

    cd website-wiki

  5. If you don't have gem and bundler installed:

    apt-get install rubygems
    gem install bundler

  6. Install Ruby gems:

    bundle install

  7. Start Jekyll server:

    jekyll serve --incremental

  8. Now you can start making changes and see the result in your browser at http://localhost:4000/

  9. Make a Pull Request (with only the .md or original .html files)! :D

clvision-challenge-2023's People

Contributors

antoniocarta, hamedhemati, lrzpellegrini

clvision-challenge-2023's Issues

Memory usage exceeds limit on Ubuntu 20.04

We found that the memory usage of the baseline differs between operating systems: it exceeds the limit (about 3400 MB) on Ubuntu 20.04, but not on Ubuntu 22.04. This seems to be caused by PyTorch.
Here is memory-profiler's line-by-line analysis of the model's forward function.

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
    71   2240.7 MiB   2240.7 MiB           1       @profile
    72                                             def forward(self, x):
    73   2240.7 MiB      0.0 MiB           1           bsz = x.size(0)
    74   3247.5 MiB   1006.9 MiB           1           out = relu(self.bn1(self.conv1(x.view(bsz, 3, 32, 32))))
    75   3248.5 MiB      1.0 MiB           1           out = self.layer1(out)
    76   3248.7 MiB      0.1 MiB           1           out = self.layer2(out)
    77   3248.7 MiB      0.0 MiB           1           out = self.layer3(out)
    78   3248.7 MiB      0.0 MiB           1           out = self.layer4(out)
    79   3248.7 MiB      0.0 MiB           1           out = avg_pool2d(out, 4)
    80   3248.7 MiB      0.0 MiB           1           out = out.view(out.size(0), -1)
    81   3249.2 MiB      0.6 MiB           1           out = self.linear(out)
    82   3249.2 MiB      0.0 MiB           1           return out
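For anyone who wants to reproduce this kind of line-by-line report, a minimal sketch is below. It assumes memory-profiler is installed and that SlimResNet18 is importable from avalanche.models; the import path and constructor arguments are assumptions about the Avalanche API, not something confirmed by this issue.

    # Minimal sketch (not part of the challenge code) of producing the
    # report above with memory-profiler. Import path and constructor
    # arguments for SlimResNet18 are assumptions.
    import torch
    from memory_profiler import profile
    from avalanche.models import SlimResNet18  # assumed import path

    model = SlimResNet18(nclasses=100)          # assumed constructor
    model.forward = profile(model.forward)      # trace allocations line by line

    with torch.no_grad():
        x = torch.randn(64, 3 * 32 * 32)        # flattened CIFAR-like batch
        model(x)                                # prints the per-line memory table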

Requirements Model Architecture - Ensemble

Hi,

I was wondering whether it is allowed within the competition to build an ensemble of multiple SlimResNet18 networks, as long as they fit within the memory budget. For example, having multiple frozen SlimResNet18 backbones that feed a single head at the end?
Or is it intended that each sample only passes through the backbone/SlimResNet18 once?
Kind Regards
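As a rough illustration of the idea being asked about (and not a statement about what the rules permit), such an ensemble could look like the sketch below; FrozenBackboneEnsemble and its arguments are hypothetical names, and the backbones are assumed to return fixed-size feature vectors.

    # Hypothetical sketch: several frozen backbones feeding one trainable
    # linear head. Whether this fits the competition rules is exactly
    # what the issue asks.
    import torch
    import torch.nn as nn

    class FrozenBackboneEnsemble(nn.Module):
        def __init__(self, backbones, feature_dim, n_classes):
            super().__init__()
            self.backbones = nn.ModuleList(backbones)
            for b in self.backbones:            # freeze every backbone
                for p in b.parameters():
                    p.requires_grad = False
            self.head = nn.Linear(feature_dim * len(backbones), n_classes)

        def forward(self, x):
            # Each sample passes through every backbone once per forward call.
            feats = [b(x) for b in self.backbones]
            return self.head(torch.cat(feats, dim=1))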

Default dataset configuration may be set wrongly?

Hi, when I tried to figure out the default dataset settings, I found something confusing. In benchmark.train_stream, the variable experience.scenario.seen_classes is [34, 51, ....], which is smaller at the beginning and then increases to 100, just as the website shows.

But when I tried to write a plugin, I found that the strategy has the attribute mb_y, which is the current mini-batch target, and it has many more classes than it should. It contains around 85 to 95 classes throughout training, even in the first experience, so I think it was set wrongly.

Can anyone explain that to me? Or have I just misunderstood the dataset settings?
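A quick way to check this empirically is a small diagnostic plugin like the sketch below, which logs the distinct labels in each mini-batch so they can be compared with the classes of the current experience. It assumes the standard Avalanche SupervisedPlugin callbacks and the mb_y / experience attributes mentioned above.

    # Diagnostic sketch (assuming Avalanche's SupervisedPlugin API).
    import torch
    from avalanche.training.plugins import SupervisedPlugin

    class LabelInspectorPlugin(SupervisedPlugin):
        def before_training_exp(self, strategy, **kwargs):
            exp = strategy.experience
            print(f"experience {exp.current_experience}: "
                  f"classes in this experience = {exp.classes_in_this_experience}")

        def after_training_iteration(self, strategy, **kwargs):
            labels = torch.unique(strategy.mb_y)
            print(f"mini-batch holds {labels.numel()} distinct classes")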

Can I scale the image to a larger resolution?

The default resolution of the input image is 32*32. Can I scale the images to a larger resolution? To do this, the code of SlimResNet18 would need to be modified slightly (i.e. the line out = relu(self.bn1(self.conv1(x.view(bsz, 3, 32, 32))))). Is this allowed or not?
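For concreteness (and without implying that the rules allow it), the modification described above could be as small as upscaling the batch before the first convolution, for example with bilinear interpolation; the helper below is purely illustrative.

    # Illustrative helper only: reshape the flattened CIFAR batch and
    # upscale it (here to 64x64) before it enters the backbone.
    import torch
    import torch.nn.functional as F

    def upscale_batch(x: torch.Tensor, size: int = 64) -> torch.Tensor:
        x = x.view(x.size(0), 3, 32, 32)
        return F.interpolate(x, size=(size, size), mode="bilinear",
                             align_corners=False)

    # Inside SlimResNet18.forward this would replace the x.view(...) call:
    #     out = relu(self.bn1(self.conv1(upscale_batch(x))))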

Memory usage exceeds limit on Colab

Hi, I tried to run the example file on Colab and it seems that the GPU memory allocation exceeded the maximum allowed amount.

    Traceback (most recent call last):
      File "/content/clvision-challenge-2023/train.py", line 134, in <module>
        main(args)
      File "/content/clvision-challenge-2023/train.py", line 95, in main
        cl_strategy.train(experience, num_workers=args.num_workers)
      File "/content/clvision-challenge-2023/avalanche/avalanche/training/templates/base_sgd.py", line 146, in train
        super().train(experiences, eval_streams, **kwargs)
      File "/content/clvision-challenge-2023/avalanche/avalanche/training/templates/base.py", line 117, in train
        self._after_training_exp(**kwargs)
      File "/content/clvision-challenge-2023/avalanche/avalanche/training/templates/base.py", line 233, in _after_training_exp
        trigger_plugins(self, "after_training_exp", **kwargs)
      File "/content/clvision-challenge-2023/avalanche/avalanche/training/utils.py", line 35, in trigger_plugins
        getattr(p, event)(strategy, **kwargs)
      File "/content/clvision-challenge-2023/utils/competition_plugins.py", line 73, in after_training_exp
        raise MaxGPUAllocationExceeded(ram_allocated, self.max_allowed)
    utils.competition_plugins.MaxGPUAllocationExceeded: GPU memory allocation (4297 MB) exceeded the maximum allowed amount which is 4000 MB.
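One way to see how close a run is to the 4000 MB limit before the competition plugin raises MaxGPUAllocationExceeded is to read PyTorch's own peak-allocation counter; this is a generic PyTorch snippet, not part of the challenge tooling.

    # Generic PyTorch check (assumes a CUDA device): reset the peak counter,
    # run one training experience, then read the peak allocation in MB.
    import torch

    torch.cuda.reset_peak_memory_stats()
    # ... run cl_strategy.train(experience) here ...
    peak_mb = torch.cuda.max_memory_allocated() / (1024 ** 2)
    print(f"peak GPU allocation: {peak_mb:.0f} MB (limit: 4000 MB)")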

'SubSequence' object has no attribute 'slice_ids'

I just cloned this repo and ran the code, but there is an error:
    Traceback (most recent call last):
      File "/codes/ConAI/train.py", line 131, in <module>
        main(args)
      File "/codes/ConAI/train.py", line 47, in main
        benchmark = get_cifar_based_benchmark(scenario_config=args.config_file, seed=args.seed)
      File "/codes/ConAI/benchmarks/__init__.py", line 27, in get_cifar_based_benchmark
        benchmark = generate_benchmark(seed=seed, train_set=train_set, test_set=test_set, **scenario_config)
      File "/codes/ConAI/benchmarks/cir_benchmark.py", line 47, in generate_benchmark
        stream_items = [create_dataset_exp_i(i) for i in range(n_e)]
      File "/codes/ConAI/benchmarks/cir_benchmark.py", line 47, in <listcomp>
        stream_items = [create_dataset_exp_i(i) for i in range(n_e)]
      File "/codes/ConAI/benchmarks/cir_benchmark.py", line 43, in create_dataset_exp_i
        ds_i = classification_subset(train_set, indices=all_indices_i)
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/classification_dataset.py", line 497, in classification_subset
        return dataset.subset(indices)
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/classification_dataset.py", line 102, in subset
        data = super().subset(indices)
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 123, in subset
        return self.__class__(datasets=[self], indices=indices)
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/data.py", line 244, in __init__
        dasub = da.subset(self._indices)
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/data_attribute.py", line 143, in subset
        self.data.subset(indices),
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 122, in subset
        return self.__class__(datasets=self._datasets, indices=new_indices)
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 85, in __init__
        self._datasets = _flatten_dataset_list(self._datasets)
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/flat_data.py", line 338, in _flatten_dataset_list
        if len(dataset) == 0:
      File "/codes/ConAI/avalanche/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 102, in __len__
        if self.slice_ids is not None:
    AttributeError: 'SubSequence' object has no attribute 'slice_ids'

I want to know what caused this error.

Question about the usage of replay buffer

Hi, I noticed that on the website, it is mentioned that

Replay Buffer: Replay buffers may not be used to store dataset samples. However, buffers may be used to store any form of data representation, such as the model's internal representations.

Does this mean that we are not allowed to use the naive ReplayPlugin, but that we can use the GenerativeReplayPlugin?
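For context, a buffer that complies with the quoted rule might store internal representations rather than raw samples, along the lines of the sketch below; the class and its replacement policy are illustrative assumptions, not the competition's reference code.

    # Illustrative "latent replay" style buffer: it stores detached feature
    # tensors (internal representations) and labels instead of dataset samples.
    import random
    import torch

    class LatentBuffer:
        def __init__(self, max_size: int):
            self.max_size = max_size
            self.storage = []                      # (feature, label) pairs

        def add(self, feats: torch.Tensor, labels: torch.Tensor):
            for f, y in zip(feats.detach().cpu(), labels.cpu()):
                if len(self.storage) < self.max_size:
                    self.storage.append((f, y))
                else:
                    # Random replacement keeps the buffer bounded.
                    self.storage[random.randrange(self.max_size)] = (f, y)

        def sample(self, n: int):
            batch = random.sample(self.storage, min(n, len(self.storage)))
            feats = torch.stack([f for f, _ in batch])
            labels = torch.stack([y for _, y in batch])
            return feats, labels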
