
Comments (12)

zwsjink commented on August 24, 2024

> Cool! Then your understanding here seems correct!

> OK, thanks for your reply. So in my example, I would set --train-num-samples to 12.8M while setting --epochs to 10. Alternatively, I could also do it with 25.6M train-num-samples and 5 epochs, right? As long as the product is the same, there should be no difference in the final training performance, I suppose?

Great, thanks for your time. I will let you know once we reproduce your conclusion. Also, good luck with your PhD pursuit. :D


sagadre commented on August 24, 2024

@mingtan2 yes! that should be fine!


sagadre commented on August 24, 2024

Hi @zwsjink, the number of epochs controls the number of checkpoints that are saved during training. If training on k samples (e.g., k = 128M for the medium pool) with n epochs, we will save a checkpoint after every k // n samples are seen. Hence each epoch corresponds to seeing k // n samples from the training pool, drawn with replacement.

See here, where the number of samples per epoch is set.

See here, where the number of epochs is set to be the number of checkpoints.
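
To make the k // n bookkeeping concrete, here is a minimal Python sketch. It only illustrates the rule described above; it is not the actual DataComp code, and the function name `checkpoint_schedule` is hypothetical.

```python
# Illustrative sketch of the k // n rule (not the actual DataComp code).

def checkpoint_schedule(total_samples: int, num_checkpoints: int):
    """Split a total sample budget k into n "epochs" so that a checkpoint
    is saved every k // n samples seen."""
    samples_per_epoch = total_samples // num_checkpoints  # k // n
    epochs = num_checkpoints  # one checkpoint per "epoch"
    return samples_per_epoch, epochs

# Medium pool: k = 128M samples seen, n = 10 epochs/checkpoints
print(checkpoint_schedule(128_000_000, 10))  # (12800000, 10)
```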


sagadre commented on August 24, 2024

In your example of a 30M dataset at the medium scale with number of epochs = 10, each epoch would correspond to sampling 12.8M samples from the 30M dataset (with replacement).
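
As a toy illustration of what with-replacement sampling per "epoch" means, here is a scaled-down sketch (30 stands in for the 30M-sample pool and 12 for the 12.8M draws per epoch; these small numbers are assumptions for readability, not the real scale):

```python
import numpy as np

rng = np.random.default_rng(0)
pool = np.arange(30)  # toy stand-in for a 30M-sample dataset

# One "epoch": draw with replacement from the pool.
epoch = rng.choice(pool, size=12, replace=True)
print(epoch)  # some samples repeat; others are never seen this epoch
```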


zwsjink commented on August 24, 2024

> In your example of a 30M dataset at the medium scale with number of epochs = 10, each epoch would correspond to sampling 12.8M samples from the 30M dataset (with replacement).

OK, thanks for your reply. So in my example, I would set --train-num-samples to 12.8M while setting --epochs to 10. Alternatively, I could also do it with 25.6M train-num-samples and 5 epochs, right? As long as the product is the same, there should be no difference in the final training performance, I suppose?


sagadre commented on August 24, 2024

To participate in DataComp, you don't have to set --train-num-samples or --epochs directly. Please see this section of the README for a sample command line, where $scale would be medium for the 128M pool.

You can additionally set the --num_checkpoints flag as seen here to specify how many checkpoints you would like to save. Our code will take care of setting --train-num-samples and --epochs accordingly under the hood.

Hope this helps!


sagadre commented on August 24, 2024

As for the performance deltas from setting different --num_checkpoints values, there should not be dramatic changes in downstream performance. At the start of every "epoch" the dataloader is re-initialized, so different values of --num_checkpoints will lead to different data orders, similar to changing the random seed. Please see here for a note on seed variance.
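
A minimal sketch of why the checkpoint count perturbs the data order: if the loader is re-initialized at each "epoch" boundary, splitting the same sample budget into more epochs yields a different sequence, much like changing the seed. This only illustrates the idea; it is not open_clip's actual loader logic.

```python
import random

def sample_order(total: int, num_checkpoints: int, seed: int = 0) -> list:
    """Draw `total` sample indices with replacement, re-seeding at each
    "epoch" boundary to mimic dataloader re-initialization."""
    per_epoch = total // num_checkpoints
    order = []
    for epoch in range(num_checkpoints):
        rng = random.Random(seed + epoch)  # fresh loader state per epoch
        order += [rng.randrange(100) for _ in range(per_epoch)]
    return order

# Same 12-sample budget, different checkpoint counts -> different orders.
print(sample_order(12, num_checkpoints=1))
print(sample_order(12, num_checkpoints=4))
```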


zwsjink commented on August 24, 2024

> To participate in DataComp, you don't have to set --train-num-samples or --epochs directly. Please see this section of the README for a sample command line, where $scale would be medium for the 128M pool.
>
> You can additionally set the --num_checkpoints flag as seen here to specify how many checkpoints you would like to save. Our code will take care of setting --train-num-samples and --epochs accordingly under the hood.
>
> Hope this helps!

Well, currently I'm not planning to participate in the track; I'm just trying to follow the paper and do something very similar with your OPEN_CLIP & CLIP_BENCHMARK toolboxes on a different dataset.


sagadre commented on August 24, 2024

Cool! Then your understanding here seems correct!

> OK, thanks for your reply. So in my example, I would set --train-num-samples to 12.8M while setting --epochs to 10. Alternatively, I could also do it with 25.6M train-num-samples and 5 epochs, right? As long as the product is the same, there should be no difference in the final training performance, I suppose?
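
For completeness, the equivalence being confirmed here is just that the total number of samples seen is the product of the two flags; a quick sanity check with the values from the discussion:

```python
# Total samples seen = train-num-samples * epochs; the two configurations
# discussed above see the same 128M samples overall.
config_a = (12_800_000, 10)  # (--train-num-samples, --epochs)
config_b = (25_600_000, 5)

assert config_a[0] * config_a[1] == config_b[0] * config_b[1] == 128_000_000
```

As noted earlier in the thread, the two settings re-initialize the dataloader at different points, so the data order differs slightly (akin to a seed change), but the total sample budget is identical.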


mingtan2 commented on August 24, 2024

> As for the performance deltas from setting different --num_checkpoints values, there should not be dramatic changes in downstream performance. At the start of every "epoch" the dataloader is re-initialized, so different values of --num_checkpoints will lead to different data orders, similar to changing the random seed. Please see here for a note on seed variance.

@sagadre Had the same question. Thanks for explaining here. Then, is it allowed to set num_checkpoints to 1 (instead of 8) to accelerate training without re-initializing the dataloader?


mingtan2 commented on August 24, 2024

> @mingtan2 yes! that should be fine!

@sagadre In addition, is it allowed to disable dataset_resampled here, if that is compatible with the DataComp challenge settings?


sagadre commented on August 24, 2024

Hi @mingtan2, yes, you should keep --dataset_resampled for the challenge.
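
To illustrate the distinction behind --dataset_resampled, here is a simplified sketch of the assumed semantics (sampling with replacement vs. iterating without replacement); this is a toy model, not open_clip's actual webdataset pipeline.

```python
import random

data = list(range(10))  # toy stand-in for the training samples

def resampled_epoch(n: int, rng: random.Random) -> list:
    """With replacement, as with --dataset_resampled: duplicates possible."""
    return [rng.choice(data) for _ in range(n)]

def sequential_epoch(rng: random.Random) -> list:
    """Without replacement: each sample seen exactly once per pass."""
    order = data[:]
    rng.shuffle(order)
    return order

rng = random.Random(0)
print(resampled_epoch(10, rng))  # may repeat entries
print(sequential_epoch(rng))     # a permutation of the data
```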

