
Comments (4)

aaronkl commented on June 2, 2024

No, we haven't systematically investigated the additional overhead of the individual methods, since it is negligible (seconds) compared to the time of a single function evaluation (hours). Also note that an optimizer's overhead strongly depends on how efficiently it is implemented and which hyperparameters you use.
Having said that, since we do an exhaustive number of function evaluations (5000), model-based approaches such as SMAC might need a few hours to generate the full trajectory in Figure 7 (even though the optimizer overhead per evaluation is only a few seconds).
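
For readers who want to reproduce such a trajectory, here is a minimal sketch (not the authors' exact experiment code) using the public nasbench API: each query returns the recorded training time of the evaluated architecture, so the x-axis of a plot like Figure 7 is accumulated simulated training time, not the optimizer's wall-clock time. The tfrecord path and the random_spec helper below are illustrative assumptions.

```python
# Minimal sketch of a random-search trajectory over *simulated* training time,
# assuming the public NAS-Bench-101 API (github.com/google-research/nasbench).
import random
import numpy as np
from nasbench import api

NUM_VERTICES = 7
ALLOWED_OPS = ['conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3']

# Hypothetical path to the downloaded dataset file.
nasbench = api.NASBench('nasbench_only108.tfrecord')

def random_spec():
    """Sample random cells until one passes the benchmark's validity check."""
    while True:
        matrix = np.triu(np.random.randint(0, 2, (NUM_VERTICES, NUM_VERTICES)), 1)
        ops = ['input'] + [random.choice(ALLOWED_OPS)
                           for _ in range(NUM_VERTICES - 2)] + ['output']
        spec = api.ModelSpec(matrix=matrix, ops=ops)
        if nasbench.is_valid(spec):
            return spec

def random_search_trajectory(max_evals=5000):
    """Return (simulated_training_time, best_validation_accuracy) pairs."""
    simulated_time, best_acc, trajectory = 0.0, 0.0, []
    for _ in range(max_evals):
        data = nasbench.query(random_spec())
        simulated_time += data['training_time']  # seconds of (simulated) training
        best_acc = max(best_acc, data['validation_accuracy'])
        trajectory.append((simulated_time, best_acc))
    return trajectory
```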


hibayesian commented on June 2, 2024

@aaronkl

As you said, SMAC takes a few hours to generate the full trajectory in Figure 7. This result is based on a TPU, right?

Which implementation of SMAC did you use in your experiments?

From my experience, the open-source implementation of SMAC takes a very long time (1 hour for only 1 repeat) to generate the trajectory (around 2500 evaluations) on my laptop (Intel® Core™ i5-5200U CPU @ 2.20GHz × 4). So, in order to compute the mean performance of 500 independent runs as a function of the estimated training time, just like Figure 7 in the original paper, it would take 500 hours, which is a very big number. Unlike SMAC, Random Search and Evolutionary Search are much faster. That's why I am concerned about the additional overhead of these algorithms.


aaronkl commented on June 2, 2024

I used the implementation from here.
For the comparison we parallelized all 500 runs of SMAC on 500 different cores, which means that in actual wall-clock time it took only a few hours (everything ran on CPU).
Even though one could probably improve SMAC's optimization overhead, it will never be as fast as random search, which is fine since it is made for expensive optimization problems. Keep in mind that running SMAC for one hour corresponds to running the original benchmark for 2778 hours.
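
(A hedged sanity check of that conversion, which the thread does not spell out: if one wall-clock hour of SMAC exhausts a simulated training budget on the order of 10^7 seconds, then 10^7 s / 3600 s/h ≈ 2778 h, which matches the number above.)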

For the paper we used 500 runs to get a very solid estimate of the optimizers' performance. However, in case you just want to play around or get a quick estimate of a method's performance, you can probably get away with fewer runs (e.g. 50 or so); see the sketch below.
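
Since all runs are independent, they parallelize trivially. Below is a minimal sketch of averaging a handful of runs on a single multi-core machine; it reuses the hypothetical random_search_trajectory from the sketch above, and the time grid is an illustrative assumption, not the paper's exact setup.

```python
# Minimal sketch: run independent searches in parallel and average them.
# Reuses the hypothetical random_search_trajectory() from the sketch above.
import random
from concurrent.futures import ProcessPoolExecutor

import numpy as np

NUM_RUNS = 50  # fewer than the paper's 500 runs, for a quick estimate

def one_run(seed):
    random.seed(seed)
    np.random.seed(seed)
    return random_search_trajectory(max_evals=5000)

if __name__ == '__main__':
    with ProcessPoolExecutor() as pool:
        trajectories = list(pool.map(one_run, range(NUM_RUNS)))
    # Interpolate every run onto a common simulated-time grid, then average.
    grid = np.logspace(2, 7, num=100)  # illustrative: 1e2 .. 1e7 seconds
    curves = [np.interp(grid, *zip(*t)) for t in trajectories]
    mean_curve = np.mean(curves, axis=0)
```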


hibayesian commented on June 2, 2024

Anyway, nice work!

And thanks for your reply.

