Comments (11)
Hey,
thanks for trying our package.
The fact that none of your configs were picked by the model suggests that something is wrong. Could you please post the ConfigSpace that defines your search space? Otherwise I can't tell you what might have happened.
Best, Stefan
from hpbandster.
Hi Stefan,
Please find below my ConfigSpace definition. Thanks for taking a look at it.
```python
import ConfigSpace as cs

def get_configspace():
    config = cs.ConfigurationSpace()
    config.add_hyperparameter(cs.Constant('training_batchsize', 1))
    config.add_hyperparameter(cs.Constant('subbatch_validation', 1))
    config.add_hyperparameter(cs.Constant('validation_batchsize', 100))
    config.add_hyperparameter(cs.Constant('validation_interval', 400))
    config.add_hyperparameter(cs.CategoricalHyperparameter(
        'loss_function',
        ['dice', 'categorical_cross_entropy', 'dice_allchannels', 'top-k(5)']))
    config.add_hyperparameter(cs.UniformFloatHyperparameter(
        'learning_rate', lower=1e-6, upper=1e-3, log=True))
    config.add_hyperparameter(cs.UniformFloatHyperparameter(
        'arch_spatial_dropout', lower=0, upper=0.8,
        default_value=0.5, log=False))
    config.add_hyperparameter(cs.CategoricalHyperparameter(
        'arch_activation', ['relu', 'leaky_relu']))
    return config
```
Oh, I see.
Could you try removing the constants from your ConfigSpace?
I know ConfigSpace supports those, but I have never tested them, so they probably don't work with BOHB. The reason the run still goes through is that BOHB falls back to Hyperband when the configuration space contains unsupported features. You would see some messages in the debug output, but I guess I should turn those into warnings that are always printed.
Sorry about that. Let me know if that fixes your problem.
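One way to follow this advice is to keep the fixed settings outside the search space entirely and merge them back into each sampled configuration inside the worker. A minimal sketch in plain Python (the `FIXED` dict and `run_trial` function are hypothetical names for illustration, not part of hpbandster):

```python
# Fixed settings that were previously Constant hyperparameters.
# They stay outside the ConfigSpace and are merged back in per trial.
FIXED = {
    'training_batchsize': 1,
    'subbatch_validation': 1,
    'validation_batchsize': 100,
    'validation_interval': 400,
}

def run_trial(sampled_config):
    """Combine the sampled hyperparameters with the fixed settings.

    `sampled_config` is the dict BOHB samples from a ConfigSpace that
    contains only the tunable hyperparameters (no Constants).
    """
    full_config = {**FIXED, **sampled_config}
    return full_config

# Example: BOHB samples only the tunable parameters.
config = run_trial({'learning_rate': 3e-4, 'arch_activation': 'relu'})
```

This keeps the model's input space purely over the tunable dimensions while the worker still sees the complete configuration.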
Great, thanks for the hint! I just started a second run without any Constants in my configspace. I will let you know whether it helped.
Could you tell me the meaning of the n_iterations
parameter passed to the BOHB.run method? I was unable to derive the meaning of this argument from the source code.
Hi Stefan,
unfortunately, removing the Constants from the ConfigSpace didn't resolve the problem. Please find attached the ConfigSpace definition, the sampled configs, and the results logged by the result logger (attached as .txt since GitHub doesn't support JSON files). I started the optimization with min_budget=2 and max_budget=10.
Do you have any idea why still none of my configs was sampled from the model?
results.txt
config.txt
configspace.txt
Hey,
Oh I see what's going on. So here is what's happening:
- you only have two budgets: 3.33 and 10 (a result of your min and max budget and the eta=3 default)
- as a consequence, you have two types of iterations: one that tries 3 configs on the small budget and advances one to the large one, and another that simply tries two configs on the largest budget.
- given your config space dimensionality (4 parameters), BOHB will start building a model after 6 evaluations on any budget.
- you do 4 iterations, which results in exactly 6 evaluations on both budgets (your scenario with only two budgets might not give you much speed-up, since you don't evaluate many more configurations on the smaller budget).
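The budget geometry described above can be reproduced with a few lines of arithmetic. This is a sketch of Hyperband's geometric budget schedule, not hpbandster's actual code:

```python
import math

def hyperband_budgets(min_budget, max_budget, eta=3):
    """Geometrically spaced budgets between min_budget and max_budget.

    s_max counts how often max_budget can be divided by eta before
    dropping below min_budget; each division adds one budget rung.
    """
    s_max = int(math.floor(math.log(max_budget / min_budget) / math.log(eta)))
    return [max_budget * eta ** (-i) for i in reversed(range(s_max + 1))]

# With min_budget=2, max_budget=10, eta=3 there are only two budgets:
budgets = hyperband_budgets(2, 10)  # [3.33..., 10.0]
```

With min_budget=2 and max_budget=10 the ratio is only 5, so a single division by eta=3 already undershoots the minimum; that is why the thread's run ends up with just the two budgets 3.33 and 10.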
So, the very next iteration should contain configurations sampled from the model. You can reuse old runs by feeding them into the model before you start BOHB again. See
this example.
Let me know if you have any trouble with that.
Best,
Stefan
Hi Stefan,
Many thanks for your help. I just started the next optimization run and will let you know once it ends. Could you elaborate a bit more on the types of iterations? What types of iterations are there, and how is the iteration type chosen? I would appreciate a short explanation or a pointer to the code.
So Hyperband and BOHB do a round robin over different iteration types that trade off an aggressive minimal budget with many configurations against a conservative budget with only a few configurations.
I like the explanation in the original Hyperband post, although their parametrization is slightly different.
The basic idea is to have the first iteration start with the minimal budget and increase it by factors of eta (=3) while cutting down the number of configurations by the same factor. The next iteration increases its smallest budget, which means that fewer configurations are evaluated, but with a higher fidelity right away.
This goes on until the iteration that only evaluates configurations on the largest budget. This is implemented in the code here. You only need to understand self.budgets and the ns variable in get_next_iteration: together they tell you how many configurations (ns) are evaluated at which budget.
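The round-robin schedule can be sketched in plain Python. This models the logic the comment describes for the two-budget case from this thread; the rounding details follow the Hyperband scheme and are not a verbatim copy of hpbandster's get_next_iteration:

```python
def iteration_schedule(budgets, iteration, eta=3):
    """Return (ns, budgets_used): how many configs run at which budget.

    `budgets` is the full geometric budget ladder; successive iterations
    start one rung higher and therefore run fewer, larger stages,
    cycling round robin through len(budgets) iteration types.
    """
    max_sh_iter = len(budgets)
    s = max_sh_iter - 1 - (iteration % max_sh_iter)
    n0 = int(max_sh_iter / (s + 1)) * eta ** s  # configs in the first stage
    ns = [max(int(n0 * eta ** (-i)), 1) for i in range(s + 1)]
    return ns, budgets[-(s + 1):]

budgets = [10 / 3, 10.0]  # the two budgets from this thread
first = iteration_schedule(budgets, 0)   # ([3, 1], [3.33..., 10.0])
second = iteration_schedule(budgets, 1)  # ([2], [10.0])
```

Iteration 0 tries 3 configs on the small budget and advances 1 to the large one; iteration 1 tries 2 configs directly on the largest budget, matching the counts given earlier in the thread.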
BTW, I updated the FAQ to include your questions here.
Let me know if you have any more questions, or close the issue if you are happy :)
As you expected, the next iteration picked some configurations from the model. Thank you very much for your help and the explanation of the BOHB internals. Closing the issue.
So, does this mean that BOHB fits a different model for each budget?
In other words, I'll only start evaluating (1 - random_fraction) of model-sampled configurations after min_points_in_model configurations have been evaluated on each budget. Is that it?
Just for the record, ConfigSpace Constants work fine with BOHB. I've tested this.