An extension to Sacred for automated hyperparameter optimization.
You can find the documentation for labwatch here
Traceback (most recent call last):
File "train_model.py", line 330, in
@ex.automain
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/sacred/experiment.py", line 137, in automain
self.run_commandline()
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/sacred/experiment.py", line 260, in run_commandline
return self.run(cmd_name, config_updates, named_configs, {}, args)
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/sacred/experiment.py", line 208, in run
meta_info, options)
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/sacred/experiment.py", line 433, in _create_run
None))
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/sacred/initialize.py", line 368, in create_run
ncfg_updates = scaff.run_named_config(cfg_name)
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/sacred/initialize.py", line 92, in run_named_config
fallback=self.fallback)
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/labwatch/assistant.py", line 203, in _search_space_wrapper
values = self.get_suggestion()
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/labwatch/assistant.py", line 315, in get_suggestion
self.update_optimizer()
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/labwatch/assistant.py", line 296, in update_optimizer
for job in completed_jobs if job["_id"] not in self.known_jobs]
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/labwatch/assistant.py", line 296, in
for job in completed_jobs if job["_id"] not in self.known_jobs]
File "/home/ubuntu/miniconda3/envs/mongo/lib/python3.7/site-packages/labwatch/assistant.py", line 448, in convert_result
"or a dict".format(type(result)))
ValueError: The result of your experiment is a <class 'NoneType'> but labwatch expects either a number or a dict
I'm getting the above error when I try to run my experiment.
At the end of my main method, I have the following lines:
results = dict()
results["val_metric"] = some_value
return results
which I believe is returning a dict. But I'm getting the error nonetheless...
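For reference, here is my reading of the check that fails, a minimal sketch reconstructed from the error message above, not labwatch's actual source:

```python
# Sketch of the validation labwatch seems to perform in convert_result,
# reconstructed from the error message (not the real labwatch code):
def convert_result(result):
    if isinstance(result, (int, float)):
        return float(result)
    if isinstance(result, dict):
        return result
    raise ValueError(
        "The result of your experiment is a {} but labwatch expects "
        "either a number or a dict".format(type(result)))
```

The dict I return should pass this check, which makes me suspect that update_optimizer is picking up an older completed run from the database, one that was stored before I added the return statement and therefore has a result of None.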
Since this commit:
IDSIA/sacred@216549c#diff-9c6482f0e6fd19199727a4946e5d74f7
opt.has_pymongo
no longer exists, which leads to the following error when running any labwatch experiment:
File "experiment.py", line 6, in <module>
from labwatch.assistant import LabAssistant
File "/local/lib/python3.4/site-packages/labwatch/__init__.py", line 4, in <module>
from labwatch.assistant import LabAssistant
File "/local/lib/python3.4/site-packages/labwatch/assistant.py", line 25, in <module>
if not opt.has_pymongo:
AttributeError: 'module' object has no attribute 'has_pymongo'
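A possible local workaround (my own assumption, not an official fix) would be to replace the removed sacred attribute check in labwatch/assistant.py with a direct import test:

```python
# Hypothetical replacement for the removed `opt.has_pymongo` check:
# test for pymongo directly instead of relying on sacred's
# optional-dependency module.
try:
    import pymongo  # noqa: F401
    has_pymongo = True
except ImportError:
    has_pymongo = False
```

This mirrors the optional-dependency pattern sacred itself uses, so labwatch would no longer depend on that internal attribute surviving refactors.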
If you want to use BayesianOptimization you have to install the following dependencies:
https://github.com/automl/RoBO
george
RUNNING sampled configs
Traceback (most recent call last):
File "1.py", line 6, in <module>
ex.run(named_configs=['search_space'])
File "/home/mlspeech/felixk/.local/share/virtualenvs/a-AaumlMlO/lib/python3.6/site-packages/sacred/experiment.py", line 208, in run
meta_info, options)
File "/home/mlspeech/felixk/.local/share/virtualenvs/a-AaumlMlO/lib/python3.6/site-packages/sacred/experiment.py", line 433, in _create_run
None))
File "/home/mlspeech/felixk/.local/share/virtualenvs/a-AaumlMlO/lib/python3.6/site-packages/sacred/initialize.py", line 368, in create_run
ncfg_updates = scaff.run_named_config(cfg_name)
File "/home/mlspeech/felixk/.local/share/virtualenvs/a-AaumlMlO/lib/python3.6/site-packages/sacred/initialize.py", line 92, in run_named_config
fallback=self.fallback)
File "/home/mlspeech/felixk/.local/share/virtualenvs/a-AaumlMlO/lib/python3.6/site-packages/labwatch/assistant.py", line 196, in _search_space_wrapper
assert not fallback, "{}".format(fallback)
AssertionError: {'_log': <RootLogger root (INFO)>}
For me, this simple example tries two random values for f and then gets stuck, always retrying the same value. Not sure if this is a bug in Labwatch or in RoBO.
from sacred import Experiment
from labwatch import LabAssistant
from labwatch.hyperparameters import UniformInt
from labwatch.optimizers import BayesianOptimization

ex = Experiment()
la = LabAssistant(ex, database_name='labwatch_demo2',
                  optimizer=BayesianOptimization)

@ex.config
def cfg():
    f = 42

@la.searchspace
def small_search_space():
    f = UniformInt(lower=32, upper=64, default=32)

@ex.automain
def run(f):
    return f
Like this:
(labwatch) greff@Liz:~/Programming/labwatch/examples$ python my_example.py -p -d with small_search_space
WARNING: SMAC not found
Configuration (modified, added, typechanged, doc):
f = 61
seed = 862826954 # the random seed for this experiment
-------------------------------------------------------------------------------
INFO - my_example - Running command 'run'
INFO - my_example - Started run with ID "1"
INFO - my_example - Result: 61
INFO - my_example - Completed after 0:00:00
(labwatch) greff@Liz:~/Programming/labwatch/examples$ python my_example.py -p -d with small_search_space
WARNING: SMAC not found
Configuration (modified, added, typechanged, doc):
f = 52
seed = 674124630 # the random seed for this experiment
-------------------------------------------------------------------------------
INFO - my_example - Running command 'run'
INFO - my_example - Started run with ID "2"
INFO - my_example - Result: 52
INFO - my_example - Completed after 0:00:00
(labwatch) greff@Liz:~/Programming/labwatch/examples$ python my_example.py -p -d with small_search_space
WARNING: SMAC not found
Configuration (modified, added, typechanged, doc):
f = 52
seed = 886589901 # the random seed for this experiment
-------------------------------------------------------------------------------
INFO - my_example - Running command 'run'
INFO - my_example - Started run with ID "3"
INFO - my_example - Result: 52
INFO - my_example - Completed after 0:00:00
(labwatch) greff@Liz:~/Programming/labwatch/examples$ python my_example.py -p -d with small_search_space
WARNING: SMAC not found
Configuration (modified, added, typechanged, doc):
f = 52
seed = 614908786 # the random seed for this experiment
-------------------------------------------------------------------------------
INFO - my_example - Running command 'run'
INFO - my_example - Started run with ID "4"
INFO - my_example - Result: 52
INFO - my_example - Completed after 0:00:00
I've noticed that the intention seems to be that one should run labwatch from the command line, and I'm wondering what the reason for that is. Is there some fundamental break from sacred's interface?
We run all our experiments from within Python, so this is a highly desirable feature for us. Happy to take a first pass at it if it makes sense to do so.
Thanks,
Andrew
Hi there!
after reading your paper about Sacred, I wanted to try out a basic example using Sacred and Labwatch. However, I faced some issues while installing Labwatch on my machine (Ubuntu 17.10, Python 2.7.14):
Processing /home/benny/projects/labwatch
Complete output from command python setup.py egg_info:
Invalid MIT-MAGIC-COOKIE-1 key
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
warnings.warn(str(e), _gtk.Warning)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-oZQ4Vu-build/setup.py", line 22, in <module>
from labwatch import __about__
File "labwatch/__init__.py", line 4, in <module>
from labwatch.assistant import LabAssistant
File "labwatch/assistant.py", line 18, in <module>
from labwatch.optimizers.random_search import RandomSearch
File "labwatch/optimizers/__init__.py", line 14, in <module>
from .bayesian_optimization import BayesianOptimization
File "labwatch/optimizers/bayesian_optimization.py", line 10, in <module>
from robo.models.gaussian_process_mcmc import GaussianProcessMCMC
File "/usr/local/lib/python2.7/dist-packages/robo/models/gaussian_process_mcmc.py", line 10, in <module>
from robo.models.gaussian_process import GaussianProcess
File "/usr/local/lib/python2.7/dist-packages/robo/models/gaussian_process.py", line 14, in <module>
class GaussianProcess(BaseModel):
File "/usr/local/lib/python2.7/dist-packages/robo/models/gaussian_process.py", line 70, in GaussianProcess
def train(self, X, y, do_optimize=True):
TypeError: unbound method _check_shapes_train() must be called with BaseModel instance as first argument (got function instance instead)
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-oZQ4Vu-build/
After looking into the gaussian_process code of RoBO, I saw that the issue comes from an @ decorator. I thought maybe this was not supported before Python 3.6.3. Hence, I tried everything once more with Python 3, installing everything again with pip3, and voilà: the error is gone!
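For what it's worth, here is a minimal reproduction of what I think is going on (my assumption; the class and method names mirror RoBO's traceback, but the bodies are stubs). Using one method of a class as a decorator for another works on Python 3, where the attribute is a plain function, but fails on Python 2, where it is an unbound method that insists on a BaseModel instance as its first argument:

```python
# Stub mirroring the RoBO pattern from the traceback (assumed, simplified):
class BaseModel(object):
    def _check_shapes_train(func):  # intended to be used as a decorator
        def wrapper(self, X, y, **kwargs):
            assert len(X) == len(y), "X and y must have the same length"
            return func(self, X, y, **kwargs)
        return wrapper

class GaussianProcess(BaseModel):
    @BaseModel._check_shapes_train  # TypeError on Python 2, fine on Python 3
    def train(self, X, y, do_optimize=True):
        return len(X)
```

On Python 2, the decoration line itself raises the TypeError seen in the pip output, which is why the failure happens at import time during setup.py egg_info.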
Finally, my two questions:
a) Is there no Python 2.7 support?
b) The installation page of Labwatch does not say anything about having to install some dependencies manually. It mentions RoBO, but it sounds more like an optional step. Is it actually required to install both RoBO and ConfigSpace, or am I just doing something wrong? Or is my machine just having fun with me?
Thank you in advance!
Hi,
Could you provide an example of how to perform a simple hyperparameter search with a nested configuration using labwatch?
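In the meantime, a workaround I'm considering (purely my own sketch, not labwatch API) is to flatten the nested configuration into dotted keys before defining the search space, and to unflatten it again inside the main function:

```python
def flatten(cfg, prefix=""):
    """Turn {'opt': {'lr': 0.1}} into {'opt.lr': 0.1}."""
    flat = {}
    for key, value in cfg.items():
        name = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

def unflatten(flat):
    """Inverse of flatten: rebuild the nested dict from dotted keys."""
    nested = {}
    for name, value in flat.items():
        parts = name.split(".")
        node = nested
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested
```

Is something like this necessary, or does labwatch handle nested dicts in a search space natively?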
Thank you in advance!
Hello,
I am not able to find the license under which the code is provided. Did I miss the information somewhere?
Would it be possible to add the license and a license.txt to the repository?
Thanks in advance.
Siegfried
I was delighted to stumble upon this project just now, in Klaus's 2017 SciPy sacred write-up.
I feel that there is great potential and need for such a project. sacred has revolutionized ML development for us, and we are rooting for it to become a standard component of the modern ML ecosystem.
In much the same way that sacred cracked down on the massive headache of ensuring reproducibility during model development by providing high-level, easy-to-use tools, there seems to be a huge opportunity to tame the wild west of hyperparameter tuning by building high-level tools as a community.
Building on top of sacred for this makes perfect sense. For us, the only supported way to run experiments is through sacred, so if we were to plug into a hyperopt library (e.g. Spearmint), points searched in hyperparameter space would be represented as sacred experiments anyway.
Additionally, afaik there are no major FOSS hyperparameter projects right now. Spearmint is commonly used in my experience, but not free for commercial use. About 6 months ago I did a fairly extensive survey of available options, and all I found was hyperopt, which is okay but can be a pain to use[1] and lacks Bayesian algorithms. More recently, I stumbled upon the Ray Tune framework, which I haven't had a chance to investigate deeply but looks very promising.
I'm curious about the original authors' vision for this project.
From where I'm sitting (admittedly having not looked deeply into the current design of labwatch), I feel that a keras-level tool would be appropriate: high-level glue between hyperopt libraries and sacred, focused on providing a clean interface but not necessarily implementing the underlying hyperopt algorithms.
[1] Referring mostly to its parallelization features; not trying to knock hyperopt. I found it to be the best option in my recent survey, and it's pretty easy to use for simple searches.
Hi there!
I am currently testing labwatch to do some hyperparameter optimization. Unfortunately I get an error due to a non-empty fallback dict, thrown in line 196 of assistant.py.
When create_run() in initialize.py is executed, the scaffold in line 363 calls gather_fallback(), thereby setting self.fallback to a dict that includes the root logger. This is passed to _search_space_wrapper() (line 172 in assistant.py), leading to the error. It seems to me that there is no way to reach line 196 of assistant.py with an empty fallback dict or fallback=None.
Does anybody have an idea whether this is a bug, or am I doing something wrong?
Cheers
Thomas
Do we need to do something special when running PyTorch and labwatch together?
I can use the basic version of labwatch (without BayesianOptimization and SMAC) and run RandomSearch.
But if I try to use BayesianOptimization and SMAC with PyTorch, I get a segmentation fault, which I could track down to a .cpp binding.
Thanks for your help.
Hello,
I attempt to run the following code:
from sacred import Experiment
from labwatch.assistant import LabAssistant
from labwatch.hyperparameters import UniformInt
from labwatch.optimizers.random_search import RandomSearch

ex = Experiment()
la = LabAssistant(ex, database_name='labwatch_demo2',
                  optimizer=RandomSearch)

@la.searchspace
def small_search_space():
    f = UniformInt(lower=32, upper=64, default=32)

@ex.automain
def run(f):
    return f
and get the following error:
If you want to use BayesianOptimization you have to install the following dependencies:
https://github.com/automl/RoBO
george
Traceback (most recent call last):
File "b.py", line 9, in <module>
@la.searchspace
AttributeError: 'LabAssistant' object has no attribute 'searchspace'
Am I missing something?
Thanks,
Felix.
I am trying to use the SMAC optimizer in the simple branin example script:
#!/usr/bin/env python
# coding=utf-8
from __future__ import division, print_function, unicode_literals

from sacred import Experiment
from labwatch.assistant import LabAssistant
from labwatch.hyperparameters import UniformFloat
from labwatch.optimizers.smac_wrapper import SMAC
import numpy as np

ex = Experiment()
a = LabAssistant(ex, "test", optimizer=SMAC)

@ex.config
def cfg():
    x = (0., 5.)

@a.search_space
def search_space():
    x = (UniformFloat(-5, 10), UniformFloat(0, 15))

@ex.automain
def branin(x):
    x1, x2 = x
    print("{:.2f}, {:.2f}".format(x1, x2))
    y = (x2 - (5.1 / (4 * np.pi ** 2)) * x1 ** 2 + 5 * x1 / np.pi - 6) ** 2
    y += 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10
    return y
I get the following error:
Traceback (most recent call last):
File "labwatch_test.py", line 26, in <module>
@ex.automain
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/sacred/experiment.py", line 132, in automain
self.run_commandline()
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/sacred/experiment.py", line 250, in run_commandline
return self.run(cmd_name, config_updates, named_configs, {}, args)
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/sacred/experiment.py", line 198, in run
meta_info, options)
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/sacred/experiment.py", line 423, in _create_run
force=options.get(ForceOption.get_flag(), False))
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/sacred/initialize.py", line 332, in create_run
ncfg_updates = scaff.run_named_config(cfg_name)
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/sacred/initialize.py", line 93, in run_named_config
fallback=self.fallback)
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/labwatch/assistant.py", line 190, in _search_space_wrapper
self.optimizer = self.optimizer_class(self.current_search_space)
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/labwatch/optimizers/smac_wrapper.py", line 71, in __init__
super(SMAC, self).__init__(sacred_space_to_configspace(config_space))
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/labwatch/converters/convert_to_configspace.py", line 110, in sacred_space_to_configspace
converted_param = convert_simple_param(name, param)
File "/cluster/home/miniconda3/envs/lib/python3.5/site-packages/labwatch/converters/convert_to_configspace.py", line 56, in convert_simple_param
log=param["log_scale"])
File "ConfigSpace/hyperparameters.pyx", line 331, in ConfigSpace.hyperparameters.UniformFloatHyperparameter.__init__
TypeError: __init__() got an unexpected keyword argument 'default'
From looking at the code, I believe that the ConfigSpace package expects the keyword argument to be called "default_value" instead of "default". Is this a version-compatibility problem? Are there versions of labwatch and ConfigSpace which work well together without being deprecated?
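If it helps anyone hitting the same error, a version-agnostic converter could pick the keyword by inspecting the constructor. This is a sketch with stub classes standing in for the old and new ConfigSpace hyperparameter classes, not labwatch or ConfigSpace code (and note that inspect.signature may not work on Cython-compiled classes, so a try/except fallback would be safer in practice):

```python
import inspect

def default_kwarg_name(cls):
    """Return 'default_value' if the constructor accepts it, else 'default'."""
    params = inspect.signature(cls.__init__).parameters
    return "default_value" if "default_value" in params else "default"

# Stubs standing in for old/new ConfigSpace UniformFloatHyperparameter:
class OldUniformFloat(object):
    def __init__(self, name, lower, upper, default=None, log=False):
        self.default = default

class NewUniformFloat(object):
    def __init__(self, name, lower, upper, default_value=None, log=False):
        self.default = default_value

def make_uniform_float(cls, name, lower, upper, default, log=False):
    # Pass the default under whichever keyword this ConfigSpace version uses.
    kwargs = {default_kwarg_name(cls): default, "log": log}
    return cls(name, lower, upper, **kwargs)
```

A shim like this in labwatch's convert_to_configspace.py could cover both ConfigSpace releases instead of pinning one version.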