facebook / ax

Adaptive Experimentation Platform

Home Page: https://ax.dev

License: MIT License

Python 51.89% CSS 0.36% JavaScript 0.34% HTML 0.02% Jupyter Notebook 47.15% Shell 0.11% Makefile 0.07% Batchfile 0.07%

ax's Introduction

Ax Logo



Ax is an accessible, general-purpose platform for understanding, managing, deploying, and automating adaptive experiments.

Adaptive experimentation is the machine-learning guided process of iteratively exploring a (possibly infinite) parameter space in order to identify optimal configurations in a resource-efficient manner. Ax currently supports Bayesian optimization and bandit optimization as exploration strategies. Bayesian optimization in Ax is powered by BoTorch, a modern library for Bayesian optimization research built on PyTorch.

For full documentation and tutorials, see the Ax website.

Why Ax?

  • Versatility: Ax supports different kinds of experiments, from dynamic ML-assisted A/B testing to hyperparameter optimization in machine learning.
  • Customization: Ax makes it easy to add new modeling and decision algorithms, enabling research and development with minimal overhead.
  • Production-completeness: Ax comes with storage integration and the ability to fully save and reload experiments.
  • Support for multi-modal and constrained experimentation: Ax allows for running and combining multiple experiments (e.g. simulation with a real-world "online" A/B test) and for constrained optimization (e.g. improving classification accuracy without a significant increase in resource utilization).
  • Efficiency in high-noise settings: Ax offers state-of-the-art algorithms specifically geared to noisy experiments, such as simulations with reinforcement-learning agents.
  • Ease of use: Ax includes 3 different APIs that strike different balances between lightweight structure and flexibility. The Service API (recommended for the vast majority of use cases) provides an extensive, robust, and easy-to-use interface to Ax; the Loop API enables particularly concise usage; and the Developer API enables advanced experimental and methodological control.

Getting Started

To run a simple optimization loop in Ax (using the Booth response surface as the artificial evaluation function):

>>> from ax import optimize
>>> best_parameters, best_values, experiment, model = optimize(
        parameters=[
          {
            "name": "x1",
            "type": "range",
            "bounds": [-10.0, 10.0],
          },
          {
            "name": "x2",
            "type": "range",
            "bounds": [-10.0, 10.0],
          },
        ],
        # Booth function
        evaluation_function=lambda p: (p["x1"] + 2*p["x2"] - 7)**2 + (2*p["x1"] + p["x2"] - 5)**2,
        minimize=True,
    )

# best_parameters contains {'x1': 1.02, 'x2': 2.97}; the global min is (1, 3)
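The snippet above uses the Loop API. The Service API mentioned earlier exposes the same optimization as an ask/tell loop; below is a minimal sketch of the same Booth problem via AxClient (method names follow the public AxClient interface, but exact defaults may vary between Ax versions):

from ax.service.ax_client import AxClient

def booth(p):
    return (p["x1"] + 2 * p["x2"] - 7) ** 2 + (2 * p["x1"] + p["x2"] - 5) ** 2

ax_client = AxClient()
ax_client.create_experiment(
    name="booth_experiment",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-10.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [-10.0, 10.0]},
    ],
    objective_name="booth",
    minimize=True,
)

for _ in range(20):
    parameters, trial_index = ax_client.get_next_trial()  # ask
    ax_client.complete_trial(trial_index=trial_index, raw_data=booth(parameters))  # tell

best_parameters, values = ax_client.get_best_parameters()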

Installation

Requirements

You need Python 3.10 or later to run Ax.

The required Python dependencies are:

  • botorch
  • jinja2
  • pandas
  • scipy
  • scikit-learn
  • plotly >=2.2.1

Stable Version

Installing via pip

We recommend installing Ax via pip (even if using Conda environment):

conda install pytorch torchvision -c pytorch  # OSX only (details below)
pip install ax-platform

Installation will use Python wheels from PyPI, available for OSX, Linux, and Windows.

Note: Make sure the pip being used to install ax-platform is actually the one from the newly created Conda environment. If you're using a Unix-based OS, you can use which pip to check.

Recommendation for MacOS users: PyTorch is a required dependency of BoTorch, and can be automatically installed via pip. However, we recommend you install PyTorch manually before installing Ax, using the Anaconda package manager. Installing from Anaconda will link against MKL (a library that optimizes mathematical computation for Intel processors). This will result in up to an order-of-magnitude speed-up for Bayesian optimization, as at the moment, installing PyTorch from pip does not link against MKL.

If you need CUDA on MacOS, you will need to build PyTorch from source. Please consult the PyTorch installation instructions above.

Optional Dependencies

To use Ax with a notebook environment, you will need Jupyter. Install it first:

pip install jupyter

If you want to store the experiments in MySQL, you will need SQLAlchemy:

pip install SQLAlchemy

Latest Version

Installing from Git

You can install the latest (bleeding edge) version from Git.

First, see recommendation for installing PyTorch for MacOS users above.

At times, the bleeding edge for Ax can depend on bleeding edge versions of BoTorch (or GPyTorch). We therefore recommend installing those from Git as well:

pip install git+https://github.com/cornellius-gp/linear_operator.git
pip install git+https://github.com/cornellius-gp/gpytorch.git
export ALLOW_LATEST_GPYTORCH_LINOP=true
pip install git+https://github.com/pytorch/botorch.git
export ALLOW_BOTORCH_LATEST=true
pip install git+https://github.com/facebook/Ax.git#egg=ax-platform

Optional Dependencies

If using Ax in Jupyter notebooks:

pip install git+https://github.com/facebook/Ax.git#egg=ax-platform[notebook]

To support plotly-based plotting in newer Jupyter notebook versions:

pip install "notebook>=5.3" "ipywidgets==7.5"

See Plotly repo's README for details and JupyterLab instructions.

If storing Ax experiments via SQLAlchemy in MySQL or SQLite:

pip install git+https://github.com/facebook/Ax.git#egg=ax-platform[mysql]

Join the Ax Community

Getting help

Please open an issue on our issues page with any questions, feature requests or bug reports! If posting a bug report, please include a minimal reproducible example (as a code snippet) that we can use to reproduce and debug the problem you encountered.

Contributing

See the CONTRIBUTING file for how to help out.

When contributing to Ax, we recommend cloning the repository and installing all optional dependencies:

pip install git+https://github.com/cornellius-gp/linear_operator.git
pip install git+https://github.com/cornellius-gp/gpytorch.git
export ALLOW_LATEST_GPYTORCH_LINOP=true
pip install git+https://github.com/pytorch/botorch.git
export ALLOW_BOTORCH_LATEST=true
git clone https://github.com/facebook/ax.git --depth 1
cd ax
pip install -e .[tutorial]

See recommendation for installing PyTorch for MacOS users above.

The above example limits the cloned directory size via the --depth argument to git clone. If you require the entire commit history you may remove this argument.

License

Ax is licensed under the MIT license.

ax's People

Contributors

2timesjay, adamobeng, amyreese, balandat, bernardbeckerman, bletham, cesar-cardoso, danielrjiang, dme65, eonofrey, ericzlou, esantorella, itsmrlin, joelmarcey, kkashin, ldworkin, lena-kashtelyan, liangshi7, liusulin, mgarrard, mgrange1998, mpolson64, pcanaran, qingfeng10, saitcakmak, sdaulton, sebastianament, sophiawho, thatch, zcohn


ax's Issues

OSError: [Errno 12] Cannot allocate memory

I am running the notebook from this tutorial: https://ax.dev/tutorials/tune_cnn.html.
I get the error shown in the title when I run the following code block.

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
        {"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
    ],
    evaluation_function=train_evaluate,
    objective_name='accuracy',
)

Changing 'total_trials' or the device type didn't help.

The error stack is as follows. Thanks in advance!

[INFO 05-24 11:51:58] ax.service.utils.dispatch: Using Bayesian Optimization generation strategy. Iterations after 5 will take longer to generate due to model-fitting.
[INFO 05-24 11:51:58] ax.service.managed_loop: Started full optimization with 20 steps.
[INFO 05-24 11:51:58] ax.service.managed_loop: Running optimization trial 1...

OSError Traceback (most recent call last)
in
5 ],
6 evaluation_function=train_evaluate,
----> 7 objective_name='accuracy',
8 )

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/service/managed_loop.py in optimize(parameters, evaluation_function, experiment_name, objective_name, minimize, parameter_constraints, outcome_constraints, total_trials, arms_per_trial, wait_time)
204 wait_time=wait_time,
205 )
--> 206 loop.full_run()
207 parameterization, values = loop.get_best_point()
208 return parameterization, values, loop.experiment, loop.get_current_model()

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/service/managed_loop.py in full_run(self)
148 logger.info(f"Started full optimization with {num_steps} steps.")
149 for _ in range(num_steps):
--> 150 self.run_trial()
151 return self
152

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/service/managed_loop.py in run_trial(self)
139 else: # pragma: no cover
140 raise ValueError(f"Invalid number of arms per trial: {arms_per_trial}")
--> 141 trial.fetch_data()
142 self.current_trial += 1
143

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/core/base_trial.py in fetch_data(self, metrics, **kwargs)
257 """
258 return self.experiment._fetch_trial_data(
--> 259 trial_index=self.index, metrics=metrics, **kwargs
260 )
261

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/core/simple_experiment.py in _fetch_trial_data(self, trial_index, metrics, **kwargs)
203 self, trial_index: int, metrics: Optional[List[Metric]] = None, **kwargs: Any
204 ) -> Data:
--> 205 return self.eval_trial(self.trials[trial_index])
206
207 @copy_doc(Experiment.add_tracking_metric)

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/core/simple_experiment.py in eval_trial(self, trial)
117 trial.mark_running()
118 evaluations[not_none(trial.arm).name] = self.evaluation_function_outer(
--> 119 not_none(trial.arm).parameters, None
120 )
121 elif isinstance(trial, BatchTrial):

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/core/simple_experiment.py in evaluation_function_outer(self, parameterization, weight)
174 if num_evaluation_function_params == 1:
175 # pyre-fixme[20]: Anonymous call expects argument $1.
--> 176 evaluation = self._evaluation_function(parameterization)
177 elif num_evaluation_function_params == 2:
178 evaluation = self._evaluation_function(parameterization, weight)

in train_evaluate(parameterization)
1 def train_evaluate(parameterization):
----> 2 net = train(train_loader=train_loader, parameters=parameterization, dtype=dtype, device=device)
3 return evaluate(
4 net=net,
5 data_loader=valid_loader,

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/ax/utils/tutorials/cnn_utils.py in train(train_loader, parameters, dtype, device)
126
127 # Train Network
--> 128 for inputs, labels in train_loader:
129 # move data to proper dtype and device
130 inputs = inputs.to(device=device)

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/torch/utils/data/dataloader.py in iter(self)
191
192 def iter(self):
--> 193 return _DataLoaderIter(self)
194
195 def len(self):

~/.local/anaconda3/envs/ax/lib/python3.7/site-packages/torch/utils/data/dataloader.py in init(self, loader)
467 # before it starts, and del tries to join but will get:
468 # AssertionError: can only join a started process.
--> 469 w.start()
470 self.index_queues.append(index_queue)
471 self.workers.append(w)

~/.local/anaconda3/envs/ax/lib/python3.7/multiprocessing/process.py in start(self)
110 'daemonic processes are not allowed to have children'
111 _cleanup()
--> 112 self._popen = self._Popen(self)
113 self._sentinel = self._popen.sentinel
114 # Avoid a refcycle if the target function holds an indirect

~/.local/anaconda3/envs/ax/lib/python3.7/multiprocessing/context.py in _Popen(process_obj)
221 @staticmethod
222 def _Popen(process_obj):
--> 223 return _default_context.get_context().Process._Popen(process_obj)
224
225 class DefaultContext(BaseContext):

~/.local/anaconda3/envs/ax/lib/python3.7/multiprocessing/context.py in _Popen(process_obj)
275 def _Popen(process_obj):
276 from .popen_fork import Popen
--> 277 return Popen(process_obj)
278
279 class SpawnProcess(process.BaseProcess):

~/.local/anaconda3/envs/ax/lib/python3.7/multiprocessing/popen_fork.py in init(self, process_obj)
18 self.returncode = None
19 self.finalizer = None
---> 20 self._launch(process_obj)
21
22 def duplicate_for_child(self, fd):

~/.local/anaconda3/envs/ax/lib/python3.7/multiprocessing/popen_fork.py in _launch(self, process_obj)
68 code = 1
69 parent_r, child_w = os.pipe()
---> 70 self.pid = os.fork()
71 if self.pid == 0:
72 try:

OSError: [Errno 12] Cannot allocate memory

Is there a way to set a random seed for the Ax Service API?

I have a use case in which users want to be able to view the trials they are going to run before they run them, and I would like to show the exact values of the trials that are going to be run. Is there a way to set a random seed for the Service API for reproducibility?
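For illustration, later Ax releases expose a random_seed argument on the AxClient constructor; a minimal sketch, assuming that argument is available in your version:

from ax.service.ax_client import AxClient

# `random_seed` fixes the seed used for the quasi-random (Sobol) initialization
# trials, making the generated candidates reproducible across runs.
ax_client = AxClient(random_seed=12345)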

Inconsistent Hyperparameter Tutorial Results

I've run the hyperparameter tuning tutorial in Google Colab:
https://colab.research.google.com/drive/1P6TvA9UZDtLf9dMFTcYWm_RUBW0wpsiV#scrollTo=mzPpAbbyGSBf

I wasn't able to plot my results due to #83, but I am able to see the optimization results printed in the notebook.

best_parameters
{'lr': 0.00018387879800690676, 'momentum': 0.8395379415413641}
means, covariances = values
means, covariances
({'accuracy': 0.9633318647366059},
 {'accuracy': {'accuracy': 1.5112844861703536e-08}})

The 'optimal' momentum above is wildly different from the results shown on the Ax website. I understand there exist 2D local minima in this parameter space, but it's a bit surprising to see such enormous differences.

best_parameters
{'lr': 0.0029176399675537317, 'momentum': 3.0347402313065844e-16}
means, covariances = values
means, covariances
({'accuracy': 0.968833362542745},
 {'accuracy': {'accuracy': 1.3653840299223108e-08}})

pip install is outdated?

The pip3 install ax-platform package seems to be outdated.
For example, ax/utils/tutorials/cnn_utils.py has different train and load_mnist functions compared to what's in this GitHub repo:

def train(
    train_loader: DataLoader,
    parameters: Dict[str, float],
    dtype: torch.dtype,
    device: torch.device,
) -> nn.Module:

will throw the error *** TypeError: train() got an unexpected keyword argument 'net' if I follow the tutorial at https://ax.dev/versions/latest/tutorials/tune_cnn.html:
net = train(net=net, train_loader=train_loader, parameters=parameterization, dtype=dtype, device=device)

Similarly, train_loader, valid_loader, test_loader = load_mnist(batch_size=BATCH_SIZE) will also throw an error.

How to properly save and load an experiment

I have a modified version of https://botorch.org/tutorials/custom_botorch_model_in_ax, in which I save the experiment after each call to get_botorch.

        for i in range(len(exp.trials.values()), num_bo_trails+2):
            print('Running optimization batch {}/{}'.format(i+1, num_bo_trails))
            model = get_botorch(experiment=exp, data=exp.eval(), search_space=exp.search_space,
                                model_constructor=_get_and_fit_gp)

            save(exp, args.bo_save_path)
            batch = exp.new_trial(generator_run=model.gen(1))

If that loop gets interrupted, I want to be able to reload the experiment and restart the loop from where it left off. However, I get this issue:

File "Torch1venv/venv/lib/python3.6/site-packages/ax/core/observation.py", line 189, in observations_from_data
obs_parameters = experiment.arms_by_name[features["arm_name"]].parameters.copy()
KeyError: '0'

This happens on the first get_botorch call after I try to load the experiment again.

Also, I noticed that the trial status always seems to be status=TrialStatus.RUNNING and never COMPLETED. Do I need to manually set trials to completed?

Thanks.
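A minimal sketch of the save/reload pattern, assuming the top-level ax.save/ax.load JSON helpers and the mark_completed method on trials (both exist in Ax, though exact behavior can differ across versions):

from ax import load, save
from ax.core.base_trial import TrialStatus

save(exp, "experiment.json")   # persist after each model fit
exp = load("experiment.json")  # restore later and continue the loop

# Trials created via exp.new_trial(...) stay RUNNING until explicitly completed,
# so mark them completed once their data has been evaluated/attached:
for trial in exp.trials.values():
    if trial.status == TrialStatus.RUNNING:
        trial.mark_completed()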

Setting parameter constraints of the form x - y >= 1

It seems like parameter constraints are very strict, so x - y >= 1 does not work. Is it possible to support this in the future, or does something prevent it?

"Parameter constraint should be of form <parameter_name> >= <other_parameter_name> for order constraints or <parameter_name> + <other_parameter_name> >= x, where any number of parameters can be summed up and x` is a float bound. Acceptable comparison operators are ">=" and "<=".'

Support batch trial in service API

First of all I really appreciate the great work that has been done here and the fact that this library is open sourced.

In my use case, I would like to do Bayesian optimization of the hyperparameters of neural networks. Each training of a neural network can take more than 10 hours. The training is submitted to the GPU cluster through the Slurm system. Because the training takes such a long time, I want to run multiple trainings (arms) at the same time.

Right now, because the Service API doesn't support batch trials, the optimization loop I set up with Ax creates an Experiment to manage data and handles everything else separately: initializing batch trials, evaluating them through the Slurm system, collecting results, jointly optimizing with BoTorch, recording the trial in the experiment, and repeating.

If the Service API is intended to be used for cases where trials are evaluated externally, I would like to request a feature to make the Service API support batch trials. I am more than happy to contribute to the implementation if possible. If so, I would appreciate any guidance on how the core development team would like this to be done.

Using Ax as a supplier of candidates for black box evaluation

Hi,

I have been trying, in recent days, to use Ax for my task.

The use case: supplying X new candidates for evaluation, given known + pending evaluations. Our "evaluation" is training & testing of an ML model done on a cloud server. I just want to feed the results to the BO model and get new points for evaluation, i.e. to have Ax power our HPO. No success yet.

In BoTorch, I achieved this goal with these five lines at the core:

import botorch
import gpytorch

# Fit a GP to the known evaluations (X, Y).
model = botorch.models.SingleTaskGP(X, Y)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(model.likelihood, model)
botorch.fit.fit_gpytorch_model(mll)

# Optimize qNEI to propose a batch of new candidates.
acquisition_function = botorch.acquisition.qNoisyExpectedImprovement(model, X_baseline)
X_candidates_tensor = botorch.optim.joint_optimize(acquisition_function, bounds=bounds,
                                                   q=batch_size, num_restarts=1, raw_samples=len(X))

I've been trying to use BotorchModel via the developer API. Questions:

  • Do I have to state an evaluation function when defining an "experiment"? In our use case the function is a "black box": we have a platform for launching train jobs as resources are freed, and collecting evaluations when ready, and I want to get from Ax X new candidates for evaluation, as in the BoTorch example above.
  • I couldn't find how to load the known+pending evaluations to the model.
  • Are the objective_weights, that the gen() function of BotorchModel requires, weights for low/high-fidelity evals?

Have I been looking in the wrong place? Should I have been using the Service API (losing some flexibility)?
Could you please direct me to relevant examples in both APIs?

(One of my main reasons for shifting to Ax is that I want, in the future, to optimize over a mixed domain: some parameters continuous and some discrete; but this is a different question...)

Thanks a lot,
Avi
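For the "black box on a cluster" pattern described above, a hedged sketch of the Service API (AxClient): attach already-known evaluations, report results back when jobs finish, and ask for new candidates in between.

from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="external_hpo",
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
    ],
    objective_name="val_accuracy",
)

# Feed in evaluations that were already run elsewhere.
params, trial_index = ax_client.attach_trial(parameters={"lr": 1e-3})
ax_client.complete_trial(trial_index=trial_index, raw_data=0.91)

# Ask for new candidates to submit to the cluster; report each result back
# with complete_trial() whenever the corresponding job finishes.
candidates = [ax_client.get_next_trial() for _ in range(4)]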

how to optimize on integer-valued parameters?

Hi,
I'm just brand new to this field.
Following the tutorial, I could very easily do the optimization on float-valued parameters such as learning rate. However, I couldn't find any guidance on integer-valued parameters, such as the number of layers and the number of neurons per layer.
A brief search led me to the paper https://arxiv.org/pdf/1706.03673.pdf, which describes 3 strategies:
1. optimize the float-valued acquisition function and then round the result to the closest integer, before the evaluation step
2. optimize the float-valued acquisition function, use this value as input to the evaluation function, then do the rounding inside the evaluation function
3. do the rounding of the input when calculating the covariance function

I am not sure if Ax or BoTorch has implemented any interface for integer inputs. If so, which strategy is used? Or are any other approaches recommended here?

Thanks a lot in advance.
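For reference, range parameters in Ax accept a value_type, so integer-valued dimensions can be declared directly; a minimal sketch (the quadratic evaluation function here is just a stand-in):

from ax import optimize

best_parameters, best_values, experiment, model = optimize(
    parameters=[
        # value_type="int" makes Ax treat these range parameters as integer-valued.
        {"name": "n_layers", "type": "range", "bounds": [1, 10], "value_type": "int"},
        {"name": "n_units", "type": "range", "bounds": [16, 512], "value_type": "int"},
    ],
    evaluation_function=lambda p: float((p["n_layers"] - 3) ** 2 + (p["n_units"] - 128) ** 2),
    minimize=True,
)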

Invalid choice parameter example in the core.md file

There is a small typo in the example of choice parameters in the core.md document. ChoiceParameter doesn't have a value param; instead, it is called values.

Currently:

choice_param = ChoiceParameter(name="y", parameter_type=ParameterType.STRING, value=["foo", "bar"])

Should be:

choice_param = ChoiceParameter(name="y", parameter_type=ParameterType.STRING, values=["foo", "bar"])

ValueError: numpy.ufunc size changed, may indicate binary incompatibility

When running the hyperparameter tuning tutorial, using anaconda 3.7 and installing with pip install ax-platform
https://ax.dev/tutorials/tune_cnn.html

/Users/glennjocher/.conda/envs/yolov3/bin/python /Users/glennjocher/PycharmProjects/yolov3/tune_cnn.py
Traceback (most recent call last):
  File "/Users/glennjocher/PycharmProjects/yolov3/tune_cnn.py", line 6, in <module>
    from ax.service.managed_loop import optimize
  File "/Users/glennjocher/.conda/envs/yolov3/lib/python3.7/site-packages/ax/__init__.py", line 5, in <module>
    from ax.modelbridge import Models
  File "/Users/glennjocher/.conda/envs/yolov3/lib/python3.7/site-packages/ax/modelbridge/__init__.py", line 6, in <module>
    from ax.modelbridge.factory import (
  File "/Users/glennjocher/.conda/envs/yolov3/lib/python3.7/site-packages/ax/modelbridge/factory.py", line 12, in <module>
    from ax.modelbridge.discrete import DiscreteModelBridge
  File "/Users/glennjocher/.conda/envs/yolov3/lib/python3.7/site-packages/ax/modelbridge/discrete.py", line 18, in <module>
    from ax.models.discrete_base import DiscreteModel
  File "/Users/glennjocher/.conda/envs/yolov3/lib/python3.7/site-packages/ax/models/__init__.py", line 4, in <module>
    from ax.models.random.sobol import SobolGenerator
  File "/Users/glennjocher/.conda/envs/yolov3/lib/python3.7/site-packages/ax/models/random/sobol.py", line 10, in <module>
    from ax.utils.stats.sobol import SobolEngine  # pyre-ignore: Not handling .pyx properly
  File "__init__.pxd", line 918, in init ax.utils.stats.sobol
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject

Process finished with exit code 1

Loop API: optimize

What does the wait_time parameter do in the optimize function of the Loop API?

[Question/Documentation] Is it possible to run an experiment with trials distributed across GPUs

The tool looks great! I am just wondering how you would go about running an experiment with trials distributed across GPUs (on a single machine). I am looking at the Service API / Developer API pages but cannot see how a client/server or queue structure would work (I have not dug through the code yet).

I'm after something like Ray to do optimisation.

I think distributed experiments are a pretty important feature, so I'm assuming it has to be there somewhere; a tutorial would be great. I'm interested in a single-host / multi-GPU environment, but I'm sure multi-host would also be of value to people.

Get parameters and results of all the trials

The AxClient class has a get_best_parameters() method. I don't see any method in the AxClient class that returns all the trials with their parameters and evaluation results. Are there any plans to add such an API?
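A hedged sketch of one way to get at this today (ax_client is an existing AxClient instance): the underlying Experiment is reachable from the client, each trial exposes its arm's parameters, and fetch_data() returns all evaluation results as a pandas DataFrame.

experiment = ax_client.experiment
for index, trial in experiment.trials.items():
    print(index, trial.arm.parameters, trial.status)

results_df = experiment.fetch_data().df  # columns include arm_name, metric_name, mean, sem
print(results_df)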

Adding a type=choice parameter breaks plot_contour

Hi and thank you for this great repo!
I'm doing some experiments with ax.plot.contour.plot_contour. I've just tried to add a new choice-type parameter to the list given in the tune_cnn example:

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
        {"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "test", "type": "choice", "values": [0.0, 1.0]},
    ],
    evaluation_function=train_evaluate,
    objective_name='accuracy'
)

And running render(plot_contour(model=model, param_x='lr', param_y='momentum', metric_name='accuracy')) raises the following error:

ValueError Traceback (most recent call last)
in
----> 1 render(plot_contour(model=model, param_x='lr', param_y='momentum', metric_name='accuracy'))

~/.conda/envs/dl3.7/lib/python3.7/site-packages/ax/plot/contour.py in plot_contour(model, param_x, param_y, metric_name, generator_runs_dict, relative, density, slice_values, lower_is_better)
105 generator_runs_dict=generator_runs_dict,
106 density=density,
--> 107 slice_values=slice_values,
108 )
109 config = {

~/.conda/envs/dl3.7/lib/python3.7/site-packages/ax/plot/contour.py in _get_contour_predictions(model, x_param_name, y_param_name, metric, generator_runs_dict, density, slice_values)
61 param_grid_obsf.append(ObservationFeatures(parameters))
62
---> 63 mu, cov = model.predict(param_grid_obsf)
64
65 f_plt = mu[metric]

~/.conda/envs/dl3.7/lib/python3.7/site-packages/ax/modelbridge/base.py in predict(self, observation_features)
336 for t in self.transforms.values():
337 observation_features = t.transform_observation_features(
--> 338 observation_features
339 )
340 # Apply terminal transform and predict

~/.conda/envs/dl3.7/lib/python3.7/site-packages/ax/modelbridge/transforms/one_hot.py in transform_observation_features(self, observation_features)
109 for p_name, encoder in self.encoder.items():
110 if p_name in obsf.parameters:
--> 111 vals = encoder.transform(labels=[obsf.parameters.pop(p_name)])[0]
112 updated_parameters: TParameterization = {
113 self.encoded_parameters[p_name][i]: v

~/.conda/envs/dl3.7/lib/python3.7/site-packages/ax/modelbridge/transforms/one_hot.py in transform(self, labels)
33 def transform(self, labels: List[T]) -> np.ndarray:
34 """One hot encode a list of labels."""
---> 35 return self.label_binarizer.transform(self.int_encoder.transform(labels))
36
37 def inverse_transform(self, encoded_labels: List[T]) -> List[T]:

~/.conda/envs/dl3.7/lib/python3.7/site-packages/sklearn/preprocessing/label.py in transform(self, y)
255 return np.array([])
256
--> 257 _, y = encode(y, uniques=self.classes, encode=True)
258 return y
259

~/.conda/envs/dl3.7/lib/python3.7/site-packages/sklearn/preprocessing/label.py in _encode(values, uniques, encode)
108 return _encode_python(values, uniques, encode)
109 else:
--> 110 return _encode_numpy(values, uniques, encode)
111
112

~/.conda/envs/dl3.7/lib/python3.7/site-packages/sklearn/preprocessing/label.py in _encode_numpy(values, uniques, encode)
51 if diff:
52 raise ValueError("y contains previously unseen labels: %s"
---> 53 % str(diff))
54 encoded = np.searchsorted(uniques, values)
55 return uniques, encoded

ValueError: y contains previously unseen labels: [17]

However, if I set the test parameter to range there isn't any issue.

Crash while optimizing: RuntimeError: cholesky_cpu: U(1,1) is zero, singular U.

I got this recently trying to tune the hyperparameters on an MLP.

Relevant versions:

python==3.7.1
ax-platform==0.1.2
botorch==0.1.0
gpytorch==0.3.2
scipy==1.1.0
torch==1.1.0

I'm using ax.optimize() as the entrypoint. It was 45 trials into the experiment. Here's the stack trace.

ax.service.managed_loop: Running optimization trial 45...
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-37-5c288f8ea2dd> in <module>
     76         ],
     77         evaluation_function=do_train,
---> 78         minimize=True,
     79     )

~/anaconda3/lib/python3.7/site-packages/ax/service/managed_loop.py in optimize(parameters, evaluation_function, experiment_name, objective_name, minimize, parameter_constraints, outcome_constraints, total_trials, arms_per_trial, wait_time)
    204         wait_time=wait_time,
    205     )
--> 206     loop.full_run()
    207     parameterization, values = loop.get_best_point()
    208     return parameterization, values, loop.experiment, loop.get_current_model()

~/anaconda3/lib/python3.7/site-packages/ax/service/managed_loop.py in full_run(self)
    148         logger.info(f"Started full optimization with {num_steps} steps.")
    149         for _ in range(num_steps):
--> 150             self.run_trial()
    151         return self
    152 

~/anaconda3/lib/python3.7/site-packages/ax/service/managed_loop.py in run_trial(self)
    128             trial = self.experiment.new_trial(
    129                 generator_run=self.generation_strategy.gen(
--> 130                     experiment=self.experiment, new_data=dat
    131                 )
    132             )

~/anaconda3/lib/python3.7/site-packages/ax/modelbridge/generation_strategy.py in gen(self, experiment, new_data, n, **kwargs)
    161         elif new_data is not None:
    162             # We're sticking with the current model, but update with new data
--> 163             self._model.update(experiment=experiment, data=new_data)
    164 
    165         gen_run = not_none(self._model).gen(n=n, **(self._curr.model_gen_kwargs or {}))

~/anaconda3/lib/python3.7/site-packages/ax/modelbridge/base.py in update(self, data, experiment)
    385             obs_feats = t.transform_observation_features(obs_feats)
    386             obs_data = t.transform_observation_data(obs_data, obs_feats)
--> 387         self._update(observation_features=obs_feats, observation_data=obs_data)
    388         self.fit_time += time.time() - t_update_start
    389         self.fit_time_since_gen += time.time() - t_update_start

~/anaconda3/lib/python3.7/site-packages/ax/modelbridge/array.py in _update(self, observation_features, observation_data)
    110         # Update in-design status for these new points.
    111         self.training_in_design[-len(observation_features) :] = in_design
--> 112         self._model_update(Xs=Xs_array, Ys=Ys_array, Yvars=Yvars_array)
    113 
    114     def _model_update(

~/anaconda3/lib/python3.7/site-packages/ax/modelbridge/torch.py in _model_update(self, Xs, Ys, Yvars)
    113         Ys: List[Tensor] = self._array_list_to_tensors(Ys)
    114         Yvars: List[Tensor] = self._array_list_to_tensors(Yvars)
--> 115         self.model.update(Xs=Xs, Ys=Ys, Yvars=Yvars)
    116 
    117     def _model_predict(self, X: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:

~/anaconda3/lib/python3.7/site-packages/ax/models/torch/botorch.py in update(self, Xs, Ys, Yvars)
    372             Yvars=self.Yvars,
    373             task_features=self.task_features,
--> 374             state_dict=state_dict,
    375         )

~/anaconda3/lib/python3.7/site-packages/ax/models/torch/botorch_defaults.py in get_and_fit_model(Xs, Ys, Yvars, task_features, state_dict, **kwargs)
     84             # pyre-ignore: [16]
     85             mll = ExactMarginalLogLikelihood(model.likelihood, model)
---> 86         mll = fit_gpytorch_model(mll, bounds=bounds)
     87     else:
     88         model.load_state_dict(state_dict)

~/anaconda3/lib/python3.7/site-packages/botorch/fit.py in fit_gpytorch_model(mll, optimizer, **kwargs)
     33     """
     34     mll.train()
---> 35     mll, _ = optimizer(mll, track_iterations=False, **kwargs)
     36     mll.eval()
     37     return mll

~/anaconda3/lib/python3.7/site-packages/botorch/optim/fit.py in fit_gpytorch_scipy(mll, bounds, method, options, track_iterations)
    186         jac=True,
    187         options=options,
--> 188         callback=cb,
    189     )
    190     iterations = []

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
    601     elif meth == 'l-bfgs-b':
    602         return _minimize_lbfgsb(fun, x0, args, jac, bounds,
--> 603                                 callback=callback, **options)
    604     elif meth == 'tnc':
    605         return _minimize_tnc(fun, x0, args, jac, bounds, callback=callback,

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/lbfgsb.py in _minimize_lbfgsb(fun, x0, args, jac, bounds, disp, maxcor, ftol, gtol, eps, maxfun, maxiter, iprint, callback, maxls, **unknown_options)
    333             # until the completion of the current minimization iteration.
    334             # Overwrite f and g:
--> 335             f, g = func_and_grad(x)
    336         elif task_str.startswith(b'NEW_X'):
    337             # new iteration

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/lbfgsb.py in func_and_grad(x)
    283     else:
    284         def func_and_grad(x):
--> 285             f = fun(x, *args)
    286             g = jac(x, *args)
    287             return f, g

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/optimize.py in function_wrapper(*wrapper_args)
    291     def function_wrapper(*wrapper_args):
    292         ncalls[0] += 1
--> 293         return function(*(wrapper_args + args))
    294 
    295     return ncalls, function_wrapper

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/optimize.py in __call__(self, x, *args)
     61     def __call__(self, x, *args):
     62         self.x = numpy.asarray(x).copy()
---> 63         fg = self.fun(x, *args)
     64         self.jac = fg[1]
     65         return fg[0]

~/anaconda3/lib/python3.7/site-packages/botorch/optim/fit.py in _scipy_objective_and_grad(x, mll, property_dict)
    221     output = mll.model(*train_inputs)
    222     args = [output, train_targets] + _get_extra_mll_args(mll)
--> 223     loss = -mll(*args).sum()
    224     loss.backward()
    225     param_dict = OrderedDict(mll.named_parameters())

~/anaconda3/lib/python3.7/site-packages/gpytorch/module.py in __call__(self, *inputs, **kwargs)
     20 
     21     def __call__(self, *inputs, **kwargs):
---> 22         outputs = self.forward(*inputs, **kwargs)
     23         if isinstance(outputs, list):
     24             return [_validate_module_outputs(output) for output in outputs]

~/anaconda3/lib/python3.7/site-packages/gpytorch/mlls/exact_marginal_log_likelihood.py in forward(self, output, target, *params)
     26         # Get the log prob of the marginal distribution
     27         output = self.likelihood(output, *params)
---> 28         res = output.log_prob(target)
     29 
     30         # Add terms for SGPR / when inducing points are learned

~/anaconda3/lib/python3.7/site-packages/gpytorch/distributions/multivariate_normal.py in log_prob(self, value)
    127 
    128         # Get log determininat and first part of quadratic form
--> 129         inv_quad, logdet = covar.inv_quad_logdet(inv_quad_rhs=diff.unsqueeze(-1), logdet=True)
    130 
    131         res = -0.5 * sum([inv_quad, logdet, diff.size(-1) * math.log(2 * math.pi)])

~/anaconda3/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py in inv_quad_logdet(self, inv_quad_rhs, logdet, reduce_inv_quad)
    990             from .chol_lazy_tensor import CholLazyTensor
    991 
--> 992             cholesky = CholLazyTensor(self.cholesky())
    993             return cholesky.inv_quad_logdet(inv_quad_rhs=inv_quad_rhs, logdet=logdet, reduce_inv_quad=reduce_inv_quad)
    994 

~/anaconda3/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py in cholesky(self, upper)
    716             (LazyTensor) Cholesky factor (lower triangular)
    717         """
--> 718         res = self._cholesky()
    719         if upper:
    720             res = res.transpose(-1, -2)

~/anaconda3/lib/python3.7/site-packages/gpytorch/utils/memoize.py in g(self, *args, **kwargs)
     32         cache_name = name if name is not None else method
     33         if not is_in_cache(self, cache_name):
---> 34             add_to_cache(self, cache_name, method(self, *args, **kwargs))
     35         return get_from_cache(self, cache_name)
     36 

~/anaconda3/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py in _cholesky(self)
    401             evaluated_mat.register_hook(_ensure_symmetric_grad)
    402 
--> 403         cholesky = psd_safe_cholesky(evaluated_mat.double()).to(self.dtype)
    404         return NonLazyTensor(cholesky)
    405 

~/anaconda3/lib/python3.7/site-packages/gpytorch/utils/cholesky.py in psd_safe_cholesky(A, upper, out, jitter)
     45                 continue
     46 
---> 47         raise e
     48 
     49 

~/anaconda3/lib/python3.7/site-packages/gpytorch/utils/cholesky.py in psd_safe_cholesky(A, upper, out, jitter)
     19     """
     20     try:
---> 21         L = torch.cholesky(A, upper=upper, out=out)
     22         # TODO: Remove once fixed in pytorch (#16780)
     23         if A.dim() > 2 and A.is_cuda:

RuntimeError: cholesky_cpu: U(1,1) is zero, singular U.

TypeError: __init__() got an unexpected keyword argument 'encoding'

When running the https://ax.dev/tutorials/tune_cnn.html notebook file in Google Colab:
https://colab.research.google.com/drive/1P6TvA9UZDtLf9dMFTcYWm_RUBW0wpsiV#scrollTo=DMsfBROgGSBK

render(plot_contour(model=model, param_x='lr', param_y='momentum', metric_name='accuracy'))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-9-849423054076> in <module>()
----> 1 render(plot_contour(model=model, param_x='lr', param_y='momentum', metric_name='accuracy'))

2 frames
/usr/local/lib/python3.6/dist-packages/simplejson/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, encoding, default, use_decimal, namedtuple_as_object, tuple_as_array, bigint_as_string, sort_keys, item_sort_key, for_json, ignore_nan, int_as_string_bitcount, iterable_as_array, **kw)
    397         ignore_nan=ignore_nan,
    398         int_as_string_bitcount=int_as_string_bitcount,
--> 399         **kw).encode(obj)
    400 
    401 

TypeError: __init__() got an unexpected keyword argument 'encoding'

Example for how to plot without jupyter?

Hi

It would be great to include an example of how to plot contours etc. without using Jupyter. I tried to find out how to do that but have no idea how to plot the AxPlotConfig outside of Jupyter.

Or is this just not supported?
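A hedged sketch, assuming the AxPlotConfig returned by the plotting functions stores the Plotly figure under its data attribute as a {"data": ..., "layout": ...} dict and that model is an already-fitted Ax model; plotly's offline mode can then write a standalone HTML file that opens in any browser:

import plotly.offline as pyo
from ax.plot.contour import plot_contour

config = plot_contour(model=model, param_x="lr", param_y="momentum", metric_name="accuracy")
# Write the figure to an HTML file instead of rendering it in a notebook.
pyo.plot({"data": config.data["data"], "layout": config.data["layout"]}, filename="contour.html")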

Multiple Equality Constraints

Hello,

I've been trying out Ax and I really like it. I was trying to create a three-stage regression model and have Ax infer the weights to be assigned to each regressor. I am not able to create a constraint that uses three parameters and have them be equal to 0.

I have used the following code:

from ax.service.ax_client import AxClient
from ax.utils.measurement.synthetic_functions import branin
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from axtrainer.data import DATA_SET_DICT
from sklearn.linear_model import LinearRegression
# from axtrainer.logger import *
from ax import ChoiceParameter, ParameterType
import xgboost as xgb
import numpy as np
from sklearn.metrics import mean_squared_error, explained_variance_score
# from axtrainer.trainer import DATA_SET_DICT
import os

PROBLEM_TYPE = os.environ.get("PROBLEM_TYPE", "REGRESSION")

# Helper function for parameter handling
def make_parameter(name, ptype, bounds, value_type):
    ''' Creates a parameter dictionary to be used in ax.create_experiment'''
    if ptype == "range":
        return dict(name=name, type=ptype, bounds=bounds, value_type=value_type)
    elif ptype == "choice":
        return dict(name=name, type=ptype, values=bounds, value_type=value_type)

# Function to return our target cost function and optimize parameters with ax.Client
def train_and_return_score(w1=1/3.0,w2=1/3.0,w3=1/3.0, **kwargs):
    ''' Convenience function to train model and return score'''
    if PROBLEM_TYPE == "REGRESSION":
        Model = xgb.XGBRegressor
    elif PROBLEM_TYPE == "CLASSIFICATION":
        Model = xgb.XGBClassifier
    
   
    X_train, X_test, y_train, y_test = DATA_SET_DICT["X_train"], DATA_SET_DICT[
        "X_test"], DATA_SET_DICT["y_train"], DATA_SET_DICT["y_test"]
    # Instantiate model with keyword arguments
    estimators = [
        RandomForestRegressor(n_estimators=30),
        Model(n_jobs=-1,gpu_id=0, **kwargs)
    ]
    for model in estimators:
        model.fit(X_train, y_train)
    
    preds = np.array(list(model.predict(X_test) for model in estimators))
    
    # Weighted sum of models
    preds = np.array((w1,w2)) @ preds

    _score = explained_variance_score(y_test, preds)
    # print("MODEL SCORE: %s " % _score)
    return 1 - _score

PARAMETERS = [
    make_parameter("w1", "range", [0, .99], "float"),
    make_parameter("w2", "range", [0, .99], "float"),

]

CONSTRAINTS = ["w1 + w2 = 1.0",]

Let's say I wanted to have more than two parameters interact with each other, such as having three weights and three models. I know there are other ways to do what I am specifically doing, but I am trying to get an understanding of tuning with Ax.
Will the following be possible as a constraint: w1 + w2 + w3 == 1.0? Will this be possible using Ax anytime soon? Is there a limitation of Bayesian optimization that will not allow this functionality?

When I do try to do something like this I get the following error:

Traceback (most recent call last):
  File "run.py", line 6, in <module>
    ax, b , m = main()
  File "/home/david/Desktop/ax-container/app/axtrainer/weighted_model.py", line 74, in main
    minimize=True,
  File "/home/david/miniconda3/envs/threeseven/lib/python3.7/site-packages/ax/service/ax_client.py", line 115, in create_experiment
    outcome_constraints=outcome_constraints,
  File "/home/david/miniconda3/envs/threeseven/lib/python3.7/site-packages/ax/service/utils/instantiation.py", line 225, in make_experiment
    else [constraint_from_str(c, parameter_map) for c in parameter_constraints],
  File "/home/david/miniconda3/envs/threeseven/lib/python3.7/site-packages/ax/service/utils/instantiation.py", line 225, in <listcomp>
    else [constraint_from_str(c, parameter_map) for c in parameter_constraints],
  File "/home/david/miniconda3/envs/threeseven/lib/python3.7/site-packages/ax/service/utils/instantiation.py", line 160, in constraint_from_str
    "Parameter constraint should be of form `metric_name` >= `other_metric_name` "
AssertionError: Parameter constraint should be of form `metric_name` >= `other_metric_name` for order constraints or `metric_name` + `other_metric_name` >= x, where x is a float bound, and acceptable comparison operators are >= and <=.

I am using Python 3.7 on Ubuntu in an anaconda environment.

Hierarchical search spaces

Is there a way to define a hierarchy of parameters?
For example, a parameter that chooses the architecture, where each architecture has its own parameters.

example (pseudo code):

architecture = choice(["NeuralNetwork", "xgboost"])

if architecture == "NeuralNetwork":
    n_layers = choice(range(1, 10, 1))
    # more architecture-related params here

elif architecture == "xgboost":
    max_depth = choice(range(1, 5, 1))
    # more architecture-related params here
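One common workaround, sketched below under the assumption of a flat search space: declare all parameters up front and let the evaluation function ignore the ones the chosen architecture does not use (train_nn and train_xgboost are hypothetical):

parameters = [
    {"name": "architecture", "type": "choice", "values": ["NeuralNetwork", "xgboost"]},
    {"name": "n_layers", "type": "range", "bounds": [1, 10], "value_type": "int"},   # used by NeuralNetwork only
    {"name": "max_depth", "type": "range", "bounds": [1, 5], "value_type": "int"},   # used by xgboost only
]

def evaluation_function(p):
    if p["architecture"] == "NeuralNetwork":
        return train_nn(n_layers=p["n_layers"])      # hypothetical trainer
    return train_xgboost(max_depth=p["max_depth"])   # hypothetical trainer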

Allow for plotly>=3.9.0 dependency

The installation of ax-platform breaks other installed packages like plotly-express that rely on plotly>=3.9.0.

During installation of ax-platform:

ERROR: plotly-express 0.2.0 has requirement plotly>=3.9.0, but you'll have plotly 2.7.0 which is incompatible.

Modulo parameter constraints

1. modulo

hidden_size % num_attention_heads == 0

use case

In transformer models, we use multi-head attention, in which the hidden vector is divided into n parts, where n is the number of attention heads, so we usually want hidden_size to be divisible by num_attention_heads.

2. log2

math.log2(batch_size) % 1 == 0

use case

To make the batch size 2**n so that it just fits in memory.

3. Why?

Can we just pass a function as a parameter constraint and have all the parameters' names as arguments?
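One workaround that avoids a constraint altogether is to reparameterize, as sketched below: search over the exponent and the per-head size, and reconstruct batch_size and hidden_size inside the evaluation function (train_transformer is hypothetical):

parameters = [
    {"name": "log2_batch_size", "type": "range", "bounds": [4, 10], "value_type": "int"},
    {"name": "hidden_per_head", "type": "range", "bounds": [32, 128], "value_type": "int"},
    {"name": "num_attention_heads", "type": "choice", "values": [4, 8, 16]},
]

def evaluation_function(p):
    batch_size = 2 ** p["log2_batch_size"]                         # always a power of two
    hidden_size = p["hidden_per_head"] * p["num_attention_heads"]  # always divisible by the head count
    return train_transformer(batch_size=batch_size, hidden_size=hidden_size)  # hypothetical trainer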

Passing extra arguments to the evaluation_function

I am trying to do a hyperparameter search over a subset of the overall model hyperparameters. This subset has been defined for the search space; however, my evaluation_function needs access to the rest in order to pass them to the model's trainer. I feel like this might conflict with Ax's design philosophy, but I have a considerable number of these and it's annoying to define them both through argparse and then again through Ax.

  1. Is there a way of passing these to the evaluation_function without creating a parameter for them in the search space, i.e. having them as part of the parameterization?

  2. If not, is there a helper function which converts an argparse namespace to Ax parameters, or something similar?

Thanks.
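A plain-Python workaround, sketched under the assumption of an argparse namespace named args and a hypothetical run_training entry point: close over the fixed arguments so the function Ax calls still takes a single parameterization argument.

from ax import optimize

def train_evaluate(parameterization, fixed_args):
    # `parameterization` holds only the parameters Ax searches over; everything
    # else (paths, epochs, the argparse namespace, ...) rides along in fixed_args.
    config = {**vars(fixed_args), **parameterization}
    return run_training(config)  # hypothetical training entry point

best_parameters, values, experiment, model = optimize(
    parameters=[{"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True}],
    # The lambda keeps the signature Ax sees at a single argument.
    evaluation_function=lambda p: train_evaluate(p, fixed_args=args),
    minimize=True,
)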

Does attach trial affect the first few trials to be generated?

I tried to add a prior to the Ax service by feeding it data from previously completed trials, calling attach trial and then complete trial. But I noticed that even if I did that, the first few trials to be generated were still the same.

Even when I added the exact same parameter configuration (or close to exact) and results that were generated in a previously run Ax optimization job, the new Ax job would still generate the same parameters as the old optimization job. This is a lot of wasted compute on my end, and I was wondering if there was a way to avoid this?

ChoiceParameter with list of ints: TypeError: Object of type int32 is not JSON serializable

Hi,

First of all thank you for this awesome project.

Regarding the error: when trying to input a set of integers to a ChoiceParameter with parameter_type=ParameterType.INT the following error appears:

TypeError: Object of type int32 is not JSON serializable

The problem seems to occur in arm.py, in the md5hash function, where the parameters are dumped to JSON: parameters_str = json.dumps(parameters, sort_keys=True).

To reproduce:

    range_x = ChoiceParameter(name='x', values=[1,5,10],
                              parameter_type=ParameterType.INT)
    space = SearchSpace(parameters=[range_x])
    experiment = Experiment(name="experiment_one_cell",
                            search_space=space)
    sobol = Models.SOBOL(search_space=experiment.search_space)
    generator_run = sobol.gen(100)

I have tried to make the list a set, but the problem seems to be in the parameter type definition. When changing to ParameterType.STRING and casting the values to strings, the problem does not occur.

I know this is not an important error, but I state it here just so it is documented somewhere.

Thank you .

UPDATE: indeed, this is strange, since there is already a conversion from NumPy to Python types in the md5hash function:

        for k, v in parameters.items():
            parameters[k] = numpy_type_to_python_type(v)

But it does not seem to work.

Optimization of Acquisition Function subject to Parameter Constraints

I have been running into issues generating points that obey the parameter constraints I specify. Below is a simple example that should reproduce the issue (I realize that the linear constraint can just be incorporated as an upper bound on the parameter types).

Do I need to manually implement an acquisition function in Botorch that can handle parameter constraints? I didn't seem to have this issue earlier when I was using the Service API but the points generated by the BOTORCH model don't seem to respect the constraints.

from ax import (
    ParameterType, 
    RangeParameter,
    SearchSpace, 
    SimpleExperiment, 
    ParameterConstraint
)

from ax.modelbridge.registry import Models
parameters = [RangeParameter(name = "x1", parameter_type = ParameterType.FLOAT, lower = 0, upper = 100)]
constraints = [ParameterConstraint({"x1": 1}, 5)]
search_space = SearchSpace(parameters = parameters, parameter_constraints = constraints)
exp = SimpleExperiment(name = "dummy", search_space = search_space, evaluation_function = lambda d : abs(d['x1'] - 5))
print(f"Running Sobol initialization trials...")
sobol = Models.SOBOL(exp.search_space)
for i in range(5):
    exp.new_trial(generator_run=sobol.gen(1))
    
for i in range(10):
    print(f"Running GP+EI optimization trial {i+1}/10...")
    # Reinitialize GP+EI model at each step with updated data.
    gpei = Models.BOTORCH(experiment = exp,  data=exp.eval())
    batch = exp.new_trial(generator_run=gpei.gen(1))
    print(batch)

[Question/Issue] Choice parameters don't work with string values

I was trying to do optimization using choice parameters. Here is a simple example:

best_parameters, values, experiment, model = optimize(
    parameters=[
        {'name': 'categorical', 'type': 'choice', 'values': ['foo', 'bar']}
    ],
    evaluation_function=objective,
    minimize=True,
)

And I got the following error:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/service/managed_loop.py", line 206, in optimize
loop.full_run()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/service/managed_loop.py", line 150, in full_run
self.run_trial()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/service/managed_loop.py", line 130, in run_trial
experiment=self.experiment, new_data=dat
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/experiment.py", line 446, in new_trial
experiment=self, trial_type=trial_type, generator_run=generator_run
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/trial.py", line 38, in init
self.add_generator_run(generator_run=generator_run)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/base_trial.py", line 85, in _immutable_once_run
return func(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/trial.py", line 87, in add_generator_run
generator_run.arms[0].parameters, raise_error=True
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/search_space.py", line 192, in check_types
f"{value} is not a valid value for "
ValueError: bar is not a valid value for parameter ChoiceParameter(name='categorical', parameter_type=STRING, values=['foo', 'bar'])

Digging into that, I found that the problem was with the one_hot transformer: it changed the type of string values to numpy.str_, and hence the ValueError was raised.

I see that this is already fixed in this commit, but it's not available in the latest tag, 0.1.1.

Could you please provide information on when the new release is going to be?
Thank you!

ImportError: cannot import name 'optimize_acqf'

After updating to ax-platform 0.1.4 (with botorch 0.1.3, torch 1.2.0), importing AxClient raises an ImportError. In particular,

from ax.service.ax_client import AxClient

gives error

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "[...]/python3.6/site-packages/ax/__init__.py", line 5, in <module>
    from ax.modelbridge import Models
  File "[...]/python3.6/site-packages/ax/modelbridge/__init__.py", line 6, in <module>
    from ax.modelbridge.factory import (
  File "[...]/python3.6/site-packages/ax/modelbridge/factory.py", line 13, in <module>
    from ax.modelbridge.discrete import DiscreteModelBridge
  File "[...]/python3.6/site-packages/ax/modelbridge/discrete.py", line 18, in <module>
    from ax.models.discrete_base import DiscreteModel
  File "[...]/python3.6/site-packages/ax/models/__init__.py", line 5, in <module>
    from ax.models.torch.botorch import BotorchModel
  File "[...]/python3.6/site-packages/ax/models/torch/botorch.py", line 10, in <module>
    from ax.models.torch.botorch_defaults import (
  File "[...]/python3.6/site-packages/ax/models/torch/botorch_defaults.py", line 21, in <module>
    from botorch.optim.optimize import optimize_acqf
ImportError: cannot import name 'optimize_acqf'

Optimize doesn't work with evaluation function that returns value of some NumPy types

I noticed that when the evaluation_function returns a value of some NumPy type (I was able to reproduce with np.int64, np.int32, np.float32), the optimize method crashes with the following exception:

File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/service/managed_loop.py", line 206, in optimize
loop.full_run()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/service/managed_loop.py", line 150, in full_run
self.run_trial()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/service/managed_loop.py", line 141, in run_trial
trial.fetch_data()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/base_trial.py", line 259, in fetch_data
trial_index=self.index, metrics=metrics, **kwargs
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/simple_experiment.py", line 205, in _fetch_trial_data
return self.eval_trial(self.trials[trial_index])
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/simple_experiment.py", line 119, in eval_trial
not_none(trial.arm).parameters, None
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ax/core/simple_experiment.py", line 192, in evaluation_function_outer
"Evaluation function returned an invalid type. The function must "
Exception: Evaluation function returned an invalid type. The function must either return a dictionary of metric names to mean, sem tuples or a single mean, sem tuple, or a single mean.

My environment:

numpy==1.16.1
ax-platform==0.1.2

Script to reproduce:

import numpy as np
from ax import optimize
from ax.utils.measurement.synthetic_functions import branin

best_parameters, values, experiment, model = optimize(
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [-5.0, 10.0],
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 10.0],
        },
    ],
    evaluation_function=lambda p: np.float32(branin(p["x1"], p["x2"])),
    minimize=True,
)

Example for online evaluation

Hi! It is really amazing to see a project like this open source.

I really wish you had better examples though. All the examples seem to be focused on offline evaluation, but I wonder if you could provide an online evaluation example.

For instance, a simple web service that routes to A or B and tracks conversion rates for a given action. It was not clear in the docs whether that is doable with this platform, but it is a very common case for A/B testing.

[Question/Issue]Not showing graphs on JupyterLab

I followed a tutorial to plot a response surface in my local Docker environment. However, it did not show the contour plot in my JupyterLab notebook; it showed only a small blank space. The same thing happens with the other rendering tutorials.
Is there any dependency required to show plots using the Ax library? Other plotting libraries such as matplotlib and plotly work in my environment.
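For what it's worth, Ax ships an init_notebook_plotting helper that injects the Plotly JS bundle into the notebook; a minimal sketch (JupyterLab may additionally need the Plotly JupyterLab extension installed, depending on versions, and model here is an already-fitted Ax model):

from ax.utils.notebook.plotting import init_notebook_plotting, render
from ax.plot.contour import plot_contour

init_notebook_plotting()  # without this, render() often produces an empty cell
render(plot_contour(model=model, param_x="lr", param_y="momentum", metric_name="accuracy"))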

How to save and load experiment/model from `optimize`

Hi, from the documentation, the optimize function returns a (best_parameters, values, experiment, model) tuple. I'm wondering what the best practices are for saving these values (e.g. for visualization on a different machine). Also, is it possible to interrupt an optimization and later resume it from the saved state when using the optimize API? Thanks!
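
A minimal sketch of one option, assuming the JSON storage helpers in ax.storage (this persists the experiment returned by optimize; as far as I can tell the fitted model itself is not serialized and would need to be refit from the experiment data):

from ax.storage.json_store.save import save_experiment
from ax.storage.json_store.load import load_experiment

save_experiment(experiment, "experiment.json")   # experiment returned by optimize()
restored = load_experiment("experiment.json")    # e.g. on a different machine

For interrupting and resuming, the Service API (AxClient) seems better suited than the one-shot optimize call, since each trial is generated and completed explicitly.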

[Feature Request]: Visualization of optimization progress and function evaluation values

Hi,

Is it possible to support visualization of the evaluated parameter values so far and their corresponding function values in a ranked manner before an optimization run is finished? This is useful for:

  • Deciding when to stop an optimization run. Sometimes it is unclear, before starting, how many trials will be needed.
  • Getting a sense of what parameter values are important.

For example, Spearmint supports starting up a web server, which serves an HTML file with statistics about optimization progress, such as the best parameter values found so far.

https://github.com/JasperSnoek/spearmint

Thanks!
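
One way to approximate this today (a sketch, assuming access to the underlying Experiment object via the Developer or Service API and that pandas is available) is to pull the accumulated trial data into a DataFrame and rank it by the objective mean:

# `experiment` is the in-progress ax.core.Experiment; fetch_data() returns an Ax Data object.
df = experiment.fetch_data().df   # columns include arm_name, metric_name, mean, sem
ranked = df.sort_values("mean")   # ascending, so the best arms come first when minimizing
print(ranked.head(10))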

Providing hints in parameters space to Ax

We can specify range parameters. Is there any way to force Ax to try some specific values of a parameter within that range? For instance:

parameters=[
          {
            "name": "x1",
            "type": "range",
            "bounds": [-10.0, 10.0],
          },
        ],

What I am looking for is something similar to this:

parameters=[
          {
            "name": "x1",
            "type": "range",
            "bounds": [-10.0, 10.0],
            "must_try": [0.0, 5.0,], # "must_try" just some name
          },
        ],

In the above, must_try would dictate which parameter values Ax must try; it is, in a sense, giving hints to Ax. Is it possible to do this right now?
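
Not a definitive answer, but one mechanism that appears to cover this in the Service API is attaching trials with the exact parameter values you want evaluated before letting the model propose its own. A sketch (the experiment name and evaluate() are hypothetical):

from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="hinted_search",   # hypothetical
    parameters=[{"name": "x1", "type": "range", "bounds": [-10.0, 10.0]}],
    objective_name="objective",
)

# "Must try" values, attached as manually specified trials.
for must_try in (0.0, 5.0):
    parameters, trial_index = ax_client.attach_trial(parameters={"x1": must_try})
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))   # hypothetical evaluate()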

Better documentation for `evaluation_function`

Thanks to the team for the great work! I would appreciate it if documentation for arguments like evaluation_function were provided more explicitly in the tutorials. It took me a while to figure out that the function should return (mean, stderr) tuples when the outcome is a dictionary (https://github.com/facebook/Ax/blob/master/ax/core/types.py#L25).
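
For anyone else landing here, a small sketch of the return shapes that appear to be accepted, going by the exception text quoted in another issue above and the linked types file:

from ax.utils.measurement.synthetic_functions import branin

# Option 1: a single mean value.
evaluation_function = lambda p: float(branin(p["x1"], p["x2"]))

# Option 2: a single (mean, sem) tuple.
evaluation_function = lambda p: (float(branin(p["x1"], p["x2"])), 0.0)

# Option 3: a dictionary mapping metric names to (mean, sem) tuples.
evaluation_function = lambda p: {"objective": (float(branin(p["x1"], p["x2"])), 0.0)}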

Must the comparison for outcome constraint always be <= ?

In the Service API tutorial, it is mentioned that:

outcome_constraints should be a list of strings of form "constrained_metric <= some_bound".

But in the outcome_constraint_from_str function we have this assertion:

   assert len(tokens) == 3 and tokens[1] in COMPARISON_OPS, (
        "Outcome constraint should be of form `metric_name >= x`, where x is a "
        "float bound and comparison operator is >= or <=."
    )

Does this mean the comparison operator for an outcome constraint can be either >= or <= and that the tutorial is outdated?
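
Going by that assertion, both operators look like they are parsed. A sketch with hypothetical metric names:

from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="constrained_example",   # hypothetical
    parameters=[{"name": "x1", "type": "range", "bounds": [0.0, 1.0]}],
    objective_name="objective",
    outcome_constraints=[
        "latency <= 100.0",   # upper bound, as described in the tutorial
        "accuracy >= 0.9",    # lower bound, which the assertion also appears to accept
    ],
)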

No pip install for Windows

If I run pip install ax-platform on Windows, it installs version 0.0.0. Can we please have whl files for Windows users who don't have build tools installed on their machines?

How to save and load my Ax Client and experiment

I'm trying to use Ax to maximize a function, and wonder how to save and load my client.

from ax.service.ax_client import AxClient
import ax

my_ax = AxClient()

my_ax.create_experiment(
    name='value',
    parameters=[
        {"name": "x", "type": "choice", 'values' : [1,2,3,4,5]}
    ],
    objective_name="y"
)

for i in range(4):   
    parameters, trial_index = my_ax.get_next_trial()
    my_ax.complete_trial(trial_index=trial_index, raw_data=1)

my_ax.save('client.json')
ax.save(my_ax.experiment, 'exp.json')

Then I tried

new = AxClient()

new = new.load('client.json')
new.load_experiment('exp.json')

But then it gives an error like

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-112-184fdad25d6f> in <module>()
     21 
     22 new = new.load('client.json')
---> 23 new.load_experiment('exp.json')

/usr/local/lib/python3.6/dist-packages/ax/service/ax_client.py in load_experiment(self, experiment_name)
    468         if not self.db_settings:
    469             raise ValueError(  # pragma: no cover
--> 470                 "Cannot load an experiment in the absence of the DB settings."
    471                 "Please initialize `AxClient` with DBSettings."
    472             )

ValueError: Cannot load an experiment in the absence of the DB settings.Please initialize `AxClient` with DBSettings.

Is there a simple way to just save-load my optimization?
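
For reference, newer Ax releases expose JSON serialization directly on AxClient, which avoids the DB-settings requirement. A sketch assuming such a version and continuing the example above:

from ax.service.ax_client import AxClient

my_ax.save_to_json_file(filepath="ax_client_snapshot.json")                  # serialize the whole client
restored = AxClient.load_from_json_file(filepath="ax_client_snapshot.json")  # restore it later
parameters, trial_index = restored.get_next_trial()                          # continue optimizing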

Question about setting specified parameters.

Hello.

I am trying to use Ax for a problem that seems well suited to bandit optimization.

I have a question about how to specify particular parameter combinations (arms in Ax) for function evaluation.

For example, I have two parameters, 'x1' in {1, 2, 3} and 'x2' in {1, 2}.

I read this tutorial (https://ax.dev/docs/core.html), and I can set a search space through choice parameters or (integer) range parameters.

The problem is that I only have true evaluation function values for specific combinations of 'x1' and 'x2';
e.g. I have f(x1=1, x2=1) but I don't know f(x1=3, x2=2).

Therefore, I want to specify the search space so that it only contains parameter combinations for which I know the real function values (there is no need to search (x1=3, x2=2) in the example above).

How can I do that? Please help.

Thanks in advance.
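
Not from the Ax docs, but one workaround sketch is to encode each known (x1, x2) combination as a single choice value, so the search space only contains combinations whose function values are known:

from ax.service.ax_client import AxClient

# Only the combinations with known true function values (hypothetical list).
known_combinations = ["1_1", "1_2", "2_1", "2_2", "3_1"]   # encoded as "x1_x2" strings

ax_client = AxClient()
ax_client.create_experiment(
    name="known_combinations_only",   # hypothetical
    parameters=[{"name": "x1_x2", "type": "choice", "values": known_combinations}],
    objective_name="f",
)

parameters, trial_index = ax_client.get_next_trial()
x1, x2 = (int(v) for v in parameters["x1_x2"].split("_"))   # decode back to the original parameters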

pip install not working (installs ax-platform 0.0.0)

ubuntu 19.04
conda python=3.7.4
pytorch 1.2.1
botorch

(py37) yiyin@yiyin-ThinkPad-T460p:$ pip uninstall ax-platform
8.3.0
Uninstalling ax-platform-0.0.0:
Would remove:
/opt/miniconda3/envs/py37/lib/python3.7/site-packages/ax_platform-0.0.0.dist-info/*
Proceed (y/n)? y
Successfully uninstalled ax-platform-0.0.0
(py37) yiyin@yiyin-ThinkPad-T460p:$ pip install ax-platform
8.3.0
Collecting ax-platform
Using cached https://files.pythonhosted.org/packages/89/b4/a51b618c99ea757d051cb1fcf89996ebd3c92acfce82806040c25b54b43f/ax_platform-0.0.0-py3-none-any.whl
Installing collected packages: ax-platform
Successfully installed ax-platform-0.0.0
