Comments (5)
@riyadparvez - I'm assuming you mean setting certain parameters that you want evaluated during the exploration phase before Bayesian optimization actually kicks in?
In that case, you can just attach custom trials to Ax, pass them to your evaluation function, and then report the result back to Ax (if using the Service API). You can see an example here.
In the case of a one-dimensional search space like you have above, it would mean:
params1, trial_index1 = ax.attach_trial(parameters={"x1": 0.0})
params2, trial_index2 = ax.attach_trial(parameters={"x1": 1.0})  # e.g., a second point to pre-evaluate
# run your evaluation here...
ax.complete_trial(trial_index1, raw_data=data1)  # data1/data2: your observed results
ax.complete_trial(trial_index2, raw_data=data2)
If you have more than one parameter, you will have to set the other parameters manually as well. At this point, we don't have functionality that tells Ax to try certain values of one parameter in a range while leaving the other parameters completely flexible (at least through the Service API). This is essentially the problem of putting a strong prior on where you want the quasi-random search to go. The closest you could come to that, if you have a lot of parameters and don't want to specify them manually, is to do something custom via the developer API: generate points from multiple search spaces, one a broad random search and one a narrower search, as in the sketch below. I can show you how to do that in more detail if you're interested, but hopefully the custom trials address your needs.
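For reference, here's a minimal sketch of that multi-search-space idea using the developer API's Models registry. It assumes a two-parameter problem, that ax is your AxClient (as in the snippet above), and that the specific parameter names and bounds are placeholders; treat it as illustrative rather than the official recipe:
from ax import ParameterType, RangeParameter, SearchSpace
from ax.modelbridge.registry import Models
# Broad search space covering the full parameter ranges.
broad = SearchSpace(parameters=[
    RangeParameter(name="x1", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0),
    RangeParameter(name="x2", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0),
])
# Narrower search space concentrating quasi-random points where you expect good values.
narrow = SearchSpace(parameters=[
    RangeParameter(name="x1", parameter_type=ParameterType.FLOAT, lower=0.0, upper=0.2),
    RangeParameter(name="x2", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0),
])
# Draw Sobol points from each space and attach them as custom trials.
for space, n in [(broad, 5), (narrow, 5)]:
    sobol = Models.SOBOL(search_space=space)
    for arm in sobol.gen(n=n).arms:
        params, trial_index = ax.attach_trial(parameters=arm.parameters)
        # ...evaluate params and report back via ax.complete_trial as usual.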
Thanks a lot! It worked!
Also, is it possible to find out which trials are custom trials and which trials have been generated by Ax? I know there's an easy workaround; I was just thinking it'd be nice to have an API for this in Ax.
Awesome!
Not in a very straightforward way at the moment, although it's a TODO for us that we're going to roll into the functionality for returning all trials from the Service API that @lena-kashtelyan mentioned here: #132.
In the meantime, here's an example of what you can do (I based this off of the Service API tutorial, https://ax.dev/versions/latest/tutorials/gpei_hartmann_service.html):
import numpy as np
from ax.service.ax_client import AxClient
from ax.utils.measurement.synthetic_functions import hartmann6
ax = AxClient()
ax.create_experiment(
name="hartmann_test_experiment",
parameters=[
{
"name": "x1",
"type": "range",
"bounds": [0.0, 1.0],
"value_type": "float", # Optional, defaults to inference from type of "bounds".
"log_scale": False, # Optional, defaults to False.
},
{
"name": "x2",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x3",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x4",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x5",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x6",
"type": "range",
"bounds": [0.0, 1.0],
},
],
objective_name="hartmann6",
minimize=True, # Optional, defaults to False.
)
def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x ** 2).sum()), 0.0)}
# add a custom arm
custom_params, trial_index = ax.attach_trial(parameters={"x1": 0.0, "x2": 0.0, "x3": 0.0, "x4": 1.0, "x5": 1.0, "x6": 1.0})
ax.complete_trial(trial_index=trial_index, raw_data=evaluate(custom_params))
for i in range(15):
    print(f"Running trial {i+1}/15...")
    parameters, trial_index = ax.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
# here's how you get the origin of the trials (what model created them)
ax.experiment.trials[0].generator_run._model_key   # None, because it's a custom configuration
ax.experiment.trials[1].generator_run._model_key   # 'Sobol', because it's a quasi-random configuration
ax.experiment.trials[12].generator_run._model_key  # 'GPEI', because it was generated via Bayesian optimization
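Building on that, here's a quick sketch (my own helper, not an existing Ax API) that groups all trial indices by the model that produced them, so you can pick out the custom trials in one pass:
from collections import defaultdict
# Group trial indices by the model key of the generator run that produced them.
# Custom (attached) trials show up under the key None.
trials_by_model = defaultdict(list)
for idx, trial in ax.experiment.trials.items():
    trials_by_model[trial.generator_run._model_key].append(idx)
print(dict(trials_by_model))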
@riyadparvez, did @kkashin's answer fully take care of your issue?
@lena-kashtelyan yes, it does! Sorry for the late reply! Thanks a lot!