Comments (4)
Hi @lsassen, can you share your code, including how you set up the AxClient and the GenerationStrategy, so we can repro? Thanks!
from ax.
Hi @sdaulton. Here is how I set up my AxClient:
- Set up the generation strategy:

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition import UpperConfidenceBound
from botorch.models.gp_regression import FixedNoiseGP

gs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # use this step for all trials
            model_kwargs={
                "surrogate": Surrogate(FixedNoiseGP),
                "botorch_acqf_class": UpperConfidenceBound,
                "acquisition_options": {"beta": 1.96**2},
            },
        ),
    ]
)
- Set up the experiment:

import numpy as np

from ax.core.experiment import Experiment
from ax.core.metric import Metric
from ax.core.objective import Objective
from ax.core.optimization_config import OptimizationConfig
from ax.core.parameter import ParameterType, RangeParameter
from ax.core.runner import Runner
from ax.core.search_space import SearchSpace

search_space = SearchSpace(
    parameters=[
        RangeParameter(
            name="x",
            parameter_type=ParameterType.FLOAT,
            lower=-10 * np.pi,
            upper=10 * np.pi,
        )
    ]
)
optimization_config = OptimizationConfig(
    objective=Objective(
        metric=Metric(name="f(x)", lower_is_better=False),
        minimize=False,
    )
)

# Here, we have a dummy runner that does nothing.
class MyRunner(Runner):
    def run(self, trial):
        trial_metadata = {"name": str(trial.index)}
        return trial_metadata

experiment = Experiment(
    name="my first experiment",
    search_space=search_space,
    optimization_config=optimization_config,
    tracking_metrics=None,
    runner=MyRunner(),
    status_quo=None,
    description=None,
    is_test=False,
    experiment_type=None,
    properties=None,
    default_data_type=None,
)
- Set up the AxClient:

from ax.service.ax_client import AxClient

ax_client = AxClient(
    generation_strategy=gs,
    verbose_logging=True,
    early_stopping_strategy=None,
)
ax_client._set_experiment(
    experiment=experiment, overwrite_existing_experiment=True
)
I am using ax-platform==0.3.5. If you need more input from my side, please let me know. Thank you!
FYI in case it helps in identifying the issue: I am using the Developer API, so I am not using the AxClient at all, and I have a similar issue. In my BO loop, I'm generating trials one by one. During each iteration, I added a call to evaluate_acquisition_function() for the suggested trial.

If I complete the trial, fetch data, and retrain the model every iteration before getting another suggestion, I have no issues. If I try to make more than one trial (and evaluate the acquisition function for each) before fetching the data and retraining, every new trial has all the inputs clamped to an upper bound of 1 (see the output messages in the screenshot below). It is as if the evaluation of the acquisition function normalizes everything, including the bounds, and forgets to undo the transform on the parameter bounds or something. From that point on, everything is essentially clamped to 1 until the model is trained fresh again. If I remove the evaluate_acquisition_function() call, the problem goes away. Hope this helps!
@cheeseheist thanks for this keen observation, this is very helpful. It seems we were transforming the search space in place when calling evaluate_acquisition_function(), which then leads to an issue during the next generation step if the same search space object is reused.
I put up #2386 which I hope will fix this. Would you mind giving this a try to see if this indeed fixes your issue?
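For readers following along, the failure mode can be illustrated with a small, library-free sketch. The class and function names below are hypothetical, not Ax's actual internals: the point is only that normalizing a shared bounds object in place leaves the search space clamped to [0, 1] for every later generation, while transforming a copy does not.

from copy import deepcopy

class ToySearchSpace:
    """Toy stand-in for a search space: one float parameter with bounds."""
    def __init__(self, lower, upper):
        self.lower = lower
        self.upper = upper

def evaluate_acqf_inplace(space, x):
    """Buggy sketch: normalizes the point AND the caller's bounds in place."""
    x_norm = (x - space.lower) / (space.upper - space.lower)
    space.lower, space.upper = 0.0, 1.0  # mutation leaks back to the caller
    return x_norm

def evaluate_acqf_copy(space, x):
    """Fixed sketch: transform a copy, so the caller's space is untouched."""
    space = deepcopy(space)
    x_norm = (x - space.lower) / (space.upper - space.lower)
    space.lower, space.upper = 0.0, 1.0  # only the local copy is mutated
    return x_norm

space = ToySearchSpace(lower=-10.0, upper=10.0)
evaluate_acqf_inplace(space, 5.0)
print(space.upper)  # 1.0 -- bounds clobbered; later trials clamp to 1

space = ToySearchSpace(lower=-10.0, upper=10.0)
evaluate_acqf_copy(space, 5.0)
print(space.upper)  # 10.0 -- bounds intact for the next generation step

This matches the symptom above: the first in-place evaluation rewrites the bounds to the unit cube, so every subsequent candidate is generated against (and clamped to) [0, 1] until the model and search space are rebuilt from scratch.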
from ax.