
Comments (7)

eytan commented on May 2, 2024


mpolson64 commented on May 2, 2024

Hi, thanks for reaching out.

To your first question: is it possible there is noise in the system you're trying to optimize? Or could there be some nonstationarity in your readings (i.e., the output changes over time in a way that is not related to your parameterization)? Both of these make it more difficult for Bayesian optimization to perform well. Our methods do their best to estimate noise internally and optimize for the true value, but sometimes there is simply too much noise for BO. You can use the function interact_cross_validation_plotly to get a plot that should show how well Ax's model is performing on your data.
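
For reference, here is a minimal sketch of how one might pull up that cross-validation diagnostic, assuming the Service API with an existing `AxClient` instance named `ax_client` that has already completed some trials (the variable names are placeholders):

```python
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.diagnostic import interact_cross_validation_plotly

# Cross-validate the current surrogate model: each observation is held out
# in turn and predicted from the remaining observations.
model = ax_client.generation_strategy.model
cv_results = cross_validate(model)

# Interactive plot of predicted vs. observed outcomes. Points far from the
# diagonal (relative to their error bars) suggest the model fits poorly,
# which often means the observations are too noisy for BO to make progress.
fig = interact_cross_validation_plotly(cv_results)
fig.show()
```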

On your second question: could you elaborate on what you mean by changing the flow and "carry over"?


cyrilmyself commented on May 2, 2024

@mpolson64 thank you for your reply.

For the second question, changing the flow works as follows:
first, I use Ax's Bayesian optimization to produce a set of parameters and deploy them in an A/B test with one group of users;
then, I deploy those parameters in an A/B test with a different group of users.

Carry over means: I first deploy parameters in an A/B test with a group of users, and after a while I change the parameters for the same group of users. The influence of the first set of parameters on those users lasts for a while even after I change the parameters.


maor096 commented on May 2, 2024

We encountered the same problem. We are using A/B experiments for hyperparameter tuning, with 3 experimental groups, 3 optimization goals, and 1 constraint. Specific information can be found in the JSON file below. We have run into the following issue: in the 15th and 16th rounds, we found some promising hyperparameter combinations, for example {"read_quality_factor": 1, "duration_factor": 0.5, "pos_interaction_factor": 0.2, "score_read_factor": 1}, with target effects of {'a': +0.98%, 'b': +0.68%, 'c': +1.49%, 'd': +0.67%}, where the p-values range from 0.005 to 0.08. However, when we run large-scale A/B experiments with these promising hyperparameter combinations, we often find that the effects cannot be replicated. We would like to ask two questions:
1. Does Facebook's hyperparameter-tuning A/B experimentation encounter similar issues? We already use CUPED to reduce the variance of the experimental data in each round. What optimization suggestions do you have for issues like this?
2. For each experimental group, the same batch of users is used every time hyperparameters are deployed. We suspect that the inability to replicate the experimental effects may be related to carry over. Does Facebook's hyperparameter-tuning A/B experimentation reshuffle the experimental users when deploying hyperparameters?
snapshot.json


maor096 commented on May 2, 2024

Hi all, I would definitely recommend "reshuffling" (or simply creating a new experiment) for each batch; otherwise you have carryover effects. Variance reduction is always a good idea. We use regression adjustment with pre-treatment covariates along the lines of CUPED for most A/B tests. Second, 3 arms per batch is probably inefficient / problematic. Typically we use at least 8, but sometimes as many as 64. For 3 parameters, though, maybe 5 could be OK. The GP borrows strength across conditions, so you can make the allocations smaller than you normally would if you wanted an appropriately powered A/B test. Note that A/B tests cause some non-stationarity, in that treatment effects change over time. I recommend making sure each batch runs for enough time to "settle down", and using the same number of days per batch. There is a more sophisticated adjustment procedure that we use at Meta. If you send me an email (which you can find at http://eytan.GitHub.io) I can send you a preprint that explains the considerations and procedures in more detail. Best, E
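
For readers unfamiliar with CUPED-style regression adjustment, here is a minimal sketch with synthetic data (the arrays and names are hypothetical placeholders for per-user observations; the preprint mentioned above presumably describes a more sophisticated procedure):

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED-style regression adjustment with a pre-treatment covariate.

    y: in-experiment metric per user
    x: pre-experiment covariate per user, e.g. the same metric measured
       before the experiment started
    Returns y - theta * (x - mean(x)), which has the same mean as y but
    lower variance when x and y are correlated.
    """
    theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

# Toy example: estimate a treatment effect from adjusted means.
rng = np.random.default_rng(0)
x_ctrl = rng.normal(10, 2, 10_000)              # pre-period metric, control
y_ctrl = x_ctrl + rng.normal(0, 1, 10_000)      # in-experiment metric, control
x_trt = rng.normal(10, 2, 10_000)               # pre-period metric, treatment
y_trt = x_trt + 0.1 + rng.normal(0, 1, 10_000)  # +0.1 true effect

effect = cuped_adjust(y_trt, x_trt).mean() - cuped_adjust(y_ctrl, x_ctrl).mean()
print(f"estimated treatment effect: {effect:.3f}")
```

Because the adjustment only subtracts a (centered) pre-treatment term, it leaves the expected treatment effect unchanged while shrinking its standard error, which is why smaller allocations per arm can still be informative.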


Thank you for your suggestion.


maor096 commented on May 2, 2024

Hi @eytan,
I have sent you an email based on the information at http://eytan.GitHub.io and am looking forward to your reply. Thank you very much.


cyrilmyself commented on May 2, 2024

Hi @eytan, I would also like the preprint that explains the considerations and procedures you use at Meta; could you send it to me by email?
My email address is [email protected].
I am really looking forward to your reply.

