
jbrea / BayesianOptimization.jl

91 stars · 8 watchers · 16 forks · 174 KB

Bayesian optimization for Julia

License: Other

Languages: Julia 100.00%
Topics: machine-learning, optimization, bayesian-methods, bayesian-optimization, gaussian-processes, julia

BayesianOptimization.jl's People

Contributors

github-actions[bot], jbrea, juliatagbot, platawiec, rohitrathore1, samuelbelko, tpapp



BayesianOptimization.jl's Issues

Feature request: pass initial evaluation points as argument

Sometimes one already has the function precomputed at some set of points. It would be nice to be able to pass these to the optimizer somehow. I thought I could just do...

opt = BOpt(...)
xs = [rand(dim) for i in 1:500];
ys = Int(opt.sense) .* f.(xs);
BayesianOptimization.update!(opt.model, hcat(xs...), ys)
boptimize!(opt)

but I realize that this makes boptimize! skip initialise_function! (and hence nlopt_setup). I'm actually not sure how the code above runs without error... In any case, maybe it makes sense to include this as part of the standard interface?
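
A hypothetical sketch of what such an interface could look like (the keyword names below are purely illustrative, not existing BOpt arguments):

# Illustrative only: `initial_points` / `initial_values` are not part of the current BOpt signature.
opt = BOpt(f, model, UpperConfidenceBound(), modeloptimizer,
           lowerbounds, upperbounds,
           initial_points = xs,    # precomputed inputs
           initial_values = ys)    # corresponding (sense-adjusted) function values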

Restarting optimization

If I want to restart the optimization, I might do something like the following (assume the setup below, which matches the code in the README):

SETUP

using BayesianOptimization, GaussianProcesses, Distributions

f(x) = sum((x .- 1).^2) + randn()

model = ElasticGPE(2,
                   mean = MeanConst(0.),         
                   kernel = SEArd([0., 0.], 5.),
                   logNoise = 0.,
                   capacity = 3000)
set_priors!(model.mean, [Normal(1, 2)])

modeloptimizer = MAPGPOptimizer(every = 50, noisebounds = [-4, 3],
                                kernbounds = [[-1, -1, 0], [4, 4, 10]],
                                maxeval = 40)
opt = BOpt(f,
           model,
           UpperConfidenceBound(),
           modeloptimizer,                        
           [-5., -5.], [5., 5.],
           repetitions = 5,
           maxiterations = 100,
           sense = Min,
           verbosity = Progress)

result = boptimize!(opt)

RESTART OPTIMIZATION

opt = BOpt(f,
           model,
           UpperConfidenceBound(),
           modeloptimizer,                        
           [-5., -5.], [5., 5.],
           lhs_iterations = 0,
           maxiterations = 10,
           sense = Min,
           verbosity = Progress)

result = boptimize!(opt)

This currently gives the following error:

MethodError: no method matching append!(::GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd{Float64},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}}, ::Array{Any,1}, ::Array{Float64,1})

If I set lhs_iterations=1, and run for a few cycles, it seems that the optimization forgets about the previous optima:

opt = BOpt(f,
           model,
           UpperConfidenceBound(),
           modeloptimizer,                        
           [-5., -5.], [5., 5.],
           lhs_iterations = 1,
           maxiterations = 10,
           sense = Min,
           verbosity = Progress)

result = boptimize!(opt)

which reports observed_optimum = -0.8571058809745942, compared to the pre-restart optimum of observed_optimum = -2.720197115023963, whose observed_optimizer is still in the model.

Which leads to the following question: is there currently a good way to restart optimizations (maybe I missed something in the source), and, if this is the best way to do so, can the above issues be fixed?
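
A minimal sketch of the kind of restart I have in mind, reusing the already-fitted model and the existing BOpt instead of rebuilding them. It assumes a helper such as maxiterations! to extend the iteration budget; if no such helper exists, treat the name as hypothetical:

result = boptimize!(opt)   # first run, as in the README
maxiterations!(opt, 10)    # extend the iteration budget (assumed helper)
result = boptimize!(opt)   # continue from the current model and optimum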

SpecialFunctions dependency

First thanks for the great work, most useful!

It seems that SpecialFunctions is not listed in the deps of the package. When the dependency is installed independently, here is the warning:

julia> using BayesianOptimization
[ Info: Precompiling BayesianOptimization [4c6ed407-134f-591c-93fa-e0f7c164a0ec]
┌ Warning: Package BayesianOptimization does not have SpecialFunctions in its dependencies:
│ - If you have BayesianOptimization checked out for development and have
│   added SpecialFunctions as a dependency but haven't updated your primary
│   environment's manifest file, try `Pkg.resolve()`.
│ - Otherwise you may need to report an issue with BayesianOptimization
└ Loading SpecialFunctions into BayesianOptimization from project dependency, future warnings for BayesianOptimization are suppressed.
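
If the warning is indeed caused by a missing entry in the package's Project.toml, a sketch of the usual fix (run from a checkout of the package; the path below is illustrative) would be:

using Pkg
Pkg.activate("path/to/BayesianOptimization")  # activate the package's own environment (illustrative path)
Pkg.add("SpecialFunctions")                   # record the dependency in Project.toml under [deps]
Pkg.resolve()                                 # update the manifest, as the warning suggests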

InterruptException from `optimizemodel!`

I have occasionally been getting this error:

InterruptException:                                                                                                                  
optimizemodel! at /usr/people/flc2/.julia/packages/BayesianOptimization/poykT/src/models/gp.jl:72                                    
optimizemodel! at /usr/people/flc2/.julia/packages/BayesianOptimization/poykT/src/models/gp.jl:42 [inlined]                          
macro expansion at /usr/people/flc2/.julia/packages/TimerOutputs/ZmKD7/src/TimerOutput.jl:190 [inlined]                              
boptimize! at /usr/people/flc2/.julia/packages/BayesianOptimization/poykT/src/BayesianOptimization.jl:150   

My guess is that this is an NLopt thing. (see edit) Unfortunately, I'm not able to reproduce this reliably—even given the same function it will sometimes happen and sometimes not depending on the random seed.

It might be relevant that I don't put bounds on the kernel parameters. But my thinking is that an easy (and reasonable) thing to do is to wrap the call to NLOpt in a try-catch block and print a warning if there's an exception. Given that optimizing the model parameters is not essential to the completion of the algorithm, I think this would be preferable to letting the error terminate the optimization process, which could have been going for hours.
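
A minimal sketch of the suggested behavior (not the package's current code; run_nlopt! is a stand-in name for whatever internally calls NLopt):

function optimizemodel_nonfatal!(run_nlopt!, model)
    try
        run_nlopt!(model)                      # hyperparameter optimization via NLopt
    catch e
        e isa InterruptException || rethrow()  # only swallow the interrupt case discussed here
        @warn "Hyperparameter optimization was interrupted; keeping previous hyperparameters." exception = e
    end
    return model
end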

EDIT: Looking at the code, I see that you intentionally throw an InterruptException, so I guess you disagree that letting the optimization continue is the best course of action. Could we add a flag that determines what to do in this case? Or am I incorrect that you could reasonably continue after hitting that error?

Ambiguous error message for incorrect type

It took me some time to figure out that the following "scary" error message was due to my using Integer instead of Float64 values for lowerbounds/upperbounds:

ERROR: MethodError: no method matching BOpt(::typeof(f), ::BayesianOptimization.Sense, ::GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd{Float64},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}}, ::UpperConfidenceBound{BrochuBetaScaling}, ::NamedTuple{(:method, :restarts, :maxeval),Tuple{Symbol,Int64,Int64}}, ::MAPGPOptimizer{NamedTuple{(:domean, :kern, :noise, :lik, :meanbounds, :kernbounds, :noisebounds, :likbounds, :method, :maxeval),Tuple{Bool,Bool,Bool,Bool,Nothing,Array{Array{Int64,1},1},Array{Int64,1},Nothing,Symbol,Int64}}}, ::Array{Int64,1}, ::Array{Int64,1}, ::Float64, ::Array{Float64,1}, ::Float64, ::Array{Float64,1}, ::BayesianOptimization.IterationCounter, ::BayesianOptimization.DurationCounter, ::NLopt.Opt, ::BayesianOptimization.Verbosity, ::Int64, ::Int64, ::TimerOutputs.TimerOutput)
Closest candidates are:
  BOpt(::F, ::BayesianOptimization.Sense, ::M, ::A, ::AO, ::MO, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64, ::Array{Float64,1}, ::Float64, ::Array{Float64,1}, ::BayesianOptimization.IterationCounter, ::BayesianOptimization.DurationCounter, ::NLopt.Opt, ::BayesianOptimization.Verbosity, ::Int64, ::Int64, ::TimerOutputs.TimerOutput) where {F, M, A, AO, MO} at /home/ngphuoc/.julia/dev/BayesianOptimization/src/BayesianOptimization.jl:28
  BOpt(::Any, ::Any, ::Any, ::Any, ::Any, ::Any; sense, maxiterations, maxduration, acquisitionoptions, repetitions, verbosity, lhs_iterations) at /home/ngphuoc/.julia/dev/BayesianOptimization/src/BayesianOptimization.jl:59

It would be useful if the package promoted the types automatically.
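
For anyone hitting the same thing, a small sketch of the user-side workaround (the proper fix in the package would presumably be to promote automatically):

# Passing Float64 bounds avoids the MethodError shown above.
lowerbounds = float.([-5, -5])   # [-5.0, -5.0]
upperbounds = float.([5, 5])     # [5.0, 5.0]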

Feature Request: Parallel acquisition functions

Add implementations for the q-class of acquisition functions (e.g. q-EI), which return a set of q query points that are then evaluated on the function in parallel.

I think implementation-wise this is fairly straightforward. On the interface side, the question is how the user defines the parallel function. I think this should accept any function f which takes a vector of vectors and returns a vector of results, so that the user is free to specify how that parallel operation takes place. I'm open to thoughts on what an automatically parallel interface would look like.
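
As a sketch of the contract described above (names are illustrative; pmap just stands in for whatever parallel mechanism the user prefers):

using Distributed

# A toy single-point objective, standing in for the expensive function.
expensive_f(x) = sum((x .- 1).^2)

# The proposed "batch" objective: takes a vector of query points (each a vector)
# and returns a vector of results, parallelised however the user sees fit.
batch_objective(xs::Vector{Vector{Float64}}) = pmap(expensive_f, xs)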

References:
[1] Wang, Clark, Liu, Frazier. Parallel Bayesian Global Optimization of Expensive Functions https://arxiv.org/pdf/1602.05149.pdf
[2] Wilson, Moriconi, Hutter, Deisenroth. The reparameterization trick for acquisition functions https://arxiv.org/pdf/1712.00424.pdf

Help with hyperparameter optimization

I'm trying to get the following code to work, but I get an error. Any suggestions for a solution?

using StructuredOptimization, BayesianOptimization, GaussianProcesses, LinearAlgebra, Random, Statistics

#Simulated data (n observations, p variables, tr true variables, sig error)
n, p, tr, sig = 500, 1000, 10, 0.1
X = randn(n, p)
b_true = [randn(tr)..., zeros(p-tr)...]
y = X*b_true+ sig*randn(n)
Xtrain = X[1:400,:]
Xtest = X[401:500,:]
ytrain = y[1:400]
ytest = y[401:500]

b = Variable(p);               # initialize optimization variable

λ = 1e-2*norm(Xtrain'*ytrain,Inf);      # define λ

function l1(λ)     # opt function
    @minimize ls(Xtrain*b - ytrain) + λ*norm(b, 1) with ZeroFPR();# solve problem
    bhat = copy(~b)
    Ytestpred = Xtest*bhat
    MSEtest = (0.5*norm(Ytestpred-ytest,2)^2)/length(ytest)
    return MSEtest
end

# optimize lambda using BO

model = ElasticGPE(1, mean = MeanConst(0.),
                   kernel = SEArd([0.], 5.), logNoise = 0.)
modeloptimizer = MAPGPOptimizer(every = 50, noisebounds = [-2., 3],
                                kernbounds = [[-1, 0], [4, 10]], maxeval = 40)
opt = BOpt(λ->l1(λ), model, UpperConfidenceBound(),
           modeloptimizer, [0], [1.],
           maxiterations = 5, sense = Min, repetitions = 5,
           acquisitionoptions = (maxeval = 4000, restarts = 50),
           verbosity = Progress)
result = boptimize!(opt)

The error message:

julia> opt = BOpt(λ->l1(λ), model, UpperConfidenceBound(),
                  modeloptimizer, [0], [1.],
                  maxiterations = 5, sense = Min, repetitions = 5,
                  acquisitionoptions = (maxeval = 4000, restarts = 50),
                  verbosity = Progress)
ERROR: MethodError: no method matching ScaledSobolIterator(::Array{Int64,1}, ::Array{Float64,1}, ::Int64, ::Sobol.SobolSeq{1})
Closest candidates are:
  ScaledSobolIterator(::Array{T,1}, ::Array{T,1}, ::Int64, ::Sobol.SobolSeq{D}) where {T, D} at /home/pawn0002/.julia/packages/BayesianOptimization/AxbaR/src/utils.jl:64
  ScaledSobolIterator(::Any, ::Any, ::Any; seq) at /home/pawn0002/.julia/packages/BayesianOptimization/AxbaR/src/utils.jl:79
Stacktrace:
 [1] #ScaledSobolIterator#1(::Sobol.SobolSeq{1}, ::Type, ::Array{Int64,1}, ::Array{Float64,1}, ::Int64) at /home/pawn0002/.julia/packages/BayesianOptimization/AxbaR/src/utils.jl:80
 [2] ScaledSobolIterator(::Array{Int64,1}, ::Array{Float64,1}, ::Int64) at /home/pawn0002/.julia/packages/BayesianOptimization/AxbaR/src/utils.jl:79
 [3] (::getfield(Core, Symbol("#kw#Type")))(::NamedTuple{(:maxiterations, :sense, :repetitions, :acquisitionoptions, :verbosity),Tuple{Int64,BayesianOptimization.Sense,Int64,NamedTuple{(:maxeval, :restarts),Tuple{Int64,Int64}},BayesianOptimization.Verbosity}}, ::Type{BOpt}, ::Function, ::GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd{Float64},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}}, ::UpperConfidenceBound{BrochuBetaScaling}, ::MAPGPOptimizer{NamedTuple{(:domean, :kern, :noise, :lik, :meanbounds, :kernbounds, :noisebounds, :likbounds, :method, :maxeval),Tuple{Bool,Bool,Bool,Bool,Nothing,Array{Array{Int64,1},1},Array{Float64,1},Nothing,Symbol,Int64}}}, ::Array{Int64,1}, ::Array{Float64,1}) at ./none:0
 [4] top-level scope at none:0
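
The MethodError appears to come from the lower bound [0] being an integer vector while the upper bound [1.] is a Float64 vector, so ScaledSobolIterator(::Array{Int64,1}, ::Array{Float64,1}, ...) has no matching method. A likely workaround (a sketch, not verified beyond this error) is to pass both bounds as Float64 vectors:

opt = BOpt(λ -> l1(λ), model, UpperConfidenceBound(),
           modeloptimizer, [0.], [1.],              # Float64 bounds on both sides
           maxiterations = 5, sense = Min, repetitions = 5,
           acquisitionoptions = (maxeval = 4000, restarts = 50),
           verbosity = Progress)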

Full Bayesian approach to hyperparameters

Using the full posterior distribution with the hyperparameters as unknown variables is known to give better results in Bayesian optimization (see https://arxiv.org/pdf/1206.2944.pdf).

A user could opt-in to using this technique by replacing the MAPGPOptimizer with MCMCEstimate or another appropriate name. GaussianProcesses provides an mcmc function to estimate hyperparameters but my understanding of the source is that it does not marginalize over the hyperparameters and compute an integrated acquisition function (which I suppose wouldn't make sense within the scope of GaussianProcesses).
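
As a rough sketch of the integrated-acquisition idea (purely illustrative names, not GaussianProcesses or BayesianOptimization API):

using Statistics

# Average a per-hyperparameter acquisition value over MCMC draws of θ.
# `acquisition_for(θ)` is a placeholder returning an acquisition function for a
# model with hyperparameters θ; the θ samples would come from something like
# GaussianProcesses' mcmc.
integrated_acquisition(x, θ_samples, acquisition_for) =
    mean(acquisition_for(θ)(x) for θ in θ_samples)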

Thoughts on including something like this? The way I see it, the work would break down as follows:

  • Include benchmarks
  • Decide on MCMC implementation (do we introduce additional dependencies, write in-line code, etc.)
  • Decide on interface for various acquisition functions under MCMCEstimate
  • Implement prototype
  • Compare against tests/benchmarks

Absolute value in expression

I noticed that in the line

abs(μ - a.τ) * normal_cdf(μ - a.τ, σ²) + σ² * normal_pdf(μ - a.τ, σ²)

one takes the absolute value abs(μ - a.τ), as opposed to perhaps μ - a.τ, which is what equation (4) in "A Tutorial on Bayesian Optimization of..." suggests.
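
For reference, the standard expected-improvement expression I am comparing against (maximization, incumbent value τ, σ(x) > 0) is usually written as

EI(x) = (μ(x) - τ) * Φ(Z) + σ(x) * φ(Z),   with Z = (μ(x) - τ) / σ(x),

where Φ and φ are the standard normal CDF and PDF; note there is no absolute value on the μ(x) - τ factor in this form.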

I am looking at the code to see whether the repository could help with using BO on my problem.
Thanks for your time.

ERROR: UndefVarError: GP.bounds not defined

Hi, thanks for the package!

I tried to run the example and got the following error:

julia> result = boptimize!(opt)
ERROR: UndefVarError: bounds not defined
Stacktrace:
 [1] #optimizemodel!#32(::Bool, ::Bool, ::Bool, ::Bool, ::Nothing, ::Array{Array{Int64,1},1}, ::Array{Int64,1}, ::Symbol, ::Int64, ::typeof(BayesianOptimization.optimizemodel!), ::GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd,ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}}) at /home/ngphuoc/.julia/dev/BayesianOptimization/src/models/gp.jl:26
 [2] #optimizemodel! at ./none:0 [inlined]
 [3] optimizemodel! at /home/ngphuoc/.julia/dev/BayesianOptimization/src/BayesianOptimization.jl:45 [inlined]
 [4] macro expansion at ./util.jl:213 [inlined]
 [5] macro expansion at ./util.jl:212 [inlined]
 [6] initialise_model!(::BOpt{typeof(f),GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd,ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}},ExpectedImprovement,MLGPOptimizer{NamedTuple{(:noisebounds, :kernbounds, :maxeval),Tuple{Array{Int64,1},Array{Array{Int64,1},1},Int64}}}}) at /home/ngphuoc/.julia/dev/BayesianOptimization/src/BayesianOptimization.jl:127
 [7] boptimize!(::BOpt{typeof(f),GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd,ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}},ExpectedImprovement,MLGPOptimizer{NamedTuple{(:noisebounds, :kernbounds, :maxeval),Tuple{Array{Int64,1},Array{Array{Int64,1},1},Int64}}}}) at /home/ngphuoc/.julia/dev/BayesianOptimization/src/BayesianOptimization.jl:144
 [8] top-level scope at none:0

Feature Request: Ask-tell interface

Hi!
I would like to collect some opinions on whether it is reasonable to additionally adopt an ask-tell interface, like in Dragonfly (docs here), in the future.

I think that this is the natural interface for Bayesian optimization solvers, since the costly function evaluation might need to be scheduled or externally computed, or it can even be a real-world experiment.

An ask-tell interface provides the necessary flexibility for other libraries to build on top of BO solvers, for instance the platform Ax, a sort of "decision support system for running sequential experiments optimally" built on top of BoTorch (e.g., see its Service API example). Maybe something similar could also emerge in the Julia ecosystem and use this package. The closest .jl projects I have found so far are ExperimentalDesign.jl and Hyperopt.jl, the latter being more of an ad-hoc utility.
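
A rough sketch of the kind of loop I have in mind (ask and tell! are proposed names, not existing BayesianOptimization.jl functions, and opt stands for a BOpt-like object):

run_experiment(x) = sum((x .- 1).^2) + randn()   # stand-in for an external / real-world evaluation

for i in 1:20
    x = ask(opt)              # optimizer proposes the next query point
    y = run_experiment(x)     # evaluated externally (cluster job, lab experiment, ...)
    tell!(opt, x, y)          # report the observation back to the optimizer
end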

Thanks!

Add Docs

Hi,
I was thinking about adding docs using Documenter.jl; what do you think about it? I don't really know how it works yet, but I could try it out.

Error while running example in README.md

I would like to use BayesianOptimization.jl to optimize some hyperparameters of a neural network.
I found your package, which looks very interesting.
I tried to run the first example in the README.md but got the following error:

Cannot `convert` an object of type GaussianProcesses.EmptyData to an object of type GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}

I use an empty environment where I only install BayesianOptimization, GaussianProcesses and Distributions.
These are the versions that get installed on my system:

(TestBayesianOptimization) pkg> st
Status `~/Test/TestBayesianOptimization/Project.toml`
  [4c6ed407] BayesianOptimization v0.2.2
  [31c24e10] Distributions v0.22.6
  [891a1506] GaussianProcesses v0.11.2

(TestBayesianOptimization) pkg> ^C

julia> include("/home/abarth/Julia/share/test_bayes_optim.jl")

The file test_bayes_optim.jl contains the first example on the README.md page.
The last call result = boptimize!(opt) results in the following error.

Do you have an idea of what could be the issue?

ERROR: LoadError: MethodError: Cannot `convert` an object of type GaussianProcesses.EmptyData to an object of type GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}
Closest candidates are:
  convert(::Type{T}, ::T) where T at essentials.jl:171
  GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}}(::Any) where D at /home/abarth/.julia/packages/GaussianProcesses/sed6i/src/kernels/stationary.jl:73
Stacktrace:
 [1] setproperty!(::GPE{ElasticArrays.ElasticArray{Float64,2,1,Array{Float64,1}},ElasticArrays.ElasticArray{Float64,1,0,Array{Float64,1}},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.Scalar{Float64}}, ::Symbol, ::GaussianProcesses.EmptyData) at ./Base.jl:34
 [2] fit!(::GPE{ElasticArrays.ElasticArray{Float64,2,1,Array{Float64,1}},ElasticArrays.ElasticArray{Float64,1,0,Array{Float64,1}},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.Scalar{Float64}}, ::ElasticArrays.ElasticArray{Float64,2,1,Array{Float64,1}}, ::ElasticArrays.ElasticArray{Float64,1,0,Array{Float64,1}}) at /home/abarth/.julia/packages/GaussianProcesses/sed6i/src/GPE.jl:132
 [3] update!(::GPE{ElasticArrays.ElasticArray{Float64,2,1,Array{Float64,1}},ElasticArrays.ElasticArray{Float64,1,0,Array{Float64,1}},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.Scalar{Float64}}, ::Array{Float64,2}, ::Array{Float64,1}) at /home/abarth/.julia/packages/BayesianOptimization/AxbaR/src/models/gp.jl:16
 [4] macro expansion at /home/abarth/.julia/packages/TimerOutputs/NvIUx/src/TimerOutput.jl:229 [inlined]
 [5] initialise_model!(::BOpt{typeof(f),GPE{ElasticArrays.ElasticArray{Float64,2,1,Array{Float64,1}},ElasticArrays.ElasticArray{Float64,1,0,Array{Float64,1}},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.Scalar{Float64}},UpperConfidenceBound{BrochuBetaScaling},NamedTuple{(:method, :restarts, :maxeval, :maxtime),Tuple{Symbol,Int64,Int64,Float64}},MAPGPOptimizer{NamedTuple{(:domean, :kern, :noise, :lik, :meanbounds, :kernbounds, :noisebounds, :likbounds, :method, :maxeval),Tuple{Bool,Bool,Bool,Bool,Nothing,Array{Array{Int64,1},1},Array{Int64,1},Nothing,Symbol,Int64}}},ScaledSobolIterator{Float64,2}}) at /home/abarth/.julia/packages/BayesianOptimization/AxbaR/src/BayesianOptimization.jl:128
 [6] boptimize!(::BOpt{typeof(f),GPE{ElasticArrays.ElasticArray{Float64,2,1,Array{Float64,1}},ElasticArrays.ElasticArray{Float64,1,0,Array{Float64,1}},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}},GaussianProcesses.Scalar{Float64}},UpperConfidenceBound{BrochuBetaScaling},NamedTuple{(:method, :restarts, :maxeval, :maxtime),Tuple{Symbol,Int64,Int64,Float64}},MAPGPOptimizer{NamedTuple{(:domean, :kern, :noise, :lik, :meanbounds, :kernbounds, :noisebounds, :likbounds, :method, :maxeval),Tuple{Bool,Bool,Bool,Bool,Nothing,Array{Array{Int64,1},1},Array{Int64,1},Nothing,Symbol,Int64}}},ScaledSobolIterator{Float64,2}}) at /home/abarth/.julia/packages/BayesianOptimization/AxbaR/src/BayesianOptimization.jl:138
 [7] top-level scope at /home/abarth/Julia/share/test_bayes_optim.jl:32
 [8] include(::String) at ./client.jl:439
 [9] top-level scope at REPL[45]:1
 [10] eval(::Module, ::Any) at ./boot.jl:331
 [11] eval_user_input(::Any, ::REPL.REPLBackend) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
 [12] run_backend(::REPL.REPLBackend) at /home/abarth/.julia/packages/Revise/XFtoQ/src/Revise.jl:1162
 [13] top-level scope at none:0
in expression starting at /home/abarth/Julia/share/test_bayes_optim.jl:32

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Example no longer working

The following example used to work fine, but it no longer does:

using BayesianOptimization, GaussianProcesses, Distributions

f(x) = sum((x .- 1).^2) + randn()                # noisy function to minimize

# Choose as a model an elastic GP with input dimensions 2.
# The GP is called elastic, because data can be appended efficiently.
model = ElasticGPE(2,                            # 2 input dimensions
                   mean = MeanConst(0.),
                   kernel = SEArd([0., 0.], 5.),
                   logNoise = 0.,
                   capacity = 3000)              # the initial capacity of the GP is 3000 samples.
set_priors!(model.mean, [Normal(1, 2)])

# Optimize the hyperparameters of the GP using maximum a posteriori (MAP) estimates every 50 steps
modeloptimizer = MAPGPOptimizer(every = 50, noisebounds = [-4, 3],       # bounds of the logNoise
                                kernbounds = [[-1, -1, 0], [4, 4, 10]],  # bounds of the 3 parameters GaussianProcesses.get_param_names(model.kernel)
                                maxeval = 40)
opt = BOpt(f,
           model,
           UpperConfidenceBound(),                # type of acquisition
           modeloptimizer,
           [-5., -5.], [5., 5.],                  # lowerbounds, upperbounds
           repetitions = 5,                       # evaluate the function for each input 5 times
           maxiterations = 100,                   # evaluate at 100 input positions
           sense = Min,                           # minimize the function
           verbosity = Progress)

result = boptimize!(opt)

It now results in the following error:

MethodError: no method matching fit!(::GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}}}, ::Array{Float64,2}, ::Array{Float64,1})
Closest candidates are:
  fit!(::GPE, !Matched::AbstractArray{T,1} where T, ::AbstractArray{T,1} where T) at /home/pawn0002/.julia/packages/GaussianProcesses/sdvEx/src/GPE.jl:138
  fit!(::GPE{X,Y,M,K,CS,D,P} where P<:PDMats.AbstractPDMat where D<:GaussianProcesses.KernelData where CS<:GaussianProcesses.CovarianceStrategy where K<:Kernel where M<:GaussianProcesses.Mean, !Matched::X, !Matched::Y) where {X, Y} at /home/pawn0002/.julia/packages/GaussianProcesses/sdvEx/src/GPE.jl:129
update!(::GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}}}, ::Array{Float64,2}, ::Array{Float64,1}) at gp.jl:16
macro expansion at TimerOutput.jl:216 [inlined]
initialise_model!(::BOpt{typeof(f),GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}}},UpperConfidenceBound{BrochuBetaScaling},NamedTuple{(:method, :restarts, :maxeval),Tuple{Symbol,Int64,Int64}},MAPGPOptimizer{NamedTuple{(:domean, :kern, :noise, :lik, :meanbounds, :kernbounds, :noisebounds, :likbounds, :method, :maxeval),Tuple{Bool,Bool,Bool,Bool,Nothing,Array{Array{Int64,1},1},Array{Int64,1},Nothing,Symbol,Int64}}},ScaledSobolIterator{Float64,2}}) at BayesianOptimization.jl:128
boptimize!(::BOpt{typeof(f),GPE{ElasticArrays.ElasticArray{Float64,2,1},ElasticArrays.ElasticArray{Float64,1,0},MeanConst,SEArd{Float64},GaussianProcesses.ElasticCovStrat,GaussianProcesses.StationaryARDData{ElasticPDMats.AllElasticArray{Float64,3}},ElasticPDMats.ElasticPDMat{Float64,Array{Float64,2}}},UpperConfidenceBound{BrochuBetaScaling},NamedTuple{(:method, :restarts, :maxeval),Tuple{Symbol,Int64,Int64}},MAPGPOptimizer{NamedTuple{(:domean, :kern, :noise, :lik, :meanbounds, :kernbounds, :noisebounds, :likbounds, :method, :maxeval),Tuple{Bool,Bool,Bool,Bool,Nothing,Array{Array{Int64,1},1},Array{Int64,1},Nothing,Symbol,Int64}}},ScaledSobolIterator{Float64,2}}) at BayesianOptimization.jl:138
top-level scope at none:0

Formatting Code

Hi,
are there any opinions on code formatting to make this project look more consistent and readable? I would suggest using the formatting feature of VS Code with the Julia extension. This seems quite unintrusive and convenient.
Thanks!

Allow GP to use given observation noise

Thanks for sharing this package! I'm porting over some R code that I use to optimize parameters of a noisy simulation. I use DiceOptim/DiceKriging, which allows a vector of observation variances to be passed (one noise value for each observation). These explicit noise estimates can be used rather than estimating a nugget effect with MLE.

Measuring the noise in this way is useful when performing multiple replications at a given set of input parameters. In addition to being a more reliable estimate of observation noise, it can also be used to reduce the size of the Gaussian process training set by averaging the replications into a single observation (n replications -> 1 mean and 1 variance).
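
A small sketch of the averaging step described above (plain Julia, not tied to either package's API):

using Statistics

# Collapse n replications at the same input into one observation:
# the sample mean plus an estimate of the noise of that mean.
function collapse_replications(ys::AbstractVector{<:Real})
    m = mean(ys)                 # single averaged observation
    v = var(ys) / length(ys)     # variance of the mean (explicit noise estimate)
    return m, v
end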

I think most of the changes would need to be made in GaussianProcesses.jl, but wanted to check first that this approach made sense to you.
