Comments (5)
Also, here's an alternative impl that might be useful:
using Turing, AdvancedVI, Distributions, StatsFuns, DiffResults
using Turing: Variational
using StatsBase
using OnlineStats
using LinearAlgebra
using Flux
function vi_custom(model, q_init=nothing; n_mc, n_iter, tol, optimizer, n_iter_min=10)
    logπ = Variational.make_logjoint(model)
    variational_objective = Variational.ELBO()
    alg = ADVI(n_mc, n_iter)

    # If `q_init` is nothing, then we create an initial approx. posterior
    # using `Variational.meanfield(model)`.
    q = isnothing(q_init) ? Variational.meanfield(model) : q_init
    μ, σs = StatsBase.params(q)
    σ = StatsFuns.invsoftplus.(σs)  # map the scales to the unconstrained space
    θ = vcat(μ, σ)

    converged = false
    step = 1

    diff_result = DiffResults.GradientResult(θ)

    # From OnlineStats.jl: mean-estimator with exponential decay. That is, in this case
    #   μₜ = (1 - γ) ⋅ μₜ₋₁ + γ ⋅ xₜ
    # where `μₜ` is the mean of the criterion quantity and `xₜ` is the criterion quantity
    # we observed at time step `t`. See below for why this might be a better idea than
    # only checking the last value.
    mean_delta = Mean(weight = ExponentialWeight(0.1))
    history = []

    while (step <= n_iter) && !converged
        # 1. Compute gradient and objective value; results are stored in `diff_result`.
        # Since you've already defined `logπ`, we can just use that instead of `model`,
        # as it ought to be slightly more efficient.
        AdvancedVI.grad!(variational_objective, alg, q, logπ, θ, diff_result, alg.samples_per_step)

        # 2. Extract gradient from `diff_result`
        ∇ = DiffResults.gradient(diff_result)

        # 3. Apply optimizer, e.g. multiplying by step-size
        Δ = AdvancedVI.apply!(optimizer, θ, ∇)

        # 4. Update parameters
        θ_prev = copy(θ)
        @. θ = θ - Δ

        # Check convergence.
        # NOTE: The original criterion is unlikely to be a good termination criterion for VI
        # due to the high variance of the gradient estimator. That is, you're likely to
        # terminate too early because you happened to get a random sample of the gradient
        # which had a small magnitude.
        val = LinearAlgebra.norm(θ - θ_prev, 2)
        # Alternatively you could look at the gradient norms:
        # val = LinearAlgebra.norm(∇, 2)

        fit!(mean_delta, val)  # update the `mean_delta` estimator with the most recent `val`
        push!(history, val)
        converged = (step > n_iter_min) && (value(mean_delta) < tol)

        step += 1
    end

    return θ, step, q, history
end
@model norm(z) = begin
    s ~ InverseGamma(1, 1)
    μ ~ Normal(0, sqrt(s))

    # likelihood
    z .~ Normal(μ, sqrt(s))
end
z = rand(Normal(1., 2.), (200, 1));
θ, step, q, history = vi_custom(norm(z); n_mc=25, n_iter=20000, tol=0.005, optimizer = Flux.ADAM())
q_final = AdvancedVI.update(q, θ)
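In case it's useful, a quick sketch (not part of the snippet above) of inspecting the fitted approximation; it only uses `StatsBase.params` and `rand`, which the mean-field `q` supports as used earlier:

μ_fit, σ_fit = StatsBase.params(q_final)  # fitted means and standard deviations
samples = rand(q_final, 1_000)            # draws in the original (constrained) space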
To demonstrate what I mean by the above note on the convergence criterion, here's what the estimate looks like for a particular run of your example using different weights (a larger weight puts more emphasis on the most recent sample):
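For reference, a minimal sketch of how such a comparison can be produced from the returned `history` (the particular weights below are just for illustration):

using OnlineStats

# Exponentially weighted mean of the per-step criterion values;
# γ = 1.0 reduces to using only the most recent value.
function smooth(history, γ)
    m = Mean(weight = ExponentialWeight(γ))
    [value(fit!(m, v)) for v in history]
end

smoothed = Dict(γ => smooth(history, γ) for γ in (0.01, 0.1, 0.5, 1.0))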
@torfjelde Thanks for the alternative implementation. I hadn't thought much about which criterion to use yet; I just wanted to get something working :-) This is helpful.
If you change
AdvancedVI.grad!(variational_objective, alg, q, model, θ, diff_result)
to
AdvancedVI.grad!(variational_objective, alg, q, model, θ, diff_result, alg.samples_per_step) # or use your `n_mc` variable
it should work :)
I realize this is a bit confusing since the information is already in `alg`, but it's so that you'll be able to evaluate the ELBO without having to fix the number of MC samples used, e.g. one can imagine methods where the number of MC samples used varies throughout the optimization process.
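For instance (purely hypothetical; `n_mc_at` is not part of the API, just a made-up schedule), one could pass a per-step sample count:

n_mc_at(step) = 10 + step ÷ 1000  # illustrative: more MC samples later in the run
AdvancedVI.grad!(variational_objective, alg, q, model, θ, diff_result, n_mc_at(step))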
But this issue made me realize we ought to have a default impl of `ELBO` on `ADVI` where we just use `alg.samples_per_step`, so thank you for bringing this up :)
Thanks for the quick reply! Dang, I thought I had tried that, but anyway -- it now works.
In case it's helpful to future readers, here is the complete working example:
using Flux
using Turing, AdvancedVI, Distributions, DynamicPPL, StatsFuns, DiffResults
using Turing: Variational
using StatsBase
function vi_custom(model, q_init=nothing; n_mc, n_iter, tol, optimizer)
    varinfo = DynamicPPL.VarInfo(model)
    num_params = sum([size(varinfo.metadata[sym].vals, 1) for sym ∈ keys(varinfo.metadata)])

    logπ = Variational.make_logjoint(model)
    variational_objective = Variational.ELBO()
    alg = ADVI(n_mc, n_iter)

    # Set up q; the second half of θ holds the scales in unconstrained
    # (pre-softplus) form.
    if isnothing(q_init)
        μ = randn(num_params)
        σ = StatsFuns.softplus.(randn(num_params))
    else
        μ, σs = StatsBase.params(q_init)
        σ = StatsFuns.invsoftplus.(σs)
    end
    θ = vcat(μ, σ)
    q = Variational.meanfield(model)

    converged = false
    step = 1

    diff_result = DiffResults.GradientResult(θ)

    while (step <= n_iter) && !converged
        # 1. Compute gradient and objective value; results are stored in `diff_result`
        AdvancedVI.grad!(variational_objective, alg, q, model, θ, diff_result, n_mc)

        # 2. Extract gradient from `diff_result`
        ∇ = DiffResults.gradient(diff_result)

        # 3. Apply optimizer, e.g. multiplying by step-size
        Δ = AdvancedVI.apply!(optimizer, θ, ∇)

        # 4. Update parameters
        θ_prev = copy(θ)
        @. θ = θ - Δ

        # Check convergence (see the caveat about this criterion in the first comment)
        converged = sqrt(sum((θ - θ_prev).^2)) < tol

        step += 1
    end

    return θ, step, q
end
@model norm(z) = begin
    s ~ InverseGamma(1, 1)
    μ ~ Normal(0, sqrt(s))

    # likelihood
    z .~ Normal(μ, sqrt(s))
end
z = rand(Normal(1., 2.), (200, 1));
θ, step, q = vi_custom(norm(z); n_mc=25, n_iter=20000, tol=0.0001, optimizer = Flux.ADAM())
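To get a distribution back from the returned θ, you can rebuild it as in the earlier comment and sanity-check it against how `z` was generated (a sketch; the expected values are approximate and the row order follows the model's declaration order):

q_final = AdvancedVI.update(q, θ)
posterior_draws = rand(q_final, 1_000)  # one column per draw, rows (s, μ)
mean(posterior_draws; dims=2)           # should land near s ≈ 4, μ ≈ 1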
> Thanks for the quick reply! Dang, I thought I had tried that, but anyway -- it now works.

Great! :)