Comments (5)
Prior for scale parameters in hierarchical models
Gelman (2006) suggested a half-Cauchy prior with mode at 0 and the scale set to a large value (25 in the 8-schools example), or with the scale estimated from the data in a hierarchical-hierarchical setting where many variance parameters can be given a common prior.
The Gelman (2006) recommendations may be too weak for many purposes. If the number of groups is small, the data provide little information about the group-level variance, so it can make sense to use stronger prior information, in two ways. First, the Cauchy may be too broad: something like a half-t_4, or even a half-normal if you don't expect any really large values, may be better. Second, the scale parameter of this hyperprior should be set to a reasonable value rather than a large one. This suggests something like half-normal(0,1) or half-t(4,0,1) as default choices.
Historically, a prior on the scale parameter with a long right tail has been considered "conservative" in that it allows for large values of the scale parameter which in turn correspond to minimal pooling. But from a modern point of view, minimal pooling is not a default, and a statistical method that underpools can be thought of as overreacting to noise and thus "anti-conservative."
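To make the contrast between these hyperpriors concrete, the sketch below computes the tail mass P(sigma > 5) under the three candidates mentioned above: the broad half-Cauchy with scale 25 from the 8-schools example, and the proposed unit-scale half-normal and half-t_4 defaults. The cutoff of 5 is an arbitrary illustrative value; the closed forms use only the standard Cauchy, normal, and t_4 CDFs folded at zero.

```python
import math

def half_cauchy_sf(x, scale):
    """P(sigma > x) under a half-Cauchy(0, scale) prior."""
    return 1.0 - (2.0 / math.pi) * math.atan(x / scale)

def half_normal_sf(x, scale):
    """P(sigma > x) under a half-normal(0, scale) prior."""
    return math.erfc(x / (scale * math.sqrt(2.0)))

def half_t4_sf(x, scale):
    """P(sigma > x) under a half-t with 4 df, using the closed-form t_4 CDF."""
    z = x / scale
    u = z / math.sqrt(1.0 + z * z / 4.0)
    cdf = 0.5 + (3.0 / 8.0) * u * (1.0 - u * u / 12.0)
    return 2.0 * (1.0 - cdf)

# half-Cauchy(25) puts most of its mass above 5; half-normal(0,1)
# puts essentially none there, with half-t(4,0,1) in between.
for name, sf in [("half-Cauchy(25)", half_cauchy_sf(5.0, 25.0)),
                 ("half-normal(1) ", half_normal_sf(5.0, 1.0)),
                 ("half-t(4, 1)   ", half_t4_sf(5.0, 1.0))]:
    print(f"P(sigma > 5) under {name}: {sf:.2e}")
```

The half-Cauchy(25) still assigns most of its probability to sigma > 5, which is the "anti-conservative" underpooling region discussed above, while the unit-scale priors concentrate mass near plausible values.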
If doing modal estimation, see the section on boundary-avoiding priors above.
from baggr.
RE: N = 2: I don't know if this is a good option, but Andrew Gelman discusses multilevel models with 2 groups here and argues that it's fine, provided you use an informative prior on the between-group sd. Maybe a warning recommending an informative prior when there are only two groups would be appropriate?
I also think it's hard to determine what "informative" means before seeing the data, since whether a prior is informative probably depends on the context. Anyway, just thought I'd mention it.
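The point about N = 2 can be seen in a quick sketch (illustrative numbers, not baggr code or real data). With two study estimates y1, y2 that share a standard error s, the hierarchical model implies the difference d = y1 - y2 is Normal(0, 2*(tau^2 + s^2)), so the marginal likelihood for the between-group sd tau depends on the data only through d:

```python
import math

# Two hypothetical study estimates with a common standard error.
y1, y2, s = 0.2, 0.5, 0.3
d = y1 - y2

def loglik_tau(tau):
    """Log marginal likelihood of tau: d ~ Normal(0, 2*(tau^2 + s^2))."""
    v = 2.0 * (tau * tau + s * s)
    return -0.5 * math.log(2.0 * math.pi * v) - d * d / (2.0 * v)

# In the tail the likelihood decays only like 1/tau: doubling tau from
# 10 to 20 roughly halves the likelihood, so with a flat prior the
# posterior for tau has an extremely heavy tail and two groups alone
# cannot pin tau down -- an informative prior has to do the work.
print([round(loglik_tau(t), 3) for t in (0.0, 0.5, 1.0, 10.0, 20.0)])
```

A likelihood that flattens out like 1/tau is exactly why the comments above argue for something like half-normal(0,1) rather than a vague prior when the number of groups is this small.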
There is now a warning for small N, but the default prior is unchanged -- worth revisiting this and setting a different prior (see text above)
We now have an automated warning about default priors for N=2 and N=3, but we do not stop anyone.
Related Issues (20)
- use hypermean(), hypersd() to refer to treatment_effects()
- incorporate power calculations for meta-analysis into baggr HOT 1
- Including fixed effects slows the logit model down few-fold
- covariates with NAs will crash baggr with no error msg
- baggr() with N=1 and informative prior should run no problem HOT 1
- in baggr() arguments, Don't say `effect` say `label`
- Reversal of colours in the legend in `ggplot`
- Annotate LOO CV better
- create a mapping from metafor to baggr
- distribute new commits to master by using drat and/or prebuild src on GH
- Font sizes in forest plots and plot.baggr_compare (add_values)
- Understanding how to configure makevars
- Issue in plot_quantiles
- Making priors stored in baggr objects more understandable to the users HOT 1
- Full pooling and a single study
- Print out p.p.d. in print(baggr)
- add study_effects() alias for group_effects()
- add a funnel plot? rudimentary publication bias features?
- add a pkgdown site
- Change mu/tau syntax (tau is a bad name for hypermean)