
fix 8 schools example (stan-dev/example-models#71) · OPEN · 10 comments

stan-dev commented on July 24, 2024
fix 8 schools example


Comments (10)

jgabry commented on July 24, 2024

I think it's useful to have the centered parameterization too, as a comparison. So maybe we could have the one called "eight_schools" be the better parameterization (in this case non-centered), but also have a version ("eight_schools_bad", "eight_schools_naive", or something like that?) to illustrate what goes wrong if you don't use that parameterization.

On Thu, Jul 14, 2016 at 10:59 AM, Bob Carpenter wrote:

@wds15 mentioned in an email on stan-dev that
https://github.com/stan-dev/example-models/blob/master/misc/eight_schools/eight_schools.stan
uses centered parameterization and gets lots of divergences.

We should fix the parameterization so that it works!


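For reference, the centered parameterization in question has roughly this form (an editorial sketch written for comparison; not a copy of the repo's eight_schools.stan):

```stan
// Centered parameterization (sketch): the hierarchical prior couples each
// theta[j] directly to tau, which is what produces the divergences under
// the eight schools data.
data {
  int<lower=0> J;               // number of schools
  vector[J] y;                  // estimated treatment effects
  vector<lower=0>[J] sigma;     // standard errors of the estimates
}
parameters {
  real mu;                      // population mean effect
  real<lower=0> tau;            // population scale
  vector[J] theta;              // per-school effects on the natural scale
}
model {
  theta ~ normal(mu, tau);      // prior ties theta to tau (the funnel)
  y ~ normal(theta, sigma);     // measurement model
}
```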

wds15 commented on July 24, 2024

As long as the centered one is clearly marked as the bad one, and maybe we even provide links to material that explains centered vs. non-centered parameterization, then that's a good thing, yes.

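A non-centered counterpart, along the lines of the reparameterization described in the Stan manual, would look roughly like this (again a sketch; same data block as above):

```stan
// Non-centered parameterization (sketch): sample standardized effects eta
// and recover theta deterministically, which decouples the sampled
// parameters from tau and removes the funnel.
data {
  int<lower=0> J;
  vector[J] y;
  vector<lower=0>[J] sigma;
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] eta;                       // standardized per-school effects
}
transformed parameters {
  vector[J] theta = mu + tau * eta;    // implies theta ~ normal(mu, tau)
}
model {
  eta ~ normal(0, 1);
  y ~ normal(theta, sigma);
}
```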

betanalpha commented on July 24, 2024

We desperately need to avoid names like “good” and “bad”, or “right” and “wrong”, as the correct parameterization depends on the model and the data. We have to convince users that MCMC can be fragile and that they have to be careful; I know many don’t want to hear it, but it’s super important.


jgabry commented on July 24, 2024

Good point, I was thinking of model + data but yeah the model code on its
own isn't good or bad without knowledge of the data.


betanalpha commented on July 24, 2024

In general these things are much more complex than many beginning users want them to be. There’s whether or not expectations are accurately estimated, there’s whether or not the model is a good fit to the data, and there’s the interaction of the two, loosely codified in the Folk Theorem. I know the reality can scare people away to less robust tools, but we can’t sugarcoat things forever.


sakrejda commented on July 24, 2024

We could do a case study that exercises this idea: enough/not enough data for the centered parameterization, and then enough/too much for the non-centered one. I have some examples from ODSC that come close to this, but I never got to the point where I could set a seed and generate a working/failing data set for each parameterization, and my stuff isn't for 8-schools.


betanalpha commented on July 24, 2024

You can always use the n-schools-ish model I used
for the HMC for hierarchical models paper.


jgabry commented on July 24, 2024

Maybe three different eight schools models: the first with the actual data (NCP better), the second with y scaled up by 10 but sigma the same (CP better), and the third with both y and sigma scaled up (NCP better again)?

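One way to set up those three scenarios without maintaining three data files would be to rescale inside the program. A hypothetical sketch (the y_scale and sigma_scale inputs are invented here for illustration and are not part of the repo):

```stan
// Hypothetical scenario switch (sketch): feed the original eight schools
// data plus two scale factors, and let transformed data build the variant,
// keeping the non-centered model from the earlier sketch.
//   scenario 1: y_scale = 1,  sigma_scale = 1   (actual data, NCP better)
//   scenario 2: y_scale = 10, sigma_scale = 1   (CP better)
//   scenario 3: y_scale = 10, sigma_scale = 10  (NCP better again)
data {
  int<lower=0> J;
  vector[J] y_raw;
  vector<lower=0>[J] sigma_raw;
  real<lower=0> y_scale;       // invented input, for illustration only
  real<lower=0> sigma_scale;   // invented input, for illustration only
}
transformed data {
  vector[J] y = y_scale * y_raw;
  vector[J] sigma = sigma_scale * sigma_raw;
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] eta;
}
transformed parameters {
  vector[J] theta = mu + tau * eta;
}
model {
  eta ~ normal(0, 1);
  y ~ normal(theta, sigma);
}
```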

betanalpha commented on July 24, 2024

It’s the relative ratio that matters, so all you need to do is scale the measured standard deviations relative to the measured means. That’s exactly what I do in the test model in the paper (the exact Stan program is in the appendix).

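To spell out why only the relative ratio matters, here is a short editorial note (not from the thread or the paper), based on the standard normal-normal conditional for the centered model:

```latex
% Centered model:  y_j | theta_j ~ N(theta_j, sigma_j^2),
%                  theta_j | mu, tau ~ N(mu, tau^2).
% Conditioning a single school effect on mu, tau, and its own datum gives
\[
  \theta_j \mid \mu, \tau, y_j \sim \mathcal{N}(m_j, v_j),
  \qquad
  v_j = \left( \frac{1}{\tau^2} + \frac{1}{\sigma_j^2} \right)^{-1},
  \qquad
  m_j = v_j \left( \frac{\mu}{\tau^2} + \frac{y_j}{\sigma_j^2} \right).
\]
% When sigma_j << tau, v_j is roughly sigma_j^2: the likelihood pins theta_j
% down and the centered form samples well.  When sigma_j >> tau, v_j is
% roughly tau^2: theta_j is tied to tau and the joint posterior develops the
% funnel in (theta_j, log tau) that the centered form struggles with.
% Scaling y_j and sigma_j by the same constant rescales mu, tau, and theta_j
% together, leaving sigma_j / tau (and hence the geometry) unchanged; scaling
% y_j alone spreads the estimates relative to fixed sigma_j, inflating the
% posterior tau and moving toward the likelihood-dominated regime where the
% centered parameterization works.
```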

bgoodri commented on July 24, 2024

see stan-dev/rstan#387

