
Comments (8)

koadman avatar koadman commented on July 28, 2024

well spotted. i think we care only about creating a sampling distribution from the weights and not about the actual likelihoods.
if that's true it seems like we should be able to rescale and still get the desired result. pseudocode follows:

m = max(log_weights)
log_weights -= m

some very low likelihoods may still underflow with the exp() but those losers weren't going to make it to the next generation anyway. hehehe.
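A minimal sketch of that rescaling, assuming a plain vector of log-weights (the names are illustrative, not sts code):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Subtract the largest log-weight before exponentiating: the largest
    // weight becomes exactly 1, nothing can overflow, and only the very
    // smallest weights underflow to 0. Normalizing afterwards gives the
    // same sampling distribution as the original weights.
    std::vector<double> rescaled_weights(const std::vector<double>& log_weights)
    {
        const double m = *std::max_element(log_weights.begin(), log_weights.end());
        std::vector<double> w(log_weights.size());
        for (std::size_t i = 0; i < log_weights.size(); ++i)
            w[i] = std::exp(log_weights[i] - m);
        return w;
    }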


matsen avatar matsen commented on July 28, 2024

Hrm, that ain't going to work for stratified resampling; see #27. We might be able to make it work in log-space, but it will require some thought. Even multinomial calls

double GetWeight(void) const {return exp(logweight);}

... but there it seems straightforward to first subtract off the smallest log-like, then exponentiate, then take the multinomial. Seem good?
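One possible shape for that, sketched with the standard library rather than the smctc API (names are hypothetical; the offset here follows the suggestion above, and the later comments revisit which end of the range to anchor on):

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    // Shift all log-weights by a common offset, exponentiate, then draw
    // particle indices multinomially. Subtracting a shared constant leaves
    // the normalized distribution unchanged. Note: anchoring on the smallest
    // log-weight can overflow if the spread exceeds roughly 709 log units,
    // which is what the following comments discuss.
    std::vector<int> multinomial_resample(const std::vector<double>& log_weights,
                                          int n_draws, std::mt19937& rng)
    {
        const double offset =
            *std::min_element(log_weights.begin(), log_weights.end());
        std::vector<double> w(log_weights.size());
        for (std::size_t i = 0; i < w.size(); ++i)
            w[i] = std::exp(log_weights[i] - offset);
        std::discrete_distribution<int> pick(w.begin(), w.end());
        std::vector<int> idx(n_draws);
        for (int k = 0; k < n_draws; ++k)
            idx[k] = pick(rng);
        return idx;
    }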


koadman avatar koadman commented on July 28, 2024

ja, i looked over that smctc patch, seems ok. looks like you were able to get rid of a few extra calls to exp() as well. nice!


matsen avatar matsen commented on July 28, 2024

So you think that using the first likelihood as a normalizing factor is an OK choice? Seemed better to me than the most likely, and faster.



koadman avatar koadman commented on July 28, 2024

well, i guess it depends on how much range of variation there is in the likelihoods. if they vary over more than, e.g., 300 log units then the high likelihoods would overflow. Seems like it's much more important to keep the high likelihoods intact than the little'uns. Why did you not want to normalize with the most likely?


matsen avatar matsen commented on July 28, 2024

Good point.

I guess I was thinking that on average we would have twice as much range if we took something that was in the middle rather than something at the top.

I just had a little play around and noted that exp(-big) = 0 and exp(+big) = inf.

So we would certainly know either way.

However, I agree and will fix and then merge.
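For the record, the saturation mentioned above is easy to see directly (a throwaway check, not project code):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // With IEEE doubles, exp() underflows to 0 for large negative
        // arguments and overflows to inf a little above 709.
        std::printf("%g\n", std::exp(-800.0)); // prints 0
        std::printf("%g\n", std::exp(800.0));  // prints inf
        return 0;
    }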



koadman avatar koadman commented on July 28, 2024

if we want the extra range maybe we could find the max particle weight and subtract off 300 or so log units from it (assuming double precision floats) to get the normalization weight.
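A sketch of that suggestion (the function name is hypothetical): pick the normalization point about 300 log units below the current maximum, which keeps the top of the distribution well clear of overflow while leaving extra headroom below it.

    #include <algorithm>
    #include <vector>

    // With IEEE doubles exp() overflows near +709 and underflows near -745.
    // Normalizing by (max - 300) makes the largest weight exp(300), far
    // below overflow, while weights up to ~1045 log units below the
    // maximum still avoid underflow.
    double normalization_offset(const std::vector<double>& log_weights)
    {
        const double m =
            *std::max_element(log_weights.begin(), log_weights.end());
        return m - 300.0;
    }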



cmccoy avatar cmccoy commented on July 28, 2024

Closed - just normalizing by the largest log-likelihood for now. Anyone feel strongly that we need more range than that gives?

