neurostuff / masking-bias-in-ibma

An analysis to evaluate bias of IBMA estimators under different masking methods in NiMARE.

License: Apache License 2.0


masking-bias-in-ibma's People

Contributors

tsalo

Watchers

Satrajit Ghosh, Tal Yarkoni, James Cloos, Angie Laird, James Kent

masking-bias-in-ibma's Issues

Determine appropriate datasets/contrasts for analysis

I assume we will want to analyze the same contrast across as many Neuroscout datasets as possible, so the key question is which features we want to use. Here are a few that look like reasonable targets:

  • face_detectionConfidence (GoogleVisionAPIFaceExtractor)
  • shot_change (GoogleVideoAPIShotDetectionExtractor)
  • people (ClarifaiAPIImageExtractor)
  • face (ClarifaiAPIImageExtractor)
  • as-Speech (AudiosetLabelExtractor)
  • button_press (events)

Evaluate bias of IBMA estimators under different masking methods

Summary

In neurostuff/NiMARE#466, @nicholst and @tyarkoni note that maskers that aggregate values across voxels before fitting the meta-analytic model will likely produce biased results, depending on the meta-analytic model. We should systematically evaluate the different estimators across a range of datasets.
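A minimal numpy sketch (a toy simulation, not NiMARE code) of why aggregation-first can bias a combination test: under the null, the ROI mean of k independent z-scores has variance 1/k rather than 1, so feeding those means to a Stouffer combination miscalibrates the combined statistic.

```python
# Toy null simulation: averaging z-values across voxels before a Stouffer
# combination deflates the variance of the combined statistic, because the
# ROI mean of k unit-variance z-scores has variance 1/k, not 1.
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_voxels, n_sims = 20, 50, 5000

combined = np.empty(n_sims)
for i in range(n_sims):
    z = rng.standard_normal((n_studies, n_voxels))      # null z-maps
    roi_means = z.mean(axis=1)                          # aggregate first
    combined[i] = roi_means.sum() / np.sqrt(n_studies)  # Stouffer on the means

# A correctly calibrated Stouffer statistic has variance ~1 under the null;
# aggregating first shrinks it toward 1 / n_voxels.
print(combined.var())
```

With spatially correlated voxels the shrinkage is milder but still present, which is why the direction and size of the bias depend on the estimator and the data.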

Additional details

@tyarkoni has performed some simulations and did not find large bias across approaches for the non-combination, non-likelihood estimators (e.g., Hedges, WeightedLeastSquares, DerSimonianLaird, and probably PermutedOLS). The combination-test estimators (Fishers and Stouffers) are probably heavily biased. The likelihood-based estimators (SampleSizeBasedLikelihood and VarianceBasedLikelihood) may or may not be biased.

@nicholst proposed the following options:

  1. OLS - We ignore ROI variances and the weighting is tau^2+const (no weighting), worst case is inefficiency and (as per Mumford & Nichols) no FPR risk for one-sample or balanced two-sample comparisons. (However, the M&N result was calibrated against heterogeneity seen in task fMRI, and not N=10 <-> N=1200 differences)
  2. GLS - We take average ROI variances as "correct", but they're actually too small, so weighting is tau^2 + TooSmallVar_i... I think this is OK, as the estimated tau^2 will make up for the vars being too small overall, so inferences are probably fine, just not as efficient as they could be. Another plus is that this approach will capture gross differences in sample size, something important if N's have a big range.
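The two options above can be sketched with a toy one-sample random-effects meta-analysis in numpy (an illustrative sketch, not the repo's code): OLS ignores the per-study variances entirely, while GLS weights each study by 1/(v_i + tau^2), with tau^2 here taken from the standard DerSimonian-Laird estimator.

```python
# Toy contrast of OLS vs. GLS weighting in a one-sample meta-analysis:
# y_i = mu + u_i + e_i, with u_i ~ N(0, tau^2) and e_i ~ N(0, v_i).
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird estimate of the between-study variance tau^2."""
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)                   # Cochran's Q
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / denom)

def meta_estimates(y, v):
    tau2 = dersimonian_laird_tau2(y, v)
    w = 1.0 / (v + tau2)                  # GLS: inverse total variance
    gls = np.sum(w * y) / np.sum(w)
    ols = y.mean()                        # OLS: equal weights, v_i ignored
    return ols, gls, tau2

rng = np.random.default_rng(1)
n = 30
v = rng.uniform(0.05, 1.0, n)             # heterogeneous sampling variances
y = 0.5 + rng.normal(0, np.sqrt(0.1), n) + rng.normal(0, np.sqrt(v))
ols, gls, tau2 = meta_estimates(y, v)
print(ols, gls, tau2)
```

Both estimators are unbiased for mu here; the difference is efficiency, which is the trade-off @nicholst describes. Option 2's concern (ROI variances that are "too small") corresponds to shrinking v before it enters the GLS weights, with tau^2 absorbing the slack.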

Analysis plan (tentative)

  1. Collect subject-level data (z, p, beta, and varcope maps) from a range of datasets.
    • We can collect these data from Neuroscout.
  2. Generate a range of dataset-level results with resampling.
    • Generate subset results with varying sample sizes.
    • Vary smoothness as well.
  3. Run voxel-wise image-based meta-analyses, then average results across ROIs.
  4. Run ROI-wise image-based meta-analyses.
  5. Compare the results of the two approaches, treating the voxel-wise (analysis-first) results as the ground truth.
    • As the most basic test, we can perform pair-wise comparisons between the analysis-first and the aggregation-first results from the same estimators and datasets.
    • We can also dig into dataset parameters/characteristics, which might clarify what sources of bias there are.
    • Parameters to investigate:
      • Sample size characteristics (e.g., mean sample size, or perhaps through holding dataset sample sizes constant in some analyses?)
      • Smoothness
      • Original contrast variance levels?
