
green-lab-sop's People

Contributors

acoppock, linstonwin


green-lab-sop's Issues

attrition -- too lenient?

We will routinely perform three types of checks for asymmetrical attrition: ... In checks #2 and #3, p-values below 0.05 will be considered evidence of asymmetrical attrition. If any of those checks raises a red flag, and if the PAP has not specified methods for addressing attrition bias, we will follow these procedures

This seems too lenient. The test for bias from attrition may not be powerful. You shouldn't give yourself the benefit of the doubt on something that may cause substantial bias. Why not make the Lee bounds or the Horowitz-Manski bounds the default, and only do the first proposed thing if you can somehow very convincingly demonstrate that "it is extremely unlikely that the attrition was asymmetric"?
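For reference, a generic check along these lines just regresses a missingness indicator on assignment, and its power against meaningful asymmetry can be limited in small samples or with low attrition. A minimal sketch, not the SOP's exact specification, assuming a data frame dat with a 0/1 treatment Z and an outcome Y that is NA for attriters (all names hypothetical):

```r
# Sketch of a differential-attrition check (illustrative, not the SOP's exact code).
# Assumes dat has a 0/1 treatment indicator Z and an outcome Y that is NA for attriters.
library(sandwich)
library(lmtest)

dat$missing <- as.numeric(is.na(dat$Y))
fit <- lm(missing ~ Z, data = dat)

# HC2 robust standard errors; under the quoted rule, p < 0.05 on Z would flag
# asymmetrical attrition
coeftest(fit, vcov = vcovHC(fit, type = "HC2"))
```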

Also

  1. Consult a disinterested “jury” of colleagues to decide whether the monotonicity assumption for trimming bounds (Lee 2009; Gerber and Green 2012, 227) is plausible.

Where/how do you find this jury in practice? And what do you propose doing if they say it is not plausible?
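For what a bounds-first default could look like, here is a rough sketch of trimming (Lee 2009) bounds for a two-arm trial, assuming monotonicity holds. dat, Z, and Y are hypothetical names, with Y set to NA for attriters; this is my own illustration, not code from the SOP.

```r
# Trimming (Lee 2009) bounds on the ATE among always-responders, two-arm trial.
# Assumes monotonicity; Z is a 0/1 treatment indicator, Y is NA for attriters.
lee_bounds <- function(Y, Z) {
  r1 <- mean(!is.na(Y[Z == 1]))   # response rate, treated
  r0 <- mean(!is.na(Y[Z == 0]))   # response rate, control
  if (r1 >= r0) {
    q  <- 1 - r0 / r1                       # trimming proportion
    y1 <- sort(na.omit(Y[Z == 1]))
    y0 <- na.omit(Y[Z == 0])
    k  <- floor(q * length(y1))             # number of treated responders to trim
    lower <- mean(head(y1, length(y1) - k)) - mean(y0)  # trim from the top
    upper <- mean(tail(y1, length(y1) - k)) - mean(y0)  # trim from the bottom
  } else {
    q  <- 1 - r1 / r0
    y0 <- sort(na.omit(Y[Z == 0]))
    y1 <- na.omit(Y[Z == 1])
    k  <- floor(q * length(y0))             # trim the control responders instead
    lower <- mean(y1) - mean(tail(y0, length(y0) - k))
    upper <- mean(y1) - mean(head(y0, length(y0) - k))
  }
  c(lower = lower, upper = upper)
}

lee_bounds(dat$Y, dat$Z)
```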

BMlmSE() with fixed effects?

Hi! I'm preparing my experiment and referring to this nice SOP.
Could we use the BMlmSE() function with fixed effects? It only accepts lm() objects, so with factor() dummies the design matrix X becomes very sparse and BMlmSE() stops working.
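For concreteness, the kind of specification in question is something like the sketch below (dat, Y, Z, and block are hypothetical names).

```r
# Hypothetical setup: treatment Z plus many block fixed effects
fit <- lm(Y ~ Z + factor(block), data = dat)

# BMlmSE() takes an lm() fit, but with many sparse factor() dummies some
# leverages can reach 1, leaving the HC2/BM standard error undefined
# (cf. the SOP's section on avoiding models that do not allow the BM adjustment)
# BMlmSE(fit)
```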

What to do with multiple outcomes?

There are a couple of things I'd like to have a default for:

  1. Some say you have to report all measured outcome variables. This is hard in some cases.
  2. Some say you should make a scale. What's the best all-purpose way to make a scale? We've been doing principal components, but there must be tradeoffs to doing that (see the sketch after this list).
  3. What about multiple comparisons?
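To make the options concrete, here is a minimal sketch (my own, not an SOP recommendation) of a standardized index alongside the first principal component, plus a Holm correction across outcome-by-outcome p-values. dat, a 0/1 treatment Z, and outcomes y1-y3 are hypothetical names.

```r
outcomes <- c("y1", "y2", "y3")   # hypothetical outcome names, sign-aligned beforehand

# Option A: standardized ("z-score") index -- average of standardized outcomes
dat$index <- rowMeans(scale(dat[, outcomes]), na.rm = TRUE)

# Option B: first principal component (assumes no missing outcome data)
dat$pc1 <- prcomp(dat[, outcomes], scale. = TRUE)$x[, 1]

# Multiple comparisons: Holm adjustment over the per-outcome p-values on Z
pvals <- sapply(outcomes, function(v)
  summary(lm(reformulate("Z", response = v), data = dat))$coefficients["Z", 4])
p.adjust(pvals, method = "holm")
```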

What to do with multi-arm trials?

There are a couple of analysis choices to worry about:

  1. Which comparisons to report (all pairwise? all versus some control condition?)
  2. Which multiple comparisons correction to employ? Holm? RI (randomization inference)? Some comparisons are in the same "family"; some are not.
  3. Do multi-arm trials change how we think about covariate adjustment?
  4. When/how are we allowed to collapse over conditions?
  5. When/how are we allowed to parameterize (e.g., in Guess and Coppock, we code conditions for their "information content" and estimate a linear model)? A sketch of points 1, 2, and 5 follows this list.
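As a strawman, here is a sketch (names hypothetical) of each arm compared against a common control with Holm-adjusted p-values, and a linear-in-"information" parameterization of the conditions.

```r
# dat$condition: factor with the control condition as the baseline level (hypothetical)
fit   <- lm(Y ~ condition, data = dat)
coefs <- summary(fit)$coefficients
arm_rows <- grep("^condition", rownames(coefs))   # one row per non-control arm
p.adjust(coefs[arm_rows, 4], method = "holm")     # arm-vs-control p-values, Holm-adjusted

# Parameterized alternative (point 5): score each condition, e.g., by information
# content, and estimate a linear model in that score (hypothetical coding)
dat$info <- c(control = 0, low = 1, high = 2)[as.character(dat$condition)]
summary(lm(Y ~ info, data = dat))
```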

Allow conditional permutation tests?

In the section on permutation tests, we've written, "We recommend attempting the permutation test with mock outcome data and actual covariate data before analyzing the actual outcome data. The mock permutation test may reveal that on some randomizations, the t-statistic cannot be computed because the regressors are collinear or because the HC2 or BM SE is undefined (see the section above on 'Avoiding regression models that do not allow the BM adjustment'). In such cases, covariates should be dropped from the model until the mock permutation test runs without errors."

I'm thinking of changing this so that, if the t-statistic is uncomputable on only a small percentage of randomizations (e.g., less than 5%), we do a conditional permutation test (i.e., randomizations where the t-statistic is undefined are excluded from both the numerator and the denominator of the p-value).

One situation where this might happen is if the PAP specifies poststratification and there are some randomizations where all units in some poststratum are assigned to one treatment condition.
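A minimal sketch of the conditional version (names hypothetical; a plain lm t-statistic is shown for brevity where the SOP would use the HC2/BM version, and the actual re-randomization should follow the design in the PAP rather than a simple permutation):

```r
# Conditional permutation test sketch: skip randomizations where the t-statistic
# cannot be computed and form the p-value over the remaining draws only.
t_stat <- function(Z_sim, dat) {
  dat$Z <- Z_sim
  out <- tryCatch(
    coef(summary(lm(Y ~ Z + X, data = dat)))["Z", "t value"],  # hypothetical model
    error = function(e) NA_real_)
  if (is.finite(out)) out else NA_real_
}

obs_t <- t_stat(dat$Z, dat)
sims  <- replicate(10000, t_stat(sample(dat$Z), dat))  # placeholder for the actual randomization scheme
ok    <- !is.na(sims)

if (mean(!ok) < 0.05) {   # only a small share of draws are uncomputable
  p_value <- mean(abs(sims[ok]) >= abs(obs_t))
}
```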
