
project-theta's People

Contributors

benjaminhsieh, boyinggong, briankleinqiu, changsiyao, jarrodmillman, pigriver123, rossbar


project-theta's Issues

Regression Tasks

Let's use this to make clear what's going on with the regression files and all the related moving parts

Dev branch and Testing/Documentation

So let's try to make new developments in the dev branch, so that master is a "clean/stable" version with code-reviewed work, test functions, and documentation for everything. In other words, make pull requests to the dev branch and then, after everything looks good, we can pull request dev into master.

feedback

Had a nice introduction to the concept of loss aversion and what their paper
was studying (it was one of the clearest introductions to the loss-aversion
study).

This was the only group that really introduced the models they were intending to use.
They've clearly done a good job reading and understanding their paper.

Preliminary results/preprocessing results were noticeably absent; a lot of
focus on information from the paper. Seemed a bit behind in working with the
fMRI data.

Overall, you seemed to have a very nice conceptual understanding of your study,
but you really need to get working with the data. Your presented goal was
reasonable and feasible (you propose to repeat the analysis in the paper to
validate the models and regression techniques used), but it seemed like very
little concrete work had materialized toward that end.

You need to make sure you are writing valid code. For example, you can't use
np.something if the name np doesn't exist (i.e., numpy hasn't been imported as np).
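A minimal illustration of the point: np only exists after numpy has been imported under that alias.

```python
# Import numpy under the np alias before using np.anything.
import numpy as np

arr = np.array([1, 2, 3])
print(np.mean(arr))
```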

Paper Fixes Before Friday

  1. Need to change the title
  2. Problem with reference not showing up
  3. Need to add DATA section
  4. Need to add RESULTS section

Question: should we break off the last section of "PLAN" into "DISCUSSION"?

Final report draft feedback

  • Your methods description is very clear, but you need to show your results.
    For instance, at the end of section 3.1.1 you say "we achieved almost the
    same results as the paper". This doesn't carry any actual meaning: show the
    results of your logistic regression, show the results from the paper, and
    compare them.
  • The same goes for the rest of section 3.1 - these descriptions are very clear
    and concise, but where are the results?
  • you seem to have a very concrete plan, but there are no results to provide
    feedback on.
  • For comparison across subjects, you should look into the data Matthew posted
    that aligns data across the subjects. There's also the lecture on
    transforming the coordinate system of the brain images. These two things
    should be sufficient to overcome the issues you highlight in section 3.3

office hour questions

  1. What is the dependent variable of the neural regression model?
  2. What is the PTval in the behadata?
  3. Can we run R from Python? The supplemental material mentions that they use R to fit the logistic regression (one possible approach is sketched below).
  4. Why should we run a mixed effects model? Can we just run a multiple regression model and combine the 3 runs for a subject together? If we fit the mixed effects model for the neural data, do we also have to do it for the behavioral data?
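One possible answer to question 3, sketched under the assumption that R and Rscript are installed; logistic.R and behavdata.txt are hypothetical names, not files in the repo.

```python
# Shell out to Rscript from Python; 'logistic.R' would contain the R call
# to glm(..., family = binomial) that fits the logistic regression.
import subprocess

output = subprocess.check_output(['Rscript', 'logistic.R', 'behavdata.txt'])
print(output)
```

Alternatively, the logistic regression could stay in Python (e.g. statsmodels or scikit-learn), which would avoid the R dependency.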

Summary Statistics

Consider writing the following code for: (1) across subjects, (2) each subject, (3) within each subject, i.e. each run within a subject

Also consider making the functions work by axis

  • Mean/median, mode(?)
  • Range, std/variance
  • Covariances/correlations

Any other summary statistics that might seem useful

We don't necessarily need to output this data, but it might be useful to have the code on hand. I think making the functions as general as possible is a good idea.
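A minimal sketch of what axis-aware summary functions could look like; the function and variable names here are hypothetical, not the project's actual utils.

```python
import numpy as np

def summary_stats(data, axis=None):
    """Return basic summary statistics of `data` along `axis`."""
    return {
        'mean': np.mean(data, axis=axis),
        'median': np.median(data, axis=axis),
        'std': np.std(data, axis=axis),
        'range': np.ptp(data, axis=axis),
    }

# Example: per-run statistics for a (runs, timepoints) array.
behav = np.random.rand(3, 86)   # stand-in for one subject's 3 runs
print(summary_stats(behav, axis=1)['std'])
```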

Final checklist

Read/test through the files, check when completed

Readme:

  • Code
    • Code/scripts
    • Code/functions
    • Code/graphing
  • Paper (all)
  • Paper/section need readme(?)
  • Data (all)
  • project-theta (main) See #150

Running make

  • Code
    • Code/scripts
  • Paper (all)
  • Data (all)
  • project-theta (main)
  • slides (important too)

Paper draft read-through: how are we deciding to build this? If we do it by sections, we need a readme for the sections directory as well.

  • read paper for spelling and grammar
  • merge into master

Address issues:

Stuff from Slides Presentation

  • Investigate mask thresholds, possibly customize for each one
  • Mention that there might be movement in the scanner
  • Change the box plots to dots
  • Clarify that the mixed effects model is really 16 models, one for the three runs of each subject
  • Variance map on the brain: show the RMS, for example, on a heat map (a sketch is below)
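A minimal sketch of the RMS-on-a-heat-map idea, assuming a 4D BOLD array of shape (x, y, z, time); the random array is a stand-in for real data.

```python
import numpy as np
import matplotlib.pyplot as plt

def rms_map(data):
    """Root-mean-square of each voxel's time course (last axis)."""
    return np.sqrt(np.mean(data ** 2, axis=-1))

data = np.random.rand(64, 64, 34, 240)   # stand-in for one run's BOLD data
rms = rms_map(data)

# Heat map of the middle axial slice.
plt.imshow(rms[:, :, rms.shape[2] // 2], cmap='hot')
plt.colorbar(label='RMS')
plt.title('RMS map, middle axial slice')
plt.show()
```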

Preparing the data

Write functions that:

  • subsets the data by whether or not it was activated
  • groups by what the response was (accept or reject)
  • aligns behavioral data with the BOLD response
  • possibly a short function that converts (and possibly convolves) data from .nii.gz into numpy arrays (see the sketch below)
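A minimal sketch of the loading and alignment pieces, assuming nibabel is available; the file name, onsets, and TR below are hypothetical placeholders.

```python
import numpy as np
import nibabel as nib

def load_bold(path):
    """Load a .nii.gz file and return its data as a numpy array."""
    return nib.load(path).get_data()

def align_onsets_to_volumes(onsets, tr, n_vols):
    """Map behavioral event onsets (seconds) to BOLD volume indices."""
    idx = np.round(np.asarray(onsets) / tr).astype(int)
    return idx[idx < n_vols]

# bold = load_bold('sub001_run001_bold.nii.gz')   # hypothetical filename
bold = np.random.rand(64, 64, 34, 240)            # stand-in for a real run
vol_idx = align_onsets_to_volumes([4.0, 12.0, 20.0], tr=2.0, n_vols=bold.shape[-1])
```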

Organization, Makefile, Readme Finalize

To DO:

  1. Organize functions/graphing in utils; we should place graphing functions separately since we don't have tests for them and the coverage report is not good as of now
  2. Update our scripts so figures/data txt files save to (preferably) a single location, maybe the paper directory.
  3. Delete old paper directory
  4. Update the Makefiles for directories: project-theta, code, data, paper
    • Be sure the make coverage produces the correct reports
    • Be sure all make commands produce the correct results, e.g. all tests run smoothly
  5. Update/add the readme on all the following directories: project-theta, code, data, paper
    • Document all make commands
    • Include instructions to show the user how to navigate each respective directory, e.g. how to download the data, make figures for the paper, the steps to building our paper, etc.
    • Other documentation (in main directory): our names, project overview, acknowledgements

Basic Plots

Should have code/functions that generate plots from data. Consider what data types should be allowed - single runs, single subjects, or across all subjects?

  • boxplots of the std
  • plots of basic summary statistics like correlation/covariances?
  • std over time like we did in the homework

Any other plots that sound useful
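A minimal sketch of the two std plots, using a random array as a stand-in for real BOLD data.

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(64, 64, 34, 240)      # stand-in for one run (x, y, z, time)
vols = data.reshape(-1, data.shape[-1])     # voxels x time
std_per_vol = np.std(vols, axis=0)          # std of each volume over voxels

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].boxplot(std_per_vol)                # boxplot of the volume stds
axes[0].set_title('Boxplot of volume std')
axes[1].plot(std_per_vol)                   # std over time, as in the homework
axes[1].set_title('Volume std over time')
axes[1].set_xlabel('Volume (time)')
plt.show()
```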

Data/makefile

We need to:

(1) pull in the data files when we run the makefile. We can do this either through bash commands or by calling a python file; I think we should consider making a python file

(2) generate a hash for each file

(3) make sure we validate it in the data.py test

Comment any thoughts
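One way items (2) and (3) could look, sketched with hypothetical file paths and hash values (the real hashes would be recorded once the data is downloaded).

```python
import hashlib

def file_hash(path):
    """Return the SHA-1 hash of a file's contents."""
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

def check_hashes(expected):
    """Compare computed hashes against a {path: hash} dict; raise on mismatch."""
    for path, want in expected.items():
        got = file_hash(path)
        if got != want:
            raise ValueError('Hash mismatch for %s: %s != %s' % (path, got, want))

# Example usage in the data.py test (path and hash are placeholders):
# check_hashes({'data/sub001_run001_bold.nii.gz': 'expected_sha1_here'})
```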

Make Paper failing

Cannot find the following file in 02_preprocessing.md:
../code/utils/smoothed_images.png

Please save images in paper/figures folder and update the slides

Idea: Brain anatomy plotting / T-value plotting

For t-value plotting, can we try to reproduce something like this?
[figure: hypothesis_testing]

In that figure, the mean t-value (across all 16 subjects) is plotted at various cross sections of the brain. We might want to ask tomorrow how we can utilize the anatomy data; I feel like that might be important for plotting.
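A minimal sketch of the cross-section idea, with a random array standing in for the mean t-value map; the slice indices are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

t_map = np.random.randn(64, 64, 34)       # stand-in: mean t-values across the 16 subjects

slices = [10, 15, 20, 25]                 # axial slices to display
fig, axes = plt.subplots(1, len(slices), figsize=(12, 3))
for ax, z in zip(axes, slices):
    im = ax.imshow(t_map[:, :, z], cmap='RdBu_r', vmin=-4, vmax=4)
    ax.set_title('z = %d' % z)
    ax.axis('off')
fig.colorbar(im, ax=list(axes), label='t value')
plt.show()
```

Overlaying these maps on the anatomical image would be the next step once we know how to use the anatomy data.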

Need to add paperfig make command

Waiting for all figures to be saved in paper/figures; then edit the Makefile by adding a paperfig target to move the corresponding figures from results/figures into paper/figures.
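One way the paperfig target could work is to call a small Python helper like the sketch below (the directory names follow this issue; the script name itself is hypothetical).

```python
# copy_paper_figures.py (hypothetical helper a `make paperfig` target could run)
import os
import shutil

SRC = 'results/figures'
DST = 'paper/figures'

if not os.path.isdir(DST):
    os.makedirs(DST)
for name in os.listdir(SRC):
    if name.endswith('.png'):
        shutil.copy(os.path.join(SRC, name), DST)
```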
