berkeley-stat159 / project-theta
License: BSD 3-Clause "New" or "Revised" License
Let's use this to make clear what's going on with the regression files and all the related moving parts.
Let's make new developments in the dev branch, so that master stays a "clean/stable" version with code-reviewed work, test functions, and documentation for everything. In other words, make pull requests to the dev branch, and after everything looks good, we can pull-request dev into master.
Had a nice introduction to the concept of loss aversion and what their paper
was studying (one of the clearest introductions to the loss-aversion study).
The only group that really introduced the models they intend to use. They've
clearly done a good job reading and understanding their paper.
Preliminary results/preprocessing results were noticeably absent; a lot of the
focus was on information from the paper. Seemed a bit behind in working with the
fMRI data.
Overall, you seemed to have a very nice conceptual understanding of your study,
but you really need to get working with the data. Your presented goal was
reasonable and feasible (you propose to repeat the analysis in the paper to
validate the models and regression techniques used), but it seemed like very
little concrete work has materialized toward that end.
You need to make sure you are writing valid code. For example, you can't use
np.something if np doesn't exist (i.e., hasn't been imported).
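To illustrate the point above: each script or module should carry its own imports rather than assuming `np` is already in scope. A minimal sketch (the function name and shapes here are hypothetical, not from the project):

```python
import numpy as np  # np must be imported before any np.* call will run


def vol_mean(data):
    """Mean over the last (time) axis of a 4D fMRI array, assumed (x, y, z, t)."""
    return np.mean(data, axis=-1)


print(vol_mean(np.ones((2, 2, 2, 5))).shape)  # (2, 2, 2)
```

Without the `import numpy as np` line at the top, the call would raise `NameError: name 'np' is not defined`.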
Question: should we break off the last section of "PLAN" into "DISCUSSION"?
What should we do when we don't get integer voxel coordinates? Do we round the coordinates to integers?
For example, in our paper the mm coordinate is (3.6, 6.3, 3.9); when we convert it to voxel coordinates, it is (43.2, 66.15, 37.95).
Thanks!
@matthew-brett @jarrodmillman @rossbar
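One common convention for the question above is to round to the nearest integer voxel index after applying the inverse of the image affine (e.g. via `nibabel.affines.apply_affine` with the inverted affine). A sketch using the voxel values quoted above; whether rounding is the right choice for this analysis is exactly the open question:

```python
import numpy as np

# Voxel coordinates obtained from the mm coordinate (3.6, 6.3, 3.9),
# as quoted in the comment above.
voxel = np.array([43.2, 66.15, 37.95])

# One common approach: round to the nearest integer voxel index.
ijk = np.round(voxel).astype(int)
print(ijk)  # [43 66 38]
```

An alternative to rounding is to interpolate the image at the non-integer location, which avoids snapping to a neighboring voxel.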
Consider writing the following code for: (1) across subjects, (2) each subject, (3) within each subject, i.e., each run within a subject.
Also consider making the functions work by axis.
- Mean/median, mode(?)
- Covariances/correlations
- Any other summary statistics that might seem useful
We don't necessarily need to output this data, but it would be useful to have the code on hand. I think making the functions as general as possible is a good idea.
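A minimal sketch of the "general, works by axis" idea above (the function name and returned statistics are hypothetical; the same call would then cover runs, subjects, or the full stack depending on how the data is arranged):

```python
import numpy as np


def summary_stats(data, axis=None):
    """Return basic summary statistics of `data` along `axis`.

    `data` could be a single run, one subject's runs stacked on an axis,
    or all subjects stacked together; `axis=None` summarizes everything.
    """
    return {'mean': np.mean(data, axis=axis),
            'median': np.median(data, axis=axis),
            'std': np.std(data, axis=axis)}


stats = summary_stats(np.array([[1., 2., 3.], [4., 5., 6.]]), axis=0)
print(stats['mean'])  # [2.5 3.5 4.5]
```

Keeping `axis` as a pass-through to numpy is what makes the function general: the caller decides whether the statistic is per-voxel, per-time-point, or global.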
Read/test through the files, check when completed
Readme:
Running make
Paper draft read-through: how are we deciding to build this? If we do it by sections, we need a README for the sections directory as well.
Address issues:
In graph_lindiag_script.py, the import `from graph_lindiagnostics import qqplot, res_var` needs to be done from the utils/graphing folder.
** should be taken care of, please check #143
Waiting for all the scripts to come in and we will need to update the makefiles and readmes accordingly
@pigriver123 please comment when all scripts are done
When I run `make test` after cloning your repository, I get 1 out of 3 tests failing. I've attached the error summary from nosetests:
the first set of heatmaps shows up weird
Our draft is in a separate folder named "draft" instead of the paper directory. @jarrodmillman
We set up a meeting with Jarrod on Monday (11/23), 3-4 PM, in the third-floor lounge where we normally meet.
@jarrodmillman @BenjaminHsieh @pigriver123 @boyinggong @BrianQIu
Write functions that:
- convert data from .nii.gz into numpy arrays (possibly a short function; possibly also convolve the data)
To do:
- make sure `make` commands produce the correct results, e.g. all tests run smoothly
@BrianQIu can you add the new data into the .gitignore in the project directory? Thanks!
Should have code/functions that generate plots from data. Consider what data types should be allowed: single runs, single subjects, or across all subjects?
- boxplots of the std
- plots of basic summary statistics like correlations/covariances?
- std over time, like we did in the homework
- any other plots that sound useful
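For the "std over time" item, one possible shape for such a plotting function (function name and output file are hypothetical; the `Agg` backend is used so it runs headless, e.g. under `make`):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # file-only backend: works without a display
import matplotlib.pyplot as plt


def plot_std_over_time(data, fname):
    """Plot the std across voxels at each time point of a 4D (x, y, z, t) array."""
    stds = np.std(data.reshape(-1, data.shape[-1]), axis=0)
    plt.figure()
    plt.plot(stds)
    plt.xlabel('time (TR)')
    plt.ylabel('std across voxels')
    plt.savefig(fname)
    plt.close()
```

A single-run version like this generalizes to subjects by looping over runs, or by stacking runs on an extra axis before reshaping.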
We need to:
(1) pull in the data files when we run the makefile. We can do this either through bash commands or by calling a Python file; I think we should consider making a Python file
(2) generate a hash for each file
(3) make sure we validate it in the data.py test
Comment any thoughts
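Step (2) above is a few lines with the standard library; a sketch (the helper name and the checksum-table idea in the comment are hypothetical, and the data test in step (3) would compare against stored expected hashes):

```python
import hashlib


def file_hash(path):
    """Return the SHA-1 hex digest of the file at `path`."""
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

# Hypothetical use in the data.py test: compare against a stored checksum, e.g.
#   assert file_hash(downloaded_file) == expected_hashes[downloaded_file]
```

For the large fMRI files, reading in fixed-size chunks and calling `.update()` on the hash object would avoid loading a whole file into memory.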
Cannot find the following file referenced in 02_preprocessing.md:
../code/utils/smoothed_images.png
Please save images in paper/figures folder and update the slides
@pigriver123 @BrianQIu @changsiyao @boyinggong
So far I have analysis issues with run time and with identifying regions to look at.
Waiting for all figures to be saved in paper/figures; then edit the makefile by adding a `make paperfig` target to move the corresponding figures from results/figures into paper/figures.
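The copy step could be a small Python helper that a `paperfig` make rule invokes. A sketch under the assumption that figures are `.png` files (the function name is hypothetical):

```python
import os
import shutil


def copy_figures(src_dir, dst_dir):
    """Copy .png figures from src_dir (e.g. results/figures) to dst_dir."""
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    for name in os.listdir(src_dir):
        if name.endswith('.png'):
            shutil.copy(os.path.join(src_dir, name),
                        os.path.join(dst_dir, name))
```

Doing this in Python rather than raw `cp` keeps the makefile portable and makes it easy to create paper/figures if it doesn't exist yet.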