psychtaskframework's Issues
Offer to project to another screen automatically
`Screen('Screens')` lists the screens available to the user. Other than checking that the chosen screenId is, in fact, available, we could also automatically redirect PTB output onto the in-fMRI projector, which would e.g. allow the experimenter to see the record of the subject's choices printed to stdout by `choiceReport`.
Abstract "elements" out of "draw scripts"
Currently, "draw scripts" handle both (1) the logic of drawing things on the screen and (2) deciding how long to keep them there and/or collecting responses. (The current structure is a product of the desire to move on quickly from #2 and #7.) By convention, the latter logic is abstracted to a local function for the draw script, but I think that gets the problem semantically backward.
Each draw script currently handles one part of the trial. It has some composition of elements; it lasts for a particular duration or until a particular condition is fulfilled. The specifics of how the elements should be drawn are details that distract from the high-level picture: What elements should be on the screen? And how long should they remain?
The main challenge to refactoring is that some elements require screen dimensions and the position of others. #32 offers one solution -- just re-compute the `rect` values from those objects in `settings.object` that you know will be on the stage with you! -- but I think that might be short-sighted. It would be easier for the draw script to be required to pass to its element scripts a list of elements already drawn.
The other challenge is determining the conventional interface that element scripts should provide. Should they take `device` (for `windowPtr` etc.), `variables` (for values to put in), `otherElts` (see above), and `callback` (for anything else)? Are there other needs?
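As a sketch of what that conventional interface might look like -- all parameter names here are the hypothetical ones from the question above, and the element itself is made up for illustration:

```matlab
% Hypothetical element script conforming to the proposed interface.
% device, variables, otherElts, and callback are the suggested argument
% names from the discussion above, not an existing convention.
function rect = drawResponsePrompt(device, variables, otherElts, callback)
  % device:    struct with windowPtr, screenDims, etc.
  % variables: values to render (here, the prompt text)
  % otherElts: map of element name -> rect for elements already drawn
  % callback:  optional hook for anything else
  promptText = variables.prompt;

  % Position relative to an element that is already on the stage
  anchor = otherElts('lottery');
  x = anchor(1);
  y = anchor(4) + 20; % 20 px below the lottery box

  Screen('DrawText', device.windowPtr, promptText, x, y);
  rect = [x, y, x + 200, y + 30]; % report own bounding box to the caller

  if ~isempty(callback)
    callback(device, rect);
  end
end
```

Having each element return its own `rect` is what lets the draw script accumulate the "list of elements already drawn" for the next element.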
Finally, if the draw scripts delegate the actual drawing to other functions, shouldn't they be called something else? Phase scripts are the first idea, but there might be a better option.
Play nice with zero durations
(This might mean two different things. For display durations, this might mean skipping the display altogether. For response durations, however, this means that there is no time limit. It's an open question whether indefinite durations should be represented by another special value -- `Inf` or `NaN`, perhaps.)
Clarify namespace for task-specific functions
Create `valueFormatter` for monetary RA, images
See #5.
Generalize function for keypress capturing
Currently, each sub-function either rolls its own, or relies on `waitForBackTick`. This violates DRY.
Calculate text size dynamically
Like `gettextDims` from 3df8db0.
Fix positions of drawn objects
Branch `SP_refactor` has made dimension setting significantly clearer, but somewhere along the way, it lost some information -- `master`, although ugly, reliably displays the task correctly, whereas `SP_refactor` often screws up on smaller displays etc.
Setting to run @XYZ_{drawTask,drawRef,drawBgr} in a generic runTrial function
Both `R&A` and `MDM` have relatively few differences. The main thing that they do differently occurs in `XYZ_drawTask.m`, but that function call is hard-coded in `XYZ_drawTrial.m` (which, by the way, should be `XYZ_runTrial.m`). (Other functions that might be different are `drawRef`, because of the different reference, and `drawBgr`.)
It should be very easy to specify these in the settings, and thus allow users to touch a couple fewer files.
Write sample preBlock and postBlock functions
(e.g. to display `Block #{n}`)
Store trial properties and subject choices across blocks
Currently, each trial property is passed separately, and appending needs to be specifically defined.
- Pass all trial properties in a table for all trials in `Data.trials`.
- Write all choices, response times etc. in `Data.records`, such that `Data.records` and `Data.trials` are joinable by `trialId`.
- Clarify function responsibilities: any `X_drawTrial` function needs to have a `trialId` passed to it, and any `X_handleResponse` needs to append its record to `Data.records`. (This might mean that all sub-task functions need to be passed `trialId` as well. The easiest way to do this is to cheat, and pass it through `Data.currTrial`, set before the `X_drawTrial` call by `runBlock` (or whatever function is directly calling trials).)
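A sketch of what the joinable layout could look like, assuming `Data.trials` and `Data.records` become MATLAB tables; the column names other than `trialId` are illustrative:

```matlab
% Trial properties, defined up front, one row per trial
Data.trials = table([1; 2; 3], [5; 10; 20], ...
  'VariableNames', {'trialId', 'stakes'});

% Collected responses, appended by the response handler, one row per trial
Data.records = table([1; 2; 3], [1; 0; 1], [0.82; 1.34; 0.95], ...
  'VariableNames', {'trialId', 'choseLottery', 'rt'});

% Joinable by trialId: properties and responses side by side for analysis
full = join(Data.trials, Data.records, 'Keys', 'trialId');
```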
Abstract out parts of trial
System to save (possibly disparate) trial records
For legacy reasons (and laziness reasons), most data structures used in this code are `struct`s and cell arrays. This means (a) unlimited nesting, (b) absence of type and/or content checking. This will inevitably give rise to obscure errors down the line.
Moving to cleaner OOP would be one solution. Writing a modification function for every key struct would be another. But right now, we don't do either.
Filing this for future fix.
Second semantic naming sweep-through
- `preBlock.m` is not descriptive
- `redProb` and `blueProb` are still used in places
- ...
Fix suboptimal keypress capture method (`KbCheckQueue` instead of `KbCheck`)
See Harvard's CBS reasoning. Best to implement simultaneously with #9.
Abstract out generating trial order
Create abstraction for blocks
Move all properties for display objects to `settings.objects`
For cleaner name-spacing reasons. Will require re-writing all draw functions.
Translate key presses into choseLottery right away
This is really simple to do at the time of choice, and considerably harder later.
(Doing this at the time of choice also allows flagging stochastic dominance issues for the experimenter's attention, but there are sufficient reasons to do this for its own sake.)
Central, semantic config
Currently, many options are hard-coded into each file; many are passed a bit haphazardly via the participant data container struct `Data.stimulus`.
- Centralize display options in `config`
- Update test script with the correct option names
- Pass `settings` to display sub-functions that need them separately, rather than as part of `Data`
Use Screen('Flip') for timing?
`Screen('Flip')` provides highly accurate timing. Wherever we time events, we use `datevec` differences; we do use `Screen('Flip')` to re-draw the window, but never use its return values.
Can we do better?
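A minimal sketch of what using the return values could look like; the variable names are illustrative:

```matlab
% Screen('Flip') returns timestamps synchronized to the vertical blank,
% on the same clock as GetSecs / KbCheck -- no datevec arithmetic needed.
[vbl, stimulusOnset] = Screen('Flip', windowPtr);

% ...later, when polling for a response...
[keyIsDown, responseTime] = KbCheck;
if keyIsDown
  rt = responseTime - stimulusOnset; % reaction time relative to actual onset
end
```

Because `stimulusOnset` reflects when the frame actually appeared (not when the drawing code ran), reaction times computed this way are robust to drawing latency.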
Write initial README.md
- `config` and its options
- The refactored structure (calling function, `runBlock`, `runTrial`, and `drawX`)
- The responsibilities of each layer
- What to customize to make the functions fit your task
- Example of changing things with drop-in callback functions
- Example of re-writing a specific function
- Using images instead of text
- Capturing key-presses
- Extracting collected data from data files
Develop a way to recycle (save & load) per-block config
"rails generate" for tasks and configs
Investigate difference between Screen('DrawText') and DrawFormattedText
3df8db0 changed from the latter to the former. Should this change be merged into the refactor?
Run most / all phases with `PhaseConfig` and `phase_generic.m`
There's a finite number of variables in the various stages of task drawing. In theory, a well-defined object/stage in `settings.objects.(x)` could be drawn and timed in general boilerplate code. If this could be made sufficiently lightweight, it would remove the need to write custom `drawX` functions.
The complexity here could get ugly fast. The possible properties would have to be well-spec'd and strictly bounded; at some point, it might be easier to just provide boilerplate code for specific kinds of task parts.
After interruption, resume next block / trial
If a block is interrupted halfway, it is desirable to repeat as little of it as possible. Different setups will allow for different levels of this -- in fMRI, timing dictates that we'll have to restart the block, but in a behavioral experiment, starting at the last-seen choice is preferable.
Implementing block restarts should be relatively easy.
- On loading the participant data file, notice if the participant has all the blocks they should, at this point, have. (`settings.game.blocksPerSession` seems like a good idea.) Since `runBlock` only saves after all trials have concluded, all blocks in `Data.recordedBlocks` are finished. Done as of a3c4b65.
- If the participant doesn't have all blocks, set a flag for how many blocks they do have. (At this point, prompt the experimenter to optionally reset the flag.) Done: `Data.blocks.numRecorded`.
- Implement per-block logic that decides whether this flag qualifies the participant to skip that block. (Easy one: an additional argument passed to `runBlock` that specifies its order among all blocks.)
Implementing trial restarts would require the following changes to `runBlock`:

- saving after every trial rather than after every block, and
- setting a per-trial flag that can switch the trial for-loop from `1:num_trials` to `flag:num_trials`.
Distinguishing between the two approaches should be done with a config flag. (`s.misc.saveFreq`, where 0 is after each trial, 1 after each block, and 2 after each session?)
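A sketch of how the hypothetical `saveFreq` flag might steer saving; the loop shape and `runTrial`/`saveData` usage are assumed from the surrounding issues:

```matlab
% startTrial is 1 normally, or the resume flag after an interruption.
for trial = startTrial:numTrials
  Data = runTrial(Data, settings, trial);
  if settings.misc.saveFreq == 0
    saveData(Data); % per-trial saving is what makes trial-level resume possible
  end
end
if settings.misc.saveFreq == 1
  saveData(Data); % per-block saving (today's behavior)
end
% saveFreq == 2: caller saves once at the end of the session
```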
Segregate interfaces and responsibilities in data collection
There are three levels of abstraction: block, trial, and trial part. Currently, each part requires, and has full access to, `Data` (a `struct` with all metadata & collected data for the participant) and `settings` (a specification of the task design for the particular block type).
It works for the task design in `settings`, but is problematic for `Data`, as each trial must know how to correctly append its collected data to the previously collected data (which may or may not have the same format). This can result in data loss.
Each trial must only be expected to collect its per-trial data and clarify what trial properties (and perhaps trial ID) the collected data correspond to.
An elegant solution would have the block abstraction append the new row to the previous rows, and the session procedure integrate data collections across blocks.
A stop-gap solution might just have each block stick every new per-trial row in a cell array (along with block type) and deal with this problem later.
Check with experimenter about participant data & what to run
Similar to #27, it would be useful to require an experimenter's go-ahead for any task. The use case here is a two-session fMRI task in which you want to avoid a mistype that would append the data to a different subject's data file, or accidentally repeating Session 1 and saving into a new data file.
Update (and create) function descriptions
Many function descriptions (e.g. `imgLookup`) are somewhere between deprecation and wishful thinking. Before imposing this on users, make sure that the functions are described correctly.
Wrapper function around Screen('OpenWindow') and Screen('CloseWindow')
There are a couple of tasks that are always necessary to do alongside `Screen('OpenWindow')`; most prominently, setting `windowPtr` and `screenDims` in `settings.device`. This should be factored out for general ease of usage, and for preventing some simple errors.
What simple errors? For example, almost all scripts use `screen.device.screenDims(3:4)` as width and height. But this is only correct as long as the top-left pixel of PTB's screen is `(0, 0)`! The actual width and height are `[dims(3) - dims(1), dims(4) - dims(2)]`.
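A sketch of the safe computation; the `RectWidth` and `RectHeight` helpers ship with Psychtoolbox and do exactly this subtraction:

```matlab
% PTB rects are [left, top, right, bottom]; dims(3:4) only equals
% [width, height] when the window's top-left corner is (0, 0).
dims = settings.device.screenDims;
W = dims(3) - dims(1);
H = dims(4) - dims(2);

% Equivalent, using Psychtoolbox's own helpers:
W = RectWidth(dims);
H = RectHeight(dims);
```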
`Screen('CloseWindow')` doesn't present many problems, but having a shutdown script that unsets the relevant `settings.device` values would be a good complement.
Rename drawTrial to runTrial
The reason is obvious: `drawTrial` doesn't draw anything. It defines what phases the trial has, and (at its most concrete) what elements get drawn. The current naming convention is the result of #2 and needlessly confusing.
- Rename the files
- Change the function calls
- Test that `RA` and `MDM` aren't failing
- Edit the `README` to reflect the change.
Decide whether to insist on full paths for images
Now that everything isn't on the same path (#18), path errors have become a possibility. One way to make sure that we cover the corner cases is to require, at the point of the `loadTexturesFromConfig` call, that all image paths be given as full paths.

Alternatively, if images are placed in a predictable spot (`[root]/tasks/*/img`), is there a reliable way to make sure that those are always looked up and loaded?
Refactor commonalities between tasks
There's significant duplication between the two main task scripts. Most of it can be factored out (see e.g. #36).
Ensure that RA.m fully implements the full RA experiment
- Make sure that previously generated trials for the given order are saved across sessions, and are loaded in session 2.
- ~~Implement a voluntary argument `Day`, which tells the script which session this is.~~ Make the script decide which session to run based on how many elements are present in `Data.recordedBlocks`.
- Implement a function that sets the order of loss/gains blocks for observer.
- ~~Set `settings.game.block.noSave` that will make sure that the practice block can be set to not save.~~ Implement #40.
- ~~Set `settings.game.blocksPerSession` and `settings.game.blocksMax`.~~ Implement block-limiting logic in `RA.m`, then see if it can be generalized.
- Implement `experimentSummary(Data)` (#30)
After all of the above are done, verify that:
- Task & blocks run and stop as they should;
- last as long as they should;
- record what they ought to.
When all of the checkboxes are checked, merge `SP_refactor` into `master`.
Re-implement experimentSummary
Will be very easy after implementing #29.
Stronger content checking
The scripting system thus far has been saving "gains" and "loss" blocks in separate files. In the case of monetary R&A, this is unnecessary -- adding a column for `blockType` would have been sufficient to distinguish the records from different kinds of blocks -- but it isn't inconceivable that one experiment could have blocks that set different trial properties and collect different trial variables, which would make table concatenation non-trivial.
It's possible to write a generalized table concatenation function (viz) -- one which (1) adds missing columns to either table, and (2) separates out table variables that have the same name but different variable type. This might still be a good thing to do to simplify post-experiment data analysis.
The data collection script, however, should play this safe. For this reason, I propose to implement the following data structure:
- `Data.recordedBlocks` is a cell array; the `n`th block run by `runBlock` has a record at `Data.recordedBlocks{n}`.
- `Data.recordedBlocks{n}.settings` contains the `blockSettings` used for the block.
- `Data.recordedBlocks{n}.records` contains the collected data.
This is technically a step down, because now the data will be segregated by block, not block type. If we decide that we can rely on standardization of block types -- such that each block run with `RA_Gains` is, in fact, collecting the same kind of data with the same kind of trial properties -- then we can concatenate across blocks. For now, however, ensuring that data gets collected is more important.
Rewrite `config` to be default, `TASK_config` to alter it
I had been treating `config.m` as the configuration file for monetary R&A, and just crudely copied it for the medical decision-making task. This is a dumb anti-pattern, and it's very easy to fix -- make it very clear that `config.m` is the sensible default and `TASK_config.m` is the per-task config that can, if useful, use a `config()` call as a baseline.
This should be an easy refactor.
Extract position-setting into a callback function
The reason position isn't set in `config` is that for many objects, their position is a function of (a) screen width and height, (b) other objects. For those, however, the computation is deterministic.
`settings.object.X.posFn` could be the standard place for an object to save the requisite function handle in. The standard format is `function XPos(ScreenWidth, ScreenHeight [, settings.objects]) -> rect`, where `rect` is a 4x1 matrix. If the format is kept constant, this will enable other functions to extract other object positions without necessarily knowing what their specific position-calculating function is, simply by calling `settings.object.X.posFn`.
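A hypothetical `posFn` in the proposed format; the object dimensions are made up for illustration:

```matlab
% Position-calculating callback: centers the object horizontally and
% places it in the upper third of the screen. The third argument lets
% it read other objects' settings when positions depend on them.
function rect = XPos(screenWidth, screenHeight, objects)
  w = 300; h = 100; % illustrative dimensions; could come from objects
  left = (screenWidth - w) / 2;
  top  = screenHeight / 3 - h / 2;
  rect = [left, top, left + w, top + h];
end
```

Stored as `settings.object.X.posFn = @XPos;`, any function can then recover the position generically with `rect = settings.object.X.posFn(W, H, settings.objects);` without knowing which specific function computes it.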
Line 17: Combining choice and response time to 10s
Participant will have a maximum of 10 seconds to both view and respond to the choice.
Invoke drawBgr regularly?
If `observer` isn't supplied, run a practice block
If there's a practice run and nothing needs to be collected, invoke `RA` or `MDM` without arguments and let them switch to practice mode.
Basic export script
In the new system, data is saved differently. We should provide a system to cleanly export it, and optionally a system to re-make it into how the data used to look.
Use `settings.objects.X.shape` to specify Screen function
This is pretty simple -- right now, in drawX for shapes, `Screen('FillRect')` or `Screen('FillOval')` is hard-coded. Since their settings are pretty much the same, `Screen(sprintf('Fill%s', settings.objects.X.shape))` should accomplish it just as well.
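A sketch, assuming `settings.objects.X.shape` holds the string `'Rect'` or `'Oval'` (matching PTB's `FillRect` / `FillOval` commands) and the object carries illustrative `color` and `rect` fields:

```matlab
% Dispatch on the configured shape instead of hard-coding the command.
% 'Rect' -> Screen('FillRect', ...), 'Oval' -> Screen('FillOval', ...)
obj = settings.objects.X;
Screen(sprintf('Fill%s', obj.shape), windowPtr, obj.color, obj.rect);
```

Validating `obj.shape` against the small set of supported values would turn a typo into a clear error rather than an obscure `Screen` failure.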
Clean up & document `config`
- Contains many ad-hoc values that shouldn't be used.
- Uses inconsistent naming conventions (#16).
- Uses inconsistent value use conventions.
- Not properly documented.
Before throwing this at users, this must be fixed or at least documented.
Implement image choices
See the `MDM` branch.
Remove unused files and impose a structure on the remaining ones
- Remove `RA_{Practice,CallAll,Gains,Loss}*`, `experimentSummary.m`, `setParams_LSRA.m`
  - RA_Practice_v2 contains many changes from its version on `master`, but all of them are subsumed in `RA` and `runBlock`. (The largest difference is how often it calls `drawBgr`.) As the changes in refactored functions break it more and more, it's become less and less useful.
- Remove `drawx.m` in favor of an entry in documentation
- Make one-use functions local (`addBlock.m`)
- Move general helper functions into `lib/` (`centerRectDims`, `choiceReport`, `orderLotto`, all formatters / lookups, `loadOrCreate`, `saveData`, ...). Things that remain on the top level should be either essential (`runBlock`, `drawTrial`, `config`) or task-specific.
  - Add `addpath('./lib')` to the task scripts.
- Shuffle off all task-specific functions into `tasks/{task}/*`, except for the top-level calling scripts (`RA.m`, `MDM.m`).
  - This will require path control. The suggestion is that the top-level scripts should take care of that, and scripts nested in deeper directories should assume their dependencies are on path.
  - Add `addpath(genpath('tasks/{task}'))`
- Shuffle off all task-specific images into either `img/{task}/*` or `tasks/{task}/img/*`.
Write `tryX` scaffolding to quickly test elements
For sprawling tasks with many moving parts, it can be difficult to verify if one part works. It would be useful to have scaffolding functions that allow the user to test their element / trial stage / trial / block.
What would this require? At first glance, passing a function handle and any arguments that it might expect (especially `settings`). The `tryX` function would open a window (& thus populate `settings.device`), load any textures necessary, execute the function handle, hold it for three seconds after the function is finished, and then close the window.
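A sketch of such a scaffold under the assumptions above; `tryX` and its argument convention are hypothetical, and texture loading is left to the function under test:

```matlab
% Scaffold for quickly eyeballing a single element / phase / trial.
% fn is a handle to the function under test; varargin forwards any
% extra arguments it expects.
function tryX(fn, settings, varargin)
  [settings.device.windowPtr, settings.device.screenDims] = ...
      Screen('OpenWindow', max(Screen('Screens')));
  try
    fn(settings, varargin{:});                 % run the function under test
    Screen('Flip', settings.device.windowPtr); % show whatever it drew
    WaitSecs(3);                               % hold for inspection
  catch err
    Screen('CloseAll');  % never leave a dead fullscreen window behind
    rethrow(err);
  end
  Screen('CloseAll');
end
```

The `try`/`catch` matters: without it, an error in the tested function leaves PTB's fullscreen window covering the desktop.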
I'd love to have automated testing, but until I figure out how to make PTB in a VM play nicely with test scripts, this is the closest approximation. (If we can figure out how to make a test suite run without taking over a screen, that is. If we can, the test suite should be able to detect, at the very least, the bugs that raise exceptions / otherwise halt programs.)
Make MDM.m fully implement MDM
- Switch between monetary and medical blocks
- Define what practice looks like
- Get the values of stakes right
- Get the display of stakes right
- Determine what other changes are necessary.
Implement runSingleTrial / runFirstTrial wrapper
It might be useful, whether for testing (#34) or very granular designs, to have a `runBlock`-like abstraction that skips block callbacks & other niceties.