sappelhoff / sp_experiment

Experiment code for the `mpib_sp_eeg` experiment

Home Page: https://github.com/sappelhoff/sp_experiment/

License: BSD 3-Clause "New" or "Revised" License

Topics: experiment, psychology, eeg, neuroscience, psychopy, tobii, tobii4c, 4c, eyetracking

sp_experiment's Introduction


sp_experiment - "Sampling Paradigm Experiment"

The sampling paradigm is an experimental paradigm in which participants are repeatedly asked to sample from one of many options and observe the outcome. After a number of such samples, the participant can make a final choice among the options and receives the outcome of this final draw as a payoff.

The sampling paradigm closely resembles the pure-exploration setting of the multi-armed bandit problem.

For more information, see the section on background literature.

This Python package implements the sampling paradigm in three versions:

  • A version without optional stopping
  • A version with optional stopping
  • A version where outcomes are not sampled, but shown in "description format"
    • Strictly speaking, this is not a "sampling paradigm", because no sampling takes place; the outcome distributions are described directly.

After these three tasks, participants in the study also performed the Berlin Numeracy Task.

Further information

This experiment code was used in the mpib_sp_eeg project.

Paper

The paper is available (open access) in Cerebral Cortex.

Preprint

A preprint is available on BioRxiv.

Experiment code

The experiment code in this repository is available on GitHub and is archived on Zenodo.

Data

The data is available on GIN (see "Download the data" below).

Analysis code

The analysis code for the data is available on GitHub and Zenodo.

Installation

This package is intended and tested to run on Microsoft Windows 10.

  1. It is recommended to use the Anaconda distribution; see the installation instructions
  2. Then run conda env create -f environment.yml
  3. Finally, activate the environment and call pip install -e . from the project root.
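In a terminal, the full sequence might look like this (a sketch; the environment name sp_experiment is an assumption, check the name field of environment.yml):

conda env create -f environment.yml
conda activate sp_experiment
pip install -e .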

Usage

You can start the experiment by calling python sp_experiment/sp.py.

First, a general navigation menu offers the following choices:

  1. run the experiment automatically (the full experiment)
  2. run the experiment (if only parts should be run)
  3. do some test trials
  4. calculate bonus money given a participant ID
  5. just show some instructions

For the general flow, select run_experiment. This will open a GUI that asks for the following information:

  • ID: A dropdown menu of integers to select as the unique identifier of a participant
  • Age: A dropdown menu of the participant's potential age
  • Sex: Dropdown menu to indicate the biological sex of the participant
  • Condition: Dropdown menu: "Active" or "Passive"

The experiment is set up so that the inputs in the GUI are restricted. More general settings can be changed in the define_settings.py file. Most important is the yoke_map: a dictionary that determines which replay of an active condition a participant sees when they are in the passive condition.

Example:

yoke_map = {1:1, 2:1}

Here, the first participant will see a replay of their own active condition when they perform the passive condition. The second participant will see a replay of the first participant's active condition. Note that for this to work, participant 1 HAS to perform the active condition first and the respective data needs to be present in sp_experiment/experiment_data as saved by the logger.
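A slightly larger, purely hypothetical yoke_map (the IDs and comments are illustrative, not taken from define_settings.py):

# hypothetical yoke_map for four participants
yoke_map = {
    1: 1,  # participant 1 replays their own active run
    2: 1,  # participant 2 sees a replay of participant 1's active run
    3: 3,  # participant 3 replays their own active run
    4: 3,  # participant 4 sees a replay of participant 3's active run
}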

Makefile

There is also a Makefile to simplify several tasks. To make use of it under Windows, install GNU Make.

Eyetracking

The script works with a Tobii 4C eyetracker (on Microsoft Windows only). You will need the proprietary "TobiiProEyeTrackerManager" software, which can be downloaded from the Tobii Pro website: https://www.tobiipro.com/learn-and-support/downloads-pro/. To get all necessary drivers, you also need to download the "Tobii Eyetracking" software from the getting-started pages: https://gaming.tobii.com/getstarted/

Furthermore, you will need the actual hardware and a "Pro license", which allows users to access the data on the device. This license needs to be purchased separately and then loaded onto the specific eyetracker. See this form by Tobii for more information.

Finally, the interface to Python is provided by the tobii-research package: https://pypi.org/project/tobii-research/
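As a minimal sketch of that API (assuming an eyetracker with a loaded Pro license is connected; the callback is illustrative, not the code used in this package):

import time
import tobii_research as tr

# find the first connected eyetracker
eyetracker = tr.find_all_eyetrackers()[0]

def gaze_data_callback(gaze_data):
    # gaze_data is a dict when subscribing with as_dictionary=True
    print(gaze_data["left_gaze_point_on_display_area"])

# stream gaze data for one second, then unsubscribe
eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback,
                        as_dictionary=True)
time.sleep(1)
eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)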

EEG Triggers

Event markers (also called TTL Triggers) can be sent using the pyserial library. In our setup we use the Brain Products Trigger Box as a device to send serial data via a USB port, which gets transformed into a parallel TTL signal to be picked up by the EEG amplifier.

See define_settings.py and define_ttl_triggers.py for more information.
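A minimal sketch of sending one trigger byte with pyserial (the port name and trigger value are assumptions; the actual values live in define_settings.py and define_ttl_triggers.py):

import time
import serial

# open the serial port of the trigger box ("COM4" is an assumption)
ser = serial.Serial("COM4")

ser.write(bytes([1]))   # set the trigger pins (value 1 is hypothetical)
time.sleep(0.01)        # keep the trigger high briefly
ser.write(bytes([0]))   # reset the pins to zero
ser.close()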

Git notes

You may want to git ignore the data files that are produced by running the experiment. For that, simply add the following text on a new line to the .git/info/exclude file in your clone/fork of the repository:

experiment_data/

This will gitignore the folder locally on your machine.
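From a Unix-style shell (e.g., Git Bash on Windows), a one-liner achieves the same:

echo "experiment_data/" >> .git/info/exclude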

Details about the file structure

  • The /instructions directory contains text files of the instructions that participants saw
  • The /berlin_numeracy_task directory contains the BNT that was administered in printed form to the participants of the study after completing the computerized experiment
  • The sp_experiment directory is the Python module
    • software tests for the paradigm are in /tests
    • /image_data contains images for the instructions on screen
    • all .py files with a define_ prefix control some aspect of the experimental flow and are imported in the main file sp.py
      • define_settings.py contains several important constants
    • descriptions.py controls the experimental flow for the "description" version of the task
    • utils.py contains helper functions

Background literature

  1. Meta analysis on the sampling paradigm:

    Wulff, D. U., Mergenthaler-Canseco, M., & Hertwig, R. (2018). A meta-analytic review of two modes of learning and the description-experience gap. Psychological Bulletin, 144(2), 140-176. doi: 10.1037/bul0000115

  2. Original citation of sampling paradigm in behavioral science:

    Hertwig, R., Barron, G., Weber, E. U., & Erev, I. (2004). Decisions from experience and the effect of rare events in risky choice. Psychological Science, 15(8), 534-539. doi: 10.1111/j.0956-7976.2004.00715.x

  3. Corresponding ideas in the literature on multi-armed bandits:

    Audibert, J.-Y., & Bubeck, S. (2010, June). Best arm identification in multi-armed bandits. In Conference on Learning Theory (COLT).

    Bubeck, S., Munos, R., & Stoltz, G. (2009, October). Pure exploration in multi-armed bandits problems. In International Conference on Algorithmic Learning Theory (pp. 23-37). Springer, Berlin, Heidelberg.

sp_experiment's People

Contributors

octomike, sappelhoff

sp_experiment's Issues

Replay shows data from error trials

In the passive condition of the experiment, we draw on an already saved event log of an active condition run.

Errors that happened in the active condition run should be completely disregarded in the passive condition replay; however, this is currently not the case.

Example

  1. subj-A does active condition
  2. in their trial 1, they commit a mistake after the 3rd sample (they wait too long), so the trial is reset and they start anew (with new reward distributions)
  3. thus, they have N + 3 samples in their dataframe for that trial
  4. The +3 samples most likely contain outcomes that are not part of the outcomes of the new reward distribution
  5. hence, a participant yoked to subj-A can, in their passive condition and in the described trial, see more than the usual maximum of two outcomes per option (e.g., 3 or even 4).

This happens rarely, but it must be fixed!
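A minimal sketch of the intended fix, assuming the event log is loaded into a pandas DataFrame with hypothetical trial and error_marker columns; only the samples after the last error within each trial would be replayed:

import pandas as pd

def drop_error_samples(df):
    """Keep, per trial, only samples after the last error (hypothetical columns)."""
    def last_clean_segment(group):
        error_rows = group.index[group["error_marker"]]
        if len(error_rows) == 0:
            return group  # no error in this trial, keep everything
        return group.loc[error_rows[-1] + 1:]  # drop samples up to the last error
    return df.groupby("trial", group_keys=False).apply(last_clean_segment)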

BUG: Lotteries shown in "description" task do not necessarily correspond to those in the "active" task

When analyzing the behavioral data and plotting the expected value differences in the different conditions (active-fixed: AF, active-variable: AV, yoked-fixed: YF, yoked-variable: YV, description: DESC), I noticed the following:

  • EV differences are NOT always the same between the description task and the corresponding active task by subject (sometimes AF, sometimes AV)

The following figure prompted my investigation of this issue:

  • "true" EV diff should always be at 0.9, except for the DESC task ✔️
  • "experienced" and "true" EV should be the same for DESC task ✔️
  • EV diffs for DESC task should be "a mixture" of EV diffs of the two active tasks (AF, AV) ❌

[figure: true vs. experienced EV differences per condition (AF, AV, YF, YV, DESC)]

This was due to several small mistakes, but the general error boils down to:

  • when preparing the sampled outcomes from an active task to be displayed as a description task, the ordering of magnitudes and probabilities was sometimes different.
  • Example: a 1-90%, 4-10% could become a 1-10%, 4-90%
  • Example: a 5-100% could become a 1-100%
    • This example needs some explanation: when only a single outcome was encountered for an option, a code value of 98 was appended to that option's outcomes, occurring with a probability of 0. Later in the pipeline, values with a probability of 0 were not shown. In this example, the magnitude 5 with probability 100% was exchanged with the magnitude 1 with probability 0 (because here the 98 code replaced a 1 outcome that never occurred). As a result, a 1 was shown with 100% although it was never encountered, and the 5 was not listed at all in the description task.

Two smaller bug fixes make this error less likely to occur:

  • 1fa12d2 ... to prevent np.unique from sorting values (see the sketch after this list)
  • f09dece ... to save outcomes from the active task in order of occurrence

and one bug fix solves this issue completely (even without the two smaller bug fixes above):

  • 7704a05 ... to always match a magnitude with its corresponding probability
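To illustrate the np.unique pitfall behind the first fix, here is a minimal, self-contained example (the outcome values are made up):

import numpy as np

outcomes = np.array([4, 1, 4, 4])

np.unique(outcomes)
# -> array([1, 4]) ... sorted, so the order of first occurrence is lost

# recover the order of first occurrence instead
_, first_idx = np.unique(outcomes, return_index=True)
outcomes[np.sort(first_idx)]
# -> array([4, 1])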

What does this mean?

This means that each participant did SOME trials of their DESC task with lotteries that did NOT match the lotteries they experienced in their active task version.

The next step is to find out how many trials in the DESC task are affected by this per participant. Note that for some trials, the correct data will have been displayed "by chance", i.e., in some cases the bug had no effect on the ordering.

events.json --> events_json.py

Turn the JSON file into a Python file that produces the JSON.

... once that is done: import values from the define_ttl_values.py script ... and make that script a single function that returns the values.
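A minimal sketch of what such a script could look like (the function names and trigger values are hypothetical, not the actual module API):

import json

def provide_trigger_values():
    """Hypothetical stand-in for a single function in the trigger-values script."""
    return {"trial_start": 1, "sample_onset": 2}

def produce_events_json(fname="events.json"):
    """Write the events JSON file from the trigger values."""
    levels = {str(value): name for name, value in provide_trigger_values().items()}
    events = {"value": {"Description": "TTL trigger values", "Levels": levels}}
    with open(fname, "w") as fout:
        json.dump(events, fout, indent=2)

if __name__ == "__main__":
    produce_events_json()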
