
titta's People

Contributors

anna-stacey, dcnieho, dev-jam, mao-dongyang, marcus-nystrom, peircej


titta's Issues

merge msg_data and gaze_data into one file

Hi,
I'm a newbie to both PsychoPy and Titta, and have a possibly trivial question.
Following the examples in the demos, I have created two separate dataframes for gaze_data and tracker messages at the end of the experiment, written to two separate TSV files.
How do I join them into one file (with a single system_time_stamp column)?
Thank you in advance!
Amanda
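A hedged sketch of one way to do the join, assuming both TSVs share the microsecond system_time_stamp clock (the frames and column names below are illustrative stand-ins, not the demo's exact output): an outer merge keeps every gaze sample and every message, aligned on the timestamp.

```python
import pandas as pd

# Stand-ins for the two frames written at the end of the experiment:
gaze = pd.DataFrame({"system_time_stamp": [100, 200, 300],
                     "left_gaze_point_on_display_area_x": [0.4, 0.5, 0.6]})
msgs = pd.DataFrame({"system_time_stamp": [150, 300],
                     "msg": ["stim_onset", "stim_offset"]})

# Outer-merge on the shared clock so no sample or message is lost, then
# sort so messages interleave with the gaze samples in time order:
combined = (pd.merge(gaze, msgs, on="system_time_stamp", how="outer")
              .sort_values("system_time_stamp")
              .reset_index(drop=True))
combined.to_csv("combined.tsv", sep="\t", index=False)
```

Rows that are messages have NaN in the gaze columns and vice versa; pd.merge_asof is an alternative if each message should instead be attached to the nearest gaze sample.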

Error during calibration

Hi,
sometimes when starting a calibration I get the following error message, and I am not sure what to make of it:

Traceback (most recent call last):
  File "PATH_TO_SCRIPT", line 79, in <module>
  File "PATH_TO_SCRIPT\titta\Tobii.py", line 425, in calibrate
    self.animator = helpers.AnimatedCalibrationDisplay(self.win, target, 'animate_point')
  File "PATH_TO_SCRIPT\titta\helpers_tobii.py", line 539, in __init__
    self.screen_refresh_rate = float(win.getActualFrameRate())
TypeError: float() argument must be a string or a number, not 'NoneType'

Line 79 in my code is where the calibrate() function is called. The error does not appear all the time which makes it more confusing to me. Thanks in advance!
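For context, the traceback says win.getActualFrameRate() returned None, which PsychoPy does when it cannot measure a stable frame rate (e.g. when the machine is under load), which would also explain the intermittency. One possible workaround, sketched below with a dummy object standing in for the real psychopy.visual.Window, is to check the measurement yourself and patch in a nominal refresh rate before calling calibrate():

```python
class DummyWin:
    """Stand-in for psychopy.visual.Window, only for illustration."""
    def getActualFrameRate(self, *args, **kwargs):
        return None  # what PsychoPy returns when the measurement fails

win = DummyWin()

# Titta calls win.getActualFrameRate() inside calibrate(), so if the
# measurement failed, replace it with a callable returning a nominal rate
# (assumption: a 60 Hz display; use your monitor's actual refresh rate):
if win.getActualFrameRate() is None:
    win.getActualFrameRate = lambda *args, **kwargs: 60.0
```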

Callback function for gaze data

Is it possible with titta to associate a callback function when recording similar to that from the tobii_research SDK?

e.g. with tobii_research:
tracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, callback=gaze_data_callback, as_dictionary=True)

Binocular Mean Warning

Hello,

I am using your Titta package to run an X2-30 Compact. After the experiment is over, I get the message below when running the Slideshow demo that comes with the package:

C:\Program Files\PsychoPy3\lib\site-packages\titta\helpers_tobii.py:388: RuntimeWarning: Mean of empty slice
  avg_pos = np.nanmean([xyz_pos_eye_l, xyz_pos_eye_r], axis=0)
C:\Program Files\PsychoPy3\lib\site-packages\titta\Tobii.py:657: RuntimeWarning: Mean of empty slice
  str(int(np.nanmean([l_pos, r_pos])/10.0)), 'cm'])

Is there a way to just output each eye and not average?
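The per-eye values are already in the data as separate left and right columns; the warning only comes from the averaged convenience values, and it fires on samples where neither eye has valid data. If the per-eye columns are what you use, one option (a sketch, not an official Titta switch) is to silence that specific warning around the averaging:

```python
import warnings
import numpy as np

# A sample where neither eye was detected (all NaN) -- this is the case
# that triggers "RuntimeWarning: Mean of empty slice" in nanmean:
xyz_pos_eye_l = [np.nan, np.nan, np.nan]
xyz_pos_eye_r = [np.nan, np.nan, np.nan]

with warnings.catch_warnings():
    warnings.simplefilter("ignore", RuntimeWarning)
    avg_pos = np.nanmean([xyz_pos_eye_l, xyz_pos_eye_r], axis=0)
# avg_pos is all NaN for such samples, but no warning is printed
```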

Recording gaze data with dynamic AOIs

Hey again,

this time we are wondering if it is possible to use Titta to record gaze data and add dynamic AOIs as well as custom events.

In a normal Tobii Pro Lab project (not with the external presenter), it is possible to do a screen recording and add dynamic AOIs and custom events manually after the recording. However, when there are movable dialogs, as in our case, that the dynamic AOIs should relate to, doing this manually is infeasible: moving one or multiple AOIs frame by frame for each participant by hand simply costs too much time.

Short Question

Is there a way to start a screen recording using Titta, add custom markers (for when a dialog is visible on the scene), and add dynamic AOIs (based on the position of the dialog)?

Long Question

As we have not found a way to do screen recording with Titta, we tried to record only the gaze data, because we are not necessarily interested in the actual scene the participants are looking at; we only want to know whether participants look into the AOIs. However, when not presenting any stimuli, we cannot add AOIs, since an AOI is directly attached to a stimulus. So we tried to make Tobii Pro Lab believe that we were presenting a video: we uploaded a video and sent a stimulus event to Tobii Pro Lab so that it knows the timing and duration of the video, and we chose the video to be at least as long as the actual experiment. This video is never shown to the participants, who can observe the stimulus scene as they should; the fake video merely gives us a handle to add AOIs.

Now there are several problems: 1) send_custom_event() is not implemented in Titta's Python version (not a problem for us to implement, though). 2) add_aois_to_video() is not implemented in Titta's Python version, which turns out to be a bigger problem. We can delegate the task to add_aois_to_image(), which adds the defined AOIs to the video, yet only as static AOIs: whenever we add the same AOI with a different position, only the last position we defined for that AOI remains in the video timeline. We assume there is either a completely different operation to call on Tobii Pro Lab that is not covered by Titta (e.g. AddDynamicAois instead of AddAois), or the merge_mode parameter of AddAois needs to be changed from replace_aois to something else. Unfortunately, we do not have any information or documentation about the external presenter API that could help us solve this problem, even though we own an external presenter license.

Can any of you help us with this problem, or do you know a simpler solution to reach our underlying objective?

Thanks in advance!

EThead animation distorted in PsychoPy 2023.2.3

Since upgrading to PsychoPy 2023.2.3, the animated head used for subject position is distorted. It's no longer drawn as a circle, and instead appears as an ellipse. Eye tracking device is Tobii Pro Fusion. Image attached.


support for Tobii Pro Fusion

I have a Tobii Pro Fusion 120 Hz and am trying to use Titta with PsychoPy. I modified Titta.py accordingly:

elif et_name == 'Tobii Pro Fusion':
    settings.SAMPLING_RATE = 120

but I still get the "eye tracker type not supported" error. How can I use a Fusion with this code?

PsychoPy Builder demo for "Talk to ProLab"

Hi,

I am more than happy that I found your toolbox. Thanks for all the work you put into it!!!

So far, I tried the "general" and the "Talk to ProLab"-demos. Both work.
Now I wanted to build a PsychoPy experiment in the Builder view and add the relevant code as snippets to let PsychoPy communicate with Tobii Pro Lab like in the TalkToProLab.py script. This seems to be more difficult than I thought. Do you have any demo where you programmed something like this? If yes, it would be amazing if you could share it with me.

Best,
Hannah

P.S. I also tried slideshow_example.psyexp; here the calibration does not work. Whatever I do, it tells me that the calibration was unsuccessful.

Recalibration during the experiment while recording in Pro Lab

Hi,
In anti_saccades.py, the program crashes when the key 'q' is pressed ("checking someone wants to calibrate") during the sac.antisaccades sessions, with the following message: "The operation is invalid in this context. 'invalid_operation'".
To use self.tracker.calibrate(win), the tracker probably needs to be stopped first? It works fine once the tracker has been stopped with self.tracker.stop_recording(gaze_data=True) before recalibration.
Thanks!

control over storage location of calibration files

For the Titta OpenSesame plugin I need control over the storage location of the .p file and the calibration_image/validation_image. On Linux these files are stored directly in the user's home directory (~), but on Windows Titta tries to store them in a location where it does not have write permissions, so the plugin is not working on Windows.

My preference is to store these files in the same directory where I now store the .h5 data files, which is the folder where the OpenSesame experiment is located, and to insert the subjectnr from OpenSesame into these filenames.

underscores in message variable 'stimulusname'

In extract_trial_data.py, part # Read message for onset and offset:

In onset_stimulusname and offset_stimulusname, stimulusname cannot contain underscores: everything after the second underscore is omitted. I use filenames with underscores as 'stimulusname', so I currently have to replace the underscores in my filenames.
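If the parsing code splits the message on every underscore, limiting the split would fix this; a sketch, assuming messages of the hypothetical form 'onset_<stimulusname>':

```python
msg = "onset_my_stimulus_file_01"

# Splitting on every underscore loses everything after the second token;
# a maxsplit of 1 separates the prefix from the full stimulus name:
prefix, stimulus_name = msg.split("_", 1)
```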

keypress causing calibration error

Hi,
We have noticed a small issue while implementing Titta's calibration in a PsychoPy Builder experiment. The calibration function (myTobii.calibrate) skips the face adjustment screen if there is an event before the calibration that waits for a 'space' keypress (for example: keyboard.Keyboard().waitKeys(keyList=['space'])).
The same happens if space is pressed by mistake before the calibration function starts.
If other keypresses are waited for, and pressed, the calibration works well. FYI, the keyboard is using the ptb backend.
(System specifications: Python 3.6 running on MacBook Air (13-inch, 2017), OS.X. 10.14.6. Eye-tracker: Tobii Pro Nano)

I also have a question regarding modifications of the calibration procedure. To better attract attention (the experiment is for infants), I'd like to adapt Titta's calibration to play custom-made movies and also play a sound when they are displayed. But before starting to dig into this, I was wondering whether such a thing may already have been done by someone else (I've seen something similar was done for MATLAB: dcnieho/Titta#14).

Thanks in advance.
Best,
Amanda

TalkToProLab - Issue with External Presenter Communication

Hey everyone,

We have been trying to get the demo experiment read_me_TalkToProLab.py running. Before I continue, I want to apologize for the potential lack of information: we currently have no access to a Tobii Pro Lab version that supports creating an external presenter project. We did have access, however, and will hopefully have access again soon. Now here is the issue:

After opening Tobii Pro Lab and creating an external presenter project, the execution of read_me_TalkToProLab.py crashes with a failed assertion when calling ttl.get_state() in line 111.

As you can imagine, everything that comes before works as intended: we can create a participant and several AOIs that are also visible in the Tobii Pro Lab project after the incomplete execution. The assertion is probably triggered by the response to the message get_state() sends in TalkToProLab.py, line 533 et seqq.:

response = self.send_message(self.external_presenter_address,
                             {"operation": "GetState"})

assert response['status_code'] == 0, response

However, at this point in time, the demo script has already sent a message when calling ttl.add_participant() in line 59.

The only remarkable difference between the two calls to ttl.send_message() is the address of the receiving web socket: when adding a participant, the project_address is used; when requesting the state, the external_presenter_address is used.

Could there be an issue with the external_presenter_address, or why does the Tobii Pro Lab web socket report the unmet status?

Thanks for your help!

data output columns in titta

Hi, I have a request regarding the data columns from the Tobii Pro Nano.
It is unclear what the current list of output columns from Tobii is. They are only defined in Titta's Tobii_dummy.py file, and they differ between the previous and the current versions; I list both versions below.
Given that we want to give instant feedback about the pupil position and validity during the experiment, we have to define the columns from which the data is taken. So far I have tried to define the columns using the list from the current (and the old) version, but it doesn't seem to give me the right data. Is it possible to obtain the correct version of the data output columns, and where should we search for them?
Thanks in advance,
Amanda

Current version:
[
'device_time_stamp', 'system_time_stamp',
'left_gaze_point_on_display_area_x',
'left_gaze_point_on_display_area_y',
'left_gaze_point_in_user_coordinates_x',
'left_gaze_point_in_user_coordinates_y',
'left_gaze_point_in_user_coordinates_z', 'left_gaze_point_valid',
'left_gaze_point_available', 'left_pupil_diameter', 'left_pupil_valid',
'left_pupil_available', 'left_gaze_origin_in_user_coordinates_x',
'left_gaze_origin_in_user_coordinates_y',
'left_gaze_origin_in_user_coordinates_z',
'left_gaze_origin_in_track_box_coordinates_x',
'left_gaze_origin_in_track_box_coordinates_y',
'left_gaze_origin_in_track_box_coordinates_z', 'left_gaze_origin_valid',
'left_gaze_origin_available', 'left_eye_openness_diameter',
'left_eye_openness_valid', 'left_eye_openness_available',
'right_gaze_point_on_display_area_x',
'right_gaze_point_on_display_area_y',
'right_gaze_point_in_user_coordinates_x',
'right_gaze_point_in_user_coordinates_y',
'right_gaze_point_in_user_coordinates_z', 'right_gaze_point_valid',
'right_gaze_point_available', 'right_pupil_diameter',
'right_pupil_valid', 'right_pupil_available',
'right_gaze_origin_in_user_coordinates_x',
'right_gaze_origin_in_user_coordinates_y',
'right_gaze_origin_in_user_coordinates_z',
'right_gaze_origin_in_track_box_coordinates_x',
'right_gaze_origin_in_track_box_coordinates_y',
'right_gaze_origin_in_track_box_coordinates_z',
'right_gaze_origin_valid', 'right_gaze_origin_available',
'right_eye_openness_diameter', 'right_eye_openness_valid',
'right_eye_openness_available'
]

Previous version:
[
'device_time_stamp',
'system_time_stamp',
'left_gaze_point_on_display_area_x',
'left_gaze_point_on_display_area_y',
'left_gaze_point_in_user_coordinate_system_x',
'left_gaze_point_in_user_coordinate_system_y',
'left_gaze_point_in_user_coordinate_system_z',
'left_gaze_origin_in_trackbox_coordinate_system_x',
'left_gaze_origin_in_trackbox_coordinate_system_y',
'left_gaze_origin_in_trackbox_coordinate_system_z',
'left_gaze_origin_in_user_coordinate_system_x',
'left_gaze_origin_in_user_coordinate_system_y',
'left_gaze_origin_in_user_coordinate_system_z',
'left_pupil_diameter',
'left_pupil_validity',
'left_gaze_origin_validity',
'left_gaze_point_validity',
'right_gaze_point_on_display_area_x',
'right_gaze_point_on_display_area_y',
'right_gaze_point_in_user_coordinate_system_x',
'right_gaze_point_in_user_coordinate_system_y',
'right_gaze_point_in_user_coordinate_system_z',
'right_gaze_origin_in_trackbox_coordinate_system_x',
'right_gaze_origin_in_trackbox_coordinate_system_y',
'right_gaze_origin_in_trackbox_coordinate_system_z',
'right_gaze_origin_in_user_coordinate_system_x',
'right_gaze_origin_in_user_coordinate_system_y',
'right_gaze_origin_in_user_coordinate_system_z',
'right_pupil_diameter',
'right_pupil_validity',
'right_gaze_origin_validity',
'right_gaze_point_validity'
]

Stop and Stop Recording during Experiment

There does not appear to be any way to start and stop recording multiple times during the experiment and save the resulting data.

tracker.stop_recording(gaze_data=True)
tracker.save_data()

tracker.start_recording()

Am I missing something?

Problem with function save_data()

Hi! I'm working on a project with the eye tracker Tobii pro spectrum.
I'm having some problems with the save_data() function in Tobii.py. In the code I'm implementing for my experiment I'm supposed to save the data of each trial (which lasts 10 seconds). Putting the function in this loop, it saves the data at the first iteration, but crashes right after with the error I'm attaching. I wanted to ask if this could be a problem with the function, because I saw in the Tobii.py file that it also takes the calibration data into account, so it could be that these are missing when iterating. Or maybe it is another problem that escapes me at the moment. Let me know, thank you very much.

Ready to record data for: texture_1_21_1. Press 'Enter' to start.
Recording data for: texture_1_21_1
Took 0.0668187141418457 s to save the data
Iteration completed: texture_1_21_1
Ready to record data for: texture_1_1_1. Press 'Enter' to start.
Recording data for: texture_1_1_1
Traceback (most recent call last):
  File ~\anaconda3\Lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
    exec(code, globals, locals)
  File c:\users\administrator\desktop\titta-master\titta-master\demo_experiment\texturexptry.py:452
    main()
  File c:\users\administrator\desktop\titta-master\titta-master\demo_experiment\texturexptry.py:408 in main
    tracker.save_data()
  File ~\anaconda3\Lib\site-packages\titta\Tobii.py:1865 in save_data
    d['level'] = [t.name for t in d['level']]
UnboundLocalError: cannot access local variable 'd' where it is not associated with a value

IndexError

Hi,
Program tested: extract_trial_data_and_detect_fixation.py from the 'resources' folder.
I tried running it on 'testfile.pkl' from the same folder and get an error:

  line 31, in extract_trial_data
    start_idx = np.where(df_msg.msg == msg_onset)[0][0]
IndexError: index 0 is out of bounds for axis 0 with size 0

I also tried with another .pkl file created after running the antisaccade task demo and have the same error.
Thanks for help.
Regards,
Christophe.
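For what it's worth, the IndexError means np.where found no row in df_msg matching the expected onset message, so indexing [0][0] into an empty result fails. A defensive sketch (message names hypothetical) that skips such trials instead of crashing:

```python
import numpy as np
import pandas as pd

# Stand-in for the message frame; the onset message we look for is absent:
df_msg = pd.DataFrame({"msg": ["onset_img1", "offset_img1"]})
msg_onset = "onset_img2"  # hypothetical; not present in df_msg

hits = np.where(df_msg.msg == msg_onset)[0]
if hits.size == 0:
    start_idx = None  # message missing: skip this trial rather than crash
else:
    start_idx = hits[0]
```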

General question about recording eye tracking data while using a software

Hi,
I recently started working with Titta and want to record ET data while using a specific software.
From the demos I was under the impression that Titta needs a PsychoPy visual.Window in order to record data. My first guess was to create a transparent window in PsychoPy and use that as an overlay, but I could not find a way to change the alpha value of a window.
If it is possible to record data without a visual.Window, how can I safely close the calibration window and start recording the screen?

Setting up the viewing distance

Hey,
I haven't worked with PsychoPy before and wanted to change the viewing distance. However, changing it with, e.g., mon.setDistance(70) did not change the distance during the calibration process. Am I missing something here?
Thanks in advance,
Béla

Issue with Inaccurate File Type Detection

The current implementation for file type detection in the code is not robust, leading to potential errors under certain conditions. Specifically, the following code segment may result in incorrect file type determination:

file_ext = media_name.split('.')[1]

The issue arises when there are multiple dots (.) in the file name or path. For instance, in the case of C:\Users\MDY\AppData\Local\Temp\tmp435q8bsg.opensesame_pool\test.jpeg, the file_ext variable will contain opensesame_pool\test as the supposed file extension, leading to a "File type not supported" error. This is particularly problematic when developing the Titta plugin for OpenSesame.

The recommended fix is to replace file_ext = media_name.split('.')[1] with file_ext = media_name.split('.')[-1]. This modification ensures the correct extraction of the file extension, enhancing the code's robustness. I have already submitted a Pull Request addressing this issue and would appreciate your consideration for acceptance.
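The standard-library os.path.splitext is an even more robust alternative, since it also handles file names without any dot; a quick comparison on a shortened hypothetical path:

```python
import os

path = r"C:\Users\x\Temp\tmp.opensesame_pool\test.jpeg"

naive = path.split('.')[1]      # wrong piece when the path has extra dots
fixed = path.split('.')[-1]     # the proposed fix: take the last piece
robust = os.path.splitext(path)[1].lstrip('.')  # stdlib alternative
```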

send_custom_event demo

Hi Marcus, thanks for your very useful package. I have a question about how to send a custom event when using TalkToProLab. I want to send my own custom events (e.g., trial began, response), but I do not know how to use ttl.send_custom_event, and I did not find any demos showing how to use it. I tried ttl.send_custom_event(rec['recording_id'], str(t_offset)), but it did not work. Could you provide some Builder demos of how to use the ttl.send_custom_event function?

Can't get setup.py to run on Py 3.8

At the moment, under Python 3.8 the package refuses to install with

Traceback (most recent call last):
  File "setup.py", line 13, in <module>
    long_description=open('README.md').read(),
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf6 in position 401: invalid start byte

It seems not to like the ö in Nyström.
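The decode error suggests setup.py opens README.md with the platform-default encoding, which is not UTF-8 everywhere. A sketch of the usual fix, demonstrated on a stand-in file containing the offending character:

```python
import tempfile
from pathlib import Path

# Stand-in README with the non-ASCII character that broke the install:
readme = Path(tempfile.mkdtemp()) / "README.md"
readme.write_bytes("Marcus Nyström".encode("utf-8"))

# The fix in setup.py would be to pass an explicit encoding instead of
# relying on the platform default, e.g.:
#   long_description=open('README.md', encoding='utf-8').read()
long_description = readme.read_text(encoding="utf-8")
```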

comma missing in Tobii.py

Hi,
just wanted to let you know there is a comma missing in the self.header list in the Tobii.py file, line 145.

I also have a few questions regarding Tobii_dummy.py:

  1. The output columns defined in sample_list (Tobii_dummy.py) and self.header (Tobii.py) do not correspond, and the lists have different names. Is there a particular reason for this?
  2. sample_list is defined at the beginning of the script, while in Tobii.py, self.header is defined within the class myTobii. Because of this, the output text file columns in an experimental script must be defined manually, hence any change in the titta package may cause errors when compiling the output text file. Can this be amended as well?

Thanks in advance,
Amanda

Index 1 is out of bounds for axis 1 with size 1 during calibration

Hello, I am trying to use Titta with PsychoPy.

It seems to connect successfully to Tobii Pro Lab, but while executing the test read_me_TalkToProLab I get an error during the calibration phase. I have provided a YouTube link showing when the error occurs. I am using a Tobii Pro Nano calibrated with Tobii Pro Eye Tracker Manager.

It seems to produce two different errors, one when the application closes:

"Index 1 is out of bounds for axis 1 with size 1"

And the other one when I stop the python execution :

"AssertionError: {'reason': 'Exclusive connection to service already established. Only one allowed at a time. Please disconnect. ', 'status_code': 104, 'operation': 'StopRecording'}

link to the video :
https://youtu.be/4yjdKfx2ND0

I have tried debugging it myself but without success, so I hope someone can tell me what I am doing wrong.

Thank you for your time.

Illegal character in LICENSE.md

When building a deb from the wheel with wheel2deb on debian I get an error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 621: invalid start byte

The problem is the presence of illegal characters in LICENSE.md

With RC7 I only had to replace a couple of "SPA" characters with the MINUS sign and then it worked, but with RC8 I also found "STS" and "CCH" characters. Replacing them did not solve the issue this time, so I suspect there are more hidden illegal characters.

I then created a new source package with:

python3 setup.py sdist

With this source package I could then successfully build a deb with stdeb.

Titta in OpenSesame has smaller and more centered calibration dots

I took the code from the demo experiment (init, start_recording, send_message, save_data, helpers.MyDot2(win)) and implemented it as separate inline scripts in an OpenSesame experiment with PsychoPy as backend. I omitted the creation of a window because OpenSesame creates its own window. Everything worked nicely, but when I compared the calibration output from the OpenSesame experiment with the demo experiment, I found the calibration dots are smaller and located closer to the center than in the vanilla experiment. OpenSesame was set to 1920x1080 and the pictures also had the same resolution. I suspect OpenSesame creates windows a bit differently than the code in the demo.

External Presenter Error: Exclusive connection to service already established. Only one allowed at a time. Please disconnect

Hi, I'm unfamiliar with websocket protocols so I'm not sure whom to ask about this problem I'm facing. ttl.get_state() in read_me_TalkToProLab.py gives me an error which I'm able to reproduce using the code below. Does anyone have any idea why this might be happening?

>>> from websocket import create_connection
>>> import json
>>> address = create_connection('ws://localhost:8080/record/externalpresenter?client_id=RemoteClient')
>>> req = {"operation": "GetState"}
>>> msg_dict = json.dumps(req)      
>>> address.send(msg_dict)
31
>>> address.recv()
'{"reason":"Exclusive connection to service already established. Only one allowed at a time. Please disconnect.","status_code":104,"operation":"GetState"}'

Also, if someone could give me an intuition as to how these websocket links are obtained, that would be very helpful. I found that a connection to the link above is not made when Tobii Pro Lab is not running, but I was unable to find websocket links mentioned anywhere in their documentation. Thank you so much for your help in advance!

removal of dependency on demo_experiment in plot_scanpath

It would be nice to have no dependency on demo_experiment in plot_scanpath. plot_scanpath could become the 5th step in the readme by putting plot_scanpath.py in the 'demo_analyses' folder, which would get 'allfixations.txt' from the 'output' folder and the images from a 'stim' or 'pictures' folder (where you have to add the pictures manually). And last, maybe a separate 'data' folder for the .h5 and .json files.

send_event

Hello, I'm a newbie and I would like to know how to upload events using the function send_custom_event() when multiple stimuli are presented at once.
Is there a way to separate the data presented with the stimulus at the time of data collection from the data presented at the time of fixation?

Thanks in advance!

Tobii and Titta Timing

Hello,

I am using an X2-30 Tobii tracker with Titta. Below is a data frame that includes the system_time_stamp and device_time_stamp columns provided by Titta. How are these timestamps calculated? The tracker should be sampling roughly every 33 ms (30 Hz), and that is clearly not the case here. The time1 variable was calculated by dividing the timestamp by 1e6.

# start time at fixation start for each trial:
# subtract the first time point from each stamp, convert microseconds to seconds
time1 = (system_time_stamp - system_time_stamp[1]) / 1000000
 participant trial system_time_stamp device_time_stamp time1
   <lgl>       <chr>           <int64>           <int64> <dbl>
 1 NA          0         2163331982574  1173924160796621 0    
 2 NA          0         2163332365089  1173924161179154 0.383
 3 NA          0         2163332395739  1173924161209815 0.413
 4 NA          0         2163332555300  1173924161369367 0.573
 5 NA          0         2163332719376  1173924161533440 0.737
 6 NA          0         2163332758796  1173924161572861 0.776
 7 NA          0         2163332804326  1173924161618357 0.822
 8 NA          0         2163332860401  1173924161674401 0.878
 9 NA          0         2163332899621  1173924161713607 0.917
10 NA          0         2163333111208  1173924161925161 1.13 
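Both timestamp columns are in microseconds, so dividing by 1e6 does give seconds. A sketch computing the inter-sample intervals from the first few system_time_stamp values in the table above; for a 30 Hz tracker they should cluster near 1/30 s ≈ 0.033 s, and the much larger gaps suggest dropped or undelivered samples rather than a timestamp problem:

```python
import numpy as np

# First few system_time_stamp values from the table above (microseconds):
ts = np.array([2163331982574, 2163332365089, 2163332395739, 2163332555300])

intervals_s = np.diff(ts) / 1e6  # microsecond deltas -> seconds
# one interval is ~0.031 s (close to 1/30 s); the others are far larger
```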


Cannot set frequency

Hello!

I'm trying to set up Titta and am running into a similar issue as was addressed in this Titta for Matlab post (dcnieho/Titta#34).

We are using Python version 3.6.6 and PsychoPy version 2021.2.3. We have a Tobii Pro Spectrum and are just trying to run your demo experiments. We encounter an error with the default frequency (see code below) but attempting to overwrite the default with the code settings.freq = 150 doesn't fix the error.

Any advice you can give would be greatly appreciated!

C:\Program Files\PsychoPy\Lib\site-packages\Titta-master\demos>python -m read_me.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
[150.0, 120.0, 60.0] 600
Traceback (most recent call last):
  File "C:\Program Files\PsychoPy\lib\runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Program Files\PsychoPy\lib\runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "C:\Program Files\PsychoPy\Lib\site-packages\Titta-master\demos\read_me.py", line 39, in <module>
    tracker.init()
  File "C:\Program Files\PsychoPy\lib\site-packages\titta\Tobii.py", line 185, in init
    self.set_sample_rate(self.settings.SAMPLING_RATE)
  File "C:\Program Files\PsychoPy\lib\site-packages\titta\Tobii.py", line 2056, in set_sample_rate
    assert np.any([int(i) == Fs for i in self.tracker.get_all_gaze_output_frequencies()]), "Supported frequencies are: {}".format(self.tracker.get_all_gaze_output_frequencies())
AssertionError: Supported frequencies are: (150.0, 120.0, 60.0)
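From the traceback, init() reads settings.SAMPLING_RATE, so assigning settings.freq has no effect. A sketch of the likely fix (SimpleNamespace stands in for the settings object Titta returns; the attribute name is taken from the traceback, not from Titta's documentation):

```python
from types import SimpleNamespace

# Stand-in for the settings object (e.g. from Titta.get_defaults(...)):
settings = SimpleNamespace(SAMPLING_RATE=600)  # the default that failed

supported = (150.0, 120.0, 60.0)  # from the assertion message
settings.SAMPLING_RATE = 150      # set SAMPLING_RATE, not settings.freq
assert any(int(f) == settings.SAMPLING_RATE for f in supported)
```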
