
labstreaminglayer / app-pupillabs


Implementations of LSL relays for Pupil Labs eye trackers

Home Page: https://pupil-labs.com

License: GNU Lesser General Public License v3.0

Python 100.00%

app-pupillabs's Introduction


App-PupilLabs

This repository contains various integrations of the lab streaming layer framework with Pupil Labs products.

Pupil Invisible/Neon LSL Relay

To receive and save Pupil Invisible and/or Neon data in real time via LSL, check out the dedicated Pupil Labs LSL Relay application: https://pupil-labs-lsl-relay.readthedocs.io/en/stable/

The legacy Pupil Invisible LSL Relay can be found here.

Pupil Capture Plugins

app-pupillabs's People

Contributors

cboulay, gmierz, n-m-t, papr, pre-commit-ci[bot], romanroibu, roshkins, tstenner


app-pupillabs's Issues

Failed to load 'pupil_lsl_relay' 'invalid syntax (pupil_lsl_relay.py, line 7)'

Hi. I'm trying to load the pupil_lsl_relay plugin, but I get an error when Pupil Capture tries to load it.
I have pylsl in the correct folder and that seems to load. Can anyone guide me to a solution?
Log file:

2019-06-17 14:19:08,028 - MainProcess - [DEBUG] root: Unknown command-line arguments: []
2019-06-17 14:19:08,038 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
2019-06-17 14:19:08,412 - world - [INFO] launchables.world: Application Version: 1.12.17
2019-06-17 14:19:08,412 - world - [INFO] launchables.world: System Info: User: Tue Hvass Petersen, Platform: Windows, Machine: LAPTOP-BR74BV5A, Release: 10, Version: 10.0.17763
2019-06-17 14:19:08,903 - world - [DEBUG] plugin: Scanning: pupil_lsl_relay.py
2019-06-17 14:19:08,904 - world - [WARNING] plugin: Failed to load 'pupil_lsl_relay'. Reason: 'invalid syntax (pupil_lsl_relay.py, line 7)'
2019-06-17 14:19:08,904 - world - [DEBUG] plugin: Scanning: pylsl
2019-06-17 14:19:08,911 - world - [DEBUG] plugin: Imported: <module 'pylsl' from 'C:\Users\Tue Hvass Petersen\pupil_capture_settings\plugins\pylsl\__init__.py'>
2019-06-17 14:19:09,018 - world - [DEBUG] plugin: Loading plugin: UVC_Source with settings {'frame_size': [1280, 720], 'frame_rate': 30, 'check_stripes': True, 'exposure_mode': 'manual', 'name': 'Pupil Cam1 ID2', 'uvc_controls': {'Auto Exposure Mode': 8, 'Auto Exposure Priority': 1, 'Absolute Exposure Time': 157, 'Backlight Compensation': 1, 'Brightness': 0, 'Contrast': 32, 'Gain': 0, 'Power Line frequency': 1, 'Hue': 0, 'Saturation': 60, 'Sharpness': 2, 'Gamma': 100, 'White Balance temperature': 4600, 'White Balance temperature,Auto': 1}}
2019-06-17 14:19:09,163 - world - [DEBUG] uvc: Found device that mached uid:'1:14'
2019-06-17 14:19:09,181 - world - [DEBUG] uvc: Device '1:14' opended.

Surface gaze format

@cboulay I am currently trying to extend the plugin to support surface-mapped gaze, but the one-channel_format-per-outlet rule is making life difficult for me.

While in a perfect world, every surface would have its own outlet, this is not feasible for the LSL Relay plugin without intertwining it deeply with Capture's Surface Tracker plugin. I thought of building a flexible outlet following the tidy data approach:

| LSL timestamp | surface name | norm_pos_x | norm_pos_y | confidence | is on surface |
| --- | --- | --- | --- | --- | --- |
| t | Screen left | 0.5 | 0.5 | 1.0 | True |
| t | Screen right | -1.5 | 0.5 | 1.0 | False |
| t+1 | Screen left | 1.5 | 0.5 | 1.0 | False |
| t+1 | Screen right | 0.5 | 0.5 | 1.0 | True |

As mentioned above, I am not able to mix the string column with the double columns (I think). If I understood the documentation correctly, one should build time-synced outlets instead. @cboulay Would the following outlet layouts conform to LSL best practices? If not, what would you recommend? I am only semi-happy with the solution below.

  1. Surface Name outlet (type: string)

| LSL timestamp | Surface ID | Surface Name |
| --- | --- | --- |
| t | 1.0 | Surface left |
| t | 2.0 | Surface right |
| t+1 | 1.0 | Surface left |
| t+1 | 2.0 | Surface right |

Surface ID would correspond to the string representation of the surface id. Surface ids would be incrementing integers in the order of appearance in the processed data.

  2. Surface Gaze outlet (type: double)

| LSL timestamp | surface id | norm_pos_x | norm_pos_y | confidence | is on surface |
| --- | --- | --- | --- | --- | --- |
| t | 1.0 | 0.5 | 0.5 | 1.0 | 1.0 |
| t | 2.0 | -1.5 | 0.5 | 1.0 | 0.0 |
| t+1 | 1.0 | 1.5 | 0.5 | 1.0 | 0.0 |
| t+1 | 2.0 | 0.5 | 0.5 | 1.0 | 1.0 |
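
To make the proposal concrete, here is a minimal pylsl sketch of the two synchronized outlets. The stream names, source IDs, and the helper function are made up for illustration; only the channel layouts follow the tables above.

```python
from pylsl import StreamInfo, StreamOutlet, local_clock, cf_string, cf_double64

# Outlet 1: surface id -> surface name mapping (string channels)
name_info = StreamInfo("pupil_surface_names", "SurfaceNames", 2, 0, cf_string, "surface-names-demo")
name_outlet = StreamOutlet(name_info)

# Outlet 2: surface-mapped gaze (double channels):
# [surface id, norm_pos_x, norm_pos_y, confidence, is on surface]
gaze_info = StreamInfo("pupil_surface_gaze", "SurfaceGaze", 5, 0, cf_double64, "surface-gaze-demo")
gaze_outlet = StreamOutlet(gaze_info)

def push_surface_gaze(surface_id, surface_name, norm_x, norm_y, confidence, on_surface):
    # One shared LSL timestamp keeps the two outlets time-synced
    ts = local_clock()
    name_outlet.push_sample([str(float(surface_id)), surface_name], ts)
    gaze_outlet.push_sample(
        [float(surface_id), norm_x, norm_y, confidence, 1.0 if on_surface else 0.0], ts
    )

# e.g. two surfaces mapped at the same instant
push_surface_gaze(1, "Surface left", 0.5, 0.5, 1.0, True)
push_surface_gaze(2, "Surface right", -1.5, 0.5, 1.0, False)
```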

Re-adding old pull requests

Hello @cboulay, would you be able to add the old pull requests that you stored for this app? You mentioned that you made a diff of them before the repo was restructured. If it's not possible, that's no problem.

Thanks!

Syncing two eyes

Currently, each eye is an independent LSL stream with an irregular sampling rate and two timestamps. Each eye has (a) timestamps as a channel, which I assume come from the hardware or camera, and (b) LSL timestamps added during relay.
A challenge is how to sync the two eyes, as we now have two sets of timestamps. I did not really solve it, but implemented an approach by tweaking the plugin (see https://github.com/translationalneurosurgery/pupil-labs_to_lsl). Essentially, I use a buffer for both eyes, where each eye stores its last received sample as a payload; whenever either eye receives a new payload, I update the buffer and send the buffer (which stores both eyes) via LSL.
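
For reference, a minimal sketch of this buffering idea, assuming pylsl; the stream name and the per-eye channel layout are only illustrative.

```python
from pylsl import StreamInfo, StreamOutlet, local_clock, cf_double64

# Combined outlet: [eye0 diameter, eye0 confidence, eye1 diameter, eye1 confidence]
info = StreamInfo("pupil_both_eyes", "Pupil", 4, 0, cf_double64, "both-eyes-demo")
outlet = StreamOutlet(info)

# Buffer keeps the most recent payload received from each eye
buffer = {0: [float("nan")] * 2, 1: [float("nan")] * 2}

def on_pupil_datum(eye_id, diameter, confidence):
    # Whenever either eye delivers a new payload, update its slot in the buffer...
    buffer[eye_id] = [diameter, confidence]
    # ...and push the combined buffer (both eyes) with a single LSL timestamp
    outlet.push_sample(buffer[0] + buffer[1], local_clock())
```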

I evaluated it by checking EOG together with pupil diameter changes caused by blinking, and it seems to be sufficiently accurate for my purposes. But this is certainly not the best implementation.

As I understand it, with ZMQ it is not really guaranteed that the camera timestamps across eye 0 and eye 1 increase monotonically, i.e. I could receive several newer samples from eye 0 before I receive the oldest one from eye 1. So maybe there is another approach that links both eyes at a lower level, or a more advanced buffering approach based on the camera timestamps, before relaying with LSL. However, this latter approach might introduce latency.

Add version tag of pylsl/xdf data format.

I've been using v1.0 of this code and it seems like v2.0 is quite different. I think it would be good to add some sort of "version" number to the meta-data, so that future changes are easier to detect. One way to make this available to LabRecorder is to add a 'meta' channel and send a single data point when a consumer is detected.
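
Alternatively, a version tag could be embedded in the stream meta-data so that it ends up in the XDF header. A minimal sketch assuming pylsl, with illustrative stream, channel-count, and key names:

```python
from pylsl import StreamInfo, StreamOutlet, cf_float32

info = StreamInfo("pupil_capture", "Gaze", 22, 0, cf_float32, "pupil-capture-demo")
# Recorded by LabRecorder into the XDF stream header alongside the rest of desc()
info.desc().append_child_value("lsl_relay_version", "2.0")
outlet = StreamOutlet(info)
```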

right plugin for CORE

Does the Pupil Invisible plugin work with Pupil Core devices?
Or, for Core, do I have to resort to the Pupil Capture plugin only?
Is there any other way to get direct access and LSL streaming with Core?

Thanks

overriding timestamps is usually incorrect

@papr, I know you are aware of this problem, but I think the current method is incorrect: if something else affects Capture's timebase (e.g. hmd-eyes), then the Pupil LSL stream becomes useless because it can no longer be synchronized with other streams.

I think the LSL team needs to do a better job of communicating how 'sacred' LSL timestamps are. The only case where overriding LSL timestamps is really warranted is when the outlet reads the LSL clock at the moment the samples are acquired from the device; if processing after acquisition adds delay, it is then better to use the LSL-time-at-acquisition instead of the default LSL-time-at-push.

If you want to make use of timestamps provided directly by the device, or by some other clock, the better way is to add a channel to your stream that contains those timestamps.
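
To make that suggestion concrete, here is a minimal sketch of carrying the device clock as an extra channel while keeping the default LSL timestamping, assuming pylsl; the channel layout and datum keys are illustrative:

```python
from pylsl import StreamInfo, StreamOutlet, local_clock, cf_double64

# Channels: [device timestamp, diameter, confidence]
info = StreamInfo("pupil_relay_demo", "Pupil", 3, 0, cf_double64, "timestamp-demo")
outlet = StreamOutlet(info)

def relay(datum):
    # Take the LSL clock at acquisition time instead of overriding the LSL timestamp later
    lsl_time = local_clock()
    # The device/Capture timebase goes into its own channel for offline alignment
    sample = [datum["timestamp"], datum["diameter"], datum["confidence"]]
    outlet.push_sample(sample, lsl_time)
```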

Camera capture info needed in header

So that the XDF storage is more complete, it would be good if, during construct_streaminfo, we also added capture device info to the header, especially frame_size and intrinsics.

@papr, obviously you know this better than anyone, but if you don't have time to do this then please point me to where I can find an example of how to access capture details from a plugin, if such a thing exists. Is it just g_pool.capture.frame_size and g_pool.capture.intrinsics?
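
If it helps, here is a rough sketch of what this could look like, assuming pylsl and assuming g_pool.capture really does expose frame_size and intrinsics as guessed above; the stream parameters and attribute names are unverified:

```python
from pylsl import StreamInfo, cf_float32

def construct_streaminfo(g_pool):
    info = StreamInfo("pupil_capture", "Gaze", 22, 0, cf_float32, "pupil-capture-demo")
    # Add capture device details to the stream header so they end up in the XDF file
    capture = info.desc().append_child("capture")
    width, height = g_pool.capture.frame_size  # assumed attribute
    capture.append_child_value("frame_width", str(width))
    capture.append_child_value("frame_height", str(height))
    capture.append_child_value("intrinsics", str(g_pool.capture.intrinsics))  # assumed attribute
    return info
```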

Issues with monocular Pupil Core device, stream time series empty

Hello,

I have two Pupil Core devices, and one of them is a right-eye-only (monocular) version. I have noticed that using the Pupil Core relay and Pupil Capture with this device specifically results in an empty time series (sample size 0), and the issue persists regardless of the Pupil Capture host, the Android phone, or the LSL LabRecorder client.

The README suggests that, in the case of a missing eye camera (a monocular device, in my case), the data should contain NaN values for the second eye. Maybe I am misconfiguring something, but I'm at a loss.

Hope there is a solution, best regards.
