
FlyVR

FlyVR is a framework for the design and control of multisensory virtual reality systems for neuroscientists. It is written in Python, with a modular design that allows the control of open- and closed-loop experiments with one or more sensory modalities. In its current implementation, FlyVR uses FicTrac (see below) to track the path of a fly walking on an air-suspended ball. A projector and a sound card deliver visual and auditory stimuli, and other analog outputs (through NI-DAQ or Phidgets devices) control further stimuli such as optogenetic stimulation, or triggers for synchronization (e.g., with ScanImage).

FlyVR is currently under development. Please see below for credits and license info. If you would like to contribute to testing the code, please contact David Deutsch ([email protected]), a postdoc in the Murthy Lab at the Princeton Neuroscience Institute.

  • For a walk-through of FlyVR's design and what experiments are possible, see Design
  • FlyVR's data format is described here
    • this describes the format and conventions of the H5 files and how to synchronize information between them (a minimal reading sketch follows this list)
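
As a hedged illustration of working with these files, the sketch below reads a TOC (table of contents) file and prints when each playlist item started relative to the FicTrac frame counter. It assumes the TOC is a YAML list of mappings with the keys shown in the excerpt near the end of this README (item, backend, fictrac_frame_num, etc.); the filename is hypothetical, and the linked data-format document is authoritative.

# Hedged sketch: align playlist items with FicTrac frames via the TOC file.
# The keys used here match the TOC excerpt shown later in this README;
# the filename is hypothetical - see the data format document for specifics.
import yaml

with open('20200101_1200.toc.yml') as f:
    toc = yaml.safe_load(f)

for entry in toc:
    print('%s (%s backend): fictrac frame %d, daq samples written %d' % (
        entry['item'], entry['backend'],
        entry['fictrac_frame_num'], entry['daq_output_num_samples_written']))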

Usage

usage: flyvr [-h] [-c CONFIG_FILE] [-v] [--attenuation_file ATTENUATION_FILE]
             [-e EXPERIMENT_FILE] [-p PLAYLIST_FILE]
             [--screen_calibration SCREEN_CALIBRATION] [--use_RSE]
             [--remote_2P_disable]
             [--remote_start_2P_channel REMOTE_START_2P_CHANNEL]
             [--remote_stop_2P_channel REMOTE_STOP_2P_CHANNEL]
             [--remote_next_2P_channel REMOTE_NEXT_2P_CHANNEL]
             [--scanimage_next_start_delay SCANIMAGE_NEXT_START_DELAY]
             [--remote_2P_next_disable] [--phidget_network]
             [--keepalive_video] [--keepalive_audio] [-l RECORD_FILE]
             [-f FICTRAC_CONFIG] [-m FICTRAC_CONSOLE_OUT] [--pgr_cam_disable]
             [--wait] [--delay DELAY] [--projector_disable]
             [--samplerate_daq SAMPLERATE_DAQ] [--print-defaults]

Args that start with '--' (e.g. -v) can also be set in a config file (specified
via -c). The config file uses YAML syntax and must represent a YAML 'mapping'
(for details, see http://learn.getgrav.org/advanced/yaml). If an arg is
specified in more than one place, then commandline values override config file
values which override defaults.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_FILE, --config CONFIG_FILE
                        config file path
  -v, --verbose         Verbose output.
  --attenuation_file ATTENUATION_FILE
                        A file specifying the attenuation function.
  -e EXPERIMENT_FILE, --experiment_file EXPERIMENT_FILE
                        A file defining the experiment (can be a python file
                        or a .yaml).
  -p PLAYLIST_FILE, --playlist_file PLAYLIST_FILE
                        A file defining the playlist, replaces any playlist
                        defined in the main configuration file
  --screen_calibration SCREEN_CALIBRATION
                        Where to find the (pre-computed) screen calibration
                        file.
  --use_RSE             Use RSE (as opposed to differential) denoising on AI
                        DAQ inputs.
  --remote_2P_disable   Disable remote start, stop, and next-file signaling
                        for the 2-photon imaging (if the phidget is not
                        detected, signaling is disabled with a warning).
  --remote_start_2P_channel REMOTE_START_2P_CHANNEL
                        The digital channel to send remote start signal for
                        2-photon imaging.
  --remote_stop_2P_channel REMOTE_STOP_2P_CHANNEL
                        The digital channel to send remote stop signal for
                        2-photon imaging.
  --remote_next_2P_channel REMOTE_NEXT_2P_CHANNEL
                        The digital channel to send remote next file signal
                        for 2-photon imaging.
  --scanimage_next_start_delay SCANIMAGE_NEXT_START_DELAY
                        The delay [ms] between next and start pulses when
                        signaling the 2-photon remote (<0 disables sending a
                        start after a next).
  --remote_2P_next_disable
                        Disable remote next (+start) signaling every stimulus
                        item. Just signal start and stop at the beginning and
                        end of an experiment.
  --phidget_network     connect to phidget over network protocol (required for
                        some motor-on-ball CL tests)
  --keepalive_video     Keep the video process running even if the initially
                        provided playlist contains no video items (such as if
                        you want to later play dynamic video items not
                        declared in the playlist).
  --keepalive_audio     Keep the audio process running even if the initially
                        provided playlist contains no audio items (such as if
                        you want to later play dynamic audio items not
                        declared in the playlist).
  -l RECORD_FILE, --record_file RECORD_FILE
                        File that stores output recorded on requested input
                        channels. Default is Ymd_HM_daq.h5, where Ymd_HM is
                        the current timestamp.
  -f FICTRAC_CONFIG, --fictrac_config FICTRAC_CONFIG
                        File that specifies FicTrac configuration information.
  -m FICTRAC_CONSOLE_OUT, --fictrac_console_out FICTRAC_CONSOLE_OUT
                        File to save FicTrac console output to.
  --pgr_cam_disable     Disable Point Grey Camera support in FicTrac.
  --wait                Wait for start signal before proceeding (default false
                        in single process backends, and always true in the
                        main launcher).
  --delay DELAY         Delay main startup by this many seconds. Negative
                        number means wait forever.
  --projector_disable   Do not setup projector in video backend.
  --samplerate_daq SAMPLERATE_DAQ
                        DAQ sample rate (advanced option, do not change)
  --print-defaults      Print default config values

TL;DR

  • FlyVR reads its configuration from the command line and one or more files
    • -c config.yml
      is the main configuration and can contain both the rig-specific configuration and the experiment playlist
      • Example rig configurations can be found in the configs directory.
    • -p playlist.yml
      contains only the definition of the stimulus playlist. Any other configuration is ignored.
      • Sample playlists can be found in the playlists directory
  • By separating the playlist from the rig-specific configuration (such as how the DAQ is wired), one can use the same playlists on multiple rigs.
  • It is, however, not necessary to have both a config.yml and a playlist.yml - they can be combined into a single config.yml if desired (a minimal sketch follows this list)
  • More information on the configuration possibilities can be found in the configuration section
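
A minimal sketch of such a combined config.yml (channel numbers and stimulus parameters are illustrative; they follow the schema of the full example near the end of this README):

configuration:
  analog_in_channels:
    0: 'Sound'              # record AI0 into the output h5 as 'Sound'
  analog_out_channels:
    0: 'Opto'               # drive the opto stimulus from AO0
  delay: 3
playlist:
  daq:
  - Opto1090-9-1:           # illustrative pulse item
      name: pulse
      amplitude_a: 9.0
      duration_a: 1000
      amplitude_b: 0.0
      duration_b: 2000
      pre_silence: 1000
      post_silence: 1000
      sample_rate: 10000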

If you are developing a stimulus playlist (video example, substitute for audio as appropriate)

  • copy an example playlist e.g. copy 'playlists/video1.yml' to 'myvideo.yml'
  • edit, test, and experiment on the playlist using the single launcher flyvr-video.exe --config myvideo.yml

When you have finished the playlist and experiment development and wish to run on a rig

  • create a rig-specific config file (see templates in configs/) with the electrical connections (analog_in_channels, remote_start_2P_channel, etc.)
  • copy the rig-specific config template into a new config file for your experiment on this rig, e.g. 'my_upstairs_video_experiment.yml'
  • copy the contents of your tested 'myvideo.yml' playlist into 'my_upstairs_video_experiment.yml'
  • flyvr --config my_upstairs_video_experiment.yml

When started, the flyvr program waits an additional --delay seconds after all backends are ready before starting the experiment (playing the first item on all playlists). If --delay is negative, flyvr waits until the start button is pressed in the GUI.
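
For example, to always wait for the GUI start button, a rig config could set a negative delay (equivalent to passing --delay -1 on the command line):

configuration:
  delay: -1    # negative: wait for the GUI start button before starting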

Installation

You need to be running an up-to-date Windows 10 installation.

This requires an NI DAQ (PCIe X Series). To begin, install NI-DAQmx 19.6.X (from https://www.ni.com/en-us/support/downloads/drivers/download.ni-daqmx.html#333268). The NI-DAQmx installer offers a number of options; you must install at least the following components:

  • NI-DAQmx driver
  • NI Measurement and Automation Explorer 'NI MAX'
    • sometimes called 'NI Runtime with Configuration Support' (MAX) if using the
      lightweight web installer
  • NI-DAQmx Support for C

This also requires ASIO drivers for your audio device / soundcard. If you are unsure whether you have dedicated ASIO drivers for your audio device (it is recommended that you do), install ASIO4ALL (http://www.asio4all.org/, tested with 2.14). When installing, ensure you choose to install the 'Offline Control Panel'.

Finally, Phidgets drivers are also required, for the arbitrary IO and ScanImage support. On Windows you should install

  • Phidget Control Panel (64-bit installer)
  • Network Phidget Support (optional)
    Note: On Windows 10 you might already have the required libraries. You can determine this by switching to the Network tab in the Phidgets control panel. If it is not available, you must install the Bonjour Print Services. If it is available, you do not need to do anything further.

flyvr

(Note: these installation instructions assume the 'official' python.org Python and built-in virtual environments, NOT conda/miniconda. If you wish to use conda/miniconda, you will need conda-specific commands to create and activate the environment.)

  • Install Python 3.7.X
  • Create a virtual environment (in the checkout dir, named env): C:\Path\To\python.exe -m venv env
    • if you installed python only to your user, the path is "C:\Users\XXX\AppData\Local\Programs\Python\Python37\python.exe"
  • Activate the virtual environment
    env\Scripts\activate.bat
  • ensure python packaging and build utilities are up to date
    python -m pip install -U pip "setuptools<58" wheel
  • install dependencies
    python -m pip install -r requirements.txt
  • Install flyvr
    • python -m pip install -e . to install in development mode (recommended)
    • python -m pip install . to install a release version
  • Run the tests
    python -m pytest
    • Note: by default, the tests will attempt to test the DAQ and soundcard. If you are running the tests on a computer without this hardware, or you wish to simply develop experimental logic or visual stimuli then you can skip these tests using
      • python -m pytest -m "not (use_soundcard or use_daq)"
        (skips both DAQ AND soundcard tests)
      • python -m pytest -m "not use_daq"
        (skips DAQ tests)
  • Test the whole software with sample data
    flyvr -c tests\sample_data\v2\john_rig.yml -e experiments\stop_after_10s.yml

fictrac

If you are using a Point Grey/FLIR camera, make sure the FlyCapture SDK is installed. Copy FlyCapture2_C.dll from the Point Grey directory (it is in the bin folder - for instance, C:\Program Files\Point Grey Research\FlyCapture2\bin64) and place it in your FicTrac directory. If it is named FlyCapture2_C_v100.dll, rename it to FlyCapture2_C.dll. I have included this version in the fictrac_calibration folder of the repo for now.

For closed loop, or general-purpose tracking, FicTrac needs to be installed. FlyVR requires FicTrac version 2.1.1+0.1.1 (https://github.com/murthylab/fictrac/releases/tag/v2.1.1%2B0.1.1), which includes the shared-memory support. Always download this exact release, as you might otherwise have an identically named old version.

For configuring FicTrac, a few files are needed:

  1. A TIFF file used as a mask to remove non-ball areas, bright spots, etc. (examples to be added). There is currently a MATLAB function that will help create this mask, available in Im2P_CreateNewMask.m. But you first need to capture a single frame to use as a reference point!
    Note that you probably want to reduce the resolution of the frame to minimize how much data needs to be passed around.

  2. A configuration file, currently FicTracPGR_ConfigMaster.txt.

  3. A calibration file, currently calibration-transform.dat. If this transform file is not present, a new one will be created by a user-guided process the first time you run FicTrac. If you want to update it, delete the file and run FicTrac again.

  4. To run FicTrac, run FicTrac FicTracPGR_ConfigMaster.txt, or FicTrac-PGR FicTracPGR_ConfigMaster.txt if you are using a Point Grey camera.

lightcrafter DLP (for visual stimulus)

FlyVR, and the flyvr-video.exe binary, by default attempt to automatically configure and show the visual stimulus on a DLP Lightcrafter configured in the appropriate mode. This assumes that the Lightcrafter software has been installed, and that the Lightcrafter is connected, powered on, and on the default 192.168.1.100 IP address.

For auto-configuration to work, the Lightcrafter software must be 'Disconnected' from the projector (or closed).

If you wish to show the visual stimulus on the desktop monitor (skipping the potential delay of trying to configure a non-connected Lightcrafter), you can pass --projector_disable.

Updating FlyVR

  • Update the source code
    git pull --ff-only origin master
  • Activate the conda or virtual environment
  • Re-install
    • python -m pip install -e . to install in development mode (recommended)
    • python -m pip install --upgrade . to install a release version
  • Run the tests
    python -m pytest

FlyVR Architecture

FlyVR is a multi-process application for multi-sensory virtual reality. The different processes are separated largely by the sensory modality they target; for example, there is a single process dedicated to the video stimulus, one for the audio stimulus, etc. The primary processes are (more explanation follows)

  • flyvr
    main application launcher, launches all other processes internally. usually all that needs to be run
  • flyvr-audio
    process which reads the audio playlist and plays audio signals via the soundcard. can also list available sound cards (flyvr-audio --list-devices).
    • you can also plot the audio playlist using --plot which will plot the audio timeseries
      flyvr-audio.exe --plot --config playlists\audio2.yml
    • you can convert single-channel audio/daq playlists into the new format with --convert-playlist, with the following caveats (see the sketch after this list):
      • playlists should first be converted to single-channel format; if it is a mixed or multi-channel v1 playlist, you need to make it single-channel for the backend (audio/daq) you care about. I.e. compare 'tests/sample_data/v1/IPI36_16pulses_randomTiming_SC.txt' with 'tests/sample_data/v1/IPI36_16pulses_randomTiming_SC_1channel.txt'; for the DAQ case, see 'tests/test_data/nivedita_vr1/opto_nivamasan_10sON90sOFF.txt'
      • if the playlist was a complicated mixed audio/opto playlist, then per the conversion requirement above you will convert the input into two old v1 playlists and call --convert-playlist twice
      • if your playlist included MATLAB stimuli, you should update the paths to the MATLAB .mat files in the converted playlist; by default, if a relative path or only a filename is given, the path is resolved relative to the playlist/config file
      • all converted playlists will be placed under an 'audio' playlist - this should be changed to daq if the playlist is actually for the DAQ opto outputs
  • flyvr-video
    process which reads the video playlist and displays the video stimulus on an attached lightcrafter projector (if connected); pass --projector_disable if you don't have a projector connected
  • flyvr-daq
    process which drives the NI DAQ for the purposes of
    • outputting the opto stimulus
    • recording the analog inputs
  • flyvr-fictrac
    process which launches the fictrac binary
    • flyvr-fictrac -f FicTracPGR_ConfigMaster.txt -m log.txt
  • flyvr-hwio
    process which drives the ScanImage signaling and is available for future expansion to other stimuli (odor, etc). It also has a few additional options to aid visual debugging
    • flyvr-hwio --debug_led 5
      • --debug_led
        port on which to flash an LED upon starting a new playlist item
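
Regarding the --convert-playlist caveats above: converted items are emitted under an audio playlist key, so a DAQ opto playlist is obtained by moving them under daq. A hedged sketch (the item name and parameters are illustrative, following the pulse schema shown near the end of this README):

playlist:
  daq:                      # was 'audio:' in the converted file
  - Opto10sON90sOFF:        # hypothetical converted item
      name: pulse
      amplitude_a: 9.0
      duration_a: 10000     # 10 s on
      amplitude_b: 0.0
      duration_b: 90000     # 90 s off
      sample_rate: 10000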

Similarly, the following secondary utilities are also available as separate processes, to aid debugging, development, testing, or observing experiments in progress

  • flyvr-fictrac-replay
    can replay a previously saved fictrac .h5 file in order to test, for example, experiment logic
  • flyvr-experiment
    allows running flyvr experiments (.yaml or .py) in order to test their logic and progression. often used in conjunction with flyvr-fictrac-replay
  • flyvr-gui
    launches the standalone GUI which shows FlyVR state (frame numbers, sample numbers, etc)
  • flyvr-print-state
    prints the current flyvr state to the console
  • flyvr-fictrac-plot
    shows an animated plot of the fictrac state (ball speed, direction, etc)
  • flyvr-ipc-send
    an internal utility for sending IPC messages to control other primary processes, e.g. (the complex escaping is necessary here in the windows shell)
    • flyvr-ipc-send.exe "{\"video_item\": {\"identifier\": \"v_loom_stim\"}}"
    • flyvr-ipc-send.exe "{\"audio_legacy\": \"sin\t10\t1\t0\t0\t0\t1\t650\"}"
    • flyvr-ipc-send.exe "{\"video_action\": \"play\"}"
  • flyvr-ipc-relay
    (advanced only) internal message relay bus for start/stop/next-playlist-item messages

Configuration

Configuration for FlyVR is done via a YAML-format configuration file that is passed on the command line via the --config or -c argument. As explained in the design document, a FlyVR experiment contains two (three really, including a closed-loop experiment definition) types of configuration information:

  • FlyVR configuration (under the configuration section in the yaml file)
  • Stimulus playlists (under the playlist section)
  • An optional closed-loop experiment definition (under the experiment section)

Note 1: most configuration parameters can also be supplied on the command line; these override values specified in the configuration section

Note 2: playlist and experiment configuration can be stored in separate configuration files and be supplied on the command line via the -p or --playlist_file and -e or --experiment_file arguments

The following configuration can only be defined in the configuration section of the yaml file and cannot be supplied on the command line

  • analog_in_channels
    a mapping/dictionary of DAQ channel number to description, e.g. {2: 'temperature'} defines an analog input on AI2 called 'temperature'
  • analog_out_channels
    as above, but may only contain one channel: the channel on which the optogenetic stimulus is output
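
For example (values taken from the sample configuration near the end of this README):

configuration:
  analog_in_channels:
    0: 'Sound'              # AI0, saved to the record h5 as 'Sound'
    1: 'Copy of AO0'        # AI1
  analog_out_channels:
    0: 'Opto'               # AO0 drives the opto stimulus (one channel only)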

After every experiment, the total configuration is saved in a YYYYMMDD_HHMM.config.yml file alongside the other output files. This is an 'all-in-one' configuration where the configuration and any additional -p playlist.yml or -e experiment.yml information are included in the one file.

default values

The default values of all configuration parameters can be displayed by running any application with --print-defaults.

Note: If you want to know the final value of all configuration variables as would have been used in a FlyVR experiment, you can pass --print-defaults --verbose. This will print the combined configuration, playlist, and closed-loop experiment (if defined in the yaml meta-language).

Developing

  • If you can reproduce an issue using the single-process launchers then please try to do so
  • Run everything with verbose -v arguments for more debugging information
  • It can be convenient to replay old h5 files of fictrac data for testing. With the individual utilities you must run flyvr-fictrac-replay, but within your normal config file you can also run with the following in your yaml config
    fictrac_config: 'C:/path/to/fictrac/config/180719_103_output.h5'
  • If you do not have DAQ hardware you can create a simulated device which will allow you to otherwise use the rest of the software
    • Open NI Max, Right-click 'Devices and Interfaces', create a 'Simulated NI-DAQmx device or instrument', select 'NI PCIe-6353' as the simulated device type.
  • It is recommended to create a sample rig config file and fictrac replay data that you can test with. For example, the following command runs a configuration and sample data which test that all backends are working: 30 s of fictrac data and an audio/video experiment that stops after 10 seconds
    $ flyvr -c tests\sample_data\v2\john_rig.yml -e experiments\stop_after_10s.yml

Credits

David Deutsch - Murthy lab, PNI, Princeton; Adam Calhoun - Murthy lab, PNI, Princeton; John Stowers - LoopBio; David Turner - PNI, Princeton

License

Copyright (c) 2020 Princeton University. FlyVR is released under a Clear BSD License and is intended for research/academic use only. For commercial use, please contact: Laurie Tzodikov (Assistant Director, Office of Technology Licensing), Princeton University, 609-258-7256.


Issues

daq backend: support the IPC verbs

following on from #9, and now that #3 is fixed and the last remnants of multiprocessing are gone, the DAQ can be more conventional and similar in architecture to the other backends, with an IPC thread and a play queue

more granular 'next event' logging (detect chunks of different provenance)

The IO and sound backends consume data via the chunker, but do the Stim-unit-based next-event callback logging (i.e. when a new Stim is yielded). This isn't the right granularity to get the signal close to when it hits the hardware, because multiple Stims could be yielded into a chunk that goes out in the same buffer.

Chunk-based logging is also more compatible with play/pause, and is compatible with closed loop (CL).

Issues with scan trigger timing

ScanImage triggering starts too early or does not start at all. Sometimes it triggers on the second playlist item but not the first.

replay experiment 'experiment'

an experiment that takes a TOC file and commands the appropriate backend to play the same stimulus at the appropriate time

Adding 2 channel (stereo) audio delivery to FlyVR

Adding 2-channel (stereo) audio delivery to FlyVR. This involves modifying the soundserver, mostly on the backend side. This shouldn't be too difficult. I think the stimulus code already handles mixing multiple signals into one buffer for sending to the device; this was working for the DAQ previously, at least. There is a question of what the user interface for specifying multiple channels should be. Should stimulus entries have an optional channel field?
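
One possible interface - purely hypothetical, since this is exactly the open question above - would be an optional channel field on each stimulus entry:

playlist:
  audio:
  - song_left:              # hypothetical item and field names
      name: sin
      channel: 0            # play on the left output channel
  - song_right:
      name: sin
      channel: 1            # play on the right output channel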

docs: audio playlist item definition and channels

For audio - we need to explain a bit more how to read the lines in the playlist: whether to read an external file with the data, or use the info in the line to generate the signal, etc.

I think we also need to dedicate a paragraph to the control of the sound card, in particular one vs. more audio channels

attenuator that works with non-tonal stimuli

AFAICT FlyVR 1.0 (https://github.com/murthylab/fly-vr/blob/flyvr-1.0/audio/attenuation.py) and its port to FlyVR 2 (https://github.com/murthylab/fly-vr/blob/master/flyvr/audio/attenuation.py) only ever supported tonal attenuation using interp1d.

Use of the white-noise transfer file / calibration information was never supported in FlyVR 1.0.

Cf. the MATLAB side (fbfd0b4); we are lacking the reverse transform using the white-noise transform

linux support

Not really an issue; rather, a list of things that need to be addressed so this works on Linux

More control of video playback from experiment.py files.

Allow writing experiment.py files (e.g. to present video 1 for 60 sec, then randomly sample among videos 1, 2, and 3 for 15 sec each for a total of 180 sec, then present video 1 for 60 sec). Max has kindly helped me take a look at this but we haven't figured out an elegant solution. The current setup writes a new playlist instance/TOC line for every single video frame, which is not good. The current workaround is to run a separate FlyVR experiment for the different stimulus segments, or hardcode the order of these items using a long and repetitive playlist.

daq backend starts immediately in allinone launcher

or at least the accounting of when things occur is incorrect?

because of how the data_generator works, maybe we have to signal_new_playlist_item before the chunk is callback'd to the daq buffer?

To reproduce:

configuration:
  analog_in_channels:
    0: 'Sound'
    1: 'Copy of AO0'
  analog_out_channels:
    0: 'Opto'
  projector_disable: true
  fictrac_config: 'C:/Users/John Stowers/Downloads/Dudi_ShareWithJohn_flyVR/DSX_VR2P_IPIshort_2018/180719_103/180719_103_output.h5'
  delay: 3
playlist:
  daq:
  - Delay-1s:
      attenuator: null
      amplitude: 0.0
      duration: 1000
      name: constant
      post_silence: 0
      pre_silence: 0
      sample_rate: 10000
  - Opto1090-9-1:
      amplitude_a: 9.0
      duration_a: 1000
      amplitude_b: 0.0
      duration_b: 2000
      name: pulse
      post_silence: 1000
      pre_silence: 1000
      sample_rate: 10000

notice that in the TOC file, Opto1090-9-1 plays immediately after Delay-1s

- {"item": "Delay-1s", "backend": "daq", "sound_output_num_samples_written": 0, "video_output_num_frames": 0, "daq_output_num_samples_written": 5000, "fictrac_frame_num": 4850}
- {"item": "Opto1090-9-1", "backend": "daq", "sound_output_num_samples_written": 0, "video_output_num_frames": 0, "daq_output_num_samples_written": 15000, "fictrac_frame_num": 4852}

Adding last two FicTrac-v2 columns to get saved into FlyVR

Adding the last two FicTrac-v2 columns so they get saved into FlyVR. FicTrac-v2 outputs a .dat file that has 25 columns, but in the flyVR2 .h5 file the last 2 columns are not being saved.

I can add this feature but it will take a bit more than modifying the Python source code. See the python side of things here: https://github.com/murthylab/fly-vr/blob/master/flyvr/fictrac/shmem_transfer_data.py. FicTrac is a C++ program that is loosely coupled with FlyVR. Essentially, we have modified FicTrac to copy its state (the 23 fields) into a special region of shared memory. FlyVR spins up a process that peeks (protected by a semaphore) at this memory periodically to see if the state has changed. The values you see in the HDF5 file will be the values that were recorded in realtime from fictrac via this shared memory connection. I will need to add a small bit of code to fictrac to copy these additional fields. I will then need to handle the Python side of things so that things don’t break in the FicTrac1 case. This might not be too hard since we are just appending to the data structure. Anyway, the only reason I mention all these details is to explain why it might take me more than a few minutes to add this feature.
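
The general pattern is sketched below, purely as an illustration of the mechanism described above; FlyVR's actual field layout and synchronization live in the C++ side and in flyvr/fictrac/shmem_transfer_data.py, and the region name and field packing here are assumptions.

# Illustrative sketch of the shared-memory polling pattern described above.
# Not FlyVR's actual layout: the tagname and packing of 23 doubles are
# assumptions for illustration; real code also guards reads with a semaphore.
import mmap
import struct

STATE = struct.Struct('<23d')   # the 23 FicTrac fields, as little-endian doubles

# Open a named shared-memory region (Windows-style tagname, hypothetical name)
shm = mmap.mmap(-1, STATE.size, tagname='fictrac_state')

last = None

def poll_fictrac_state():
    """Return the FicTrac state tuple if it changed since the last poll."""
    global last
    state = STATE.unpack_from(shm, 0)
    if state != last:
        last = state
        return state
    return None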

2P control (next-image scanimage signaling)

moving this out of the DAQ process and into phidgets

  • needs an xsub/xpub-like channel for generating new signals from any backend
  • remove old code from daq task
  • phidget backend

video: calibration file path

Yeah, VideoServer is not getting initialized with the calibration file that I pass to it through the command line. If I rename my calibration file 'calibratedBallImage.data' and place it in the top-level directory that I'm running flyvr-video from, it doesn't have any issues reading it and performing the warp. John - I think there is a bug in how VideoServer.py reads the calibrationFile; it seems to go with the default regardless of the file I pass to it via the command line.

was not fixed by 4a77bbb (need to investigate)

test video playlist and experimental logic

I would like to pass a configuration file that defines certain playlist items and an experimental logic (see attached for an example) to flyvr-video, to test the presentation of visual stimuli and their timing.
I tried doing this in two ways that I expected to work - but neither does.
Approach 1: flyvr-video -c playlists\shruthi_video_playlists\test_landmark_different_speeds.yaml
What I expect to see: I would like to see the different landmarks come on one after another. I don't want to activate the fictrac or daq or scanimage backends.
What I end up seeing: I only see the first playlist item. Basically identical to what I would have seen had I done flyvr-video -p playlists\shruthi_video_playlists\test_landmark_different_speeds.yaml

Approach 2: flyvr -c playlists\shruthi_video_playlists\test_landmark_different_speeds.yaml
What I expect to see: I would like to see the different landmarks come on one after another. I don't want to activate the fictrac or daq or scanimage backends.
What ends up happening: flyvr tries to start the fictrac process and fails because it doesn't find a camera or fictrac config files.

@nzjrs @davidt0x how would you achieve what I'm trying out here?


Find my playlist here: https://github.com/r-shruthi11/fly-vr/blob/shruthi_A88B/playlists/shruthi_video_playlists/test_landmark_different_speeds.yaml

finish multiprocess rework

  • video back to main thread
  • audio back to main thread
  • DAQ back to main thread
  • IPC on aux threads
  • subprocess to main entrypoint in main
  • all processes ctrl+c killable
    • audio
    • daq
    • video
    • experiment
    • allinone
  • startup sequence go/no-go for all backends
    • audio
    • daq
    • hwio/scanimage
    • video
