lorenfranklab / spyglass

Neuroscience data analysis framework for reproducible research built by Loren Frank Lab at UCSF

Home Page: https://lorenfranklab.github.io/spyglass/

License: MIT License

Jupyter Notebook 78.84% Python 21.16%
datajoint electrophysiology kachery neuroscience nwb python sortingview spikeinterface

spyglass's People

Contributors

acomrie, calderast, cbroz1, cristofer-holobetz, denissemorales, donghoon-shin, dpeg22, edeno, emilymonroe95, emreybroyles, jguides, jihyunbak, jsoules, khl02007, lfrank, magland, michaelcoulter, rly, samuelbray32, sharon-chiang, shenshan, xlsun79, yarikoptic, zoldello


spyglass's Issues

[tutorial] errors while running SpikeSorting.populate()

I am still working through the tutorial notebook 1_spikesorting.ipynb, with issues #41 and #40 fixed. The SpikeSortingParameters entry is successfully inserted, and (SpikeSortingParameters & {'nwb_file_name' : nwb_file_name2}) contains my entry.

When I run this cell to populate

# Specify entry (otherwise runs everything in SpikeSortingParameters)
# `proj` gives you primary key
SpikeSorting.populate([(SpikeSortingParameters & {'nwb_file_name' : nwb_file_name2}).proj()])

The output and error:

Elapsed time for creating analysis NWB file: 0.3789808750152588 sec
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-61-1dd950092fb2> in <module>
      1 # Specify entry (otherwise runs everything in SpikeSortingParameters)
      2 # `proj` gives you primary key
----> 3 SpikeSorting.populate([(SpikeSortingParameters & {'nwb_file_name' : nwb_file_name2}).proj()])

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/autopopulate.py in populate(self, suppress_errors, return_exception_objects, reserve_jobs, order, limit, max_calls, display_progress, *restrictions)
    151                     self.__class__._allow_insert = True
    152                     try:
--> 153                         make(dict(key))
    154                     except (KeyboardInterrupt, SystemExit, Exception) as error:
    155                         try:

~/proj/nwb_datajoint/src/nwb_datajoint/common/common_spikesorting.py in make(self, key)
    500             # Create a new NWB file for holding the results of analysis (e.g. spike sorting).
    501             # Save the name to the 'key' dict to use later.
--> 502             key['analysis_file_name'] = AnalysisNwbfile().create(key['nwb_file_name'])
    503 
    504         sort_interval =  (SortInterval & {'nwb_file_name': key['nwb_file_name'],

~/proj/nwb_datajoint/src/nwb_datajoint/common/common_nwbfile.py in create(self, nwb_file_name)
    136                             nwb_object.pop(module)
    137 
--> 138             analysis_file_name = self.__get_new_file_name(nwb_file_name)
    139             # write the new file
    140             print(f'Writing new NWB file {analysis_file_name}')

~/proj/nwb_datajoint/src/nwb_datajoint/common/common_nwbfile.py in __get_new_file_name(cls, nwb_file_name)
    151         names = (AnalysisNwbfile() & {'nwb_file_name': nwb_file_name}).fetch('analysis_file_name')
    152         n1 = [str.replace(name, os.path.splitext(nwb_file_name)[0], '') for name in names]
--> 153         max_analysis_file_num = max([int(str.replace(ext, '.nwb', '')) for ext in n1])
    154         # name the file, adding the number of files with preceeding zeros
    155         analysis_file_name = os.path.splitext(nwb_file_name)[0] + str(max_analysis_file_num+1).zfill(6) + '.nwb'

ValueError: max() arg is an empty sequence
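
For what it's worth, a minimal guard for this case (a hypothetical patch, not necessarily how the maintainers will fix it) would be to give max() a default so the first analysis file for an NWB file gets number 0:

# hypothetical guard in __get_new_file_name(): when no analysis files exist yet
# for this NWB file, n1 is empty, so fall back to -1 and the next file becomes 000000
max_analysis_file_num = max(
    (int(str.replace(ext, '.nwb', '')) for ext in n1),
    default=-1)
analysis_file_name = os.path.splitext(nwb_file_name)[0] + str(max_analysis_file_num + 1).zfill(6) + '.nwb'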

Cannot import nwb_datajoint

After updating master, I get the following error on import nwb_datajoint, even after reinstalling the package with its updated requirements. There seems to be an issue with the SortGroup table. @khl02007 Did something change in the configuration?

Python 3.8.6 | packaged by conda-forge | (default, Jan 25 2021, 23:21:18)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import nwb_datajoint
Connecting root@localhost:3306
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rly/nwb_datajoint/src/nwb_datajoint/__init__.py", line 13, in <module>
    from .data_import.storage_dirs import check_env, kachery_storage_dir, base_dir
  File "/home/rly/nwb_datajoint/src/nwb_datajoint/data_import/__init__.py", line 2, in <module>
    from .insert_sessions import insert_sessions
  File "/home/rly/nwb_datajoint/src/nwb_datajoint/data_import/insert_sessions.py", line 5, in <module>
    from ..common import Nwbfile, populate_all_common
  File "/home/rly/nwb_datajoint/src/nwb_datajoint/common/__init__.py", line 8, in <module>
    from .common_spikesorting import (SortGroup, SpikeSorting, SpikeSorter, SpikeSorterParameters,
  File "/home/rly/nwb_datajoint/src/nwb_datajoint/common/common_spikesorting.py", line 58, in <module>
    class SortGroup(dj.Manual):
  File "/home/rly/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/schemas.py", line 147, in __call__
    self._decorate_master(cls, context)
  File "/home/rly/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/schemas.py", line 157, in _decorate_master
    self._decorate_table(cls, context=dict(context, self=cls, **{cls.__name__: cls}))
  File "/home/rly/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/schemas.py", line 188, in _decorate_table
    instance.declare(context)
  File "/home/rly/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/table.py", line 79, in declare
    sql, external_stores = declare(self.full_table_name, self.definition, context)
  File "/home/rly/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/declare.py", line 281, in declare
    table_comment, primary_key, attribute_sql, foreign_key_sql, index_sql, external_stores = prepare_declare(
  File "/home/rly/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/declare.py", line 248, in prepare_declare
    compile_foreign_key(line, context, attributes,
  File "/home/rly/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/declare.py", line 143, in compile_foreign_key
    raise DataJointError('Foreign key reference %s could not be resolved' % result.ref_table)
datajoint.errors.DataJointError: Foreign key reference Session could not be resolved
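
For context, DataJoint resolves a foreign-key reference like -> Session from the names visible in the module where the table class is declared. An illustrative sketch of how that resolution works (not the actual module contents, and not necessarily what broke here):

import datajoint as dj
from nwb_datajoint.common.common_session import Session  # the referenced class must be resolvable here

schema = dj.schema('common_spikesorting')

@schema
class SortGroup(dj.Manual):
    definition = """
    -> Session            # resolves only if Session is importable in this module
    sort_group_id: int
    """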

IntegrityError while inserting SpikeSortingParameters

I am running the current tutorial notebook 1_spikesorting.ipynb in the main branch.

After running this cell

# collect the params
key = dict()
key['nwb_file_name'] = nwb_file_name2
key['sort_group_id'] = sort_group_id
key['sorter_name'] = sorter_name
key['parameter_set_name'] = parameter_set_name
key['sort_interval_name'] = sort_interval_name
key['artifact_param_name'] = artifact_param_name
key['cluster_metrics_list_name'] = cluster_metrics_list_name
key['interval_list_name'] = interval_list_name

the key dict looks like this:

{'nwb_file_name': 'beans20190718_jhbak_.nwb',
 'sort_group_id': 8,
 'sorter_name': 'mountainsort4',
 'parameter_set_name': 'beans',
 'sort_interval_name': 'beans_02_r1_10s',
 'artifact_param_name': 'default',
 'cluster_metrics_list_name': 'test',
 'interval_list_name': '02_r1'}

Then when I run this cell,

# insert
SpikeSortingParameters.insert1(key, skip_duplicates=True)

I get an IntegrityError as follows:

---------------------------------------------------------------------------
IntegrityError                            Traceback (most recent call last)
<ipython-input-53-5f8cb3d0207e> in <module>
      1 # insert
----> 2 SpikeSortingParameters.insert1(key, skip_duplicates=True)

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/table.py in insert1(self, row, **kwargs)
    264         For kwargs, see insert()
    265         """
--> 266         self.insert((row,), **kwargs)
    267 
    268     def insert(self, rows, replace=False, skip_duplicates=False, ignore_extra_fields=False, allow_direct_insert=None):

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/table.py in insert(self, rows, replace, skip_duplicates, ignore_extra_fields, allow_direct_insert)
    328                     duplicate=(' ON DUPLICATE KEY UPDATE `{pk}`=`{pk}`'.format(pk=self.primary_key[0])
    329                                if skip_duplicates else ''))
--> 330                 self.connection.query(query, args=list(
    331                     itertools.chain.from_iterable(
    332                         (v for v in r['values'] if v is not None) for r in rows)))

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/connection.py in query(self, query, args, as_dict, suppress_warnings, reconnect)
    298         cursor = self._conn.cursor(cursor=cursor_class)
    299         try:
--> 300             self._execute_query(cursor, query, args, suppress_warnings)
    301         except errors.LostConnectionError:
    302             if not reconnect:

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/connection.py in _execute_query(cursor, query, args, suppress_warnings)
    264                 cursor.execute(query, args)
    265         except client.err.Error as err:
--> 266             raise translate_query_error(err, query)
    267 
    268     def query(self, query, args=(), *, as_dict=False, suppress_warnings=True, reconnect=None):

IntegrityError: Cannot add or update a child row: a foreign key constraint fails (`common_spikesorting`.`spike_sorting_parameters`, CONSTRAINT `spike_sorting_parameters_ibfk_5` FOREIGN KEY (`cluster_metrics_list_name`) REFERENCES `spike_sorting_metrics` (`cluster_metrics_li)
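
A quick way to diagnose this kind of foreign-key failure (a sketch; I am assuming the parent table's Python class is SpikeSortingMetrics, matching the spike_sorting_metrics table in the error) is to check that the referenced parent row exists before inserting:

# sketch: the insert can only succeed if the referenced metrics entry exists
parent = SpikeSortingMetrics & {'cluster_metrics_list_name': key['cluster_metrics_list_name']}
print(len(parent))  # 0 means the foreign-key constraint will fail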

Tests randomly have timeout errors when downloading test file

In the CI test suite, we use kachery to download a test file
https://github.com/LorenFrankLab/nwb_datajoint/blob/bd2e0e6be22b5c52cec020eaee3d983376501c66/tests/test_1.py#L18-L20

This was working fine for weeks, but in the last 24 hours the test has failed a couple of times, seemingly at random, due to a TimeoutError when downloading the file using kachery (via urllib).

Re-running the test often resolves the issue.

@jsoules do you have any ideas of what might be wrong? Is the kachery server intermittently down such that the CI gets a timeout? I have tried running this same code to download the test file on my local machine, hundreds of times in a for loop, and have not been able to reproduce the timeout error.

Nightly test exception on `import hither`

Nightly tests are failing due to an update in hither 0.8.1, which deprecates import hither in favor of import hither2.

Run pytest -rP  # env vars are set within certain tests
ImportError while loading conftest '/home/runner/work/nwb_datajoint/nwb_datajoint/tests/conftest.py'.
tests/conftest.py:10: in <module>
    from .datajoint._datajoint_server import run_datajoint_server, kill_datajoint_server
tests/datajoint/_datajoint_server.py:1: in <module>
    import hither as hi
/usr/share/miniconda/envs/nwb_datajoint/lib/python3.8/site-packages/hither/__init__.py:1: in <module>
    raise Exception('Use "import hither2" instead of "import hither"')
E   Exception: Use "import hither2" instead of "import hither"
Error: Process completed with exit code 4.
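
The fix is presumably a one-line change in the test helper (a sketch, assuming the hither2 API used by the helper is otherwise compatible):

# tests/datajoint/_datajoint_server.py — switch to the new package name
import hither2 as hi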

Permission to workspace

@lfrank How should we set up the permissions? Would it work to:

  1. have a dj.Manual table where the attributes are the user name (primary key) and Google account (sketched below)
  2. during spike sorting, get the user name and query the table from step 1 for the corresponding Google account
  3. give that Google account permission to access the workspace containing the sorting results
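
A minimal sketch of the manual table from step 1 (hypothetical class, schema, and attribute names):

import datajoint as dj

schema = dj.schema('common_lab')  # hypothetical schema name

@schema
class GoogleUser(dj.Manual):
    definition = """
    # maps a lab member's user name to the Google account used for workspace permissions
    user_name: varchar(80)
    ---
    google_user_id: varchar(200)
    """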

ResolvePackageNotFound: ndx-franklab-novela

I get the following error when trying to create the conda environment

conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound: 
  - ndx-franklab-novela

import fails with tutorial notebook


DataJointError Traceback (most recent call last)
in
2 import numpy as np
3
----> 4 import nwb_datajoint as nd
5 import datajoint as dj
6

~/nwb_datajoint/src/nwb_datajoint/init.py in
9 import ndx_franklab_novela
10
---> 11 from .data_import.insert_sessions import insert_sessions
12 from .data_import.storage_dirs import base_dir, check_env, kachery_storage_dir
13

~/nwb_datajoint/src/nwb_datajoint/data_import/init.py in
----> 1 from .insert_sessions import insert_sessions
2 from .storage_dirs import base_dir, check_env, kachery_storage_dir

~/nwb_datajoint/src/nwb_datajoint/data_import/insert_sessions.py in
4 import pynwb
5
----> 6 from ..common import Nwbfile, get_raw_eseries, populate_all_common
7 from .storage_dirs import check_env
8

~/nwb_datajoint/src/nwb_datajoint/common/init.py in
1 # Reorganize this into hierarchy
2 # Note: users will have their own tables... permission system
----> 3 from .common_behav import (HeadDir, LinPos, PositionSource, RawPosition, Speed,
4 StateScriptFile, VideoFile)
5 from .common_device import CameraDevice, DataAcquisitionDevice, Probe

~/nwb_datajoint/src/nwb_datajoint/common/common_behav.py in
4 import pynwb
5
----> 6 from .common_ephys import Raw # noqa: F401
7 from .common_interval import IntervalList, interval_list_contains
8 from .common_nwbfile import Nwbfile

~/nwb_datajoint/src/nwb_datajoint/common/common_ephys.py in
6 import pynwb
7
----> 8 from .common_device import Probe # noqa: F401
9 from .common_filter import FirFilter
10 from .common_interval import IntervalList # noqa: F401

~/nwb_datajoint/src/nwb_datajoint/common/common_device.py in
3 import datajoint as dj
4
----> 5 schema = dj.schema("common_device")
6
7

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/schemas.py in init(self, schema_name, context, connection, create_schema, create_tables, add_objects)
68 self.declare_list = []
69 if schema_name:
---> 70 self.activate(schema_name)
71
72 def is_activated(self):

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/schemas.py in activate(self, schema_name, connection, create_schema, create_tables, add_objects)
117 self.connection.query("CREATE DATABASE {name}".format(name=schema_name))
118 except AccessError:
--> 119 raise DataJointError(
120 "Schema {name} does not exist and could not be created. "
121 "Check permissions.".format(name=schema_name))

DataJointError: Schema common_device does not exist and could not be created. Check permissions.

spikesorting tutorial - overwrite not supported by latest spikeextractors release

Running this cell in the tutorial notebook 1_spikesorting.ipynb:

# Specify entry (otherwise runs everything in SpikeSortingParameters);
# `proj` gives you primary key
SpikeSorting.populate(
    [(SpikeSortingParameters & {'nwb_file_name' : nwb_file_name2}).proj()]
)

This gives the following error:

Getting ready...
Writing new NWB file beans20190718_jhbak_000000.nwb
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-89-fc58ddef5bdf> in <module>
      1 # Specify entry (otherwise runs everything in SpikeSortingParameters);
      2 # `proj` gives you primary key
----> 3 SpikeSorting.populate(
      4     [(SpikeSortingParameters & {'nwb_file_name' : nwb_file_name2}).proj()]
      5 )

~/anaconda3/envs/nwb_datajoint2/lib/python3.8/site-packages/datajoint/autopopulate.py in populate(self, suppress_errors, return_exception_objects, reserve_jobs, order, limit, max_calls, display_progress, *restrictions)
    151                     self.__class__._allow_insert = True
    152                     try:
--> 153                         make(dict(key))
    154                     except (KeyboardInterrupt, SystemExit, Exception) as error:
    155                         try:

~/proj/nwb_datajoint/nwb_datajoint/common/common_spikesorting.py in make(self, key)
    439 
    440         # Write recording extractor to NWB file
--> 441         se.NwbRecordingExtractor.write_recording(recording,
    442                                                  save_path = extractor_nwb_path,
    443                                                  overwrite = True)

TypeError: write_recording() got an unexpected keyword argument 'overwrite'

The keyword argument overwrite was introduced to spikeextractors in a Dec 12, 2020 PR; at the time of submitting this issue, the latest release of spikeextractors was version 0.9.3, from Dec 9, 2020. This is also the version installed by this repo's environment.yml.
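
Until a spikeextractors release includes the new argument, one possible stopgap (a sketch, using the recording and extractor_nwb_path variables from the surrounding make() code) is to fall back when overwrite is not accepted:

import os
import spikeextractors as se

try:
    se.NwbRecordingExtractor.write_recording(recording, save_path=extractor_nwb_path,
                                             overwrite=True)
except TypeError:
    # spikeextractors <= 0.9.3 has no `overwrite`; remove any stale file first
    if os.path.exists(extractor_nwb_path):
        os.remove(extractor_nwb_path)
    se.NwbRecordingExtractor.write_recording(recording, save_path=extractor_nwb_path)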

Create locked tables

Create locked tables to make it harder for users to delete the results of preprocessing/analysis.

This should probably be in a new directory (not common; perhaps 'lock') in the repo.
Lock tables, owned by an administrator, should be created for:

  • LFP
  • SpikeSorting

Others to follow.
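
One way this could look (a sketch with hypothetical names): a lock schema owned by an administrator, whose tables reference the results to be protected, so that a regular user's cascading delete fails when it reaches the lock entry they cannot remove.

import datajoint as dj

lock_schema = dj.schema('lock')  # hypothetical schema, writable only by an administrator

@lock_schema
class SpikeSortingLock(dj.Manual):
    definition = """
    -> SpikeSorting          # the result being protected
    ---
    locked_by: varchar(80)   # administrator who placed the lock
    lock_time: datetime
    """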

Allow import of times and labels from previous spike sorting

For older data we need to be able to import a curated spike sorting (cluster label, unit times, curation label, sort interval).

This should probably be done by creating a sorting object using SpikeInterface and importing that object and the labels using the curated firings.mda from mountainview.
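
A rough sketch of the import step using the spikeextractors API currently pinned in this repo (times and labels are hypothetical arrays read from the curated firings.mda):

import spikeextractors as se

# hypothetical: `times` (sample indices) and `labels` (unit ids) read from firings.mda
sorting = se.NumpySortingExtractor()
sorting.set_times_labels(times=times, labels=labels)
sorting.set_sampling_frequency(30000)  # assumed sampling rate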

TaskEpoch() not populating

TaskEpoch() is empty, and although TaskEpoch().populate() runs without errors, it does not populate the table. How can I fill in TaskEpoch()? Thank you.

Module not found error when importing nwb_datajoint


ModuleNotFoundError Traceback (most recent call last)
in
3 import numpy as np
4
----> 5 import nwb_datajoint as nd
6
7 # ignore datajoint+jupyter async warnings

~/Src/nwb_datajoint/src/nwb_datajoint/init.py in
9 import ndx_franklab_novela
10
---> 11 from .data_import.insert_sessions import insert_sessions
12 from .data_import.storage_dirs import base_dir, check_env, kachery_storage_dir
13

~/Src/nwb_datajoint/src/nwb_datajoint/data_import/init.py in
----> 1 from .insert_sessions import insert_sessions
2 from .storage_dirs import base_dir, check_env, kachery_storage_dir

~/Src/nwb_datajoint/src/nwb_datajoint/data_import/insert_sessions.py in
4 import pynwb
5
----> 6 from ..common import Nwbfile, get_raw_eseries, populate_all_common
7 from .storage_dirs import check_env
8

~/Src/nwb_datajoint/src/nwb_datajoint/common/init.py in
20 from .common_sensors import SensorData
21 from .common_session import ExperimenterList, Session
---> 22 from .common_spikesorting import (AutomaticCurationParameters,
23 AutomaticCurationSpikeSorting,
24 AutomaticCurationSpikeSortingParameters,

~/Src/nwb_datajoint/src/nwb_datajoint/common/common_spikesorting.py in
9
10 import datajoint as dj
---> 11 import kachery_client as kc
12 import labbox_ephys as le
13 import numpy as np

ModuleNotFoundError: No module named 'kachery_client'

Address ResourceWarning: unclosed file ?

During population of CuratedSpikeSorting (and when calling the populate method on downstream tables, such as UnitMarks), warnings are given about unclosed NWB files. In case these files are staying open somewhere and taking up resources unnecessarily, we may want to address this. It is just a warning and might be totally harmless; it is not blocking. See two example warnings below.

/home/alison/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/hash.py:39: ResourceWarning: unclosed file <_io.BufferedReader name='/stelmo/nwb/analysis/chimi20200216_new_0WYDXQFCX8.nwb'>
  return uuid_from_stream(Path(filepath).open("rb"), init_string=init_string)
/home/alison/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/hash.py:39: ResourceWarning: unclosed file <_io.BufferedReader name='/stelmo/nwb/analysis/chimi20200216_new_IBT8GMSGMC.nwb'>
  return uuid_from_stream(Path(filepath).open("rb"), init_string=init_string)
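
For reference, the warning originates from the hashing helper opening the file without an explicit close. A sketch of the kind of change (hypothetical, in DataJoint rather than in this repo, and only worth pursuing if the warnings turn out to matter) that would make the file's lifetime explicit:

from pathlib import Path
from datajoint.hash import uuid_from_stream

def uuid_from_file(filepath, init_string=None):
    # open the file in a context manager so it is closed deterministically
    with Path(filepath).open('rb') as f:
        return uuid_from_stream(f, init_string=init_string)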

nwb missing a method

nwbf has method add_child, but does not have remove_child.

AttributeError Traceback (most recent call last)
in
----> 1 nd.common.SpikeSorting().populate()

~/anaconda3/envs/nwb_datajoint_test/lib/python3.8/site-packages/datajoint/autopopulate.py in populate(self, suppress_errors, return_exception_objects, reserve_jobs, order, limit, max_calls, display_progress, *restrictions)
157 self.class._allow_insert = True
158 try:
--> 159 make(dict(key))
160 except (KeyboardInterrupt, SystemExit, Exception) as error:
161 try:

~/Src/nwb_datajoint/nwb_datajoint/common/common_ephys.py in make(self, key)
367
368 def make(self, key):
--> 369 key['analysis_file_name'] = AnalysisNwbfile().create(key['nwb_file_name'])
370 # get the valid times.
371 # NOTE: we will sort independently between each entry in the valid times list

~/Src/nwb_datajoint/nwb_datajoint/common/common_nwbfile.py in create(self, nwb_file_name)
90 for module in list(nwb_object.keys()):
91 mod = nwb_object.pop(module)
---> 92 nwbf._remove_child(mod)
93
94

AttributeError: 'NWBFile' object has no attribute '_remove_child'

ImportError: cannot import name 'append_docstring' from 'bokeh.util.string'

After pulling the most recent nwb_datajoint and reinstalling the conda environment, I get the following error on running import nwb_datajoint. Thanks for any help.


ImportError Traceback (most recent call last)
/tmp/ipykernel_1993081/3902334258.py in
7 # dj.config['database.password']= 'simple'
8
----> 9 import nwb_datajoint as nd
10
11 # ignore datajoint+jupyter async warnings

~/Src/nwb_datajoint/src/nwb_datajoint/init.py in
9 import ndx_franklab_novela
10
---> 11 from .data_import.insert_sessions import insert_sessions
12 from .data_import.storage_dirs import base_dir, check_env, kachery_storage_dir
13

~/Src/nwb_datajoint/src/nwb_datajoint/data_import/init.py in
----> 1 from .insert_sessions import insert_sessions
2 from .storage_dirs import base_dir, check_env, kachery_storage_dir

~/Src/nwb_datajoint/src/nwb_datajoint/data_import/insert_sessions.py in
4 import pynwb
5
----> 6 from ..common import Nwbfile, get_raw_eseries, populate_all_common
7 from .storage_dirs import check_env
8

~/Src/nwb_datajoint/src/nwb_datajoint/common/init.py in
1 # Reorganize this into hierarchy
2 # Note: users will have their own tables... permission system
----> 3 from .common_behav import (HeadDir, LinPos, PositionSource, RawPosition, Speed,
4 StateScriptFile, VideoFile)
5 from .common_device import CameraDevice, DataAcquisitionDevice, Probe

~/Src/nwb_datajoint/src/nwb_datajoint/common/common_behav.py in
5 import pynwb
6
----> 7 from .common_ephys import Raw # noqa: F401
8 from .common_interval import IntervalList, interval_list_contains
9 from .common_nwbfile import Nwbfile

~/Src/nwb_datajoint/src/nwb_datajoint/common/common_ephys.py in
7
8 from .common_device import Probe # noqa: F401
----> 9 from .common_filter import FirFilter
10 from .common_interval import IntervalList, interval_list_censor, interval_list_intersect # noqa: F401
11 # SortInterval, interval_list_intersect, interval_list_excludes_ind

~/Src/nwb_datajoint/src/nwb_datajoint/common/common_filter.py in
1 # code to define filters that can be applied to continuous time data
2 import datajoint as dj
----> 3 import ghostipy as gsp
4 import matplotlib.pyplot as plt
5 import numpy as np

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/ghostipy/init.py in
1 from ghostipy.spectral import *
2 from ghostipy.dsp import *
----> 3 from ghostipy.plotting import *
4 from ghostipy.version import version

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/ghostipy/plotting/init.py in
----> 1 from ghostipy.plotting.core import *

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/ghostipy/plotting/core.py in
1 import numpy as np
2 import xarray as xr
----> 3 import holoviews as hv
4 import hvplot.xarray
5 import dask.array as da

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/holoviews/init.py in
10
11 from . import util # noqa (API import)
---> 12 from .annotators import annotate # noqa (API import)
13 from .core import archive, config # noqa (API import)
14 from .core.boundingregion import BoundingBox # noqa (API import)

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/holoviews/annotators.py in
8 import param
9
---> 10 from panel.pane import PaneBase
11 from panel.layout import Row, Tabs
12 from panel.util import param_name

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/panel/init.py in
----> 1 from . import layout # noqa
2 from . import links # noqa
3 from . import pane # noqa
4 from . import param # noqa
5 from . import pipeline # noqa

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/panel/layout/init.py in
----> 1 from .accordion import Accordion # noqa
2 from .base import Column, ListLike, ListPanel, Panel, Row, WidgetBox # noqa
3 from .card import Card # noqa
4 from .flex import FlexBox # noqa
5 from .grid import GridBox, GridSpec # noqa

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/panel/layout/accordion.py in
1 import param
2
----> 3 from bokeh.models import Column as BkColumn, CustomJS
4
5 from .base import NamedListPanel

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/bokeh/models/init.py in
30 # Bokeh imports
31 from ..core.property.dataspec import expr, field, value # Legacy API
---> 32 from ..model import Model
33 from .annotations import *
34 from .arrow_heads import *

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/bokeh/model/init.py in
23
24 # Bokeh imports
---> 25 from .data_model import DataModel
26 from .model import Model
27 from .util import Qualified, collect_models, get_class

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/bokeh/model/data_model.py in
23 # Bokeh imports
24 from ..core.has_props import abstract
---> 25 from .model import Model
26
27 #-----------------------------------------------------------------------------

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/bokeh/model/model.py in
49 from ..util.callback_manager import EventCallbackManager, PropertyCallbackManager
50 from ..util.serialization import make_id
---> 51 from .docs import html_repr, process_example
52 from .util import (
53 HasDocumentRef,

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/bokeh/model/docs.py in
27 # Bokeh imports
28 from ..util.serialization import make_id
---> 29 from ..util.string import append_docstring
30
31 if TYPE_CHECKING:

ImportError: cannot import name 'append_docstring' from 'bokeh.util.string' (/home/jguidera/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/bokeh/util/string.py)

importing nwb_datajoint in jupyter throws error

I'm just starting up the nwb_datajoint tutorial (the Populate_from_NWB_tutorial.ipynb notebook).

Things seem to be going smoothly (except that I had to install portalocker, as I guess it wasn't in the build recipe).

When I try import nwb_datajoint as nd I get the following ImportError (first few lines are cut for brevity) after I enter my username/pw:

c:\code\nwb_datajoint\nwb_datajoint\common\common_ephys.py in
6 from .common_device import Probe
7 from .common_interval import IntervalList, SortIntervalList, interval_list_intersect, interval_list_excludes_ind
----> 8 from .common_filter import FirFilter
9
10 import spikeinterface as si

c:\code\nwb_datajoint\nwb_datajoint\common\common_filter.py in
4 import scipy.signal as signal
5 import numpy as np
----> 6 import ghostipy as gsp
7 import matplotlib.pyplot as plt
8 import uuid

~\Miniconda3\envs\nwb_datajoint\lib\site-packages\ghostipy_init_.py in
1 #from ghostipy.spectral import *
----> 2 from ghostipy.dsp import *
3 #from ghostipy.crossfrequency import *
4 #from ghostipy.plotting import *

~\Miniconda3\envs\nwb_datajoint\lib\site-packages\ghostipy\dsp_init_.py in
----> 1 from ghostipy.dsp.filtering import *
2 from ghostipy.dsp.firfilter import *
3 #from ghostipy.dsp.convolution import *
4 #from ghostipy.dsp.analytic import *

~\Miniconda3\envs\nwb_datajoint\lib\site-packages\ghostipy\dsp\filtering.py in
----> 1 from ghostipy.dsp.convolution import osconvolve
2 from multiprocessing import cpu_count
3
4 all = ['filter_data_fir']
5

~\Miniconda3\envs\nwb_datajoint\lib\site-packages\ghostipy\dsp\convolution.py in
1 import numpy as np
----> 2 import pyfftw
3 from multiprocessing import cpu_count
4
5 def osconvolve(signal, kernel, *, mode='full', nfft=None,

~\Miniconda3\envs\nwb_datajoint\lib\site-packages\pyfftw_init_.py in
16 import os
17
---> 18 from .pyfftw import (
19 FFTW,
20 export_wisdom,

ImportError: DLL load failed while importing pyfftw: Not enough memory resources are available to process this command.

When I do the same import just running python from the command line, there are no errors: I enter my username/pw and get a command prompt.

I am on Windows 10 with Python 3.8. I installed things just as instructed at the repo. I have 32 GB of RAM and I don't seem to be running out of RAM on my system (despite what the error says).

Account for breaking changes in SpikeInterface

SpikeInterface recently released version 0.90.0, which breaks compatibility with the previous 0.12 and 0.13 versions: https://spikeinterface.readthedocs.io/en/latest/

One of the breaking changes is that SpikeInterface-related packages, such as spikesorters, spikeextractors, etc. are no longer included and must be installed separately. environment.yml needs to be updated accordingly.

There may be other breaking changes that should be looked at closely before using the latest spikeinterface.

TODO:

  • Locally pin the spikeinterface version to be strictly less than 0.90 (spikeinterface>=0.12,<0.90), confirm that spike sorting still works, and push the change to environment.yml. This should keep everything working with minimal change.
  • Then move to spikeinterface 0.90 locally (the minimum version may also need to be set to 0.90), make the appropriate changes (see the import sketch below), confirm that spike sorting still works, and push the changes to master.
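
For reference, a sketch of the import-level difference between the two API generations (the actual call sites in this repo will need closer review):

# old environment (spikeinterface < 0.90): functionality split across separate packages
#   import spikeextractors as se
#   import spikesorters as ss

# new environment (spikeinterface >= 0.90): the same functionality lives in submodules
import spikeinterface.extractors as se
import spikeinterface.sorters as ss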

no google user id


IndexError Traceback (most recent call last)
/tmp/ipykernel_373368/3289229830.py in
1 # Specify entry (otherwise runs everything in SpikeSortingParameters)
2 # proj gives you primary key
----> 3 SpikeSorting.populate([(SpikeSortingParameters & {'nwb_file_name': nwb_file_name2}).proj()])

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/autopopulate.py in populate(self, suppress_errors, return_exception_objects, reserve_jobs, order, limit, max_calls, display_progress, *restrictions)
151 self.class._allow_insert = True
152 try:
--> 153 make(dict(key))
154 except (KeyboardInterrupt, SystemExit, Exception) as error:
155 try:

~/Src/nwb_datajoint/src/nwb_datajoint/common/common_spikesorting.py in make(self, key)
726 print(f'Google user ID for {team_member} does not exist or more than one ID detected;
727 permission to curate not given to {team_member}, skipping...')
--> 728 workspace.set_user_permissions(google_user_id[0], {'edit': True})
729 print(f'Permissions for {google_user_id[0]} set to: {workspace.get_user_permissions(google_user_id[0])}')
730

IndexError: index 0 is out of bounds for axis 0 with size 0

It looks like I do not have a google_user_id. Is there a way to get this?

NWB file insertion fails

I'm unable to insert NWB files into the DataJoint tables, and I'm specifically getting an error with the raw position. Here is the error message:

Creating a copy of NWB file CH8_20210108.nwb with link to raw ephys data: CH8_20210108_.nwb

/home/rnevers/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/h5py/_hl/dataset.py:541: DeprecationWarning: Passing None into shape arguments as an alias for () is deprecated.
  arr = numpy.ndarray(selection.mshape, dtype=new_dtype)
/home/rnevers/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/hdmf/spec/namespace.py:532: UserWarning: Ignoring cached namespace 'ndx-franklab-novela' version 0.0.011.37 because version 0.0.011.36 is already loaded.
  warn("Ignoring cached namespace '%s' version %s because version %s is already loaded."
/home/rnevers/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/hash.py:39: ResourceWarning: unclosed file <_io.BufferedReader name='/stelmo/nwb/raw/CH8_20210108.nwb'>
  return uuid_from_stream(Path(filepath).open("rb"), init_string=init_string)
/home/rnevers/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/external.py:234: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
  if check_hash:

Populate Session...
Populate ExperimenterList...
Populate ElectrodeGroup...
Populate Electrode...
Populate Raw...
Populate SampleCount...
Populate DIOEvents...
Populate SensorData
Populate TaskEpochs
Populate StateScriptFile
Populate VideoFile
RawPosition...

/home/rnevers/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/h5py/_hl/dataset.py:541: DeprecationWarning: Passing None into shape arguments as an alias for () is deprecated.
  arr = numpy.ndarray(selection.mshape, dtype=new_dtype)

Processing raw position data. Estimated sampling rate: 33.0 Hz

---------------------------------------------------------------------------
IntegrityError                            Traceback (most recent call last)
/tmp/ipykernel_63842/440004394.py in <module>
----> 1 nd.insert_sessions('CH8_20210108.nwb')

~/Src/nwb_datajoint/src/nwb_datajoint/data_import/insert_sessions.py in insert_sessions(nwb_file_names)
     40         copy_nwb_link_raw_ephys(nwb_file_name, out_nwb_file_name)
     41         Nwbfile().insert_from_relative_file_name(nwb_file_name)
---> 42         populate_all_common(out_nwb_file_name)
     43 
     44 

~/Src/nwb_datajoint/src/nwb_datajoint/common/populate_all_common.py in populate_all_common(nwb_file_name)
     41     VideoFile.populate(fp)
     42     print('RawPosition...')
---> 43     PositionSource().get_nwbf_position_source(nwb_file_name)
     44     RawPosition.populate(fp)
     45     # print('HeadDir...')

~/Src/nwb_datajoint/src/nwb_datajoint/common/common_behav.py in get_nwbf_position_source(self, nwb_file_name)
     49                 interval_dict['interval_list_name'] = pos_interval_list_name
     50                 interval_dict['valid_times'] = pdict['valid_times']
---> 51                 IntervalList().insert1(interval_dict, skip_duplicates=True)
     52                 # add this interval list to the table
     53                 key['nwb_file_name'] = nwb_file_name

~/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/table.py in insert1(self, row, **kwargs)
    264         For kwargs, see insert()
    265         """
--> 266         self.insert((row,), **kwargs)
    267 
    268     def insert(self, rows, replace=False, skip_duplicates=False, ignore_extra_fields=False, allow_direct_insert=None):

~/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/table.py in insert(self, rows, replace, skip_duplicates, ignore_extra_fields, allow_direct_insert)
    328                     duplicate=(' ON DUPLICATE KEY UPDATE `{pk}`=`{pk}`'.format(pk=self.primary_key[0])
    329                                if skip_duplicates else ''))
--> 330                 self.connection.query(query, args=list(
    331                     itertools.chain.from_iterable(
    332                         (v for v in r['values'] if v is not None) for r in rows)))

~/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/connection.py in query(self, query, args, as_dict, suppress_warnings, reconnect)
    298         cursor = self._conn.cursor(cursor=cursor_class)
    299         try:
--> 300             self._execute_query(cursor, query, args, suppress_warnings)
    301         except errors.LostConnectionError:
    302             if not reconnect:

~/miniconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/connection.py in _execute_query(cursor, query, args, suppress_warnings)
    264                 cursor.execute(query, args)
    265         except client.err.Error as err:
--> 266             raise translate_query_error(err, query)
    267 
    268     def query(self, query, args=(), *, as_dict=False, suppress_warnings=True, reconnect=None):

IntegrityError: Cannot add or update a child row: a foreign key constraint fails (`common_interval`.`interval_list`, CONSTRAINT `interval_list_ibfk_1` FOREIGN KEY (`nwb_file_name`) REFERENCES `common_session`.`_session` (`nwb_file_name`) ON UPDATE CASCADE)

It should be noted that we have a custom position-tracking solution outside of the default used in Trodes, but the structure of our resulting binary files should be the same as what Trodes puts out. We did not get this error when we previously inserted these files (about 3 months ago).
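
A quick diagnostic for this constraint (a sketch): IntervalList references Session by nwb_file_name, so it is worth checking which file name actually has a Session entry, the original or the copy with the linked raw data:

from nwb_datajoint.common import Session

# if only one of these has an entry, inserts that use the other name will
# violate the interval_list -> session foreign key
print(Session & {'nwb_file_name': 'CH8_20210108.nwb'})
print(Session & {'nwb_file_name': 'CH8_20210108_.nwb'})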

Add schema and code for cluster annealing

Given spike sorting results across epochs, we need to find potential correspondences and save the bootstrapped distances between clusters, etc., in a table so that we can choose linking criteria and obtain the linked clusters.

Clarify value of KACHERY_P2P_API_PORT

Current README includes this line:

export KACHERY_P2P_API_PORT="some-port-number"  # (optional)

Two issues:

  • This variable is required for the spike sorting tutorial notebook (tutorial 1).
  • It is not clear how this port number should be chosen. At first I tried an arbitrary number (actually the port number that was in the tutorial notebook), and it did not work. What worked for me was to first start a kachery p2p daemon (using the kachery-p2p-start-daemon … command), look at the port number in its output, and set KACHERY_P2P_API_PORT to match the port of the running daemon.

MissingAttributeError while inserting SpikeSorterParameters (minor issue)

Running the current tutorial notebook 1_spikesorting.ipynb… I hit this bug before reaching the error reported in issue #40. It was small enough that I could fix it and proceed.

When inserting the SpikeSorterParameters, the current cell looks like:

# Insert
SpikeSorterParameters.insert1({'sorter_name': sorter_name,
                               'parameter_set_name': parameter_set_name,
                               'parameter_dict': param_dict}, skip_duplicates=True)

This gives the following MissingAttributeError:

---------------------------------------------------------------------------
MissingAttributeError                     Traceback (most recent call last)
<ipython-input-33-c7a9920fd162> in <module>
      1 # Insert
----> 2 SpikeSorterParameters.insert1({'sorter_name': sorter_name,
      3                                'parameter_set_name': parameter_set_name,
      4                                'parameter_dict': param_dict}, skip_duplicates=True)

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/table.py in insert1(self, row, **kwargs)
    264         For kwargs, see insert()
    265         """
--> 266         self.insert((row,), **kwargs)
    267 
    268     def insert(self, rows, replace=False, skip_duplicates=False, ignore_extra_fields=False, allow_direct_insert=None):

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/table.py in insert(self, rows, replace, skip_duplicates, ignore_extra_fields, allow_direct_insert)
    328                     duplicate=(' ON DUPLICATE KEY UPDATE `{pk}`=`{pk}`'.format(pk=self.primary_key[0])
    329                                if skip_duplicates else ''))
--> 330                 self.connection.query(query, args=list(
    331                     itertools.chain.from_iterable(
    332                         (v for v in r['values'] if v is not None) for r in rows)))

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/connection.py in query(self, query, args, as_dict, suppress_warnings, reconnect)
    298         cursor = self._conn.cursor(cursor=cursor_class)
    299         try:
--> 300             self._execute_query(cursor, query, args, suppress_warnings)
    301         except errors.LostConnectionError:
    302             if not reconnect:

~/anaconda3/envs/nwb_datajoint6/lib/python3.8/site-packages/datajoint/connection.py in _execute_query(cursor, query, args, suppress_warnings)
    264                 cursor.execute(query, args)
    265         except client.err.Error as err:
--> 266             raise translate_query_error(err, query)
    267 
    268     def query(self, query, args=(), *, as_dict=False, suppress_warnings=True, reconnect=None):

MissingAttributeError: Field 'filter_parameter_dict' doesn't have a default value

I was able to fix this by doing

# Insert
SpikeSorterParameters.insert1({'sorter_name': sorter_name,
                               'parameter_set_name': parameter_set_name,
                               'parameter_dict': param_dict,
                               'filter_parameter_dict': ms4_default_params['filter_parameter_dict'].copy()
                              },
                              skip_duplicates=True)

Allow user config of NWB naming convention and hierarchy

The data import code assumes certain names for NWB objects. For example, it is assumed that spatial position data exists in a pynwb.behavior.Position object named "position" anywhere in the file. Datasets not generated by the Frank Lab may not follow this convention for their spatial position data: they may use a different name, or there might be multiple pynwb.behavior.Position objects named "position".

In order to flexibly import these files, it would be useful to have a config text/YAML file that allows the user to customize what names (and perhaps also parent/ancestry/path) to use to find certain elements to import from a given NWB file.

This might look something like:

Position:
- neurodata_type: pynwb.behavior.Position
- name: position

AssociatedFiles:
- neurodata_type: pynwb.core.ProcessingModule
- name: associated_files

StateScriptFile:
- neurodata_type: ndx_franklab_novela.AssociatedFiles
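
A sketch of the kind of lookup such a config could drive (a hypothetical helper, assuming each config entry is parsed into a flat dict of neurodata_type and name):

from pynwb.behavior import Position

def find_position_objects(nwbf, name='position'):
    """Hypothetical helper: collect every Position object with the configured
    name, wherever it lives in the NWB file."""
    return [obj for obj in nwbf.objects.values()
            if isinstance(obj, Position) and obj.name == name]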

New linkraw files don't have acquisition object

After merging the new changes I was not able to run the populate notebook. It errors out in DIOEvents (see below), but there is an earlier problem where it cannot find the acquisition object.

Opening the NWB file gives me something that doesn't contain an acquisition object at all, so I think that needs to be fixed.

Creating a copy of NWB file beans20190718-trim.nwb with link to raw ephys data: beans20190718-trim_linkraw.nwb
Populate Session...
Institution...
Lab...
LabMember...
Subject...
DataAcquisitionDevice...
CameraDevice...
Inserted ['beans sleep camera', 'beans run camera']
Probe...
Skipping Apparatus for now...
IntervalList...
Populate NwbfileKachery...
Computing SHA-1 and storing in kachery...
Computing sha1 and manifest of /Users/loren/data/nwb_builder_test_data/beans20190718-trim_linkraw.nwb
Populate ExperimenterList...
Populate ElectrodeGroup...
Populate Electrode...
Populate Raw...
WARNING: Unable to get aquisition object in: /Users/loren/data/nwb_builder_test_data/beans20190718-trim_linkraw.nwb
Populate SampleCount...
WARNING: Unable to get sample count object in: /Users/loren/data/nwb_builder_test_data/beans20190718-trim.nwb
WARNING: Unable to get sample count object in: /Users/loren/data/nwb_builder_test_data/beans20190718-trim_linkraw.nwb
Populate DIOEvants...

Error here...
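
A quick check (sketch) of whether the linked-raw copy actually carries an acquisition group:

import pynwb

path = '/Users/loren/data/nwb_builder_test_data/beans20190718-trim_linkraw.nwb'
with pynwb.NWBHDF5IO(path, 'r', load_namespaces=True) as io:
    nwbf = io.read()
    # an empty dict here means the copy really did lose the acquisition object
    print(dict(nwbf.acquisition))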

Potential issue: LFP table has no filtered_data_object_id

In reading through the code, I saw these lines in the LFP.nwb_object method:

https://github.com/LorenFrankLab/nwb_datajoint/blob/611864e86d5a7d9c64cc48e57ccc981bcf5b1e24/src/nwb_datajoint/common/common_ephys.py#L348-L349

This seems to be a typo because the LFP table has no 'filtered_data_object_id' property, so this fetch will always return nothing (or raise an error). Note that LFP has a 'lfp_object_id' property and LFPBand has a 'filtered_data_object_id' property.

So should this line instead say:

nwb_object_id = (self & {'analysis_file_name': lfp_file_name}).fetch1('lfp_object_id')

I do not know if this is used, but wanted to draw attention to it in case it is.

LFP tutorial nd.common.LFP().populate() throws error (no such file or directory)

Another minor issue.

In nwbdj_lfp_tutorial.ipynb, running nd.common.LFP().populate() threw an error:

OSError: Unable to create file (unable to open file: name = 'C:/base_dir\analysis\beans20190718-trim_00000001.nwb', errno = 2, error message = 'No such file or directory', flags = 13, o_flags = 302)

Once I created an analysis directory in my base directory everything was fine.
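
For anyone hitting the same thing, a sketch of pre-creating the expected layout (assuming the analysis folder just needs to exist under the base directory):

import os

base_dir = os.environ['NWB_DATAJOINT_BASE_DIR']
os.makedirs(os.path.join(base_dir, 'analysis'), exist_ok=True)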

Write documentation describing best practices for adding a new analysis

Typically, analyses require specific parameters, and our default is to have a manually entered table holding the parameters for a given analysis and then an autopopulated table with the analysis itself. We need documentation describing that structure and showing examples to make it easier for new users to add analyses.
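
A minimal sketch of that pattern, with hypothetical names (the documentation would flesh this out with real examples):

import datajoint as dj

schema = dj.schema('example_analysis')  # hypothetical schema

@schema
class MyAnalysisParameters(dj.Manual):
    definition = """
    # manually entered parameters for a given analysis
    param_set_name: varchar(80)
    ---
    params: blob   # dictionary of analysis parameters
    """

@schema
class MyAnalysis(dj.Computed):
    definition = """
    # autopopulated results, one entry per parameter set (plus any upstream keys)
    -> MyAnalysisParameters
    ---
    result: blob
    """

    def make(self, key):
        params = (MyAnalysisParameters & key).fetch1('params')
        key['result'] = run_analysis(params)  # run_analysis is a hypothetical function
        self.insert1(key)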

Incorrect syntax in tutorial notebook 0

Cell 13 in 0_intro.ipynb says the following:

The following query returns all interval_list_name that is not 01_s1 or 04_r2

((IntervalList & {'nwb_file_name':nwb_file_name2}) - ({'interval_list_name':'01_s1'} and \
 {'interval_list_name':'04_r2'})).fetch('interval_list_name')

Desired output:

array(['02_r1', '03_s2', 'pos 0 valid times', 'pos 1 valid times', 'pos 2 valid times', 'pos 3 valid times', 'raw data valid times'], dtype=object)

Actual output:

array(['01_s1', '02_r1', '03_s2', 'pos 0 valid times', 'pos 1 valid times', 'pos 2 valid times', 'pos 3 valid times', 'raw data valid times'], dtype=object)

n.b. that the actual output still includes '01_s1'

When we use the Python logical operators and and or, we are subject to Python's 'truthy' evaluation behavior. All numbers are truthy except for 0 (and 0.0, etc.), and non-empty containers, including the dictionaries here, are truthy. The statement dict1 and dict2 first checks whether dict1 is truthy. In our case it is, so the statement moves on to check the truthiness of dict2 and finds that it is also truthy. The key to the unintended output is that the return value of and is the actual value of the last evaluated clause, so the output of dict1 and dict2 is dict2.

The end result of all this is that the given code:

((IntervalList & {'nwb_file_name':nwb_file_name2}) - ({'interval_list_name':'01_s1'} and \
 {'interval_list_name':'04_r2'})).fetch('interval_list_name')

is equivalent to:

((IntervalList & {'nwb_file_name':nwb_file_name2}) - ({'interval_list_name':'04_r2'})).fetch('interval_list_name')

because the and statement collapses ({'interval_list_name':'01_s1'} and {'interval_list_name':'04_r2'}) into {'interval_list_name':'04_r2'}

I'm pretty sure this is the cause of the unintended behavior, because you can see the effects of short-circuiting if you replace the and with an or.

TL;DR: I don't think Python logical operators should be used this way when dictionaries serve as the conditions in a DataJoint query.
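
For reference, one way to express the intended query (a sketch): restricting or subtracting by a list of dicts is treated as an OR in DataJoint, so subtracting the list removes both interval lists:

((IntervalList & {'nwb_file_name': nwb_file_name2})
 - [{'interval_list_name': '01_s1'}, {'interval_list_name': '04_r2'}]
 ).fetch('interval_list_name')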

storage_dirs.kachery_storage_dir() throws error in Windows

Running through Populate_from_NWB_tutorial.ipynb:

import os
data_dir = r'C:/Users/Eric/Desktop/tmp_stuff/nwb_data'  # CHANGE ME
os.environ['NWB_DATAJOINT_BASE_DIR'] = data_dir
os.environ['KACHERY_STORAGE_DIR'] = os.path.join(data_dir, 'kachery-storage')

Later the following throws an error:
nd.insert_sessions(['beans20190718-trim.nwb'])

AssertionError:
Although KACHERY_STORAGE_DIR is set, it is not equal to $NWB_DATAJOINT_BASE_DIR/kachery-storage

Current values:
NWB_DATAJOINT_BASE_DIR=C:/Users/Eric/Desktop/tmp_stuff/nwb_data
KACHERY_STORAGE_DIR=C:/Users/Eric/Desktop/tmp_stuff/nwb_data\kachery-storage

You must update these variables before proceeding

It's because I'm on Windows, with its mixing of backward and forward slashes. I temporarily fixed it using an absolutely horrible hack in storage_dirs.py, changing the assertion to:
assert p == base + '/kachery-storage' or p == base + '\kachery-storage'
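
A less brittle sketch of the same check, comparing normalized paths so forward and backward slashes are treated the same:

import os

base = os.environ['NWB_DATAJOINT_BASE_DIR']
expected = os.path.join(base, 'kachery-storage')
actual = os.environ['KACHERY_STORAGE_DIR']
# normalize separators so the comparison also holds on Windows
assert os.path.normpath(actual) == os.path.normpath(expected), (
    'KACHERY_STORAGE_DIR is set but is not equal to '
    '$NWB_DATAJOINT_BASE_DIR/kachery-storage')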

Lockfile

@lfrank @edeno How should we handle locking of the files? Where should the lockfiles live, i.e. where should NWB_LOCK_FILE and ANALYSIS_LOCK_FILE env vars point to?

Mark extraction schema

Add a schema that takes waveforms (currently saved following spike sorting) and extracts marks based on a mark parameters schema.

Unable to find client auth file error when importing nwb_datajoint

After git pulling the most recent repo and reinstalling the environment, I get this error message when importing nwb_datajoint. Thanks in advance for any help.

Connecting [email protected]:3306


Exception Traceback (most recent call last)
/tmp/ipykernel_1537406/4165658341.py in
7 # dj.config['database.password']= 'simple'
8
----> 9 import nwb_datajoint as nd
10
11 # ignore datajoint+jupyter async warnings

~/Src/nwb_datajoint/src/nwb_datajoint/init.py in
9 import ndx_franklab_novela
10
---> 11 from .data_import.insert_sessions import insert_sessions
12 from .data_import.storage_dirs import base_dir, check_env, kachery_storage_dir
13

~/Src/nwb_datajoint/src/nwb_datajoint/data_import/init.py in
----> 1 from .insert_sessions import insert_sessions
2 from .storage_dirs import base_dir, check_env, kachery_storage_dir

~/Src/nwb_datajoint/src/nwb_datajoint/data_import/insert_sessions.py in
4 import pynwb
5
----> 6 from ..common import Nwbfile, get_raw_eseries, populate_all_common
7 from .storage_dirs import check_env
8

~/Src/nwb_datajoint/src/nwb_datajoint/common/init.py in
22 from .common_sensors import SensorData
23 from .common_session import ExperimenterList, Session
---> 24 from .common_spikesorting import (SortGroup, SpikeSortingFilterParameters, SpikeSortingArtifactDetectionParameters,
25 SpikeSortingRecordingSelection, SpikeSortingRecording,
26 SpikeSortingWorkspace,

~/Src/nwb_datajoint/src/nwb_datajoint/common/common_spikesorting.py in
16 import pynwb
17 import scipy.stats as stats
---> 18 import sortingview as sv
19 import spikeextractors as se
20 from spikeextractors.extractors.numpyextractors.numpyextractors import NumpySortingExtractor

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/init.py in
11 from .extractors import H5SortingExtractorV1
12 from .extractors.h5extractors.h5recordingextractorv1 import H5RecordingExtractorV1
---> 13 from .SpikeAmplitudes import SpikeAmplitudes

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/SpikeAmplitudes/init.py in
----> 1 from .SpikeAmplitudes import SpikeAmplitudes

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/SpikeAmplitudes/SpikeAmplitudes.py in
1 import seriesview as sev
----> 2 from ..backend.extensions.spikeamplitudes.spikeamplitudes import runtask_fetch_spike_amplitudes
3 from ..extractors.labboxephysrecordingextractor import LabboxEphysRecordingExtractor
4 from ..extractors.labboxephyssortingextractor import LabboxEphysSortingExtractor
5

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/backend/extensions/init.py in
1 dummy = 0
----> 2 from .averagewaveforms import *
3 from .clusters import *
4 from .correlograms import *
5 from .snippets import *

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/backend/extensions/averagewaveforms/init.py in
----> 1 from .fetch_average_waveforms_2 import *

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/backend/extensions/averagewaveforms/fetch_average_waveforms_2.py in
7 from sortingview.serialize_wrapper import serialize_wrapper
8 # from labbox import LabboxContext
----> 9 from sortingview.config import job_cache, job_handler
10 from sortingview.helpers import prepare_snippets_h5, get_unit_waveforms_from_snippets_h5
11

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/config/init.py in
----> 1 from .job_cache import job_cache
2 from .job_handler import job_handler

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/sortingview/config/job_cache.py in
1 import hither2 as hi
2
----> 3 job_cache = hi.JobCache(feed_name='sortingview-job-cache')

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/hither2/_job_cache.py in init(self, feed_name, feed_uri)
12 raise Exception('You cannot specify both feed_name and feed_id')
13 if feed_name is not None:
---> 14 feed = kc.load_feed(feed_name, create=True)
15 elif feed_uri is not None:
16 feed = kc.load_feed(feed_uri)

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/kachery_client/main.py in load_feed(feed_name_or_uri, timeout_sec, create)
223 Feed: The loaded feed
224 """
--> 225 return _load_feed(feed_name_or_uri=feed_name_or_uri, timeout_sec=timeout_sec, create=create)
226
227 def watch_for_new_messages(subfeed_watches: Dict[str, dict], *, wait_msec, channel: str='local', signed=False) -> Dict[str, Any]:

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/kachery_client/_feeds.py in _load_feed(feed_name_or_uri, timeout_sec, create)
331 else:
332 feed_name = feed_name_or_uri
--> 333 feed_id = _get_feed_id(feed_name, create=create)
334 return _load_feed(f'feed://{feed_id}')
335

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/kachery_client/_feeds.py in _get_feed_id(feed_name, create)
291
292 def _get_feed_id(feed_name, *, create=False):
--> 293 feed_id = _get({'type': 'feed_id_for_name', 'feed_name': feed_name})
294 if (feed_id is None) or (not isinstance(feed_id, str)):
295 if create:

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/kachery_client/_mutables.py in _get(key)
15
16 def _get(key: Union[str, dict, list]):
---> 17 daemon_url, headers = _daemon_url()
18 url = f'{daemon_url}/mutable/get'
19 x = _http_post_json(url, dict(

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/kachery_client/_daemon_connection.py in _daemon_url(daemon_port, daemon_host, no_client_auth)
55 if not no_client_auth:
56 headers = {
---> 57 'KACHERY-CLIENT-AUTH-CODE': _get_client_auth_code()
58 }
59 else:

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/kachery_client/_daemon_connection.py in _get_client_auth_code()
22 elapsed = time.time() - _client_auth_code_info['timestamp']
23 if elapsed > 60:
---> 24 code = _read_client_auth_code()
25 # if _client_auth_code_info['code'] and (code != _client_auth_code_info['code']):
26 # print(f'# Got new client auth code: {code}')

~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/kachery_client/_daemon_connection.py in _read_client_auth_code()
33 p = f'{ksd}/client-auth'
34 if not os.path.isfile(p):
---> 35 raise Exception(f'Unable to find client auth file (perhaps daemon is not running): {p}')
36 try:
37 with open(p, 'r') as f:

Exception: Unable to find client auth file (perhaps daemon is not running): None/client-auth

SpikeSorting: check to see if recording extractor has been created

There will likely be times when one wants to sort the same data multiple times (e.g. with different sorters). In these cases we should check the SpikeSorting table to see whether all of the parameters match those of a previous sort, and if so, get the recording extractor from that workspace if it exists.
