
yttrilab / b-soid


Behavioral segmentation of open field in DeepLabCut, or B-SOID ("B-side"), is a pipeline that pairs unsupervised pattern recognition with supervised classification to achieve fast predictions of behaviors that are not predefined by users.

License: GNU General Public License v3.0

MATLAB 18.81% Python 77.00% C 3.12% C++ 0.99% CSS 0.06% Shell 0.02%
deeplabcut unsupervised-learning-algorithm behavior-analysis dimensionality-reduction neuroscience discover-behaviors umap-hdbscan openpose sleap

b-soid's Introduction

[B-SOiD flowchart figure] [DOI badge]

Why B-SOiD ("B-side")?

DeepLabCut 1,2,3, SLEAP 4, and OpenPose 5 have revolutionized the way behavioral scientists analyze data. These algorithms utilize recent advances in computer vision and deep learning to automatically estimate 3D poses. Interpreting an animal's positions can be useful in studying behavior; however, position alone does not encompass the whole dynamic range of naturalistic behaviors.

B-SOiD identifies behaviors using a unique pipeline in which unsupervised learning meets supervised classification. The unsupervised behavioral segmentation relies on non-linear dimensionality reduction 6,7,9,10, whereas the supervised classification uses standard scikit-learn 8.
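For readers who want a concrete feel for how these pieces fit together, here is a minimal, hypothetical sketch of the embed-cluster-classify pattern using the same libraries (illustrative only; the parameter values and variable names are assumptions, not B-SOiD's actual implementation):

    import numpy as np
    import umap
    import hdbscan
    from sklearn.ensemble import RandomForestClassifier

    # features: (n_frames, n_features) pose-derived feature matrix (placeholder data)
    features = np.random.rand(5000, 7)

    # 1) non-linear dimensionality reduction
    embedding = umap.UMAP(n_neighbors=60, min_dist=0.0, n_components=3).fit_transform(features)

    # 2) unsupervised segmentation; HDBSCAN labels noise frames as -1
    assignments = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)

    # 3) supervised classifier trained on the discovered groups (scikit-learn)
    keep = assignments >= 0
    clf = RandomForestClassifier(random_state=0).fit(features[keep], assignments[keep])
    new_labels = clf.predict(features)  # would normally be applied to new data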

Behavioral segmentation of open field in DeepLabCut, or B-SOiD ("B-side"), as the name suggests, was first designed as a pipeline that uses pose estimation files from DeepLabCut as input. It has since been extended to handle DeepLabCut (.h5, .csv), SLEAP (.h5), and OpenPose (.json) files.

Installation

Step 1: Install Anaconda/Python3

Step 2: Clone the B-SOiD repository using the Anaconda Prompt

Clone the web URL with git (example below) or download the ZIP.

Change your current working directory to the location where you want the cloned directory to be made.

git clone https://github.com/YttriLab/B-SOID.git

Usage

Step 1: Setup. Open an Anaconda/Python3 instance and install dependencies from the requirements file.

cd /path/to/B-SOID/

For macOS users:

conda env create -n bsoid_v2 -f requirements.yaml

or for Windows users:

conda env create -n bsoid_v2 -f requirements_win.yaml

Then activate the environment:

conda activate bsoid_v2

You should now see (bsoid_v2) $yourusername@yourmachine ~ %

Step 2: Run the app!

streamlit run bsoid_app.py

Resources

We have provided our 6-body-part DeepLabCut model. We also included two example 5-minute clips (labeled_clip1, labeled_clip2) as a proxy for how well we trained our model. The raw videos (raw_clip1, raw_clip2) and the corresponding h5/pickle/csv files are included as well.

Archives

Contributing

Pull requests are welcome. For changes that you would like to see, open an issue. Join our Slack group for more immediate feedback.

There are many exciting avenues to explore based on this work. Please do not hesitate to contact us for collaborations.

License

This software package is provided without warranty of any kind and is licensed under the GNU General Public License v3.0. If you use our algorithm and/or model/data, please cite us! Preprints and peer-reviewed publications will be announced in the following section.

News

September 2019: First B-SOiD preprint on bioRxiv

March 2020: Updated version of our preprint on bioRxiv

References

  1. Mathis A, Mamidanna P, Cury KM, Abe T, Murthy VN, Mathis MW, Bethge M. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci. 2018 Sep;21(9):1281-1289. doi: 10.1038/s41593-018-0209-y. Epub 2018 Aug 20. PubMed PMID: 30127430.

  2. Nath T, Mathis A, Chen AC, Patel A, Bethge M, Mathis MW. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nat Protoc. 2019 Jul;14(7):2152-2176. doi: 10.1038/s41596-019-0176-0. Epub 2019 Jun 21. PubMed PMID: 31227823.

  3. Insafutdinov E., Pishchulin L., Andres B., Andriluka M., Schiele B. (2016) DeeperCut: A Deeper, Stronger, and Faster Multi-person Pose Estimation Model. In: Leibe B., Matas J., Sebe N., Welling M. (eds) Computer Vision – ECCV 2016. ECCV 2016. Lecture Notes in Computer Science, vol 9910. Springer, Cham

  4. Pereira, Talmo D., Nathaniel Tabris, Junyu Li, Shruthi Ravindranath, Eleni S. Papadoyannis, Z. Yan Wang, David M. Turner, et al. 2020. “SLEAP: Multi-Animal Pose Tracking.” bioRxiv.

  5. Cao Z, Hidalgo Martinez G, Simon T, Wei SE, Sheikh YA. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Trans Pattern Anal Mach Intell. 2019 Jul 17. Epub ahead of print. PMID: 31331883.

  6. McInnes, L., Healy, J., & Melville, J. (2018). UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction.

  7. McInnes, L., Healy, J., & Astels, S. (2017). hdbscan: Hierarchical density based clustering. The Journal of Open Source Software, 2(11), 205.

  8. Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.

  9. L.J.P. van der Maaten. Accelerating t-SNE using Tree-Based Algorithms. Journal of Machine Learning Research 15(Oct):3221-3245, 2014.

  10. Chen M. EM Algorithm for Gaussian Mixture Model (EM GMM). MATLAB Central File Exchange. Retrieved July 15, 2019.

b-soid's People

Contributors

runninghsus, shinhs0506, yttrilab


b-soid's Issues

Python Tutorial: Conda Environment Issue with BHTSNE Module

Great work porting bsoid to python! Really, really thrilled to try it out!

Unfortunately, the BHTSNE module cannot be installed at the moment (at least in my case). In your tutorial video/GIF, you actually do not add it on the command line; is there a reason?
The following error pops up when running (after successfully creating the new environment and installing ipython)
pip install pandas tqdm matplotlib opencv-python seaborn scikit-learn bhtsne:

Collecting pandas
  Using cached pandas-1.0.3-cp38-cp38-win_amd64.whl (8.9 MB)
Collecting tqdm
  Using cached tqdm-4.45.0-py2.py3-none-any.whl (60 kB)
Collecting matplotlib
  Using cached matplotlib-3.2.1-cp38-cp38-win_amd64.whl (9.2 MB)
Collecting opencv-python
  Using cached opencv_python-4.2.0.34-cp38-cp38-win_amd64.whl (33.1 MB)
Collecting seaborn
  Using cached seaborn-0.10.1-py3-none-any.whl (215 kB)
Collecting scikit-learn
  Using cached scikit_learn-0.22.2.post1-cp38-cp38-win_amd64.whl (6.6 MB)
Collecting bhtsne
  Using cached bhtsne-0.1.9.tar.gz (86 kB)
    ERROR: Command errored out with exit status 1:
     command: 'D:\Anaconda\envs\bsoid\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-_9m_4o00\\bhtsne\\setup.py'"'"'; __file__='"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-_9m_4o00\\bhtsne\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\schwa\AppData\Local\Temp\pip-install-_9m_4o00\bhtsne\pip-egg-info'
         cwd: C:\Users\schwa\AppData\Local\Temp\pip-install-_9m_4o00\bhtsne\
    Complete output (5 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\schwa\AppData\Local\Temp\pip-install-_9m_4o00\bhtsne\setup.py", line 4, in <module>
        from Cython.Build import cythonize
    ModuleNotFoundError: No module named 'Cython'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

After installing Cython with pip install cython the following error is produced:

Collecting pandas
  Using cached pandas-1.0.3-cp38-cp38-win_amd64.whl (8.9 MB)
Collecting tqdm
  Using cached tqdm-4.45.0-py2.py3-none-any.whl (60 kB)
Collecting matplotlib
  Using cached matplotlib-3.2.1-cp38-cp38-win_amd64.whl (9.2 MB)
Collecting opencv-python
  Using cached opencv_python-4.2.0.34-cp38-cp38-win_amd64.whl (33.1 MB)
Collecting seaborn
  Using cached seaborn-0.10.1-py3-none-any.whl (215 kB)
Collecting scikit-learn
  Using cached scikit_learn-0.22.2.post1-cp38-cp38-win_amd64.whl (6.6 MB)
Collecting bhtsne
  Using cached bhtsne-0.1.9.tar.gz (86 kB)
    ERROR: Command errored out with exit status 1:
     command: 'D:\Anaconda\envs\bsoid\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-ya4007uh\\bhtsne\\setup.py'"'"'; __file__='"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-ya4007uh\\bhtsne\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\schwa\AppData\Local\Temp\pip-install-ya4007uh\bhtsne\pip-egg-info'
         cwd: C:\Users\schwa\AppData\Local\Temp\pip-install-ya4007uh\bhtsne\
    Complete output (5 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\schwa\AppData\Local\Temp\pip-install-ya4007uh\bhtsne\setup.py", line 5, in <module>
        import numpy
    ModuleNotFoundError: No module named 'numpy'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Splitting up the pip install into pip install pandas tqdm matplotlib opencv-python seaborn scikit-learn and pip install bhtsne solves that issue, but raises a new error:

(bsoid) C:\Users\schwa>pip install bhtsne
Collecting bhtsne
  Using cached bhtsne-0.1.9.tar.gz (86 kB)
Requirement already satisfied: numpy in d:\anaconda\envs\bsoid\lib\site-packages (from bhtsne) (1.18.3)
Requirement already satisfied: cython in d:\anaconda\envs\bsoid\lib\site-packages (from bhtsne) (0.29.17)
Building wheels for collected packages: bhtsne
  Building wheel for bhtsne (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: 'D:\Anaconda\envs\bsoid\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-4ha89yy6\\bhtsne\\setup.py'"'"'; __file__='"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-4ha89yy6\\bhtsne\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\schwa\AppData\Local\Temp\pip-wheel-r3sbhgc2'
       cwd: C:\Users\schwa\AppData\Local\Temp\pip-install-4ha89yy6\bhtsne\
  Complete output (12 lines):
  D:\Anaconda\envs\bsoid\lib\distutils\extension.py:131: UserWarning: Unknown Extension options: 'extra_compile_flags'
    warnings.warn(msg)
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-3.8
  creating build\lib.win-amd64-3.8\bhtsne
  copying bhtsne\__init__.py -> build\lib.win-amd64-3.8\bhtsne
  running build_ext
  building 'bhtsne_wrapper' extension
  error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
  ----------------------------------------
  ERROR: Failed building wheel for bhtsne
  Running setup.py clean for bhtsne
Failed to build bhtsne
Installing collected packages: bhtsne
    Running setup.py install for bhtsne ... error
    ERROR: Command errored out with exit status 1:
     command: 'D:\Anaconda\envs\bsoid\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-4ha89yy6\\bhtsne\\setup.py'"'"'; __file__='"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-4ha89yy6\\bhtsne\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\schwa\AppData\Local\Temp\pip-record-bqx3h2oo\install-record.txt' --single-version-externally-managed --compile --install-headers 'D:\Anaconda\envs\bsoid\Include\bhtsne'
         cwd: C:\Users\schwa\AppData\Local\Temp\pip-install-4ha89yy6\bhtsne\
    Complete output (12 lines):
    D:\Anaconda\envs\bsoid\lib\distutils\extension.py:131: UserWarning: Unknown Extension options: 'extra_compile_flags'
      warnings.warn(msg)
    running install
    running build
    running build_py
    creating build
    creating build\lib.win-amd64-3.8
    creating build\lib.win-amd64-3.8\bhtsne
    copying bhtsne\__init__.py -> build\lib.win-amd64-3.8\bhtsne
    running build_ext
    building 'bhtsne_wrapper' extension
    error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
    ----------------------------------------
ERROR: Command errored out with exit status 1: 'D:\Anaconda\envs\bsoid\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-4ha89yy6\\bhtsne\\setup.py'"'"'; __file__='"'"'C:\\Users\\schwa\\AppData\\Local\\Temp\\pip-install-4ha89yy6\\bhtsne\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\schwa\AppData\Local\Temp\pip-record-bqx3h2oo\install-record.txt' --single-version-externally-managed --compile --install-headers 'D:\Anaconda\envs\bsoid\Include\bhtsne' Check the logs for full command output.

This seems to be a known but unsolved issue with bhtsne. I am wondering if you encountered the same issue?

Here is my conda list:

# packages in environment at D:\Anaconda\envs\bsoid:
#
# Name                    Version                   Build  Channel
backcall                  0.1.0                    py38_0
ca-certificates           2020.1.1                      0
certifi                   2020.4.5.1               py38_0
colorama                  0.4.3                      py_0
cycler                    0.10.0                   pypi_0    pypi
cython                    0.29.17                  pypi_0    pypi
decorator                 4.4.2                      py_0
ipython                   7.13.0           py38h5ca1d4c_0
ipython_genutils          0.2.0                    py38_0
jedi                      0.17.0                   py38_0
joblib                    0.14.1                   pypi_0    pypi
kiwisolver                1.2.0                    pypi_0    pypi
matplotlib                3.2.1                    pypi_0    pypi
numpy                     1.18.3                   pypi_0    pypi
opencv-python             4.2.0.34                 pypi_0    pypi
openssl                   1.1.1g               he774522_0
pandas                    1.0.3                    pypi_0    pypi
parso                     0.7.0                      py_0
pickleshare               0.7.5                 py38_1000
pip                       20.0.2                   py38_1
prompt-toolkit            3.0.4                      py_0
prompt_toolkit            3.0.4                         0
pygments                  2.6.1                      py_0
pyparsing                 2.4.7                    pypi_0    pypi
python                    3.8.2               h5fd99cc_11
python-dateutil           2.8.1                    pypi_0    pypi
pytz                      2020.1                   pypi_0    pypi
scikit-learn              0.22.2.post1             pypi_0    pypi
scipy                     1.4.1                    pypi_0    pypi
seaborn                   0.10.1                   pypi_0    pypi
setuptools                46.1.3                   py38_0
six                       1.14.0                   py38_0
sqlite                    3.31.1               h2a8f88b_1
tqdm                      4.45.0                   pypi_0    pypi
traitlets                 4.3.3                    py38_0
vc                        14.1                 h0510ff6_4
vs2015_runtime            14.16.27012          hf0eaf9b_1
wcwidth                   0.1.9                      py_0
wheel                     0.34.2                   py38_0
wincertstore              0.2                      py38_0
zlib                      1.2.11               h62dcd97_4

Multiple Licence Issue

Hi,

There are currently three licenses referenced in this repo, and I'm relatively lost as to which one applies to this project:
LICENSE states that the repo is under the GNU General Public License v3.0,
but under the License part of the README, it is stated that the project is licensed under the GNU Lesser General Public License v3.0.
And at the end of the aforementioned part, the GNU Affero General Public License v3.0 is linked.

Thank you, and I hope you can help clarify which license applies to this project.
Nolan

Potential typo in bsoid_py feature extraction?

File to be examined:

  • bsoid_py/classify.py

Function to be examined:

  • bsoid_extract()

Issue:

Upon reviewing the code responsible for feature engineering (here called "feature extraction"), there seems to be an anomaly in how an intermediate feature is calculated. See the variable cfp. Below is an excerpt of the current state:

        cfp = np.vstack(((data[m][:, 2 * bodyparts.get('Forepaw/Shoulder1')] +
                          data[m][:, 2 * bodyparts.get('Forepaw/Shoulder2')]) / 2,
                         (data[m][:, 2 * bodyparts.get('Forepaw/Shoulder1') + 1] +
                          data[m][:, 2 * bodyparts.get('Forepaw/Shoulder1') + 1]) / 2)).T

Look at the final two lines in the 4-line code excerpt above. It appears as though the average y-value between the forepaws/shoulders is calculated using only Forepaw/Shoulder1 instead of the average of Forepaw/Shoulder1 and Forepaw/Shoulder2. I would like to suggest a small amendment:

        cfp = np.vstack(((data[m][:, 2 * bodyparts.get('Forepaw/Shoulder1')] +
                          data[m][:, 2 * bodyparts.get('Forepaw/Shoulder2')]) / 2,
                         (data[m][:, 2 * bodyparts.get('Forepaw/Shoulder1') + 1] +
                          data[m][:, 2 * bodyparts.get('Forepaw/Shoulder2') + 1]) / 2)).T

The only diff between the code chunks (the actual copy/paste on top, the suggested edit on the bottom) is on the final line. Let me know if this is indeed an issue and/or if a different solution is necessary to remedy it. Thank you for your time.
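One way to make this kind of x/y column arithmetic harder to get wrong is to factor it into a small helper. A sketch, assuming the same [x0, y0, x1, y1, ...] column layout as the excerpt above (the helper names are hypothetical, not part of the repo):

    def xy(data_m, idx):
        # (n_frames, 2) slice holding the x and y columns of body part index idx
        return data_m[:, 2 * idx:2 * idx + 2]

    def midpoint(data_m, idx_a, idx_b):
        # frame-by-frame average position of two body parts
        return (xy(data_m, idx_a) + xy(data_m, idx_b)) / 2

    # e.g. cfp = midpoint(data[m], bodyparts.get('Forepaw/Shoulder1'),
    #                     bodyparts.get('Forepaw/Shoulder2'))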

Classification using the UMAP+HDBSCAN version of B-SOID has incorrect preprocessing?

The bsoid_extract() function used in bsoid_umap/classify.py results in features with more dimensions than I began with.

def bsoid_extract(data, fps=FPS):

Using the same features mentioned in bsoid_py/config/LOCAL_CONFIG.py, the data should be 7-D; however, the output of the above is 36-D. I swapped the function out for the one from bsoid_py and it works fine. The model trained in bsoid_umap/train.py is trained on 7-D features, so this clearly presents a problem.
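A cheap guard against this class of mismatch is to compare the extracted feature dimensionality with what the trained model expects before predicting. A minimal sketch, assuming a fitted scikit-learn estimator clf and a feature matrix feats (hypothetical names):

    import numpy as np

    feats = np.asarray(feats)
    expected = getattr(clf, 'n_features_in_', None)  # available on recent scikit-learn versions
    if expected is not None and feats.shape[1] != expected:
        raise ValueError('Extracted %d-D features, but the model was trained on %d-D features; '
                         'make sure the same bsoid_extract() variant is used for training and '
                         'classification.' % (feats.shape[1], expected))
    labels = clf.predict(feats)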

v2 on linux

Is version 2 compatible with Ubuntu 18.04? I have failed to create the environment with the given requirements.yaml file.

cannot load and preprocess data

Hi,

I am running Win10 and successfully installed B-SOIDv2 but cannot load and preprocess data. Thanks!

Kyle

Here is the error message:

IndexError: arrays used as indices must be of integer (or boolean) type
Traceback:
File "c:\users\fish_behavior.conda\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "C:\Users\Fish_Behavior\Desktop\B-SOID-v2\B-SOID\bsoid_app.py", line 42, in <module>
    processor.compile_data()
File "C:\Users\Fish_Behavior\Desktop\B-SOID-v2\B-SOID\bsoid_app\data_preprocess.py", line 104, in compile_data
    file_j_processed, p_sub_threshold = adp_filt(file_j_df, self.pose_chosen)
File "C:\Users\Fish_Behavior\Desktop\B-SOID-v2\B-SOID\bsoid_app\bsoid_utilities\likelihoodprocessing.py", line 87, in adp_filt
    datax = curr_df1[1:, np.array(xIndex)]
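For what it's worth, this particular NumPy error means the index array passed into the fancy-indexing step does not have an integer dtype; with an empty list, np.array([]) defaults to float64, which triggers exactly this message. A hedged sketch of a guard (variable names mirror the traceback but are otherwise hypothetical):

    import numpy as np

    x_index = np.asarray(xIndex)
    if x_index.size == 0:
        raise ValueError('No matching pose columns were found; check the chosen body parts.')
    datax = curr_df1[1:, x_index.astype(int)]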

Feature: Google Colab: Consider using pandas mulitindex when using read_csv('DLC.csv')

Disclaimer
Again, I want to thank you first for shipping the whole B-SOiD code into Python. This must be a lot of work. Creating a Google Colab version of it so everyone can test it right away was a brilliant move. As you asked me to give feedback on the process, I will do so. That said, I would completely understand if the 'Feature' I am writing about is not an actual improvement worth rewriting code for. I just want to share it, as I was struggling with pandas for quite some time and love to use it now.

Feature
While dropping columns from my 9-bodypart DLC file, I noticed that you are not using multiindexing when you load the csv file from DeepLabCut-generated files (see screenshot). This results in pandas automatically assuming that the first row is the column header, but that the rows underneath are data. As you are extracting features below, I guess you will encounter this issue and probably solved it by always skipping two rows.

The original dlc dataframe is created with a multiindex, something like this:

     scorer DLC_resnet50_DLStream9Mar26shuffle1_350000  ...                       
  bodyparts                                   nose   ...       tail_root 
     coords                                       x  ...    y         likelihood
       0                                 314.659546  ...  321.318329        1.0  
       1                                 316.235596  ...  320.991638        1.0
       2                                 316.770020  ...  320.810364        1.0
       3                                 316.974335  ...  320.441833        1.0
       4                                 318.530182  ...  319.821411        1.0

You can see this also in the index (first column), which actually includes a name for those levels of the multiindex (scorer, bodyparts, and coords).

Current import

The current import looks like this:

      scorer  ... DLC_resnet50_DLStream9Mar26shuffle1_350000.26
0  bodyparts  ...                                      tail_tip
1     coords  ...                                    likelihood
2          0  ...                            0.9999994039535522
3          1  ...                            0.9999991655349731
4          2  ...                            0.9999995231628418

so every column has a header (DLC_resnet50_DLStream9Mar26shuffle1_350000.26), which gets an additional ID appended at the end because pandas columns need to be unique, and the data then starts with the first two rows being bodypart and x/y/likelihood.

A possible improvement

To load a csv file like this with a multiindex, you can simply add:

currDf = pd.read_csv(filename,low_memory=False, header = [0,1,2])

if you want to include the index column as the index (rather than creating a new one):

currDf = pd.read_csv(filename,low_memory=False, header = [0,1,2], index_col=0)

This allows you to manipulate and select the dataframe based on different levels, rather than full columns all the time.

This code, for example, drops the unnecessary bodyparts without knowing the first-level name (the scorer name):
currDf = currDf.drop(columns=['neck', 'centroid', 'tail_tip'], level = 1)

I find it very useful when manipulating dataframes.
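As a concrete illustration of what the three-level header buys you, columns can then be selected by level name rather than by position (a sketch assuming the level names DeepLabCut writes, i.e. 'scorer', 'bodyparts', 'coords'):

    import pandas as pd

    currDf = pd.read_csv(filename, low_memory=False, header=[0, 1, 2], index_col=0)

    x_cols = currDf.xs('x', axis=1, level='coords')                # all x columns, one per body part
    likelihoods = currDf.xs('likelihood', axis=1, level='coords')  # all likelihood columns
    nose = currDf.xs('nose', axis=1, level='bodyparts')            # x, y, likelihood for one body part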

Screenshot

[screenshot]

Effects of Different Clustering Algorithms

Hi there,

Thanks so much for your release!

I am working in a rat behavior analysis lab, and I have adapted part of your Python code to train an SVM to predict behavioral labels of rats based on a t-SNE algorithm. Since I borrowed your code a long time ago, I have noticed that you changed your clustering algorithm from different versions of t-SNE to a combination of UMAP and HDBSCAN.

I wonder whether I should update my code as well, so would it be possible for you to share your view on how each clustering algorithm affects the interpretability of the resultant behavioral labels? By interpretability, I mean whether the labels represent meaningful behaviors.

Thank you so much for your time,
James

NotADirectoryError [Win267]

OS: Windows 10
Version: B-SOiD 2.0

Issue: I have just started exploring B-SOID, although I have read the preprint and explored the GitHub repo. I can't seem to get past the first step of loading a video and h5 file in the Streamlit web app (see screenshot).

I thought it might have to do with the whitespace in my file name (something I should probably get around to changing), but it doesn't seem to be an issue in the root-directory assignment step.

Any thoughts on what I could do to move forward? Thanks in advance for your help, and thanks for the awesome open source code!

[screenshot]

TypeError when generating synchronized B-SOID video in analysis app

I am trying to generate the side-by-side video in the analysis app, and am running into the following error when I click the "Generate synchronized B-SOID video" button.
A possible issue is the file input "Which file corresponds to the video?", as it's unclear to me which file is meant to be passed in here. (Currently, I am passing in the .h5 pose data file from DeepLabCut.) If it's important, I am running B-SOID using the Streamlit app on Windows.

TypeError: slice indices must be integers or None or have an __index__ method

Traceback:
File "C:\ProgramData\Anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "C:\Users\xxxx\Documents\B-SOID\bsoid_app\bsoid_analysis.py", line 49, in <module>
    video_generator.main()
File "./bsoid_app\analysis_subroutines\video_analysis.py", line 95, in main
    self.generate()
File "./bsoid_app\analysis_subroutines\video_analysis.py", line 77, in generate
    self.working_dir, self.width, self.height)
File "./bsoid_app\analysis_subroutines\analysis_utilities\visuals.py", line 251, in umap_scatter
    umap_x, umap_y = embeds[mov_range[0]:mov_range[1], 0], embeds[mov_range[0]:mov_range[1], 1]

Thanks!
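For reference, this exception means the values in mov_range arrive as floats, and NumPy refuses float slice bounds; casting them to int before slicing is the usual workaround (a sketch using the names from the traceback, not a tested patch):

    start, stop = int(mov_range[0]), int(mov_range[1])
    umap_x, umap_y = embeds[start:stop, 0], embeds[start:stop, 1]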

error on identify and tweak number of clusters step

Run on Windows 10 (Chrome). It was put into restrictive memory mode during extract-and-embed despite 128 GB of onboard RAM.

Here is the error:


Show First 3D UMAP plot:

TypeError: '>=' not supported between instances of 'list' and 'int'
Traceback:
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "C:\Users\McCormick Lab\Documents\Python\B-SOID\bsoid_app.py", line 50, in <module>
    clusterer.main()
File "C:\Users\McCormick Lab\Documents\Python\B-SOID\bsoid_app\clustering.py", line 103, in main
    self.show_classes()
File "C:\Users\McCormick Lab\Documents\Python\B-SOID\bsoid_app\clustering.py", line 56, in show_classes
    self.assignments.shape[0] * 100)))

Can you help me with this?

Thanks,

Evan
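As context for this error, the comparison assignments >= 0 only works element-wise on a NumPy array; if the assignments are held as a plain Python list, Python tries to compare the whole list with an int and raises exactly this TypeError. A minimal illustration of the kind of cast that avoids it (not the app's actual fix):

    import numpy as np

    assignments = np.asarray(assignments)        # list -> ndarray
    assigned = assignments[assignments >= 0]     # element-wise comparison now works
    print('%.1f%% of frames were assigned to a cluster'
          % (assigned.shape[0] / assignments.shape[0] * 100))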

[Feature request] B-SOID to Python

Hi,
beautiful preprint and really interesting tool. I was wondering if you are interested in porting B-SOID to Python to match the general open-source mentality in the DeepLabCut universe.
I haven't had an in-depth look at your code yet, so excuse my naive perspective. Do you think this is doable?
I will have a look on my own in the next few weeks, but I would appreciate your comment on this.
Thanks.

Using action_gif

First of all, thank you for the release. I am sure that it will help a lot of people doing behaviour classification in open fields.

I have adapted the code to work on 8 features derived from 7 DLC markers on a top-down video.
Our original goal was to train a supervised classifier (grooming yes/no) on dimensionality-reduced data. Now that we are shifting strategy to adapt your code, a very crucial step will be to subjectively judge the action in each of the classes that the GMM and SVM work with.

My questions are regarding the action_gif function to output small videos.

Is it possible to run the function on multiple subjects automatically, or do I need to parse the action_gif videos and group_labels for each subject separately?

Another thing. While testing the function on a single video, I got a warning from line 51 of action_gif.

grp_1 = grp_fill(1:fps/20:end);

My fps is 30, so here grp_1 is downsampled in an ambiguous way... Any thoughts on this?

Again, thanks a lot for the upload.

Sincerely, Peter

IndexError: list index out of range

First of all, thanks a lot for this nice contribution!

I encountered a problem when predicting new datasets. When I run this command:
data_new, feats_new, labels_fslow, labels_fshigh = bsoid_py.main.run(PREDICT_FOLDERS)

There is an "IndexError".

In [3]: data_new, feats_new, labels_fslow, labels_fshigh = bsoid_py.main.run(PRE
...: DICT_FOLDERS)

### Well, I did not copy the processing output here.


IndexError Traceback (most recent call last)
in
----> 1 data_new, feats_new, labels_fslow, labels_fshigh = bsoid_py.main.run(PREDICT_FOLDERS)

~/Software/Deep Learning/B-SOiD/bsoid_py/main.py in run(predict_folders)
62 with open(os.path.join(OUTPUT_PATH, str.join('', ('bsoid_', MODEL_NAME, '.sav'))), 'rb') as fr:
63 behv_model, scaler = joblib.load(fr)
---> 64 data_new, feats_new, labels_fslow, labels_fshigh = bsoid_py.classify.main(predict_folders, scaler, FPS, behv_model)
65 filenames = []
66 all_df = []

~/Software/Deep Learning/B-SOiD/bsoid_py/classify.py in main(predict_folders, scaler, fps, behv_model)
167 plot_feats(feats_new, labels_fslow)
168 if GEN_VIDEOS:
--> 169 videoprocessing.main(VID_NAME, labels_fslow[ID], FPS, FRAME_DIR)
170 return data_new, feats_new, labels_fslow, labels_fshigh

~/Software/Deep Learning/B-SOiD/bsoid_py/utils/videoprocessing.py in main(vidname, labels, fps, output_path)
154 def main(vidname, labels, fps, output_path):
155 vid2frame(vidname, labels, fps, output_path)
--> 156 create_labeled_vid(labels, crit=3, counts=5, frame_dir=output_path, output_path=SHORTVID_DIR)
157 return
158

~/Software/Deep Learning/B-SOiD/bsoid_py/utils/videoprocessing.py in create_labeled_vid(labels, crit, counts, frame_dir, output_path)
121 sort_nicely(images)
122 fourcc = cv2.VideoWriter_fourcc(*'mp4v')
--> 123 frame = cv2.imread(os.path.join(frame_dir, images[0]))
124 height, width, layers = frame.shape
125 rnges = []

IndexError: list index out of range

Where is the problem? Is this because the DIR path is too long?

Thank you very much!
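For context, the IndexError at images[0] means the frame directory is empty, i.e. the preceding vid2frame() step wrote no frames (often a wrong VID_NAME or FRAME_DIR rather than a path-length problem). A hedged sketch of a guard that makes the failure explicit (hypothetical, mirroring the traceback's names):

    import os

    images = os.listdir(frame_dir)
    if not images:
        raise FileNotFoundError('No extracted frames found in %s; check VID_NAME / FRAME_DIR and '
                                'that the frame-extraction step succeeded.' % frame_dir)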

Issue with libopenh264

Hi,
I work on windows and I have installed and used B-SOID successfully.

I experienced an issue when generating snippets, with my own videos but also with the example videos.

The following message pops-up:

libopenh264 @ 000002825bfd1540] Incorrect library version loaded
Could not open codec 'libopenh264': Unspecified error

The result is that I have truncated videos in the mp4 folder and I cannot check the quality of the clustering.

My understanding is that the codec is not installed, so I tried to install openh264. Unfortunately, I did not find a convincing explanation of how to do it.

Does anyone have some feedback on this issue?

Thanks in advance

f_10fps referenced before assignment

Hey there, I'm attempting to run this on the .csvs output by DLC on mouse data. Whenever I run the following code from the docs:
f_10fps, trained_tsne, scaler, gmm_assignments, classifier, scores = bsoid_py.main.build(TRAIN_FOLDERS)

I get the following error:
[screenshot]

As best I can guess, it looks like the program is not able to read the files from my Train folder. Here is what that folder looks like (I just dumped all my csvs in there):
[screenshot]

I've attached a text file of my local config for BSOID Umap - note that the videos, csvs, and BSOID code are not all stored in the same directory - is this an issue?

Any help appreciated

config.txt

"Index error:list out of range" DLC .csv

Just getting started with BSOID. I'm running bsoid_v2 in an Anaconda environment in Windows 10. In the browser interface, I can enter the paths to my DLC evaluation results (.csv) just fine, but when BSOID gets to the pose processing step I get this:

IndexError: list index out of range
Traceback:
File "f:\users\manny\anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "F:\Users\Manny\Documents\GitHub\B-SOID\bsoid_app.py", line 42, in <module>
    processor.compile_data()
File "F:\Users\Manny\Documents\GitHub\B-SOID\bsoid_app\data_preprocess.py", line 89, in compile_data
    file0_df = pd.read_csv(data_files[0], low_memory=False)

I have 4 poses in these files, is that the issue?

Retrain function

Hello,
I was trying to run the retrain function but it appears that the bsoid_umap.retrain module is missing... is the retrain functionality available?

Preprocessing issue with adp_filt in likelihoodprocessing.py

Hi! I'm having some trouble with some of my DLC csv files where the preprocessing step fails. Attached below is the error message that comes up. It appears to occur when the likelihood is low for one of the body parts in the file. I'm wondering if I can only include files where DLC was able to successfully track all the body parts across the video? Or is it possible to include files where the tracking fails at some points?

[screenshot]

running on multiple simultaneous videos

Hi,

Is it possible to run multiple simultaneously acquired videos with accompanying DLC pose estimate csv's using the GUI?

I would like to get composite states from the features of all 3 videos.

Thanks,

Evan

are logging messages correct in bsoid_extract?

In bsoid_extract there are nested loops over n, s, and k

The position of the following logging message means that it is repeated n*s times,
i.e. 6 times for each "n" in the case of a 60 FPS input.

logging.info('Done integrating features into 100ms bins from CSV file {}.'.format(n + 1))

In trying to understand this peculiarity I concluded that both n and s are looping over "offsets" rather than CSV files.

It seems that bsoid_extract receives all the 'offset' versions for one CSV file, processes those, and would therefore only process one CSV file per call, which is why we are looping over CSV files in bsoid_frameshift.

It's not entirely clear to me yet why we need to loop over n and s, but I feel fairly certain that at least the logging message is incorrect.

classify behaviors with 3D pose estimation

I have tried the code to analyze my DeepLabCut data. The result is amazing!
Now I am tracking animals with one camera and generating 2D trajectories, but I am planning to use two cameras for 3D pose estimation. B-SOiD seems designed for 2D tracking. Will there be a future extension for 3D trajectories? Thanks!

ModuleNotFoundError: No module named 'past'

Hi, B-SOiD is amazing for neuroscience,
but when I run the code on my Win10 system, I get this:

(bsoid_v2) C:\Users\tanwu\B-SOID>streamlit run bsoid_app.py

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.0.119:8501

It gave me the following result:

ModuleNotFoundError: No module named 'past'
Traceback:

File "c:\users\tanwu\anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "C:\Users\tanwu\B-SOID\bsoid_app.py", line 3, in <module>
    from bsoid_app import data_preprocess, extract_features, clustering, machine_learner, \
File "C:\Users\tanwu\B-SOID\bsoid_app\video_creator.py", line 3, in <module>
    import ffmpeg
File "c:\users\tanwu\anaconda3\envs\bsoid_v2\lib\site-packages\ffmpeg\__init__.py", line 2, in <module>
    from . import nodes
File "c:\users\tanwu\anaconda3\envs\bsoid_v2\lib\site-packages\ffmpeg\nodes.py", line 3, in <module>
    from past.builtins import basestring

I checked and found that the "future" package is already there, but this 'No module named 'past'' error kept showing up.
Any suggestions? Thank you
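For context, the past module that ffmpeg-python imports is shipped by the future package, so this usually means the future install inside the bsoid_v2 environment is missing or broken; reinstalling it is a common remedy (a suggestion, not an official fix):

pip install --force-reinstall future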

Add B-SOID to Open Neuroscience

Hello!

We are reaching out because we would love to have your project listed on Open Neuroscience, and also share information about this project:

Open Neuroscience is a community run project, where we are curating and highlighting open source projects related to neurosciences!

Briefly, we have a website where short descriptions of projects are listed, with links to the projects themselves and their authors, together with images and other links.

Once a new entry is made, we make a quick check for spam, and publish it.

Once published, we make people aware of the new entry by Twitter and a Facebook group.

To add information about their project, developers only need to fill out this form

In the form, people can add subfields and tags to their entries, so that projects are filterable and searchable on the website!

The reason why we have the form system is that it makes it open for everyone to contribute to the website and allows developers themselves to describe their projects!

Also, there are so many amazing projects coming out in Neurosciences that it would be impossible for us to keep track and log them all!

Open Neuroscience tech stack leverages open source tools as much as possible:

  • The website is based on HUGO + Academic Theme
  • Everything is hosted on github here
  • We use plausible.io to see visit stats on the website. It respects visitors privacy, doesn't install cookies on your computer
    • You can check our visitor stats here

streamlit launching error: ImportError: DLL load failed

OS: Windows 10
Python 3.7.4
Hi, I guess this might be the stupidest issue ever, but I am extremely new to Python and I can't manage to launch bsoid_v2. I have followed the installation steps (https://github.com/YttriLab/B-SOID).
When running the app streamlit run bsoid_app.py I get the error pasted below (I'm using Firefox).
On the other hand, Streamlit launches properly on Firefox when I execute streamlit hello in the Anaconda Prompt.
What am I doing wrong? Thanks!

ImportError: DLL load failed: Le module spécifié est introuvable. (The specified module could not be found.)
Traceback:

File "c:\users\galinane\anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "C:\Users\galinane\Documents\Python Scripts\B-SOID\bsoid_app.py", line 3, in <module>
    from bsoid_app import data_preprocess, extract_features, clustering, machine_learner, \
File "C:\Users\galinane\Documents\Python Scripts\B-SOID\bsoid_app\video_creator.py", line 10, in <module>
    from bsoid_app.bsoid_utilities.videoprocessing import *
File "C:\Users\galinane\Documents\Python Scripts\B-SOID\bsoid_app\bsoid_utilities\videoprocessing.py", line 5, in <module>
    import cv2
File "c:\users\galinane\anaconda3\envs\bsoid_v2\lib\site-packages\cv2\__init__.py", line 5, in <module>
    from .cv2 import *

Google Colab: TypeError in bsoid_assign() (Too many bodyparts?)

Great work with the Google Colab notebook. As a Python user, I was looking forward to this! I finally have the time to try it out!
Sadly:
Running your Google Colab notebook with original DeepLabCut data yields the following error in the line
f_10fps,tsne_feats,labels,tsne_fig = bsoid_assign(data,fps = 30,comp = 1,kclass = 50,it = 30)

[screenshot]

As documented, I only changed fps (from 60 to 30).

I am using an animal model with 9 points, single animal (top-down view); the points are called ['nose', 'neck', 'left_shoulder', 'right_shoulder', 'left_hip', 'centroid', 'right_shoulder', 'tail_root', 'tail_tip'].
I was not able to see whether you are using hardcoded body parts for the feature extraction. If so, this might be the issue?

More Details
I run the code entirely on Google Colab with Google Drive connected. The DeepLabCut CSV files are raw but quite big (>50k rows each).

If you need any further info, I am happy to provide it.

show confusion matrix on test fails

Can you help? Here is the error message:


Two confusion matrices - top: counts, bottom: probability with true positives in diagonal

ValueError: The number of FixedLocator locations (1424), usually from a call to set_ticks, does not match the number of ticklabels (1438).
Traceback:
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "C:\Users\McCormick Lab\Documents\Python\B-SOID\bsoid_app.py", line 62, in <module>
    learning_protocol.main()
File "C:\Users\McCormick Lab\Documents\Python\B-SOID\bsoid_app\machine_learner.py", line 82, in main
    self.show_confusion_matrix()
File "C:\Users\McCormick Lab\Documents\Python\B-SOID\bsoid_app\machine_learner.py", line 58, in show_confusion_matrix
    fig = visuals.plot_confusion(self.validate_clf, self.x_test, self.y_test)
File "C:\Users\McCormick Lab\Documents\Python\B-SOID\bsoid_app\bsoid_utilities\visuals.py", line 79, in plot_confusion
    cm = plot_confusion_matrix(validate_clf, x_test, y_test, cmap=sns.cm.rocket_r, normalize=normalize)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\sklearn\utils\validation.py", line 72, in inner_f
    return f(**kwargs)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\sklearn\metrics\_plot\confusion_matrix.py", line 233, in plot_confusion_matrix
    values_format=values_format)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\sklearn\utils\validation.py", line 72, in inner_f
    return f(**kwargs)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\sklearn\metrics\_plot\confusion_matrix.py", line 125, in plot
    xlabel="Predicted label")
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\matplotlib\artist.py", line 1113, in set
    return self.update(kwargs)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\matplotlib\artist.py", line 998, in update
    ret.append(func(v))
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\matplotlib\axes\_base.py", line 63, in wrapper
    return get_method(self)(*args, **kwargs)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\matplotlib\cbook\deprecation.py", line 451, in wrapper
    return func(*args, **kwargs)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\matplotlib\axis.py", line 1796, in _set_ticklabels
    return self.set_ticklabels(labels, minor=minor, **kwargs)
File "c:\programdata\anaconda3\envs\bsoid_v2\lib\site-packages\matplotlib\axis.py", line 1718, in set_ticklabels
    "The number of FixedLocator locations"

extract features fails

This happens while running Streamlit over port forwarding. I get the following error:


Failed on feature embedding. Try again by unchecking sidebar and rerunning extract features.

UnboundLocalError: local variable 'learned_embeddings' referenced before assignment
Traceback:
File "/usr/local/anaconda3/envs/streamlit/lib/python3.9/site-packages/streamlit/script_runner.py", line 354, in _run_script
    exec(code, module.__dict__)
File "/disk/B-SOID/bsoid_app.py", line 46, in <module>
    extractor.main()
File "/disk/B-SOID/bsoid_app/extract_features.py", line 192, in main
    self.compute()
File "/disk/B-SOID/bsoid_app/extract_features.py", line 137, in compute
    self.learn_embeddings()
File "/disk/B-SOID/bsoid_app/extract_features.py", line 172, in learn_embeddings
    self.sampled_embeddings = learned_embeddings.embedding

When initializing B-SOiD on these sessions, I get the following (possibly related?) runtime / non-fatal error:


2021-10-06 10:27:59.806 Examining the path of widgets raised: cannot import name 'constants' from partially initialized module 'zmq.backend.cython' (most likely due to a circular import) (/usr/local/zmq/backend/cython/init.py)
2021-10-06 10:28:18.806 Examining the path of widgets raised: cannot import name 'constants' from partially initialized module 'zmq.backend.cython' (most likely due to a circular import) (/usr/local/zmq/backend/cython/init.py)
2021-10-06 10:28:24.747 Examining the path of widgets raised: cannot import name 'constants' from partially initialized module 'zmq.backend.cython' (most likely due to a circular import) (/usr/local/zmq/backend/cython/init.py)
2021-10-06 12:53:45.410 Traceback (most recent call last):

Do you have any ideas about what could be causing these problems?

Thanks,

Evan

Error while 'generating video snippets for interpretation' (KeyError: 'bit_rate')

Hello!

I've encountered an error while using B-SOiD v2.0 on Windows. While trying to 'generate video snippets for interpretation' I'm presented with an error that says the following:

KeyError: 'bit_rate'
Traceback:

File "c:\users\nick\anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "C:\Users\Nick\OneDrive - The University Of Newcastle\Desktop\B-SOID\B-SOID-master\B-SOID-master\bsoid_app.py", line 69, in <module>
    creator.main()
File "C:\Users\Nick\OneDrive - The University Of Newcastle\Desktop\B-SOID\B-SOID-master\B-SOID-master\bsoid_app\video_creator.py", line 268, in main
    self.setup()
File "C:\Users\Nick\OneDrive - The University Of Newcastle\Desktop\B-SOID\B-SOID-master\B-SOID-master\bsoid_app\video_creator.py", line 120, in setup
    self.bit_rate = int(video_info['bit_rate'])

This appears after I've selected the corresponding video that matches the h5 file.
Do you think it could be related to the fact that I don't have any audio in my videos?

Thank you!
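For context, the KeyError means the probed video metadata simply has no 'bit_rate' entry (some containers and codecs do not report one); it is unrelated to missing audio. A hedged workaround sketch, assuming video_info is the metadata dictionary used in video_creator.py, is to fall back to a default:

    DEFAULT_BIT_RATE = 8000000  # assumed fallback (~8 Mbit/s); pick whatever suits your videos
    bit_rate = int(video_info.get('bit_rate', DEFAULT_BIT_RATE))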

I want to classify dog's behaviors.

Hi. I am Rim Yu.

We are working on a project to classify behaviors such as standing or sitting in dogs using the B-SOiD algorithm.

Unlike the mouse video in README.md, our video (DeepLabCut output) does not have a fixed camera. (Please refer to the attached photo.)

Is it possible to classify behavior even with images of dogs captured with a non-stationary camera?

Please advise us on our project.

Thank you.

Please refer to the photo below.
[photo: DeepLabCut output]

Is it correct to run this code? (B-SOID/bsoid_py/ main.py)

Hi,
I want to classify animal behavior. Is it correct to run this code? (B-SOID/bsoid_py/main.py)

I want to run it with Python code, not via streamlit run bsoid_app.py. Actually, I have already tried running this code and was able to confirm that it successfully classifies animal behavior, but I'd like to see the behavior classification in real time. Could I get advice on how to do that?

Please help me, thank you.

Use with 3d CSV files

This repo looks fantastic :) I'm curious as to whether the code will work with 3D CSV files now that DLC has multi-cam support.

Thank you in advance,

Harry

Model Creation Throws error

Hi

I figured out the last issue for now; however, upon model generation I'm getting this error:

[screenshot]

Any help would be appreciated!

Error during 'Predict labels and create example videos'

ParserError: Error tokenizing data. C error: Expected 2 fields in line 69, saw 3
Traceback:
File "d:\anaconda3\envs\bsoid_v2_1\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "G:\UCAS\plan\B-SOiD-master\bsoid_app.py", line 69, in <module>
    creator.main()
File "G:\UCAS\plan\B-SOiD-master\bsoid_app\video_creator.py", line 269, in main
    self.create_videos()
File "G:\UCAS\plan\B-SOiD-master\bsoid_app\video_creator.py", line 174, in create_videos
    low_memory=False)
File "d:\anaconda3\envs\bsoid_v2_1\lib\site-packages\pandas\io\parsers.py", line 688, in read_csv
    return _read(filepath_or_buffer, kwds)
File "d:\anaconda3\envs\bsoid_v2_1\lib\site-packages\pandas\io\parsers.py", line 460, in _read
    data = parser.read(nrows)
File "d:\anaconda3\envs\bsoid_v2_1\lib\site-packages\pandas\io\parsers.py", line 1198, in read
    ret = self._engine.read(nrows)
File "d:\anaconda3\envs\bsoid_v2_1\lib\site-packages\pandas\io\parsers.py", line 2157, in read
    data = self._reader.read(nrows)
File "pandas\_libs\parsers.pyx", line 850, in pandas._libs.parsers.TextReader.read
File "pandas\_libs\parsers.pyx", line 933, in pandas._libs.parsers.TextReader._read_rows
File "pandas\_libs\parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error

Question about bodyparts with top-down view

Hi, great software! Thank you for making it available.

I know a similar question has been asked before, but since things might have changed, I just wanted to make sure.

I have a top-down camera view setup, so the paws are invisible most of the time. But reading the docs,
"""
BODYPARTS = { 'Snout/Head': 0, 'Neck': None, 'Forepaw/Shoulder1': 1, 'Forepaw/Shoulder2': 2, 'Bodycenter': None, 'Hindpaw/Hip1': 3, 'Hindpaw/Hip2': 4, 'Tailbase': 5, 'Tailroot': None }
"""
it seems shoulders and hips can be used instead of the paws? If so, how well does B-SOiD classify behaviours without them? The main behaviours I want to classify are locomotion, rearing, grooming, and stationary exploration.

Choosing Body Parts

I am attempting to analyze the results of a DLC model that I have already created, so I had not seen the B-SOID instructions when I made it. In the config file, it says that you have to have at least 6 body parts (snout/head, shoulder1/forepaw, body center, etc.). Does this mean that if I don't have a body part labeled "body center", the analysis won't work?

fails to create a model

When I try to create a model, I get the following:

TypeError: '>=' not supported between instances of 'list' and 'int'
Traceback:

File "f:\Env\B-SOID\lib\site-packages\streamlit\script_runner.py", line 354, in _run_script
    exec(code, module.__dict__)
File "C:\Users\hhuan100\B-SOID\bsoid_app.py", line 62, in <module>
    learning_protocol.main()
File "C:\Users\hhuan100\B-SOID\bsoid_app\machine_learner.py", line 86, in main
    self.randomforest()
File "C:\Users\hhuan100\B-SOID\bsoid_app\machine_learner.py", line 35, in randomforest
    x = self.sampled_features[self.assignments >= 0, :]

How should I deal with it? Thank you!

Generalisation to a different number of body parts

One thing I'd be interested in contributing is generalising the script so that users can define a different number of body parts than the six defaults (which may be useful for alternative camera angles or behaviours). Does this sound feasible to implement? If so, I am happy to work on it.
Thank you for your help!

Using via spyder or another IDE

Hey there!

I am interested in using this tool but not a huge fan of using my web browser.

I would very much prefer to use an IDE, even if it is complex to use. From reading your README on the main branch, I am unsure how to approach doing this.

Any advice?

Cheers!
Willem

Video angle

Hello! Thank you for releasing this program.

I'm interested in using it to classify rat behavior in an open field; however, the videos were captured with cameras angled for a side view. I watched your demonstration on YouTube to get a better sense of how to implement B-SOiD, and I'm wondering if I should expect compatibility issues with our videos' angle, even after teasing out all possible covariant features. Do you have any suggestions?

All the best, Enrique

UMAP+HBDSCAN model Index error when preprocessing csv during training

The following error occurred in the middle of training when preprocessing one of the csv files. In the print-out, 1/6 of the preprocessing was done before the error occurred. Do you have any direction on how I should fix the error? Thanks!

Traceback (most recent call last):
  File "run.py", line 6, in <module>
    bsoid_umap.main.build(TRAIN_FOLDERS)
  File "/ihome/nurban/zig9/B-SOID/bsoid_umap/main.py", line 28, in build
    nn_assignments = bsoid_umap.train.main(train_folders)
  File "/ihome/nurban/zig9/B-SOID/bsoid_umap/train.py", line 231, in main
    filenames, training_data, perc_rect = bsoid_umap.utils.likelihoodprocessing.main(train_folders)
  File "/ihome/nurban/zig9/B-SOID/bsoid_umap/utils/likelihoodprocessing.py", line 139, in main
    filenames, data, perc_rect = import_folders(folders)
  File "/ihome/nurban/zig9/B-SOID/bsoid_umap/utils/likelihoodprocessing.py", line 72, in import_folders
    curr_df_filt, perc_rect = adp_filt(curr_df)
  File "/ihome/nurban/zig9/B-SOID/bsoid_umap/utils/likelihoodprocessing.py", line 116, in adp_filt
    if rise_a[0][0] > 1:
IndexError: index 0 is out of bounds for axis 0 with size 0
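The failing line indexes the first element of a np.where(...) result, which is empty whenever a body part's likelihood never crosses the adaptive threshold in that file. A guard of roughly this shape (hypothetical names, not the project's actual variables) avoids the crash:

    import numpy as np

    rise_a = np.where(np.diff((likelihood < threshold).astype(int)) > 0)
    if rise_a[0].size == 0:
        # no low-confidence stretch to interpolate over; leave this body part's data untouched
        pass
    elif rise_a[0][0] > 1:
        # ... original interpolation logic ...
        pass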

Usage with a top-down view

Dear @runninghsus

Thanks for making this code available, I think it will be a great help for many, including myself.

I am trying to adapt the code to work with a top-down view of mice in their home cages, where the main problem is not having any of the paws in view most of the time. So I have been going through your code and came across this. Does it mean that after running dlc_preprocess the hind paw positions are not used anymore? Or that you don't filter them because they are always in view and hence tracked by DeepLabCut with high confidence anyway?

Also, do you have any recommendations regarding what points to label in the mouse for use with a top-down view?

Thanks a lot,

Augusto

problem with transferring variables from section to section in UI

OS: WIN10

command:
streamlit run bsoid_app.py

When trying to load SLEAP data, the pre-processing stage works, but when trying to advance to the next tick (extract and embed features) I get the following error:

NameError: name 'working_dir' is not defined
Traceback:
File "h:\anaconda3\envs\bsoid_v2\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "E:\BSOID\bsoid_app.py", line 45, in <module>
    [_, _, framerate, _, _, _, processed_input_data, _] = load_data(working_dir, prefix)

If I try to fix working_dir in the code, I get the same error with prefix, which cannot be fixed by assigning it in the code.

Thanks,
Shahaf

win_len is negative for FPS < 11

OK, closed my last issue but found an actual issue: in train.py win_len is calculated using this formula:
win_len = np.int(np.round(0.05 / (1 / fps)) * 2 - 1)

The issue here is that if fps < 11 this will return a win_len of -1. This value later gets passed to the boxcar_center function, which passes it to a1.rolling in the likelihoodprocessing.py file. This is a Pandas function that throws an exception when passed negative values. Is this an unintentional bug, or is there a reason to not use this software with low-fps video?

There are also numerous other areas where fps is divided by magic numbers (e.g. train.py line 87) which may cause errors - I'm not sure as I've not reached that point in the code yet - wanted to check if there was a real reason to not use low-framerate video before trying to test all the rounding. Thanks!
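To make the failure mode concrete, here is the arithmetic for fps = 10 together with a clamped variant (a suggestion, not a committed fix):

    import numpy as np

    fps = 10
    win_len = int(np.round(0.05 / (1 / fps)) * 2 - 1)  # np.round(0.5) -> 0, so win_len == -1
    print(win_len)                                      # -1, which pandas .rolling() rejects

    # clamped variant: always use at least a 1-frame window
    win_len = max(int(np.round(0.05 * fps)) * 2 - 1, 1)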
