
let_it_be_3d's Introduction

let_it_be_3D

With let_it_be_3D we want to extend the functions of aniposelib and bring them into a pipeline structure. Our goals are to require as few manual steps as possible and to standardize quality assurance and metadata collection. We provide additional methods for video synchronisation, adjustment for different framerates, validation of the anipose calibration, adjustment of intrinsic calibrations to croppings, manual marker detection, checking and correcting filenames, and normalisation of the 3D triangulated dataframe.

See the pipeline flowchart!
flowchart TD;
    video_dir_R(Recording directory) ~~~ video_dir_C(Calibration directory);
    id1(Recording object) ~~~ id2(Calibration object) ~~~ id3(Calibration validation objects);
    subgraph Processing recording videos:
    video_dir_R --> |Get video metadata \nfrom filename and recording config| id1;
    id1-->|Temporal synchronisation| id4>DeepLabCut analysis and downsampling];
    end
    subgraph Processing calibration videos
    video_dir_C --> |Get video metadata \nfrom filename and recording config| id2 & id3;
    id2-->|Temporal synchronisation| id5>Video downsampling];
    id3-->id6>Marker detection];
    end
    id5-->id7{Anipose calibration};
    subgraph Calibration validation
    id7-->id8[/Good calibration reached?/];
    id6-->id8;
    end
    subgraph Triangulation
    id8-->|No|id7;
    id8-->|Yes|id9>Triangulation];
    id4-->id9-->id10>Normalization];
    id10-->id11[(Database)];
    end
Pipeline explained Step-by-Step!

1) Load videos and metadata

  • read video metadata from filename and recording config file
  • intrinsic calibrations
    • use anipose intrinsic calibration

    • run or load intrinsic calibration based on uncropped checkerboard videos

    • adjust intrinsic calibration for video cropping (see the sketch below)

      Example
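A minimal sketch of what adjusting an intrinsic calibration for a cropped video amounts to (illustrative only, not the pipeline's actual code): the principal point of the camera matrix is shifted by the crop offset, while focal lengths and distortion coefficients stay unchanged. The function and variable names below are assumptions for illustration.

import numpy as np

def adjust_intrinsics_for_cropping(camera_matrix, x_offset, y_offset):
    # Shift the principal point (cx, cy) of a 3x3 camera matrix by the
    # top-left corner of the crop window; focal lengths stay unchanged.
    adjusted = camera_matrix.copy()
    adjusted[0, 2] -= x_offset
    adjusted[1, 2] -= y_offset
    return adjusted

# Hypothetical example: intrinsic calibration from uncropped checkerboard
# videos, recording video cropped with its top-left corner at (100, 50).
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
K_cropped = adjust_intrinsics_for_cropping(K, x_offset=100, y_offset=50)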

2) Video processing

  • synchronize videos temporally based on a blinking signal (see the sketch after this list)
    Example

  • run marker detection on videos manually or using DeepLabCut networks
    Example

  • downsample videos and marker detection files to a common framerate
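A minimal sketch of the idea behind the blinking-signal synchronisation (illustrative only; in the pipeline this happens inside run_synchronization): the brightness of the blinking-LED region is extracted per frame for each camera, and cross-correlating the traces yields the frame offset between cameras. The ROI layout and function names are assumptions.

import numpy as np

def blinking_trace(frames, roi):
    # Mean brightness of the blinking-LED region (y0, y1, x0, x1) per frame.
    y0, y1, x0, x1 = roi
    return np.array([frame[y0:y1, x0:x1].mean() for frame in frames])

def frame_offset(trace_a, trace_b):
    # Lag (in frames) that best aligns two blinking traces.
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    correlation = np.correlate(a, b, mode="full")
    return int(correlation.argmax() - (len(b) - 1))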

3) Calibration

  • run extrinsic Anipose camera calibration
  • validate the calibration based on known distances and angles (ground truth) between calibration validation markers
    Example: This calibration validation shows the triangulated representation of a tracked rectangle, which has 90° angles at its corners.
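As a rough illustration of the kind of check this validation performs (names and structure are hypothetical, not the package's API), triangulated marker positions can be compared against ground-truth distances. This is the spirit of the mean_dist_err_percentage value returned by evaluate_triangulation_of_calibration_validation_markers() in the examples further below.

import numpy as np

def distance_error_percentage(points_3d, ground_truth_cm):
    # points_3d: marker name -> triangulated (x, y, z) position
    # ground_truth_cm: (marker_a, marker_b) -> known distance in cm
    # Returns the mean percentage deviation from the ground truth.
    errors = []
    for (a, b), true_distance in ground_truth_cm.items():
        measured = np.linalg.norm(np.asarray(points_3d[a]) - np.asarray(points_3d[b]))
        errors.append(abs(measured - true_distance) / true_distance * 100)
    return float(np.mean(errors))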

4) Triangulation

  • triangulate recordings

    Example

  • rotate the dataframe, translate it to the origin, and normalize to centimeters (see the sketch after this list)

    Example: The blue vectors were aligned to the yellow vectors successfully.

  • add metadata to database
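A minimal sketch of the rotate/translate/normalize step mentioned above (illustrative only; in the pipeline this is driven by normalization_config.yaml via normalize()): points are translated so a reference marker sits at the origin, rotated so a measured reference vector aligns with a target axis, and scaled to centimeters. All names below are assumptions.

import numpy as np
from scipy.spatial.transform import Rotation

def normalize_points(points, origin, measured_axis, target_axis=(1.0, 0.0, 0.0), cm_per_unit=1.0):
    # points: (n, 3) triangulated positions; origin: marker position that
    # should end up at (0, 0, 0); measured_axis: vector in the data that
    # should point along target_axis; cm_per_unit: scale factor to cm.
    translated = points - np.asarray(origin)
    rotation, _ = Rotation.align_vectors([target_axis], [measured_axis])
    return rotation.apply(translated) * cm_per_unit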

How to use

Installation

# Clone this repository
$ git clone https://github.com/retune-commons/let_it_be_3D.git

# Go to the folder in which you cloned the repository
$ cd let_it_be_3D

# Install dependencies
# first, install deeplabcut into a new environment as described here: (https://deeplabcut.github.io/DeepLabCut/docs/installation.html)
$ conda env update --file env.yml 

# Open Walkthrough.ipynb in jupyter lab
$ jupyter lab

# Update project_config.yaml to your needs and you're good to go!

Examples

Calibration
from pathlib import Path
from core.triangulation_calibration_module import Calibration
rec_config = Path("test_data/Server_structure/Calibrations/220922/recording_config_220922.yaml")
calibration_object = Calibration(
  calibration_directory=rec_config.parent,
  recording_config_filepath=rec_config,
  project_config_filepath="test_data/project_config.yaml",
  output_directory=rec_config.parent,
)
calibration_object.run_synchronization()
calibration_object.run_calibration(verbose=2)
TriangulationRecordings
from core.triangulation_calibration_module import TriangulationRecordings
rec_config = "test_data/Server_structure/Calibrations/220922/recording_config_220922.yaml"
directory = "test_data/Server_structure/VGlut2-flp/September2022/206_F2-63/220922_OTE/"
triangulation_object = TriangulationRecordings(
  directory=directory,
  recording_config_filepath=rec_config,
  project_config_filepath="test_data/project_config.yaml",
  recreate_undistorted_plots=True,
  output_directory=directory,
)
triangulation_object.run_synchronization()
triangulation_object.exclude_markers(
  all_markers_to_exclude_config_path="test_data/markers_to_exclude_config.yaml",
  verbose=False,
)
triangulation_object.run_triangulation(
  calibration_toml_filepath="test_data/Server_structure/Calibrations/220922/220922_0_Bottom_Ground1_Ground2_Side1_Side2_Side3.toml"
)
normalised_path, normalisation_error = triangulation_object.normalize(
  normalization_config_path="test_data/normalization_config.yaml"
)
CalibrationValidation
from pathlib import Path
from core.triangulation_calibration_module import CalibrationValidation
rec_config = Path("test_data/Server_structure/Calibrations/220922/recording_config_220922.yaml")
calibration_validation_object = CalibrationValidation(
  project_config_filepath="test_data/project_config.yaml",
  directory=rec_config.parent,
  recording_config_filepath=rec_config,
  recreate_undistorted_plots=True,
  output_directory=rec_config.parent,
)
calibration_validation_object.add_ground_truth_config("test_data/ground_truth_config.yaml")
calibration_validation_object.get_marker_predictions()
calibration_validation_object.run_triangulation(
  calibration_toml_filepath="test_data/Server_structure/Calibrations/220922/220922_0_Bottom_Ground1_Ground2_Side1_Side2_Side3.toml",
  triangulate_full_recording=True,
)
mean_dist_err_percentage, mean_angle_err, reprojerr_nonan_mean = calibration_validation_object.evaluate_triangulation_of_calibration_validation_markers()

Required filestructure

Video filename
  • calibration:
    • has to be a video [".AVI", ".avi", ".mov", ".mp4"]
    • including recording_date (YYMMDD), calibration_tag (as defined in project_config) and cam_id (element of valid_cam_ids in project_config)
    • recording_date and calibration_tag have to be separated by an underscore ("_")
    • f"{recording_date}{calibration_tag}{cam_id}" = Example: "220922_charuco_Front.mp4"
  • calibration_validation:
    • has to be a video or image [".bmp", ".tiff", ".png", ".jpg", ".AVI", ".avi", ".mp4"]
    • including recording_date (YYMMDD), calibration_validation_tag (as defined in project_config) and cam_id (element of valid_cam_ids in project_config)
    • recording_date and calibration_validation_tag have to be separated by an underscore ("_")
    • calibration_validation_tag must not be "calvin"
    • f"{recording_date}_{calibration_validation_tag}_{cam_id}" = Example: "220922_position_Top.jpg"
  • recording:
    • has to be a video [".AVI", ".avi", ".mov", ".mp4"]
    • including recording_date (YYMMDD), cam_id (element of valid_cam_ids in project_config), mouse_line (element of animal_lines in project_config), animal_id (beginning with "F", followed by a number, a "-" and another number, e.g. "F2-12") and paradigm (element of paradigms in project_config)
    • recording_date, cam_id, mouse_line, animal_id and paradigm have to be separated by an underscore ("_")
    • f"{recording_date}{cam_id}{mouse_line}{animal_id}{paradigm}.mp4" = Example: "220922_Side_206_F2-12_OTT.mp4"
Folder structure
  • A folder in which a recording is stored has to match the following structure to be detected automatically:
    • has to start with the recording_date (YYMMDD)
    • has to end with any of the paradigms (as defined in project_config)
    • recording date and paradigm have to be separated by an underscore ("_")
    • f"{recording_date}_{paradigm}" = Example: "230427_OF"

API Documentation

Please see our API-documentation here!

License

GNU General Public License v3.0

Contributors

This is a Defense Circuits Lab project. The pipeline was designed by Konstantin Kobel, Dennis Segebarth and Michael Schellenberger. At the Sfb-Retune Hackathon 2022, Elisa Garulli, Robert Peach and Veronika Selzam joined the taskforce to push the project towards completion.

Sfb-Retune DefenseCircuitsLab

Contact

If you want to help with writing this pipeline, please get in touch.


let_it_be_3d's Issues

rework file naming

  • empty h5 filename
  • Rotation plot filename
  • crossvalidation filename should contain fps

3D Rotation in z

  • Issue won't occur if we have labeled markers with z >/< 0!
  • in our specific case we can't rotate in the z dimension properly, since we only have labels with z=0
  • Solution: most of the time (except head dips) the points should be above 0

make meta.yaml useable again!

implement loading/exporting of a project based on its meta.yaml

  • possibility to remove recordings from analysis

  • possibility to remove videos from calibration and triangulation

Synchronisation Bug

When synchronising calibration videos, the pattern is not properly aligned.
(attached synchronization plot: 221219_Bottom_charuco_synchronization_individual_downsampled80)

Test all possible calibrations

  • recording and calibration videos can be excluded manually in the meta.yaml via exclusion_state = "exclude"

  • for all recordings the calibration_to_use is written into meta.yaml based on the exclusion states

  • the calibrations_to_use are created as a subgroup of the full calibration

  • and saved as individual Calibration objects, each of which creates an individual .toml

  • ToDo: make sure that the right cameras are chosen for the subgroup in Triangulation as well as Calibration objects

  • ToDo: probably the create_subgroups branch is not being used

  • ToDo: maybe there is an error in the objects' dictionary keys

Save calibrations in calibrate_optimal instead of overwriting them

When the optimisation is run, the created .toml calibration file is overwritten in each iteration.

  • Whenever a "good" calibration is reached, this should be renamed to the final .toml file
  • but the others should be kept with a "not_optimal_calibration" flag for example

Function to create 3D videos

improve and merge all functions that currently exist for creating 3D plots, and make them accessible via one 3D_plotting_function

  • plotting config necessary for elements that should be plotted in addition to the markers or to connect certain markers?

  • use imageio? use ffmpeg_python?

  • 3D compared to 2D,

  • start, end frame idx, filename, marker_size, label_size, markers_to_connect, markers_to_fill, additional artists, normalised/not normalised, set axis_aspect_equal

Change file naming per user_input

  • right now, user input cannot be used to correct the filenames of the videos; it only shows you that the filenames are wrong
  • If you use user_input, it doesn't find the correct files anymore.
  • the user input should instead rename the existing files and update all filepath attributes

Readability/Code Review

  • add proper documentation to the functions!

  • abstract class structure

  • functions should return changed objects and should be named intuitively

  • black/code formatting

Intrinsic calibration copies

  • intrinsic calibrations should be copied into the folder of each calibration
  • therefore recording as well as calibration objects should create the intrinsic.p files (currently only calibration objects create them)

Increase adaptability of the Code

  • identify hard coded elements

  • think of a more general way to write them

  • replace them

  • simplify the input files if you don't need the full range of functions of the pipeline (fps matches in all videos, no need for synchronisation, etc.)

errors in synchronisation

  • when do errors in synchronisation occur? -> how to fix them?
  • these should not be raised as errors; return the unsynchronised video instead -> the video will then be excluded based on its frame number

Set likelihood of invisible markers to 0

In some cameras, some markers can't be seen, yet DLC sometimes still detects those markers with high likelihood. We need a function that sets the likelihood of a given list of markers per camera to 0 (a possible sketch follows).
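A possible sketch of such a function (not existing pipeline code), assuming DeepLabCut's usual .h5 output with a (scorer, bodyparts, coords) column MultiIndex; the function name is an assumption:

import pandas as pd

def zero_out_invisible_markers(dlc_h5_path, markers_to_hide, output_path=None):
    # Set the likelihood of markers that are invisible in this camera to 0,
    # so they are ignored downstream.
    df = pd.read_hdf(dlc_h5_path)
    scorer = df.columns.get_level_values(0)[0]
    for marker in markers_to_hide:
        df.loc[:, (scorer, marker, "likelihood")] = 0.0
    # "df_with_missing" is the key DLC typically uses for its .h5 files
    df.to_hdf(output_path or dlc_h5_path, key="df_with_missing", mode="w")
    return df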

Database usability:

  • add excluded cams during triangulation/calibration
  • make sure that 3D file, date, paradigm, subject id can be added from analysis
  • make sure that group id, session id, batch, trial id can be added from databases

Create 3D plots

  • common function to plot the Positions and Recordings and save to disk

Fake recordings

If you have a camera for calibration but not for triangulation (e.g. no trained DLC network, no video), you should be able to enter "fake" as "processing_type" in the project_config so that an empty DLC_fake.h5 is created.

Instead of markers_to_ignore, this should be used to ignore videos or missing videos in specified directories!

Provide some dummy data

  • having a nice calibration

  • positions dlc fake markers

  • already synchronized and analysed videos

Find best matching frame

function to get best matching frame in rotation function

  • frame in which all labels for normalisation are found with low error?

  • frame in which all labels for normalisation are defined?

  • (mean position of all labels?)

  • how to proceed if some labels were not found?
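One conceivable heuristic (purely a sketch, not pipeline code) is to pick the frame in which every normalisation label passes a likelihood threshold and the mean likelihood is highest; the function name and threshold are assumptions:

import pandas as pd

def best_matching_frame(df, normalization_markers, likelihood_threshold=0.9):
    # df: DLC-style dataframe with a (scorer, bodyparts, coords) column MultiIndex.
    scorer = df.columns.get_level_values(0)[0]
    likelihoods = pd.concat(
        {marker: df[(scorer, marker, "likelihood")] for marker in normalization_markers},
        axis=1,
    )
    all_found = (likelihoods > likelihood_threshold).all(axis=1)
    if not all_found.any():
        return None  # some labels were never found; the caller has to decide how to proceed
    return int(likelihoods[all_found].mean(axis=1).idxmax())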

Fix intrinsic calibration issues

croppings are correct, but undistorted images are completely distorted

  • wrong settings for intrinsic videos

  • error in code?!

  • croppings erroneous

Add exclusion criteria

  • let the Code mark videos that need to be excluded because of missing files, calibrations, poor quality of recording, bad synchronisation, etc.

plot positions predictions

plot the DLC predictions on the positions images

  • issue: DLC.create_labeled_videos not working on single images

improve 3D video creation

  • add min/max for plot
  • possibility to ignore markers outside min/max (WATCH OUT!)
  • smaller fig size = faster?
  • Plot as class, frame idx as attribute
  • start/stop in function, not in config

Rename use_gpu key in project_config.yaml

Rapid aligner is a GPU-based package that allows synchronization of patterns. Using the GPU is faster than using the CPU, but you need the package installed and a GPU on your computer.

  • rename use_gpu to use_rapid_aligner and set rapid_aligner_filepath

Play around with anipose triangulations

  • using frames where marker was predicted in one camera only
  • different triangulation methods
  • different filtering options
  • weigh camera predictions for markers differently

Missing videos

  • The code should create an output message if there are more cameras in valid_cam_ids than videos in the recording folder. In case they were not transferred, one can check again and copy the missing videos.
