Anipose

Anipose is an open-source toolkit for robust, markerless 3D pose estimation of animal behavior from multiple camera views. It leverages the machine learning toolbox DeepLabCut to track keypoints in 2D, then triangulates across camera views to estimate 3D pose.
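The 2D-to-3D step described above can be illustrated with a minimal linear triangulation sketch using the direct linear transform (DLT). This is an illustrative example only, not Anipose's actual implementation; the toy camera matrices `P1`, `P2` and the projected point are made up for demonstration.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one 3D point from its 2D detections in multiple views
    via the direct linear transform (DLT).

    proj_mats: list of 3x4 camera projection matrices
    points_2d: list of (x, y) detections, one per camera
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous
        # 3D point X:  x * (P[2] @ X) = P[0] @ X  and  y * (P[2] @ X) = P[1] @ X
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.array(rows)
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

def project(P, X):
    """Project a homogeneous 3D point through a 3x4 camera matrix."""
    p = P @ X
    return p[:2] / p[2]

# Two toy cameras: one at the origin, one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point to generate consistent 2D "detections",
# then recover it by triangulation.
X_true = np.array([0.5, 0.2, 4.0, 1.0])
pt = triangulate_point([P1, P2],
                       [project(P1, X_true), project(P2, X_true)])
print(np.round(pt, 6))  # close to [0.5, 0.2, 4.0]
```

In practice the projection matrices come from camera calibration, and the 2D detections are the per-view keypoints tracked by DeepLabCut; with noisy detections the SVD solution is a least-squares compromise across views.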

Check out the Anipose preprint for more information.

The name Anipose comes from Animal Pose, but it also sounds like "any pose".

Documentation

Up-to-date documentation may be found at anipose.org.

Demos

Videos of flies by Evyn Dickinson (slowed 5x), Tuthill Lab

Videos of hand by Katie Rupp

References

Here are some references for DeepLabCut and other things this project relies upon:

- Mathis et al., 2018, "DeepLabCut: markerless pose estimation of user-defined body parts with deep learning"
- Romero-Ramirez et al., 2018, "Speeded up detection of squared fiducial markers"
