DeepLabCut 2.2 🦄

DeepLabCut has a big sparkly new, massively tested, stable release out!

We are SO thrilled by the overwhelmingly positive feedback on the 2.2 release candidates, and we have added some cool features and improvements since then. We also hit a big milestone this week: 250,000 downloads!! So, here it is! Below is the full log from rc’s 1-3, and at the top is “what’s new” since rc3.

2.2: This is a MAJOR change to the stable release!

This release follows 3 pre-releases, all documented (see links below), which we highly encourage you to read. There are major changes since maDLC was beta-released in 2020, with massive performance gains. We recommend you use the new conda file and re-train your older labeled data for best performance. Note, we include a new COLAB notebook that we highly recommend you use, or at minimum review before local use; alternatively, use the main GUI for many of the latest features and networks!

The science:
Multi-animal pose estimation and tracking with DeepLabCut

How to Use:

- To install you will need to use the new DEEPLABCUT.yaml!

- ultra quick start: `pip install "deeplabcut[gui]"` (which now includes `tensorflow` and `wxPython` directly).
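
If you prefer to drive everything from a Python session or script, a minimal maDLC workflow with the 2.2 API looks roughly like the sketch below (paths and project names are placeholders; see the docs and the Colab notebook for the authoritative, up-to-date sequence):

    import deeplabcut

    # Create a multi-animal project (paths/names below are placeholders).
    config_path = deeplabcut.create_new_project(
        "my-madlc-project", "experimenter",
        ["/fullpath/videos/session1.mp4"],
        multianimal=True,
    )

    # Extract and label frames, then build the training set and train.
    deeplabcut.extract_frames(config_path)
    deeplabcut.label_frames(config_path)
    deeplabcut.create_multianimaltraining_dataset(config_path)
    deeplabcut.train_network(config_path)

    # Run pose estimation on a new video.
    deeplabcut.analyze_videos(config_path, ["/fullpath/videos/session2.mp4"])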

New Highlights since rc3:

  • Note for current maDLC users: the crop-and-label step is now handled within create_multianimaltraining_dataset (see the example call after this list):

    create_multianimaltraining_dataset(config, num_shuffles=1, Shuffles=None, windows2linux=False, net_type=None, numdigits=2, crop_size=(400, 400), crop_sampling='hybrid', paf_graph=None, trainIndices=None, testIndices=None)

  • video inference (analyze_videos) is substantially faster, giving you a real performance boost! Check it out.

  • there is now a way to "ignore" parts that hurt animal assembly because they are outliers in the data, like the tip of a mouse tail when there are not many points along the tail connecting it to the body. You will still get the point in the output later, but for smart assembly it can simply be "dropped". Our latest Colab notebook has example code; the idea is sketched right after this list.
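
As a concrete example of the first point above, cropping can now simply be requested when building the multi-animal training set (the config path below is a placeholder):

    import deeplabcut

    config_path = "/fullpath/project/config.yaml"  # placeholder

    # Cropping is handled while the multi-animal training set is built:
    deeplabcut.create_multianimaltraining_dataset(
        config_path,
        num_shuffles=1,
        crop_size=(400, 400),    # size of the training crops
        crop_sampling="hybrid",  # how crop centers are chosen
    )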
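
And for the "ignore parts" point, the exact snippet from the Colab notebook is not reproduced here, but the idea is sketched below. Note that the ignore_bodyparts keyword and the body-part name are our assumptions for illustration; check the notebook and your installed version for the exact argument:

    import deeplabcut

    config_path = "/fullpath/project/config.yaml"        # placeholder
    videos = ["/fullpath/project/videos/testVideo.mp4"]  # placeholder

    # Sketch: keep a troublesome, isolated keypoint (e.g. the tail tip) out of
    # animal assembly so it cannot break otherwise good assemblies.
    # NOTE: `ignore_bodyparts` is an assumed keyword; verify against your version.
    deeplabcut.convert_detections2tracklets(
        config_path,
        videos,
        ignore_bodyparts=["tail_tip"],  # hypothetical body-part name
    )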


RC3: We have updated and refactored major parts of the code base for seamless TensorFlow 2+ integration.

(1) You should update: you can create a new installation environment. This is easy, and you can keep your older env for safekeeping! In short, your older projects will still work, and new projects will get all the cool advantages of TF2! See the TensorFlow blog (and updates) for more, but briefly, TF2 is quite a bit easier to build on, so we hope this enables more researchers to play with the base code.

How do I easily update?

Simple: click HERE to download the NEW conda environment file. You can keep your older DLC-GPU and DLC-CPU envs for safekeeping (and peace of mind, I get it!). 🔥

(2) deeplabcut-core is deprecated! ✅

(3) The latest NVIDIA GPUs, CUDA, etc are supported! 🥂🙌

(4) We tested it, a lot… A big shout-out to lead developer Dr. Jessy Lauer for the big PR and for testing the code across platforms, GPUs, models, and TF versions to be sure we did not slow you down! See the full PR here: https://github.com/DeepLabCut/DeepLabCut/pull/1323, and here are some take-homes:

  • Benchmarked on 4 datasets (single- and multi-animal, with grayscale and color images) with TensorFlow (TF) 1.15.5 (which serves as reference), TF 2.3, and TF 2.5; batch size 8; 30k iterations (except for the marmosets: 20k); 3 backbones (resnet_50, mobilenet_v2_0.5, efficientnet-b0); 2 GPU devices (TITAN RTX & GeForce GTX 1080).
    No significant main effects of either backbone or TF version were found. In the benchmark figure, training duration is reported relative to TF1 training time (Y axis, with the value printed above each bar) and in seconds (underneath/within each bar).

(5) We have new docs to help you with the transition. This is simpler to install in the long run (one conda file!) and again just requires that you have CUDA (and the associated cuDNN; see the docs!).
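
Once the new environment is active, a quick sanity check that TensorFlow 2 can actually see your GPU (plain TensorFlow calls, nothing DLC-specific) is:

    import tensorflow as tf

    # Should print a 2.x version and a non-empty list of GPU devices
    # if CUDA and cuDNN are set up correctly.
    print(tf.__version__)
    print(tf.config.list_physical_devices("GPU"))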

Along with this major change, there are some excellent updates to the code base for RC3!

For the full change log, see here: https://github.com/DeepLabCut/DeepLabCut/compare/2.2rc2...master



RC2: 2.2rc2 is up, and has some great new additions!

To upgrade simply run `pip install 'deeplabcut[gui]'==2.2rc2` (this is fully backwards compatible with your 2.2rc1 projects, and we 1000% recommend upgrading). Read the release notes here (which have links to detailed code): https://github.com/DeepLabCut/DeepLabCut/releases/tag/2.2rc2

New features:

  • DLCRNet is the new default net when you use maDLC (you can also select it easily in the Project GUI)! 🔥

  • smarter inclusion of identity (ID) information & a new way to track using ID only:

    • Automatically stitch with identity when possible!

    • Allow tracking with identity only (optimal ID assignment based on soft voting of body part identity predictions)

    • to use: deeplabcut.convert_detections2tracklets(..., identity_only=True)

    • read more here + TL;DR: if you trained with identity=True you can leverage this in tracking (it was already leveraged in assembly). This is ideal when your animal is occluded or disappears, as in this example (awesome data from co-author William Menegas!). Read more about predicting animal identity from images in our preprint here. A combined usage sketch follows the bug-fix list below.

  • you can now define the camera frame rate when making labeled videos!

  • dynamically allocate GPU memory in TensorFlow for video analysis: deeplabcut.analyze_videos(config_path, [new_video_path], allow_growth=True)

  • API update to create a video with all detections (easier to use!): deeplabcut.create_video_with_all_detections(config_path, ['/fullpath/project/videos/testVideo.mp4']) -> now you can quickly make a video right after the pose estimation step (to check quality on a video before doing tracking).

  • you can fund us in 1 click 🙏 🤩 💖

  • Check for the addition of new animals to the config while labeling (namely, if you find you have MORE animals than you thought in your maDLC project, just add a new name to the config.yaml, then re-open the labeling GUI and go!)

  • Upgrades:

    • select DLCRNet from main project manager GUI!

    • installation docs updated!

    • tracklet docs upgraded!

    • docstring upgrade (thanks @Joilence)

    • Roadmap updated!

    • user-defined skeleton possible (but not advised per se)

    • much faster dataset creation!

    • if symlink fails, move videos during project creation

  • Bug Fixes:

    • single animal mode supported in new tracklet stitcher

    • return 0 mAP when no reasonable assemblies are found (vs. just an IndexError); i.e., more informative behavior now

    • force data frame to start at 0 (i.e., even if no animal is visible)

    • image panel error fixed

    • path clean up (thanks @sin-mike)

    • use the user-input individual names

    • config integrity on re-crop fix

    • slicing error (thanks @backyardbiomech!)
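
Putting several of the rc2 additions above together, a minimal post-training sketch might look like this (paths are placeholders; identity_only=True of course requires a model trained with identity: true):

    import deeplabcut

    config_path = "/fullpath/project/config.yaml"        # placeholder
    videos = ["/fullpath/project/videos/testVideo.mp4"]  # placeholder

    # Pose estimation, letting TensorFlow grow GPU memory as needed.
    deeplabcut.analyze_videos(config_path, videos, allow_growth=True)

    # Quick quality check: a video of all raw detections, before any tracking.
    deeplabcut.create_video_with_all_detections(config_path, videos)

    # Build tracklets; here using identity predictions only for assignment.
    deeplabcut.convert_detections2tracklets(config_path, videos, identity_only=True)

    # Stitch tracklets into full tracks and make a labeled video.
    deeplabcut.stitch_tracklets(config_path, videos)
    deeplabcut.create_labeled_video(config_path, videos)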



RC1: multi-animal DeepLabCut is out (rc1)!

After a (long) beta period (a lab move, COVID delays, and many adventures), we learned a lot from our users and re-tooled the system. This blog post will cover what we learned, what we changed, and the new advanced features of #maDLC!

Firstly, we realized the beta version required a lot more domain knowledge than we ultimately wanted for our user base. Although it has the feel of the standard DLC workflow, there are/were some really key decisions that needed to be made. This led to several failures that we as developers could “instantly” see the issue with, but we fully appreciate that it is a lot of documentation, and without the preprint the science behind it was harder (or impossible) to grasp. So, we went “back to the drawing board” over the past year and thought about ways to make it as data-driven as possible…

What emerged is 2.2rc1: it has novel data-driven methods, a streamlined workflow, and better performance.

Here are key decisions & new things related to usability:

  • 🥇 Is multi-animal DLC truly right for your problem? We saw many easily visually distinguishable animals being used with the #maDLC beta, which by default makes the strong assumption that the animals have no unique visual properties. Even two mice where one has a marked tail violate this assumption, unless you use our ID network output (see next point).

  • 🐹🐭 If you can distinguish animals, leverage it! One feature we have worked on over the last few years is using identity (visual features) to learn “who is who.” We did not release this fully (it was a silent feature), as we felt it would complicate beta testing, but at the end of the day, most people need this (see point above). Simply set identity: true in your config before creating a training set & training (see the config sketch after this list)!

  • 💡 Which keypoints to label? More is better. Even in our original paper, we showed more labels == better performance. I.e., 4 labels on a mouse are better than 2! 8 are better than 4! This is especially true with multiple animals that have occlusions. Have two mice that are on top of each other? Our networks learn who-is-who, and which point belongs to whom, but you need enough points! When in doubt, adding a few extra keypoints will only help - you can ignore them in your analysis later if you want.

  • 🗝 The skeleton: you need(ed) to over-connect… and by over-connect, we really mean everything-to-everything is best. Now, we (1) take this decision away from the user, and (2) introduce a novel, fully data-driven approach. Using your data, we now find the optimal skeleton to get the best performance at the best speed. Super cool, right?

  • 📈 Animal assembly, the crucial step of linking which keypoints belong to which individual. Here, we have an entirely new algorithm that is ultra fast, more accurate, and fully data-driven. This novel approach, introduced in this version, is much more robust and makes tracking even better…

  • 🚦 Tracking types. We introduce a new tracker type, the ellipse tracker, that beats our prior versions. Basically, use ellipse unless you have a good reason not to.
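
As a concrete example of the identity point above, the flag lives in your project’s config.yaml. You can set it by hand in a text editor, or programmatically roughly as sketched below (the edit_config import path is our assumption; if it differs in your version, hand-editing the YAML works just as well):

    from deeplabcut.utils.auxiliaryfunctions import edit_config  # import path assumed

    config_path = "/fullpath/project/config.yaml"  # placeholder

    # Turn on identity learning BEFORE creating the training set and training.
    # (Equivalently, open config.yaml and set `identity: true` by hand.)
    edit_config(config_path, {"identity": True})

    # The ellipse tracker mentioned above is likewise a config choice in
    # multi-animal projects (to our understanding, the `default_track_method` key):
    # edit_config(config_path, {"default_track_method": "ellipse"})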

Now, how to use it!

  • 💻 We have re-written many of the docs for a streamlined process. If you go HERE, you will be linked to the new “Doc Hub” that guides you through install, opening the software, and either standard DLC or maDLC.

  • 🐭🐭🐭 maDLC will feel very much like standard DLC, and many parts are now entirely data-driven. This means your big task is to get high-quality data! 💪

  • 🛠 Future note: we will be releasing a DLC “cookbook” to give both beginner & advanced users worked examples, stay tuned!

What are the major new scientific contributions for multi-animal DLC?

  • We introduce four datasets of varying difficulty for benchmarking multi-animal pose estimation networks. 🐭🐒🐠🐁
  • We propose a novel multi-task architecture that predicts multiple conditional random fields and can therefore predict keypoints, limbs, and animal identity.
  • We develop a novel data-driven method for animal assembly that finds the optimal skeleton without user input and is state-of-the-art (compared to top models on COCO).
  • We provide a new tracking module that is locally and globally optimizable.
  • We show that one can predict the identity of animals, which is useful to link animals across time when temporally-based tracking fails.
  • We extend the open source DeepLabCut software to multi-animal scenarios and provide new graphical user interfaces (GUIs) to allow keypoint annotation and check reconstructed tracks.