This repository contains all the code you'll need to train your own version of MARS.
To set up your GPU to run TensorFlow, follow the "Setting up your GPU for Tensorflow" section of the instructions from the end-user version of MARS, then follow steps 1-2 of the instructions to install conda: for Linux | for Windows
Next, clone this GitHub repository and its submodules by calling
git clone --recurse-submodules https://github.com/neuroethology/MARS_Developer
Navigate into the MARS_Developer directory you just created, and install the conda environment:
conda env create -f mars_dev_<linux/windows>.yml
and activate this environment by calling
conda activate mars_dev
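With the environment active, you can optionally confirm that TensorFlow can see your GPU. This is a minimal sanity check, assuming the TensorFlow version installed by the environment exposes `tf.config.experimental` (TF 1.14+; in TF 2.x the same call exists without `.experimental`):

```python
# Minimal sanity check that the GPU is visible to TensorFlow.
# Assumes a TensorFlow build with tf.config.experimental available.
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    print('TensorFlow sees the following GPU(s):', [gpu.name for gpu in gpus])
else:
    print('No GPU visible to TensorFlow; revisit the GPU setup instructions above.')
```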
To be able to run Jupyter notebooks from within the mars_dev environment, run the following commands:
conda install -c anaconda ipykernel
python -m ipykernel install --user --name=mars_dev
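If you'd like to double-check that the kernel was registered, one optional way (a sketch, not part of the official instructions) is to query Jupyter's kernelspec manager:

```python
# Optional check that the mars_dev kernel is now registered with Jupyter.
from jupyter_client.kernelspec import KernelSpecManager

kernels = KernelSpecManager().find_kernel_specs()
print('mars_dev kernel registered:', 'mars_dev' in kernels)
```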
Finally, to install the MARSeval module, used for evaluating the performance of the detection and pose models:
On Linux:
pip install git+https://github.com/neuroethology/MARS_pycocotools.git#egg=MARSeval\&subdirectory=PythonAPI
On Windows:
- Install Microsoft Visual C++ Build Tools from here.
pip install git+https://github.com/neuroethology/MARS_pycocotools.git#egg=MARSeval^&subdirectory=PythonAPI
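To verify the install, try importing the package. The import paths below are an assumption based on MARSeval being a fork of pycocotools; adjust them if your installed layout differs:

```python
# Hedged sanity check: these import paths assume MARSeval mirrors the
# pycocotools layout (coco / cocoeval submodules). If the import fails,
# inspect the installed files with `pip show -f MARSeval`.
from MARSeval.coco import COCO
from MARSeval.cocoeval import COCOeval
print('MARSeval imported successfully.')
```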
MARS processes your videos in three steps:
- Detection - detects the location of animals in each video frame.
- Pose estimation - estimates the posture of each animal in terms of a set of anatomically defined "keypoints".
- Behavior classification - detects social behaviors of interest based on the poses of each animal.
Each of these steps can be fine-tuned to your own data using the code in this repository.
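As a rough mental model (not the actual MARS API), the three stages chain together like this; the `detector`, `pose_model`, and `behavior_classifier` arguments are placeholders for the modules you'll train in the steps below:

```python
# Conceptual sketch of the three-stage MARS pipeline. The callables passed in
# are placeholders standing in for the trained detection, pose, and behavior
# modules described above, not real MARS functions.
def process_video(frames, detector, pose_model, behavior_classifier):
    poses_per_frame = []
    for frame in frames:
        boxes = detector(frame)                                # 1. locate each animal
        keypoints = [pose_model(frame, box) for box in boxes]  # 2. estimate each animal's pose
        poses_per_frame.append(keypoints)
    return behavior_classifier(poses_per_frame)                # 3. classify behavior from pose trajectories
```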
To get started, please check out our two Tutorial notebooks:
Training MARS to run on your own experiments includes the following steps, outlined in the Pose and Behavior tutorials above:
MARS uses a set file structure to keep track of data and models associated with your project. We'll assume you have already settled on a recording setup, and have a set of videos on hand to be analyzed.
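The tutorial notebooks create this project folder for you, so follow them rather than building folders by hand; purely as an illustration (the path and subfolder names below are hypothetical placeholders, not the structure MARS actually generates), a project skeleton looks something like:

```python
# Purely illustrative sketch of a MARS-style project folder. The subfolder
# names here are hypothetical placeholders -- use the tutorial notebooks to
# create the real structure.
import os

project_dir = os.path.join('/path/to/your/experiments', 'my_mars_project')
for subfolder in ['annotation_data', 'detection', 'pose', 'behavior']:
    os.makedirs(os.path.join(project_dir, subfolder), exist_ok=True)
print('Example project skeleton created at', project_dir)
```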
We provide code for crowdsourcing pose annotation to a public workforce via Amazon SageMaker. Running this code requires an Amazon Web Services (AWS) account and some initial time investment in setting up the custom annotation job. A typical pose annotation job, at high annotation quality and high label confidence (5 repeat annotations per image), costs ~68 cents per image.
If you've already collected pose annotations via another interface such as DeepLabCut, you can skip directly to the post-processing step (step 2.3) to format your data for training.
Next, we need to teach MARS what your animals look like. The Multibox Detection module covers training, validating, and testing your mouse detector.
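For intuition on what validating and testing a detector involves, the core measurement is the overlap (intersection-over-union, IoU) between predicted and ground-truth bounding boxes. The snippet below is a generic illustration of that computation, not code from the Multibox Detection module:

```python
# Generic intersection-over-union (IoU) between two boxes in [x1, y1, x2, y2]
# format -- the standard overlap score used when validating a detector.
# Illustrative sketch only, not MARS's evaluation code.
def iou(box_a, box_b):
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

# Example: a predicted box vs. a ground-truth box.
print(iou([10, 10, 50, 50], [20, 20, 60, 60]))  # ~0.39
```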
Once you can detect your mice, we want to estimate their poses. In this step we'll train and evaluate a mouse pose estimator for your videos. The Hourglass Pose module covers training, validating, and testing a stacked hourglass model for animal pose estimation.
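Stacked hourglass networks predict one heatmap per keypoint, and each keypoint's location is read off as the peak of its heatmap. The snippet below illustrates that general decoding step; it is not the Hourglass Pose module's actual code:

```python
# Illustrative decoding of keypoints from hourglass-style heatmaps: one 2-D
# heatmap per keypoint, keypoint location = location of the peak. This shows
# the general technique, not MARS's implementation.
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """heatmaps: array of shape (num_keypoints, H, W). Returns (x, y, score) per keypoint."""
    keypoints = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        keypoints.append((int(x), int(y), float(hm[y, x])))
    return keypoints

# Tiny example with two fake 4x4 heatmaps.
fake = np.zeros((2, 4, 4))
fake[0, 1, 2] = 1.0   # keypoint 0 peaks at (x=2, y=1)
fake[1, 3, 0] = 0.8   # keypoint 1 peaks at (x=0, y=3)
print(heatmaps_to_keypoints(fake))  # [(2, 1, 1.0), (0, 3, 0.8)]
```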
Now that you have a working detector and pose estimator, we'll add them to your end-user version of MARS so you can run them on new videos!
Once you've applied your trained pose estimator on some new behavior videos, you can annotate behaviors of interest in those videos and train MARS to detect those behaviors automatically.
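As a simplified illustration of this last step (not MARS's actual feature set or classifier), behavior classification boils down to computing features from the animals' poses on each frame and fitting a supervised classifier against your frame-wise behavior annotations:

```python
# Simplified sketch of pose-based behavior classification: hand-crafted
# features from two animals' keypoints, fit with a generic classifier.
# The features and model here are illustrative assumptions, not what MARS uses.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def frame_features(pose_a, pose_b):
    """pose_a, pose_b: (num_keypoints, 2) arrays of keypoint (x, y) coordinates."""
    centroid_a, centroid_b = pose_a.mean(axis=0), pose_b.mean(axis=0)
    dist = np.linalg.norm(centroid_a - centroid_b)  # inter-animal distance
    spread_a = pose_a.std()                         # rough body-extension proxies
    spread_b = pose_b.std()
    return [dist, spread_a, spread_b]

# Fake training data: 200 frames, 2 animals, 7 keypoints each, binary labels.
rng = np.random.default_rng(0)
poses = rng.uniform(0, 100, size=(200, 2, 7, 2))
labels = rng.integers(0, 2, size=200)               # e.g. attack vs. not-attack

X = np.array([frame_features(p[0], p[1]) for p in poses])
clf = GradientBoostingClassifier().fit(X, labels)
print('Training accuracy on fake data:', clf.score(X, labels))
```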