This is the official implementation of "Multi-scale Spatial Temporal Graph Convolutional Network for Skeleton-Based Action Recognition", AAAI 2021 (pdf).
- Download the raw data of NTU RGB+D.
- Preprocess the data with `python data_gen/ntu_gendata.py`
- Generate the bone data with `python data_gen/gen_bone_data.py`
- Generate the motion data with `python data_gen/gen_motion_data.py`
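As a rough illustration of what the bone-generation step produces, the sketch below computes bone vectors as differences between connected joints. This is an assumption about how `gen_bone_data.py` works, and the edge list is a small illustrative subset of the NTU RGB+D 25-joint skeleton, not the full graph.

```python
import numpy as np

# Assumption: the bone stream stores, for each skeleton edge, the vector
# from the parent joint to the child joint. EDGES below is illustrative only.
EDGES = [(0, 1), (1, 20), (20, 2), (2, 3)]  # (parent, child), 0-indexed

def joints_to_bones(joints, edges):
    """joints: (C, T, V, M) array; returns bone vectors of the same shape."""
    bones = np.zeros_like(joints)
    for parent, child in edges:
        bones[:, :, child, :] = joints[:, :, child, :] - joints[:, :, parent, :]
    return bones

# Toy input: 3 coordinate channels, 2 frames, 25 joints, 1 person.
joints = np.random.randn(3, 2, 25, 1)
bones = joints_to_bones(joints, EDGES)
print(bones.shape)  # (3, 2, 25, 1)
```

Joints that appear in no edge (here, most of them) keep zero bone vectors in this sketch.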
- Download the Kinetics-Skeleton data from the Google Drive link provided by st-gcn.
- Preprocess the data with `python data_gen/kinetics_gendata.py`
- Generate the bone data with `python data_gen/gen_bone_data.py`
- Generate the motion data with `python data_gen/gen_motion_data.py`
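For intuition on the motion stream, the sketch below takes frame-to-frame differences along the time axis. This is an assumption about what `gen_motion_data.py` computes; zero-padding the last frame so the output keeps the input shape is also an assumption.

```python
import numpy as np

def to_motion(data):
    """data: (C, T, V, M) array; returns temporal differences, same shape.
    The last frame has no successor, so its motion is left as zeros."""
    motion = np.zeros_like(data)
    motion[:, :-1] = data[:, 1:] - data[:, :-1]
    return motion

# Toy input: 1 channel, 4 frames, 2 joints, 1 person.
data = np.arange(8, dtype=float).reshape(1, 4, 2, 1)
motion = to_motion(data)
print(motion[0, :, 0, 0])  # [2. 2. 2. 0.]
```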
Edit the corresponding config file (e.g. under `config/ntu/`) to match the dataset and stream you want to train.
```shell
# train on the NTU RGB+D xview joint stream
$ sh run.sh 0,1,2,3 4 2022 0
# or, equivalently
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=2022 main.py --config config/ntu/train_joint_amstgcn_ntu.yaml
```
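For readers unfamiliar with the launch command: `torch.distributed.launch` spawns `--nproc_per_node` worker processes and passes each one a `--local_rank` argument, which `main.py` is assumed to parse alongside `--config`. The sketch below reproduces only that argument parsing; the real `main.py` would go on to set the CUDA device and initialise the process group, which is not shown here.

```python
import argparse

def parse_launch_args(argv):
    # Hypothetical sketch of main.py's CLI: --local_rank is injected by
    # torch.distributed.launch, --config selects the training YAML.
    parser = argparse.ArgumentParser()
    parser.add_argument('--local_rank', type=int, default=0)
    parser.add_argument('--config', type=str, required=True)
    return parser.parse_args(argv)

args = parse_launch_args(
    ['--local_rank', '2', '--config', 'config/ntu/train_joint_amstgcn_ntu.yaml'])
print(args.local_rank, args.config)  # 2 config/ntu/train_joint_amstgcn_ntu.yaml
```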
Please cite our paper if you find this repository useful in your research:
```
@inproceedings{chen2021multi,
  title={Multi-scale spatial temporal graph convolutional network for skeleton-based action recognition},
  author={Chen, Zhan and Li, Sicheng and Yang, Bing and Li, Qinghan and Liu, Hong},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={2},
  pages={1113--1122},
  year={2021}
}
```
The framework of our code is extended from the following repositories. We sincerely thank the authors for releasing their code.
This project is licensed under the terms of the MIT license.
For any questions, feel free to contact: [email protected]
or [email protected]