
Action Recognition

Here is the model zoo for the video action recognition task. The figure below plots inference throughput against validation accuracy for the Kinetics400 pre-trained models.

[Figure: inference throughput vs. validation accuracy of Kinetics400 pre-trained models]

Hint

Training commands work with this script: Download train_recognizer.py

A model can have several sets of trained parameters, each identified by a hashtag. Any listed set of parameters can be downloaded by passing the corresponding hashtag.

  • Download default pretrained weights: net = get_model('i3d_resnet50_v1_kinetics400', pretrained=True)

  • Download weights given a hashtag: net = get_model('i3d_resnet50_v1_kinetics400', pretrained='568a722e')

The test script Download test_recognizer.py can be used for evaluating the models on various datasets.

The inference script Download inference.py can be used to run inference on a list of videos (for demo purposes).
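The inference script ultimately turns per-video network outputs into ranked class predictions. As a rough sketch of that post-processing step (the function below is illustrative and not taken from inference.py), converting raw logits into top-k labels looks like:

```python
import math

def topk_predictions(logits, labels, k=5):
    """Convert raw network logits to the k most likely class labels."""
    # Softmax with max subtraction for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort class indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return [(labels[i], probs[i]) for i in order[:k]]

# Toy example with 4 classes instead of Kinetics400's 400.
labels = ["abseiling", "archery", "bowling", "juggling"]
top2 = topk_predictions([0.1, 2.0, -1.0, 0.5], labels, k=2)
# "archery" (the largest logit) comes out on top.
```

The same routine applies regardless of the backbone; only the length of the logit vector (400 for Kinetics400, 101 for UCF101, and so on) changes.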

Kinetics400 Dataset

The following table lists pre-trained models trained on Kinetics400.

Note

Our pre-trained models reproduce results from “Temporal Segment Networks (TSN)” [2], “Inflated 3D Networks (I3D)” [3], “Non-local Neural Networks” [4] and “SlowFast” [5]. Please check the reference papers for further information.

InceptionV3 is trained and evaluated with an input size of 299x299.

Clip Length is the number of frames within an input clip. 32 (64/2) means the clip has 32 frames, obtained by randomly selecting 64 consecutive frames from the video and then keeping every other frame. This strategy is widely adopted to reduce computation and memory cost.
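The "64/2" sampling strategy above can be sketched in a few lines (a minimal illustration, not the toolkit's actual data loader):

```python
import random

def sample_clip(num_video_frames, window=64, stride=2):
    """Sample a clip as described above: pick `window` consecutive
    frames at a random start, then keep every `stride`-th frame."""
    start = random.randint(0, num_video_frames - window)
    return list(range(start, start + window, stride))

random.seed(0)
clip = sample_clip(300)  # a 300-frame video
# len(clip) == 32, i.e. the "32 (64/2)" setting in the table
```

With window=64 and stride=2 this always yields 32 frame indices spanning 64 consecutive frames, which is why the table writes the clip length as 32 (64/2).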

Segments is the number of segments used during training. For testing (the numbers reported here), we use 250 views for 2D networks (25 frames and 10-crop) and 30 views for 3D networks (10 clips and 3-crop), following the common convention.

For the SlowFast family of networks, our performance has a small gap to the numbers reported in the paper, because SlowFast performance depends heavily on the video frame rate. The official implementation re-encodes every video to a fixed frame rate of 30. For a fair comparison with the other methods here, we do not adopt that strategy, which accounts for the small gap.
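A note on reading the SlowFast clip lengths in the table below: the totals appear to be the slow-pathway frames plus a 32-frame fast pathway, i.e. 4 + 32 = 36 for slowfast_4x16 and 8 + 32 = 40 for slowfast_8x8. A quick sanity check of that reading (our interpretation of the table, not code from the toolkit):

```python
def slowfast_clip_length(slow_frames, fast_frames=32):
    """Total frames per clip = slow-pathway frames + fast-pathway frames."""
    return slow_frames + fast_frames

assert slowfast_clip_length(4) == 36  # slowfast_4x16
assert slowfast_clip_length(8) == 40  # slowfast_8x8
```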

| Name | Pretrained | Segments | Clip Length | Top-1 | Hashtag | Train Command | Train Log |
| --- | --- | --- | --- | --- | --- | --- | --- |
| inceptionv1_kinetics400 [2] | ImageNet | 7 | 1 | 69.1 | 6dcdafb1 | shell script | log |
| inceptionv3_kinetics400 [2] | ImageNet | 7 | 1 | 72.5 | 8a4a6946 | shell script | log |
| resnet18_v1b_kinetics400 [2] | ImageNet | 7 | 1 | 65.5 | 46d5a985 | shell script | log |
| resnet34_v1b_kinetics400 [2] | ImageNet | 7 | 1 | 69.1 | 8a8d0d8d | shell script | log |
| resnet50_v1b_kinetics400 [2] | ImageNet | 7 | 1 | 69.9 | cc757e5c | shell script | log |
| resnet101_v1b_kinetics400 [2] | ImageNet | 7 | 1 | 71.3 | 5bb6098e | shell script | log |
| resnet152_v1b_kinetics400 [2] | ImageNet | 7 | 1 | 71.5 | 9bc70c66 | shell script | log |
| i3d_inceptionv1_kinetics400 [3] | ImageNet | 1 | 32 (64/2) | 71.8 | 81e0be10 | shell script | log |
| i3d_inceptionv3_kinetics400 [3] | ImageNet | 1 | 32 (64/2) | 73.6 | f14f8a99 | shell script | log |
| i3d_resnet50_v1_kinetics400 [4] | ImageNet | 1 | 32 (64/2) | 74.0 | 568a722e | shell script | log |
| i3d_resnet101_v1_kinetics400 [4] | ImageNet | 1 | 32 (64/2) | 75.1 | 6b69f655 | shell script | log |
| i3d_nl5_resnet50_v1_kinetics400 [4] | ImageNet | 1 | 32 (64/2) | 75.2 | 3c0e47ea | shell script | log |
| i3d_nl10_resnet50_v1_kinetics400 [4] | ImageNet | 1 | 32 (64/2) | 75.3 | bfb58c41 | shell script | log |
| i3d_nl5_resnet101_v1_kinetics400 [4] | ImageNet | 1 | 32 (64/2) | 76.0 | fbfc1d30 | shell script | log |
| i3d_nl10_resnet101_v1_kinetics400 [4] | ImageNet | 1 | 32 (64/2) | 76.1 | 59186c31 | shell script | log |
| slowfast_4x16_resnet50_kinetics400 [5] | ImageNet | 1 | 36 (64/1) | 75.3 | 9d650f51 | shell script | log |
| slowfast_8x8_resnet50_kinetics400 [5] | ImageNet | 1 | 40 (64/1) | 76.6 | d6b25339 | shell script | log |
| slowfast_8x8_resnet101_kinetics400 [5] | ImageNet | 1 | 40 (64/1) | 77.2 | fbde1a7c | shell script | log |

UCF101 Dataset

The following table lists pre-trained models trained on UCF101.

Note

Our pre-trained models reproduce results from “Temporal Segment Networks (TSN)” [2] and “Inflated 3D Networks (I3D)” [3]. Please check the reference papers for further information.

The top-1 accuracy numbers shown below are for the official split 1 of the UCF101 dataset, not the average over the 3 splits.

InceptionV3 is trained and evaluated with an input size of 299x299.

K400 denotes the Kinetics400 dataset, i.e. the model is initialized with weights pre-trained on Kinetics400.

| Name | Pretrained | Segments | Clip Length | Top-1 | Hashtag | Train Command | Train Log |
| --- | --- | --- | --- | --- | --- | --- | --- |
| vgg16_ucf101 [2] | ImageNet | 3 | 1 | 83.4 | d6dc1bba | shell script | log |
| vgg16_ucf101 [1] | ImageNet | 1 | 1 | 81.5 | 05e319d4 | shell script | log |
| inceptionv3_ucf101 [2] | ImageNet | 3 | 1 | 88.1 | 13ef5c3b | shell script | log |
| inceptionv3_ucf101 [1] | ImageNet | 1 | 1 | 85.6 | 0c453da8 | shell script | log |
| i3d_resnet50_v1_ucf101 [3] | ImageNet | 1 | 32 (64/2) | 83.9 | 7afc7286 | shell script | log |
| i3d_resnet50_v1_ucf101 [3] | ImageNet, K400 | 1 | 32 (64/2) | 95.4 | 760d0981 | shell script | log |

HMDB51 Dataset

The following table lists pre-trained models trained on HMDB51.

Note

Our pre-trained models reproduce results from “Temporal Segment Networks (TSN)” [2] and “Inflated 3D Networks (I3D)” [3]. Please check the reference papers for further information.

The top-1 accuracy numbers shown below are for the official split 1 of the HMDB51 dataset, not the average over the 3 splits.

| Name | Pretrained | Segments | Clip Length | Top-1 | Hashtag | Train Command | Train Log |
| --- | --- | --- | --- | --- | --- | --- | --- |
| resnet50_v1b_hmdb51 [2] | ImageNet | 3 | 1 | 55.2 | 682591e2 | shell script | log |
| resnet50_v1b_hmdb51 [1] | ImageNet | 1 | 1 | 52.2 | ba66ee4b | shell script | log |
| i3d_resnet50_v1_hmdb51 [3] | ImageNet | 1 | 32 (64/2) | 48.5 | 0d0ad559 | shell script | log |
| i3d_resnet50_v1_hmdb51 [3] | ImageNet, K400 | 1 | 32 (64/2) | 70.9 | 2ec6bf01 | shell script | log |

Something-Something-V2 Dataset

The following table lists pre-trained models trained on Something-Something-V2.

Note

Our pre-trained models reproduce results from “Temporal Segment Networks (TSN)” [2] and “Inflated 3D Networks (I3D)” [3]. Please check the reference papers for further information.

| Name | Pretrained | Segments | Clip Length | Top-1 | Hashtag | Train Command | Train Log |
| --- | --- | --- | --- | --- | --- | --- | --- |
| resnet50_v1b_sthsthv2 [2] | ImageNet | 8 | 1 | 35.5 | 80ee0c6b | shell script | log |
| i3d_resnet50_v1_sthsthv2 [3] | ImageNet | 1 | 16 (32/2) | 50.6 | 01961e4c | shell script | log |

[1] Limin Wang, Yuanjun Xiong, Zhe Wang and Yu Qiao. “Towards Good Practices for Very Deep Two-Stream ConvNets.” arXiv preprint arXiv:1507.02159, 2015.

[2] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang and Luc Van Gool. “Temporal Segment Networks: Towards Good Practices for Deep Action Recognition.” In European Conference on Computer Vision (ECCV), 2016.

[3] Joao Carreira and Andrew Zisserman. “Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset.” In Computer Vision and Pattern Recognition (CVPR), 2017.

[4] Xiaolong Wang, Ross Girshick, Abhinav Gupta and Kaiming He. “Non-local Neural Networks.” In Computer Vision and Pattern Recognition (CVPR), 2018.

[5] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik and Kaiming He. “SlowFast Networks for Video Recognition.” In International Conference on Computer Vision (ICCV), 2019.