1. Getting Started with Pre-trained TSN Models on UCF101

UCF101 is an action recognition dataset of realistic action videos collected from YouTube. With 13,320 short trimmed videos from 101 action categories, it is one of the most widely used datasets in the research community for benchmarking state-of-the-art video action recognition models.

In this tutorial, we will demonstrate how to load a pre-trained model from gluoncv-model-zoo and classify video frames from the Internet or your local disk into one of the 101 action classes.

Step by Step

We will show two examples here. For simplicity, we first try out a pre-trained UCF101 model on a single video frame. This is essentially an image action recognition problem.

First, please follow the installation guide to install MXNet and GluonCV if you haven’t done so yet.
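
If you only need a quick start, a pip one-liner usually suffices (a minimal sketch assuming a CPU-only setup; see the installation guide for GPU builds and version pinning):

pip install --upgrade mxnet gluoncv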

import matplotlib.pyplot as plt

import mxnet as mx
from mxnet import gluon, nd, image
from mxnet.gluon.data.vision import transforms
from gluoncv.data.transforms import video
from gluoncv import utils
from gluoncv.model_zoo import get_model

Then, we download and show the example image:

url = 'https://github.com/bryanyzhu/tiny-ucf101/raw/master/ThrowDiscus.png'
im_fname = utils.download(url)

img = image.imread(im_fname)

plt.imshow(img.asnumpy())
plt.show()
[Figure: the downloaded example frame]

Out:

Downloading ThrowDiscus.png from https://github.com/bryanyzhu/tiny-ucf101/raw/master/ThrowDiscus.png...


In case you don’t recognize it, the image shows a man throwing a discus. :)

Now we define transformations for the image.

transform_fn = transforms.Compose([
    video.VideoCenterCrop(size=224),
    video.VideoToTensor(),
    video.VideoNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

This transformation function does three things: center-crop the image to 224x224, transpose it to (num_channels, height, width) layout, and normalize it with the mean and standard deviation calculated across all ImageNet images.
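
As a quick sanity check, you can compare the array shapes before and after the transform (a minimal sketch; the example input size in the comment is an assumption about the downloaded image):

print(img.shape)          # raw frame, channels last, e.g. (240, 320, 3)
out = transform_fn(img)
print(out.shape)          # (3, 224, 224): channels first, center-cropped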

What does the transformed image look like?

img = transform_fn(img)
plt.imshow(nd.transpose(img, (1,2,0)).asnumpy())
plt.show()
[Figure: the transformed, normalized frame]

Can’t recognize anything? Don’t panic! Neither can we. The transformation makes the image more “model-friendly”, not more “human-friendly”.
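
If you want to see a recognizable image again, you can invert the normalization (a sketch, not part of the original script; the mean and standard deviation are the same ImageNet statistics used above):

mean = nd.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
std = nd.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
denorm = (img * std + mean).clip(0, 1)   # undo normalization, back to [0, 1]
plt.imshow(nd.transpose(denorm, (1, 2, 0)).asnumpy())
plt.show()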

Next, we load a pre-trained model.

net = get_model('vgg16_ucf101', nclass=101, pretrained=True)
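
The model zoo ships several UCF101 networks besides this VGG16-based one. A sketch for discovering them (get_model_list queries GluonCV's model registry; the '_ucf101' name suffix is an assumption to verify against your installed version):

from gluoncv.model_zoo import get_model_list

# Every model name registered in the zoo; UCF101 variants end in '_ucf101'.
print([name for name in get_model_list() if name.endswith('_ucf101')])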

Finally, we prepare the image and feed it to the model:

# Add a batch dimension and run a forward pass.
pred = net(img.expand_dims(axis=0))

classes = net.classes
topK = 5
# Indices of the five highest-scoring classes.
ind = nd.topk(pred, k=topK)[0].astype('int')
print('The input video frame is classified to be')
for i in range(topK):
    print('\t[%s], with probability %.3f.'%
          (classes[ind[i].asscalar()], nd.softmax(pred)[0][ind[i]].asscalar()))

Out:

The input video frame is classified to be
        [ThrowDiscus], with probability 0.998.
        [HorseRace], with probability 0.001.
        [VolleyballSpiking], with probability 0.001.
        [Hammering], with probability 0.000.
        [TennisSwing], with probability 0.000.

We can see that our pre-trained model predicts this video frame to be a throw-discus action with high confidence.

The next example shows how to perform video action recognition, i.e., apply the same pre-trained model to an entire video.

First, we download the video and sample its frames at roughly one frame per second. UCF101 videos are recorded at 25 frames per second, so we keep every 25th frame.


import cv2

url = 'https://github.com/bryanyzhu/tiny-ucf101/raw/master/v_Basketball_g01_c01.avi'
video_fname = utils.download(url)

cap = cv2.VideoCapture(video_fname)
cnt = 0
video_frames = []
while cap.isOpened():
    ret, frame = cap.read()
    cnt += 1
    # Keep every 25th decoded frame; note that OpenCV returns frames in
    # BGR channel order.
    if ret and cnt % 25 == 0:
        video_frames.append(frame)
    if not ret:
        break

cap.release()
print('We evenly extract %d frames from the video %s.' % (len(video_frames), video_fname))

Out:

Downloading v_Basketball_g01_c01.avi from https://github.com/bryanyzhu/tiny-ucf101/raw/master/v_Basketball_g01_c01.avi...

We evenly extract 5 frames from the video v_Basketball_g01_c01.avi.
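
The hard-coded 25 above matches UCF101's 25 fps encoding. For videos with other frame rates, you can query the rate via OpenCV's CAP_PROP_FPS property and derive the sampling step from it (a sketch, assuming the container reports a valid fps):

def sample_one_fps(video_path):
    # Keep roughly one frame per second, whatever the video's frame rate.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # 0.0 means unknown; fall back to 25
    step = max(int(round(fps)), 1)
    frames, cnt = [], 0
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        cnt += 1
        if cnt % step == 0:
            frames.append(frame)
    cap.release()
    return frames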

Now we transform each video frame and feed it into the model. In the end, we average the predictions from all the frames to obtain the video-level prediction.

final_pred = 0
for frame_img in video_frames:
    frame_img = transform_fn(frame_img)
    pred = net(frame_img.expand_dims(axis=0))
    final_pred += pred
# Average the per-frame predictions.
final_pred /= len(video_frames)

classes = net.classes
topK = 5
ind = nd.topk(final_pred, k=topK)[0].astype('int')
print('The input video is classified to be')
for i in range(topK):
    print('\t[%s], with probability %.3f.'%
          (classes[ind[i].asscalar()], nd.softmax(final_pred)[0][ind[i]].asscalar()))

Out:

The input video is classified to be
        [Basketball], with probability 0.978.
        [TennisSwing], with probability 0.020.
        [SoccerJuggling], with probability 0.002.
        [VolleyballSpiking], with probability 0.000.
        [JavelinThrow], with probability 0.000.

We can see that our pre-trained model predicts this video to be a basketball action with high confidence. Note that there are many ways to sample video frames and aggregate them into a final video-level prediction; averaging per-frame predictions, as we did here, is only one of them.
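
For instance, instead of averaging the raw network outputs as above, another common choice is to average the per-frame softmax probabilities (a sketch, not what this script does; it reuses the video_frames, net, and classes defined earlier):

# Average per-frame class probabilities instead of raw predictions.
probs = 0
for frame_img in video_frames:
    frame_img = transform_fn(frame_img)
    probs = probs + nd.softmax(net(frame_img.expand_dims(axis=0)))
probs /= len(video_frames)

ind = nd.topk(probs, k=5)[0].astype('int')
print('Top class: [%s], with probability %.3f.' %
      (classes[ind[0].asscalar()], probs[0][ind[0]].asscalar()))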

Next Step

If you would like to dive deeper into training TSN models on UCF101, feel free to read the next tutorial in this series.
