Note
Once again, in order to go through the tutorial faster, we are training on a small\nsubset of the original ``MINC-2500`` dataset, and for only 5 epochs. By training on the\nfull dataset for 40 epochs, you can expect a test accuracy of around 80%.
\n\n"
]
},
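{
"cell_type": "markdown",
"metadata": {},
"source": [
"The training loop below calls a ``test`` helper to compute accuracy on the validation and test splits. If it is not already defined earlier in the notebook, a minimal sketch (assuming ``mxnet`` is imported as ``mx``) could look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def test(net, val_data, ctx):\n    metric = mx.metric.Accuracy()\n    for batch in val_data:\n        # Split each batch across the available devices\n        data = gluon.utils.split_and_load(batch[0], ctx_list=ctx, batch_axis=0, even_split=False)\n        label = gluon.utils.split_and_load(batch[1], ctx_list=ctx, batch_axis=0, even_split=False)\n        outputs = [net(X) for X in data]\n        metric.update(label, outputs)\n    # Returns a (name, value) tuple, e.g. ('accuracy', 0.8)\n    return metric.get()"
]
},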
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"lr_counter = 0\nnum_batch = len(train_data)\n\nfor epoch in range(epochs):\n if epoch == lr_steps[lr_counter]:\n trainer.set_learning_rate(trainer.learning_rate*lr_factor)\n lr_counter += 1\n\n tic = time.time()\n train_loss = 0\n metric.reset()\n\n for i, batch in enumerate(train_data):\n data = gluon.utils.split_and_load(batch[0], ctx_list=ctx, batch_axis=0, even_split=False)\n label = gluon.utils.split_and_load(batch[1], ctx_list=ctx, batch_axis=0, even_split=False)\n with ag.record():\n outputs = [finetune_net(X) for X in data]\n loss = [L(yhat, y) for yhat, y in zip(outputs, label)]\n for l in loss:\n l.backward()\n\n trainer.step(batch_size)\n train_loss += sum([l.mean().asscalar() for l in loss]) / len(loss)\n\n metric.update(label, outputs)\n\n _, train_acc = metric.get()\n train_loss /= num_batch\n\n _, val_acc = test(finetune_net, val_data, ctx)\n\n print('[Epoch %d] Train-acc: %.3f, loss: %.3f | Val-acc: %.3f | time: %.1f' %\n (epoch, train_acc, train_loss, val_acc, time.time() - tic))\n\n_, test_acc = test(finetune_net, test_data, ctx)\nprint('[Finished] Test-acc: %.3f' % (test_acc))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next\n\nNow that you have learned to harness the power of transfer\nlearning, to learn more about training a model on\nImageNet, please read `this tutorial