In Defense of the Triplet Loss for Person Re-Identification

Alexander Hermans, Lucas Beyer, Bastian Leibe
arXiv:1703.07737

TL;DR: Use the triplet loss; hard mining within the mini-batch performs great, resembling offline semi-hard mining while being much more efficient.

In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this, thanks to the notable publication of the Market-1501 and MARS datasets and several strong deep learning approaches. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms any other published method by a large margin.
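The "variant of the triplet loss" referred to above is the paper's batch-hard formulation: for each anchor in the mini-batch, pick the farthest same-identity sample and the closest different-identity sample, then apply the usual margin hinge. A minimal NumPy sketch of this idea (function name and margin value are illustrative, not from the paper's code):

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Sketch of batch-hard triplet mining: for each anchor, use the
    hardest positive (farthest same-label sample) and hardest negative
    (closest different-label sample) inside the mini-batch."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)

    # Pairwise Euclidean distances between all embeddings in the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = labels[:, None] == labels[None, :]           # same-identity mask
    pos_mask = same & ~np.eye(len(labels), dtype=bool)  # exclude the anchor itself
    neg_mask = ~same

    # Hardest positive: maximum distance over same-identity samples.
    hardest_pos = np.where(pos_mask, dist, -np.inf).max(axis=1)
    # Hardest negative: minimum distance over different-identity samples.
    hardest_neg = np.where(neg_mask, dist, np.inf).min(axis=1)

    # Margin hinge, averaged over all anchors in the batch.
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()
```

Because every sample serves as an anchor and the hard examples are selected inside the batch, no separate offline mining pass over the dataset is needed.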

BibTeX:

@article{HermansBeyer2017Arxiv,
  title   = {{In Defense of the Triplet Loss for Person Re-Identification}},
  author  = {Hermans*, Alexander and Beyer*, Lucas and Leibe, Bastian},
  journal = {arXiv preprint arXiv:1703.07737},
  year    = {2017}
}
