Batch-Incremental Triplet Sampling for Training Triplet Networks Using Bayesian Updating Theorem

Citation:

Sikaroudi, M. et al., 2021. Batch-Incremental Triplet Sampling for Training Triplet Networks Using Bayesian Updating Theorem. In: 25th International Conference on Pattern Recognition (ICPR), January 2021, Milan, Italy (virtual). IEEE, p. 7. Available at: https://ieeexplore.ieee.org/document/9412478.

Date Presented:

January 2021

Abstract:

Variants of triplet networks are robust tools for learning a discriminative embedding subspace. Different triplet mining approaches exist for selecting the most suitable training triplets: some rely on the extreme distances between instances, while others make use of sampling. However, sampling triplets from stochastic distributions of the data, rather than merely from the existing embedding instances, can provide more discriminative information. In this work, we sample triplets from distributions of the data rather than from existing instances. We model the embedding of each class with a multivariate normal distribution. Using Bayesian updating and conjugate priors, we update the class distributions dynamically as new mini-batches of training data arrive. The proposed triplet mining with Bayesian updating can be used with any triplet-based loss function, e.g., the triplet loss or the Neighborhood Component Analysis (NCA) loss. Accordingly, our triplet mining approaches are called Bayesian Updating Triplet (BUT) and Bayesian Updating NCA (BUNCA), depending on which loss function is used. Experimental results on two public datasets, namely MNIST and histopathology colorectal cancer (CRC), substantiate the effectiveness of the proposed triplet mining method.
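The abstract describes modeling each class's embedding as a multivariate normal whose parameters are refined batch-incrementally via Bayesian updating with conjugate priors. A minimal sketch of this idea, assuming the standard Normal-Inverse-Wishart conjugate update for a Gaussian with unknown mean and covariance (the paper's exact parameterization may differ), could look like:

```python
import numpy as np

class BayesianGaussianClass:
    """Per-class multivariate normal whose parameters are updated
    batch-incrementally via the Normal-Inverse-Wishart conjugate prior.
    This is a generic sketch of the idea, not the paper's implementation."""

    def __init__(self, dim, mu0=None, kappa0=1.0, nu0=None, psi0=None):
        self.dim = dim
        self.mu = np.zeros(dim) if mu0 is None else np.asarray(mu0, float)
        self.kappa = kappa0
        self.nu = dim + 2 if nu0 is None else nu0      # keep nu > dim + 1
        self.psi = np.eye(dim) if psi0 is None else np.asarray(psi0, float)

    def update(self, batch):
        """Fold one mini-batch of embeddings (n x dim) into the posterior."""
        x = np.atleast_2d(batch)
        n = x.shape[0]
        xbar = x.mean(axis=0)
        diff = x - xbar
        scatter = diff.T @ diff
        kappa_n = self.kappa + n
        d0 = (xbar - self.mu)[:, None]
        # Update scale matrix first: it uses the *previous* mean.
        self.psi = self.psi + scatter + (self.kappa * n / kappa_n) * (d0 @ d0.T)
        self.mu = (self.kappa * self.mu + n * xbar) / kappa_n
        self.kappa = kappa_n
        self.nu += n

    def sample(self, size=1, rng=None):
        """Draw synthetic embeddings from the posterior-mean Gaussian,
        e.g., anchors/positives from one class, negatives from another."""
        rng = np.random.default_rng() if rng is None else rng
        cov = self.psi / (self.nu - self.dim - 1)  # posterior mean of covariance
        return rng.multivariate_normal(self.mu, cov, size=size)
```

A triplet would then be formed by drawing the anchor and positive from one class's distribution and the negative from another's, feeding the result into a triplet or NCA loss.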
