Abstract
In recent years, computer vision tasks have increasingly relied on deep learning techniques. When training data are insufficient, however, a model may not train properly and loses generalizability: trained on one dataset and tested on another, similar dataset, it can produce near-random predictions. This paper presents an unsupervised multi-source domain adaptation method that improves transfer learning and increases generalizability. In the proposed method, a new module infers the source of the input data from its extracted features. By making the feature extractor compete against this objective, the learned feature representation generalizes better across sources: the extracted representations become similar across sources, that is, generic and independent of any particular domain. Training also employs a non-Euclidean triplet loss function, which more effectively learns similar representations for samples belonging to the same class. We show that the proposed framework improves accuracy and outperforms already strong transfer learning methods, and that it performs particularly well when the dataset domains differ substantially or when data are scarce.
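For concreteness, below is a minimal PyTorch sketch of how such a setup is commonly realized. The abstract does not specify the architecture, the adversarial mechanism, or the exact non-Euclidean distance, so everything here is an illustrative assumption: a DANN-style gradient reversal layer stands in for the adversarial training of the source-inference module, and cosine (angular) distance stands in for the non-Euclidean metric. The names MultiSourceDANN, GradReverse, and cosine_triplet_loss are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the backward
    # pass, so the feature extractor is trained to *confuse* the source
    # classifier (the adversarial objective described in the abstract).
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def cosine_triplet_loss(anchor, positive, negative, margin=0.2):
    # Triplet loss under cosine distance (assumed here as one non-Euclidean
    # choice): pull same-class samples together, push other classes apart.
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

class MultiSourceDANN(nn.Module):
    def __init__(self, feat_dim=128, n_classes=10, n_sources=3):
        super().__init__()
        # Stand-in feature extractor for 28x28 grayscale inputs.
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, feat_dim))
        self.classifier = nn.Linear(feat_dim, n_classes)
        # The source-inference module: predicts which source domain
        # produced the input, using only the extracted features.
        self.domain_head = nn.Linear(feat_dim, n_sources)

    def forward(self, x, lam=1.0):
        z = self.features(x)
        class_logits = self.classifier(z)
        domain_logits = self.domain_head(GradReverse.apply(z, lam))
        return z, class_logits, domain_logits

# One hypothetical training step combining the three losses.
model = MultiSourceDANN()
x = torch.randn(32, 1, 28, 28)          # images
y = torch.randint(0, 10, (32,))         # class labels
d = torch.randint(0, 3, (32,))          # source-domain labels
z, cls, dom = model(x)
loss = (F.cross_entropy(cls, y)         # task objective
        + F.cross_entropy(dom, d)       # adversarial source objective
        # Placeholder triplets; real training would mine anchor/positive/
        # negative samples by class label.
        + cosine_triplet_loss(z[0:10], z[10:20], z[20:30]))
loss.backward()

In this sketch a single backward pass serves both objectives: the domain head learns to identify the source, while the reversed gradient pushes the feature extractor toward source-indistinguishable representations.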
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering