Undoing the codebook bias by linear transformation with sparsity and F-norm constraints for image classification

Beijing Key Laboratory of Multimedia and Intelligent Software Technology, College of Metropolitan Transportation, Beijing University of Technology, 100124, China
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing, China

The bag-of-visual-words (BoW) model and its variants have demonstrated their effectiveness for visual applications. The BoW model first extracts local features and then generates a codebook whose elements are viewed as visual words. However, the codebook is dataset-dependent and must be generated anew for each image dataset; moreover, when only a limited number of training images is available, the resulting codebook may not encode images well. This costs substantial computational time and weakens the generalization power of the BoW model. To solve these problems, we propose in this paper to undo the dataset bias by linear codebook transformation in an unsupervised manner. Any point in the local feature space can be represented by a set of linearly independent basis vectors, and we view the codebook as a linear transformation of these basis vectors. In this way, pre-learned codebooks can be transformed to a new dataset using the pseudo-inverse of the transformation matrix. However, this is an under-determined problem that admits many solutions. Besides, not all of the visual words are equally important for the new dataset.
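The codebook-as-linear-transformation idea above can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: the names `B` (codebook), `V` (basis vectors), and `T` (transformation matrix) are assumptions chosen to match the description, with the codebook written as `B = T @ V` so that `T` can be recovered through the pseudo-inverse of the basis matrix.

```python
import numpy as np

# Hypothetical sketch: a pre-learned codebook B (K visual words, each a
# d-dim local feature) is viewed as a linear transformation T of d
# linearly independent basis vectors V, i.e. B = T @ V.
rng = np.random.default_rng(0)
K, d = 8, 4

V = rng.standard_normal((d, d))   # basis of the source feature space
T = rng.standard_normal((K, d))   # transformation: one row per visual word
B = T @ V                         # pre-learned codebook (K x d)

# Recover T from B via the pseudo-inverse of the basis matrix.  In general
# this recovery is under-determined (many T reproduce B), which is why the
# paper adds sparsity and F-norm constraints on top.
T_hat = B @ np.linalg.pinv(V)

# Transform the codebook to a new dataset whose basis is V_new,
# instead of re-clustering a codebook from scratch.
V_new = rng.standard_normal((d, d))
B_new = T_hat @ V_new
```

Here `V` is square and almost surely invertible, so `T_hat` recovers `T` exactly; the interesting (regularized) case in the paper is when the recovery is not unique.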
It would be more effective to select the discriminative visual words for transformation. Specifically, we impose sparsity constraints and penalize the F-norm (Frobenius norm) of the transformation matrix. We propose an alternating optimization algorithm to jointly search for the optimal linear transformation matrices and the encoding parameters. The proposed method needs no labeled images from either the source dataset or the target dataset. Image classification experiments on several image datasets show the effectiveness of the proposed method.
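The alternating scheme described above can be illustrated with a minimal numpy sketch, under assumed ingredients: an L1 (sparsity) penalty on the encoding parameters `C`, a Frobenius-norm penalty on the transformation matrix `T`, and simple gradient steps alternated between the two. This is a generic stand-in for the paper's algorithm, not the authors' exact update rules.

```python
import numpy as np

# Alternating-optimization sketch (illustrative, not the paper's algorithm):
# minimize 0.5*||X - (T V)^T C||_F^2 + lam*||C||_1 + 0.5*gamma*||T||_F^2
# by alternating a soft-thresholded gradient step on the codes C with a
# gradient step on the transformation matrix T.
rng = np.random.default_rng(1)
d, K, n = 6, 10, 40
X = rng.standard_normal((d, n))        # local features of the new dataset
V = rng.standard_normal((d, d))        # basis vectors of the feature space
T = rng.standard_normal((K, d)) * 0.1  # transformation (codebook = T @ V)
C = np.zeros((K, n))                   # sparse encoding parameters

lam, gamma, step = 0.1, 0.1, 1e-3

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the L1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def objective(T, C):
    R = V.T @ T.T @ C - X
    return 0.5 * np.sum(R**2) + lam * np.abs(C).sum() + 0.5 * gamma * np.sum(T**2)

obj0 = objective(T, C)
for _ in range(200):
    D = V.T @ T.T                              # current codebook atoms (d x K)
    R = D @ C - X
    C = soft(C - step * (D.T @ R), step * lam)  # sparse-code update
    R = D @ C - X
    T = T - step * (C @ R.T @ V.T + gamma * T)  # F-norm-regularized T update
obj1 = objective(T, C)
```

With a small enough step size the regularized objective decreases across the alternating updates, and the L1 proximal step keeps many entries of `C` exactly zero, mirroring the selection of a subset of discriminative visual words.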
