
Class-Weighted Convolutional Features for Image Retrieval

28th British Machine Vision Conference (BMVC 2017)

| Albert Jimenez | Xavier Giro-i-Nieto | Jose M. Alvarez |
|:-:|:-:|:-:|

A joint collaboration between:

| logo-gpi | logo-data61 |
|:-:|:-:|
| UPC Image Processing Group | Data61 |

Abstract

Image retrieval in realistic scenarios targets large dynamic datasets of unlabeled images. In these cases, training or fine-tuning a model every time new images are added to the database is neither efficient nor scalable. Convolutional neural networks trained for image classification over large datasets have been proven effective feature extractors for image retrieval. The most successful approaches are based on encoding the activations of convolutional layers, as they convey the image spatial information. In this paper, we go beyond this spatial information and propose a local-aware encoding of convolutional features based on semantic information predicted in the target image. To this end, we obtain the most discriminative regions of an image using Class Activation Maps (CAMs). CAMs are based on the knowledge contained in the network; therefore, our approach has the additional advantage of not requiring external information. In addition, we use CAMs to generate object proposals during an unsupervised re-ranking stage after a first fast search. Our experiments on two publicly available datasets for instance retrieval, Oxford5k and Paris6k, demonstrate the competitiveness of our approach, outperforming the current state of the art when using off-the-shelf models trained on ImageNet.

*Figure: Encoding pipeline.*
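The class-weighted encoding sketched in the abstract can be illustrated in a few lines of NumPy. This is a hedged illustration only, not the repository's actual code: `F` stands for a conv-layer output of shape (K, H, W), `W` for the classifier weights of shape (n_classes, K) as in the original CAM formulation, and the function name is our own.

```python
import numpy as np

def class_weighted_vector(F, W, class_idx):
    """Build one class-weighted descriptor for a single class.

    1. CAM for the class: weighted sum of the K feature maps.
    2. Normalize the CAM to [0, 1] to use it as a spatial weighting.
    3. Spatially weight each feature map by the CAM and sum-pool.
    """
    cam = np.tensordot(W[class_idx], F, axes=1)   # (H, W) activation map
    cam = np.maximum(cam, 0)                      # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                     # scale to [0, 1]
    v = (F * cam[None]).sum(axis=(1, 2))          # (K,) sum-pooled vector
    return v / (np.linalg.norm(v) + 1e-12)        # L2-normalize

# Toy usage with random data (VGG-16-like shapes)
rng = np.random.default_rng(0)
F = rng.standard_normal((512, 14, 14))            # conv feature maps
W = rng.standard_normal((1000, 512))              # classifier weights
v = class_weighted_vector(F, W, class_idx=3)
```

In the full pipeline, one such vector is computed for each of the top predicted classes and the set is aggregated into the final image descriptor.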

Publication

A preprint of this paper is available on arXiv; the final version appears in the BMVC 2017 proceedings.

Please cite with the following BibTeX code:

@InProceedings{Jimenez_2017_BMVC,
author = {Jimenez, Albert and Alvarez, Jose M. and Giro-i-Nieto, Xavier},
title = {Class-Weighted Convolutional Features for Visual Instance Search},
booktitle = {28th British Machine Vision Conference (BMVC)},
month = {September},
year = {2017}
}

You may also want to refer to our publication with the more human-friendly Chicago style:

Albert Jimenez, Jose M. Alvarez, and Xavier Giro-i-Nieto. "Class-Weighted Convolutional Features for Visual Instance Search." In Proceedings of the 28th British Machine Vision Conference (BMVC). 2017.

Slides

Results

Comparison with State of the Art

Comparison with State of the Art

Comparison with State of the Art - QE & RE

Qualitative Results

Qualitative Results of the Search

Code Usage

In this repository we provide the code used in our experiments. The VGG-16 CAM experiments were carried out using Keras running over Theano. The DenseNet and ResNet experiments were carried out using PyTorch.

In the next section we explain how to run the code in Keras+Theano. To run the experiments using PyTorch, the requirements are the same, plus having PyTorch and the torchvision package installed.

Prerequisites

The code was written before Keras 2.0, but it should work with that version as well.

The necessary Python packages are specified in requirements.txt. To install them, run:

 pip install -r requirements.txt

Our experiments have been carried out on these datasets:

Here we provide the weights of the model (paste them into the models folder):

How to run the code?

The first thing to do (important!) is to set the paths to your images and model weights. We provide lists (also modify the paths there, with find and replace) that divide images into vertical and horizontal for faster processing. At the beginning of each script there are some parameters that can be tuned, such as the image preprocessing. We have added an argument parser; at the beginning of each script there is an example of how to run it.

Feature Extraction

Both scripts extract class-weighted vectors. The first one is used for the original datasets, the second for the distractors. You can tune the image preprocessing parameters as well as the number of class-weighted vectors extracted. In "Online Aggregation" the stored vectors follow the ImageNet class order, while in "Offline Aggregation" they are ordered from the most probable to the least probable class (as predicted by the network).

  • A_Oxf_Par_Feat_CAMs_Extraction.py
  • A_Dist_Feat_CAMs_Extraction.py

 A_Oxf_Par_Feat_CAMs_Extraction.py  -d  -a 
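The difference between the two storage orders can be shown with a toy example. This is illustrative only; `vectors` and `probs` are made-up placeholders, not variables from the scripts above:

```python
import numpy as np

# Toy setup: one class-weighted vector per class (5 classes, 4 dims here,
# instead of 1000 ImageNet classes) plus the network's class probabilities.
rng = np.random.default_rng(0)
vectors = rng.standard_normal((5, 4))      # stored in ImageNet class order
probs = np.array([0.1, 0.5, 0.05, 0.3, 0.05])

# "Online Aggregation" storage: rows stay in ImageNet class order.
online_order = vectors

# "Offline Aggregation" storage: rows reordered from the most probable
# class to the least probable one.
offline_order = vectors[np.argsort(-probs)]
```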

Aggregation, Ranking and Evaluation

In both scripts you can choose the dataset you want to evaluate and whether to use query expansion or re-ranking. The first one performs offline aggregation; the second performs aggregation at test time.

  • B_Offline_Eval.py
  • B_Online_Aggregation_Eval.py

 B_Online_Aggregation_Eval.py -d  --nc_q  --pca  --qe  --re  --nc_re 
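The aggregation, ranking, and query-expansion stages can be sketched as follows. This is a hedged NumPy sketch of the general pipeline, not the repository's actual implementation; the function names and the `nc` parameter (number of class-weighted vectors aggregated per image) are our own:

```python
import numpy as np

def aggregate(class_vectors, nc):
    """Sum the top-nc class-weighted vectors and L2-normalize."""
    v = class_vectors[:nc].sum(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def rank(query, database):
    """Rank by dot product; with unit-norm vectors this is cosine similarity."""
    return np.argsort(-(database @ query))

def query_expansion(query, database, ranking, top_k=5):
    """Average the query with its top-ranked results and re-query."""
    expanded = query + database[ranking[:top_k]].sum(axis=0)
    expanded /= np.linalg.norm(expanded) + 1e-12
    return rank(expanded, database)

# Toy usage: 100 images, 8 class-weighted vectors of dimension 16 each
rng = np.random.default_rng(0)
db = rng.standard_normal((100, 8, 16))
database = np.stack([aggregate(x, nc=4) for x in db])
q = aggregate(db[0], nc=4)            # query with image 0's own vectors
order = rank(q, database)             # initial ranking
order_qe = query_expansion(q, database, order)
```

The re-ranking stage with CAM-based object proposals is more involved and is omitted from this sketch.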

Acknowledgements

We would like to especially thank Albert Gil and Josep Pujal from our technical support team at the Image Processing Group at UPC.

| Albert Gil | Josep Pujal |
|:-:|:-:|

Contact

If you have any general question about our work or code that may be of interest to other researchers, please use the public issues section of this GitHub repo. Alternatively, drop us an e-mail at [email protected].
