
Compact Bilinear Pooling in TensorFlow


This repository contains the TensorFlow implementation of Compact Bilinear Pooling.

Details of this operation can be seen in `compact_bilinear_pooling_layer` in `compact_bilinear_pooling.py`:

```python
def compact_bilinear_pooling_layer(bottom1, bottom2, output_dim, sum_pool=True,
                                   rand_h_1=None, rand_s_1=None, rand_h_2=None,
                                   rand_s_2=None, seed_h_1=1, seed_s_1=3,
                                   seed_h_2=5, seed_s_2=7, sequential=True,
                                   compute_size=128):
    """
    Compute compact bilinear pooling over two bottom inputs.

    Reference:
        Yang Gao, et al. "Compact Bilinear Pooling." in Proceedings of IEEE
        Conference on Computer Vision and Pattern Recognition (2016).
        Akira Fukui, et al. "Multimodal Compact Bilinear Pooling for Visual
        Question Answering and Visual Grounding." arXiv preprint
        arXiv:1606.01847 (2016).

    Args:
        bottom1: 1st input, 4D Tensor of shape
            [batch_size, height, width, input_dim1].
        bottom2: 2nd input, 4D Tensor of shape
            [batch_size, height, width, input_dim2].
        output_dim: output dimension for compact bilinear pooling.
        sum_pool: (Optional) If True, sum the output along height and width
            dimensions and return output shape [batch_size, output_dim].
            Otherwise return [batch_size, height, width, output_dim].
            Default: True.
        rand_h_1: (Optional) a 1D numpy array containing indices in interval
            `[0, output_dim)`. Automatically generated from `seed_h_1` if None.
        rand_s_1: (Optional) a 1D numpy array of 1 and -1, having the same
            shape as `rand_h_1`. Automatically generated from `seed_s_1`
            if None.
        rand_h_2: (Optional) a 1D numpy array containing indices in interval
            `[0, output_dim)`. Automatically generated from `seed_h_2` if None.
        rand_s_2: (Optional) a 1D numpy array of 1 and -1, having the same
            shape as `rand_h_2`. Automatically generated from `seed_s_2`
            if None.
        sequential: (Optional) if True, use the sequential FFT and IFFT
            instead of tf.batch_fft or tf.batch_ifft to avoid an
            out-of-memory (OOM) error. Note: the sequential FFT and IFFT
            are only available on GPU. Default: True.
        compute_size: (Optional) the maximum size of sub-batch to be forwarded
            through FFT or IFFT at one time. A large compute_size may be
            faster but can cause OOM and FFT failure. This parameter is only
            effective when sequential == True. Default: 128.

    Returns:
        Compact bilinear pooled results of shape [batch_size, output_dim] or
        [batch_size, height, width, output_dim], depending on `sum_pool`.
    """
```
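To illustrate what this layer computes, here is a minimal NumPy sketch of the underlying Tensor Sketch algorithm (a count sketch of each input, followed by convolution in the FFT domain) for a single feature vector per input. The names `count_sketch` and `compact_bilinear` below are illustrative, not part of this repository's API; the actual op applies this per spatial location on 4D tensors, and the repository's random-parameter generation scheme may differ from the one sketched here:

```python
import numpy as np

def count_sketch(x, rand_h, rand_s, output_dim):
    # Count sketch projection: y[rand_h[i]] += rand_s[i] * x[i]
    y = np.zeros(output_dim)
    np.add.at(y, rand_h, rand_s * x)
    return y

def compact_bilinear(x1, x2, rand_h_1, rand_s_1, rand_h_2, rand_s_2, output_dim):
    # Circular convolution of the two count sketches, done in the FFT domain;
    # this approximates the (flattened) outer product of x1 and x2.
    fft1 = np.fft.fft(count_sketch(x1, rand_h_1, rand_s_1, output_dim))
    fft2 = np.fft.fft(count_sketch(x2, rand_h_2, rand_s_2, output_dim))
    return np.real(np.fft.ifft(fft1 * fft2))

# Random (but fixed) projection parameters, following the docstring's
# contract: indices in [0, output_dim) and signs in {-1, +1}.
input_dim1, input_dim2, output_dim = 512, 512, 64
rng = np.random.RandomState(1)
rand_h_1 = rng.randint(output_dim, size=input_dim1)
rand_s_1 = 2 * rng.randint(2, size=input_dim1) - 1
rand_h_2 = rng.randint(output_dim, size=input_dim2)
rand_s_2 = 2 * rng.randint(2, size=input_dim2) - 1

x1 = rng.randn(input_dim1)
x2 = rng.randn(input_dim2)
cbp = compact_bilinear(x1, x2, rand_h_1, rand_s_1, rand_h_2, rand_s_2, output_dim)
```

One sanity check on this sketch: the sum over a circular convolution equals the product of the operands' sums, so `cbp.sum()` should match `(rand_s_1 * x1).sum() * (rand_s_2 * x2).sum()` up to floating-point error.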

To test whether it works correctly on your system, run:

```shell
python compact_bilinear_pooling_test.py
```

The tests pass if no error occurs when running the above command.

Note that `sequential=True` (the default) only supports GPU computation, with no CPU kernel available.

The `sequential_fft/build/sequential_batch_fft.so` is built against TensorFlow 1.12.0 with CUDA 8.0 and g++ 5.4.0, which should be compatible with the official build of TensorFlow 1.12.0 on Ubuntu/Linux 64-bit.

If you set `sequential=True` (the default), you will need this `sequential_batch_fft.so` to be compatible with your TensorFlow installation.

If you installed TensorFlow from source, or want to use a version of TensorFlow other than 1.12.0 that may have been built with a different compiler and a different CUDA version, you may need to rebuild `sequential_batch_fft.so` with `compile.sh` in `sequential_fft/`. You can check the compiler version used to build your installed TensorFlow with:

```python
import tensorflow as tf; print(tf.__compiler_version__)
```
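A rebuild might look like the following; this is a sketch assuming you run it from the repository root and that `compile.sh` picks up your local CUDA and TensorFlow paths (its exact flags are repo-specific):

```shell
# Check which compiler built your installed TensorFlow wheel,
# so you can match it with your local g++
python -c "import tensorflow as tf; print(tf.__compiler_version__)"

# Rebuild the sequential FFT op against that toolchain
cd sequential_fft
./compile.sh
```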

References

Yang Gao, et al. "Compact Bilinear Pooling." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016).

Akira Fukui, et al. "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding." arXiv preprint arXiv:1606.01847 (2016).