AOGNet


A new version of the code and pretrained models is available at https://github.com/iVMCL/AOGNets, and more code and models will be released there.

Learning Deep Compositional Grammatical Architectures for Visual Recognition

This repository contains the code (in MXNet) for our CVPR 2019 paper "Learning Deep Compositional Grammatical Architectures for Visual Recognition" by Xilai Li, Tianfu Wu*, Xi Song. (* Corresponding Author)

Citation

If you find our project useful in your research, please consider citing:

@article{li2017aognet,
  title={Learning Deep Compositional Grammatical Architectures for Visual Recognition},
  author={Li, Xilai and Wu, Tianfu and Song, Xi and Krim, Hamid},
  journal={arXiv preprint arXiv:1711.05847},
  year={2017}
}

Contents

  1. Introduction
  2. Contacts

Introduction

An AOGNet consists of a number of stages, each composed of a number of AOG building blocks. An AOG building block is designed based on a principled AND-OR grammar and is represented by a hierarchical and compositional AND-OR graph. There are three types of nodes:

  1. An AND-node explores composition; its input is computed by concatenating the features of its child nodes.
  2. An OR-node represents alternative ways of composition in the spirit of exploitation; its input is the element-wise sum of the features of its child nodes.
  3. A Terminal-node takes as input a channel-wise slice of the input feature map of the AOG building block.

AOGNets aim to harness the best of both worlds (grammar models and deep neural networks) in representation learning with end-to-end training.
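For concreteness, here is a minimal sketch of these three node operations using plain MXNet NDArray ops. This is not the released implementation; the input shape, the two-way channel split, and the variable names are illustrative assumptions.

import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 64, 32, 32))  # block input, NCHW layout (assumed shape)

# Terminal-nodes: each takes a channel-wise slice of the block's input feature map.
t0 = mx.nd.slice_axis(x, axis=1, begin=0, end=32)
t1 = mx.nd.slice_axis(x, axis=1, begin=32, end=64)

# AND-node: composes its child nodes by concatenating their features along channels.
and_in = mx.nd.concat(t0, t1, dim=1)   # (1, 64, 32, 32)

# OR-node: fuses alternative compositions by element-wise summation; here the
# AND-node output and the raw block input stand in as its two children.
or_in = and_in + x                     # (1, 64, 32, 32)

print(or_in.shape)

In the actual building block these node inputs would each be followed by learned transformations (e.g. convolutions); the sketch only shows how features flow between the three node types.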

Contacts

email: [email protected]

Any discussion and contributions are welcome!
