Fully Differentiable Deep Neural Decision Forest


This repository contains a simple modification of the deep neural decision forest [Kontschieder et al.] in TensorFlow. The modification allows joint optimization of the decision nodes and leaf nodes, which should, in theory, speed up training (not verified empirically).


Deep Neural Decision Forests, ICCV 2015, proposed an interesting way to incorporate a decision forest into a neural network.

The authors proposed modeling the terminal (leaf) nodes of the decision forest as static probability distributions and the routing probabilities at the decision nodes as sigmoid functions of the network outputs. The final loss is defined as the usual cross entropy between the ground truth and the weighted average of the terminal probabilities, with the routing probabilities as weights.
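
Concretely, the prediction and loss can be written as follows. This is a minimal sketch for a single depth-2 tree (3 decision nodes, 4 leaves); the function and variable names, the toy tree size, and the TF2-style API are illustrative assumptions rather than the repository's actual code.

```python
import tensorflow as tf

def forest_cross_entropy(decision_logits, leaf_probs, labels, n_class):
    """decision_logits: [batch, 3] base-network outputs, one per decision node.
    leaf_probs:      [4, n_class] class distribution stored at each leaf.
    labels:          [batch] integer ground-truth labels."""
    d = tf.nn.sigmoid(decision_logits)                # P(route left) at each node
    d0, d1, d2 = d[:, 0], d[:, 1], d[:, 2]
    # Probability of each sample reaching each leaf: the product of the
    # routing decisions along the root-to-leaf path.
    mu = tf.stack([d0 * d1,
                   d0 * (1.0 - d1),
                   (1.0 - d0) * d2,
                   (1.0 - d0) * (1.0 - d2)], axis=1)  # [batch, 4]
    # Final prediction: routing-probability-weighted average of leaf distributions.
    p = tf.matmul(mu, leaf_probs)                     # [batch, n_class]
    # Cross entropy against the ground truth.
    y = tf.one_hot(labels, depth=n_class)
    return -tf.reduce_mean(tf.reduce_sum(y * tf.math.log(p + 1e-8), axis=1))
```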

As there are two sets of trainable parameters, the authors used alternating optimization. They first fixed the terminal node probabilities and trained the base network (routing probabilities), then fixed the network and optimized the terminal nodes. Such alternating optimization is usually slower than joint optimization, since the variables held fixed at each stage slow down the optimization of the others.

However, if we parametrize the terminal nodes using a parametric probability distribution, we can train the terminal and decision nodes jointly, which, in theory, speeds up convergence.

This code is just a proof-of-concept that

  1. One can train both decision nodes and leaf nodes $\pi$ jointly using a parametric formulation of the leaf (terminal) nodes.

  2. It is easy to implement such an idea in a symbolic math library.


The leaf node probability $p \in \Delta^{n-1}$ can be parametrized by an $n$-dimensional vector $w_{leaf}$: there exists $w_{leaf}$ such that $p = \mathrm{softmax}(w_{leaf})$. Thus, we can also compute the gradient of the loss $L$ w.r.t. $w_{leaf}$ and optimize the terminal nodes jointly with the decision nodes.
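
As a sketch of what the joint update could look like with this parametrization (reusing the hypothetical `forest_cross_entropy` above; `w_leaf`, `network`, and `train_step` are illustrative names, not the repository's code):

```python
import tensorflow as tf

n_class = 10
w_leaf = tf.Variable(tf.zeros([4, n_class]))            # free leaf parameters
optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(network, images, labels):
    with tf.GradientTape() as tape:
        decision_logits = network(images)               # [batch, 3]
        leaf_probs = tf.nn.softmax(w_leaf)              # p = softmax(w_leaf), a point in the simplex
        loss = forest_cross_entropy(decision_logits, leaf_probs, labels, n_class)
    # Gradients flow to the base network *and* to w_leaf in the same step,
    # so decision nodes and leaf nodes are optimized jointly.
    variables = network.trainable_variables + [w_leaf]
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss
```

A single optimizer updating both sets of variables replaces the alternating scheme described above.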


I used a simple network (3 convolutional + 2 fully connected layers) for this experiment. On MNIST, it reaches 99.1% accuracy after 10 epochs.
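
The exact layer widths are not stated, so this Keras sketch only assumes plausible sizes for a 3-conv + 2-fc MNIST network whose last layer emits one logit per decision node; such a model could be plugged in as the `network` in the `train_step` sketch above.

```python
import tensorflow as tf

def make_base_network(n_decision_nodes):
    # Assumed layer sizes; only the 3-conv + 2-fc structure comes from the text.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(n_decision_nodes),        # decision-node logits
    ])
```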


SDL Reading Group Slides


[Kontschieder et al.] Deep Neural Decision Forests, ICCV 2015
