
tntorch - Tensor Network Learning with PyTorch

Read the Docs site: http://tntorch.readthedocs.io/

Welcome to tntorch, a PyTorch-powered modeling and learning library using tensor networks. Such networks are unique in that they use multilinear neural units (instead of non-linear activation units). Features include:

  • Basic and fancy indexing of tensors, broadcasting, assignment, etc.
  • Tensor decomposition and reconstruction
  • Element-wise and tensor-tensor arithmetic (see the sketch after this list)
  • Building tensors from black-box functions using cross-approximation
  • Finding global maxima and minima from tensors
  • Statistics and sensitivity analysis
  • Optimization using autodifferentiation
  • Misc. operations on tensors: stacking, unfolding, sampling, differentiation, etc.
  • Batch operations (work in progress)
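
A minimal sketch of a few of these operations (shapes and ranks are illustrative; only the tn.randn, indexing, and .torch() interfaces shown later in this README are assumed):

import tntorch as tn

a = tn.randn(16, 16, 16, ranks_tt=3)  # two random compressed tensors
b = tn.randn(16, 16, 16, ranks_tt=2)
c = a + b               # element-wise arithmetic stays in the compressed format
d = c[:, 4, :]          # basic indexing, as with a NumPy array
print(d.torch().shape)  # decompress: torch.Size([16, 16])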

Available tensor formats include:

  • CANDECOMP/PARAFAC (CP)
  • Tucker
  • Tensor train (TT)
  • Hybrids such as CP-Tucker and TT-Tucker

For example, a 4D tensor (i.e. a real function that can take I1 × I2 × I3 × I4 possible values) can be represented as a network in either the TT or the TT-Tucker format (see the network diagrams in the project documentation).
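
As a sketch of how a format is chosen at construction time (assuming tn.randn accepts ranks_cp and ranks_tucker keyword arguments analogous to the ranks_tt argument used below; treat these names as assumptions, not confirmed API):

import tntorch as tn

t_cp = tn.randn(32, 32, 32, 32, ranks_cp=5)                         # CP
t_tucker = tn.randn(32, 32, 32, 32, ranks_tucker=5)                 # Tucker
t_tt = tn.randn(32, 32, 32, 32, ranks_tt=5)                         # TT
t_tt_tucker = tn.randn(32, 32, 32, 32, ranks_tt=5, ranks_tucker=3)  # TT-Tucker hybrid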

In tntorch, all tensor decompositions share the same interface. You can manipulate them transparently, as if they were plain NumPy arrays or PyTorch tensors:

> import tntorch as tn
> t = tn.randn(32, 32, 32, 32, ranks_tt=5)  # Random 4D TT tensor of shape 32 x 32 x 32 x 32 and TT-rank 5
> print(t)

4D TT tensor:

 32  32  32  32
  |   |   |   |
 (0) (1) (2) (3)
 / \ / \ / \ / \
1   5   5   5   1

> print(tn.mean(t))

tensor(8.0388)

> print(tn.norm(t))

tensor(9632.3726)

Decompressing tensors is easy:

> print(t.torch().shape)
torch.Size([32, 32, 32, 32])
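
Compression in the other direction (turning an existing dense tensor into a compressed one) can be sketched as follows, assuming the tn.Tensor constructor accepts a dense PyTorch tensor together with target ranks:

import torch
import tntorch as tn

x = torch.randn(32, 32, 32)    # a dense PyTorch tensor
t = tn.Tensor(x, ranks_tt=10)  # assumption: compress x into TT format with rank 10
print(torch.dist(x, t.torch()) / torch.norm(x))  # relative reconstruction error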

Thanks to PyTorch's automatic differentiation, you can easily define all sorts of loss functions on tensors:

import torch

def loss(t):
    return torch.norm(t[:, 0, 10:, [3, 4]].torch())  # NumPy-like "fancy indexing" for arrays

Most importantly, loss functions can be defined on compressed tensors as well:

def loss(t):
    return tn.norm(t[:3, :3, :3, :3] - t[-3:, -3:, -3:, -3:])
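
Because the cores of a compressed tensor are ordinary PyTorch tensors, a loss like the one above can be minimized with any standard PyTorch optimizer. A minimal sketch, assuming tn.randn accepts a requires_grad flag and that the cores are exposed as a list t.cores:

import torch
import tntorch as tn

t = tn.randn(32, 32, 32, 32, ranks_tt=5, requires_grad=True)  # assumption: requires_grad kwarg
optimizer = torch.optim.Adam(t.cores, lr=1e-2)  # assumption: t.cores is a list of core tensors
for step in range(100):
    optimizer.zero_grad()
    loss(t).backward()  # the loss defined above
    optimizer.step()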

Check out the introductory notebook for all the details on the basics.

Tutorial Notebooks

The full set of tutorial notebooks is available in the repository and through the Read the Docs site linked above.

Installation

You can install tntorch using pip:

pip install tntorch

Alternatively, you can install from source:

git clone https://github.com/rballester/tntorch.git
cd tntorch
pip install .

For functions that use cross-approximation, the optional package maxvolpy is required. It can be installed via:

pip install maxvolpy
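
With maxvolpy installed, a cross-approximation call can be sketched as follows (assuming tn.cross's function/domain calling convention, in which the target function receives one 1D tensor of sample coordinates per input dimension; treat this signature as an assumption):

import torch
import tntorch as tn

domain = [torch.linspace(0, 1, 64) for _ in range(3)]  # a 64 x 64 x 64 grid

def f(x, y, z):  # called with batches of coordinates, one 1D tensor per dimension
    return torch.exp(-(x**2 + y**2 + z**2))

t = tn.cross(function=f, domain=domain)  # assumption: this calling convention
print(t)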

Testing

We use pytest. Simply run:

cd tests/
pytest

Contributing

Pull requests are welcome!

Besides using the issue tracker, also feel free to contact me at [email protected].
