mattjj · 667 Stars · 78 Forks · MIT License · 13 Commits · 5 Opened issues
# Autodidact: a pedagogical implementation of Autograd

This is a tutorial implementation based on the full version of Autograd.

Example use:

```
>>> import autograd.numpy as np  # Thinly-wrapped numpy
>>> from autograd import grad    # The only autograd function you need
>>>
>>> def tanh(x):                 # Define a function
...     y = np.exp(-2.0 * x)
...     return (1.0 - y) / (1.0 + y)
...
>>> grad(tanh)(1.0)              # Evaluate the gradient at x = 1.0
0.41997434161402603
>>> (tanh(1.0001) - tanh(0.9999)) / 0.0002  # Compare to finite differences
0.41997434264973155
```
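Since the point of this repo is to show how autograd works, a minimal sketch of the underlying idea may help: reverse-mode differentiation records each operation's local derivatives as the function runs, then applies the chain rule backwards, summing over all paths from output to input. This toy is this tutorial's own illustration, not autodidact's actual code; the `Var`, `exp`, and `grad` names here are the sketch's, not autodidact's API:

```python
import math

class Var:
    """A scalar that records the operations applied to it (a tiny tape)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # tuples of (parent Var, d(self)/d(parent))
        self.grad = 0.0

    @staticmethod
    def _lift(x):
        # Wrap plain numbers so constants can mix with Vars.
        return x if isinstance(x, Var) else Var(x)

    def __add__(self, o):
        o = Var._lift(o)
        return Var(self.value + o.value, ((self, 1.0), (o, 1.0)))
    __radd__ = __add__

    def __mul__(self, o):
        o = Var._lift(o)
        return Var(self.value * o.value, ((self, o.value), (o, self.value)))
    __rmul__ = __mul__

    def __sub__(self, o):
        o = Var._lift(o)
        return Var(self.value - o.value, ((self, 1.0), (o, -1.0)))

    def __rsub__(self, o):
        return Var._lift(o) - self

    def __truediv__(self, o):
        o = Var._lift(o)
        return Var(self.value / o.value,
                   ((self, 1.0 / o.value), (o, -self.value / o.value ** 2)))

def exp(x):
    x = Var._lift(x)
    v = math.exp(x.value)
    return Var(v, ((x, v),))         # d(e^x)/dx = e^x

def grad(f):
    """Return a function computing df/dx at a scalar x (reverse mode)."""
    def gradfun(x):
        inp = Var(x)
        out = f(inp)
        def backprop(node, g):       # chain rule: propagate along every path
            node.grad += g
            for parent, local in node.parents:
                backprop(parent, g * local)
        backprop(out, 1.0)
        return inp.grad
    return gradfun

def tanh(x):
    y = exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

print(grad(tanh)(1.0))  # close to 1 - tanh(1)**2 = sech(1)**2 ≈ 0.41997
```

The real implementation in this repo traces numpy operations instead of overloaded scalars and does a proper topological reverse pass, but the core bookkeeping is the same.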

We can continue to differentiate as many times as we like, and use numpy's vectorization of scalar-valued functions across many different input values:

```
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-7, 7, 200)
>>> plt.plot(x, tanh(x),
...          x, grad(tanh)(x),                                # first  derivative