Chapter 22 · 14 min
Appendix · Backprop by hand
Derive backprop on a small graph — every gradient written out. The math behind every loss.backward() you've ever called.
You called loss.backward() in chapter 5 and again in every chapter after. It quietly populated .grad on every parameter in the model. This appendix unpacks what that one line does, by deriving the gradients of a tiny network by hand and checking the result against PyTorch.
Once you have worked through it by hand, every loss.backward() afterward feels like a function you understand, not magic.
1. The network
The smallest interesting network: 2 inputs, 1 hidden unit, 1 output. Sigmoid activation on the hidden unit, identity on the output. Mean-squared error against a single target.
Three parameters: w₁, w₂, v. Four named intermediates: z = w₁x₁ + w₂x₂, h = σ(z), ŷ = v·h, and L = (ŷ − y)². Two inputs x₁, x₂ and one target y.
Concrete values to keep things calm:
| | value |
|---|---|
| inputs x₁, x₂ | 1.0, 0.5 |
| weights w₁, w₂ | 0.4, 0.6 |
| output weight v | 0.8 |
| target y | 1.0 |
2. Forward pass by hand
Plug the numbers in:

z = w₁x₁ + w₂x₂ = 0.4·1.0 + 0.6·0.5 = 0.7

h = σ(z) = σ(0.7) ≈ 0.6682

ŷ = v·h = 0.8·0.6682 ≈ 0.5346

L = (ŷ − y)² = (0.5346 − 1.0)² ≈ 0.2166

That is one full forward pass. Three multiplications, one addition, one sigmoid, one subtraction, one square. Seven arithmetic operations to turn two inputs into one number.
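If you want to confirm those numbers before deriving anything, a few lines of plain Python reproduce them. This is a scratch snippet, not part of the book's code:

import math

# concrete values from the table in section 1
x1, x2 = 1.0, 0.5
w1, w2 = 0.4, 0.6
v, y = 0.8, 1.0

z = w1 * x1 + w2 * x2        # 0.7
h = 1 / (1 + math.exp(-z))   # sigmoid(0.7) ≈ 0.6682
y_hat = v * h                # ≈ 0.5346
loss = (y_hat - y) ** 2      # ≈ 0.2166
print(f"z={z:.4f} h={h:.4f} y_hat={y_hat:.4f} loss={loss:.4f}")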
3. Backward by chain rule
The gradients we want are ∂L/∂w₁, ∂L/∂w₂, and ∂L/∂v — one number per parameter. The chain rule walks the computation graph backwards.
Start with the loss and propagate sensitivities outward:

∂L/∂ŷ = 2(ŷ − y) = 2(0.5346 − 1.0) ≈ −0.9309
That number is the "loss's sensitivity to the output" — how much the loss would change for a unit change in ŷ. Now push it through v:

∂L/∂v = ∂L/∂ŷ · h = (−0.9309)·0.6682 ≈ −0.6220
The output unit's gradient with respect to v is just h, because ŷ = v·h. Multiply by the upstream sensitivity and you have ∂L/∂v. Done for v.
For w₁ and w₂ the path is longer. First push through h:

∂L/∂h = ∂L/∂ŷ · v = (−0.9309)·0.8 ≈ −0.7447
Then through z. The sigmoid's derivative is σ(z)·(1 − σ(z)), which we already have:

∂L/∂z = ∂L/∂h · σ(z)·(1 − σ(z)) = (−0.7447)·0.6682·0.3318 ≈ −0.1651
Finally through w₁ and w₂. Since z = w₁ x₁ + w₂ x₂, the partials are just the inputs:

∂L/∂w₁ = ∂L/∂z · x₁ = (−0.1651)·1.0 ≈ −0.1651

∂L/∂w₂ = ∂L/∂z · x₂ = (−0.1651)·0.5 ≈ −0.0825
Three numbers: ∂L/∂v ≈ -0.6220, ∂L/∂w₁ ≈ -0.1651, ∂L/∂w₂ ≈ -0.0825. Each tells the optimizer how much the loss changes for a unit change in one weight, and therefore which way to move that weight to lower the loss.
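You can also cross-check the algebra numerically with finite differences: nudge one weight by a tiny ε and see how much the loss moves. This is a scratch snippet of my own, not the book's code, but it lands on the same three numbers:

import math

def loss_fn(w1, w2, v, x1=1.0, x2=0.5, y=1.0):
    # the same forward pass, packaged as a function of the three weights
    h = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2)))
    return (v * h - y) ** 2

eps = 1e-6
base = (0.4, 0.6, 0.8)
for i, name in enumerate(["w1", "w2", "v"]):
    bumped = list(base)
    bumped[i] += eps
    approx = (loss_fn(*bumped) - loss_fn(*base)) / eps
    print(f"dL/d{name} ≈ {approx:.4f}")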
Now write the chain rule yourself. The cell pre-fills the forward pass; you fill in the four backward lines and watch the gradients land:
Code · JavaScript
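If you are reading this outside the interactive page (the cell itself runs JavaScript), here is a plain-Python sketch of what the filled-in version might look like; skip it if you want to try the exercise first:

import math

# forward pass (the part the cell pre-fills)
x1, x2, w1, w2, v, y = 1.0, 0.5, 0.4, 0.6, 0.8, 1.0
z = w1 * x1 + w2 * x2
h = 1 / (1 + math.exp(-z))
y_hat = v * h
loss = (y_hat - y) ** 2

# the four backward lines, one chain-rule step each
dL_dyhat = 2 * (y_hat - y)               # ∂L/∂ŷ
dL_dv = dL_dyhat * h                     # ∂L/∂v ≈ -0.6220
dL_dz = dL_dyhat * v * h * (1 - h)       # ∂L/∂z ≈ -0.1651
dL_dw1, dL_dw2 = dL_dz * x1, dL_dz * x2  # ∂L/∂w₁, ∂L/∂w₂

print(f"dL/dw1={dL_dw1:.4f} dL/dw2={dL_dw2:.4f} dL/dv={dL_dv:.4f}")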
4. Verify against PyTorch
Save this as scripts/check_backprop.py (or just paste into a Python REPL):
"""check_backprop.py — verify hand-derived gradients match PyTorch."""
import math
import torch
# inputs and target
x = torch.tensor([1.0, 0.5])
y = torch.tensor(1.0)
# parameters (require grad so PyTorch tracks them)
w = torch.tensor([0.4, 0.6], requires_grad=True)
v = torch.tensor(0.8, requires_grad=True)
# forward
z = (w * x).sum()
h = torch.sigmoid(z)
y_hat = v * h
loss = (y_hat - y) ** 2
# backward
loss.backward()
# results
print(f"forward: z={z.item():.4f} h={h.item():.4f} y_hat={y_hat.item():.4f} loss={loss.item():.4f}")
print(f"grads: w1={w.grad[0].item():.4f} w2={w.grad[1].item():.4f} v={v.grad.item():.4f}")
# expected from the hand derivation
expected = {"w1": -0.1651, "w2": -0.0825, "v": -0.6220}
assert math.isclose(w.grad[0].item(), expected["w1"], abs_tol=1e-3)
assert math.isclose(w.grad[1].item(), expected["w2"], abs_tol=1e-3)
assert math.isclose(v.grad.item(), expected["v"], abs_tol=1e-3)
print("✓ hand-derived gradients match PyTorch")Run it. Output:
forward: z=0.7000 h=0.6682 y_hat=0.5346 loss=0.2166
grads: w1=-0.1651 w2=-0.0825 v=-0.6220
✓ hand-derived gradients match PyTorch

The numbers match to four decimal places. That is what loss.backward() did, for every parameter of every model in the rest of the book. The mechanism is the same; only the graph is bigger.
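To see that claim on a slightly bigger graph, here is a throwaway illustration (not code from any chapter): build a small two-layer model, call loss.backward() once, and confirm that every parameter picked up a .grad of matching shape:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid(), nn.Linear(8, 1))
x = torch.randn(16, 4)
y = torch.randn(16, 1)

loss = ((model(x) - y) ** 2).mean()
loss.backward()

# one gradient tensor per parameter, same shape as the parameter itself
for name, p in model.named_parameters():
    print(f"{name:15s} param {tuple(p.shape)} grad {tuple(p.grad.shape)}")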
5. Why this generalizes
A real model has millions to billions of parameters. The hand derivation does not scale. The chain rule does:
- The computation graph is built automatically by PyTorch as you compute the forward pass. Every operation records its inputs and how to compute its local partial derivative.
- The backward pass walks the graph from loss to every parameter, multiplying local partials. Each parameter gets one number, written into .grad.
- The cost is the same order as the forward pass — roughly 2-3× as expensive, not exponentially worse. This is the entire reason modern deep learning is feasible.
The local partials are mechanical. For every operation y = f(x), PyTorch knows dy/dx. Sigmoid, multiplication, addition, sum, softmax — each has a one-line backward formula. The chain rule glues them together.
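To make "one-line backward formula" concrete, here is a minimal sketch of a custom autograd operation using torch.autograd.Function (my example, not the book's): the forward squares its input, and the backward multiplies the upstream gradient by the local partial 2x.

import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # remember the input; the backward formula needs it
        ctx.save_for_backward(x)
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        # local partial d(x²)/dx = 2x, glued to the upstream gradient by the chain rule
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x

x = torch.tensor(3.0, requires_grad=True)
Square.apply(x).backward()
print(x.grad)  # tensor(6.)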
That is the whole story. When chapter 8's module calls loss.backward(), the graph being walked has hundreds of operations and tens of thousands of parameters, but every individual edge is one of the simple rules from this appendix.
Recap
- loss.backward() walks a computation graph backward, applying the chain rule one edge at a time.
- Each operation contributes a local partial derivative. The forward pass remembers what was computed; the backward pass uses that record.
- The hand derivation matches PyTorch to four decimals on a 3-parameter network. The same mechanism scales to billion-parameter models without changing.
- One number per parameter comes out, written into .grad. The optimizer reads those numbers and decides how to move each weight.
Going further
- 3blue1brown's "Backpropagation calculus" — the same derivation with animated graphs.
- Andrej Karpathy's "micrograd" — a 150-line Python autodiff library that implements the same graph + chain rule mechanism. Worth reading top to bottom once.
- PyTorch's autograd mechanics — the official explanation of how the graph is built and traversed in production.