Lesson 6 of 15
Layer Backpropagation
Gradients Through One Layer
Consider a single neuron with sigmoid activation $\sigma$ (here $\sigma$ denotes the sigmoid function, not a standard deviation) and squared-error loss (with the conventional $\tfrac{1}{2}$ factor):

$$z = w x + b, \qquad a = \sigma(z), \qquad L = \tfrac{1}{2}(a - y)^2$$

To update $w$ and $b$, we need $\frac{\partial L}{\partial w}$ and $\frac{\partial L}{\partial b}$.
Applying the Chain Rule
The chain rule splits the gradient into three factors:

$$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial a} \cdot \frac{\partial a}{\partial z} \cdot \frac{\partial z}{\partial w}$$

Each factor:

$$\frac{\partial L}{\partial a} = a - y, \qquad \frac{\partial a}{\partial z} = a(1 - a), \qquad \frac{\partial z}{\partial w} = x$$

Define the error signal $\delta = \frac{\partial L}{\partial z} = (a - y)\,a(1 - a)$. Then:

$$\frac{\partial L}{\partial w} = \delta \cdot x, \qquad \frac{\partial L}{\partial b} = \delta$$
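The derivation above can be checked numerically on a scalar example. The values of `w`, `b`, `x`, and `y` below are arbitrary illustrative choices, assuming the $\tfrac{1}{2}(a-y)^2$ loss convention:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative scalar values (not from the lesson)
w, b, x, y = 0.5, 0.1, 2.0, 1.0

z = w * x + b                    # pre-activation
a = sigmoid(z)                   # activation
delta = (a - y) * a * (1 - a)    # error signal: delta = dL/dz
dL_dw = delta * x                # dL/dw = delta * x
dL_db = delta                    # dL/db = delta
```

A finite-difference check on the loss confirms these analytic gradients match the numerical ones to high precision.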
Your Task
Implement layer_backward(inputs, weights, bias, target) that:
- Performs the forward pass to compute $z$ and $a$
- Computes the error signal $\delta$
- Returns (dw, db), where dw[i] = delta * inputs[i] and db = delta
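One possible sketch of the exercise, using plain Python lists (no NumPy) and the $\tfrac{1}{2}(a-\text{target})^2$ loss convention from the derivation above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_backward(inputs, weights, bias, target):
    # Forward pass: z = w . x + b, then a = sigmoid(z)
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    a = sigmoid(z)
    # Error signal: delta = dL/dz = (a - target) * a * (1 - a)
    delta = (a - target) * a * (1 - a)
    # Gradients: dL/dw[i] = delta * inputs[i], dL/db = delta
    dw = [delta * x for x in inputs]
    db = delta
    return dw, db
```

Note that each weight's gradient is just the shared error signal scaled by that weight's input, so the backward pass costs no more than the forward pass.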