Lesson 1 of 15
The Neuron
From Perceptron to Neuron
The perceptron you built in Machine Learning was a binary threshold unit. Real neural networks use a smoother model: the artificial neuron.
A neuron computes a weighted sum of its inputs plus a bias, then passes the result through an activation function:

z = w · x + b
a = f(z)

- x — input vector (features)
- w — weight vector (learned parameters)
- b — bias (a learnable scalar offset)
- f — activation function (next lesson)
- z — pre-activation (the weighted sum)
- a — activation (the neuron's output)
The bias lets the neuron fire even when all inputs are zero, shifting the activation function left or right.
Why Weighted Sums?
Each weight wᵢ controls how strongly input xᵢ influences the output. A large positive weight means "this input strongly activates me". A large negative weight means "this input suppresses me". The bias is a free parameter that controls the threshold.
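The weighted sum can be sketched in plain Python. This is just an illustration of the computation (the helper name weighted_sum and the numbers are made up for this example, not part of the lesson):

```python
def weighted_sum(inputs, weights, bias):
    """Return the pre-activation z = w · x + b."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# A positive weight amplifies its input; a negative weight suppresses it.
z = weighted_sum(inputs=[1.0, 2.0], weights=[0.5, -1.0], bias=0.1)
# 0.5*1.0 + (-1.0)*2.0 + 0.1 = -1.4

# With all-zero inputs, only the bias remains: the neuron can still "fire".
z0 = weighted_sum(inputs=[0.0, 0.0], weights=[3.0, 4.0], bias=0.7)
# z0 == 0.7
```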
Your Task
Implement neuron(inputs, weights, bias) that returns the pre-activation value z = w · x + b.
(We will apply the activation function in the next lesson.)