Lesson 1 of 15

The Neuron

From Perceptron to Neuron

The perceptron you built in Machine Learning was a binary threshold unit. Real neural networks use a smoother model: the artificial neuron.

A neuron computes a weighted sum of its inputs plus a bias, then passes the result through an activation function:

z = \sum_{i=1}^{n} w_i x_i + b = \mathbf{w} \cdot \mathbf{x} + b

a = f(z)

  • \mathbf{x} — input vector (features)
  • \mathbf{w} — weight vector (learned parameters)
  • b — bias (a learnable scalar offset)
  • f — activation function (next lesson)
  • z — pre-activation (the weighted sum)
  • a — activation (the neuron's output)

The bias lets the neuron fire even when all inputs are zero, shifting the activation function left or right.

Why Weighted Sums?

Each weight w_i controls how strongly input x_i influences the output. A large positive weight means "this input strongly activates me". A large negative weight means "this input suppresses me". The bias is a free parameter that controls the threshold.
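To make this concrete, here is a small numeric illustration (the specific values are ours, chosen for the example): one input with a large positive weight, one with a negative weight, and a bias that shifts the result.

```python
inputs = [1.0, 2.0]
weights = [3.0, -1.5]   # positive weight amplifies, negative weight suppresses
bias = 0.5

# z = w1*x1 + w2*x2 + b = 3.0*1.0 + (-1.5)*2.0 + 0.5 = 0.5
z = sum(w * x for w, x in zip(weights, inputs)) + bias
print(z)  # 0.5
```

The excitatory and inhibitory contributions nearly cancel here; the bias tips the final value.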

Your Task

Implement neuron(inputs, weights, bias) that returns the pre-activation value z = \mathbf{w} \cdot \mathbf{x} + b.

(We will apply the activation function in the next lesson.)
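If you get stuck, one minimal pure-Python sketch looks like this (the length check is an extra safeguard, not required by the task):

```python
def neuron(inputs, weights, bias):
    """Return the pre-activation z = w . x + b (no activation applied yet)."""
    if len(inputs) != len(weights):
        raise ValueError("inputs and weights must have the same length")
    return sum(w * x for w, x in zip(weights, inputs)) + bias

print(neuron([1.0, 0.0, -1.0], [0.5, 2.0, 0.25], 1.0))  # 1.25
```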
