Lesson 2 of 15

Shannon Entropy

Shannon entropy measures the average uncertainty in a probability distribution. Given a discrete random variable X with probability mass function p(x):

H(X) = -\sum_{x} p(x) \log_2 p(x) \quad \text{bits}

(Terms where p(x) = 0 contribute nothing: 0 · log₂ 0 = 0 by convention.)

Key Properties

Distribution                             Entropy
Deterministic (p = 1 for one outcome)    H = 0
Uniform over n outcomes                  H = log₂ n (maximum)
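Both rows of the table can be checked numerically. The helper below is an illustrative sketch (its name is arbitrary) using the same formula as above:

```python
import math

def entropy(probs):
    # H(X) = -sum(p * log2(p)), skipping zero-probability outcomes.
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Deterministic distribution: one certain outcome, zero uncertainty.
print(entropy([1.0, 0.0, 0.0]))           # 0.0

# Uniform over 4 outcomes: log2(4) = 2 bits, the maximum for n = 4.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```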

Uniform Entropy

For a uniform distribution over n equally likely outcomes:

H_{\text{uniform}}(n) = \log_2 n

This is the maximum possible entropy for any distribution over n outcomes: uniform distributions are the most uncertain.
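To see this claim in action, compare the uniform distribution over n = 8 outcomes against a non-uniform one. The skewed probabilities below are made up purely for illustration:

```python
import math

def entropy(probs):
    # Shannon entropy in bits, skipping zero-probability outcomes.
    return sum(-p * math.log2(p) for p in probs if p > 0)

n = 8
uniform = [1 / n] * n
# An arbitrary non-uniform distribution over the same 8 outcomes.
skewed = [0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03]

print(entropy(uniform))  # 3.0, i.e. log2(8): the maximum
print(entropy(skewed))   # about 2.2 bits, strictly below the maximum
```

Any deviation from uniformity concentrates probability mass on some outcomes, which lowers the average uncertainty.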

Example

For a fair coin (p = 0.5 for each side): H = -(0.5 log₂ 0.5 + 0.5 log₂ 0.5) = 1 bit.

For a biased coin (p = 0.9, 1 - p = 0.1): H ≈ 0.469 bits (less uncertainty).

import math

def shannon_entropy(probs):
    # Sum -p * log2(p) over all outcomes, skipping zeros
    # (0 * log2(0) = 0 by convention).
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 (fair coin)
print(shannon_entropy([0.25]*4))    # 2.0 (uniform over 4 outcomes)

Your Task

Implement:

  • shannon_entropy(probs) — H(X) for a list of probabilities (skip zero entries)
  • uniform_entropy(n) — log₂(n)
  • max_entropy_dist(n) — returns [1/n, 1/n, …], a list of n equal probabilities