Lesson 2 of 15
# Shannon Entropy

Shannon entropy measures the average uncertainty in a probability distribution. Given a discrete random variable $X$ with probability mass function $p(x)$:

$$H(X) = -\sum_{x} p(x) \log_2 p(x)$$

(Terms where $p(x) = 0$ are defined as $0$ by convention, since $p \log_2 p \to 0$ as $p \to 0$.)
## Key Properties

| Distribution | Entropy |
|---|---|
| Deterministic ($p = 1$ for one outcome) | $H = 0$ |
| Uniform over $n$ outcomes | $H = \log_2 n$ (maximum) |
## Uniform Entropy

For a uniform distribution over $n$ equally likely outcomes:

$$H = -\sum_{i=1}^{n} \frac{1}{n} \log_2 \frac{1}{n} = \log_2 n$$

This is the maximum possible entropy for any distribution over $n$ outcomes: uniform distributions are the most uncertain.
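The maximality claim is easy to check numerically. This sketch (the helper name `entropy` is illustrative) compares a uniform distribution against a skewed one over the same number of outcomes:

```python
import math

def entropy(probs):
    # Shannon entropy in bits; zero-probability terms contribute 0.
    return sum(-p * math.log2(p) for p in probs if p > 0)

n = 8
uniform = [1 / n] * n
# Put half the mass on one outcome, spread the rest evenly.
skewed = [0.5] + [0.5 / (n - 1)] * (n - 1)

print(entropy(uniform))                    # 3.0, i.e. log2(8)
print(entropy(skewed) < entropy(uniform))  # True: skewing lowers entropy
```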
## Example

For a fair coin ($p = 0.5$ each): $H = -2 \cdot 0.5 \log_2 0.5 = 1$ bit.

For a biased coin ($p = 0.9$, $q = 0.1$): $H = -0.9 \log_2 0.9 - 0.1 \log_2 0.1 \approx 0.47$ bits (less uncertainty).
```python
import math

def shannon_entropy(probs):
    # Skip zero probabilities: the 0 * log2(0) term is 0 by convention.
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.25] * 4))  # 2.0
```
## Your Task

Implement:

- `shannon_entropy(probs)`: entropy of a list of probabilities (skip zeros)
- `uniform_entropy(n)`: $\log_2 n$
- `max_entropy_dist(n)`: returns the maximum-entropy distribution over $n$ outcomes (a list of $n$ equal probabilities)
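If you get stuck, one possible sketch of the three functions follows directly from the definitions above (a reference solution shape, not the only valid one):

```python
import math

def shannon_entropy(probs):
    # Sum -p * log2(p), skipping zeros (0 * log2(0) is 0 by convention).
    return sum(-p * math.log2(p) for p in probs if p > 0)

def uniform_entropy(n):
    # Entropy of a uniform distribution over n outcomes is log2(n).
    return math.log2(n)

def max_entropy_dist(n):
    # The maximum-entropy distribution over n outcomes is uniform.
    return [1 / n] * n

print(shannon_entropy(max_entropy_dist(4)))  # 2.0
print(uniform_entropy(4))                    # 2.0
```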