Lesson 14 of 15

Channel Matrix and Mutual Information

Channel Matrix

A discrete memoryless channel is fully described by its channel matrix (also called transition matrix or stochastic matrix):

W_{ij} = P(Y = j \mid X = i)

Row i gives the conditional distribution of the output Y given input X = i. Each row sums to 1.
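The row-sum property is easy to check directly. Below is a small sketch using a binary symmetric channel with crossover probability 0.1 (the specific matrix is illustrative, not part of the lesson's exercises):

```python
# Binary symmetric channel with crossover probability 0.1 (illustrative values).
W = [[0.9, 0.1],
     [0.1, 0.9]]

# Each row is a conditional distribution P(Y = . | X = i), so it must sum to 1.
for i, row in enumerate(W):
    assert abs(sum(row) - 1.0) < 1e-12, f"row {i} does not sum to 1"
print("all rows sum to 1")
```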

Computing Output Probabilities

Given an input distribution p(X) and channel matrix W, the marginal output distribution is:

P(Y = j) = \sum_i P(X = i) \cdot W_{ij}

In matrix form: \mathbf{p}_Y = \mathbf{p}_X \cdot W.

Mutual Information Through a Channel

The mutual information between input and output is:

I(X; Y) = H(X) + H(Y) - H(X, Y)

The joint distribution needed for H(X, Y) is:

P(X = i, Y = j) = P(X = i) \cdot W_{ij}
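Building the joint distribution is a single elementwise product of the input probabilities with the rows of W. A minimal sketch, again using an illustrative binary symmetric channel with crossover probability 0.1:

```python
# Joint distribution P(X = i, Y = j) = P(X = i) * W[i][j]
# for a binary symmetric channel with crossover 0.1 (illustrative values).
input_probs = [0.5, 0.5]
W = [[0.9, 0.1],
     [0.1, 0.9]]

joint = [[input_probs[i] * W[i][j] for j in range(2)] for i in range(2)]
print(joint)  # [[0.45, 0.05], [0.05, 0.45]]
```

Note that all entries of the joint sum to 1, since each row of W sums to 1 and the input probabilities sum to 1.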

Examples

Noiseless (identity) channel: W = I (the identity matrix) → I(X; Y) = H(X); no information is lost.

Completely noisy channel: every row of W is identical → the output is independent of the input → I(X; Y) = 0.

import math

def channel_output_probs(input_probs, channel_matrix):
    # Marginalize over the input: P(Y = j) = sum_i P(X = i) * W[i][j]
    n_in = len(input_probs)
    n_out = len(channel_matrix[0])
    return [sum(input_probs[i] * channel_matrix[i][j]
                for i in range(n_in)) for j in range(n_out)]

identity = [[1.0, 0.0], [0.0, 1.0]]
print(channel_output_probs([0.5, 0.5], identity))  # [0.5, 0.5]
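Building on the same idea, here is one possible sketch of the mutual-information computation, using the identity I(X; Y) = H(X) + H(Y) - H(X, Y) from above. It also checks the two extreme channels from the Examples section (the helper `entropy` is introduced here for illustration):

```python
import math

def entropy(probs):
    # Shannon entropy in bits; zero-probability outcomes contribute nothing
    return -sum(p * math.log2(p) for p in probs if p > 0)

def channel_mutual_info(input_probs, channel_matrix):
    # One possible sketch: I(X;Y) = H(X) + H(Y) - H(X,Y),
    # with joint P(X = i, Y = j) = P(X = i) * W[i][j].
    n_in = len(input_probs)
    n_out = len(channel_matrix[0])
    joint = [input_probs[i] * channel_matrix[i][j]
             for i in range(n_in) for j in range(n_out)]
    output_probs = [sum(input_probs[i] * channel_matrix[i][j]
                        for i in range(n_in)) for j in range(n_out)]
    return entropy(input_probs) + entropy(output_probs) - entropy(joint)

# Noiseless channel: I(X;Y) = H(X) = 1 bit for a uniform binary input.
print(channel_mutual_info([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]]))  # 1.0
# Completely noisy channel (identical rows): I(X;Y) = 0.
print(channel_mutual_info([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]]))  # 0.0
```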

Your Task

Implement:

  • channel_output_probs(input_probs, channel_matrix): P(Y = j) = \sum_i p_i W_{ij}
  • channel_mutual_info(input_probs, channel_matrix): I(X; Y) using the joint distribution