
Hamming Codes

11/17/04

History

In the late 1940s Richard Hamming recognized that the further evolution of computers required greater reliability, in particular the ability not only to detect errors but to correct them. His search for error-correcting codes led to the Hamming Codes, perfect 1-error correcting codes, and the extended Hamming Codes, 1-error correcting and 2-error detecting codes.

Uses

Hamming Codes are still widely used in computing, telecommunication, and other applications. Hamming Codes are also applied in:

Data compression
Some solutions to the popular puzzle The Hat Game
Block Turbo Codes

A [7,4] binary Hamming Code

Let our codeword be (x1 x2 … x7) ∈ F_2^7. The bits x3, x5, x6, x7 are chosen according to the message (perhaps the message itself is (x3 x5 x6 x7)). The remaining bits are computed mod 2:

x4 := x5 + x6 + x7
x2 := x3 + x6 + x7
x1 := x3 + x5 + x7
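As a quick illustration (not from the original slides), here is a minimal Python sketch of this encoding rule; the function name and the bit ordering (x1, …, x7) simply follow the conventions above.

```python
def hamming74_encode(m3, m5, m6, m7):
    """Encode the 4 message bits (x3, x5, x6, x7) into a [7,4] Hamming
    codeword (x1, ..., x7) using the parity equations above (arithmetic mod 2)."""
    x3, x5, x6, x7 = m3, m5, m6, m7
    x4 = (x5 + x6 + x7) % 2
    x2 = (x3 + x6 + x7) % 2
    x1 = (x3 + x5 + x7) % 2
    return [x1, x2, x3, x4, x5, x6, x7]

print(hamming74_encode(1, 0, 1, 0))  # -> [1, 0, 1, 1, 0, 1, 0]
```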

[7,4] binary Hamming codewords

A [7,4] binary Hamming Code

Let a = x4 + x5 + x6 + x7 (= 1 iff one of these bits is in error).
Let b = x2 + x3 + x6 + x7.
Let c = x1 + x3 + x5 + x7.
If there is an error (assuming at most one), then abc will be the binary representation of the subscript of the offending bit.

A [7,4] binary Hamming Code

If (y1 y2 … y7) is received and abc ≠ 000, then we assume the bit in position abc is in error and switch it. If abc = 000, we assume there were no errors (so if there are three or more errors we may recover the wrong codeword).
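A minimal sketch of this bit-flipping rule in Python, assuming the same bit ordering as above; the received word is the one used in the worked example later on.

```python
def hamming74_correct(y):
    """Correct at most one bit error in a received 7-tuple (y1, ..., y7)
    using the checks a, b, c defined above."""
    y1, y2, y3, y4, y5, y6, y7 = y
    a = (y4 + y5 + y6 + y7) % 2
    b = (y2 + y3 + y6 + y7) % 2
    c = (y1 + y3 + y5 + y7) % 2
    pos = 4 * a + 2 * b + c        # abc read as a binary number
    y = list(y)
    if pos != 0:                   # assume the bit with subscript abc is wrong
        y[pos - 1] ^= 1            # positions are 1-based
    return y

print(hamming74_correct([1, 0, 1, 0, 0, 1, 0]))  # -> [1, 0, 1, 1, 0, 1, 0]
```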

Definition: Generator and Check Matrices

For an [n, k] linear code, the generator matrix is a k × n matrix whose row space is the given code. A check matrix for an [n, k] code is a generator matrix for the dual code. In other words, it is an (n - k) × n matrix M for which Mx^T = 0 for all x in the code.

A Construction for binary Hamming Codes

For a given r, form an r × (2^r - 1) matrix M, the columns of which are the binary representations (r bits long) of 1, …, 2^r - 1. The linear code for which this is the check matrix is a [2^r - 1, 2^r - 1 - r] binary Hamming Code = {x = (x1 x2 … xn) : Mx^T = 0}.
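A small Python sketch of this construction (an illustration, not code from the notes): it builds the r × (2^r - 1) matrix whose j-th column is j written in binary.

```python
def hamming_check_matrix(r):
    """Check matrix of the [2^r - 1, 2^r - 1 - r] binary Hamming Code:
    column j (1-based) is the r-bit binary representation of j."""
    n = 2**r - 1
    # Row i holds bit (r - 1 - i) of each column index j.
    return [[(j >> (r - 1 - i)) & 1 for j in range(1, n + 1)] for i in range(r)]

for row in hamming_check_matrix(3):   # the [7,4] case
    print(row)
# [0, 0, 0, 1, 1, 1, 1]
# [0, 1, 1, 0, 0, 1, 1]
# [1, 0, 1, 0, 1, 0, 1]
```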

Example Check Matrix

A check matrix for a [7,4] binary Hamming Code is L3, whose columns are the binary representations of 1 through 7:

L3 =
0 0 0 1 1 1 1
0 1 1 0 0 1 1
1 0 1 0 1 0 1

(Row 1 gives the check a, row 2 gives b, and row 3 gives c from the previous slides.)

Syndrome Decoding

Let y = (y1 y2 … yn) be a received word. The syndrome of y is S := Lr y^T. If S = 0 then there was no error. If S ≠ 0 then S is the binary representation of some integer 1 ≤ t ≤ n = 2^r - 1, and the intended codeword is x = (y1 … yt + 1 … yn), i.e., y with the t-th bit flipped.
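Here is a self-contained Python sketch of syndrome decoding for any r (the check matrix is rebuilt inline; this is an illustration, not code from the notes).

```python
def syndrome_decode(y):
    """Syndrome-decode a received binary word y of length n = 2^r - 1:
    S = Lr y^T; a nonzero S is the binary representation of the error position t."""
    n = len(y)
    r = (n + 1).bit_length() - 1
    # Column j of Lr is the r-bit binary representation of j (j = 1..n).
    M = [[(j >> (r - 1 - i)) & 1 for j in range(1, n + 1)] for i in range(r)]
    S = [sum(M[i][j] * y[j] for j in range(n)) % 2 for i in range(r)]
    t = int("".join(map(str, S)), 2)
    y = list(y)
    if t != 0:
        y[t - 1] ^= 1              # flip the offending bit
    return y

print(syndrome_decode([1, 0, 1, 0, 0, 1, 0]))  # -> [1, 0, 1, 1, 0, 1, 0]
```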

Example Using L3

Suppose (1 0 1 0 0 1 0) is received.

The syndrome is S = L3 y^T = (1 0 0)^T; 100 is 4 in binary, so bit 4 is switched and the intended codeword was (1 0 1 1 0 1 0).

Extended [8,4] binary Hamming Code

As with the [7,4] binary Hamming Code:


x3, x5, x6, x7 are chosen according to the message.
x4 := x5 + x6 + x7
x2 := x3 + x6 + x7
x1 := x3 + x5 + x7

Add a new bit x0 such that

x0 := x1 + x2 + x3 + x4 + x5 + x6 + x7 (mod 2), i.e., the new bit makes the sum of all eight bits zero. x0 is called a parity check bit.
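A minimal sketch of the extended encoder, reusing the [7,4] equations above; placing the parity bit x0 in front is just the convention used here.

```python
def hamming84_encode(m3, m5, m6, m7):
    """Extended [8,4] encoding: form the [7,4] codeword as above, then add
    x0 so that the sum of all eight bits is 0 mod 2."""
    x3, x5, x6, x7 = m3, m5, m6, m7
    x4 = (x5 + x6 + x7) % 2
    x2 = (x3 + x6 + x7) % 2
    x1 = (x3 + x5 + x7) % 2
    x0 = (x1 + x2 + x3 + x4 + x5 + x6 + x7) % 2
    return [x0, x1, x2, x3, x4, x5, x6, x7]

print(hamming84_encode(1, 0, 1, 0))  # -> [0, 1, 0, 1, 1, 0, 1, 0]
```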

Extended binary Hamming Code

The minimum distance between any two codewords is now 4, so an extended Hamming Code is a 1-error correcting and 2-error detecting code. The general construction of a [2^r, 2^r - 1 - r] extended code from a [2^r - 1, 2^r - 1 - r] binary Hamming Code is the same: add a parity check bit.

Check Matrix Construction of Extended Hamming Code

The check matrix of an extended Hamming Code can be constructed from the check matrix of a Hamming code by adding a zero column on the left and a row of 1s to the bottom.
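The matrix manipulation is simple enough to show directly; a sketch, assuming the matrices are stored as lists of rows:

```python
def extend_check_matrix(M):
    """Build the extended check matrix: prepend a zero entry to each row of M,
    then append a row of all 1s at the bottom."""
    extended = [[0] + list(row) for row in M]
    extended.append([1] * (len(M[0]) + 1))
    return extended

# e.g. extend_check_matrix(hamming_check_matrix(3)), using the earlier sketch,
# yields a 4 x 8 check matrix for the extended [8,4] code.
```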

Decoding Extended Hamming Code
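The decoding rule itself is not spelled out above, so the sketch below is an assumption: it uses the overall parity bit to distinguish a single error (correctable) from a double error (detectable only), which is consistent with the 1-error correcting, 2-error detecting property stated earlier.

```python
def hamming84_decode(y):
    """Assumed decoding rule for the extended [8,4] code, y = (y0, y1, ..., y7).
    Odd overall parity: assume one error and correct it.
    Even parity but nonzero syndrome: report two errors (uncorrectable)."""
    y0, rest = y[0], list(y[1:])                     # rest = (y1, ..., y7)
    parity = sum(y) % 2
    a = (rest[3] + rest[4] + rest[5] + rest[6]) % 2  # y4 + y5 + y6 + y7
    b = (rest[1] + rest[2] + rest[5] + rest[6]) % 2  # y2 + y3 + y6 + y7
    c = (rest[0] + rest[2] + rest[4] + rest[6]) % 2  # y1 + y3 + y5 + y7
    t = 4 * a + 2 * b + c
    if parity == 1:                                  # odd number of errors: assume one
        if t != 0:
            rest[t - 1] ^= 1                         # error among y1..y7
        else:
            y0 ^= 1                                  # error in the parity bit itself
        return [y0] + rest, "corrected"
    if t != 0:
        return y, "two errors detected"
    return y, "no error"
```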

q-ary Hamming Codes

The binary construction generalizes to Hamming Codes over an alphabet A = {0, 1, …, q - 1} (identified with Fq), q ≥ 2. For a given r, form an r × (q^r - 1)/(q - 1) matrix M over A, any two columns of which are linearly independent. M determines a [(q^r - 1)/(q - 1), (q^r - 1)/(q - 1) - r] (= [n, k]) q-ary Hamming Code for which M is the check matrix.
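A Python sketch of this construction (an illustration, not from the notes): it takes one column from each set of nonzero scalar multiples by requiring the first nonzero entry to be 1, which guarantees pairwise linear independence.

```python
from itertools import product

def q_ary_hamming_check_matrix(q, r):
    """r x (q^r - 1)/(q - 1) check matrix over F_q (q prime here, so that
    arithmetic mod q is a field): keep the nonzero columns whose first
    nonzero entry is 1, one per set of scalar multiples."""
    cols = [v for v in product(range(q), repeat=r)
            if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
    return [[col[i] for col in cols] for i in range(r)]   # rows of the matrix

for row in q_ary_hamming_check_matrix(3, 2):
    print(row)
# [0, 1, 1, 1]
# [1, 0, 1, 2]
```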

Example: ternary [4, 2] Hamming

Two check matrices for the same [4, 2] ternary Hamming Code:

Syndrome decoding: the q-ary case

Suppose an error vector of weight 1 was introduced: y = x + (0 … α … 0), with the nonzero scalar α in the ith spot. Then the syndrome of the received word y, S := My^T, will be a multiple of one of the columns of M, namely S = α·mi, where mi is the ith column of M. So we decode by subtracting α from the ith entry of y.

Example: q-ary Syndrome

A [4,2] ternary Hamming Code with check matrix M; the word (0 1 1 1) is received.

The syndrome S = My^T works out to twice the third column of M, so the error vector is (0 0 2 0) and we decode (0 1 1 1) as (0 1 1 1) - (0 0 2 0) = (0 1 2 1).
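A sketch of the whole q-ary decoding step in Python. The check matrix from the original slide is not reproduced above, so the matrix below is an assumed [4,2] ternary check matrix chosen to be consistent with this worked example.

```python
def q_ary_syndrome_decode(y, M, q):
    """Compute S = M y^T mod q; if S = alpha * (column i of M) for a nonzero
    scalar alpha, subtract alpha from position i of y."""
    r, n = len(M), len(M[0])
    S = [sum(M[i][j] * y[j] for j in range(n)) % q for i in range(r)]
    y = list(y)
    if any(S):
        for i in range(n):
            col = [M[k][i] for k in range(r)]
            for alpha in range(1, q):
                if all((alpha * col[k]) % q == S[k] for k in range(r)):
                    y[i] = (y[i] - alpha) % q    # remove the error alpha at spot i
                    return y
    return y

# Assumed ternary [4,2] check matrix (columns pairwise linearly independent).
M = [[1, 0, 1, 1],
     [2, 1, 1, 0]]
print(q_ary_syndrome_decode([0, 1, 1, 1], M, 3))  # -> [0, 1, 2, 1]
```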

Perfect 1-error correcting

Hamming Codes are perfect 1-error correcting codes. That is, any received word with at most one error will be decoded correctly, and no code of the same length can do this with more codewords (equality holds in the sphere packing bound below). Moreover, for a given r, any perfect 1-error correcting linear code of length n = 2^r - 1 and dimension n - r is a Hamming Code.

Proof: 1-error correcting

A code will be perfect 1-error correcting if

spheres of radius 1 centered at codewords cover the codespace, and
the minimum distance between any two codewords is at least 3, since then the spheres of radius 1 centered at codewords are disjoint.

Proof: 1-error correcting

Suppose codewords x, y differ in exactly 1 position. Then x - y is a codeword of weight 1, so M(x - y)^T is a nonzero multiple of a single column of M and hence M(x - y)^T ≠ 0, even though a codeword must satisfy M(x - y)^T = 0. Contradiction. If x, y differ in exactly 2 positions, then M(x - y)^T is the difference of two multiples of distinct columns of M. No two columns of M are linearly dependent, so M(x - y)^T ≠ 0, another contradiction. Thus the minimum distance is at least 3.

Perfect

A sphere of radius e centered at x is Se(x) = {y in A^n : dH(x, y) ≤ e}, where A is the alphabet, Fq, and dH is the Hamming distance. A sphere of radius e contains Σ_{i=0..e} C(n, i)(q - 1)^i words. If C is an e-error correcting code, then the spheres of radius e about distinct codewords are disjoint, so |C| · Σ_{i=0..e} C(n, i)(q - 1)^i ≤ q^n, and therefore |C| ≤ q^n / Σ_{i=0..e} C(n, i)(q - 1)^i.

Perfect

This last inequality, |C| ≤ q^n / Σ_{i=0..e} C(n, i)(q - 1)^i, is called the sphere packing bound for an e-error correcting code C of length n over Fq; in our case e = 1. A code for which equality holds is called perfect.

Proof: Perfect

The right side of this bound, for e = 1, is q^n / (1 + n(q - 1)). The left side is |C| = q^(n-r), where n = (q^r - 1)/(q - 1). Since q^(n-r)·(1 + n(q - 1)) = q^(n-r)·(1 + (q^r - 1)) = q^(n-r)·q^r = q^n, equality holds, so the code is perfect.
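The arithmetic in this step is easy to spot-check numerically; a tiny sketch (the identity is purely arithmetic and holds for any q > 1, even though the code itself exists only for prime-power q):

```python
def sphere_packing_equality(q, r):
    """Check q^(n-r) * (1 + n*(q-1)) == q^n with n = (q^r - 1)/(q - 1)."""
    n = (q**r - 1) // (q - 1)
    return q**(n - r) * (1 + n * (q - 1)) == q**n

print(all(sphere_packing_equality(q, r) for q in (2, 3, 5) for r in (2, 3, 4)))  # True
```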

Applications

Data compression
Turbo Codes
The Hat Game

Data Compression

Hamming Codes can be used for a form of lossy compression. If n = 2^r - 1 for some r, then any n-tuple of bits x is within distance at most 1 of some Hamming codeword c. Let G be a generator matrix for the Hamming Code, and let m be the message with mG = c. For compression, store x as m. For decompression, decode m as c. This saves r bits of space but corrupts (at most) 1 bit.
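A sketch of this scheme for n = 7, reusing the hamming74_encode and hamming74_correct sketches given earlier; the choice of positions 3, 5, 6, 7 as the stored message bits matches the encoding convention above.

```python
def hamming74_compress(x):
    """Move the 7-bit word x to the nearest [7,4] codeword (at most 1 bit away)
    and keep only the message positions x3, x5, x6, x7 -- 4 bits instead of 7."""
    c = hamming74_correct(x)          # nearest-codeword step, sketched earlier
    return [c[2], c[4], c[5], c[6]]

def hamming74_decompress(m):
    """Re-encode the stored message bits; the result differs from the original
    7-bit word in at most one position."""
    return hamming74_encode(*m)

x = [1, 1, 1, 1, 0, 1, 0]
print(hamming74_decompress(hamming74_compress(x)))  # -> [1, 0, 1, 1, 0, 1, 0]
```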

The Hat Game

A group of n players enters a room, whereupon each player receives a hat. Each player can see everyone else's hat but not his own. The players must each simultaneously guess a hat color or pass. The group loses if any player guesses the wrong hat color or if every player passes. Players are not necessarily anonymous; they can be numbered.

The Hat Game

Assignment of hats is assumed to be random. The players can meet beforehand to devise a strategy. The goal is to devise the strategy that gives the highest probability of winning.

Source

Notes on Coding Theory by J.I. Hall, http://www.mth.msu.edu/~jhall/classes/codenotes/coding-notes.html
