In binary, 010 is equal to 2. But what if it were 3 instead?

Binary numbering is an elegant way of representing any number using only 0s and 1s. To do this, each 0 or 1 (each “bit”) represents a different power of two: 1, 2, 4, 8, 16, 32, 64, and so forth. If the bit in that place is a 1, you add that power of two to the total. If the bit is a 0, you do not.

So, for example, a three-bit number can represent any decimal number from 0 to 7:

| Binary | +4 | +2 | +1 | Decimal |
|--------|----|----|----|---------|
| 000    | 0  | 0  | 0  | 0       |
| 001    | 0  | 0  | 1  | 1       |
| 010    | 0  | 1  | 0  | 2       |
| 011    | 0  | 1  | 1  | 3       |
| 100    | 1  | 0  | 0  | 4       |
| 101    | 1  | 0  | 1  | 5       |
| 110    | 1  | 1  | 0  | 6       |
| 111    | 1  | 1  | 1  | 7       |
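The place-value decoding in the table takes only a couple of lines of Python. (The function name here is mine, just for illustration.)

```python
def binary_to_decimal(bits: str) -> int:
    """Add up the power of two for each 1 bit, reading left to right."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)  # shift the running total, then add the new bit
    return total

print(binary_to_decimal("101"))  # prints 5: 4 + 0 + 1
```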

Binary is more useful to machines than decimal because you can store and transmit 1s and 0s with a series of simple on/off switches. Or simple electrical charges (1 = on, 0 = off). Or pulses of light. It’s this simplicity that makes binary the elemental language at the core of modern computing.

So far so good. But there are times when binary is a problem.

Consider a simple clock in binary. A number ticks up one step at a time until it reaches the maximum, then resets back to the start. (The third link below shows you an actual binary clock.) In the binary system above, there are steps where multiple bits have to switch at once. 3 to 4, for example, switches all three bits (011 goes to 100). 5 to 6 takes two bit changes (101 to 110). This doesn’t seem so bad, except that in some circumstances it’s genuinely difficult to get bits to switch simultaneously. And if there’s even a tiny delay between those bit switches, there’s the possibility of error.

Say I take a look at that clock, and it says 101. Is that 5? Or did I just catch the clock moving between 3 and 4, and the first two bits have switched but the last one has not? Is 101 a number, or just an intermediate state between two other numbers?
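Counting how many bits flip at each tick makes the problem concrete. A quick sketch (XOR marks the differing bits, then we count the 1s):

```python
def bits_changed(a: int, b: int) -> int:
    """Count the bit positions where a and b differ."""
    return bin(a ^ b).count("1")

for n in range(7):
    print(f"{n:03b} -> {n + 1:03b}: {bits_changed(n, n + 1)} bit(s) flip")
# 011 -> 100 flips all three bits; 101 -> 110 flips two
```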

Now, this problem is most pronounced whenever you’re dealing with physical bits. Back when computers used vacuum tubes as memory, for example, each tube acted as a single bit. But it’s enough of a problem that, back in the 1940s, an alternative binary system was proposed. This is gray code:

| Decimal | Binary | Gray code |
|---------|--------|-----------|
| 0       | 000    | 000       |
| 1       | 001    | 001       |
| 2       | 010    | 011       |
| 3       | 011    | 010       |
| 4       | 100    | 110       |
| 5       | 101    | 111       |
| 6       | 110    | 101       |
| 7       | 111    | 100       |

In this system, stepping up one number only ever involves changing a single bit. There’s no possibility for those pesky intermediate states, because no more than one bit toggles at once. Gray code is also circular: going from 7 back to 0 also involves just a single bit change.
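Converting between plain binary and this code turns out to be a small amount of bit twiddling. A sketch, using the standard XOR-with-shift construction:

```python
def to_gray(n: int) -> int:
    """Reflected binary Gray code: XOR n with itself shifted right by one."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Undo the conversion by folding in every right shift of g."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Consecutive codes differ in exactly one bit, including wrapping from 7 to 0
codes = [to_gray(n) for n in range(8)]
print([f"{c:03b}" for c in codes])  # matches the Gray code column above
```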

Today gray code is used in a few surprising places: to guard against and correct errors in broadcasting and telecommunications, in rotary encoders (camera lenses, computer mice, radar…), and even as a possible solution to the Tower of Hanoi puzzle.

That’s not a gray code: it’s a Gray code, named for Frank Gray, the television pioneer. (I’d heard that he was involved in creating color television, but alas I can’t verify that.)

This Gray code, “reflected binary code” (RBC), is the common one, but lots of fun variants exist. For example, RBC toggles the lowest bit a lot: the position of the toggled bit goes 1, 2, 1, 3, 1, 2, 1, 4… like the marks on an inch ruler, so maybe some bits wear out faster than others. Then you want a balanced Gray code, which spreads the toggles more evenly across positions.
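The ruler pattern is easy to see by printing which bit position flips at each step of the reflected code (counting position 1 as the lowest bit). A quick sketch:

```python
def to_gray(n: int) -> int:
    """Reflected binary Gray code."""
    return n ^ (n >> 1)

# Position of the single bit that flips at each tick from 0 up to 8
positions = [(to_gray(n) ^ to_gray(n + 1)).bit_length() for n in range(8)]
print("".join(str(p) for p in positions))  # prints 12131214, like ruler marks
```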

Oh, single-track Gray codes are cool. Wikipedia has an animation that says it better than I could.