Given $n$ bits, with the bits numbered $b_{n-1}, b_{n-2}, \ldots, b_1, b_0$, we assume that bit $b_0$ corresponds to the least significant bit. Then

$$b_{n-1}\,b_{n-2} \cdots b_1\,b_0 \;=\; \sum_{i=0}^{n-1} b_i\,2^i,$$

where the sequence of bits on the left is the binary number equivalent of
the decimal number represented by the summation on the right.
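As a concrete sketch of the summation, the short C fragment below evaluates $\sum_{i=0}^{n-1} b_i\,2^i$ for the 4-bit pattern 1011; the function name `bits_to_decimal` and the array-of-ints bit representation are our own illustrative choices, not from the text.

```c
#include <stdio.h>

/* Evaluate value = b[n-1]*2^(n-1) + ... + b[1]*2 + b[0],
 * where b[0] holds the least significant bit. */
static unsigned bits_to_decimal(const int b[], int n)
{
    unsigned value = 0;
    for (int i = 0; i < n; i++)
        value += (unsigned)b[i] << i;   /* b_i * 2^i */
    return value;
}

int main(void)
{
    int b[] = { 1, 1, 0, 1 };               /* b[i] = b_i, i.e. the pattern 1011 */
    printf("%u\n", bits_to_decimal(b, 4));  /* 1*8 + 0*4 + 1*2 + 1*1 = 11 */
    return 0;
}
```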
Since each bit can only take values from the alphabet {0,1}, a string of
bits so numbered can represent up to $2^n$ unique patterns.
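The count $2^n$ can be checked by simply enumerating the patterns; the C sketch below (with $n = 3$ chosen only to keep the output small) prints all eight 3-bit strings:

```c
#include <stdio.h>

/* Enumerate all 2^n patterns of an n-bit string,
 * printing the most significant bit first. */
int main(void)
{
    const int n = 3;
    for (unsigned p = 0; p < (1u << n); p++) {   /* 2^n iterations */
        for (int i = n - 1; i >= 0; i--)
            putchar(((p >> i) & 1u) ? '1' : '0');
        putchar('\n');                           /* 000, 001, ..., 111 */
    }
    return 0;
}
```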
The type representation adopted for this string of bits determines how the string will be interpreted, as described below.
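For instance, the same bit pattern yields different values depending on the type through which it is viewed. The C sketch below (our own illustration, assuming a typical two's-complement machine) reads the 8-bit pattern 1111 1111 as both an unsigned and a signed byte:

```c
#include <stdio.h>

int main(void)
{
    unsigned char u = 0xFF;              /* the 8-bit pattern 1111 1111 */
    signed char   s = (signed char)0xFF; /* the very same bit pattern */

    /* Read as an unsigned integer, the pattern means 255; read as a
     * two's-complement signed integer, it means -1. */
    printf("as unsigned char: %u\n", (unsigned)u);  /* 255 */
    printf("as signed char:   %d\n", (int)s);       /* -1  */
    return 0;
}
```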