In mathematics and computing, Fibonacci coding is a universal code which encodes positive integers into binary code words. It is one example of representations of integers based on Fibonacci numbers. Each code word ends with "11" and contains no other instances of "11" before the end.

The Fibonacci code is closely related to the Zeckendorf representation, a positional numeral system that uses Zeckendorf's theorem and has the property that no number has a representation with consecutive 1s. The Fibonacci code word for a particular integer is exactly the integer's Zeckendorf representation with the order of its digits reversed and an additional "1" appended to the end.

Definition

For a number N, if d(0), d(1), …, d(k-1), d(k) represent the digits of the code word representing N, then we have:

  N = \sum_{i=0}^{k-1} d(i) F(i+2),   and   d(k-1) = d(k) = 1,

where F(i) is the ith Fibonacci number, and so F(i+2) is the ith distinct Fibonacci number starting with 1, 2, 3, 5, 8, 13, …. The last bit d(k) is always an appended bit of 1 and does not carry place value.
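
As a small worked instance of this definition, take N = 4, whose code word is 1011 (so k = 3):

  4 = d(0) F(2) + d(1) F(3) + d(2) F(4) = 1·1 + 0·2 + 1·3,   with d(2) = d(3) = 1.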

It can be shown that such a coding is unique, and the only occurrence of "11" in any code word is at the end, i.e. in d(k-1) and d(k). The penultimate bit is the most significant bit and the first bit is the least significant bit. Also, leading zeros cannot be omitted as they can be in, e.g., decimal numbers.

The first few Fibonacci codes are shown below, together with their so-called implied probability: the probability each number would need to have for its Fibonacci code word to be of minimum size.

Symbol  Fibonacci representation  Fibonacci code word  Implied probability
1       F(2)                      11                   1/4
2       F(3)                      011                  1/8
3       F(4)                      0011                 1/16
4       F(2) + F(4)               1011                 1/16
5       F(5)                      00011                1/32
6       F(2) + F(5)               10011                1/32
7       F(3) + F(5)               01011                1/32
8       F(6)                      000011               1/64
9       F(2) + F(6)               100011               1/64
10      F(3) + F(6)               010011               1/64
11      F(4) + F(6)               001011               1/64
12      F(2) + F(4) + F(6)        101011               1/64
13      F(7)                      0000011              1/128
14      F(2) + F(7)               1000011              1/128

To encode an integer N (a code sketch follows these steps):

  1. Find the largest Fibonacci number equal to or less than N; subtract this number from N, keeping track of the remainder.
  2. If the number subtracted was the ith Fibonacci number F(i), put a 1 in place i-2 in the code word (counting the leftmost digit as place 0).
  3. Repeat the previous steps, substituting the remainder for N, until a remainder of 0 is reached.
  4. Place an additional 1 after the rightmost digit in the code word.
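
A minimal Python sketch of these steps (the helper name fibonacci_encode is only illustrative):

  def fibonacci_encode(n: int) -> str:
      """Return the Fibonacci code word for a positive integer n."""
      assert n >= 1
      # Fibonacci numbers F(2), F(3), ... = 1, 2, 3, 5, 8, ... up to at least n.
      fibs = [1, 2]
      while fibs[-1] < n:
          fibs.append(fibs[-1] + fibs[-2])
      # Steps 1-3: greedily subtract the largest Fibonacci number that still fits.
      bits = ["0"] * len(fibs)
      remainder = n
      for i in range(len(fibs) - 1, -1, -1):
          if fibs[i] <= remainder:
              bits[i] = "1"
              remainder -= fibs[i]
      # Bit i stands for F(i+2); drop unused high places, then append the final "1" (step 4).
      return "".join(bits).rstrip("0") + "1"

  # For example: fibonacci_encode(4) == "1011" and fibonacci_encode(65) == "0100100011".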

To decode a code word, remove the final "1", assign the remaining bits the values 1, 2, 3, 5, 8, 13, … (the Fibonacci numbers), and sum the values of the "1" bits.
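
A matching decoding sketch (again, fibonacci_decode is just an illustrative name):

  def fibonacci_decode(code: str) -> int:
      """Recover the integer encoded by a Fibonacci code word ending in '11'."""
      assert code.endswith("11")
      value, a, b = 0, 1, 2          # a, b run through F(2), F(3), ... = 1, 2, 3, 5, ...
      for bit in code[:-1]:          # drop the final appended "1"
          if bit == "1":
              value += a
          a, b = b, a + b
      return value

  # For example: fibonacci_decode("0100100011") == 65.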

Comparison with other universal codes

Fibonacci coding has a useful property that sometimes makes it attractive in comparison to other universal codes: it is an example of a self-synchronizing code, making it easier to recover data from a damaged stream. With most other universal codes, if a single bit is altered, none of the data that comes after it will be correctly read. With Fibonacci coding, on the other hand, a changed bit may cause one token to be read as two, or cause two tokens to be read incorrectly as one, but reading a "0" from the stream will stop the errors from propagating further. Since the only stream that has no "0" in it is a stream of "11" tokens, the total edit distance between a stream damaged by a single bit error and the original stream is at most three.
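
To illustrate, the sketch below concatenates the code words for 3, 5, 2 and 4 from the table above, flips a single bit, and re-splits both streams; split_tokens is an illustrative helper that simply cuts the stream after every "11":

  def split_tokens(stream: str) -> list:
      """Split a bit stream into tokens, cutting after every occurrence of '11'."""
      tokens, current = [], ""
      for bit in stream:
          current += bit
          if current.endswith("11"):
              tokens.append(current)
              current = ""
      if current:                    # trailing bits with no terminator yet
          tokens.append(current)
      return tokens

  original = "0011" + "00011" + "011" + "1011"   # code words for 3, 5, 2, 4
  damaged = original[:6] + "1" + original[7:]    # flip the "0" at index 6 (inside "00011")
  print(split_tokens(original))   # ['0011', '00011', '011', '1011']
  print(split_tokens(damaged))    # ['0011', '0011', '1011', '1011'] -- the damaged token and its
                                  # neighbour are misread, but every later token is read correctly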

This approach of encoding using a sequence of symbols in which some patterns (like "11") are forbidden can be freely generalized.[1]

Example

For example, the number 65 is represented in Fibonacci coding as 0100100011, since 65 = 2 + 8 + 55. The first two Fibonacci numbers (0 and 1) are not used, and an additional 1 is always appended.

Generalizations

The Fibonacci encodings for the positive integers are binary strings that end with "11" and contain no other instances of "11". This can be generalized to binary strings that end with N consecutive 1's and contain no other instances of N consecutive 1's. For instance, for N = 3 the positive integers are encoded as 111, 0111, 00111, 10111, 000111, 100111, 010111, 110111, 0000111, 1000111, 0100111, …. In this case, the number of encodings as a function of string length is given by the sequence of Tribonacci numbers.
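
A brute-force Python sketch of this N = 3 case (count_codes is an illustrative helper) counts the valid encodings of each length:

  from itertools import product

  def count_codes(length: int) -> int:
      """Count binary strings of the given length that end with '111' and contain no other '111'."""
      count = 0
      for bits in product("01", repeat=length):
          s = "".join(bits)
          if s.endswith("111") and "111" not in s[:-1]:
              count += 1
      return count

  print([count_codes(n) for n in range(3, 10)])
  # [1, 1, 2, 4, 7, 13, 24] -- from the fourth entry on, each term is the sum of the
  # previous three (the Tribonacci rule)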

For general constraints defining which symbols are allowed after a given symbol, the maximal information rate can be obtained by first finding the optimal transition probabilities using a maximal entropy random walk, and then using an entropy coder (with the roles of encoder and decoder switched) to encode a message as a sequence of symbols fulfilling those optimal transition probabilities.

References

  1. Duda, Jarek (2007). "Optimal encoding on discrete lattice with translational invariant constrains using statistical algorithms". arXiv:0710.3861 [cs.IT].

Further reading

  • Stakhov, A. P. (2009). The Mathematics of Harmony: From Euclid to Contemporary Mathematics and Computer Science. Singapore: World Scientific Publishing.