Fibonacci coding
https://en.wikipedia.org/wiki/Fibonacci_coding

In mathematics and computing, Fibonacci coding is a universal code[citation needed] which encodes positive integers into binary code words. It is one example of a representation of integers based on Fibonacci numbers. Each code word ends with "11" and contains no other instances of "11" before the end. The Fibonacci code is closely related to the Zeckendorf representation, a positional numeral system that uses Zeckendorf's theorem and has the property that no number has a representation with consecutive 1s. The Fibonacci code word for a particular integer is exactly the integer's Zeckendorf representation with the order of its digits reversed and an additional "1" appended to the end.

Definition

For a number $N$, if $d(0), d(1), \ldots, d(k-1), d(k)$ represent the digits of the code word representing $N$, then

$$N = \sum_{i=0}^{k-1} d(i)\,F(i+2), \quad \text{and} \quad d(k-1) = d(k) = 1,$$

where $F(i)$ is the $i$th Fibonacci number, and so $F(i+2)$ is the $i$th distinct Fibonacci number starting with $1, 2, 3, 5, 8, 13, \ldots$. The last bit $d(k)$ is always an appended 1 and does not carry place value. It can be shown that such a coding is unique, and that the only occurrence of "11" in any code word is at the end, i.e. at $d(k-1)$ and $d(k)$. The penultimate bit is the most significant bit and the first bit is the least significant bit. Also, leading zeros cannot be omitted as they can be in, for example, decimal numbers.
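To make the definition concrete, the following is a minimal Python sketch (not part of the article; the helper names is_fibonacci_code_word and code_word_value are chosen for illustration). It checks the structural property of a code word and evaluates the sum above, treating the final bit as the appended terminator:

    def is_fibonacci_code_word(word: str) -> bool:
        """Structural property from the definition: the code word ends with "11"
        and contains no other occurrence of "11"."""
        return word.endswith("11") and "11" not in word[:-1]

    def code_word_value(word: str) -> int:
        """Evaluate N = sum_{i=0}^{k-1} d(i) * F(i+2); the final bit d(k) is the
        appended 1 and carries no place value."""
        total, fib, next_fib = 0, 1, 2      # place values start at F(2) = 1, F(3) = 2
        for digit in word[:-1]:             # drop the terminating 1
            if digit == "1":
                total += fib
            fib, next_fib = next_fib, fib + next_fib
        return total

    # "1011" has d(0) = 1 and d(2) = 1, so N = F(2) + F(4) = 1 + 3 = 4.
    assert is_fibonacci_code_word("1011") and code_word_value("1011") == 4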
The first few Fibonacci codes are shown below, together with their so-called implied probability: the source probability for which each code word would be a minimum-size code (equal to 1/2 raised to the length of the code word).

    Symbol   Fibonacci representation   Fibonacci code word   Implied probability
    1        F(2)                       11                    1/4
    2        F(3)                       011                   1/8
    3        F(4)                       0011                  1/16
    4        F(2) + F(4)                1011                  1/16
    5        F(5)                       00011                 1/32
    6        F(2) + F(5)                10011                 1/32
    7        F(3) + F(5)                01011                 1/32
    8        F(6)                       000011                1/64
    9        F(2) + F(6)                100011                1/64
    10       F(3) + F(6)                010011                1/64
    11       F(4) + F(6)                001011                1/64
    12       F(2) + F(4) + F(6)         101011                1/64
    13       F(7)                       0000011               1/128
    14       F(2) + F(7)                1000011               1/128

To encode an integer N:

1. Find the largest Fibonacci number equal to or less than N; subtract this number from N, keeping track of the remainder.
2. If the number subtracted was the ith Fibonacci number F(i), put a 1 in place i-2 in the code word (counting the leftmost digit as place 0).
3. Repeat the previous steps, substituting the remainder for N, until a remainder of 0 is reached.
4. Place an additional 1 after the rightmost digit in the code word.

To decode a code word, remove the final "1", assign the values 1, 2, 3, 5, 8, 13, ... (the Fibonacci numbers) to the remaining bits, and sum the place values of the "1" bits; a code sketch of both procedures is given below.

Comparison with other universal codes

Fibonacci coding has a useful property that sometimes makes it attractive in comparison to other universal codes: it is an example of a self-synchronizing code, making it easier to recover data from a damaged stream. With most other universal codes, if a single bit is altered, none of the data that comes after it will be correctly read. With Fibonacci coding, on the other hand, a changed bit may cause one token to be read as two, or cause two tokens to be read incorrectly as one, but reading a "0" from the stream will stop the errors from propagating further. Since the only stream that has no "0" in it is a stream of "11" tokens, the total edit distance between a stream damaged by a single bit error and the original stream is at most three. This approach of encoding with a sequence of symbols in which some patterns (like "11") are forbidden can be freely generalized.[1]
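To make the encoding and decoding procedures described above concrete, here is a minimal Python sketch (an illustration using the greedy Zeckendorf interpretation of step 1, not code from the article; the names encode and decode are chosen for this example):

    def encode(n: int) -> str:
        """Encode a positive integer as a Fibonacci code word (steps 1-4 above)."""
        if n < 1:
            raise ValueError("Fibonacci coding is defined for positive integers")
        # Place values F(2), F(3), ... = 1, 2, 3, 5, 8, 13, ..., up to the largest <= n.
        fibs = [1, 2]
        while fibs[-1] + fibs[-2] <= n:
            fibs.append(fibs[-1] + fibs[-2])
        while fibs[-1] > n:                  # only needed for n = 1
            fibs.pop()
        bits = ["0"] * len(fibs)             # place 0 is the leftmost (least significant) digit
        remainder = n
        for i in range(len(fibs) - 1, -1, -1):   # greedily take the largest Fibonacci number
            if fibs[i] <= remainder:
                bits[i] = "1"
                remainder -= fibs[i]
        return "".join(bits) + "1"           # append the final 1

    def decode(code_word: str) -> int:
        """Decode a Fibonacci code word: drop the final 1, then sum the place values."""
        total, fib, next_fib = 0, 1, 2       # place values start at F(2) = 1, F(3) = 2
        for digit in code_word[:-1]:
            if digit == "1":
                total += fib
            fib, next_fib = next_fib, fib + next_fib
        return total

    # 65 = 2 + 8 + 55, so it encodes as "0100100011" (the worked example in the next section).
    assert encode(65) == "0100100011"
    assert decode("0100100011") == 65
    assert [encode(i) for i in range(1, 5)] == ["11", "011", "0011", "1011"]

The greedy subtraction in encode repeats step 1: after each subtraction the remainder is smaller than the Fibonacci number just below the one taken, so no two adjacent places are ever set, and the only "11" in the output is the appended terminator.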
Example

The following table shows that the number 65 is represented in Fibonacci coding as 0100100011, since 65 = 2 + 8 + 55. The first two Fibonacci numbers (0 and 1) are not used, and an additional 1 is always appended.

    Fibonacci number   0     1     1     2     3     5     8     13    21    34    55    -
    Place value        F(0)  F(1)  F(2)  F(3)  F(4)  F(5)  F(6)  F(7)  F(8)  F(9)  F(10) additional
    Code word digit    -     -     0     1     0     0     1     0     0     0     1     1

Generalizations

The Fibonacci encodings for the positive integers are binary strings that end with "11" and contain no other instances of "11". This can be generalized to binary strings that end with N consecutive 1's and contain no other instances of N consecutive 1's. For instance, for N = 3 the positive integers are encoded as 111, 0111, 00111, 10111, 000111, 100111, 010111, 110111, 0000111, 1000111, 0100111, .... In this case, the number of encodings as a function of string length is given by the sequence of Tribonacci numbers; the sketch at the end of this section illustrates the enumeration.

For general constraints defining which symbols are allowed after a given symbol, the maximal information rate can be obtained by first finding the optimal transition probabilities using a maximal entropy random walk, and then using an entropy coder (with encoder and decoder switched) to encode a message as a sequence of symbols that follows these optimal transition probabilities.
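As a rough illustration of the generalization to code words ending in N consecutive 1s described above (a brute-force sketch, not from the article; the helper name generalized_codewords is made up for this example), the following Python code enumerates the valid strings of a given length and confirms that their counts follow the Tribonacci numbers:

    from itertools import product

    def generalized_codewords(length: int, n_ones: int = 3) -> list:
        """Binary strings of the given length that end in n_ones consecutive 1s
        and contain no other run of n_ones consecutive 1s (brute force)."""
        run = "1" * n_ones
        words = []
        for bits in product("01", repeat=length):
            word = "".join(bits)
            # The run of n_ones 1s must occur only at the very end of the word.
            if word.endswith(run) and run not in word[:-1]:
                words.append(word)
        return words

    # For N = 3, the counts by length follow the Tribonacci numbers 1, 1, 2, 4, 7, 13, ...
    assert [len(generalized_codewords(length)) for length in range(3, 9)] == [1, 1, 2, 4, 7, 13]

    # For N = 2 the same rule recovers ordinary Fibonacci code words.
    assert generalized_codewords(4, n_ones=2) == ["0011", "1011"]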
See also

* Golden ratio base
* Ostrowski numeration
* Universal code
* Varicode, a practical application
* Zeckendorf's theorem
* Maximal entropy random walk

References

1. Duda, Jarek (2007). "Optimal encoding on discrete lattice with translational invariant constrains using statistical algorithms". arXiv:0710.3861 [cs.IT].

* Allouche, Jean-Paul; Shallit, Jeffrey (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. p. 105. ISBN 978-0-521-82332-6. Zbl 1086.11015.
* Fraenkel, Aviezri S.; Klein, Shmuel T. (1996). "Robust universal complete codes for transmission and compression". Discrete Applied Mathematics. 64 (1): 31-55. CiteSeerX 10.1.1.37.3064. doi:10.1016/0166-218X(93)00116-H. ISSN 0166-218X. Zbl 0874.94026.

Further reading

* Stakhov, A. P. (2009). The Mathematics of Harmony: From Euclid to Contemporary Mathematics and Computer Science. Singapore: World Scientific Publishing.