MadSci Network: Computer Science
Not being sure of your background, and therefore the exact aim of the question, I'll try to answer as generally as possible. The positional (standard decimal) number representation you mention is the only one I'm aware of. However, positional numbers have many variations. Among these:

1. Scientific notation, where positional decimals are supplemented by powers of 10 (or, in some cases, 2).

2. Computer integer representation: unsigned, sign-magnitude (most-significant-bit signed), or two's-complement signed (see the references at #3 for more info).

3. Computer floating point (based on, but different from, #1 and #2).

   Net references:
   Concepts in Programming Languages, Chapter 5
   Principles of Computer Architecture, by M. Mendocca

4. Use of what theoretical mathematicians call "index sets" for the real numbers. An index set is a set whose elements each uniquely correspond to the elements of another set. (Normal positional notation, for example, uses an index set based on the set of all countable integer combinations.) Cf. Lin's book "Naive Set Theory" for the basics of indexing sets.

As for systems of arithmetic based on nonstandard index sets, I'm not sure. You might begin your search (if this is your area of interest) with the American Mathematical Society archive.

Hope this helps. If you're interested in something more specific, or in a different direction, please repost.
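To make the signed-integer representations in #2 concrete, here is a small Python sketch (my own illustration, not from the references above) that prints the bit patterns an 8-bit machine would use. The function names and the 8-bit width are my choices for the example:

```python
def twos_complement(value, bits=8):
    """Two's-complement bit pattern of `value` in `bits` bits.

    Negative numbers are stored as value + 2**bits, which is what
    masking with (2**bits - 1) computes.
    """
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for this width")
    return format(value & ((1 << bits) - 1), f"0{bits}b")


def sign_magnitude(value, bits=8):
    """Sign-magnitude bit pattern: a sign bit followed by the magnitude."""
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), f"0{bits - 1}b")


print(twos_complement(-5))   # 11111011
print(sign_magnitude(-5))    # 10000101
```

Note that the two schemes agree on non-negative values but store negatives quite differently, which is why hardware designers care about the distinction.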
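And to illustrate how #3 combines a sign, an exponent, and a positional fraction, this sketch pulls apart a 32-bit float using Python's standard struct module. I'm assuming the IEEE-754 single-precision layout (1 sign bit, 8 exponent bits, 23 fraction bits), which is what common hardware uses, though the answer above does not name a specific format:

```python
import struct


def float_fields(x):
    """Split a 32-bit IEEE-754 float into (sign, biased exponent, fraction)."""
    # Reinterpret the float's 4 bytes as one unsigned 32-bit integer.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                   # 1 bit
    exponent = (bits >> 23) & 0xFF      # 8 bits, biased by 127
    fraction = bits & ((1 << 23) - 1)   # 23 bits of the significand
    return sign, exponent, fraction


# -0.15625 is -1.25 * 2**-3: sign 1, exponent 127 - 3 = 124,
# fraction 0.25 * 2**23 = 2097152.
print(float_fields(-0.15625))
```

So floating point is scientific notation in base 2 (item #1) packed into fixed-width binary fields (item #2), which is the sense in which it is "based on, but different from" both.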