MadSci Network: Computer Science
Query:

RE: Why does Intel's CPU have 'little endian'?

Area: Computer Science
Posted By: Paul Schleifer, Grad student Computing
Date: Wed Sep 25 16:55:18 1996
Message:
The term "little-endian" refers to a computer architecture's "byte order". It is
used to describe architectures in which, within a "word" (usually 16, 32, or 64
bits, depending on the architecture), the bytes at lower addresses have lower
significance. Conversely, "big-endian" architectures store the most significant
bytes at the lower addresses.


The difference between these two byte orders is shown in the next two figures.
In each case, the 32-bit word shown represents the number 1.


BIG-ENDIAN BYTE ORDER
---------------------

 Most Significant Byte   Least Significant Byte
 vvvvvvv                 vvvvvvv
+-------+-------+-------+-------+
|byte 0 |byte 1 |byte 2 |byte 3 |   "address"
+-------+-------+-------+-------+
|   0   |   0   |   0   |   1   |   "value"
+-------+-------+-------+-------+



LITTLE-ENDIAN BYTE ORDER
------------------------

 Least Significant Byte  Most Significant Byte
 vvvvvvv                 vvvvvvv
+-------+-------+-------+-------+
|byte 0 |byte 1 |byte 2 |byte 3 |   "address"
+-------+-------+-------+-------+
|   1   |   0   |   0   |   0   |   "value"
+-------+-------+-------+-------+
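
If you want to check which order your own machine uses, a short C program along
the following lines (my own sketch, not part of the original answer or of any
particular platform) will show it. It stores the 32-bit value 1 and prints its
bytes in address order: an Intel x86 CPU prints 1 0 0 0, a big-endian CPU
prints 0 0 0 1.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t word = 1;                 /* the 32-bit value shown in the figures */
    unsigned char bytes[4];

    /* Copy the word's bytes exactly as they sit in memory. */
    memcpy(bytes, &word, sizeof word);

    /* Print byte 0 .. byte 3, i.e. the bytes at increasing addresses. */
    for (int i = 0; i < 4; i++)
        printf("byte %d = %u\n", i, (unsigned) bytes[i]);

    if (bytes[0] == 1)
        printf("This machine stores the least significant byte first: little-endian.\n");
    else
        printf("This machine stores the most significant byte first: big-endian.\n");

    return 0;
}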


As you can see, the big-endian approach more closely resembles the way we 
naturally write numbers. We write:

1

to represent the number 1 instead of

1000

This is because we recognise that the set of numbers is infinite, so a
little-endian notation of the fixed width shown above would place an upper
limit on the size of numbers we can write down (only 9999 in this simple
four-digit example). But in a computer there has to be a limit on the size of
numbers anyway, because no architecture can cope with an infinite set of
numbers, so it doesn't matter whether the byte order is big-endian or
little-endian.

The reason some architectures use big-endian formats and others use
little-endian formats usually comes down to their specific design history.
Firstly, many chip designers try to ensure "backward compatibility", so that
programs written for older versions of a particular hardware platform will
still run on the newer versions. Since the byte order of an architecture is
fundamental to so many of its important features, such as memory addressing,
the byte order must be preserved from one chip generation to the next to
ensure backward compatibility.

The second reason, I suspect, is that architectures are very expensive to
design, so there is a tendency to re-use designs wherever possible. Thus, if a
company has designed more chips using a little-endian byte order than a
big-endian one, that bias is likely to be carried over into future chip
designs.

Most RISC (Reduced Instruction Set Computer) designs use the big-endian byte
order. This may be for efficiency reasons -- the number system used by
people is effectively big-endian! -- but I don't know. I don't think either
byte ordering has any real advantage in terms of machine efficiency.
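
One small illustration of why neither order has an obvious edge: converting a
32-bit value from one order to the other is just a fixed shuffle of its four
bytes, which either kind of machine can do with a few shifts and masks. The
sketch below (my own example, not taken from any particular architecture's
manual) shows such a conversion; the same function turns big-endian into
little-endian and vice versa.

#include <stdio.h>
#include <stdint.h>

/* Reverse the byte order of a 32-bit value. */
static uint32_t swap32(uint32_t x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}

int main(void)
{
    uint32_t n = 1;
    /* Prints: 0x00000001 byte-swapped is 0x01000000 */
    printf("0x%08X byte-swapped is 0x%08X\n", (unsigned) n, (unsigned) swap32(n));
    return 0;
}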
