Computer Number Systems
Part 2: Hex, Octal, and what is "Radix"
The reason that "hex" and octal are popular in computing is that they're easy to translate to and from the binary system that computers really use. People use decimal primarily because they have ten fingers, but it's just not that convenient to switch back and forth between 10011010010 and 1234. It IS convenient to convert to 4D2 (hex) or 2322 (octal), however. Let's see how this is done.

These are all called radix numbers because the method for figuring out how to represent the value (remember the rocks in a bag) is the same. The only real difference between them is the base that they use:

binary - base 2
octal - base 8
decimal - base 10
hexadecimal - base 16

One thing that should be kept in mind is that the number used to express the base above is itself written as a decimal, or base 10, number. That is, we don't normally say hexadecimal is base 10 in hex. We say it's base 16 in decimal.

Any radix number is a sum of a series of powers of the base, each multiplied by a digit from 0 up to one less than the base. This is what it looks like as a formula:

value = dn x base^n + ... + d1 x base^1 + d0 x base^0

where each digit d is between 0 and (base - 1).
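The formula above can be sketched in a few lines of code. Since the article's own Visual Basic code is only available at the linked page, here is a minimal Python sketch instead; the function name radix_value is my own illustration, not from the article:

```python
def radix_value(digits, base):
    """Sum each digit times the base raised to its position.

    digits is most-significant first, e.g. [1, 2, 3, 4] for decimal 1234.
    """
    value = 0
    for d in digits:
        # Each digit must be from 0 up to one less than the base.
        assert 0 <= d < base, "digit out of range for this base"
        value = value * base + d  # equivalent to summing d * base**position
    return value

# Decimal 1234 expressed in the bases from the text:
print(radix_value([1, 2, 3, 4], 10))   # 1234
print(radix_value([4, 13, 2], 16))     # hex 4D2  -> 1234
print(radix_value([2, 3, 2, 2], 8))    # octal 2322 -> 1234
```

Running it confirms that 4D2 (hex) and 2322 (octal) are the same value as decimal 1234, exactly as claimed above.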
Let's see how this works with a real number ... say, decimal 7382. In decimal, 7382 means:

7 x 10^3 + 3 x 10^2 + 8 x 10^1 + 2 x 10^0 = 7000 + 300 + 80 + 2 = 7382

It's the same in binary, octal, or hex. The only thing that changes is the base. For example, in binary the same number (expressed as 1110011010110) is:

1 x 2^12 + 1 x 2^11 + 1 x 2^10 + 0 x 2^9 + 0 x 2^8 + 1 x 2^7 + 1 x 2^6 + 0 x 2^5 + 1 x 2^4 + 0 x 2^3 + 1 x 2^2 + 1 x 2^1 + 0 x 2^0 = 7382

Decimal is easier for humans to handle mainly because we usually have ten fingers and we're just used to thinking that way. Binary is the only thing that computers use. Octal and hex are a kind of compromise between the two. Here's the same number (1CD6) in hex:

1 x 16^3 + 12 x 16^2 + 13 x 16^1 + 6 x 16^0 = 4096 + 3072 + 208 + 6 = 7382

Not as many digits to deal with, but what do C and D mean? Simple. Hex needs six more symbols in addition to the symbols 0 through 9. (Octal needs two fewer - 0 through 7.) IBM invented the term hexadecimal, and they decided that since this system needed six extra symbols, why not just use the first six letters of the alphabet. So the value ten is represented by the symbol A in hex and the two symbols 12 in octal, and all three are represented in the computer as binary 1010.

{There is an interesting reason why IBM named it "hexadecimal". The prefix "hexa" is Greek but "decimal" is Latin. Why didn't they go with an all-Latin name? Well, the name would then have been "sexadecimal" and IBM just couldn't accept a name like that.}

Octal and hex are used to represent numbers instead of decimal because there is a very easy and direct way to convert from the "real" way that computers store numbers (binary) to something easier for humans to handle (fewer symbols). To translate a binary number to octal, simply group the binary digits three at a time and convert each group. For hex, group the binary digits four at a time. Here's how to convert to hex using our example number, 7382 (decimal) == 1110011010110 (binary).
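The three expansions of 7382 can be checked mechanically. This Python sketch (mine, not from the article) works each one out digit by digit, and then shows that Python's built-in int() function, which accepts a base argument, agrees:

```python
# Decimal 7382 expanded digit by digit in three bases.
decimal_expansion = 7*10**3 + 3*10**2 + 8*10**1 + 2*10**0
binary_expansion = sum(int(bit) * 2**i
                       for i, bit in enumerate(reversed("1110011010110")))
hex_expansion = 1*16**3 + 12*16**2 + 13*16**1 + 6*16**0  # digits 1, C, D, 6

print(decimal_expansion, binary_expansion, hex_expansion)  # 7382 7382 7382

# Python's int() takes a base argument, so the same conversions
# are one-liners (16326 is 7382 written in octal):
print(int("1110011010110", 2), int("1CD6", 16), int("16326", 8))
```

Note that the octal form 16326 also follows the three-at-a-time grouping rule from the text: 1 110 011 010 110 converts group by group to 1, 6, 3, 2, 6.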
First, group the binary digits four at a time, starting with the least significant digits:

1 1100 1101 0110

Then convert each group: 1 == 1, 1100 == C, 1101 == D, 0110 == 6. That gives us 1CD6.

The original article in this series (available at this link) features some programming code that converts from hex to decimal. In these articles, we focus mainly on hex rather than octal because you seldom see or use octal in Visual Basic. The reason is that the fundamental unit of data in Microsoft architecture is the byte, which is 8 binary digits, or 2 hex digits, long. See this link for a more complete definition. Some computers have been built which use a fundamental data unit 6 binary digits long; on those computers, octal is more convenient. If you haven't read them, this might be a good time to read the first two articles in this series since they use both binary and hex a lot in explaining symbolic logic.
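The four-at-a-time grouping rule is easy to automate. Here is a short Python sketch (the helper name binary_to_hex is my own, not from the article's linked code) that pads the binary string on the left so its length is a multiple of four, then converts each group of four bits to one hex digit:

```python
def binary_to_hex(bits):
    """Convert a binary string to hex by grouping bits four at a time."""
    # Pad on the left so the length is a multiple of 4; the leftmost
    # group (like the lone "1" in 1 1100 1101 0110) may be short.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    hex_digits = "0123456789ABCDEF"
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(hex_digits[int(group, 2)] for group in groups)

print(binary_to_hex("1110011010110"))  # 1CD6
print(binary_to_hex("10011010010"))    # 4D2
```

Both example numbers from the article come out as expected: 1110011010110 becomes 1CD6, and 10011010010 becomes 4D2.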