In VB 6, the Currency data type was designed for financial calculations. But Microsoft decided that it just didn't do the job, so they dropped it, and now we have the Decimal data type in VB.NET.
This article tells you all about the Decimal data type in VB.NET: What's new, what works and what doesn't. Like the rest of .NET, Decimal is far more powerful. And like the rest of .NET, there are hidden traps. Just to get started, here's one you might not have seen before:
If you just happened to use the VB 6 Currency data type to create a record in a file using a structure like this one ...
Private Type FileRecord
    Field1 As Integer
    CurrencyField As Currency
    Field2 As Double
End Type
Then you have a problem upgrading to VB.NET!
According to Microsoft Knowledge Base article KB 906771, .NET just won't read it correctly!
If you have this problem, the KB article referenced above gives more details, but the workaround recommended by Microsoft is to read the value into a VB.NET Int64 and then use the Decimal.FromOACurrency method to convert it.
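The workaround can be sketched like this. The file name, module name, and variable names are just assumptions for illustration, and the sketch assumes the record fields were written back-to-back; the key calls are BinaryReader.ReadInt64 and Decimal.FromOACurrency:

Imports System.IO

Module ReadLegacyCurrency
    Sub Main()
        ' "legacy.dat" is a hypothetical file written by the VB 6 program.
        Using reader As New BinaryReader(File.OpenRead("legacy.dat"))
            Dim field1 As Short = reader.ReadInt16()      ' VB 6 Integer is 2 bytes
            Dim rawCurrency As Long = reader.ReadInt64()  ' Currency is an 8-byte scaled integer
            ' FromOACurrency divides the raw integer by 10,000 to restore the value.
            Dim currencyValue As Decimal = Decimal.FromOACurrency(rawCurrency)
            Dim field2 As Double = reader.ReadDouble()
            Console.WriteLine("CurrencyField = " & currencyValue.ToString())
        End Using
    End Sub
End Module

Note that the VB 6 Integer field must be read as a Short, because VB 6 integers were 16 bits.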
To be complete, there actually was a way to declare something called "Decimal" in VB 6: you could convert a VB 6 Variant to the Decimal subtype using the VB 6 CDec function. (VB 6 called these "subtypes" of Variant.) In spite of having the same name, the VB 6 Decimal subtype isn't natively supported by .NET. Since it's really a Variant, it's simply converted into an Object in VB.NET and generates an error. It's one of those things you have to convert manually when upgrading a VB 6 program to VB.NET.
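For comparison, here's what the manual conversion looks like. In VB 6 a Decimal value had to live inside a Variant; in VB.NET, CDec still exists and Decimal is a first-class type you can declare directly:

' VB 6 (for comparison):
'   Dim v As Variant
'   v = CDec("19.95")    ' Variant holding the Decimal subtype
'
' VB.NET -- Decimal is a real type; declare it directly:
Dim price As Decimal = CDec("19.95")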
The old Currency data type in VB 6 uses 8 bytes of memory and can represent numbers with fifteen digits to the left of the decimal point and four to the right. So it was capable of a sort of "fixed point" arithmetic with a maximum of four decimal digits of accuracy. But lots of calculations these days just need more. A lot more! So Microsoft created the new Decimal data type for .NET.
Decimal allows up to twenty-nine digits of precision and stores every number as an integer plus a "scaling factor" that simply tells VB.NET where to place the decimal point. This means that although the decimal point "floats" in Decimal variables, Decimal is not a floating point type in the usual sense. The Single and Double data types are floating point, and the difference is that they store numbers as "binary fractions". That means a value has to be exactly representable in the binary number system, and some values aren't: 0.1, for example, has no exact binary representation, and 1/3 has no exact representation in either binary or decimal. A Single or Double simply gets as close to the actual value as it can. The value of a Decimal data type, on the other hand, is always exact within the limits of precision it can handle.
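You can actually see the scaling factor at work, because the Decimal.GetBits method exposes the internal representation. In this sketch (the variable names are just for illustration), the value 123.45 turns out to be stored as the integer 12345 with a scale of 2, meaning "divide by 10 to the power 2":

Dim d As Decimal = 123.45D
' GetBits returns four Integers: the first three hold the 96-bit integer
' value, and bits 16-23 of the fourth hold the scaling factor.
Dim bits() As Integer = Decimal.GetBits(d)
Dim scale As Integer = (bits(3) >> 16) And &HFF
Console.WriteLine("Integer value: " & bits(0).ToString())  ' 12345
Console.WriteLine("Scale: " & scale.ToString())            ' 2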
What does this mean in a program? Here's an example in VB.NET to demonstrate.
The Microsoft documentation states that the Double and Single data types "store an approximation of a real number." But it doesn't explain what a "real number" is. That's something you learned in high school math class: if you divide 1 by 3, you get a fraction that repeats forever:
0.333333333333333333 ... <and so forth to infinity>
This is a real number. It's an actual value, but (at least using "base 10" arithmetic), you can't store a completely precise value. This can present a problem for us since bank auditors like precise values.
Let's add the value of "1/3" a hundred thousand times in both Decimal and Double and see what we get.
Dim DecimalVar As Decimal
Dim DoubleVar As Double
Dim AccumDecimal As Decimal = 0
Dim AccumDouble As Double = 0
Dim Difference As Decimal = 0
Dim i As Integer

' Note: 1 / 3 is evaluated in Double precision, then converted.
DecimalVar = 1 / 3
DoubleVar = 1 / 3

For i = 1 To 100000
    AccumDecimal += DecimalVar
    AccumDouble += DoubleVar
Next

' CDec makes the narrowing conversion explicit for Option Strict.
Difference = AccumDecimal - CDec(AccumDouble)
Debug.WriteLine("AccumDecimal: " & AccumDecimal.ToString)
Debug.WriteLine("AccumDouble: " & AccumDouble.ToString)
Debug.WriteLine("Difference: " & Difference.ToString)
Here's the result:
Neither value is exact, but Decimal is a lot closer. Notice, however, that there are five zeros at the end of the Decimal value. That's because an error of .0000000000000001 accumulated every time the Decimal approximation of 1/3 was added. Since the Double value is even further off (remember that it's stored as a "binary fraction"), an even larger error accumulates.
The bottom line is that you can't get ultra-precise calculations with the standard data types. This isn't just a Visual Basic problem; it's true of all the usual programming languages, because it comes from the way computers store values. If you need more precision than that, you can get it from specialized arbitrary-precision math software such as Maple.