Question As Double, As Decimal

jaybee33

Member
Joined
Mar 12, 2009
Messages
9
Location
fife, dunfermline
Programming Experience
Beginner
What is the difference between the two? I have heard that As Double is more accurate than As Decimal. If so, why is that? Thanks.
 
Since Decimal doesn't use a binary floating-point representation, I'd have to say that the Decimal data type is more precise (accurate) than Double.
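The thread is about VB.NET, but VB's Double is the same IEEE-754 binary64 format as Python's float, so a quick sketch in Python (using its decimal module as a stand-in for VB's Decimal) illustrates the precision difference:

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so a binary floating-point
# type (like VB's Double, IEEE-754 binary64) stores only an approximation.
print(f"{0.1:.20f}")          # 0.10000000000000000555

# A decimal type stores base-10 digits exactly, so 0.1 really is 0.1.
print(Decimal("0.1"))         # 0.1

# The approximation shows up as soon as you compare results:
print(0.1 + 0.1 + 0.1 == 0.3)                                             # False
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.3") == Decimal("0.5")) # True
```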
 
A floating-point number is any number with a decimal point. It's also known as a "real" number or a number which may contain a fractional part. This is opposed to an integral type number, which is a whole number.
 
Under that definition you've described the Decimal data type too, which doesn't help the OP at all.
 

So floating means the decimal point is NOT fixed?? For example, last night in my studies I was asked to write a program to do the following calculation: 0.12345678 * 1234. The results were as follows:

As Double = 152.34566652
As Decimal = 152.34566652

but the results are both the same :S ...... sorry, but I'm still confused lol
there looks to be no difference to me, but there obviously is, it's just not clicking ...... yet :p
 
Floating-point (Double) arithmetic is done in hardware, so it's faster than Decimal arithmetic, which is done in software, often by an order of magnitude or more. That's why the Double type is the default type for non-integral numbers in VB. If you had a lot of numeric calculations to perform and you used Decimal, it would take a lot longer than using Double.

The Double data type is also smaller than the Decimal type (64-bit versus 128-bit), so it takes less memory to store a lot of Doubles than to store a lot of Decimals.

The problem is that not all numbers can be accurately represented using a floating-point type, so floating-point arithmetic is not always 100% accurate. Where 100% accuracy is required, like for financial calculations, the Decimal type is used for the increased accuracy and the decreased performance must be lived with.

In short, yes, the final result using Double and Decimal will often be exactly the same. The Double is faster and takes up less space though, so is preferred in most cases. Where the final result is not exactly the same the error will almost certainly be in the Double, so if that's a problem then Decimal should be used in that case.
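To make the point above concrete in Python (whose float is the same IEEE-754 binary64 format as VB's Double, with the decimal module standing in for VB's Decimal): the same inputs often print the same answer, as in the OP's 0.12345678 * 1234, but repeated binary floating-point arithmetic can drift while a decimal type stays exact.

```python
from decimal import Decimal

# Add 0.1 ten times using binary floating point (Double-style).
total = 0.0
for _ in range(10):
    total += 0.1
print(total)                      # 0.9999999999999999, not 1.0
print(total == 1.0)               # False

# The same sum with an exact decimal type.
dec_total = Decimal("0")
for _ in range(10):
    dec_total += Decimal("0.1")
print(dec_total)                  # 1.0
print(dec_total == Decimal("1"))  # True
```

This is exactly why financial code prefers Decimal: each individual step looks fine, but the tiny binary rounding errors accumulate.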
 
Ahhhhh!! OK, good stuff. Making sense now, guys. Just had to hear it from a diff perspective, but I'm understanding it now :D


thanks dude
 