jaybee33
Member
What is the difference between the two? I have heard that As Double is more accurate than As Decimal. If so, why is that? Thanks.
That's a good question; people learning programming usually don't ask it. This may sound silly, but what do you mean by "floating point"?
A floating-point number is any number with a decimal point. It's also known as a "real" number, i.e. a number which may contain a fractional part, as opposed to an integral type, which holds only whole numbers.

Under that definition you've described the Decimal data type too, which doesn't help the OP at all.
Double is floating-point, Decimal is fixed-point: under the hood a Decimal is an integer scaled by a power of ten.
Floating Point/Fixed-Point Numbers - Wikibooks, collection of open-content textbooks
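To make that concrete, here's a quick sketch (a throwaway console snippet; the module name is just for illustration) that peeks at how each type actually stores a value:

Code:
Module RepresentationDemo
    Sub Main()
        ' Decimal.GetBits exposes the internals: a 96-bit integer plus a
        ' scale factor (a power of ten) packed into the fourth element.
        Dim parts() As Integer = Decimal.GetBits(1.5D)
        Dim scale As Integer = (parts(3) >> 16) And &HFF
        Console.WriteLine("Decimal 1.5D = integer {0} scaled by 10^-{1}", parts(0), scale)

        ' A Double is raw IEEE 754 bits: sign, binary exponent, mantissa.
        Console.WriteLine("Double 1.5 bits = {0:X16}", BitConverter.DoubleToInt64Bits(1.5))
    End Sub
End Module

The Decimal prints as the integer 15 scaled by 10^-1, i.e. a decimal-scaled integer, while the Double is just a packed binary pattern.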
Floating-point arithmetic is much faster, often by an order of magnitude or more, because Double operations map directly onto the CPU's floating-point hardware while Decimal arithmetic is done in software. That's why the Double type is the default type for non-integral numbers in VB. If you had a lot of numeric calculations to perform and you used Decimal, it would take a lot longer than using Double.
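If you want to see the speed difference on your own machine, a rough benchmark along these lines will show it (timings vary by machine; the loop count and names here are arbitrary):

Code:
Imports System.Diagnostics

Module SpeedDemo
    Sub Main()
        Const N As Integer = 10000000

        ' Time N additions on a Double: these compile down to FPU instructions.
        Dim sw As Stopwatch = Stopwatch.StartNew()
        Dim dblSum As Double = 0.0
        For i As Integer = 1 To N
            dblSum += 1.1
        Next
        Console.WriteLine("Double:  {0} ms ({1})", sw.ElapsedMilliseconds, dblSum)

        ' Time the same N additions on a Decimal: each one is a software routine.
        sw.Restart()
        Dim decSum As Decimal = 0D
        For i As Integer = 1 To N
            decSum += 1.1D
        Next
        Console.WriteLine("Decimal: {0} ms ({1})", sw.ElapsedMilliseconds, decSum)
    End Sub
End Module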
The Double data type is also half the size of the Decimal type (64-bit versus 128-bit), so it takes less memory to store a lot of Doubles than to store a lot of Decimals.
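You can check the sizes yourself; dropped into any Main, Marshal.SizeOf reports the unmanaged size of each type:

Code:
' Marshal.SizeOf gives the size in bytes: 8 for Double, 16 for Decimal.
Console.WriteLine(System.Runtime.InteropServices.Marshal.SizeOf(GetType(Double)))
Console.WriteLine(System.Runtime.InteropServices.Marshal.SizeOf(GetType(Decimal)))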
The problem is that not every decimal value can be represented exactly in a binary floating-point type (0.1, for example, has no exact Double representation), so floating-point arithmetic is not always 100% accurate. Where 100% accuracy is required, as in financial calculations, the Decimal type is used for its exactness and the decreased performance must be lived with.
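The classic demonstration, as a minimal sketch you can paste into a console project:

Code:
Module AccuracyDemo
    Sub Main()
        Dim dbl As Double = 0.0
        Dim dec As Decimal = 0D
        For i As Integer = 1 To 10
            dbl += 0.1      ' 0.1 has no exact binary representation, so each add rounds
            dec += 0.1D     ' 0.1 is exact in base 10
        Next
        Console.WriteLine(dbl.ToString("R"))  ' 0.9999999999999999
        Console.WriteLine(dec)                ' 1.0
    End Sub
End Module

Ten Double additions of 0.1 drift away from 1.0, while the Decimal lands on it exactly.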
In short, yes: the final result using Double and Decimal will often be exactly the same. Double is faster and takes up less space, though, so it's preferred in most cases. Where the final results are not the same, the error will almost certainly be in the Double, so if that's a problem then Decimal should be used instead.