How do I interpret precision and scale of a number in a database?

I have the following column specified in a database: decimal(5,2)

How does one interpret this?

According to the properties on the column as viewed in SQL Server Management Studio, I can see that it means: decimal(Numeric precision, Numeric scale).

What do precision and scale mean in real terms?

It would be easy to interpret this as a decimal with 5 digits and two decimal places... i.e. 12345.12

P.S. I've been able to determine the correct answer from a colleague but had great difficulty finding an answer online. As such, I'd like to have the question and answer documented here on Stack Overflow for future reference.


NinjaBeetle

Numeric precision refers to the maximum number of digits that are present in the number.

i.e. 1234567.89 has a precision of 9

Numeric scale refers to the maximum number of decimal places

i.e. 123456.789 has a scale of 3

Thus the maximum allowed value for decimal(5,2) is 999.99
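To see this limit in action in SQL Server, here is a minimal sketch using a hypothetical table (the table name and values are only illustrative):

    -- decimal(5,2): at most 5 digits in total, 2 of them after the decimal point
    CREATE TABLE PrecisionDemo (val decimal(5,2));

    INSERT INTO PrecisionDemo VALUES (999.99);   -- succeeds: the maximum value
    INSERT INTO PrecisionDemo VALUES (12.3);     -- succeeds, stored as 12.30
    INSERT INTO PrecisionDemo VALUES (1000.00);  -- fails with an arithmetic overflow error,
                                                 -- since only 5 - 2 = 3 digits may appear
                                                 -- to the left of the decimal point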


Don't forget that if you're using a system that allows you to pre-define the precision and scale of an input for a percentage in something like Microsoft Access, you must consider the percentage in its stored decimal form. In this case, 25.5% would require a precision of 4 and a scale of 3 (not 1), since we have to consider it as .255. I came across this problem early on and was stumped for a while wondering why scale 1 wasn't working (see the sketch after this comment thread).
@mezoid What does a negative scale value mean?
@Geek According to technet.microsoft.com/en-us/library/ms187746.aspx, the scale cannot be less than zero: 0 <= scale <= precision. Essentially, a negative scale value would be meaningless.
Shouldn't it be: "Numeric precision refers to the maximum number of digits that could be present in the number."? The exact number 123.5 could be of precision 10 as well, but there are no more digits to add. Or would this become 123.5000000?
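To make the percentage comment above concrete, a minimal sketch (SQL Server syntax; the values are illustrative): with a scale of 1 the stored form .255 is rounded away, while a scale of 3 preserves it.

    SELECT CAST(0.255 AS decimal(4,1));  -- 0.3   : the percentage detail is lost
    SELECT CAST(0.255 AS decimal(4,3));  -- 0.255 : stored exactly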
boumbh

Precision of a number is the number of digits.

Scale of a number is the number of digits after the decimal point.

What is generally implied when setting precision and scale on a field definition is that they represent maximum values.

For example, a decimal field defined with precision=5 and scale=2 would allow the following values:

123.45 (p=5,s=2)

12.34 (p=4,s=2)

12345 (p=5,s=0)

123.4 (p=4,s=1)

0 (p=0,s=0)

The following values are not allowed or would cause data loss:

12.345 (p=5,s=3) => could be truncated into 12.35 (p=4,s=2)

1234.56 (p=6,s=2) => could be truncated into 1234.6 (p=5,s=1)

123.456 (p=6,s=3) => could be truncated into 123.46 (p=5,s=2)

123450 (p=6,s=0) => out of range

Note that the range is generally defined by the precision: |value| < 10^p (or |value| < 10^(p-s) in systems such as SQL Server, which reserve s of the p digits for the fractional part; see the comment below).
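What actually happens to the disallowed values depends on the engine. As a hedged illustration, SQL Server rounds on conversion rather than truncating, and rejects values whose integer part does not fit:

    SELECT CAST(12.345  AS decimal(5,2));  -- 12.35  : rounded, not rejected
    SELECT CAST(123.456 AS decimal(5,2));  -- 123.46 : rounded
    SELECT CAST(12345   AS decimal(5,2));  -- arithmetic overflow error: 5 digits are
                                           -- needed left of the point, but decimal(5,2)
                                           -- reserves 2 of its 5 digits for the fraction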


Note that MS SQL Server wouldn't allow 12345 or 1234.56 because "[scale] is subtracted from [precision] to determine the maximum number of digits to the left of the decimal point." (source: decimal and numeric)
How about 12345000? Precision 5 or 8? If 5, with what Scale? Scale -3?
Nice answer, but why is 123450 (p=6,s=0) out of range? 123450 has 6 digits and no digits after a point?
@MatthiasBurger 123450 (p=6,s=0) would be out of range for a decimal field with 5 precision (as mentioned in the example). Because the precision of a number you want to store in a field must be less than or equal to the precision of the field.
@DominikSeitz ah thx, I misunderstood boumbh's answer. 123450 is out of range for (p=5,s=2); I had understood it to say that 123450 was out of range for (p=6,s=0).
Air

"Precision, Scale, and Length" in the SQL Server 2000 documentation reads:

Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example, the number 123.45 has a precision of 5 and a scale of 2.
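If you want SQL Server itself to report these two properties for a value, SQL_VARIANT_PROPERTY can be used; a small sketch (the column aliases are just illustrative labels):

    -- The literal 123.45 is typed as numeric(5,2)
    SELECT
        SQL_VARIANT_PROPERTY(CAST(123.45 AS sql_variant), 'Precision') AS NumericPrecision,  -- 5
        SQL_VARIANT_PROPERTY(CAST(123.45 AS sql_variant), 'Scale')     AS NumericScale;      -- 2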


Thank you. I just realized that a piece of Delphi/Pascal code was using a scale of 0 to chop off the decimal part of a float.
Key

Precision refers to the total number of digits, while scale refers to the digits allowed after the decimal point. The example quoted in the question, 12345.12, would have a precision of 7 and a scale of 2.

Moreover, DECIMAL(precision, scale) is an exact numeric data type, unlike FLOAT(precision, scale), which stores approximate numeric data. For example, a column defined as FLOAT(7,4) is displayed as -999.9999. MySQL performs rounding when storing values, so if you insert 999.00009 into a FLOAT(7,4) column, the approximate result is 999.0001.
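A minimal MySQL sketch of that DECIMAL/FLOAT difference (the table name is illustrative, and note that the FLOAT(M,D) syntax is deprecated in recent MySQL versions):

    CREATE TABLE number_demo (
        d DECIMAL(7,4),  -- exact numeric
        f FLOAT(7,4)     -- approximate numeric
    );

    INSERT INTO number_demo VALUES (999.00009, 999.00009);

    SELECT d, f FROM number_demo;
    -- d = 999.0001 : rounded to scale 4, then stored exactly
    -- f = 999.0001 : rounded for display; the stored value is an approximate binary float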

Let me know if this helps!