On Creating A Table Column Of Type "float" With Precision, The Column Gets Created As "real" Type In SQL Server
Solution 1:
SQL Server doesn't remember the exact DDL you used when you created the table - it just consults the current table definition and renders a piece of text that could have been used to create an identical table.
As the documentation points out, SQL Server doesn't support arbitrary precisions for float - it supports exactly two, 24 and 53:
SQL Server treats n as one of two possible values. If 1 <= n <= 24, n is treated as 24. If 25 <= n <= 53, n is treated as 53.
As it also points out, real is treated as a synonym for float(24):
The ISO synonym for real is float(24).
So, any float column specified with a precision of 24 or lower will in fact be created as a float(24), and since real is a synonym for float(24), the system doesn't remember whether it was originally specified as float(1), float(24) or real.
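A minimal sketch to see this for yourself, assuming you have a scratch database to play in (the table name dbo.FloatDemo is made up for illustration):

```sql
-- Any declared precision from 1 to 24 is stored as float(24), i.e. real.
CREATE TABLE dbo.FloatDemo (
    a float(1),   -- treated as float(24)
    b float(24),  -- float(24) exactly
    c real        -- ISO synonym for float(24)
);

SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'FloatDemo';
-- All three columns come back with DATA_TYPE = 'real':
-- the original spelling is gone from the metadata.
```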
Why is it getting converted to real by default whenever float has a precision specified?
It doesn't. As per the above, if you create a column with type float(25) (or any higher precision), you'll find it comes back as plain float rather than real, because it was created as float(53), and 53 is the default precision when none is supplied.
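The same kind of check shows the other half of the behavior (again, dbo.FloatDemo53 is just an illustrative name):

```sql
-- Any declared precision from 25 to 53 is stored as float(53),
-- which is also the default when no precision is given.
CREATE TABLE dbo.FloatDemo53 (
    d float(25),  -- treated as float(53)
    e float(53),  -- float(53) exactly
    f float       -- default precision is 53
);

SELECT COLUMN_NAME, DATA_TYPE, NUMERIC_PRECISION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'FloatDemo53';
-- All three columns report DATA_TYPE = 'float'
-- with NUMERIC_PRECISION = 53.
```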