Number is the built-in object corresponding to the primitive number data type. As discussed in Chapter 3, all numbers are represented in IEEE 754-1985 double-precision floating-point format. This representation is 64 bits long, permitting floating-point magnitudes as large as ±1.7976×10^308 and as small as ±2.2250×10^-308. The Number() constructor takes an optional argument specifying its initial value:
var x = new Number();
var y = new Number(17.5);
Table 7-5 lists the special numeric values that are provided as properties of the Number object.
Property | Value
---|---
Number.MAX_VALUE | Largest magnitude representable
Number.MIN_VALUE | Smallest magnitude representable
Number.POSITIVE_INFINITY | The special value Infinity
Number.NEGATIVE_INFINITY | The special value -Infinity
Number.NaN | The special value NaN
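As a quick sketch, assuming the code runs in an environment that provides a console (such as a browser's developer console or Node.js), the following lines display each of these special values:

var props = ["MAX_VALUE", "MIN_VALUE", "POSITIVE_INFINITY", "NEGATIVE_INFINITY", "NaN"];
for (var i = 0; i < props.length; i++) {
    // Look up each special value as a property of the Number object
    console.log("Number." + props[i] + " = " + Number[props[i]]);
}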
The only useful method of this object is toString(), which returns the value of the number as a string. It is rarely needed, however, since a number is generally converted to a string automatically whenever it is used in a string context.
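The minimal sketch below illustrates both cases; the variable names are purely illustrative:

var n = new Number(17.5);
var s1 = n.toString();     // explicit conversion: the string "17.5"
var s2 = "Value: " + n;    // implicit conversion: the string "Value: 17.5"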