We have a PR (#19) that's been outstanding for a while, which wants to add additional constants to Data.Number. Technically it will be easy to resolve. We either:
1. Put the relevant constants in Number.js and import them into Number.purs, or
2. Hardcode the relevant constants in Number.purs.
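Concretely, the two options might look something like the following minimal sketch, using maxValue as a stand-in for whichever constants the PR actually adds (fragments of Number.purs; the FFI export syntax depends on compiler version):

```purescript
-- Approach (1): the constant is defined in the FFI file and imported.
-- Number.js (JavaScript backend) would contain something like
--   exports.maxValue = Number.MAX_VALUE;
-- (or `export const maxValue = ...` with ES-module FFI),
-- and Number.purs would declare:
foreign import maxValue :: Number
```

```purescript
-- Approach (2): the constant is hardcoded in Number.purs itself,
-- which is only valid if Number is IEEE 754 double precision on
-- every backend:
maxValue :: Number
maxValue = 1.7976931348623157e308
```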
The current PR takes approach (1), but it would be very easy to change to (2). The two approaches, however, imply different things about what Number means in PureScript (as opposed to JavaScript):
Approach (1) means that Number is a floating point number of some kind, which could be 64-bit, 32-bit, or possibly another size. The compiler to JavaScript chooses to implement it as 64-bit, but other compilers can make other choices. Every compiler must then specify things like maxValue for its own backend.
Approach (2) means that Number is a 64-bit floating point number across all compilers. The compiler still needs to provide implementations of functions like sin, but any constant can be hardcoded directly in PureScript.
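To make that distinction concrete, here is a minimal sketch of approach (2), with pi as an illustrative constant (the actual set of constants is whatever the PR proposes):

```purescript
-- Functions such as sin still go through the FFI, because each
-- backend supplies its own implementation:
foreign import sin :: Number -> Number

-- Constants need no FFI at all once Number is pinned to 64-bit
-- IEEE 754 on every backend:
pi :: Number
pi = 3.141592653589793
```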
I don't really have an opinion on this since I only use the JavaScript compiler.
For what it's worth, Wikipedia says that "In most implementations of PostScript, and some embedded systems, the only supported precision is single."
I didn't notice this before, but both Int and Number have specific representations stated in the module Prim. Specifically:
Number: "A double precision floating point number (IEEE 754)."
Int: "A 32-bit signed integer."
That's fairly dispositive, so let's go with approach (2): defining all constants in PureScript, which places no additional load on the compiler writer but constrains what Number and Int mean.
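Under that decision, the constants would end up as ordinary PureScript definitions, valid for IEEE 754 double precision on any backend. A sketch with a few illustrative names (the actual names and set of constants are whatever PR #19 proposes):

```purescript
-- Largest finite Number.
maxValue :: Number
maxValue = 1.7976931348623157e308

-- Smallest positive (denormalized) Number.
minValue :: Number
minValue = 5.0e-324

-- Difference between 1.0 and the next larger representable Number.
epsilon :: Number
epsilon = 2.220446049250313e-16
```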
I just wanted to say that my lack of a response is mainly because I'm not sure how to respond to this, not because I'm ignoring this.
The questions asked here are similar to "what should the runtime representation of Char be?" In some backends, one encoding may be better (or the only one possible), whereas in others the same encoding is problematic or impossible.