Description
Describe the issue:
Shouldn't taking the sign of a zero simply return the original argument unchanged? Always coercing it to +0.0 seems a little too opinionated, especially for this particular function, given that we are explicitly asking for "the sign"!
Although the magnitude of the output of a sign function has historically been subject to much disagreement, I believe most people would agree that the sign of the output of any coherent sign function should preserve the sign of the input (unless restricted by the output domain itself: e.g., both +0.0 and -0.0 become the neither-positively-nor-negatively-signed 0 when coerced to int).
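As a quick illustration of that output-domain point, plain CPython (no NumPy involved) collapses both zeros to the single integer 0, while the float values themselves still carry a distinguishable sign bit:

```python
import math

# int has only one zero, so coercion necessarily discards the sign:
print(int(0.0), int(-0.0))        # 0 0

# The float zeros compare equal but still carry a sign bit:
print(0.0 == -0.0)                # True
print(math.copysign(1.0, 0.0))    # 1.0
print(math.copysign(1.0, -0.0))   # -1.0
```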
Major precedent in Java, from the Math.signum documentation:

> If the argument is positive zero or negative zero, then the result is the same as the argument.

```java
import java.lang.Math;
System.out.println(Math.signum(-0.0));
// prints: -0.0
```
JavaScript also does this!

```js
console.log(Math.sign(-0.0));
// prints: -0
```
The only implementations of a sign function I can find in popular programming languages that return an unsigned zero for a signed-zero input are those that return an int type rather than a floating-point type (i.e. -1, 0, 1 rather than -1.0, -0.0, +0.0, +1.0). (That behavior obviously does make sense for int results, but clearly not for float results, unless I am missing something.)
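For what it's worth, NumPy itself already draws this dtype distinction, so the question only really concerns the floating-point case (the float result below is the current behavior reported in this issue):

```python
import numpy as np

# For integer dtypes an unsigned zero is the only possible answer:
print(np.sign(np.int64(0)), np.sign(np.int64(-7)))  # 0 -1

# For float dtypes the output could carry the sign bit,
# but the current result is an unsigned 0.0:
print(np.sign(np.float64(-0.0)))                     # 0.0
```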
Which raises the question: what exactly is the rationale behind NumPy's current behavior of returning +0.0 as the sign of -0.0?
Reproduce the code example:
```python
import numpy as np

zero = np.float64(-0.0)
print("Actual:", np.sign(zero))
print("Expected:", zero)
```
Error message:
> Actual: 0.0
> Expected: -0.0
Python and NumPy Versions:
2.0.2
3.12.7 (main, Jan 15 2025, 09:54:13) [Clang 19.0.0git (https:/github.com/llvm/llvm-project 0a8cd1ed1f4f35905df318015b
Runtime Environment:
No response
Context for the issue:
The sign behavior in Java & JavaScript was convenient, and it's annoying that I can't translate my code word-for-word without workarounds (one such workaround is sketched below). Java & JavaScript's behavior makes a lot more sense, so why not follow suit? Unless there is some detail or rationale I'm missing?
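For reference, the workaround I currently resort to looks roughly like this; signed_sign is just a hypothetical helper name, and it relies on np.copysign to transfer the input's sign bit back onto the magnitude that np.sign produces:

```python
import numpy as np

def signed_sign(x):
    # Hypothetical helper: behaves like np.sign, but preserves the
    # sign bit of zero inputs by re-applying the sign bit of x.
    return np.copysign(np.abs(np.sign(x)), x)

print(signed_sign(np.float64(-0.0)))  # -0.0
print(signed_sign(np.float64(0.0)))   # 0.0
print(signed_sign(np.float64(-3.5)))  # -1.0
print(signed_sign(np.float64(2.0)))   # 1.0
```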
Additional motivation: sometimes the nonzero result of a floating-point operation rounds to zero because of the limited exponent range. When that happens, IEEE 754 guarantees that the sign bit of that zero matches the sign of what the result would have been under infinite precision. Although the unit magnitude of np.sign's output would be lost in this scenario, it seems more logical and less error-prone to at least preserve the correct sign bit.
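To make that concrete, here is a small demonstration of that underflow scenario (the exact values assume IEEE 754 double precision, i.e. np.float64):

```python
import numpy as np

# The true product is -1e-600, far below the smallest subnormal double,
# so it rounds to a zero whose sign bit records the true (negative) sign:
underflowed = np.float64(-1e-300) * np.float64(1e-300)
print(underflowed)              # -0.0
print(np.signbit(underflowed))  # True

# np.sign then discards that last remaining bit of information:
print(np.sign(underflowed))     # 0.0
```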