printf: %a output is different from coreutils #7364
Comments
Yeah, this was one of the shortcuts I took while implementing this. I had to do a big refactor and simplified it to using …
While that is a pretty good reference, note that coreutils has a custom implementation that differs slightly in a few places. I might be misremembering (it's been a while), but I think that some C implementations also do the thing where they always use `0x1`. Also, if I recall correctly, GNU makes a distinction between a default precision and an explicitly specified precision of the same length, which explains the difference in precision that you're seeing.
I understand one goal of this project is to exactly match the output of GNU coreutils for compatibility? Is that correct? (for context... I actually bumped into this in an attempt to debug further what's going on in #5759...)

So, there are at least 2 issues here with `%a`; see the sketch after these comments.

First, GNU coreutils appears to always pack 4 bits in the first hex digit (…).

Second, GNU coreutils appears to use `long double`.

Generally, do we need to detect the architecture (I assume …)?
Ah yes, don't take my previous comment as discouraging you, it was meant as quite the opposite!

A small note though: I think you might run into difficulties with …
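A minimal C sketch of the two points above, assuming glibc on x86-64 (where `long double` is the 80-bit extended type); the expected values appear in the platform table in the issue description below:

```c
#include <stdio.h>

int main(void) {
    /* glibc prints a double using the implicit leading bit as the
       first hex digit, so normalized values always start with "0x1.". */
    printf("%a\n", 0.12544);    /* 0x1.00e6afcce1c58p-3 */

    /* The x86 80-bit long double has an explicit integer bit; glibc
       packs it together with the top 3 fraction bits into the first
       hex digit, so normalized values start with 0x8 through 0xf. */
    printf("%La\n", 0.12544L);  /* 0x8.07357e670e2c12bp-6 */
    return 0;
}
```

GNU coreutils' `printf` output matches the second line, which is consistent with it going through `long double` internally.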
Issue description

`%a` should output "Hexadecimal floating point, lowercase". After fixing #7362, we still see some issues.

It seems like GNU coreutils prefers "shifting" the output so that we have a single hex digit between `0x1` and `0xf` before the decimal point, while uutils always picks `0x1`. And the output is padded with 0.

The value is technically correct though:
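Both spellings denote the same number: packing 3 more mantissa bits into the digit before the point multiplies that digit's value by 8, and the binary exponent drops by 3 to compensate. A minimal C check, writing the double closest to 0.12544 both ways (the `0x8` spelling is derived by hand from the `0x1` one):

```c
#include <stdio.h>

int main(void) {
    /* Same double, two spellings: the mantissa 0x100e6afcce1c58 times 8
       is 0x807357e670e2c0, and the exponent moves from -3 to -6. */
    double one_style = 0x1.00e6afcce1c58p-3; /* leading digit 0x1 (uutils) */
    double gnu_style = 0x8.07357e670e2c0p-6; /* 4 bits in the first digit */
    printf("%d\n", one_style == gnu_style);  /* prints 1 */
    return 0;
}
```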
(note: be careful to add `env` before `printf`, as some shell implementations provide a built-in `printf`...)

Also, the behaviour is different across platforms. Running `LANG=C env printf '%a %.6a\n' 0.12544 0.12544` in various dockers (gist):

| platform | `%a` | `%.6a` |
| --- | --- | --- |
| linux-386 / linux-amd64 | `0x8.07357e670e2c12bp-6` | `0x8.07357ep-6` |
| linux-arm-v5 / linux-arm-v7 | `0x1.00e6afcce1c58p-3` | `0x1.00e6b0p-3` |
| linux-arm64-v8 / linux-mips64le / linux-ppc64le / linux-s390x | `0x1.00e6afcce1c58255b035bd512ec7p-3` | `0x1.00e6b0p-3` |
According to https://en.cppreference.com/w/c/io/fprintf:

> The default precision is sufficient for exact representation of the value.
On x86, 16 nibbles = 64 bits are printed at most, including the integer part. That corresponds to the internal x86 80-bit floating point `long double` type. `printf` shifts 3 of the fraction bits into the integer part before the `.`, so that the whole 64 bits fit neatly in 16 nibbles when printed. It's interesting that this behaviour is preserved when specifying a precision (e.g. `%.6a`).
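To see where those 16 nibbles come from, here is a small C sketch, assuming x86-64 Linux (little-endian, with the 80-bit `long double` stored in the low 10 bytes of 16):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    long double ld = 0.12544L;

    /* The low 8 bytes hold the full 64-bit mantissa, whose integer
       bit is explicit (bit 63), unlike float/double. */
    unsigned long long mant;
    memcpy(&mant, &ld, 8);
    printf("%016llx\n", mant);    /* 807357e670e2c12b */

    /* The top 4 bits become the hex digit before the point. */
    printf("%llx\n", mant >> 60); /* 8 */
    return 0;
}
```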
On arm64 (and a bunch of other archs), 28 nibbles = 112 bits are printed after the decimal point. That corresponds to a quad-precision 128-bit float, also the `long double` type there.

On arm32, 13 nibbles = 52 bits are printed. That's a double-precision 64-bit float, also the `long double` type there.
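A quick way to confirm which `long double` flavour a platform uses is `LDBL_MANT_DIG` from `<float.h>`:

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    /* 64  -> x86 80-bit extended, 113 -> IEEE binary128 (quad),
       53  -> long double is plain double (as on 32-bit Arm).
       The count includes the leading bit, which is why the outputs
       above show 16 nibbles, 112 fraction bits, and 52 fraction
       bits respectively. */
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
    return 0;
}
```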