Does anyone reading this thread have an operational insight into the practical difference it makes to outcomes whether the first patch reads 109.0 or 109.3?
I agree, and it's a question I've been curious about for the last week or so, since I first noticed it. The only thing I can tell you is that when I compare the truncated TIFF values in ColorLab with the original values, the maximum Delta E is 1.6, and the worst 10% have a Delta E of 1.01, which means the differences would be perceptible to some (though probably only if they were looking for them). I brought it up in this thread because the discussion turned to precision, and having different values in different files, when those files may or may not be used to create a profile, doesn't seem especially "precise" to me, regardless of its practical impact (though, ultimately, I realize it's the practical difference that matters).
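For anyone who wants to reproduce that kind of comparison, here's a rough sketch of the idea in Python. The patch values are made up, the 8-bit quantization is the usual Lab-in-TIFF convention (L scaled to 0-255, a/b offset by 128), and it uses plain CIE76 Delta E, which may not be the metric ColorLab reports, so treat the numbers as illustrative only:

```python
import math

def lab_to_8bit_and_back(L, a, b):
    """Quantize a Lab triplet the way an 8-bit Lab TIFF would store it
    (L scaled 0-100 -> 0-255, a/b offset by 128), then convert back."""
    L8 = round(L * 255.0 / 100.0)
    a8 = round(a + 128.0)
    b8 = round(b + 128.0)
    return (L8 * 100.0 / 255.0, a8 - 128.0, b8 - 128.0)

def delta_e76(lab1, lab2):
    """Plain CIE76 Delta E between two Lab triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical high-precision patch values standing in for the originals
patches = [(52.37, -23.18, 9.62), (71.04, 15.91, -31.27), (33.85, 48.02, 27.49)]

for lab in patches:
    quantized = lab_to_8bit_and_back(*lab)
    print(lab, "->", quantized, "dE76 =", round(delta_e76(lab, quantized), 3))
```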
Only X-Rite could tell us, and then I'd wonder why they don't save out a high-bit TIFF.
Somehow I don't think that's going to happen anytime soon, but it would be nice to be surprised. Having looked at the TIFF, I don't think it makes a difference in i1P: its XML data is truncated and the TIFF is 8-bit. The difference could show up in MT and PMP, where, if they use the data exported from i1P, the reference would be 16-bit but the TIFF would be 8-bit, which would seem to lead to some erroneous colors. That said, as Mark asks, I don't know the practical consequence of it, or even whether MT uses the 16-bit numbers or truncates the values when creating a profile.
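Just to put a rough number on the 16-bit-reference-versus-8-bit-TIFF mismatch, here's a toy Python illustration. It assumes straightforward integer truncation (drop the low byte) followed by bit-replication on the way back up, which may or may not be what MT/PMP actually do internally:

```python
def to_8bit(value16):
    """Truncate a 16-bit code value (0-65535) to 8 bits by dropping the low byte."""
    return value16 >> 8

def back_to_16bit(value8):
    """Re-expand an 8-bit code value to the 16-bit range by bit replication."""
    return (value8 << 8) | value8

# Worst-case round-trip error when a 16-bit reference value is forced
# through an 8-bit intermediate and then re-expanded.
worst = max(abs(v - back_to_16bit(to_8bit(v))) for v in range(65536))
print("worst-case error:", worst, "of 65535 (", round(100 * worst / 65535, 2), "% of full scale )")
```

Whether an error of that size ever matters in a finished profile is exactly the practical question Mark raised, and this sketch doesn't answer it.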