I believe that a 14-bit linear raw file can be lossily compressed with practically no loss of image information. That is, some of the bandwidth is wasted on storing noise that carries no image information.
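A quick way to see the intuition behind that claim is to note that a quantization step well below the shot-noise sigma adds very little error on top of the noise already present. The sketch below is my own illustration (the mean level, step size, and sample count are arbitrary assumptions, not any camera's actual encoding):

```python
# Hedged sketch: for a Poisson (shot-noise-limited) signal, quantizing
# with a step well below the noise sigma adds only a small extra error.
import numpy as np

rng = np.random.default_rng(1)

signal = 5000.0                       # hypothetical mean level, in electrons
samples = rng.poisson(signal, 200000).astype(float)
shot_sigma = samples.std()            # ~sqrt(5000) ~ 70.7 electrons

step = 0.5 * np.sqrt(signal)          # coarse step, still below one sigma
quantized = np.round(samples / step) * step

total_sigma = quantized.std()
added = np.sqrt(max(total_sigma**2 - shot_sigma**2, 0.0))
print(f"shot sigma ~{shot_sigma:.1f} e-, added by quantization ~{added:.1f} e-")
```

With a step of half a sigma, the extra error (roughly step/sqrt(12)) stays well under the shot noise itself, which is the sense in which the finest code values mostly encode noise.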
While this forum/thread is perhaps not the best place to discuss this in depth, I'm not so sure about the 'waste' part of storing noise.
After all, the signal consists of (Poisson-distributed) shot noise, with 'some' more noise added by the capture system. The shot-noise component, in my mind, adds a bit to the malleability of the image data when we get to post-processing it. That does, of course, depend on some additional factors: a well depth that allows a significant signal level, in order to steer clear of the other (system-generated) noise components, but also low levels of PRNU. Some of that system noise could in principle be reduced by averaging multiple read-outs from CMOS devices and by good calibration.
There is even information hidden in the higher levels of shot noise (which is where Sony does most of its compression), and it can be made more detectable by averaging the signal component of multiple captures (which requires a mostly stationary subject). Lossy (actually irreversible) compression risks destroying that hidden information.
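The averaging argument can be demonstrated numerically: a faint detail sitting well below the single-frame shot noise becomes measurable once enough frames of a stationary subject are stacked, since the noise drops roughly as sqrt(N). This is a toy simulation under assumed numbers (mean level, detail amplitude, frame count are all made up for illustration):

```python
# Sketch: a 30 e- detail hidden under ~100 e- of shot noise in one frame
# becomes detectable after averaging 64 frames of a stationary subject.
import numpy as np

rng = np.random.default_rng(0)

mean_level = 10000.0      # hypothetical mean signal, electrons
detail = 30.0             # faint step, well below sqrt(10000) = 100 e- noise
n_frames = 64
n_pixels = 100000

# Even-indexed pixels carry the extra detail; odd ones are background.
truth = np.full(n_pixels, mean_level)
truth[::2] += detail

single = rng.poisson(truth).astype(float)                       # one capture
stack = rng.poisson(truth, size=(n_frames, n_pixels)).astype(float)
averaged = stack.mean(axis=0)                                    # N captures

for name, img in [("single", single), ("averaged", averaged)]:
    step = img[::2].mean() - img[1::2].mean()   # recovered detail amplitude
    noise = img[1::2].std()                     # per-pixel noise
    print(f"{name}: detail ~{step:.1f} e-, per-pixel noise ~{noise:.1f} e-")
```

In the single frame the 30 e- step is buried under ~100 e- of per-pixel noise; in the 64-frame average the noise falls to roughly 100/8 = 12.5 e-, and the step stands clear of it. A lossy step that discards those low-order code values discards exactly what stacking would have recovered.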