Author Topic: lossy compression/resizing of raw data is coming to DNG ?  (Read 12680 times)
deejjjaaaa
Sr. Member
Posts: 857

« on: December 18, 2011, 11:20:55 AM »

Lossy compression/resizing of raw data is coming to DNG? That's what I heard, rumor-wise... will it be a new DNG spec then?
sandymc
Sr. Member
Posts: 269

« Reply #1 on: December 18, 2011, 12:09:03 PM »

There is a new version of the DNG spec coming:

http://forums.adobe.com/message/4084921

Compression/resizing - don't know.

Sandy

deejjjaaaa
Sr. Member
Posts: 857

« Reply #2 on: December 18, 2011, 12:27:53 PM »

There is a new version of the DNG spec coming:

http://forums.adobe.com/message/4084921

Compression/resizing - don't know.

Sandy



Aha... so the rumor sounds more solid. With the new specification coming, it might be the time for Adobe to do that... Maybe some "DNG" company like Ricoh (Pentax) wants an option for smaller "raw" files, with 24MP+ sensors becoming the norm, or maybe it's just a logical evolution: TIFF has options for lossy data compression, so why not DNG, which serves the same purpose as an intermediate workflow data format?
sandymc
Sr. Member
Posts: 269

« Reply #3 on: December 18, 2011, 12:52:26 PM »

It may be the case, although DNG already allows for "lookup table" type lossy compression, so I'm not sure that an additional form of lossy compression will bring that much. However, something that allowed data to be packed would be useful. Currently, if you have, e.g., 12-bit data, every 12-bit value takes up 16 bits. That can easily be eliminated losslessly.

Sandy
deejjjaaaa
Sr. Member
Posts: 857

« Reply #4 on: December 18, 2011, 04:05:30 PM »

It may be the case, although DNG already allows for "lookup table" type lossy compression, so I'm not sure that an additional form of lossy compression will bring that much. However, something that allowed data to be packed would be useful. Currently, if you have, e.g., 12-bit data, every 12-bit value takes up 16 bits. That can easily be eliminated losslessly.

Sandy

The claimed reduction in size was something like 2-3x beyond what the current DNG converter can achieve... that does not look like any kind of lookup tables or efficient packing of 12-bit data... more like real compression with loss of some data.
kwalsh
Jr. Member
Posts: 89

« Reply #5 on: December 21, 2011, 11:42:26 AM »

It may be the case, although DNG already allows for "lookup table" type lossy compression, so I'm not sure that an additional form of lossy compression will bring that much. However, something that allowed data to be packed would be useful. Currently, if you have, e.g., 12-bit data, every 12-bit value takes up 16 bits. That can easily be eliminated losslessly.

DNG files already have lossless compression applied unless you specifically turn it off (and I don't think you can turn it off in ACR or LR, only in the DNG converter itself). With lossless compression, data packing doesn't matter: 12-bit in a 16-bit field will result in no larger a file than 12-bit packed.
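Ken's point can be illustrated with a toy sketch, with zlib standing in for DNG's actual lossless-JPEG codec and uniform random values standing in for sensor data (real raws compress better still, since a predictor exploits spatial correlation). Even a generic byte-oriented compressor recovers most of the space wasted by padding 12-bit samples out to 16 bits:

```python
# Hypothetical demo: compare compressed sizes of padded vs. packed
# storage of the same 12-bit samples. zlib is a stand-in for DNG's
# lossless JPEG; random data is a stand-in for real sensor data.
import random
import zlib

random.seed(0)
n = 100_000
pixels = [random.getrandbits(12) for _ in range(n)]

# 12-bit samples padded to 16-bit words (little-endian), as in an
# unpacked DNG strip
padded = b"".join(p.to_bytes(2, "little") for p in pixels)

# the same samples tightly packed: two 12-bit pixels in three bytes
packed = b"".join(((a << 12) | b).to_bytes(3, "big")
                  for a, b in zip(pixels[0::2], pixels[1::2]))

c_padded = len(zlib.compress(padded, 9))
c_packed = len(zlib.compress(packed, 9))
print(len(padded), len(packed), c_padded, c_packed)
```

The compressed padded stream lands close to the packed size even here; a codec with a proper sample predictor closes the remaining gap on real image data.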

Ken
deejjjaaaa
Sr. Member
Posts: 857

« Reply #6 on: December 21, 2011, 12:01:22 PM »

DNG files already have lossless compression
The question was about lossy (and downscaling in addition to lossy)... I am just curious about the reason behind it... Is some DNG-friendly camera maker (Leica, Ricoh/Pentax) going to offer such options in camera, à la sRaw from Canon, and asked Adobe for it, or was it just further development of DNG in general to make it more flexible? And if it is implemented, how will that affect other raw converters?
sandymc
Sr. Member
Posts: 269

« Reply #7 on: December 21, 2011, 12:05:39 PM »

DNG files already have lossless compression applied unless you specifically turn it off (and I don't think you can turn it off in ACR or LR, only in the DNG converter itself). With lossless compression, data packing doesn't matter: 12-bit in a 16-bit field will result in no larger a file than 12-bit packed.

Ken

Yes, but only for files exported from LR/ACR or the DNG converter. The issue is, no camera (that I can think of) implements lossless DNG compression, because it takes up too much computational power. E.g., the M9 does lossy table-based compression, but not lossless compression.

Sandy
deejjjaaaa
Sr. Member
Posts: 857

« Reply #8 on: December 21, 2011, 12:26:16 PM »

The issue is, no camera (that I can think of) implements lossless DNG compression, because it takes up too much computational power.

So you think that lossless compression is more CPU-intensive than lossy? Why?

PS: certainly you can imagine a very CPU-intensive lossless compression algorithm, but it needn't be one, provided that we do not try to beat the likes of zip, 7zip, rar, etc.
« Last Edit: December 21, 2011, 12:29:22 PM by deejjjaaaa »
sandymc
Sr. Member
Posts: 269

« Reply #9 on: December 21, 2011, 12:40:09 PM »

So you think that lossless compression is more CPU-intensive than lossy? Why?

PS: certainly you can imagine a very CPU-intensive lossless compression algorithm, but it needn't be one, provided that we do not try to beat the likes of zip, 7zip, rar, etc.

Table-based look-up (for the current lossy compression) is a single cycle; just an indexed lookup. Nothing matches that. :)

If I were to make a bet, I'd bet that if the next DNG spec has a new lossy compression capability, it will be JPEG DCT based. The reason is that a lot of camera chips have hardware acceleration for JPEG DCT built in to support JPEG output, so doing JPEG DCT lossy compression will be "costless" in terms of CPU utilization for many modern cameras.
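A minimal sketch of the table-lookup idea under discussion (hypothetical square-root-spaced table, not the M9's actual encoding): the quantization step grows with signal level, roughly tracking photon shot noise, and per pixel the "compression" is a single indexed load.

```python
# Hypothetical "lookup table" lossy compression sketch: a precomputed
# table maps each 14-bit sensor value to an 8-bit code with square-root
# spacing. The table shape and sizes are illustrative, not from any spec.

MAX_IN = 1 << 14   # 14-bit sensor values
CODES = 1 << 8     # 8-bit output codes

# decode table: code -> representative sensor value (square-root law)
decode = [round(((c / (CODES - 1)) ** 2) * (MAX_IN - 1)) for c in range(CODES)]

# encode table: sensor value -> nearest code, built once up front;
# decode[] is monotonic, so a single sweep finds the nearest code
encode = []
c = 0
for v in range(MAX_IN):
    while c + 1 < CODES and abs(decode[c + 1] - v) <= abs(decode[c] - v):
        c += 1
    encode.append(c)

# per-pixel "compression" is now just one table lookup
raw = [0, 100, 5000, 16383]
restored = [decode[encode[v]] for v in raw]
```

The error introduced is the distance to the nearest table entry, which stays small relative to the signal level by construction.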

Sandy
kwalsh
Jr. Member
Posts: 89

« Reply #10 on: December 22, 2011, 09:34:24 PM »

Yes, but only for files exported from LR/ACR or the DNG converter. The issue is, no camera (that I can think of) implements lossless DNG compression, because it takes up too much computational power. E.g., the M9 does lossy table-based compression, but not lossless compression.

Ah... I didn't know that. Sort of strange - almost every proprietary RAW format uses in-camera lossless compression that is as effective as the DNG compression algorithm (in fact, at least Canon's CR2 and DNG both use the same algorithm, lossless JPEG). And almost all of them are based on TIFF, just like DNG. If all the proprietary cameras have no problem doing lossless compression with their processors, I wonder why the DNG cameras do not.

Ken
hjulenissen
Sr. Member
Posts: 1678

« Reply #11 on: December 23, 2011, 06:18:15 AM »

Table-based look-up (for the current lossy compression) is a single cycle; just an indexed lookup. Nothing matches that. :)
I would not make that claim without knowing the exact hardware in question. Is it an ASIC? A FPGA? Some DSP? Doing a full 14-16-bit table lookup can easily be a relatively "expensive" operation on some platforms, since it involves a large amount of memory that must be accessed in a seemingly random (i.e. hard-to-cache) pattern for each individual pixel. Now, perhaps the standard does something clever to avoid this by e.g. having a smaller table that is accessed with some extra logic. E.g. "if value is small, encode it directly. If value is large, encode it using this (smaller) table".

I would guess that quantizing the value can often be considered "less expensive". My suggestion would be something à la "if value is small, encode it directly. If value is large, drop the N least-significant bits".
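The suggested quantizer can be sketched in a few lines (the threshold and N are arbitrary illustration choices, not from any spec): small values, where single counts still matter, pass through exactly; large values, where photon shot noise already exceeds a few counts, lose their N low bits.

```python
# Hypothetical "keep small values exact, drop N LSBs from large values"
# quantizer. THRESHOLD and N are illustrative, not from any standard.

THRESHOLD = 1024   # values below this are kept exact
N = 3              # LSBs dropped at or above the threshold

def quantize(v):
    if v < THRESHOLD:
        return v
    return (v >> N) << N   # zero the N least-significant bits

print([quantize(v) for v in (7, 1000, 5003, 16383)])
# -> [7, 1000, 5000, 16376]
```

The zeroed bits also lower the entropy of the stream, so a lossless pass afterwards compresses the quantized data further.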
Quote
If I were to make a bet, I'd bet that if the next DNG spec has a new lossy compression capability, it will be JPEG DCT based. The reason is that a lot of camera chips have hardware acceleration for JPEG DCT built in to support JPEG output, so doing JPEG DCT lossy compression will be "costless" in terms of CPU utilization for many modern cameras.

Sandy
JPEG usually handles only 8-bit YCbCr values, and its rate vs. perceptual distortion depends on nonlinear gamma, sensible exposure, luma/chroma separation, etc. Fitting a 14-bit mosaiced BGRG signal into that in such a way that JPEG can compress it efficiently is nontrivial. Not saying that it cannot be done, but it is going to cost brainpower, CPU power, reduced compression/artifact efficiency, or all three.

-h
« Last Edit: December 23, 2011, 06:21:43 AM by hjulenissen »
sandymc
Sr. Member
Posts: 269

« Reply #12 on: December 23, 2011, 07:45:39 AM »

I would not make that claim without knowing the exact hardware in question. Is it an ASIC? A FPGA? Some DSP? Doing a full 14-16-bit table lookup can easily be a relatively "expensive" operation on some platforms, since it involves a large amount of memory that must be accessed in a seemingly random (i.e. hard-to-cache) pattern for each individual pixel. Now, perhaps the standard does something clever to avoid this by e.g. having a smaller table that is accessed with some extra logic. E.g. "if value is small, encode it directly. If value is large, encode it using this (smaller) table".

Well, you can implement anything you want to run in a single cycle with custom hardware - my reply should have included "in the absence of compression-specific hardware, nothing beats a table look-up." It's true that on modern hardware, e.g., the iPhone/iPad's A4, you can get probably 10 op-code cycles (maybe 40+ if you're using the NEON SIMD extensions) in the time a single memory fetch takes. But it's still hugely difficult to implement a non-trivial lookup equivalent in that number of op-codes - I've tried. :( And if your code is smart, the processor can be doing other things in that time anyway.

On compression hardware, I agree with you on the issues of JPEG compression hardware, but my point was that I doubt in-camera lossy compression will be widely used unless it's hardware supported, and the only thing I can think of that's in camera silicon today is JPEG DCT compression.

Sandy
madmanchan
Sr. Member
Posts: 2108

« Reply #13 on: December 23, 2011, 10:02:02 AM »

Hi Sandy,

It is currently possible to store 12 bits of data per sample (i.e., BitsPerSample tag value = 12) in a packed format, i.e., where 2 pixels are stored in 3 bytes instead of 4 bytes.  Both TIFF and (by extension) DNG support this.  The Compression tag in the IFD should still be set to uncompressed.  It is also possible to store the image data in unpacked format, but of course that wastes space.
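The packing described above - two 12-bit samples occupying three bytes instead of four - can be sketched as follows. MSB-first bit order is assumed here for illustration; TIFF's FillOrder tag governs the actual bit layout in a real file.

```python
# Sketch of 12-bit packing: two 12-bit samples in 3 bytes instead of 4.
# MSB-first order is an assumption for illustration, not the spec text.

def pack12(a, b):
    """Pack two 12-bit values into 3 bytes."""
    return bytes([a >> 4, ((a & 0xF) << 4) | (b >> 8), b & 0xFF])

def unpack12(triplet):
    """Recover the two 12-bit values from 3 bytes."""
    b0, b1, b2 = triplet
    return (b0 << 4) | (b1 >> 4), ((b1 & 0xF) << 8) | b2

assert unpack12(pack12(0xABC, 0x123)) == (0xABC, 0x123)
```

Applied across a strip, this is the 25% size saving over 16-bit-padded storage that uncompressed packing buys.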

sandymc
Sr. Member
Posts: 269

« Reply #14 on: December 23, 2011, 10:27:34 AM »

Hi Sandy,

It is currently possible to store 12 bits of data per sample (i.e., BitsPerSample tag value = 12) in a packed format, i.e., where 2 pixels are stored in 3 bytes instead of 4 bytes.  Both TIFF and (by extension) DNG support this.  The Compression tag in the IFD should still be set to uncompressed.  It is also possible to store the image data in unpacked format, but of course that wastes space.

Well, I'll be.

Now that's something I didn't know. At least for DNG. And the SDK supports this? I'll go look at the specs again.

Thanks,

Sandy
madmanchan
Sr. Member
Posts: 2108

« Reply #15 on: December 23, 2011, 09:33:25 PM »

Hi Sandy,

Yes, the SDK should support this (at least for reading). See the dng_read_image::ReadUncompressed routine in dng_read_image.cpp for details. I think when writing uncompressed images it doesn't currently pack the bytes, though this wouldn't be hard to add. (In practice, most folks want to use compression ...)

sandymc
Sr. Member
Posts: 269

« Reply #16 on: December 24, 2011, 09:10:05 AM »

Hi Sandy,

Yes, the SDK should support this (at least for reading). See the dng_read_image::ReadUncompressed routine in dng_read_image.cpp for details. I think when writing uncompressed images it doesn't currently pack the bytes, though this wouldn't be hard to add. (In practice, most folks want to use compression ...)

Yep, looked at the SDK - read support, but no write support. That explains why I didn't know - the write code was the only part I've ever needed to look at, and I just assumed. The spec does actually say packing is allowed. My bad. :(

Sandy