Pages: « 1 [2]
Author Topic: Question about manipulation of RAW values in NEF (Nikon) files  (Read 4162 times)
FranciscoDisilvestro
Sr. Member
****
Offline

Posts: 323

« Reply #20 on: February 01, 2013, 04:46:32 PM »

So the idea is to start with a 'white balanced' raw file?

No, as far as I understand, the idea is to start with a "raw" file that will require the same multipliers in post for white balance regardless of the parameters you chose in the camera.

As an example, shooting the same scene, if you select different aperture/speed/ISO combinations that all correspond to correct exposure, you should end up with raw files that are pretty much the same. If this is not happening at the "sensor dump" data level, then it is reasonable for the camera firmware to do some correction before writing the NEF file, at least for general photography.

but why are there holes or gaps in the histogram?

That is evidence of multiplication in the digital domain (after conversion from analog). If the encoding is linear, as it is in most cameras, multiplying the raw values by a constant is equivalent to exposure compensation. In this case these small compensations (we are talking on the order of 0.1 EV) are applied individually to each channel.
That is actually what most cameras do for high ISO: it is really nothing more than multiplying the raw values after conversion. The attached image shows a small region of the raw histogram from an image taken at ISO 3200 with a Nikon D800.
Logged

jejv
Newbie
*
Offline

Posts: 9

« Reply #21 on: February 09, 2013, 02:16:20 PM »

If Nikon - or anyone else - is scaling raw channels, that is quite unfortunate, and - err - dumb.

That doesn't require one doing it in the Raw file data. It could even better be postponed to a more potent processing environment which can also use more bits.
Exactly.

While scaling real numbers is linear, scaling finite-precision numbers is not.

To illustrate the problem, consider a 3-bit RAW camera: the RAW format supports values 0-7, but the sensor red channel usually stays in the range 0-5. Let's try scaling the red range up to 0-7 and rounding back to 3 bits.

Input  Output
  0      0
  1      1
  2      3
  3      4
  4      6
  5      7

Oh dear. The input has even steps, but the output doesn't.
Also, we have a problem if we take a picture where the sensor red channel goes up to 6.
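The table can be reproduced in a few lines (a sketch in Python; the 3-bit range is the hypothetical example above, not any real camera):

```python
# Scale the 3-bit sensor range 0-5 up to the full 0-7 range and
# re-quantise, as in the table above (illustrative only).
SCALE = 7 / 5

def scale_and_requantise(value):
    """Multiply by the scale factor and round back to an integer."""
    return round(value * SCALE)

outputs = [scale_and_requantise(v) for v in range(6)]
print(outputs)  # [0, 1, 3, 4, 6, 7] - uneven steps; 2 and 5 never occur
```

The missing output codes (2 and 5) are exactly the holes in the histogram discussed earlier in the thread.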

If you have some very clever RAW conversion software that analyses the gaps in the RAW red histogram, in principle, that software could undo the damage done by the scaling, rounding back to 0-5 - on the basis that a histogram with regularly spaced gaps in it is statistically implausible.
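In principle the undo step is just the division (a sketch, assuming the converter has already inferred the 7/5 factor from the gap pattern):

```python
# Invert the hypothetical 7/5 scaling by dividing and re-rounding.
SCALE = 7 / 5

def unscale(value):
    return round(value / SCALE)

recovered = [unscale(v) for v in [0, 1, 3, 4, 6, 7]]
print(recovered)  # [0, 1, 2, 3, 4, 5] - the original even steps
```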

OTOH, if your RAW software takes this scaled RAW data at face value, the non-linearity introduced by the scaling may introduce banding in the shadows, and emphasise sensor pattern noise in the shadows, which might not otherwise be visible.

If the manufacturers want to offer more ISO settings than their hardware has gain settings, the way to do it is to record the requested gain in the metadata, rather than by attempting to scale the "RAW" component values.

« Last Edit: February 09, 2013, 02:18:23 PM by jejv » Logged
BartvanderWolf
Sr. Member
****
Online

Posts: 3007

« Reply #22 on: February 09, 2013, 05:47:38 PM »

If the manufacturers want to offer more ISO settings than their hardware has gain settings, the way to do it is to record the requested gain in the metadata, rather than by attempting to scale the "RAW" component values.

Hi,

I agree that that would be the preferred, quality-preserving route to take.
Therefore I assume it must be done for reasons of compression efficiency, certainly not for quality.

Cheers,
Bart
Logged
Vladimirovich
Sr. Member
****
Offline

Posts: 1147

« Reply #23 on: February 09, 2013, 08:27:36 PM »


To illustrate the problem, consider a 3-bit RAW camera


They do this exactly because the cameras are not 3-bit RAW cameras, so there is no degradation really detectable by their target customer... hence you would want to illustrate the problem by producing two raw files and suggesting conversion parameters in a real raw converter of your choice.
Logged
jejv
Newbie
*
Offline

Posts: 9

« Reply #24 on: February 12, 2013, 02:01:40 PM »

Hello.

I assume it must be for reasons of compression efficiency, certainly not for quality.

I don't see how that would help compression efficiency. If I understand NEF - which I may not - the same (fixed) Huffman encoding is used for all channels.
Logged
sandymc
Full Member
***
Offline

Posts: 235

« Reply #25 on: February 12, 2013, 02:06:24 PM »

Hello.

I don't see how that would help compression efficiency. If I understand NEF - which I may not - the same (fixed) Huffman encoding is used for all channels.


NEF can also use level compression.

Sandy
Logged
jejv
Newbie
*
Offline

Posts: 9

« Reply #26 on: February 12, 2013, 02:15:44 PM »

They do this exactly because the cameras are not 3-bit RAW cameras, so there is no degradation really detectable by their target customer... hence you would want to illustrate the problem by producing two raw files and suggesting conversion parameters in a real raw converter of your choice.

No.

I illustrate the problem with a 3-bit RAW format to keep the numbers in the illustration manageable.

Certainly, when we get to the mid-tones, the relative error caused in the OP's example by scaling followed by re-quantisation is unlikely to be significant or noticeable.

But the re-quantisation errors might become a real problem if we start to try to move the tone curve towards the shadows - increasing the gain in the shadows.

It seems like an unforced error if Nikon is doing that.
Logged
jejv
Newbie
*
Offline

Posts: 9

« Reply #27 on: February 12, 2013, 02:43:51 PM »

Then we could look at the whole approach to RAW compression.

Over the years, the number of logic gates that an IC designer can throw at a problem has increased exponentially, while memory bandwidth has increased much more slowly.  So throwing more logic gates at compression makes sense, and folk like Luminous Landscape should be beating up Canon, Nikon et al. about why this isn't happening.

The size of RAW images is getting to be a problem, limiting [sustained] camera frame rates, and increasing storage/backup costs.

A predictive code for RAW images would let us store images with greater precision in less space, but that doesn't seem to be happening.  You folk with D800's should be asking Nikon why, given the state of the art - say, five years ago - in image compression, your RAW files are so very very big.
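For what "predictive" means here, a minimal sketch (left-neighbour prediction; the sample values are made up): each sample is stored as its difference from the previous one, so the residuals cluster near zero and entropy-code cheaply, while decoding stays exactly lossless.

```python
# Left-neighbour predictive coding on one hypothetical raw row.
row = [1000, 1004, 1003, 1010, 1012, 1011]

# Encode: keep the first sample, then store prediction residuals.
residuals = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]
print(residuals)  # [1000, 4, -1, 7, 2, -1] - small values dominate

# Decode: a running sum reconstructs the row losslessly.
decoded = []
acc = 0
for r in residuals:
    acc += r
    decoded.append(acc)
assert decoded == row
```

Real schemes (the lossless JPEG predictor family, for instance) use 2-D predictors, but the principle is the same.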

Then I have a 10MP Samsung EX1, which I like because of its flip-and-twist display [and the fast-ish lens].  As MR put it a long time ago, it has trouble walking and chewing gum at the same time.  Never mind compression, it features RAW expansion!  A 10MP RAW takes > 20MBytes!
« Last Edit: February 12, 2013, 02:54:31 PM by jejv » Logged
BartvanderWolf
Sr. Member
****
Online

Posts: 3007

« Reply #28 on: February 12, 2013, 03:01:14 PM »

Hello.

I don't see how that would help compression efficiency. If I understand NEF - which I may not - the same (fixed) Huffman encoding is used for all channels.

Hi,

Even if Huffman encoding is used (I don't know if it is), having a number of LSB zeroes will compress nicely, and gaps at the same bit position in the represented numbers also help compression, as does having fewer zeroes spread through the MSBits.

IOW, it helps to avoid the issue seen when compressing 16-bit numbers, where compression may cause the file size to grow!
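A rough way to see the effect (made-up data, with zlib standing in for whatever scheme NEF actually uses): zeroing a few LSBs makes otherwise-random samples measurably more compressible.

```python
import random
import zlib

random.seed(0)
# Hypothetical 14-bit samples, packed two bytes each.
samples = [random.randrange(1 << 14) for _ in range(10_000)]
quantised = [v & ~0b111 for v in samples]  # zero the 3 LSBs

def packed(values):
    return b"".join(v.to_bytes(2, "big") for v in values)

full = len(zlib.compress(packed(samples)))
lsb0 = len(zlib.compress(packed(quantised)))
print(full, lsb0)  # the LSB-zeroed stream compresses smaller
```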

Cheers,
Bart
« Last Edit: February 12, 2013, 03:05:33 PM by BartvanderWolf » Logged
Jack Hogan
Full Member
***
Offline

Posts: 195

« Reply #29 on: February 18, 2013, 08:01:29 AM »

A predictive code for RAW images would let us store images with greater precision in less space, but that doesn't seem to be happening.  You folk with D800's should be asking Nikon why, given the state of the art - say, five years ago - in image compression, your RAW files are so very very big.

They are only very very big if one doesn't use Nikon's Compressed modes: about one byte per pixel, which is not bad for 14-bit data; the OOC 8-bit TIFF is about three times as large. However, if folks are squeamish about using even this type of conservative-though-perfect information recording (they are, oh they are), I can just imagine what it would take to convince them to use anything even remotely more aggressive...

Then I have a 10MP Samsung EX1, which I like because of its flip-and-twist display [and the fast-ish lens].  As MR put it a long time ago, it has trouble walking and chewing gum at the same time.  Never mind compression, it features RAW expansion!  A 10MP RAW takes > 20MBytes!
That would be about 10 x 14 / 8 = 17.5MB, plus a little overhead... yeah, no compression here ;-)
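Spelling out that back-of-the-envelope figure:

```python
megapixels = 10
bits_per_sample = 14  # one sample per pixel on a Bayer sensor
uncompressed_mb = megapixels * bits_per_sample / 8
print(uncompressed_mb)  # 17.5
```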

Jack
Logged
jonathanlung
Jr. Member
**
Offline

Posts: 70

« Reply #30 on: February 18, 2013, 11:00:32 AM »

Embedded JPEG in the Samsung?

Now if only there were a way to disable the storing of embedded JPEGs in NEFs. If I wanted JPEGs, I'd be using NEF+JPEG! It could reduce storage costs (and improve processing and copying times) by about 10-20%!
« Last Edit: February 18, 2013, 11:03:50 AM by jonathanlung » Logged
TheSuede
Newbie
*
Offline

Posts: 16

« Reply #31 on: February 24, 2013, 05:38:09 PM »

To see why the preconditioning is included in the flow, you just have to follow the signal flow in the Milbeaut-based Expeed.

Some parts of the signal conditioning are applied as the signal from the sensor FIRST streams into the logic unit - basically all the parts that are supposed to go into the raw file type of choice (14, 14LC, 14C, 12, 12LC, 12C).

Other parts aren't applied to the signal/image until after the entire image has made one round through the buffer memory - like dark-frame subtraction (long-exposure NR), jpg conversion settings, jpg compression and so on.

To use the Compressed format, the channels need to be aligned to their respective maximum before the gamma-like curve is applied, to avoid as much of the unavoidable gradation loss as possible. This means the multiplication has to be placed before the compression choices in the raw in-flow, before the first buffering. Remember that these are hardcoded signal flows.
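A sketch of that ordering (all numbers and the curve are hypothetical; the real multipliers and companding table are Nikon's): each channel is first aligned to full scale, and only then run through the gamma-like curve.

```python
FULL_SCALE = 16383  # 14-bit raw

def align(value, channel_max):
    """Scale a channel so its own maximum lands at full scale."""
    return value * FULL_SCALE / channel_max

def compand(value):
    """Stand-in for the gamma-like curve (sqrt, purely illustrative)."""
    return (value / FULL_SCALE) ** 0.5 * FULL_SCALE

# Hypothetical red channel that clips at 80% of full scale:
red_max = 0.8 * FULL_SCALE
encoded = compand(align(5000, red_max))
print(round(encoded))
```

Aligning first means the curve's coarse steps near full scale land where the channel actually has data, instead of wasting output codes above the channel's clip point.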

When not in Compressed raw mode, the conditioning could theoretically be turned off - but then you'd have the peculiar problem that your WB multipliers would differ depending on whether you chose Compressed or uncompressed / LC raw files. So keep it in. It's not like they (the holes in the histogram) actually do any visible harm to image quality. In theory they might; in practice they don't.
Logged