Author Topic: Question about manipulation of RAW values in NEF (Nikon) files
FranciscoDisilvestro
« on: January 14, 2013, 08:51:27 PM »

Hi,

Just out of curiosity, does anybody know what the reason or benefit could be of multiplying the values from the R and B channels by a factor? I mean not the White Balance factors, but a scalar multiplication of the raw values after conversion to digital (what you get in an "unprocessed" NEF). This shows up in raw histograms, where you can see a periodic "hole" in the histograms of the Red and Blue channels.

The attached image shows the histogram from RawDigger for the first 128 values (0-127) of a 14-bit uncompressed NEF file at base ISO (100) from a D800. The pattern looks the same for any range of values you display. This is a clear indication of a scalar multiplication after conversion to digital, and it happens with other models as well (I have observed it on D300 files, using RawDigger and Rawnalize).
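
As an aside, a small simulation can reproduce this kind of pattern. The sketch below is a toy model, not Nikon's pipeline; the 9/8 factor is only an assumption inferred from the 9-value gap spacing reported in this thread:

Code:
import numpy as np

# Toy model: 14-bit ADC codes, all exercised
adc_codes = np.arange(0, 16384)

# Hypothetical per-channel scale factor (inferred from holes every 9 values: 9/8)
scale = 9.0 / 8.0

# Scale, round back to integers, clip to the 14-bit range
scaled = np.clip(np.floor(adc_codes * scale + 0.5), 0, 16383).astype(int)

# Which output codes in 0..127 never occur?
present = np.zeros(16384, dtype=bool)
present[scaled] = True
holes = [v for v in range(128) if not present[v]]
print(holes)   # [4, 13, 22, 31, ...]: one missing code in every run of 9
               # (exact positions in a real NEF depend on the true factor,
               #  rounding rule and black level)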

The factors are different for the red and blue channels, which only adds to the curiosity about why they are doing this. From my understanding of digital image processing, they could just adjust the WB factors and achieve the same end result.

A practical implication (which I think is negative) is that if you shoot at base ISO, the green channel shows saturation at a value close to 15800 (out of 16383 for 14 bit) while the red and blue channels go all the way up to 16383. This gives the false impression of the green channel saturating before the other channels, but it is just the effect of the multiplication mentioned before, which throws values close to 15800 well above 16383 for the R and B channels. The second image shows the histogram for the highest values, where the apparent early saturation of the green channel can be observed.
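
A minimal sketch of that clipping side effect, using the ~15800 clip level observed above and the same assumed 9/8 factor (both are observations/assumptions from this thread, not Nikon specifications):

Code:
SENSOR_CLIP = 15800          # approximate green-channel clip level observed (D800, base ISO)
SCALE_RB = 9.0 / 8.0         # assumed pre-scaling factor for R and B
WHITE = 16383                # top of the 14-bit range

green_code = SENSOR_CLIP                                   # green is stored as-is
red_code = min(int(SENSOR_CLIP * SCALE_RB + 0.5), WHITE)   # R/B are scaled, then clipped

print(green_code, red_code)  # 15800 16383: green appears to saturate "early" in the raw histogram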

I have found explanations for other manipulations of the raw data in Nikon cameras, such as the black point offset and the "Star-killing" algorithm, but I haven't found any about this.

Regards,
Francisco

Fine_Art
« Reply #1 on: January 14, 2013, 11:43:15 PM »

Are you looking at 14-bit data in 16-bit space? My software has radio buttons for viewing the histogram in 8, 10, 12, 14 or 16 bits. My raws are 12 bit; if I select anything up to and including 12, the file is continuous. If I select 14 or 16 it shows gaps.

Most data manipulations (photo edits) I do after conversion are in the full 16-bit TIFF space, and the file is smooth again when viewed in 16 bit.
FranciscoDisilvestro
« Reply #2 on: January 15, 2013, 12:03:56 AM »

Hi,

No data manipulation; these are the raw values as they are in the NEF file, with the x-axis showing each individual value.

If you have a 12 bit raw file then you will have values between 0 and 4095; a 14 bit raw file will have values between 0 and 16383 and so on.

When the histogram shows "holes" in the red channel at values 32, 41, 50, 59, 68, etc. (i.e. every 9 values), it means that there are no red pixels with those values, which is typical of a multiplication.

As an example, multiplying the raw values by 2 is equivalent to an exposure compensation of +1 EV, but you will end up with a raw file where there are no odd values for any pixel.
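
A quick sketch of both points: the multiply-by-2 case, and how the factor can be guessed from the gap spacing (the 9/8 value is only an inference from the spacing reported above):

Code:
import numpy as np

values = np.random.randint(0, 2048, size=100_000)   # pretend ADC output

doubled = values * 2                                 # +1 EV as a raw-domain multiply
print(np.any(doubled % 2 == 1))                      # False: no odd values remain

# If holes appear every k output codes, each block of (k - 1) input codes
# has been stretched over k output codes, i.e. factor ~ k / (k - 1)
k = 9
print(k / (k - 1))                                   # 1.125, i.e. 9/8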

Fine_Art
« Reply #3 on: January 15, 2013, 12:36:06 AM »

Quote from: FranciscoDisilvestro
No data manipulation; these are the raw values as they are in the NEF file, with the x-axis showing each individual value. If you have a 12 bit raw file then you will have values between 0 and 4095; a 14 bit raw file will have values between 0 and 16383, and so on. When the histogram shows "holes" in the red channel at values 32, 41, 50, 59, 68, etc. (i.e. every 9 values), it means that there are no red pixels with those values, which is typical of a multiplication. As an example, multiplying the raw values by 2 is equivalent to an exposure compensation of +1 EV, but you will end up with a raw file where there are no odd values for any pixel.

I understand your idea that it is a multiplication. What I was proposing is that it is your view settings in the software. I don't know the software, so I am suggesting the view is 16 bit (0-65535); the missing values would be rounding.

A sensor cannot fill up while skipping values. You could have the occasional histogram value that is skipped, but not a repeating pattern; that would be a manipulation.

Your ISO gain would skip groups. What is the shot's ISO? What is that ISO as measured by DxOMark? Try using those values for your multiplication factor.
Fine_Art
« Reply #4 on: January 15, 2013, 12:44:30 AM »

There would be no gain at your camera's base ISO. Does the pattern vanish in raws shot at base ISO?
FranciscoDisilvestro
« Reply #5 on: January 15, 2013, 12:55:43 AM »

Hi,

Those are at base ISO, and there are no skipped values in the histogram. This is actually a known characteristic of NEF files, but what I have never found is an explanation of why it is done.

Regards

Jack Hogan
« Reply #6 on: January 15, 2013, 03:09:36 AM »

Quote from: FranciscoDisilvestro
does anybody know what could be the reason or benefit of multiplying the values from the R and B channels by a factor?

It looks like White Balance pre-conditioning. Since the Blue and Red filters in the CFA are less sensitive than the Green filter, the R* and B* data are scaled to provide an approximately 'correct' relative output in the raw data, partly compensating for the filters' different physical properties, prior to the light-source-dependent white balance. See below for an example.



Saturation of the Green channel is what it is by design. Since the Red and Blue sensels physically have more 'highlight headroom', it makes sense to have them use the entire 14-bit range when scaled, to maintain accuracy. Other than in blown-highlight situations where this becomes apparent, it is uncommon to find complete (i.e. not just a saturated detail) natural scenes where the Red or Blue raw channel clips before Green, before or after scaling. Take a look at white balance settings in various lighting conditions to see that.
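
As a sketch of what such pre-conditioning could look like in a processing chain (the factors below are hypothetical placeholders and the function names are made up for illustration; only the ordering, a fixed per-channel scale in-camera followed by light-source-dependent WB in software, reflects the idea described above):

Code:
import numpy as np

def precondition(raw_r, raw_g, raw_b, pre_r=1.125, pre_b=1.06, white=16383):
    """Fixed per-channel scaling applied when the raw file is written
    (pre_r and pre_b are hypothetical placeholder factors)."""
    r = np.clip(np.round(raw_r * pre_r), 0, white)
    b = np.clip(np.round(raw_b * pre_b), 0, white)
    return r, raw_g, b

def white_balance(r, g, b, wb_r, wb_b):
    """Light-source-dependent white balance applied later by the raw converter."""
    return r * wb_r, g, b * wb_b

# Placeholder numbers: the fixed pre-conditioning happens in-camera,
# the variable WB multipliers are applied afterwards in software.
r, g, b = precondition(8000.0, 9000.0, 7000.0)
print(white_balance(r, g, b, wb_r=1.9, wb_b=1.4))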

Cheers,
Jack
« Last Edit: January 15, 2013, 03:37:44 AM by Jack Hogan »
Iliah
« Reply #7 on: January 29, 2013, 09:08:47 PM »

> It looks like White Balance pre-conditioning

It is "WB preconditioning". 
BartvanderWolf
« Reply #8 on: January 30, 2013, 05:05:02 AM »

Quote from: Jack Hogan
Other than in blown highlight situations where this becomes apparent, it is uncommon to find complete (i.e. not just a saturated detail) natural scenes where the Red or Blue raw channel clip before green, before or after scaling.

Hi Jack,

It is quite common for Red in flowers to saturate before the Green channel does. That would create an even bigger issue with this WB pre-compensation, leading to a requirement to under-expose the Green channel, which then becomes noisier. That would be bad news if you are in the business of flower photography (which is a serious branch of product photography in the Netherlands).

Since this post-quantization multiplication doesn't add one iota of accuracy, I also wonder if there is another reason, e.g. improved compression (speed/efficiency).

Cheers,
Bart
eronald
« Reply #9 on: January 30, 2013, 06:23:14 AM »


I simply don't understand the need for scaling in a format which no user can ever see.

But then I'm really not good at all this analytical stuff; working out circuit functionality in the IC reverse-engineering class I took was a pain, while most of my classmates liked it.

Edmund

Quote from: BartvanderWolf
It is quite common for Red in flowers to saturate before the Green channel does. That would create an even bigger issue with this WB pre-compensation, leading to a requirement to under-expose the Green channel, which then becomes noisier. That would be bad news if you are in the business of flower photography (which is a serious branch of product photography in the Netherlands).

Since this post-quantization multiplication doesn't add one iota of accuracy, I also wonder if there is another reason, e.g. improved compression (speed/efficiency).
sandymc
« Reply #10 on: January 30, 2013, 07:24:19 AM »

There are at least two reasons to pre-multiply that I can think of - firstly, it's normal to pre-multiply before demosaicing anyway; demosaicing engines tend to work a little better if grey is grey. Secondly, if you are going to level-compress the data (as Nikon do in compressed NEFs), it is better to use all the available levels for data in all three channels rather than only using all the levels in one channel, and having a lot of possible levels unused in the other two channels.
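
A toy illustration of the second point (the square-root curve below is a generic stand-in, not Nikon's actual compression table): a channel that has been pre-scaled to span the full 14-bit input range also reaches the top of the compressed output range, so no output levels are wasted.

Code:
import numpy as np

def compress(codes, in_max=16383, out_levels=1024):
    """Generic square-root-style level compression (illustrative only,
    not Nikon's actual lookup table)."""
    return np.floor(np.sqrt(codes / in_max) * (out_levels - 1)).astype(int)

raw = np.arange(0, 15801)                                # channel clipping near 15800, unscaled
scaled = np.clip(np.floor(raw * 9 / 8 + 0.5), 0, 16383)  # pre-scaled to span the full 14-bit range

print(compress(raw).max())      # 1004: the top of the table is never reached
print(compress(scaled).max())   # 1023: pre-scaled data uses the last output levels too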

Sandy
BartvanderWolf
« Reply #11 on: January 30, 2013, 10:20:27 AM »

Quote from: sandymc
There are at least two reasons to pre-multiply that I can think of - firstly, it's normal to pre-multiply before demosaicing anyway; demosaicing engines tend to work a little better if grey is grey.

Hi Sandy,

That doesn't require doing it in the raw file data. It could better be postponed to a more capable processing environment, which can also use more bits.

Quote from: sandymc
Secondly, if you are going to level compress data (as Nikon do in compressed NEFs), it is better to use all the available levels for data in all three channels rather than only using all the levels in one channel, and having a lot of possible levels unused in the other two channels.

That's along the lines of what I was also thinking. More zeroes in the LSBs (least significant bits) allows one either to compress more efficiently or to drop LSBs altogether.

Such mutilation of image data is not welcome in astrophotography circles, so the question becomes: can it be avoided, or is this the best Raw data quality?

Cheers,
Bart
sandymc
« Reply #12 on: January 30, 2013, 11:28:23 AM »

Quote from: BartvanderWolf
That doesn't require doing it in the raw file data. It could better be postponed to a more capable processing environment, which can also use more bits.

That's along the lines of what I was also thinking. More zeroes in the LSBs (least significant bits) allows one either to compress more efficiently or to drop LSBs altogether.

Such mutilation of image data is not welcome in astrophotography circles, so the question becomes: can it be avoided, or is this the best Raw data quality?

Well, NEF dates back quite a bit now - back in those days storage was precious and it took a long time to move bytes, so the trade-offs were different.

But I'd be surprised if there was actually a reduction in data quality for an uncompressed image. Simplistically, if the original data ranged from 0-100 and it's now scaled to 0-1000, then yes, you would see "holes"; in fact 90% of the possible values would be holes - it would go 0, 10, 20, 30, etc. But you still have the same number of real levels as the sensor/converter measured.
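
A quick check of that argument with made-up numbers: scaling and rounding integer data spreads it over more codes and creates holes, but the count of distinct levels stays essentially the same.

Code:
import numpy as np

measured = np.random.randint(0, 101, size=50_000)      # pretend converter output, 0-100
scaled = np.floor(measured * 10 + 0.5).astype(int)      # stored as 0-1000, with holes

print(len(np.unique(measured)))   # ~101 distinct real levels
print(len(np.unique(scaled)))     # still ~101: the holes don't remove information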

Sandy
FranciscoDisilvestro
« Reply #13 on: January 30, 2013, 02:17:43 PM »

Hi,

Thanks for your inputs. I have checked more NEFs, and it seems that those multipliers are not always the same; they depend on camera settings. This might be done to compensate for slight nonlinearities while still being able to use the same WB multipliers in post-processing.

Regards,
Francisco

bwana
« Reply #14 on: January 31, 2013, 07:03:10 AM »

So the idea is to start with a 'white balanced' raw file? And because there are twice as many green pixels, you have to double the blue and red pixel data? That should increase the amplitudes - but why are there holes or gaps in the histogram? That implies those channels start out with less dynamic range and are then scaled to higher and lower values, like what happens in post with a contrast increase.

The gaps remind me of early versions of Photoshop - when you looked at 8-bit JPEGs there were all these gaps. But if it were just the software being used to look at 14-bit files in 16-bit space, the green channel should have gaps too.

I don't understand it either.
sandymc
« Reply #15 on: January 31, 2013, 07:37:18 AM »

Quote from: bwana
That implies those channels start out with less dynamic range and are then scaled to higher and lower values

Not necessarily less dynamic range; dynamic range depends, e.g., on noise, and the noise might not be the same in each channel, although in practice it's often similar. But less resolution (i.e. fewer levels), yes.

Sandy
bwana
« Reply #16 on: January 31, 2013, 08:05:57 AM »

At the risk of going off topic - I don't see how the 'resolution' of the number of gray shades is that different from dynamic range. Although we think of dynamic range as the number of EV between darkest and lightest (the actual range of light levels a sensor can resolve, as represented by the lowest and highest numbers), dynamic range can also be interpreted as the number of detectable shades between darkest and lightest. And this is the 'version' of dynamic range I am using.
sandymc
« Reply #17 on: January 31, 2013, 08:31:49 AM »

Quote from: bwana
At the risk of going off topic - I don't see how the 'resolution' of the number of gray shades is that different from dynamic range. Although we think of dynamic range as the number of EV between darkest and lightest (the actual range of light levels a sensor can resolve, as represented by the lowest and highest numbers), dynamic range can also be interpreted as the number of detectable shades between darkest and lightest. And this is the 'version' of dynamic range I am using.

It's a bit of a "how many angels can dance on a pinhead" debate, but "darkest" is mostly defined by the noise level - below the noise floor you don't know how dark dark is(!) - and for most sensors the noise floor is well above the lowest level that the A/D can measure.
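
A worked example of that definition (the clip level and read noise below are made-up example values): engineering dynamic range is the ratio between the clipping point and the noise floor, expressed in stops, regardless of how many ADC codes lie in between.

Code:
import math

clip_level = 15800      # raw clipping point in ADC codes (example value)
read_noise = 3.5        # read-noise floor in ADC codes (example value)

dr_stops = math.log2(clip_level / read_noise)
print(f"{dr_stops:.1f} stops of dynamic range")   # ~12.1 stops for these numbers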

Sandy
Jack Hogan
« Reply #18 on: February 01, 2013, 12:15:23 PM »

Hey Bart,

I believe that there are many reasons why data may be scaled before it is written to the raw file, most of which are camera/sensor dependent and which we can only guess at (some of them were mentioned in this thread).

Quote from: BartvanderWolf
It is quite common for Red in flowers to saturate before the Green channel does.

'Quite' common for Red in flowers to saturate before Green - in the R* and G* raw data channels? Are you sure? Everything I know on this topic I learned from Iliah (thanks for the spell check :-), and my experience plus some of his posts have led me to believe that it is 'uncommon' in the situation mentioned in my post:

Quote from: Jack Hogan
Other than in blown highlight situations where this becomes apparent, it is uncommon to find complete (i.e. not just a saturated detail) natural scenes where the Red or Blue raw channel clip before green, before or after scaling.

In red flower shots, it is on the other hand more common for the Red Channel to saturate before Green after typical rendering in a colorimetric color space such as sRGB.

Jack
« Last Edit: February 01, 2013, 01:48:02 PM by Jack Hogan »
Jack Hogan
« Reply #19 on: February 01, 2013, 01:54:56 PM »

Quote from: bwana
I don't see how the 'resolution' of the number of gray shades is that different from dynamic range. Although we think of dynamic range as the number of EV between darkest and lightest (the actual range of light levels a sensor can resolve, as represented by the lowest and highest numbers), dynamic range can also be interpreted as the number of detectable shades between darkest and lightest. And this is the 'version' of dynamic range I am using.

You are probably interested in DxOMark's Tonal Range measurement, then. However, that tells you more about the capabilities of the human visual system than about the sensor's.

Jack