 Author Topic: Will Michael revisit ETTR?  (Read 56088 times)
Guillermo Luijk

the DXO normalization procedure is of interest. The normalized SNR equation is:

SNRnorm = SNR + 20 * log10 (sqrt[N/N0]),

where N0 is the original number of pixels, N is the number of pixels for the sensor with the higher pixel count, and SNR is the original SNR.

If we average 4 pixels into one, the formula shows that the SNR increases by 6.02 dB or 1 stop, in agreement with Emil's figure. I think DXO is using flat patches to derive their figures. As you suggest, in real world use with demosaiced images, the SNR may be somewhat less than the theoretical value.

I think the 'patches discussion' is going further than it really deserves. Averaging 4 pixels of the same value plus their individually added noise improves SNR by a factor of 2; I think we all agree here. If the signal is different on each pixel, the noise will also be different, so the SNR will be different on each pixel. If each source pixel has its own SNR, it's nonsense to look for 'SNR improves by a factor of X', since the source SNR is not unique.

But once defined, the 'patch model' can be extended to any real-world case for the purpose of SNR normalisation. It will always be an approximation, but a valid one for making noise performance from different sensors comparable. It is not intended to quantify SNR on any real picture of our cat.

SNR_norm_dB = SNR_perpixel_dB + 20 * log10 [(Mpx / 12.7)^0.5]
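For what it's worth, the normalisation can be sketched in a few lines of Python (the function name is made up, and the 12.7 Mpx reference is just taken from the formula above; this is an illustration, not DXO's actual code):

```python
import math

def snr_norm_db(snr_perpixel_db, mpx, ref_mpx=12.7):
    """SNR_norm = SNR + 20*log10(sqrt(N/N0)): normalise a per-pixel
    SNR (in dB) to a reference pixel count."""
    return snr_perpixel_db + 20 * math.log10(math.sqrt(mpx / ref_mpx))

# Averaging 4 pixels into one (ratio N/N0 = 4) gains 20*log10(2) dB:
print(round(snr_norm_db(0.0, 4.0, ref_mpx=1.0), 2))  # 6.02, i.e. 1 stop
```

A 4:1 pixel ratio always lands on 6.02 dB, which is the 1-stop figure quoted above.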

Regards
 « Last Edit: August 31, 2011, 10:02:18 AM by Guillermo Luijk »

BartvanderWolf

While awaiting Emil's analysis (presuming he takes the trouble to reply to your post), the DXO normalization procedure is of interest. The normalized SNR equation is:

SNRnorm = SNR + 20 * log10 (sqrt[N/N0]),

where N0 is the original number of pixels, N is the number of pixels for the sensor with the higher pixel count, and SNR is the original SNR.

If we average 4 pixels into one, the formula shows that the SNR increases by 6.02 dB or 1 stop, in agreement with Emil's figure. I think DXO is using flat patches to derive their figures. As you suggest, in real world use with demosaiced images, the SNR may be somewhat less than the theoretical value. The DXO engineer also states that 4:1 binning outside the sensor doubles the SNR whereas hardware binning quadruples the SNR. What are your figures?

Hi Bill,

joofa's numbers are the same, he is just acting a bit anal about the fact that most real life images are dominated by non-uniform (image detail) patches, so it's hard to calculate average noise (the amount of noise is different for each pixel where the signal is different as well).

Even with uniform areas, a non-binning type of averaging further complicates the noise averaging statistics, because the downsampling filters are not uniform over their support width (e.g. Lanczos window). He conveniently does not mention the possibility (although not practical with subject motion) to average multiple exposures of the same scene, as is e.g. done routinely by astrophotographers. That way each pixel gets averaged as expected and noise is reduced in quadrature, regardless of its adjacent pixels' signal levels.
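The frame-averaging case is easy to verify numerically. A toy simulation (all numbers are arbitrary: a flat 'scene' at level 100, Gaussian noise of sigma 10, 16 exposures):

```python
import random
import statistics

random.seed(0)
TRUE_SIGNAL = 100.0   # flat scene level (arbitrary)
NOISE_SIGMA = 10.0    # per-exposure noise sigma (arbitrary)
FRAMES = 16           # number of exposures to stack
PIXELS = 5000         # pixels used to estimate the residual noise

# Each pixel sees the same signal in every frame, plus independent noise.
stacks = [[TRUE_SIGNAL + random.gauss(0.0, NOISE_SIGMA) for _ in range(FRAMES)]
          for _ in range(PIXELS)]
averaged = [statistics.mean(frames) for frames in stacks]

# Noise adds in quadrature, so stacking 16 frames divides sigma by sqrt(16) = 4.
print(round(statistics.stdev(averaged), 2))  # ≈ 2.5 (= 10 / 4)
```

This works regardless of what the neighbouring pixels contain, which is exactly why astrophotographers rely on it.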

Cheers,
Bart
Ray

The point is that noise acts to dither the levels.  Suppose the true signal is some value X between zero and one, over a patch of the image (we are going to ignore natural scene variation for the purpose of answering your question).  Suppose the noise is of strength N, for example let N be one level.  The noise adds a random number roughly between -N and N to X, so that the pixel wants to record some number between X-N and X+N.  Of course, the resulting signal plus noise is digitized so the output is either 0 or 1; if the noise is random (uncorrelated from pixel to pixel), the value of X is reflected in the percentage of 1's vs 0's in the patch -- a fraction X of the pixels will be 1 and the rest 0.  If we average the levels over a large enough patch, we recover the original signal, even though each individual pixel only recorded 0 or 1.

This is the basic idea that allows one to trade resolution for noise -- and why downsampled images look less noisy.  Note also that while the average is more finely graded than steps of one, that doesn't mean we buy anything by making individual pixels record values more finely spaced than the level of noise, because the individual values are jumping around randomly by an amount between -N and N.
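Emil's percentage-of-1s argument is easy to reproduce. A toy simulation (a true level of 0.3, uniform noise of one level peak-to-peak, a 100,000-pixel patch — all arbitrary choices):

```python
import random

random.seed(1)
X = 0.3          # true signal, somewhere between levels 0 and 1
N = 1.0          # noise amplitude: about one level, peak to peak
PATCH = 100_000  # pixels in the uniform patch

def quantize(v):
    """Digitize to the nearest available level, here just 0 or 1."""
    return min(1, max(0, round(v)))

# Each pixel records only 0 or 1, yet the fraction of 1s encodes X.
samples = [quantize(X + random.uniform(-N / 2, N / 2)) for _ in range(PATCH)]
print(round(sum(samples) / PATCH, 2))  # patch average ≈ 0.3
```

Roughly 30% of the pixels land on 1, so averaging the patch recovers 0.3 even though no individual pixel can record that value.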

Thanks for attempting the explanation, Emil, even though I have to admit it is not all totally clear.

Nevertheless, I can see with my own eyes that noisy images after downsampling exhibit less noise. A similar effect can be achieved simply by viewing the image from a greater distance.

What still puzzles me is that a 14 bit A/D converter in a camera with a good dynamic range, such as the D7000, will not provide any IQ advantages in the deepest shadows, compared with 12 bit.

However, I can understand this might be the case with a Canon camera which doesn't go beyond 12 stops of DR.

I've gone back to the RAW files and created crops of the 12th, 13th and 14th stops, which show a clear improvement in IQ in the 12th stop.

Now, you seem to be claiming that such improvement in that 12th stop is entirely due to the greater quantity of light resulting in an improved SNR (1/500th sec as opposed to 1/2000th for the 14th stop).

Of course, I am able to appreciate that 4x the number of photons will improve the SNR and the IQ as a result.

I'm just a bit skeptical that the image quality in that 12th stop, as shown in my attached image, could be achieved with just the one level per channel of a 12 bit converter, as opposed to the 4 levels per channel of the 14 bit converter.

Normally, I would just repeat the test in 12 bit and 14 bit modes. If I see a difference at the same very low exposures, then there's a difference, despite any theory from an eminent physicist.

Unfortunately, my D7000 is in for repair due to its inability to autofocus in cold weather. It looks as though I won't get it back for a while because it is now Spring in Australia, and the Nikon repair agent is unable to duplicate the problem in the artificially cold environments he has created.

Tell me now whether I can expect to see the same image quality in that 12th stop with the D7000 set to 12 bit mode, and when I get the camera back, or a new replacement, I'll carry out the tests and tell you whether you are right or wrong.
bjanes

I think the 'patches discussion' is going further than it really deserves. Averaging 4 pixels of the same value plus their individually added noise improves SNR by a factor of 2; I think we all agree here. If the signal is different on each pixel, the noise will also be different, so the SNR will be different on each pixel. If each source pixel has its own SNR, it's nonsense to look for 'SNR improves by a factor of X', since the source SNR is not unique.

Quite true. Also, we should remember that the SNR for each color of the Bayer array will be different. Figures are usually given for the green channels, since they should be near saturation with proper exposure. However, the red and blue channels will be below saturation according to the white balance multipliers. Nonetheless, their noise will also be reduced by a like factor when adding in quadrature.
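To put rough numbers on that (the full-well capacity and white-balance multipliers below are made up for illustration, and the channels are assumed shot-noise limited, where SNR = sqrt(signal)):

```python
import math

FULL_WELL = 40000                                # electrons at saturation (illustrative)
WB_MULTIPLIERS = {"R": 2.0, "G": 1.0, "B": 1.6}  # hypothetical daylight multipliers

for ch, mult in WB_MULTIPLIERS.items():
    signal = FULL_WELL / mult                    # raw level sits below saturation by the WB factor
    snr_db = 20 * math.log10(math.sqrt(signal))  # shot-noise-limited SNR in dB
    # Averaging 4 sensels still adds the same 20*log10(2) ≈ 6.02 dB per channel:
    print(f"{ch}: {snr_db:.1f} dB -> {snr_db + 20 * math.log10(2):.1f} dB")
```

The red and blue channels start out a few dB behind green, but the averaging gain is the same 'like factor' for all three.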

Regards,

Bill
joofa

And, remember, that is just one way of doing it. In fact, I have changed my own approach somewhat from the above links, which are years old.

joofa's numbers are the same, he is just acting a bit anal about the fact that most real life images are dominated by non-uniform (image detail) patches, so it's hard to calculate average noise (the amount of noise is different for each pixel where the signal is different as well).

I don't know how you can claim the numbers are the same without even knowing what I did. Kindly see the link posted above.

Quote
Even with uniform areas, a non-binning type of averaging further complicates the noise averaging statistics, because the downsampling filters are not uniform over their support width (e.g. Lanczos window). He conveniently does not mention the possibility (although not practical with subject motion) to average multiple exposures of the same scene, as is e.g. done routinely by astrophotographers. That way each pixel gets averaged as expected and noise is reduced in quadrature, regardless of its adjacent pixels' signal levels.

I don't think you realize that there is more to it than quadrature. Noise still adds in quadrature, but that is not the end of story. Again see the link above.

Sincerely,

Joofa

Joofa
http://www.djjoofa.com
hjulenissen

The DXO engineer also states that 4:1 binning outside the sensor hardware doubles the SNR whereas hardware binning quadruples the SNR. What are your figures?

Regards,

Bill
But using digital processing, the choice of averaging kernel is without limits, including negative coefficients and space-variant, signal-dependent (non-linear), manually guided processing, compared to the simple box-car filter that binning amounts to, even for non-Bayer sensels. This might (or might not) be enough to offset the initial advantage of sensor binning.

-h
hjulenissen

I think the 'patches discussion' is going further than it really deserves.
What is the reason that patches are being discussed? Just curiosity, or is there some real-world photography application being limited by this approach?

-h
BartvanderWolf

And, remember, that is just one way of doing it. In fact, I have changed my own approach somewhat from the above links, which are years old.

I don't know how you can claim the numbers are the same, without even knowing what I did.

And therein lies the problem. You are not very clear about what you base your assumptions on (so one can only assume the common statistical principles that apply to us all), yet you hint at outcomes that deviate from common experience. You then don't explain, but refer to another website, where you also cause confusion by not explaining that you use a model that assumes correlation between sensels. Why? Nobody but you knows, until the info is tortured out of you. And then (above) you add that the models you used in the discussion on that website are not the models you use now. Well, thanks for creating yet more confusion. Was the earlier model not correct? What are you using now? Why are you using a correlation model in the first place? Any literature/references/experiments you want to share that others can independently confirm?

No wonder Emil has given up on 'discussing' with you.

Quote
Noise still adds in quadrature, but that is not the end of story.

More riddles. I'm not even going to ask...

Sincere cheers,
Bart
ejmartin

I will await Emil's response, since he is more technically adept than I am. In the meantime, what are your calculations?

Emil is not going to follow joofa's red herrings.  This was a thread about ETTR; I am happy to discuss that topic, though I suspect it has been covered amply already.  If the subject is to be changed, how about mass in general relativity?  Did you know that there is no precise definition of local mass in Einstein's theory?  Somehow, it doesn't keep my bathroom scale from working...

emil
joofa

any literature/references/experiments you want to share that others can independently confirm?

At DPReview I have gone over this topic several times with details of what I'm doing. You are welcome to browse my messages there. But in summary, I just used elementary stochastic processes as applied in electrical engineering.

Quote
No wonder Emil has given up on 'discussing' with you.

I think signal processing is not Emil's forte.

Sincerely,

Joofa
 « Last Edit: August 31, 2011, 02:40:42 PM by joofa »

Joofa
http://www.djjoofa.com
bjanes

At DPReview I have gone over this topic several times with details of what I'm doing. You are welcome to browse my messages there. But in summary, I just used elementary stochastic processes as applied in electrical engineering.

I think signal processing is not Emil's forte.

I can see why you are not pleased with Emil. In the thread on DPReview which you referenced, he was apparently aware of your thesis but chose to ignore it.

Regards,

Bill
joofa

I can see why you are not pleased with Emil. In the thread on DPReview which you referenced, he was apparently aware of your thesis but chose to ignore it.

Bill, I have a great deal of respect for you. Let's not make this personal. It's Emil's prerogative to ignore it if he wants to, and that is fine with me. I stand by my assertions, and anybody who doubts them can work out the numbers themselves.

Sincerely,

Joofa

Joofa
http://www.djjoofa.com
kwalsh

Before anyone gets further bent out of shape it appears Joofa is simply saying in a practical sense that the basic 6.02 dB figure for resizing by a factor of two is complicated by two issues.

First, it is rare for anyone to use a rectangular kernel when resizing, and practical kernels will not necessarily produce the same change in SNR. There are a variety of reasons for this, but one would be the differing spatial frequency distributions of the scene detail and the noise, combined with a kernel with a non-uniform frequency response.

Second, since the noise distribution is amplitude-dependent in a real scene, the statistics will be more complicated than a simple model based on uniform patches. This will also cause a deviation from the 6.02 dB figure.
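The first point is easy to quantify for white noise: after normalising a kernel to unit DC gain (so the signal is unchanged), the output noise scales with the square root of the sum of the squared coefficients. A small sketch (the 'Lanczos-like' kernel below is made up, just to show a non-uniform kernel with negative lobes):

```python
import math

def snr_change_db(kernel):
    """SNR change from filtering white noise with a kernel, after
    normalising it to unit DC gain: output noise std scales with
    the L2 norm of the normalised coefficients."""
    dc = sum(kernel)
    k = [c / dc for c in kernel]
    return -20 * math.log10(math.sqrt(sum(c * c for c in k)))

box4 = [1, 1, 1, 1]                            # plain 4:1 box average / binning
lanczos_like = [-0.05, 0.3, 0.5, 0.3, -0.05]   # made-up kernel with negative lobes

print(round(snr_change_db(box4), 2))           # 6.02 dB, the DxO figure
print(round(snr_change_db(lanczos_like), 2))   # ≈ 3.6 dB: a smaller improvement
```

So simply swapping the box filter for a practical resampling kernel already moves the number away from 6.02 dB, before any scene-dependence enters.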

Of what practical use this point is to the ETTR discussion in this thread, I'm not sure. It seems like a thread hijack of a thread hijack. Regardless, he's not off base in stating that the simple 6.02 dB model used by DxO is in fact a simplification, complicated by real-world images and real-world resizing. No need to jump down his throat about it.

If you follow Joofa's posts here and elsewhere you'll know he likes to ask questions instead of answer them and always leaves dangling caveats to everything he says.  If you spend any time in academia you will encounter personalities who seem to think obfuscation makes them appear erudite.  Joofa is one of those souls who hasn't figured out clear communication is a better standard of competence.  You'll need to cut him some slack if you want to figure out what he's trying to say, and usually you'll find he isn't too far off base.  Whether it is worth parsing his rhetorical methods is for the reader to decide.

Ken
PierreVandevenne

BTW, I've just finished a nice book that should be mandatory reading for anyone taking part in longish threads on the Internet.

Alan Goldhammer

BTW, I've just finished a nice book that should be mandatory reading for anyone taking part in longish threads on the Internet

But this ONE is far better and more in keeping with what is going on!

degrub

EPDM? I thought that was a kind of synthetic rubber.

Frank
Ray

In case anyone is still interested in the differences between 12 bit and 14 bit A/D conversions, I've done a few tests with my D7000 of the same target shown before.

I'm surprised that there is not a greater difference in the 12th stop and 11th stop. The differences are subtle. The 12 bit shots exhibit slightly greater grain and a degree of clipping of blacks that no adjustment in ACR can completely correct.

However, in the 13th and 14th stops of DR the differences smack one in the face.

One could argue that such differences are of no consequence because they are apparent only in the deepest shadows which one might want to deliberately clip. However, it seems to me if one is underexposing say 5 stops at ISO 100 as an alternative to using ISO 3200, one wants to be able to retrieve as much detail as possible from the shadows which will appear totally black before exposure compensation.

I'm going to try posting a couple of comparisons of the 13th and 14th stops of DR, but the jpeg engine seems to have difficulty compressing noise and the file sizes are quite large. So, if you don't see the images, don't be surprised. I may have to post them one at a time.
bjanes

In case anyone is still interested in the differences between 12 bit and 14 bit A/D conversions, I've done a few tests with my D7000 of the same target shown before.

I'm surprised that there is not a greater difference in the 12th stop and 11th stop. The differences are subtle. The 12 bit shots exhibit slightly greater grain and a degree of clipping of blacks that no adjustment in ACR can completely correct.

However, in the 13th and 14th stops of DR the differences smack one in the face.

Since a 12 bit linear file can only encode 12 stops of DR, it is confusing to try to extract 14 stops of DR from a 12 bit file.
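The arithmetic behind that limit, spelled out: a linear file's deepest representable stop runs from 1 LSB up to full scale, so the maximum encodable DR equals the bit depth.

```python
import math

for bits in (12, 14):
    levels = 2 ** bits
    # Max DR = log2(full scale / smallest nonzero level) = the bit depth itself.
    print(bits, levels, math.log2(levels))  # 12 -> 12.0 stops, 14 -> 14.0 stops
```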

Regards.

Bill
Ray

Since a 12 bit linear file can only encode 12 stops of DR, it is confusing to try to extract 14 stops of DR from a 12 bit file.

Regards.

Bill

Indeed it is. But Emil's point, as I understood it, was that noise would prevent any advantage of an increase in the number of levels beyond 12 bit, and that 14 bits served no purpose other than a possible slight increase in the accuracy of the conversion.

Perhaps it would be clearer to state that if the DR of the camera is no greater than 12 stops, then more levels than 12 bits afford are of little use.

However if the camera has a DR of more than 12 stops, then more than 12 bits is of use.

If one wishes to engage in pixel peeping, 14 bits does provide a pixel-peeping IQ advantage in the 11th and 12th stops with the D7k. However, I doubt that would be the case with lesser cameras, DR-wise.
Guillermo Luijk

Indeed it is. But Emil's point, as I understood it, was that noise would prevent any advantage of an increase in the number of levels beyond 12 bit, and that 14 bits served no purpose other than a possible slight increase in the accuracy of the conversion.

The D7000 is such a low-noise camera that fewer than 14 bits are not sufficient to properly encode the deep-shadow information at base ISO. But this is consistent with the fact that, as long as noise dithers posterization, having more levels (i.e. having more bits) is useless.

I developed a RAW file from the Pentax K5 (same sensor as D7000), emulating a 12-bit RAW file (by decimating the RAW data from 14-bit to 12-bit prior to RAW development), and posterization began to show up:

14-bit: noise dithers posterization

12-bit: posterization becomes visible

However, doing the same test on the Canon 40D's RAW files, there was no loss of IQ using 12 bits. That camera is noisy enough to make 14 bits unnecessary.
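The decimation step described above can be sketched as follows (an assumption about the exact method — dropping the two least significant bits while keeping the original scale; the actual tooling used for the test is not specified):

```python
def decimate_to_12bit(raw14_values):
    """Emulate a 12-bit capture from 14-bit raw values by zeroing the two
    least significant bits, so the data lands on 4096 distinct levels."""
    return [(v >> 2) << 2 for v in raw14_values]

print(decimate_to_12bit([0, 1, 2, 3, 4, 5, 6, 7]))  # [0, 0, 0, 0, 4, 4, 4, 4]
```

In the deep shadows this coarser 4-DN step is exactly where posterization shows first, unless the camera's noise is large enough to dither it away.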

Regards
