Author Topic: 12, 14 or 16 bits?  (Read 11833 times)
ErikKaffehr
« on: February 17, 2011, 11:42:29 PM »

Hi,

This discussion is limited to the number of bits in the image processing pipeline; it is not a statement about the image quality of different sensors, which depends on many more parameters than the number of bits.

The first image below clearly demonstrates the number of bits actually used by the different sensor systems. It shows the maximum signal (at saturation) divided by the readout noise. Each EV corresponds to one bit, so 13 EV is 13 bits. This figure does not take into account that the larger sensors have more pixels and a larger physical size; those factors will result in better image quality.
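The conversion described here (each EV of engineering dynamic range equals one bit) is easy to sketch. The photon figures below are illustrative, borrowed from numbers quoted later in the thread, not DxO measurements:

```python
import math

def usable_bits(full_well_e, read_noise_e):
    # Engineering dynamic range: saturation signal over read noise,
    # expressed in EV (stops). One EV corresponds to one bit.
    return math.log2(full_well_e / read_noise_e)

print(round(usable_bits(50_000, 6), 1))    # D3x-like figures: 13.0 bits
print(round(usable_bits(50_000, 16), 1))   # P65+-like figures: 11.6 bits
```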

The second figure shows the latest generation of CMOS sensors. Nikon and Pentax are close to utilizing 14 bits; Canon is lagging behind, due to different readout technology.

The third image takes the number of pixels into account. The advantage of the Nikon gets much smaller (around half a stop). This does not include the resolution and MTF advantages of the larger sensor.

The intention of this posting is to illustrate the significance of bits; it is not intended to discuss image quality in general. I'd also add that the data from DxOMark is very reliable and entirely relevant in this context, that is, for how many bits are actually needed.

The finding is that no camera sensor today seems to need 14 bits and MFDBs would do fine with 12 bits.

If there is someone who can explain the advantage of having a 16-bit pipeline on MFDBs, please step forward and inform us!

Best regards
Erik
« Last Edit: February 18, 2011, 12:02:30 AM by ErikKaffehr »

NikoJorj
« Reply #1 on: February 18, 2011, 03:07:33 AM »

An example has already been posted here by Guillermo : http://www.luminous-landscape.com/forum/index.php?topic=49200.msg409770#msg409770
I personally conclude that, with its higher DR, the latest K-5/D7000 Sony sensor is the first to give a vague hint of usefulness to the 13th and 14th bits.

Nicolas from Grenoble
A small gallery
Dick Roadnight
« Reply #2 on: February 18, 2011, 05:30:28 AM »

Hi,

This discussion is limited to the number of bits in the image processing pipeline; it is not a statement about the image quality of different sensors, which depends on many more parameters than the number of bits.

If there is someone who can explain the advantage of having a 16-bit pipeline on MFDBs, please step forward and inform us!

Best regards
Erik
Are you trying to prove:

We have all wasted 10s of Ks on equipment that is no better than prosumer kit?

DXO data is biased, inaccurate, nonsense...?

It is difficult to separate bit depth from IQ, as they tend to be related.

DXO have carefully chosen colours which are indistinguishable on my calibrated monitor, so I do not know which line refers to which camera.

All the lines are pretty linear for most of the range, and none deviates much from linearity at either end, as would be the case if the graph showed the limits of dynamic range of any of the sensors.

If you wanted to do an experiment to determine the latitude or Dynamic range of a sensor, you would have to have a subject with a wider range of luminosity than can be accommodated by the sensor... so 20 levels would be about right.

Even if you never photograph subjects with light areas 2^16 times brighter than the dark areas, higher dynamic range gives smoother gradation between colours, reducing banding, especially in files where the contrast has been enhanced.




Hasselblad H4, Sinar P3 monorail view camera, Schneider Apo-digitar lenses
ejmartin
« Reply #3 on: February 18, 2011, 07:20:41 AM »

Are you trying to prove:

We have all wasted 10s of Ks on equipment that is no better than prosumer kit?

No I don't think that was the point, at least I hope not.

Quote
DXO data is biased, inaccurate, nonsense...?

I suspect DxO data was being used to bolster a contention that 16 bits is superfluous for MFDB files whose pixel values have no more than 12 bits of data.  The last four bits are swamped by noise.

Quote
It is difficult to separate bit depth from IQ, as they tend to be related.

Between 8 bits and 12 bits, most certainly.  Between 12 bits and 16 bits, hardly.  The last four bits are largely random due to noise.

Quote
DXO have carefully chosen colours which are indistinguishable on my calibrated monitor, so I do not know which line refers to which camera.


Agreed.  Nevertheless the D3x is the outlier.

Quote
All the lines are pretty linear for most of the range, and none of the lines deviates much from linearity both ends, as would be the case if the graph indicated the limits of dynamic range of any of the sensors.

It's not the linearity of the curve, it's the value at base ISO that matters.

Quote
Even if you never photograph subjects with light areas 2^16 times brighter than the dark areas, higher dynamic range gives smoother gradation between colours, reducing banding, especially in files where the contrast has been enhanced.

Noise dithers tonal transitions, so there is little benefit to extra bits once the noise exceeds one quantization step.  When the noise exceeds the gradation steps in tonality, it properly dithers transitions; when it doesn't, you get banding.  The tonal steps of 8-bit data are too coarse for the noise to dither.  At anything beyond 12 bits, for a camera with 12 bits of DR, the noise is sufficient to dither.  16 bits is overkill; the data is being oversampled by 4 bits.
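The dithering effect described above can be demonstrated numerically. This is a sketch with made-up numbers (a gentle gradient and an idealized ADC), not any camera's data:

```python
import numpy as np

rng = np.random.default_rng(0)
ramp = np.linspace(10.0, 11.0, 100_000)   # gentle gradient spanning one level

def digitize(signal, noise_sigma):
    # ADC model: add Gaussian read noise, then round to integer levels.
    return np.round(signal + rng.normal(0.0, noise_sigma, signal.shape))

banded   = digitize(ramp, 0.0)   # noise far below one step: hard banding
dithered = digitize(ramp, 0.5)   # noise comparable to one step: dithered

# Local averaging (roughly what the eye does) recovers the gradient only
# when the noise has dithered the staircase:
block_mean = lambda x: x.reshape(-1, 1000).mean(axis=1)
err_banded = np.abs(block_mean(banded) - block_mean(ramp)).mean()
err_dither = np.abs(block_mean(dithered) - block_mean(ramp)).mean()
print(err_dither < err_banded)   # True: noise smooths the transitions
```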
« Last Edit: February 18, 2011, 08:44:07 AM by ejmartin »

emil
cunim
« Reply #4 on: February 18, 2011, 08:24:01 AM »

When working with ultra-low-light images one quickly appreciates the benefits of higher bit densities.  The low-res monochrome images make random noise very evident at the pixel level.  You engineer the camera package to cool enough and read out slowly enough, making trade-offs to get a particular precision (12, 14, 16 bits, whatever).  We accept these trade-offs because we want the SNR to be adequate for our application.

Photography, in contrast, is much less demanding.  Photographic quality is perceptual (this is 5 on a scale of 1-10) as opposed to quantitative (this is the SNR) and I am not sure how one quality metric relates to the other.  Certainly, photographs have a whole lot of spatially convolved things going on, both in hardware and in our heads.  For example, the visual system generates its own perceptual SNR by making adjacent pixel operations that decrease the perceived noise.

What does this mean to required bit density for good photographs?  God knows.  I don't think the pixel SNR of a high end color sensor means much, to tell the truth.  Once you get to the level of quality available from the top DSLR or MFD systems the obvious (to me) image quality differences become global and perceptual as opposed to quantitative.  They are sort of like an MTF chain, in which the final result is a product of all the input factors - sensor, lens, raw decode, camera electronics, etc. 

I expect we will have a relevant metric one day.  Great topic for a psychophysics PhD thesis.
ErikKaffehr
« Reply #5 on: February 18, 2011, 10:43:12 AM »

Hi,

The very simple reason for posting this information was twofold:

- Someone on the forum was asking about the usefulness of bit depth
- In my view it is good information and therefore worth sharing

My opinion is that if you are paying 10K for a 16-bit pipeline instead of a 14-bit pipeline, you are wasting money. If you are paying 10K to get better images, that is a different issue.

Best regards
Erik

« Last Edit: February 19, 2011, 08:31:33 AM by ErikKaffehr »

ErikKaffehr
« Reply #6 on: February 18, 2011, 12:41:22 PM »

Hi,

I'm of the opinion that dynamic range does not normally determine or limit image quality. There are certainly situations where it matters a lot, however; in many of those cases HDR may be an option.

The idea was pretty much to shed some light on the importance of having 16 bits.

As a small comment, much criticism has been directed against DxO's definition of DR, claiming that the SNR for a photographic DR needs to be much higher than one. Applying such a criterion would reduce the number of useful bits even more.

So, why are MFDB images perceived as better? My answers are:

- I don't know
- A larger sensor collects more photons so it would have less noise, but this effect would be quite small
- An MFDB can have significantly higher MTF for a detail of given size, this may matter a lot!
- MFDBs can have better individual calibration that may be used optimally in vendor specific raw converters

In general, having a larger sensor has many benefits, and those benefits should not be ignored. It's easy to invent situations where sensor size is of no advantage, but if we assume that MFDBs are used where they function best the sensor size is a definitive benefit.

Best regards
Erik


« Last Edit: February 18, 2011, 02:32:58 PM by ErikKaffehr »

douglasf13
« Reply #7 on: February 18, 2011, 02:34:18 PM »

Joakim, "theSuede," who is on various forums and works in the industry, claims that MFDBs aren't true 16-bit in the first place; rather, their 16 bits are interpolated up from 12 bits.
deejjjaaaa
« Reply #8 on: February 18, 2011, 10:51:19 PM »

  Joakim, "theSuede," who is on various forums and works in the industry.

he says about himself - "Has worked with the press since 1992, pre-press process engineer since 1999."
Dick Roadnight
« Reply #9 on: February 19, 2011, 02:21:22 AM »

Noise dithers tonal transitions, so there is little benefit to extra bits once the noise exceeds one quantization step.  When the noise exceeds the gradation steps in tonality, it properly dithers transitions; when it doesn't, you get banding.  The tonal steps of 8-bit data are too coarse for the noise to dither.  At anything beyond 12 bits, for a camera with 12 bits of DR, the noise is sufficient to dither.  16 bits is overkill; the data is being oversampled by 4 bits.
Beyond the dynamic range, there is no tone to dither - it is all black or white.

If 16 bits gives you shadow and highlight detail and colour, then that is a real benefit - even if the MTF res is not so high. To deny this would be like insisting that lens manufacturers specify image circle diameters that give the same res as the center of the lens.

If, in the shadows, 1 pixel in 4 or 10 captures a photon, and the software interpolates that to a smooth shade of some colour, that too is a benefit: a trade-off between resolution and noise.

ErikKaffehr
« Reply #10 on: February 19, 2011, 02:36:24 AM »

Hi,

The problem is that each pixel will see 10-15 fake photons on average. So even if a photon were detected, you couldn't tell a real photon from a fake one. The technical definition of DR is the point where the number of fake photons (noise) equals the number of real photons (signal).

The simplest way to measure readout noise is to make an exposure with the lens cap on and measure the signal on each channel (red, green, blue, green2) in linear space, that is, before gamma correction has been applied. You need a tool like "Rawanalyser" or Guillermo's program for that.
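A sketch of that measurement on a simulated dark frame; the bias offset and noise figures are assumptions for illustration, not any camera's specification:

```python
import numpy as np

rng = np.random.default_rng(1)
bias, read_noise_dn = 512.0, 4.0   # assumed black offset and read noise (DN)

# "Lens cap on" frame: only the offset plus read noise reaches the ADC.
dark = np.round(bias + rng.normal(0.0, read_noise_dn, (1000, 1000)))

# In linear raw data (before gamma), read noise per channel is simply the
# standard deviation of the dark frame:
print(round(float(dark.std()), 1))   # ~4.0 DN
```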

The other end of DR is full well capacity. As far as I understand it, you need to photograph a surface near saturation and calculate the noise. The standard deviation is the square root of the number of photons detected (more correctly, electrons collected). So there is no magic involved, just simple math.
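Since shot-noise variance (in electrons) equals the mean signal, a uniform patch lets you estimate the sensor gain, and from that the full well. A sketch with assumed values (the gain and signal level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
gain_e_per_dn = 2.5        # assumed gain (electrons per data number)
signal_e = 40_000          # mean electrons on a near-saturation patch

# Simulated uniform patch: Poisson photon statistics, read out in DN.
patch_dn = rng.poisson(signal_e, 500_000) / gain_e_per_dn

# For Poisson data, variance(e-) equals mean(e-), so mean/variance in DN
# recovers the gain: (s/g) / (s/g^2) = g.
est_gain = patch_dn.mean() / patch_dn.var()
print(round(float(est_gain), 2))   # ~2.5 e-/DN
```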

MTF has nothing to do with bits or DR. It's the amount of contrast (called modulation) the lens can transfer from subject to sensor. Having a high MTF thus increases signal. MTF is by definition 1 at zero frequency and drops almost linearly with increasing frequency, at least for an ideal, diffraction-limited lens.

So, very good lenses assumed, a sensor with 12 micron pitch would have about twice the MTF at pixel resolution compared with a 6 micron pitch sensor. The 12 micron sensor could also hold four times the electrons, so the signal would be 8 times larger. That's an advantage of large pixels. Today's MFDB sensors don't have 12 micron pixels; 6 microns is more typical.

If you bin four 6-micron pixels into one 12-micron pixel in software, the shot noise (photon statistics) behaves as for a big pixel, but read noise is not reduced, so DR improves less. The Phase One Pxx+ series has binning in hardware, which is said to also reduce readout noise; that binning is called Sensor+.
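A numerical sketch of software binning, with illustrative (assumed) photon and read-noise figures: summing four pixels quadruples the signal while the shot-noise and read-noise standard deviations only double, so SNR doubles, but per-pixel read noise is not reduced the way a single large pixel (or hardware binning) would allow:

```python
import numpy as np

rng = np.random.default_rng(3)
photons, read_e = 100.0, 10.0   # assumed per-pixel signal and read noise (e-)

# Four small pixels, each with Poisson shot noise plus Gaussian read noise:
pix = rng.poisson(photons, (400_000, 4)) + rng.normal(0.0, read_e, (400_000, 4))

single = pix[:, 0]
binned = pix.sum(axis=1)        # 2x2 software bin

snr_single = single.mean() / single.std()
snr_binned = binned.mean() / binned.std()
print(round(float(snr_binned / snr_single), 1))   # ~2.0
```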

The enclosed figures show the effect of binning at actual pixels and normalized for print.


Best regards
Erik

Ps. I'm not too happy about DxO's coloring choices, but I can easily tell them apart. I'm also using a calibrated monitor, BTW.

« Last Edit: February 19, 2011, 02:59:19 AM by ErikKaffehr »

Dick Roadnight
« Reply #11 on: February 19, 2011, 02:43:30 AM »

Hi,

The problem is that each pixel will see 10-15 fake photons on average. So even if a photon were detected, you couldn't tell a real photon from a fake one. The technical definition of DR is the point where the number of fake photons (noise) equals the number of real photons (signal).
So there we have it - Dynamic Range, as defined, is meaningless.

ErikKaffehr
« Reply #12 on: February 19, 2011, 03:20:21 AM »

Yeah,

On the other hand, DR and bit depth are the same thing. So if DR is irrelevant, then 16, 14, 12 or 10 bits are also irrelevant.

But you can simply reduce it by subtracting 1, 2, 3 stops, whatever, depending on the criterion you want.

So if you need an SNR of 8 you subtract 3 EV from the measured range.

On the other hand, read noise will only ever be present in the very darkest part of the image. So DR essentially says how much you can increase "exposure" in postprocessing before noise in the darks becomes obvious.

The images from Marc McCalmont, taken with the Pentax K5 and P45+, illustrate the issue. The Pentax K5 image shows more shadow detail than the P45+. That of course says nothing about the other end (highlights). Being able to do exact comparisons is one of the advantages of lab tests.

There is a proverb in engineering: measure with a micrometer, mark with chalk, cut with an axe. 16 bits is the micrometer.

Best regards
Erik




Dick Roadnight
« Reply #13 on: February 19, 2011, 04:50:39 AM »

Dynamic Range, as defined, is meaningless.

Yeah,

On the other hand, DR and bit depth are the same thing. So if DR is irrelevant, then 16, 14, 12 or 10 bits are also irrelevant.
Dynamic Range is relevant, and would be a (more) useful quantitative yardstick for comparing cameras if the definition were more "real world".
Quote

But you can simply reduce it by subtracting 1, 2, 3 stops, whatever, depending on the criterion you want.

So if you need an SNR of 8 you subtract 3 EV from the measured range.
Yes - but you would add to the measured DR rather than subtract.

ErikKaffehr
« Reply #14 on: February 19, 2011, 05:29:34 AM »

Hi,

Nope, say that the SNR you need is 8, that is, the signal is eight times the noise. This would be three stops, as 2^3 = 8. So now you can take the DR measured by DxO and subtract three stops.

So:
A Hasselblad H3DII-50 would give you 12.7 - 3 = 9.7 EV
A Canon EOS 5D II would give you 11.9 - 3 = 8.9 EV
A Nikon D3X would give you 13.7 - 3 = 10.7 EV

These figures are already normalized for megapixels. If we looked at actual pixels, the Nikon would have a larger advantage.

The figures measure the amount of noise but don't describe its look. Some noise is uglier than other.
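The subtraction described in this post can be sketched directly, using the DR figures quoted above:

```python
import math

def usable_dr(measured_dr_ev, snr_target):
    # Stops left once you demand SNR = snr_target instead of SNR = 1.
    return measured_dr_ev - math.log2(snr_target)

for name, dr_ev in [("Hasselblad H3DII-50", 12.7),
                    ("Canon EOS 5D II", 11.9),
                    ("Nikon D3X", 13.7)]:
    print(name, round(usable_dr(dr_ev, 8), 1))   # 9.7, 8.9, 10.7 EV
```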

Best regards
Erik




Dick Roadnight
« Reply #15 on: February 19, 2011, 06:06:28 AM »

Hi,

Nope, say that the SNR you need is 8, that is, the signal is eight times the noise. This would be three stops, as 2^3 = 8. So now you can take the DR measured by DxO and subtract three stops.

So:
A Hasselblad H3DII-50 would give you 12.7 - 3 = 9.7 EV
A Canon EOS 5D II would give you 11.9 - 3 = 8.9 EV
A Nikon D3X would give you 13.7 - 3 = 10.7 EV

These figures are already normalized for megapixels. If we looked at actual pixels, the Nikon would have a larger advantage.
I give up... subtracting stops would be valid if the measured DR was for a high SNR, and you wanted to know what the DR was for 1:1 SNR... there is a trade-off between DR and SNR, and the  lower you set the SNR yardstick, the higher the DR.

So, with DR measured at SNR 1:1 = 12.7, even my lowly Hasselblad H3DII-50 (which I have upgraded to an H4D-60) had a "real world" dynamic range of 15.7 stops - thank you for the information.

Perhaps I will not waste any more of my time on this topic.
Quote
The figures measure the amount of noise but don't describe its look. Some noise is uglier than other.

Best regards
Erik
Now you are talking about real world IQ... but you said that "fake" photons were counted as noise? What you call fake photons is interpolation, and there is a difference between interpolated data (signal) and noise.


ErikKaffehr
« Reply #16 on: February 19, 2011, 08:28:37 AM »

Hi,

In my math book 12.7 - 3.0 is 9.7.

Anyway, the textbook definition of DR is based on SNR = 1; that is what DxO uses. So if you want an SNR of, say, 8, you make the requirement more stringent and therefore reduce the DR, that is, subtract. That said, it is not a given that readout noise would dominate when you specify an SNR of 8; shot noise may come into play.

Please keep in mind that the original posting was about the usefulness of bits, and the technical definition of DR is essentially the same as the number of useful bits.

Best regards
Erik


ejmartin
« Reply #17 on: February 19, 2011, 08:52:47 AM »

Beyond the dynamic range, there is no tone to dither - it is all black or white.

If 16 bits gives you shadow and highlight detail and colour, then that is a real benefit - even if the MTF res is not so high. To deny this would be like insisting that lens manufacturers specify image circle diameters that give the same res as the center of the lens.

If, in the shadows, 1 pixel in 4 or 10 captures a photon, and the software interpolates that to a smooth shade of some colour, that too is a benefit: a trade-off between resolution and noise.

I think you are misunderstanding the nature of DR.   At base ISO, a typical FF DSLR or MFDB captures 40,000 to 80,000 photons (depending on pixel size and efficiency).  Now, a 16-bit capture records data to a part in 2^16 = 65536, so taking a figure in the middle, 60,000 photons, one digital level would naively seem like a change in illumination of one photon's worth.  But it doesn't work that way: the camera electronics have noise, and the voltage fluctuations from that noise are indistinguishable from the voltage change due to an increased or decreased signal.  The noise causes random fluctuations up or down on top of the signal, and therefore throws off the count in the raw data so that it doesn't completely accurately reflect the actual photon count that the camera recorded.

One can translate the camera's electronic noise into an 'equivalent photons' count.  For the D3x at base ISO, say, it is a tad over 6 photons' worth of noise, with a saturation capacity of a little under 50,000 photons.  For the P65+ it seems (estimating from DxO data) that the saturation capacity is also a tad under 50,000 photons, and the electronic noise at base ISO is about 16 photons' worth.

Now let's ask if all those bits are worthwhile.  For the D3x, with 14-bit data recording, the precision of the recording is one part in 2^14; the full-scale range of 0-50,000 photons is divided into 2^14 = 16,384 steps, so each step is about 3 photons' worth.   In a perfect world, an extra two bits would help and the counts would distinguish individual photons, but since the camera's electronic noise amounts to +/- 6 photons' worth of inaccuracy, 14 is ample (in fact, 13 would do).  For the P65+, with 16 photons' worth of inaccuracy, 16/50,000 is more than a part in 2^12, so 12 bits would have been sufficient.

Bit depth is not the same thing as DR; rather, DR bounds the number of bits needed to accurately specify the count delivered by the camera, given the inaccuracy in the count inherent in the camera electronics.
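The arithmetic in the paragraphs above can be checked directly; the photon figures are the ones quoted in the post:

```python
full_well = 50_000   # photons at saturation, roughly, for both cameras

def step_photons(bits):
    # Photons per digital level when full scale spans 2**bits steps.
    return full_well / 2**bits

print(round(step_photons(14), 1))   # 3.1: finer than the D3x's ~6-photon noise
print(round(step_photons(13), 1))   # 6.1: about the D3x break-even point
print(round(step_photons(12), 1))   # 12.2: finer than the P65+'s ~16-photon noise
```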

Finally, the pixel DR is but one of a whole host of measures of data quality, so it's not worth obsessing about.
« Last Edit: February 19, 2011, 08:57:13 AM by ejmartin »

ErikKaffehr
« Reply #18 on: February 19, 2011, 09:18:31 AM »

Hi Emil,

Thanks for chiming in and explaining much better than I could. The intention with the initial posting was to shed some light on the real utilization of bits.

Best regards
Erik



bjanes
« Reply #19 on: February 19, 2011, 03:51:30 PM »

The simplest way to measure readout noise is to make an exposure with the lens cap on and measure the signal on each channel (red, green, blue, green2) in linear space, that is, before gamma correction has been applied. You need a tool like "Rawanalyser" or Guillermo's program for that.

That method will work for Canon and some other cameras that add a black offset to prevent clipping at the black point, but Nikon uses no offset and clips the black point. As shown below, the left half of the bell-shaped curve has been clipped. A better way to determine the read noise is to plot points near the black point short of clipping and extend the regression line from the region where clipping isn't yet present, as shown in the second figure.


The other end of DR is full well capacity. As far as I understand it, you need to photograph a surface near saturation and calculate the noise. The standard deviation is the square root of the number of photons detected (more correctly, electrons collected). So there is no magic involved, just simple math.

If you want to get full well capacity in electrons, you need to isolate shot noise by subtracting out fixed pattern noise, as Roger Clark shows in this post. If you want to convert from data numbers to electrons, you need to determine the camera gain, as Roger explains. For the highlights you have to be careful not to clip the right tail of the bell-shaped curve, as shot noise will decrease when clipping starts and fall to zero when the sensor is saturated or the ADC overflows.
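The frame-differencing trick can be sketched on simulated data (the fixed-pattern amplitude and signal level are assumptions): subtracting two identical flat-field exposures cancels the fixed pattern, leaving twice the temporal (shot) variance.

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (500, 500)
mean_e = 10_000                          # assumed flat-field signal (e-)
fpn = rng.normal(0.0, 30.0, shape)       # assumed per-pixel fixed pattern

def flat_frame():
    # Shot noise varies between frames; the fixed pattern does not.
    return rng.poisson(mean_e, shape) + fpn

f1, f2 = flat_frame(), flat_frame()
shot_var = (f1 - f2).var() / 2.0         # FPN cancels in the difference
total_var = f1.var()                     # still includes the fixed pattern

print(round(float(shot_var / mean_e), 2))   # ~1.0: variance equals the mean
print(bool(shot_var < total_var))           # True: FPN inflates a single frame
```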

Elimination of fixed pattern noise may give overly optimistic results, as banding and other fixed pattern noises will be eliminated and can be quite distracting in the image.

Regards,

Bill