Do you know what a sigma of 7 or 8 electrons means? It doesn't mean that most counts are right and occasionally one will be 3.5-4 electrons off. If the average signal is 15 electrons, you will have many pixels clipping at zero, and others going as far off as 40 or 50 electrons' worth of analog signal. It's a big mess.
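To put rough numbers on that: a quick simulation (assuming a Gaussian signal with mean 15 e- and sigma 7.5 e-, the values from the paragraph above) shows about 2% of pixels falling below zero and clipping:

```python
import numpy as np

# Hypothetical numbers from the paragraph: mean signal 15 e-, sigma 7.5 e-
rng = np.random.default_rng(42)
signal = rng.normal(15.0, 7.5, size=1_000_000)

# Fraction of pixels that would clip at zero (left tail below black)
clipped_fraction = np.mean(signal < 0)
print(f"fraction clipping at zero: {clipped_fraction:.3%}")
```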

Yes, I do know what a standard deviation is, even though you seem to think you are the only one who understands anything. First of all, your method of determining the read noise is non-standard. The standard method, used by Roger and also illustrated at RIT, is to subtract the dark frame from a bias frame, determine the SD of the difference, and divide the result by 1.414. This eliminates any clipping at black, so your crude corrections for clipping are not needed.
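A minimal sketch of that method with simulated frames (the read-noise value, offset, and frame size here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise = 20.0   # ADU; arbitrary value for the simulation
offset = 200.0      # common bias level shared by both frames

# Two frames with the same fixed offset but independent read noise
frame1 = offset + rng.normal(0.0, read_noise, size=1_000_000)
frame2 = offset + rng.normal(0.0, read_noise, size=1_000_000)

# The difference cancels the common offset; the two independent noises
# add in quadrature, so SD(diff) = sqrt(2) * read_noise.
diff = frame1 - frame2
estimate = diff.std() / 1.414
print(f"recovered read noise: {estimate:.2f} ADU")
```

Dividing by 1.414 (i.e. the square root of 2) recovers the single-frame read noise because the variances of the two frames add when you subtract them.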

For example, with the D200, here is a dark frame of the green channel at ISO 1600 as produced by Iris:

[attachment=2897:attachment]

And here is the result of adding 200 to one dark frame (the bias) and then subtracting a second dark frame. Note that one gets a smooth, symmetrical, bell-shaped curve, as one should for a normal distribution.

[attachment=2898:attachment]
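The same behaviour is easy to reproduce in simulation: a single frame clipped at black piles up at zero, while adding an offset before subtracting a second frame leaves a symmetric distribution entirely above zero (the noise level, mean, and offset here are invented for the sketch, not taken from the D200 data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sd = 1_000_000, 23.0   # invented noise level in ADU
mean = 20.0               # small dark-frame mean near the black level

# Single dark frame, clipped at black: the left tail piles up at zero
single = np.clip(rng.normal(mean, sd, n), 0, None)

# Offset-and-subtract: add 200 to one frame, then subtract another
diff = (200.0 + rng.normal(mean, sd, n)) - rng.normal(mean, sd, n)

print(np.mean(single == 0))   # a sizeable fraction is stuck at zero
print(np.mean(diff <= 0))     # essentially none of the difference clips
```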

Note that the two methods agree closely. In the first case the standard deviation is 22.63; in the second case, the standard deviation of the subtraction is 32.304, and dividing by 1.414 gives 22.84.

In the first case, the mean is 20.469 and the standard deviation is 23.881; the usual 95% confidence interval is the mean ± 2 SD, or 20.469 ± 2 × 22.84. This means that 95% of all determinations will fall within this interval. I did not determine the camera gain, but using Roger's data it is 0.5 electrons/ADU. The SD expressed in electrons would therefore be about 11 electrons, which is close to the value Roger obtained in his analysis. Since 20.469 ± 2 × 22.84 extends to negative counts, which are impossible, the single-frame histogram is clipped. With the bias-frame subtraction, however, one has all positive numbers. Your fudge factor does not seem necessary.
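The ADU-to-electron conversion at the end is just a multiplication by the gain; spelled out with the numbers from this post:

```python
gain = 0.5       # electrons per ADU, from Roger's data as cited above
sd_adu = 22.84   # read noise in ADU from the subtraction method

# Read noise in electrons is gain times the SD in ADU
sd_electrons = gain * sd_adu
print(f"read noise: {sd_electrons:.1f} electrons")  # about 11 electrons
```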