Author Topic: Link between high ISO noise and DR?  (Read 9103 times)
ErikKaffehr
« Reply #20 on: October 24, 2008, 11:18:02 AM »

Hi!

I was actually thinking about this, based on experience, while I was photographing with a very good friend in Iceland. My friend was using a Canon 20D and I a Konica-Minolta 7D. I played around with both my friend's pictures and my own, and I felt pretty sure I could pull a lot more shadow detail from my pictures than from my friend's. It may be for different causes. In particular, I felt that the "fill light" was much more effective on the KM 7D than on the 20D.

Best regards
Erik

Quote from: Ray
What I find significant, Bernard, is that dpreview's figure of 12.6 stops for the A900 is a 'best case' scenario which involves extreme adjustments to RAW images in ACR, such as -2.65 EV and a -50 contrast.

I can find no comparable ACR adjustments in relation to other similar cameras that the A900 is compared with. What we have for other cameras are DR figures in relation to the default ACR settings and in relation to 'auto' ACR settings. The figure of 12.6 stops I therefore find misleading. The ACR default settings provide the worst DR figure; the ACR 'auto' settings provide a significantly better figure, and the best DR outcome results from the extreme settings mentioned above which have not been applied to any other camera in the review.

The only DR comparisons I can find in the review relate to in-camera jpegs, and those are quite surprising. Whilst the A900 excels in jpeg mode, having 1.2 stops greater DR than the 5D and 0.8 stops greater DR than the 1Ds3, the 5D is shown as having 0.4 stops greater DR than the D700.

One has to be clear when one is comparing apples with apples. The 12.6 figure for the A900 appears to be an orange.

imagico
« Reply #21 on: October 24, 2008, 12:12:09 PM »

Actually checking the 12.6 stops claim should be fairly easy, since dcraw supports the A900. As I understand things, it is just a matter of taking a black frame and a white frame (i.e. a completely overexposed image), extracting the raw, unmodified values (dcraw -D - although it might make sense to determine DR separately for R, G and B) and determining the ratio of the white frame saturation value to the black frame noise.
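For anyone wanting to try this, a rough sketch of the arithmetic, assuming the two frames have already been extracted to arrays with `dcraw -D` (the frame data below is synthetic, just to make the example self-contained):

```python
import numpy as np

def dynamic_range_stops(white_frame, black_frame):
    """Estimate DR in stops from an overexposed (white) frame and a
    lens-cap (black) frame of raw, unmodified sensor values."""
    # Saturation point: the clipped level the white frame piles up at.
    saturation = white_frame.max()
    # Noise floor: std-dev of the black frame around its bias offset.
    # (A more careful version would also subtract the black offset
    # from the saturation value before taking the ratio.)
    noise = black_frame.std()
    return float(np.log2(saturation / noise))

# Synthetic stand-in data; a real test would load the two raw frames.
rng = np.random.default_rng(0)
white = np.full((100, 100), 4095.0)          # fully clipped 12-bit frame
black = rng.normal(128.0, 4.0, (100, 100))   # bias ~128, read noise ~4 ADU
print(round(dynamic_range_stops(white, black), 1))  # ~10 stops here
```

With a read noise of 4 ADU the answer is about log2(4095/4) ≈ 10 stops; a quieter black frame would score higher.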

In the light of Michael's latest article it might also be interesting to see what DR current top P&S cameras offer in comparison - I am doubtful that they can reach comparable levels, but who knows.  This would be another test case for Bernard's high ISO noise/DR theory - from a completely different side (P&S cameras traditionally offering very poor high ISO performance).  The G10 is not supported by dcraw yet but the P6000 is.

Christoph Hormann
photolog / artificial images / other stuff
ejmartin
« Reply #22 on: October 24, 2008, 01:28:25 PM »

Quote from: imagico
In the light of Michaels latest article it might also be interesting to see what DR current top P&S offer in comparison - i am doubtful that they can reach comparable levels but who knows.

You mean like this?
http://www.naturescapes.net/phpBB3/viewtop...=a&start=22

emil
joofa
« Reply #23 on: October 24, 2008, 02:28:53 PM »

Quote from: GLuijk
In fact a pure binning strategy by averaging 2x2 pixels into 1 final pixel, which according to Emil's statistics doubles the SNR, ...

BR

Emil Martinec has good information; however, unfortunately, like many others he does not consider that there is a difference between the noise reduction when a number of images are averaged together and the noise reduction resulting when an image is resized. I could have read his pages wrongly; however, reading them I could sense it as an underlying assumption, as quoted from his website below:

"Bottom line: At the cost of having half the linear resolution, the superpixel made by binning together a 2x2 block of pixels has twice the signal-to-noise ratio,"
"If one downsamples an image properly, one decreases the resolution, and noise decreases in proportion to the linear change in image size.";

and, additionally, Emil does not present any analysis regarding the differential rate of signal change in the SNR formulation, etc. For example, the best SNR in simple average-based downsampling is related to the second derivative of the signal intensity.

The problem is the assumption that the mean square error used in the formulation of SNR depends only on noise reduction, whereas it has an additional dependency on the signal degradation due to smoothing, which is frequently ignored (the bias term). In the worst case, just consider that at each pixel position you average all the pixels in the image: you end up with a flat, constant image where each pixel has the same value. That is pathetic SNR.

Perhaps the easiest way to see this is that the mean square error of such an estimate is given as:

error = (bias)^2 + (variance of noise)

When you average images of the same scene to get a cleaner image, the average is in the time domain: although the image intensity varies from pixel to pixel, each pixel can be considered to have a true value being estimated (a different one for each pixel, of course). In this case, since the average is an unbiased estimator, the bias is zero, and the variance of the noise decreases as the number of images being averaged increases; therefore the SNR increases as the square root of the number of images.

However, when an image is resized by averaging neighboring pixels, the average is in the spatial domain. Even in the absence of noise, the signal is not constant across neighboring pixels (unfortunately Emil chose the degenerate example of adding noise to a constant image on his website); therefore, the bias term is not zero. Though the noise variance goes down by averaging a larger number of pixels, the bias increases as the square.

There is an optimal number of pixels to average to get the best SNR; however, that number varies at each pixel position, implying that at each pixel position a different number of pixels must be averaged together to get the same maximum SNR at each location! The reason is that the differential rates of change of signal and noise determine the optimal number of pixels to average together, and these parameters change at each location.

It is easy to see when the SNR would be reduced by averaging-based resizing. Suppose you take an image and average each 2x2 block of pixels into 1 pixel; you get an image 1/4 the size of the original. In this case, spatial pixel averaging results in good noise reduction, and many people erroneously conclude that the process can be extended to averaging ever larger numbers of pixels, i.e., 4x4->1, 8x8->1, 16x16->1, etc. But there is a problem: though the noise is being reduced, the image is also becoming more and more blurry. If you keep on doing that, you end up with a constant, flat image and terrible SNR.

On the other hand, it does not matter how many images you average in time to get a cleaner image, provided all those images are of the same static shot in which each pixel had a fixed true value corrupted by (hopefully zero-mean) noise. The average of a constant is the same constant, so the original signal is *not* blurred (the bias is zero), while the noise is reduced significantly, and the SNR improves.
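The bias/variance tradeoff described above can be sketched numerically. This is my own toy construction, not code from anyone's article: a smooth synthetic ramp stands in for a real scene; temporal averaging of repeated noisy frames drives the error down monotonically, while spatial box averaging of a single frame shows an optimal window size beyond which the bias (blur) term dominates.

```python
import numpy as np

rng = np.random.default_rng(42)

# A smooth but non-constant "true" scene: a 256x256 ramp.
x = np.linspace(0.0, 1.0, 256)
truth = np.outer(x, x)
SIGMA = 0.1                                   # additive noise level

def noisy(img):
    return img + rng.normal(0.0, SIGMA, img.shape)

def temporal_rmse(n):
    # Average n noisy exposures of the SAME scene: the estimator is
    # unbiased, so the error falls like SIGMA / sqrt(n).
    avg = np.mean([noisy(truth) for _ in range(n)], axis=0)
    return float(np.sqrt(np.mean((avg - truth) ** 2)))

def boxavg(img, k):
    # Average non-overlapping k x k blocks, then repeat each value back
    # up to full size so the result is comparable against `truth`.
    h, w = img.shape
    b = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(b, k, axis=0), k, axis=1)

print(temporal_rmse(1), temporal_rmse(16))    # error keeps shrinking

frame = noisy(truth)                          # one noisy exposure
for k in (2, 4, 8, 16, 32):
    # error = bias^2 + variance: the noise part shrinks as 1/k, but
    # the bias (blur of the non-constant signal) grows with the window.
    rmse = float(np.sqrt(np.mean((boxavg(frame, k) - truth) ** 2)))
    print(k, round(rmse, 4))
```

On this ramp the spatial RMSE falls from k=2 to around k=8, then rises again by k=32 as the bias term takes over, while the temporal average just keeps improving: exactly the asymmetry the post argues for.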
« Last Edit: October 24, 2008, 02:33:09 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
imagico
« Reply #24 on: October 24, 2008, 03:45:10 PM »

Quote from: ejmartin

Yes, thanks.  This is probably quite good for a sensor of this size.

ejmartin
« Reply #25 on: October 24, 2008, 04:23:37 PM »

Quote from: joofa
stuff about noise

I think you may be talking about noise reduction, whereas the quoted discussion in the article has in mind the noise itself.  Photon noise (the only signal dependent noise of consequence in this context) has variance equal to signal, and when binning pixels, the variance of the binned pixels equals the total signal, which is the sum of the individual signals.  This is a basic property of Poisson statistics -- the sum of the variances is the variance of the sum.  
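The Poisson property is easy to verify numerically. A small sketch with synthetic photon counts (not real raw data): binning 2x2 quadruples the signal and the variance, so the standard deviation doubles and the SNR doubles with it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Photon (shot) noise is Poisson: variance equals the mean signal.
mean_photons = 400
pixels = rng.poisson(mean_photons, size=(512, 512)).astype(float)

snr_pixel = pixels.mean() / pixels.std()      # ~ sqrt(400) = 20

# Bin 2x2 blocks by summing: the summed signal is 4x, and by Poisson
# statistics the variance of the sum is the sum of the variances (4x),
# so the standard deviation is 2x and the SNR doubles.
binned = pixels.reshape(256, 2, 256, 2).sum(axis=(1, 3))
snr_binned = binned.mean() / binned.std()     # ~ sqrt(1600) = 40

print(round(snr_pixel), round(snr_binned))    # roughly 20 and 40
```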

The thrust of the argument in the article was to counteract the mistaken impression that many have, that image noise at the pixel level has some sort of absolute significance.  Rather, any individual pixel of a group of binned pixels has only a fraction of the binned signal, and therefore lower S/N ratio than the aggregate.  This is true regardless of whether the signal has content at high spatial frequencies.  The upshot is that two cameras of given sensor size, one with pixel pitch half of the other, will have the same noise at any given spatial scale that both can resolve, even though the one with the finer pixel pitch has higher characteristic noise at its Nyquist frequency than the one with coarser pixel pitch has at its Nyquist frequency.

Your arguments about non-uniform signal have to do with one's ability to estimate how much variance an individual pixel of an individual sample image has from the ensemble average.  That is relevant to noise reduction, but it is not relevant to how much noise is present, or how that noise aggregates to coarser scales.

BTW, a "flat, constant image, where each pixel has the same value" is not "pathetic" SNR, it is infinite SNR.
« Last Edit: October 24, 2008, 04:26:02 PM by ejmartin »

bjanes
« Reply #26 on: October 24, 2008, 06:01:33 PM »

Quote from: Ray
Bill,
ACR is what I use to process my RAW images, and I believe it's what most photographers use. What Dpreview does is what I do, and many others, except I try to avoid pushing exposure to the extreme right because I'm aware that ACR's attempt to reconstruct color data is sometimes unsatisfactory. An obvious example is the shift towards green in skies where the blue channel is clipped.

Ray,

I use ACR also. However, if you do not look at the raw file, you do not even know if you have exposed the image properly. For example, in the image which I posted as an example, the raw histogram showed underexposure of the green channel by 2/3 EV. Looking back at the metadata for the image, I see that the camera was on autoexposure and I gave +0.3 EV of exposure compensation to give a reasonable ETTR exposure as judged by the camera histogram. The ACR histogram demonstrates slight clipping of highlights. ACR was set to defaults.

[attachment=9156:BirdsACR_ScrCap.png]

ACR uses a baseline exposure offset of +0.5 EV for the D3 and one should use an exposure compensation of -0.5 EV with ACR in order to get an accurate histogram.

Bill
« Last Edit: October 24, 2008, 06:05:23 PM by bjanes »
joofa
« Reply #27 on: October 24, 2008, 06:16:16 PM »

Quote from: ejmartin
I think you may be talking about noise reduction, whereas the quoted discussion in the article has in mind the noise itself.

Yes, this is true, and I agree with that. And that is the basic point of the argument. I realized this; that is why I wrote that signal blurring/smoothing is not incorporated in typical SNR measures. However, consider for example a valid SNR measure such as PSNR, which depends on the mean square error (that is why I went into resolving the mean square error into bias and estimation-noise variance, the latter arising in the estimator because of the presence of the physical noise); such measures of SNR let me include the spatial blur degradation.

Also please note that even with a fixed resize factor (say, original to 1/4 the size of the original) I am considering a more general problem than 2x2->1 spatial pixel averaging, because we can still reduce the size by 1/4 while using a larger averaging window than 2x2. We want to determine what window size works best for SNR formulations that include signal blur degradation measures, given a fixed image size reduction but a variable window size.

Quote from: ejmartin
BTW, a "flat, constant image, where each pixel has the same value" is not "pathetic" SNR, it is infinite SNR.

It depends on how you calculate SNR. Since I want to include the spatial blur degradation measure, the error term increases as more and more samples are averaged (because of the bias term), and that lowers an SNR measure defined in terms of error.


Ray
« Reply #28 on: October 24, 2008, 07:55:18 PM »

Quote from: Tony Beach
Nonsense like this is why I ignore your posts, but since Emil responded then I see you have thrown more garbage into the discussion that should be responded to.

The reason why I generally don't like responding to you, Tony, is because you come across as a very rude person.
Ray
« Reply #29 on: October 24, 2008, 08:56:09 PM »


Quote
Ray, DPreview will never be able to compare cameras using the same tool and parameters unless they repeat all the tests for the old cameras with the latest version of the software every time a new camera appears on the market, and we know they won't do that. This is a flaw in the method, and you agree.

Guillermo,
I'm not sure whether I would call it a flaw in their methodology. Perhaps laziness would be a better word, or to be kind, a time constraint on the testing procedure. The fact is, dpreview never compare a new model of camera with all the older models of different brands. They are very selective, choosing only similar models with similar pixel count and usually of the same format. But they are not inflexible in this regard. In the current review of the A900 they have sensibly included the cameras from which many readers will be upgrading, should they buy an A900.

What I criticise them for is not taking the extra time to repeat the DR tests of RAW images from the other cameras included in the review. If they had the time, they might also examine how ACR handles the reconstruction of color data in clipped channels and how that varies with camera model.

A standardised DR test procedure which excludes or bypasses the most commonly used RAW converter would not necessarily be a better guide for the photographer or camera buyer than Dpreview's current DR testing procedure. We could have a situation where camera 'A' has a slightly higher DR than camera 'B', according to your hypothetical objective testing procedure, but in practice camera 'B' produces slightly better highlight and shadow detail than camera 'A' when its RAW images are processed in ACR, the converter which most people use.
Tony Beach
« Reply #30 on: October 24, 2008, 09:21:07 PM »

Quote from: Ray
The reason why I generally don't like responding to you, Tony, is because you come across as a very rude person.

Generally you might as well not respond to me, because I have you on my "Ignore User" list and am not paying attention to what you write, since it is almost always misinformed and centered on your own biases and needs.  However, your posts seep over into others' replies when they quote you, and you have a tendency to dominate many of the threads you participate in (for instance, in the A900 Update thread started by Nick Rains as a follow-up to his A900 review, where you posted fully a third of the replies).  I think you should consider reading more and writing less; you might actually learn something that way. You could ask questions instead of making assertions along the lines of: if a camera doesn't work as well with ACR as another camera does, then that camera is necessarily not as good as the camera that works better with ACR.
« Last Edit: October 24, 2008, 11:30:16 PM by Tony Beach »
Ray
« Reply #31 on: October 24, 2008, 11:18:19 PM »

Quote from: ejmartin
DPR's tests are often a source of amusement.  Apparently they don't understand that it is impossible for a linear medium like RAW to encode more stops of DR than there are bits in the encoding (12 for the A900).    



Or a lemon  

That's a good point. I'd forgotten that the A900 doesn't have 14 bit encoding like some of its competitors. One might also wonder how a camera with 12 bit encoding can deliver greater DR than another camera, such as the 1Ds3, which has 14 bit encoding.
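The arithmetic behind Emil's point is simple: a linear encoding with n bits cannot represent a ratio between its largest and smallest nonzero values of more than about n stops. A one-liner sketch:

```python
from math import log2

# Upper bound on DR, in stops, for a linear n-bit encoding: the ratio
# of the maximum code value to the smallest nonzero step (1 ADU).
def max_stops(bits):
    return log2(2 ** bits - 1)

print(round(max_stops(12), 2), round(max_stops(14), 2))  # 12.0 and 14.0
```

So a 12.6-stop claim for 12-bit linear raw data cannot be taken at face value, whatever the sensor itself can do.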

Nevertheless, the fact that 12.6 stops might be a bit of an exaggeration is not significant in practical terms, provided the DR of the other cameras with which the A900 is compared is exaggerated to the same degree.

There is always a subjective element to the meaningfulness of DR figures relating to captured images, as opposed to the engineering DR specification of the sensor.

I once tested the DR of my 5D at 11.6 stops using Jonathan Wienke's DR target which he posted on the forum. A long argument followed between Jonathan and John Sheehy regarding the meaningfulness of the faint detail in the shadows. From my perspective it doesn't really matter whether the DR of the 5D is actually 11.6 stops, 10.3 stops or 9.5 stops. Where there's a subjective element, one can argue till the cows come home without resolution. The essential point is, how does the DR compare (in real world prints and images on monitor) with other cameras which I might be thinking of buying, and other cameras which I already use. Is it greater or less, and by a degree which is worth bothering with?

The A900 appears to be a camera which has a slight edge in DR and resolution at base ISO, compared with the 1Ds3. How significant that edge is, I simply don't know. I've not seen any 100% crops showing the greater amount of detail in either shadows or highlights that the claimed additional DR should provide.


Ray
« Reply #32 on: October 24, 2008, 11:51:36 PM »

Quote from: bjanes
Ray,

I use ACR also. However, if you do not look at the raw file, you do not even know if you have exposed the image properly. For example, in the image which I posted as an example, the raw histogram showed underexposure of the green channel by 2/3 EV. Looking back at the metadata for the image, I see that the camera was on autoexposure and I gave +0.3 EV of exposure compensation to give a reasonable ETTR exposure as judged by the camera histogram. The ACR histogram demonstrates slight clipping of highlights. ACR was set to defaults.

[attachment=9156:BirdsACR_ScrCap.png]

ACR uses a baseline exposure offset of +0.5 EV for the D3 and one should use an exposure compensation of -0.5 EV with ACR in order to get an accurate histogram.

Bill

Bill,
Achieving a full ETTR without clipping any of the channels is a separate issue, surely. Whatever the DR of the camera, those problems remain. If one is using ACR to compare the DR of various cameras, then obviously one must ensure that the images being compared have the same amount of detail at some reference point, such as highlights, after all the stops have been pulled out to get the best result in ACR.

In my experience, there comes a point where no further detail can be extracted as one moves the EV slider into negative territory, for example. That point might be -2.65 EV for one camera in relation to one particular scene and -2 EV in relation to another camera and/or another scene.

The point that such a process is not completely objective and standardised is understood. What procedure do you have in mind that would be more helpful for people who use ACR to process their RAW images?
bjanes
« Reply #33 on: October 25, 2008, 08:01:02 AM »

Quote from: ejmartin
High ISO noise and low ISO DR need not be closely related.  The noise floor at low ISO in CMOS DSLR's is controlled by the noise in the ISO amplifier and A/D converter, while the noise floor at high ISO is controlled by the noise of the sensor array.  Different components, different noise characteristics, and the one that affects high ISO midtone and shadow noise doesn't enter into the determination of low ISO DR.

Emil's explanation clarifies the basis for the findings in Roger Clark's figure 5, where he shows that the DR of large- and small-pixel cameras may not differ much at base ISO, whereas the large-pixel cameras have a marked advantage at high ISOs. Previously, I had trouble understanding that figure. Roger's articles have a wealth of information and are well worth a look. His concept of unity gain still seems a bit confusing.

Bill
bjanes
« Reply #34 on: October 25, 2008, 08:49:14 AM »

Quote from: Ray
Bill,
Achieving a full ETTR without clipping any of the channels is a separate issue, surely. Whatever the DR of the camera, those problems remain. If one is using ACR to compare the DR of various cameras, then obviously one must ensure that the images being compared have the same amount of detail at some reference point, such as highlights, after all the stops have been pulled out to get the best result in ACR.

In my experience, there comes a point where no further detail can be extracted as one moves the EV slider into negative territory, for example. That point might be -2.65 EV for one camera in relation to one particular scene and -2 EV in relation to another camera and/or another scene.

The point that such a process is not completely objective and standardised is understood. What procedure do you have in mind that would be more helpful for people who use ACR to process their RAW images?

The main point is that to expose to the right, one needs to determine at what exposure channel clipping begins. Most photographers use the camera histogram or the blinking highlights. However, on many cameras these are a bit conservative and may indicate clipping before it actually occurs. The best way to determine the behavior of these tools for your camera is to bracket exposures and compare the results of the exposure tool to the contents of the raw file. The best tool that I have found for this process is Rawnalyze. You can then determine how much headroom you have when the blinking highlights or clipped histogram appear on the camera display. Similar considerations apply to the histogram in ACR. The exposure offset that ACR uses for your camera may be determined by looking at a DNG conversion of the raw file. The offset of +0.5 EV for the D3 can cause the appearance of overexposure when none exists. Some ISO standards allow 0.5 EV of headroom for the highlights.

One must be aware that the camera histogram is derived from the JPEG preview of the raw file and is affected to some extent by the camera picture control settings. The camera black and white histogram is usually a luminance histogram and is mainly sensitive to green and may not show red or blue clipping. The camera RGB histogram (if available) will show clipping of the color channels after white balance has been applied, but clipping may be indicated for the red and blue channels when these channels are actually intact in the raw file. Again, one must look at the raw file.
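The per-channel check Bill describes can be sketched in code. This is a hypothetical illustration of my own (not Rawnalyze's actual algorithm): given a Bayer mosaic of raw values and a known saturation level, count the fraction of clipped photosites per CFA channel, before any white balance is applied.

```python
import numpy as np

def clipped_fraction(mosaic, sat_level, pattern="RGGB"):
    """Fraction of saturated photosites per CFA channel of a raw
    Bayer mosaic (the kind of values `dcraw -D` exposes)."""
    offsets = {"RGGB": {"R": (0, 0), "G1": (0, 1),
                        "G2": (1, 0), "B": (1, 1)}}
    out = {}
    for name, (dy, dx) in offsets[pattern].items():
        plane = mosaic[dy::2, dx::2]          # one CFA channel
        out[name] = float(np.mean(plane >= sat_level))
    return out

# Synthetic mosaic: a "blue sky" case where only blue photosites clip.
rng = np.random.default_rng(3)
m = rng.integers(500, 3000, size=(64, 64))    # unclipped values
m[1::2, 1::2] = 4095                          # blue sites saturated
print(clipped_fraction(m, 4095))
```

A camera RGB histogram computed after white balance could flag red or blue as clipped here even when the raw channel is intact, which is exactly why looking at the raw values matters.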

Bill
ejmartin
« Reply #35 on: October 25, 2008, 09:50:05 AM »

Quote from: bjanes
Emil's explanation clarifies the basis for the findings in Roger Clark's figure 5, where he shows that the DR of large- and small-pixel cameras may not differ much at base ISO, whereas the large-pixel cameras have a marked advantage at high ISOs. Previously, I had trouble understanding that figure. Roger's articles have a wealth of information and are well worth a look. His concept of unity gain still seems a bit confusing.

Bill

Actually, what you are seeing in this graph are several things.

First, both CCD cameras (Canon S70 and Nikon D200) have DR that drops nearly a stop with each stop increase in ISO; this is because the read noise floor in photoelectrons is roughly independent of ISO for typical CCD designs, while the increased amplification loses a stop of headroom for each doubling of ISO.

Second, the initial flatness of the DR curve for CMOS is due to the different sources of the noise floor at low and high ISO with current designs.  The fact that the amplifier is the main source of the noise at low ISO limits the DR of current CMOS implementations; a cleaner amplifier/ADC would extend the low ISO DR of Canon/Nikon CMOS cameras by as much as two stops, as I showed in a thread I started a couple of months back.

Third, the DR of pixels is correlated to pixel size.  Dynamic range is scale dependent; the proper scaling multiplies pixel DR by the square root of the number of megapixels to obtain a figure of merit that can be compared equitably between different formats, and between pixel counts within a given format.  In this sense, Roger's graph is misleading; if the D3 and 1Ds3 were plotted together in such a graph, the large-pixel D3 would come out substantially ahead, but when one properly compares the aforementioned DR figure of merit, the two are quite close.  The S70 is a poor performer because it has a tiny sensor, which necessitates small pixels in order to have decent resolution.  But Roger would have you believe that large sensors with small pixels have as little DR as the S70, where the truth is that the performance differences are much smaller (the big pixels do have an advantage, but not nearly as much as indicated in Roger's graph).
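Emil's scaling is easy to apply. Multiplying the linear pixel DR by sqrt(megapixels) adds 0.5 * log2(MP) stops. The pixel-level DR values below are made-up placeholders chosen only to illustrate the mechanics, not measured numbers for any camera:

```python
from math import log2

def dr_figure_of_merit(pixel_dr_stops, megapixels):
    # Adding 0.5 * log2(MP) stops is the same as multiplying the
    # linear pixel DR by sqrt(MP).
    return pixel_dr_stops + 0.5 * log2(megapixels)

big_pixels  = dr_figure_of_merit(11.5, 12.1)   # hypothetical D3-like body
fine_pixels = dr_figure_of_merit(11.1, 21.1)   # hypothetical 1Ds3-like body
print(round(big_pixels, 2), round(fine_pixels, 2))
```

With these (invented) numbers, a 0.4-stop pixel-level advantage for the bigger pixels is almost exactly cancelled by the finer-pitched sensor's higher pixel count, which is the shape of the comparison Emil describes.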
« Last Edit: October 25, 2008, 10:12:58 AM by ejmartin »

Ray
« Reply #36 on: October 25, 2008, 10:00:51 AM »

Quote from: bjanes
The main point is that to expose to the right, one needs to determine at what exposure channel clipping begins. Most photographers use the camera histogram or the blinking highlights. However, on many cameras, these are a bit conservative and may indicate clipping before it actually occurs. The best way to determine the behavior of these tools for your camera is to bracket exposures and compare the results of the exposure tool to the contents of the raw file. The best tool that I have found for this process is Rawnalize. You can then determine how much headroom you have when the blinking highlights or clipped histogram appear on the camera display. Similar considerations apply to the histogram in ACR. The exposure offset that ACR uses for your camera may be determined by looking at a DNG conversion of the raw file. The offset of +0.5 EV for the D3 can cause the appearance of overexposure when none exists. Some ISO standards allow 0.5 EV of headroom for the highlights.

One must be aware that the camera histogram is derived from the JPEG preview of the raw file and is affected to some extent by the camera picture control settings. The camera black and white histogram is usually a luminance histogram and is mainly sensitive to green and may not show red or blue clipping. The camera RGB histogram (if available) will show clipping of the color channels after white balance has been applied, but clipping may be indicated for the red and blue channels when these channels are actually intact in the raw file. Again, one must look at the raw file.

Bill

Bill,
I understand what you are saying. These are issues which all users have to sort out one way or another if they shoot RAW and are concerned about maximising the DR capability of their camera. I've had the jpeg picture styles in my 5D set at minimum contrast, saturation and sharpness for at least the past couple of years now, ever since I realised the camera's histogram and blinking review was giving me a false reading regarding exposure in RAW mode, which I always use.

The fact is, if I want to compare the DR of my own cameras using ACR as my converter, I don't have a problem. I just recently compared the DR of my new 50D with that of my old 5D. I was curious as to whether or not the cropped format really did have a shutter speed advantage resulting from its greater DoF at any given aperture and the option of using a wider aperture. I found that it didn't. Images from the 5D at F7.1 and ISO 250-320 were just as clean as images from the 50D at F4 and ISO 100, or as close as matters. I bracketed all shots in 1/3rd stop increments, compared ETTR images with equal highlight detail, then examined the shadows for detail and noise. (Same scene, same FoV and same lighting conditions, of course).

Is such a comparison invalid? If so, why? Does it matter if the pairs of images I was comparing, after applying a negative EV adjustment in ACR, -50 contrast and linear tone curve etc, had slight clipping of highlights according to Rawnalyze (for example), despite the fact that they didn't look clipped in ACR? I can't see that it does matter, provided that both images appear to contain the same amount of highlight detail after the same ACR adjustments.
« Last Edit: October 25, 2008, 10:03:16 AM by Ray »
bjanes
« Reply #37 on: October 25, 2008, 11:59:41 AM »

Quote from: Ray
Is such a comparison invalid? If so, why? Does it matter if the pairs of images I was comparing, after applying a negative EV adjustment in ACR, -50 contrast and linear tone curve etc, had slight clipping of highlights according to Rawnalyze (for example), despite the fact that they didn't look clipped in ACR? I can't see that it does matter, provided that both images appear to contain the same amount of highlight detail after the same ACR adjustments.

A contrast curve affects the quarter and three quarter tones but does not change the black point or the white point as shown below.

[attachment=9164:contrastCurve.gif]
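The endpoint-preserving behaviour of a contrast curve is easy to sketch. A hypothetical S-curve of my own construction (a toy formula, not ACR's actual curve), pinned so that f(0) = 0 and f(1) = 1 exactly:

```python
from math import tanh

def s_curve(x, strength=4.0):
    # Normalized tanh S-curve: exactly 0 at x=0 and 1 at x=1, so the
    # black and white points are untouched; only the quarter and
    # three-quarter tones move.
    return 0.5 + tanh(strength * (x - 0.5)) / (2.0 * tanh(strength / 2.0))

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(v, round(s_curve(v), 3))
```

The quarter tone comes out darker and the three-quarter tone brighter while both end points stay put, matching the behaviour shown in the attached curve.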

With ACR, the default tone curve clips the black point rather strongly. If you set the black point to zero in the default tone curve and leave the other settings unchanged, the resulting tone curve does not differ that much from the linear tone curve in either the blacks or whites. However, the quarter tones are brighter and the slope of the curve is steeper near the midtones with the default curve. The camera histogram may behave slightly differently, but the tone curve on my D200 or D2 does not significantly affect the appearance of the highlights in the histogram, so I normally leave the camera tone curve at normal to get a better preview of the image. Others use low contrast.

[attachment=9162:NikonGraph.png]

If you bracket the images and use negative exposure correction to determine when additional highlight detail is no longer recovered, you may achieve the same end result. However, without looking at the raw files, you don't know when clipping has occurred and has been at least partially corrected by highlight recovery. In most instances, I prefer not to clip the highlights with the D3, since this camera has fairly good shadow detail. However, if the dynamic range of the scene is large and the camera histogram demonstrates clipping in both the shadows and highlights, some bracketing may be advisable, and one could consider using multiple exposures and HDR if the image is static.

Bill
bjanes
« Reply #38 on: October 25, 2008, 12:02:36 PM »

Quote from: ejmartin
Actually, what you are seeing in this graph are several things.

Thanks, very informative.

Bill
Ray
« Reply #39 on: October 25, 2008, 10:05:33 PM »

Quote from: bjanes
If you bracket the images and use negative exposure correction to determine when additional highlight detail is no longer recovered, you may achieve the same end result. However, without looking at the raw files, you don't know when clipping has occurred and has been at least partially corrected by highlight recovery. In most instances, I prefer not to clip the highlights with the D3 since this camera has fairly good shadow detail. However, if the dynamic range of the scene is large and the camera histogram demonstrates clipping in both the shadows and highlights, some bracketing may be advisable and one could consider using mulitple exposures and HDR if the image is static.

Bill

The question I'm asking is, does it really matter if, technically, highlighted areas of an image are clipped, but don't look clipped because ACR has done some clever reconstruction work? Does such a situation invalidate any DR comparisons?

Images are all about appearance. If ACR is my default converter, then the characteristics of all my images will be influenced to some degree by the characteristics of ACR. If we have a situation where camera 'A' appears to have a greater DR than camera 'B' in ACR, but the same camera 'B' appears to have a greater DR than the same camera 'A' when DPP is used as the converter (for example), then we have a problem. If this is the sort of thing that actually happens amongst different converters in a significant way, above the extreme pixel-peeping level, then that might invalidate any DR comparisons based on results from one particular converter.

Whenever I've compared different converters, I've always found that ACR is at least the equal of the other converters, and better than some, with regard to  shadow detail and highlight recovery.

It's understood that if you are going to compare shadow detail in images, you should have the contrast settings at a minimum in ACR, including 'black' at zero and the tone curve at 'linear'.
