Author Topic: interesting article  (Read 59672 times)
Schewe
« Reply #20 on: April 14, 2006, 12:14:57 AM »

Quote
If the scene is of relatively low dynamic range, consisting mainly of midtones, there's no great need to expose to the right. Correct exposure for the mid tones will suffice. If the DR of the scene is wide, as many landscape shots are which include sky or sunlit areas, then exposing to the right is pretty essential for noise-free shadow detail.

Actually, you have it backwards...in the case of a scene that DOES fit in the dynamic range of a sensor, you are GOING to get far smoother captures by increasing the exposure and placing more of the scene in the lighter tones, as long as you don't clip textural highlight detail.

In the case of a scene that exceeds the dynamic range of the sensor, your only choice is to decide, yourself, what part of the scene is the most important and expose as best you can, knowing the bright speculars will clip. The ONLY way around this is to either shoot multiple exposures to assemble after the fact or do a dual process from raw to regain as much highlight detail as possible and blend the highlights into a normally processed image.

The other factor Rags seems to ignore is that sensors will indeed vary camera to camera, although a single camera will itself be pretty consistent.

When you set the ISO to 100, it's a nominal setting. Your actual sensor sensitivity might be 80, or 120. This can indeed impact the way your exposures will be biased when shooting. Using an exposure compensation will help nail exposures better, based upon the REAL ISO of your camera, not the nominal ISO. Once you nail that, the next step is to learn the exact dynamic range of YOUR sensor. Is it 6, 7, 8 REAL stops? Depending on your acceptance of noise, one can make an argument that many DSLRs are at or near 8 stops, maybe a bit more, particularly when you examine just how much data is there clumped, not clipped, at the highlights if you can get to it.

Camera Raw does a really remarkable extraction of highlight detail. Why? Aside from the fact that Camera Raw DOESN'T quit when the first channel is clipped, the fact is there's an enormous amount of detail in those extreme highlights. It's there if you know how to use it.

If you approach exposure by metering for the midtones and letting highlights and shadows fall where they may, you pretty much guarantee that you lose that advantage of controlling the usefulness of your dynamic range and your bits.

Back when sensors first came out, there was a serious problem called blooming, in which photosites in a sensor hit with extreme speculars would bleed across to other photosites and really blow out. In that period a photographer HAD to "under expose" digital to keep speculars from blowing across large portions of the sensor. Those times are effectively over with today's sensors, which simply do not bloom like the older ones did.

The problem really boils down to one of convention...digital sensors are NOT like film. They do not react the way chrome or neg film reacts to light. Therefore you really can't use old film exposure techniques...the aim is to get as much data captured as far up the scale as you can while still preserving usable textural highlight detail. This is NOT "over exposing", it's "proper exposing" for a digital sensor.
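The "as far up the scale as you can" argument is easy to sketch numerically. In an idealized linear 12-bit capture (purely illustrative, not a model of any particular sensor), each stop down from clipping has only half as many distinct code values available:

```python
# Levels available per stop in an idealized linear 12-bit capture.
# The brightest stop spans codes 2048..4095, the next 1024..2047, and so on.
total_levels = 2 ** 12  # 4096 code values

hi = total_levels
for stop in range(1, 7):
    lo = hi // 2
    print(f"stop {stop} below clipping: codes {lo}..{hi - 1} -> {hi - lo} levels")
    hi = lo
```

Six stops down, only 64 codes remain to describe a whole stop of shadow detail, which is why biasing exposure upward keeps tonal transitions smooth.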
Ray
« Reply #21 on: April 14, 2006, 01:15:32 AM »

Quote
Actually, you have it backwards...in the case of a scene that DOES fit in the dynamic range of a sensor, you are GOING to get far smoother captures by increasing the exposure and placing more of the scene in the lighter tones, as long as you don't clip textural highlight detail.



I was referring to a scene well within the DR of the sensor, not one which fits it. In other words, if the histogram does not occupy regions close to the left or right, but is largely in the middle, there's not much point in increasing exposure to push it more to the right. Doing so will give you more levels, but you probably already have enough. Exposing to the right provides sorely needed levels on the left.
bjanes
« Reply #22 on: April 14, 2006, 06:44:11 AM »

Quote
If you want to work in a linear space and make a custom Photoshop working space with a gamma of 1.0, then yes. But the big point of failure with Rags' 18% midtone approach is that in linear space a middle grey, the target Rags says to shoot for, is about level 50 in an 8 bit linear file. Way too low to do any kind of "normal" tone curve, and it's pretty far south of optimal in terms of midtone. And if you examine the relative levels concentration of a gamma 1.0 image, there are WAY too many levels packed into the brightest bits (the expose to the right concept), and a gamma 1.0 image in 8 bit/channel would be extremely prone to breaking.

The raw image capture is linear and has far more bits (levels) to deal with in the highlights, and far fewer in the shadows. Therefore it makes far more sense to expose for the textural highlights and make use of all those bits. It's far easier, and produces much better signal to noise, to keep the midtone up the scale, not down the scale. It's much easier to make things darker without adding noise than the other way around.

I agree with Jeff's analysis, but my point was that one can use a zone system approach with digital, and it makes a lot of sense. With his negative material, Adams exposed for the shadows, not the midtones. With digital, I would expose for Zone 1.

Converting the raw to gamma 1.0, sometimes with no white balance, with DCRaw is often instructive, but in practical work it is easier to work in a gamma encoded space.
« Last Edit: April 14, 2006, 06:57:36 AM by bjanes »
bjanes
« Reply #23 on: April 14, 2006, 06:56:52 AM »

Quote
We already have an example of excellent noise reduction with Canon DSLRs at high ISO. Take two shots using the same exposure, but one at ISO 100 and the other at ISO 1600, then compare the shadows. The ISO 1600 shot will likely have much better shadow detail, yet those shadows (on the sensor) in both shots have received the same amount of light. It seems that amplification of the analog signal prior to digitisation allows for dramatic noise reduction. Of course, if the exposure at ISO 100 was a full exposure to the right, then the same exposure at ISO 1600 would blow the highlights by 4 stops of overexposure.

Ray, the ISO 1600 exposure will have markedly less dynamic range and much more noise than the ISO 100 image because with the short exposure in the ISO 1600 image, few photons were collected by the sensor, and the number of photons converted to electrons is the main determinant of noise and DR. For a scientific analysis of DR and noise, see this post and look at table 1:

http://www.clarkvision.com/imagedetail/eva...-1d2/index.html

The shadows and all other levels in the high ISO exposure do not receive the same amount of light as with the low ISO exposure--the high ISO exposure will be less for all levels.
« Last Edit: April 14, 2006, 06:58:40 AM by bjanes »
bjanes
« Reply #24 on: April 14, 2006, 08:31:29 AM »

Quote
I was referring to a scene well within the DR of the sensor, not one which fits it. In other words, if the histogram does not occupy regions close to the left or right, but is largely in the middle, there's not much point in increasing exposure to push it more to the right. Doing so will give you more levels, but you probably already have enough. Exposing to the right provides sorely needed levels on the left.

Ray, I agree that in the exposure situation to which you refer, there may already be enough levels to represent the scene. Twelve bit raw capture can represent 4096 levels and 256 levels are sufficient for most images. If you place your highlight at 2048 (down one stop), you still have 2048 levels to work with. I think that proponents of ETTR overplay the importance of levels.

However, if you place your highlight at 2048 rather than 4095 (with a Canon EOS 1D Mark II), your signal to noise drops from 256 to 172 according to Roger Clark's calculations. The dynamic range (maximum signal / read noise) suffers a similar hit. IMHO, signal to noise and dynamic range are often a better argument for ETTR than the number of levels. The same considerations apply down the tonal scale from the highlights to the shadows, and the effects are most pronounced in the shadows.
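Roger Clark's measured figures aside, this kind of falloff drops out of a simple photon shot-noise model. The full-well and read-noise values below are illustrative assumptions, not the Mark II's measured specs:

```python
import math

def snr(signal_e, read_noise_e):
    """Idealized SNR: photon shot noise plus read noise, in electrons."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

full_well = 50_000   # assumed full-well capacity in electrons (illustrative)
read_noise = 30      # assumed read noise in electrons (illustrative)

for stops_down in range(4):
    s = full_well / 2 ** stops_down
    print(f"{stops_down} stop(s) below clipping: SNR ~ {snr(s, read_noise):.0f}")
```

In the shot-noise-limited regime each stop of reduced exposure costs roughly a factor of sqrt(2) in SNR, and the read-noise term makes the hit slightly worse, which matches the direction of Clark's numbers.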
bjanes
« Reply #25 on: April 14, 2006, 10:00:09 AM »

Quote
Since I'm not an engineer, I find it difficult to appreciate the difficulties involved in devising a capture system which compresses the DR. That is, one which redistributes the levels through a process of selective augmentation of low level signals.

But supposing there was a way of diminishing the intensity of those 4 stops, i.e. the darker tones are augmented and the brighter tones are simultaneously diminished. We would then have a compression of dynamic range which, when unpacked, could be very wide indeed.

I'm reminded of developments in vinyl LP audio recording prior to the audio CD. The dynamic range of LP discs used to be typically around 50 dB until a system called dbx was invented, which compressed the signal at the recording stage and decompressed it during playback, producing a significant boost to DR which, as I recall, was around 80 dB. However, CD audio gave us around 90 dB and greater, so the dbx technology was too late.

Ray,

Your posts have raised a lot of good questions and have inspired me to reread some sources and do some thinking about the issues. In fact, analog to digital converters that use a log scale rather than linear are available and are widely used in signal processing--they compress the high signal levels rather than augmenting the lower levels, but the effect is the same. These methods are used when bandwidth is a major consideration, as in telephony.

However, such log AD converters are not used in any digital raw capture that I know of. One can either compress the image (such as is done with a gamma correction in 8 bit JPEGs) or use a sufficient number of bits to represent the entire dynamic range with a sufficient number of levels in the shadows as is done with raw capture. Most cameras use 12 bits, but some high end models use 16. As dynamic range increases, 16 bit encoding will probably become more common.

As Bruce Fraser explains in some depth in his Camera Raw with PSCS2 book, many operations such as white balance, highlight recovery, etc, are best done on the linear image data. The reason that ETTR applies mainly to raw capture is that one can simply use a linear exposure adjustment in ACR to lower the whole image tone values by a set amount, say one stop, in order to correct the mid tones to where they should be in a short scale subject exposed to the right. If a gamma correction had been applied, you would have to use some type of curve to adjust the tones nonlinearly. Also, with 8 bit encoding you have thrown away so many tone values that the adjustment could not recover the original data.
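The "simple linear exposure adjustment" point can be sketched like this. Assuming a plain power-law gamma of 2.2 for illustration (real encodings such as sRGB differ slightly), a one-stop correction is an exact multiply on linear data, but becomes a nonlinear curve once gamma encoding has been applied:

```python
# A one-stop exposure correction on linear data is a plain multiply;
# on gamma-encoded data the equivalent operation is nonlinear.
GAMMA = 2.2  # assumed simple power-law encoding (illustrative)

def encode(linear):
    """Gamma-encode a linear value in [0, 1]."""
    return linear ** (1 / GAMMA)

def one_stop_down_linear(linear):
    return linear * 0.5  # exact and linear in the data

def one_stop_down_encoded(v):
    # The same correction expressed on encoded values:
    # decode to linear, halve, re-encode.
    return encode((v ** GAMMA) * 0.5)

v = encode(0.5)
print(one_stop_down_encoded(v))  # NOT equal to v * 0.5: the encoded-domain curve is nonlinear
```

Halving an encoded value is not the same as halving the light, which is why a post-gamma correction needs a curve rather than a slider multiply.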

With regard to your sound recording analogy, the current approach in recording is simply to use more bits rather than compression. When storage becomes a consideration, as with iPods, compression is used.
Graeme Nattress
« Reply #26 on: April 14, 2006, 10:11:44 AM »

The difference with audio, though, is that we can see a much greater dynamic range than we can hear. CDs, with their 96 dB dynamic range, are pure overkill. Considering that it's very hard to get background noise in a typical home lower than, what, 40 to 50 dB, and that going much beyond 100 dB is painful, the 55 dB or so of vinyl is more than adequate.

Graeme

www.nattress.com - Plugins for Final Cut Pro and Color
www.red.com - Digital Cinema Cameras
bjanes
« Reply #27 on: April 14, 2006, 11:11:27 AM »

Quote
The difference with audio, though, is that we can see a much greater dynamic range than we can hear. CDs, with their 96 dB dynamic range, are pure overkill. Considering that it's very hard to get background noise in a typical home lower than, what, 40 to 50 dB, and that going much beyond 100 dB is painful, the 55 dB or so of vinyl is more than adequate.

Graeme

Quite true. The situation is even worse when listening to a CD in the car. With classical music, the softer passages are inaudible over the wind noise. My old car had dynamic range compression, which helped, but unfortunately my new one does not.

With photographs, however, most would prefer more dynamic range even at the expense of megapixels.
Schewe
« Reply #28 on: April 14, 2006, 12:09:55 PM »

Quote
In other words, if the histogram does not occupy regions close to the left or right, but is largely in the middle, there's not much point in increasing exposure to push it more to the right. Doing so will give you more levels, but you probably already have enough.

Well, that's where I think you are wrong...increasing the exposure to get the main portion of the histogram to the right -WILL- provide more tones (levels) and allow you to do more with the data, such as expanding the data and increasing contrast in the resulting raw conversion, WITHOUT the noise growing in the shadows.

As long as you don't clip textural highlights you -WILL- get better results (better signal to noise) by increasing the exposure. Since there are more levels the further up the scale you go, you'll have more levels to control and use. The curves can get pretty tweaky at times, and you may need to do a dual or multi process conversion, but the more levels you have in the final file, the smoother the resulting tone curved image will be.

Under expose and you increase the noise; increase the exposure and you reduce the noise...the final toned file may look very close, but the increased exposure will provide a cleaner file. That's assuming your increased exposure didn't produce camera shake or reduced depth of field, resulting in other, non-digital image defects.
« Last Edit: April 14, 2006, 12:12:29 PM by Schewe »
Jonathan Wienke
« Reply #29 on: April 15, 2006, 01:50:05 PM »

I'm with Schewe here. When shooting a scene within the DR of the sensor, you'll always get a better result by pushing exposure to the right, as long as you don't do so to the point of clipping. The number of levels involved may be overkill regardless of how you expose, but the signal-to-noise ratio will be greater when exposure is pushed to the right. The difference may be small in many cases, but there will always be a difference.

61Dynamic
« Reply #30 on: April 15, 2006, 02:55:16 PM »

When an image fits within the camera's DR, that is the best opportunity to utilize ETTR to its greatest potential.
Jonathan Wienke
« Reply #31 on: April 15, 2006, 04:14:37 PM »

And when it does not, you still must use your knowledge of the behavior of the sensor to optimize what you capture vs. what gets lost. ETTR is the best paradigm for figuring that out when using a digital sensor. The zone system worked well for B&W film, but is unsuitable for digital because of the difference between the TRCs of film and digital sensors.

BJL
« Reply #32 on: April 15, 2006, 04:28:58 PM »

Quote
The current linear capture system provides far too many levels for the brighter parts of the image and far too few levels for the darker tones.
Linearity really only enters at A/D conversion: until then you have just RAW data in the form of photon counts. And it seems that there is little problem producing an A/D converter (say 16 bits) that can handle the full DR of any sensor's output with no significant loss of sensor information; linearity is not a problem at that stage. Nor is there much penalty to storing the few extra bits related to excessively fine tonal distinctions in bright parts of the image.

So the most useful place to introduce nonlinearity is in the subsequent encoding into formats with bit depth less than the number of stops of DR --- which is what gamma compression and tone curves do.


P.S. (not to Ray in particular): It is a myth that gamma compression is needed to match the human visual system: it is needed to compensate for the fact that monitors perform gamma expansion, with a very nonlinear relationship between input voltage and output brightness. Native CRT gamma is about 2.5, with calibration bringing it down to 2.2 in the Windows standard and 1.8 in the Mac standard. (Printers also have a native gamma, which is typically about 1.8, or at least was when Apple chose their gamma of 1.8.)

So if you have files converted with linear gamma, setting your monitor calibration to linear gamma will display them correctly. I create a linear gamma profile option on my Macs in System Preferences>Displays>Color for purposes like this.
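BJL's point about where the nonlinearity pays off can be illustrated by counting how many 8-bit codes land in each stop of scene luminance under linear versus gamma-encoded storage. This is a toy calculation under an assumed pure power-law gamma of 2.2, not a model of any real display pipeline:

```python
import math

GAMMA = 2.2

def count_codes_per_stop(decode):
    """Count 8-bit codes whose decoded luminance falls in each stop below white."""
    counts = [0] * 8  # index 0 = brightest stop, 7 = everything 7+ stops down
    for code in range(1, 256):
        lum = decode(code / 255.0)
        counts[min(int(-math.log2(lum)), 7)] += 1
    return counts

linear = count_codes_per_stop(lambda v: v)          # file stores linear values
gamma = count_codes_per_stop(lambda v: v ** GAMMA)  # file stores gamma-encoded values
print("linear encoding, codes per stop:", linear)
print("gamma-2.2 encoding, codes per stop:", gamma)
```

Linear 8-bit storage spends half its codes on the brightest stop; gamma encoding shifts a large share of them into the mid and deep stops, which is exactly the redistribution a tone curve performs.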
bjanes
« Reply #33 on: April 15, 2006, 07:18:30 PM »

Quote
When an image fits within the camera's DR, that is the best opportunity to utilize ETTR to its greatest potential.


Quote
And when it does not, you still must use your knowledge of the behavior of the sensor to optimize what you capture vs. what gets lost. ETTR is the best paradigm for figuring that out when using a digital sensor. The zone system worked well for B&W film, but is unsuitable for digital because of the difference between the TRCs of film and digital sensors.

I find Jonathan's comments confusing. ETTR applies only when the dynamic range of that portion of the scene you wish to capture is less than that of the sensor. This is what Ansel Adams calls a short scale subject. With negative film, Adams exposed to the left, but with digital, we expose to the right. In both cases, we are making best use of the properties of the medium. If the dynamic range of the scene equals that of the sensor, there is no room for ETTR. If the dynamic range of the subject exceeds that of the sensor, as Jeff Schewe points out above, you have to expose for the most important part of the scene and accept clipping, or else use multiple exposures and combine them with HDR in Photoshop or by other means. ETTR does not remap the values or tell us how out-of-range values should be handled.

The digital sensor is indeed linear, whereas film's response is logarithmic. However, when we apply a gamma correction and a TRC to the digital data, the response is not unlike that of film. Zone concepts can easily be applied to digital capture, as discussed by Norman Koren in the links below. He even gives equations to convert pixel values to zones.

http://www.normankoren.com/digital_tonality.html#Gamma_contrast
http://www.normankoren.com/zonesystem.html

Furthermore, if you look at the characteristic curve of a digital image processed in camera or in Photoshop with Norman's Imatest, it looks very much like the H&D curves of film. Concepts we learned with the zone system serve us very well today, but are much easier to apply with digital.
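Koren's own pixel-to-zone equations are on the linked pages; a back-of-the-envelope version, assuming a pure power-law gamma of 2.2 and 18% middle grey pinned at Zone V (both assumptions for illustration, not Koren's exact formula), looks like this:

```python
import math

GAMMA = 2.2
MIDDLE_GREY = 0.18  # assumed 18% scene reflectance placed at Zone V

def zone(pixel, gamma=GAMMA):
    """Map an 8-bit encoded pixel value to an approximate zone number.

    One zone = one stop of luminance; rough sketch, not Koren's formula.
    """
    lum = (pixel / 255.0) ** gamma            # back to linear luminance
    return 5 + math.log2(lum / MIDDLE_GREY)   # stops relative to middle grey

for p in (25, 50, 118, 200, 255):
    print(f"pixel {p:3d} -> zone {zone(p):.1f}")
```

With these assumptions, an encoded value of about 118 decodes to roughly 18% luminance, i.e. it lands on Zone V, which matches the familiar middle-grey pixel value in a gamma-2.2 space.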
« Last Edit: April 15, 2006, 07:22:32 PM by bjanes »
Schewe
« Reply #34 on: April 15, 2006, 10:12:53 PM »

Quote
Concepts we learned with the zone system serve us very well today, but are much easier to apply with digital.

While the concepts may serve us, the exact implementation is -NOT- the same, and that's where some people, such as Rags, go astray.

Rags is still trying to force fit analog film techniques to digital capture and that dog don't hunt.

Likewise, the Zone System pretty much falls apart because, when dealing with linear captures, there is an unprecedented amount of detail in the brightest portion of a linear capture. To the point that, while it may SEEM like data is getting clipped, use of a negative Exposure setting reveals far more data to be had via Camera Raw than any experience with film would indicate.

The Zone System approach really only works post raw processing, in working with tone reproduction curves, and even then it's not really a direct application, since the old days of fixed contrast, or even poly-contrast, silver gel papers no longer apply.

The Zone system as I learned it (at RIT under Zakia, Todd and Strobel) was a method of mapping scene contrast range to negative contrast to paper contrast in a manner that allowed one to pre-visualize the relative B&W reproduction of the original scene. That said, Ansel Adams had no problem using altered neg development, local hot paper developing, dodging and burning and anything else he had to do in the darkroom to get the exact tone reproduction he desired.

Knowledge of the zone system may be useful but I would be careful assuming there is a direct relationship to digital capture. Photoshop and even Camera Raw completely alters the old technical limitations of B&W neg film, developers and traditional silver paper and developers.

Again, I must point out the importance of determining your sensor's exact ISO and dynamic range, and of exposing to just retain textural detail without clipping. Without certain knowledge of your sensor's ISO and dynamic range, most people fail to actually use that densely packed area of near-clipped data.
« Last Edit: April 16, 2006, 01:02:29 PM by Schewe »
bjanes
« Reply #35 on: April 16, 2006, 09:16:35 AM »

Quote
While the concepts may serve us, the exact implementation is -NOT- the same, and that's where some people, such as Rags, go astray.

The Zone system as I learned it (at RIT under Zakia, Todd and Strobel) was a method of mapping scene contrast range to negative contrast to paper contrast in a manner that allowed one to pre-visualize the relative B&W reproduction of the original scene. That said, Ansel Adams had no problem using altered neg development, local hot paper developing, dodging and burning and anything else he had to do in the darkroom to get the exact tone reproduction he desired.

Of course, I would agree that the implementation is quite different. One should also note that the zone system is for monochrome photography. Although this thread started with reference to Rags' exposition on his web site, I don't think that it is necessary to rehash certain technical errors in Rags' arguments--these have already been discussed on the Adobe Camera Raw Forum (Exposure to the Right and Tone Placement). Why this fixation on Rags?

With digital, we are still faced with mapping scene contrast to tonal values in a print (assuming the print is the final product, which in many cases is no longer true). As explained in a white paper on the Adobe web site (which uses one of your pictures for illustration, by the way), even the best print has a contrast ratio of 500:1, whereas the original scene may have a contrast ratio of 100,000:1. As the author points out, the art of photography is the interpretation of the scene on the printed page. A literal representation is not possible.

http://www.adobe.com/digitalimag/pdfs/calibrating_digital_darkroom.pdf

In this re-mapping, it can be useful to divide the scene into 10 or so zones similar to what Adams used in the zone system and then deal with the zones digitally. One Photoshop plugin (which I have not used) goes so far as to place the zones (roman numerals and all) on its interface:

http://www.curvemeister.com/

As you rightfully stress, the process should begin with calibration of the light meter. In his The Negative, Adams precedes the zone discussion with calibration of the meter. Some concepts still apply. However, for digital, exposure should be based on Zone X, not Zone IV. With highlight recovery in ACR, some highlight clipping can be tolerated, and one can expose even more to the right for more dynamic range.

However, as Norman Koren points out: "In a photographic print, which has about a 100:1 luminance ratio, the eye can distinguish between 100 and 200 discrete luminance levels--fewer than the 256 available in 8-bit B&W or 24-bit color". In most cases one cannot really make full use of all those 2048 tones in the brightest stop of a 12-bit digital capture.

http://www.normankoren.com/digital_tonality.html
Schewe
« Reply #36 on: April 16, 2006, 01:16:57 PM »

Quote
As you rightfully stress, the process should begin with calibration of the light meter. In his The Negative, Adams precedes the zone discussion with calibration of the meter.
Well, I would argue that the calibration is not just the meter. . .but the entire exposure system, particularly the sensor and its actual vs. nominal sensitivity and the exact dynamic range it's capable of. Which is different from Adams' calibration of just the meter...but if you are talking about the concept of gaining control over the entire "system", then I agree.

Quote
In most cases one can not really make full use of all those 2048 tones in the brightest stop of a 12 bit digital capture .
No, you can't make use of -ALL- the levels of the brightest stop of detail, but with proper toning you -CAN- use a lot more than most people think they can. I would point people to an article on the Adobe site that talks about Highlight Recovery in Camera Raw (2.7MB PDF). (It will be updated for Camera Raw 3 as soon as Adobe gets around to posting the revised article--it's been done for a couple of months now.)

Quote
Why this fixation on Rags?
It's not so much that I have a fixation on Rags as that the article in question is the basis of this thread, so I like to try to stay on topic where I can...that, and the fact that Rags has gone to the lengths he has and I'm afraid too many people will drink his Kool-Aid, ya know?

:~)
« Last Edit: April 16, 2006, 01:17:33 PM by Schewe »
61Dynamic
« Reply #37 on: April 16, 2006, 02:42:18 PM »

Quote
However, as Norman Koren points out: "In a photographic print, which has about a 100:1 luminance ratio, the eye can distinguish between 100 and 200 discrete luminance levels--fewer than the 256 available in 8-bit B&W or 24-bit color". In most cases one cannot really make full use of all those 2048 tones in the brightest stop of a 12-bit digital capture.

http://www.normankoren.com/digital_tonality.html
Mr. Koren isn't fully accurate in that. While it's true the human eye can distinguish 100 to 200 discrete tones, they must also be distinct tones. The difference between 250 and 249, for example, is so small the eye won't notice it. There needs to be a difference of about three tonal values in a 24-bit image before you'll readily notice it. It would be more accurate to say the eye can distinguish only about 85 to 107 of the 256 available tones in a 24-bit image. That puts a 24-bit image at the low end of the 100-200 range the eye can distinguish.
bjanes
« Reply #38 on: April 16, 2006, 03:49:23 PM »

Quote
Well, I would argue that the calibration is not just the meter. . .but the entire exposure system, particularly the sensor and its actual vs. nominal sensitivity and the exact dynamic range it's capable of. Which is different from Adams' calibration of just the meter...but if you are talking about the concept of gaining control over the entire "system", then I agree.


Yes, I should have specified exposure system calibration, not merely meter calibration. In practice, the meter has to be reproducible but not necessarily well calibrated--one merely needs to know the exposure bias from the metered reading needed to give the maximum data number in the raw file (usually about 4095, but sometimes less) and use this to place the highlight.

Knowledge of the dynamic range is helpful so you will have some idea of the shadow tonal values and noise, but the effective dynamic range varies markedly with the ISO setting and noise characteristics of the camera. A lot of testing is needed to get these data, so many of us simply use the lowest practical ISO and hope for the best.
Ray
« Reply #39 on: April 18, 2006, 05:33:39 AM »

Quote
Ray, the ISO 1600 exposure will have markedly less dynamic range and much more noise than the ISO 100 image because with the short exposure in the ISO 1600 image, few photons were collected by the sensor, and the number of photons converted to electrons is the main determinant of noise and DR. For a scientific analysis of DR and noise, see this post and look at table 1:

The shadows and all other levels in the high ISO exposure do not receive the same amount of light as with the low ISO exposure--the high ISO exposure will be less for all levels.

I'm still travelling, so I don't often get on the net. I see you've misunderstood what I meant here. Exposure is determined by shutter speed and f-stop. No matter what the ISO, if the exposure is the same, then the same amount of light falls upon the sensor. An ISO setting is merely an instruction to the camera to amplify the signal and apply certain noise reduction processes. If you compare a 'correctly' exposed shot at ISO 1600 (exposed fully to the right) with the same shot at ISO 100 (same shutter speed and f-stop), the ISO 100 shot will appear to be underexposed. However, the sensor has received the same amount of light, yet shadow noise will be very much greater. This implies that noise reduction can be more successfully implemented when the signal is boosted.
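One plausible account of Ray's observation is that some of a camera's electronic noise is added after the ISO gain stage; referred back to the sensor, that downstream noise shrinks as the gain rises. The sketch below is a toy model with made-up parameter values, not a description of any particular camera:

```python
import math

# Toy noise model: the same light (same shot noise) read out at two ISO gains.
# Noise sources: photon shot noise and pre-amplifier read noise (in electrons),
# plus downstream electronics noise added AFTER the ISO gain stage.
# All parameter values are illustrative assumptions.

def shadow_snr(signal_e, gain, pre_noise_e=3.0, post_noise=12.0):
    shot = math.sqrt(signal_e)
    post_e = post_noise / gain  # downstream noise referred back to electrons
    total = math.sqrt(shot ** 2 + pre_noise_e ** 2 + post_e ** 2)
    return signal_e / total

signal = 100  # electrons in a deep shadow; same exposure for both shots
print("ISO 100  (gain  1x): SNR", round(shadow_snr(signal, gain=1.0), 2))
print("ISO 1600 (gain 16x): SNR", round(shadow_snr(signal, gain=16.0), 2))
```

With the same light on the sensor, the high-gain readout shows the better shadow SNR in this model, which is the direction of the effect Ray describes.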