Author Topic: Are Graduated ND filters required with digital backs?  (Read 1875 times)
MNG
« on: October 24, 2012, 06:19:05 PM »

Hi,

I am wondering whether graduated ND filters are still required with digital backs, or whether the better option would be to bracket exposures and then blend them in Photoshop?

Are there any disadvantages to using these types of filters, such as increased noise or colour casts?

Thank you
Michael
ChristopherBarrett
« Reply #1 on: October 24, 2012, 07:34:11 PM »

I personally prefer to blend a bracket with a gradient mask... more control. The big downside is that if you're shooting outside, you're bound to have tree branches and leaves that don't line up. Sometimes I'll take a single exposure and process it normally, then change the curve to "Linear Response", drop it a stop, re-process, and blend the two resulting files.
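CB's bracket-and-gradient-mask approach boils down to simple per-pixel arithmetic on two aligned frames. A minimal numpy sketch, assuming linear image data and a plain vertical ramp (the function name and mask parameters are illustrative, not from any actual tool):

```python
import numpy as np

def gradient_blend(bright, dark, start=0.25, end=0.75):
    """Blend two aligned exposures with a vertical gradient mask:
    the dark frame dominates at the top (sky), the bright frame
    at the bottom (foreground)."""
    h = bright.shape[0]
    rows = np.linspace(0.0, 1.0, h)
    # Mask ramps from 1 (take the dark exposure) down to 0 (take the
    # bright exposure) between fractional rows `start` and `end`.
    mask = np.clip((end - rows) / (end - start), 0.0, 1.0)
    mask = mask.reshape((h,) + (1,) * (bright.ndim - 1))
    return mask * dark + (1.0 - mask) * bright

# Two flat 4x4 "exposures": after blending, the top row matches the
# dark frame and the bottom row matches the bright frame.
bright = np.full((4, 4), 0.8)
dark = np.full((4, 4), 0.2)
out = gradient_blend(bright, dark)
```

The misaligned-branches problem CB mentions is exactly why a hard mask fails outdoors: any pixel where the two frames disagree (moving leaves) shows a seam, which is why a wide, soft ramp, or a single re-processed raw, is often the safer route.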
torger
« Reply #2 on: October 25, 2012, 03:34:51 AM »

I think graduated filters still have some merit, I use them myself sometimes.

One drawback is that graduated filters and filter holders are generally of somewhat poor quality: light leaks in the holder, uncoated filters, sometimes degraded sharpness, etc. I'd recommend Schneider MPTV coated glass filters for MF gear, although you can get the resin filters to work okay with proper shading, and by checking sharpness to make sure you have a good copy.

There is no drawback concerning sensor color casts etc., so no worries there. Resin filters can in themselves have a slight color cast (use Schneider MPTV glass if absolute neutrality is important), but that's a different issue.

When I use graduated filters I often shoot the LCC calibration shot with the filter on, so I can cancel out the graduated filter effect in post-processing and then add a new virtual filter which can be more precise. The graduated filter in the field helps me get an overall better exposure, i.e. less noise and more dynamic range. Trees etc. sticking up over the horizon line will end up on the dark side of the filter and thus be a bit underexposed, but since those features often end up quite dark in the finished picture anyway, it's not a big problem. The foreground, which may need quite a lot of pushing, gets better exposure. I also enjoy using the filters; optimizing exposure with graduated filters adds some fun to the shooting process.

I find it more pleasing to capture a scene in one shot when possible, and graduated filters help me do that more often. Sometimes I resort to bracketing and HDR merging, though: for example with very curvy mountain horizons when I need good exposure of the mountains, or when there's only a 20-second window or so to make the shot and I don't have time to fiddle with the filter holder, since bracketing goes quicker.

Whether you *need* to do this is very personal, since we all accept different levels of noise. In a print one can have quite high noise levels without it getting disturbing. I think color rendition dying off in underexposed areas is a larger problem than noise.

To make any real gain in exposure optimization you should use quite strong filters; a good basic filter is a 3-stop hard edge. You can use a hard edge even with wides if you make an LCC shot with it on, since then you can remove any visible transition and add a softer, more precise filter in post.
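torger's LCC workflow (calibration shot with the grad mounted, cancel it in post, then apply a cleaner virtual grad) is, at its core, a flat-field division followed by a synthetic transmission curve. A toy numpy sketch under that assumption; nothing here reflects any vendor's actual LCC implementation:

```python
import numpy as np

def apply_virtual_grad(image, lcc, strength_stops=3.0, edge=0.5):
    """Cancel the physical grad recorded in the LCC frame, then apply
    an ideal soft-edged virtual grad of `strength_stops` stops."""
    # Normalised flat-field: dividing by it cancels the physical
    # filter's transmission (and any lens falloff along with it).
    gain = lcc / lcc.max()
    flat = image / gain
    # Soft virtual grad: full strength at the top row, fading to
    # clear at the `edge` fraction of the frame height.
    rows = np.linspace(0.0, 1.0, image.shape[0])
    ramp = np.clip(1.0 - rows / edge, 0.0, 1.0)
    transmission = 2.0 ** (-strength_stops * ramp)
    return flat * transmission[:, None]

# Simulated 1-stop physical grad over the top half of a flat scene:
lcc = np.ones((4, 4))
lcc[:2] *= 0.5
scene = np.full((4, 4), 0.8) * (lcc / lcc.max())
out = apply_virtual_grad(scene, lcc)
```

The point of the trick is visible in the arithmetic: the hard physical edge disappears entirely in the division, so its placement in the field only affects noise, not the final rendering.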

I've been planning to do more formal testing on this and publish an article showing the results, but I have not yet had time.
ChristopherBarrett
« Reply #3 on: October 25, 2012, 08:22:41 AM »

> I think graduated filters still have some merit, I use them myself sometimes.
>
> One drawback is that graduated filters and filter holders are generally of somewhat poor quality

That drives me nuts. I've been trying to figure out a way to mount my Arri matte box on my Arca. ;)
Scott Hargis
« Reply #4 on: October 25, 2012, 10:13:29 AM »

I use my Schneider grad ND all the time. In some cases, it's because I like to get everything done in-camera, but in extreme cases, it's just getting me to a point where the software can take over and finish the job.
Don Libby
« Reply #5 on: October 25, 2012, 11:03:37 AM »

Required? No. Nice to have? Maybe.

For me it all depends on the situation at hand. I've found that while I have filters, I haven't used them as much as I used to, instead either bracketing the shot or doing it in post, much like CB.

I also agree with the comment about the generally poor quality of filters and holders, although there are some good ones out there.

My preference is to get the shot in camera without the use of a filter and then worry about it in post. It's just one less thing I have to pack and handle while on the road, and one less thing to worry about getting smudges, dust, and fingerprints on.

Just my 2¢

Don 

ErikKaffehr
« Reply #6 on: October 25, 2012, 11:40:38 PM »

Hi,

I'm not actually shooting digital MF; my experience is with DSLRs. In the film days I used MF (mostly Velvia on a Pentax 67) and I used grad filters (from Cokin). The loss of sharpness with those filters was evident.

When I moved to digital I never felt the need for graduated filters. Digital is also improving all the time. In the photography I'm doing, which is in no way extreme, I seldom get a better image from HDR than what I eke out of a correct ETTR exposure.

One major issue with a graduated filter is that it is not very flexible; it cannot handle an uneven horizon, for instance a V-shaped valley between two mountains. Lightroom 4 has a graduated filter which is great. What I like is that it offers highlight compression, saturation and clarity. So what I do is use very little exposure reduction, but a lot of highlight compression, plus some saturation and clarity. This works very well for me.

Best regards
Erik


torger
« Reply #7 on: October 26, 2012, 03:15:43 AM »

It's great that there are so many ways to work. Shadows/highlights controls in the raw converters are very good these days, and cameras have good dynamic range.

In the field I use graduated filters when it fits the scene and situation, sometimes bracketing.

In post-processing I work differently depending on what the image is going to be used for. For quickies I rarely care to merge to HDR or process in any software other than the raw converter. However, when I make an image that is to be framed as a fine art print I use more work-intensive methods, and then I can use that extra raw file to get that little bit of extra tonal quality in the (pushed) dark parts of the image. In those cases I also like to use Photoshop and similar software with masks and adjustment layers, so that I am making the adjustments with methods I fully understand rather than a magic slider in the raw converter. Software that's too automatic and comfortable and uses lots of secret sauce makes me nervous: am I the artist, or is it the algorithm's programmer?

Attached is an example of a scene with a backlit sky where grads did not work out, but I bracketed one extra shot. One image is the "out of camera" version of the darker exposure, ETTR'd for the highlights; the other is a post-processed version. The camera has enough dynamic range that I could use only the darker exposure if I wanted to, but for the example I used both to get that little extra in tonal quality. Here I used a photo editor with masking so I could shift the sky down a stop or two without any side effects on local contrast or saturation, which gives a very "true to the eye" starting point, which I prefer.

Ok, this became a bit off-topic, but just like the different ways of doing post-processing, I think using grads or not is much more an artistic preference than an absolute need. Photographic problems can nowadays be solved in several different ways, and we all have our preferences on which methods we like and in what situations.

To really kill off grads from a quality standpoint I think we need cameras that extend capacity in the other direction: higher full well capacity. If we had 2-3 more stops than we have today I would probably stop using grads. What grads allow you to do is lengthen the shutter speed so you capture more photons in the darker areas. Almost all the dynamic range improvement we've seen in sensors over the last 5 years or so has come from reducing noise in the electronics; the ability to capture more photons per pixel has not increased as much, and has sometimes even decreased, which means that we don't capture that many photons in the darker stops. Whether a camera has 12 or 14 stops of engineering dynamic range does not matter much at all for fine art, since the darker stops are "half-dead" in terms of color anyway and one would not want to push them much.
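The "engineering dynamic range" figure torger refers to is conventionally log2(full-well capacity / read noise), which shows why quieter electronics raise the number without a single extra photon being captured. A quick illustration with round, hypothetical electron counts (not any particular back's spec):

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Engineering DR in stops: log2 of the ratio between the largest
    recordable signal and the read-noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Same 50,000-electron well; only the read noise improves.
old = engineering_dr_stops(50_000, 25)   # noisier electronics
new = engineering_dr_stops(50_000, 12)   # quieter electronics
gain_stops = new - old                   # extra stop comes entirely from electronics
```

This is torger's point in numbers: the extra stop lives entirely in the deep shadows, where shot noise and weak color response still dominate, so a bigger DR figure does not buy the cleaner shadow exposure that a grad or a longer shutter speed does.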
ErikKaffehr
« Reply #8 on: October 26, 2012, 03:43:05 AM »

Hi Anders,

I don't want to argue the issue, and it is obvious that you are right that DR extension comes from lowering readout noise.

What I would like to point out is that it is not about the FWC (full well capacity) per pixel but about the total FWC of the sensor. I actually think we may have seen some development in that direction. According to sensorgen the D4 has a FWC of 117,813 electrons, which is large; the P65+ is said to be 53,019 according to the same data. On the other hand, I don't know if the sensorgen info is correct.
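Erik's per-sensor comparison is quick arithmetic on the sensorgen per-pixel figures quoted above (which, as noted, may not be reliable); total FWC is simply per-pixel FWC times pixel count:

```python
# Per-pixel FWC (electrons, sensorgen figures as quoted) x pixel count.
d4_total = 117_813 * 16_200_000    # Nikon D4, ~16.2 MP
p65_total = 53_019 * 60_500_000    # Phase One P65+, ~60.5 MP
ratio = p65_total / d4_total
```

So despite the D4's much larger per-pixel well, the P65+ has roughly 1.7x the total capacity, which is the point about judging the whole sensor rather than the pixel.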

Best regards
Erik



torger
« Reply #9 on: October 26, 2012, 05:25:14 AM »

You are correct of course, but I currently look at megapixels this way: if I buy a system that has 40 megapixels instead of 20, I want it to have the same quality per pixel, so that up close they look about the same. I'm a print-peeper :-), i.e. I want large prints to look good up close. If 40 megapixels did not mean I could get the same quality as two 20-megapixel images side by side, I'd be a bit disappointed.

At some point cameras will have so many megapixels it becomes irrelevant and then we'll start to use other units. Concerning FWC maybe we'll use well capacity per square mm.

The photojournalist cameras have very large pixels (by today's standards) and get a large full well capacity. Those cameras are optimized for high-ISO performance, though; it would be interesting to see the same development in MFD backs. The old 22-megapixel KAF-22000, with its 9x9 um pixels, stored 100,000 electrons per pixel, which is about the same as the D4. The IQ180 I would guess has less than half (which is only a one-stop difference, though...).

I kind of liked the old situation where MF gave you larger sensor area and higher resolution and larger pixels and larger FWC. Today I think they have gone a bit too far too early on the pixel count side, but I guess that is what sells.

ErikKaffehr
« Reply #10 on: October 29, 2012, 12:53:26 AM »

Hi Anders,

My take on the issue is that more pixels are a good thing. Increasing the number of pixels will lose some DR (because readout noise is not ideally handled when binning in software), but the Sensor+ technology in the Phase One "+" backs takes care of the DR issue by binning at high ISO.

Smaller pixels have many advantages. There is less need for OLP filtering, as both the lens and diffraction act as an OLP filter if the pixels are small enough. Or, if OLP filtering is kept, less microcontrast is lost at the image level. I'm also pretty sure sharpening works better.

As an example, Tim Parkin tested the new Nikon D800 and found that he could use f/22 on the D800 and still get sharper pictures than at f/8 on his Sony Alpha 900. The reason was that the Nikon responded better to sharpening. I might add that I'm not sure I draw the same conclusions from his article. Anyway, I do believe that more pixels are a good thing unless you are giving up too much DR.

If you check the two DxO diagrams below you will note the effect of Sensor+ on DR: there is a notch where Sensor+ kicks in. On the tonal range curve, which is essentially about shot noise, Sensor+ has no effect.

No, I don't have experience with the IQ180. It would be nice, but it's far too expensive.

Best regards
Erik




torger
« Reply #11 on: October 29, 2012, 02:31:00 AM »

> My take on the issue that more pixels are a good thing.

It may be so, but for us tech cam guys there's an additional issue: color cast! It's probably not impossible to make small pixels that can handle light from low angles, but so far the pattern has been smaller pixels = more color cast. The IQ180 is much worse than the IQ160, for example; over the top, I'd say. Many more high-end tech cam users would probably have chosen the IQ160 if it weren't for the attractive upgrade deals, which went from the P65+ to the IQ180, not to the IQ160.

With large color cast issues, movements are more limited and one cannot use the traditional simple large format lens designs, but must move to more complex designs which have some disadvantages: higher cost, more weight and more distortion.
ondebanks
« Reply #12 on: October 29, 2012, 07:17:04 AM »

> Almost all dynamic range improvements we've seen in the last 5 years or so in the sensors is reduction in noise from the electronics, the ability to capture more photons per pixel has not increased as much, sometimes even decreased

It's worse than "sometimes even decreased" - it's actually "nearly always decreased".

Of course, you expect that as a general rule, a smaller pixel will have a smaller full well capacity. So for a fairer comparison, we should normalize the FWCs to some standard area, like "per 9-microns squared" (the exact area value doesn't matter - just pick one and stick with it). I did this some time back, for all the MFD sensors with published datasheets...

And what I found is that even with the playing-field leveller of area normalization, the trend with time is downwards.
In general, from lowest/worst to highest/best, the normalized FWCs per 9-microns squared run as follows:

Kodak 6 micron sensors ~ 62k
Dalsa 7.2 micron sensors ~ 69k [but 33MP FTF5066C only 60k]
Dalsa 6 micron ~ 75k
Kodak 6.8 micron ~ 79k
Kodak 9 micron ~ 100k
Fill Factory 11.4 micron ~ 110k
Dalsa 9 micron ~ 170k [but FTF4027C only 76k, so big variation]
Dalsa 12 micron ~ 262k

As you can see, there is a tendency for a given Dalsa generation to exceed a given Kodak generation. But counterbalancing that, the opposite is true of readout noise: Kodak tends to beat Dalsa. So a similar overall DR tends to emerge.
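Ray's normalization is a linear rescaling of each FWC by the ratio of pixel areas. A one-function sketch (the 41,000-electron input is a placeholder, not a datasheet value; it is chosen so the output lands near the ~92k Kodak 6-micron figure in the corrected list later in the thread):

```python
def normalize_fwc(fwc_e, pitch_um, ref_pitch_um=9.0):
    """Scale a per-pixel full-well capacity to a reference pixel area;
    FWC scales linearly with area, i.e. with pitch squared."""
    return fwc_e * (ref_pitch_um / pitch_um) ** 2

# A hypothetical 6-micron pixel holding 41,000 electrons:
norm = normalize_fwc(41_000, 6.0)   # 41,000 x (9/6)^2 = 92,250
```

The exact reference area is arbitrary, as Ray says; only consistency matters, since every entry scales by the same rule.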

> At some point cameras will have so many megapixels it becomes irrelevant and then we'll start to use other units. Concerning FWC maybe we'll use well capacity per square mm.

That's basically what I'm doing above.

> I kind of liked the old situation where MF gave you larger sensor area and higher resolution and larger pixels and larger FWC. Today I think they have gone a bit too far too early on the pixel count side, but I guess that is what sells.

That's certainly a factor, but I also guess that is what they are stuck with selling - as they don't command the sensor market.

The MFD sector has always been good at marketing the things that they can do nothing about, as though they were carefully conceived and deliberately planned advantages. For example: "we have fat pixels!" - at a time when there were no other large sensors but ones with fat pixels!

Ray
ErikKaffehr
« Reply #13 on: October 29, 2012, 03:47:50 PM »

Hi,

The figures you give for the 6-micron sensors are pretty low (27k and 33k respectively, unnormalized). Where did you get the figures from? I have not found data for those sensors, probably because I was not looking in the right place.

The reason I'm somewhat surprised is that these figures are less than for the 4.7-micron CMOS in the Nikon D800, which should have 45k according to sensorgen (although I'm somewhat skeptical of sensorgen figures).

Best regards
Erik


ondebanks
« Reply #14 on: October 29, 2012, 05:49:35 PM »

> The figures you give on 6 micron sensors are pretty low (27k resp 33k unnormalized).

Oops, you're right Erik - thanks for checking the figures. I had copied my formula for read-noise normalization and forgotten to take out the sqrt() factor. With the corrected formula, the picture changes somewhat:

Dalsa 7.2 micron sensors ~ 86k [but 33MP FTF5066C only 74k]
Fill Factory 11.4 micron ~ 87k
Kodak 6 micron sensors ~ 92k
Kodak 9 micron ~ 100k
Kodak 6.8 micron ~ 105k
Dalsa 6 micron ~ 112k
Dalsa 9 micron ~ 170k [but FTF4027C only 76k, so big variation]
Dalsa 12 micron ~ 197k

The effect of the correction is to pull back the largest pixels (> 9 micron) and raise the smaller pixels.
Again, Dalsa's overall trend is still downwards with time.
Again, each Dalsa generation still does better in general than each Kodak one, apart from the oddly poor numbers for the 7.2 micron Dalsas.
Kodak's numbers cluster together more, and the initial modest improvement from 9 to 6.8 microns was followed by a bigger step backwards from 6.8 to 6 microns.
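The error Ray describes is worth seeing side by side: capacity (a signal) normalizes linearly with area, while read noise (adding in quadrature when pixels are aggregated) normalizes with the square root of the area ratio. A sketch using a placeholder 41,000-electron, 6-micron pixel; note that the mistaken sqrt() version lands near the ~62k of the first list, and the corrected version near the ~92k above:

```python
import math

def norm_fwc(fwc_e, pitch_um, ref=9.0):
    # Full-well capacity scales with pixel area (pitch squared).
    return fwc_e * (ref / pitch_um) ** 2

def norm_read_noise(rn_e, pitch_um, ref=9.0):
    # Read noise of aggregated pixels adds in quadrature, so it
    # scales with the square root of the area ratio.
    return rn_e * math.sqrt((ref / pitch_um) ** 2)

# Hypothetical 6-micron pixel with a 41,000 e- well:
wrong = 41_000 * math.sqrt((9.0 / 6.0) ** 2)   # sqrt() left in: 61,500
right = norm_fwc(41_000, 6.0)                  # corrected: 92,250
```

Because the stray sqrt() understates small pixels and overstates large ones, removing it both raises the sub-9-micron entries and pulls back the 12-micron Dalsa, exactly the shift between the two lists.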

> Where did you get the figures from? I have not found data for those sensors, probably because I was not looking at right place.

I got all the Kodak and Dalsa ones on their respective websites. The Fill Factory figures were in an academic paper of theirs.

In the case of 4 of the Dalsa sensors they don't state the FWCs in electrons, preferring for some reason to use voltage units, but it's easy to calculate them from the conversion gain.
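The voltage-to-electrons conversion Ray mentions is a single division: saturation output voltage over the conversion gain in (micro)volts per electron. Illustrative numbers only, not taken from any actual Dalsa datasheet:

```python
def fwc_from_voltage(v_sat_uv, gain_uv_per_e):
    """Full-well capacity in electrons, from a datasheet that quotes
    saturation as a voltage swing plus a conversion gain."""
    return v_sat_uv / gain_uv_per_e

# Hypothetical 1.2 V saturation swing at 24 uV per electron:
fwc = fwc_from_voltage(1_200_000, 24.0)   # 50,000 electrons
```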

The 60MP Dalsa datasheet (for the sensor used in the P65+/IQ180/Credo 60) has only been made public in recent weeks. It actually has microlenses! How strange that this information is totally absent from all the Phase One documentation on the backs using this sensor. I don't think even the Phase One dealer/experts here (Doug, Steve etc.) knew about that: when describing which backs are suitable for view camera use and why, the Kodak 18MP and 31MP microlensed sensors were the only ones they would rule out.

Thanks again for spotting my normalization error.
Ray
ErikKaffehr
« Reply #15 on: October 29, 2012, 09:32:59 PM »

Hi,

It's interesting that you found out about the microlenses. Torger pointed out that the IQ180 has many more issues with lens cast than the IQ160, and I was surprised at the difference.

I would also like to thank you for all your good contributions.

Best regards
Erik

