Pages: « 1 2 3 [4] 5 »
Author Topic: Nikon D800E - Amazing Resolution ...but "Houston, We Have A Problem!"  (Read 17337 times)
BartvanderWolf
« Reply #60 on: June 05, 2012, 08:57:59 AM »

Quote
I wonder whether lensezone.de will use a D800E to test new lenses. Aliasing can be used as a measure of lens sharpness if the input signal is known beforehand.

The false color artifacting is also very much Raw-converter dependent. I don't think it would be useful to add Raw-converter effects to the mix when one really only wants to test the lens. The sensel pitch is already a variable in lens testing, so IMHO they might want to settle for the D800 instead.

Cheers,
Bart
Ray
« Reply #61 on: June 05, 2012, 09:10:17 AM »

Quote
The false color artifacting is also very much Raw-converter dependent. I don't think it would be useful to add Raw-converter effects to the mix when one really only wants to test the lens. The sensel pitch is already a variable in lens testing, so IMHO they might want to settle for the D800 instead.

Bart,
I've always found that the average test chart, with its variably spaced B&W lines, can produce a blaze of color artifacts visible through the optical viewfinder (or on the LiveView screen, if the camera has one), and that this is an indication of accurate focussing, even with cameras that do have an AA filter.

I'm not sure what causes such pronounced artifacts through a viewfinder, but I remember returning my first copy of the Canon EF-S 10-22mm, which I bought for use with my 20D, because the lens would show such artifacts only when manually focussing on the Norman Koren test chart I was using. As soon as I depressed the shutter button halfway, the artifacts would disappear.
hjulenissen
« Reply #62 on: June 05, 2012, 09:24:38 AM »

Quote
The false color artifacting is also very much Raw-converter dependent. I don't think it would be useful to add Raw-converter effects to the mix when one really only wants to test the lens. The sensel pitch is already a variable in lens testing, so IMHO they might want to settle for the D800 instead.

Cheers,
Bart
If the test input is monochrome (black lines on white background), there is no color (information) to record. If need be, this can be baked into a dedicated "demosaic" algorithm; that is, input and output could both be monochrome. One would need some kind of "white balance" that normalizes the color-channel signal levels. Hopefully this would be a global parameter.
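That monochrome "demosaic" idea can be sketched in a few lines of numpy. Everything below is simulated: the bar-chart scene, the RGGB layout, and the per-channel gains are illustrative assumptions, not measurements from any real camera.

```python
import numpy as np

# Simulated monochrome test target: vertical bars, 4 px light / 4 px dark.
scene = np.tile(np.tile(np.repeat([1.0, 0.1], 4), 16), (128, 1))

# Simulated RGGB Bayer capture with hypothetical per-channel sensitivities.
gains = {'R': 0.5, 'G': 0.8, 'B': 0.4}
mosaic = np.empty_like(scene)
mosaic[0::2, 0::2] = scene[0::2, 0::2] * gains['R']
mosaic[0::2, 1::2] = scene[0::2, 1::2] * gains['G']
mosaic[1::2, 0::2] = scene[1::2, 0::2] * gains['G']
mosaic[1::2, 1::2] = scene[1::2, 1::2] * gains['B']

# "Global white balance": normalize each channel by its own mean so all
# sensels sit on a common luminance scale -- no demosaicing required.
balanced = mosaic.copy()
for r, c in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    ch = balanced[r::2, c::2]
    balanced[r::2, c::2] = ch / ch.mean()

# Because the scene truly is monochrome, the normalized mosaic is a
# full-resolution luminance image (up to one global scale factor).
assert np.allclose(balanced * scene.mean(), scene)
```

The single global scale factor is the "global parameter" hoped for above; the sketch works only because every channel sees the same luminance statistics.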

I am suggesting this only as a means to investigate current lenses' performance close to the limit of today's sensor resolution.

-h
« Last Edit: June 05, 2012, 09:39:11 AM by hjulenissen »
ErikKaffehr
« Reply #63 on: June 05, 2012, 12:11:39 PM »

Hi,

No issue ;-)

What I have seen is that downsampling maintains edge contrast but loses resolution and may introduce fake detail.

Best regards
Erik

Quote
Yes, I know. Just having a bit of fun. Hope Erik forgives me.  Grin

The following image is a comparison of the bottom left corners showing the D800E downsampled to the D700 size. I've also shown the full corners, edge to edge, which demonstrates the slight advantage I've given to the D700.

Downsampling a higher resolution image throws away image information. Upsampling a lower resolution image retains all the initial data. I think upsampling is the more truthful comparison.
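The point about downsampling versus upsampling can be illustrated with a contrived 1-D numpy example (the signals are made up purely for illustration):

```python
import numpy as np

a = np.array([0.0, 1.0, 0.0, 1.0])   # fine detail at the sample pitch
b = np.array([0.5, 0.5, 0.5, 0.5])   # flat gray

# 2x box downsampling: both signals collapse to the same result, so the
# detail that distinguished them is gone for good.
down = lambda x: x.reshape(-1, 2).mean(axis=1)
assert np.allclose(down(a), down(b))

# 2x nearest-neighbour upsampling: every original sample survives
# verbatim, so the operation is exactly invertible.
up = lambda x: np.repeat(x, 2)
assert np.allclose(up(a)[::2], a)
```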

BartvanderWolf
« Reply #64 on: June 05, 2012, 12:41:09 PM »

Quote
If the test input is monochrome (black lines on white background), there is no color (information) to record.

Unfortunately, that is not how Bayer CFA decoding sees things. The most robust decoding is achieved by properly pre-filtering the image before sampling, and given that the average OLPF is not strong enough to avoid all aliasing, there will usually already be some false color artifacting. There is no need to add even more artifacts, because the sampling density (which is identical between the D800 and D800E) will result in virtually identical resolution.
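A toy numpy sketch of why a CFA sees things this way: a perfectly monochrome pattern at the sensel-pitch Nyquist frequency lands entirely on one set of color sites, which a demosaicer can only interpret as color (unity channel gains and no OLPF assumed, for simplicity):

```python
import numpy as np

# Monochrome stripes exactly at the sensel-pitch Nyquist frequency:
# columns alternate white / black.
scene = np.zeros((8, 8))
scene[:, ::2] = 1.0

# RGGB Bayer sampling (unity channel gains for simplicity).
R = scene[0::2, 0::2]    # red sites land only on white columns
G = (scene[0::2, 1::2].mean() + scene[1::2, 0::2].mean()) / 2
B = scene[1::2, 1::2]    # blue sites land only on black columns

# A neutral scene should give equal channel averages; instead the
# at-Nyquist detail produces a gross red/blue imbalance, i.e. false color.
print(R.mean(), G, B.mean())   # 1.0 0.5 0.0
```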

Also, the effect of varying amounts of defocus between the different lens tests is probably larger than the difference between these two cameras. It requires very accurate and systematic testing to avoid focus errors, and a huge number of images to find the best one.

Cheers,
Bart
Fine_Art
« Reply #65 on: June 05, 2012, 02:12:05 PM »

Quote
If the test input is monochrome (black lines on white background), there is no color (information) to record. If need be, this can be baked into a dedicated "demosaic" algorithm; that is, input and output could both be monochrome. One would need some kind of "white balance" that normalizes the color-channel signal levels. Hopefully this would be a global parameter.

I am suggesting this only as a means to investigate current lenses' performance close to the limit of today's sensor resolution.

-h

We have already proven that complete systems are able to produce images very close to Nyquist. There is no point in yet another test.
Ray
« Reply #66 on: June 05, 2012, 07:09:42 PM »

Quote
Hi,

No issue ;-)

What I have seen is that downsampling maintains edge contrast but loses resolution and may introduce fake detail.

Best regards
Erik


Hi Erik,
So far, I haven't noticed any fake detail from the D800E, which is not to say it isn't there. However, if it's not noticeable or obvious, then it's not a problem for me.

The next set of tests will be more stringent, comparing the D7000 with the D800E using the same lens at the same, and sharpest, aperture from the same position, and using a tripod and LiveView. Since the D800E will replace both my D700 and D7000, I'd like to see if there's any noticeable advantage in that lack of an AA filter.

Cheers!

hjulenissen
« Reply #67 on: June 06, 2012, 02:04:35 AM »

Quote
Unfortunately, that is not how Bayer CFA decoding sees things. The most robust decoding is achieved by properly pre-filtering the image before sampling, and given that the average OLPF is not strong enough to avoid all aliasing, there will usually already be some false color artifacting. There is no need to add even more artifacts, because the sampling density (which is identical between the D800 and D800E) will result in virtually identical resolution.
A Bayer CFA decoder written by me or you can "see" things however it likes; the only fundamental limitations are the information it receives and our skill, and both can be very real limitations. Imagine for a second that there was no CFA, just a regular sampling device (like an A/D converter). The regular setup is that you have a (non-realizable) brickwall lowpass filter in front that removes everything >= fs/2, and Harry Nyquist and Claude Shannon take care of the rest: the (bandlimited) waveform can be perfectly recreated.
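A minimal numpy sketch of that classical setup, with arbitrary illustrative frequencies: a signal well below fs/2 is rebuilt at off-grid instants from its samples alone, via Whittaker-Shannon (sinc) interpolation.

```python
import numpy as np

fs = 100.0                         # sampling rate, so fs/2 = 50
t = np.arange(0, 1, 1 / fs)       # sample instants
x = np.sin(2 * np.pi * 7.0 * t)   # bandlimited: 7 Hz, well below fs/2

# Whittaker-Shannon interpolation: recreate the waveform between the
# samples, using only the samples.
t_fine = np.arange(0.2, 0.8, 0.003)
recon = np.array([np.sum(x * np.sinc(fs * (tf - t))) for tf in t_fine])
err = np.max(np.abs(recon - np.sin(2 * np.pi * 7.0 * t_fine)))
# err is small, limited only by truncating the sinc series to a 1 s window
assert err < 0.1
```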

There is, however, nothing in sampling theory that requires the filter to be a lowpass filter. It can equally well be a bandpass filter (or possibly something else). This means that rapid variations can still be precisely measured/sampled at a relatively slow sampling rate, provided the bandwidth is < fs/2 and the "carrier frequency" is known. Usually these conditions are not known in imaging, but when designing test charts we have a great deal of freedom.
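The bandpass-sampling claim can be checked numerically; the 90 Hz and 40 Hz figures below are arbitrary choices for illustration:

```python
import numpy as np

fs = 40.0                                      # deliberately far below 2 x 90 Hz
n = np.arange(64)
carrier = np.sin(2 * np.pi * 90.0 * n / fs)    # 90 Hz test signal
alias = np.sin(2 * np.pi * 10.0 * n / fs)      # where it folds: |90 - 2*fs| = 10 Hz

# The two sample sequences are identical: the 90 Hz tone is recorded
# without loss, it merely appears at 10 Hz. If the carrier frequency is
# known a priori (as it can be on a designed test chart), the folded
# measurement is exactly as informative as a "properly" sampled one.
assert np.allclose(carrier, alias)
```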

So far, I have argued (the equivalent of) the claim that a non-CFA, non-OLPF camera can be a good tool for estimating the MTF of lenses close to fs/2. I believe that the addition of a CFA can effectively be negated by having close to zero color variation in the scene and by taking care to expose all three color channels sufficiently. "White" paper with "black" print, lit by a suitable light source, should be sufficient.
Quote
Also, the effect of varying amounts of defocus between the different lens tests is probably larger than the difference between these two cameras. It requires very accurate and systematic testing to avoid focus errors, and a huge number of images to find the best one.
I agree that this can be a problem, but wouldn't it be a problem with a hypothetical future 54MP D900(E) as well? Anyone who contemplates buying expensive lenses expecting to use them with several generations of DSLR bodies might want to know such things.

-h
hjulenissen
« Reply #68 on: June 06, 2012, 02:07:19 AM »

Quote
We have already proven that complete systems are able to produce images very close to Nyquist. There is no point in yet another test.
Are there reliable numerical measurements of MTF at 0.9x Nyquist or 0.99x Nyquist for 36MP 24x36mm sensors for the lenses out there?

-h
KevinA
« Reply #69 on: June 06, 2012, 04:58:59 AM »

Every time since the birth of digital photography, whenever a camera with more pixels is introduced we get the cry of "it really shows up your lenses" or "this lens no longer works", etc.
If 36MP did not show up differences over 20MP, there would be little point in more megapixels. So a 100% on-screen view of 36MP shows more detail and highlights existing deficiencies. What the F*** does anyone expect?
Is the point of these articles to draw attention to the obvious, or is it for someone to brag about their gear or superior quality standards?
If anyone is expecting future megapixel increases not to show up increased resolution and lens imperfections, I have bad news for you... they will.
And, just as always since the year dot in photography, the best results are had by using low ISO, stopping down a few stops and using a tripod.
The wheel is still to be reinvented.

Kevin.
Tony Jay
« Reply #70 on: June 06, 2012, 05:12:05 AM »

Kevin, I think you have a point in general.
It must be remembered, though, that most lens manufacturers have been forced to redesign and rebuild their lenses at least to try to get them to focus properly, edge to edge, on a very thin plane (the sensor) that is much thinner than any film emulsion ever was.
Some of the other issues may have to be dealt with via lens profiles and fixed electronically.

As for optimising image quality when shooting - all your suggestions and more are vital.

Regards

Tony Jay
BartvanderWolf
« Reply #71 on: June 06, 2012, 05:13:07 AM »

Quote
I agree that this can be a problem, but wouldn't it be a problem with a hypothetical future 54MP D900(E) as well? Anyone who contemplates buying expensive lenses expecting to use them with several generations of DSLR bodies might want to know such things.

Hi,

All they need to realise is that a sensor with a higher sampling density will extract more performance from a given lens. How that works out exactly is the result of combining the (optical + sampling) MTFs, so improving one component allows one to better approach the other component's maximum. The combined MTF will always be lower than the maximum performance of the best component alone.
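The MTF cascade can be sketched in numpy; the linear lens curve is an arbitrary assumption, and the sensel term is the ideal 100%-fill square-aperture model:

```python
import numpy as np

f = np.linspace(0.0, 1.0, 11)                 # spatial frequency, cycles/sensel pitch
mtf_lens = np.clip(1.0 - f / 1.2, 0.0, 1.0)   # assumed, simplified lens MTF
mtf_sensel = np.abs(np.sinc(f))               # ideal 100%-fill square sensel aperture

# Cascaded linear systems multiply their MTFs, so the combined response
# can never exceed that of either component on its own.
mtf_system = mtf_lens * mtf_sensel
assert np.all(mtf_system <= np.minimum(mtf_lens, mtf_sensel) + 1e-12)
```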

Therefore your original remark, that current lens tests could be augmented by adding a D800/D800E type of sensor resolution (sampling density), makes sense in that it will give a more accurate idea of what the lens is capable of and where its limits are. However, IMHO the lack of an AA filter will only complicate the interpretation of limiting-resolution results, because in real life the inevitable aliasing will no longer be separable from real detail and noise. It adds aliasing to the lens evaluation data, and it doesn't add meaningful resolution (which is determined by the sensel pitch).

So while the higher MTF near Nyquist may be helpful in certain imaging scenarios (and not in others), it won't be helpful in testing lens performance.

Cheers,
Bart
BartvanderWolf
« Reply #72 on: June 06, 2012, 06:56:56 AM »

Quote
Are there reliable numerical measurements of MTF at 0.9x Nyquist or 0.99x Nyquist for 36MP 24x36mm sensors for the lenses out there?

Here is a start ...

Cheers,
Bart
hjulenissen
« Reply #73 on: June 06, 2012, 07:19:07 AM »

Quote
IMHO the lack of an AA filter will only complicate the interpretation of limiting-resolution results, because in real life the inevitable aliasing will no longer be separable from real detail and noise. It adds aliasing to the lens evaluation data, and it doesn't add meaningful resolution (which is determined by the sensel pitch).
As I was trying to say, "aliased" data _can_ be as meaningful as "non-aliased" data, but in many practical cases it is not. I think you may be confusing your practical experience ("aliasing does not give me any more true image detail") with theory ("aliasing does not contain information"). It all depends on knowledge about the process that generated the data.

When doing multi-image super-resolution, one depends on minute spatial shifts and aliasing to reveal finer detail than the sensor could reveal directly. In other words, aliasing, together with knowledge of the capture process, results in a better reconstruction.

My suggestion is similar to super-resolution, only the scene is constrained instead of using multiple images plus aliasing (SR). If the test target is designed so that there is no signal to confuse with the aliasing, then one is left with only the aliasing plus noise.
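The shift-plus-aliasing mechanism can be sketched as an idealized 1-D shift-and-interleave in numpy: each capture is heavily aliased on its own, yet with exactly known sub-pixel shifts the full-resolution signal is recovered. The decimation factor and shifts are contrived for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
hr = rng.standard_normal(64)   # "scene" with detail beyond each capture's Nyquist

D = 4
# Four captures, each shifted by one HR sample (a sub-pixel shift at the
# low resolution) and sampled without prefiltering -- each is aliased.
captures = [hr[k::D] for k in range(D)]

# Knowing the shifts, interleave the aliased captures: the full-resolution
# signal comes back exactly. The aliasing carried the information.
recovered = np.empty_like(hr)
for k in range(D):
    recovered[k::D] = captures[k]
assert np.allclose(recovered, hr)
```

Real SR algorithms must also estimate the shifts and cope with noise; this sketch only shows that aliased samples plus capture knowledge suffice in principle.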

-h
« Last Edit: June 06, 2012, 07:23:16 AM by hjulenissen »
BartvanderWolf
« Reply #74 on: June 06, 2012, 07:34:41 AM »

Quote
When doing multi-image super-resolution, one depends on minute spatial shifts and aliasing to reveal finer detail than the sensor could reveal directly. In other words, aliasing, together with knowledge of the capture process, results in a better reconstruction.

I think that is a misunderstanding of how super-resolution works. It's not the aliasing that helps (on the contrary). Super-resolution is useful when the image detail itself is lacking (low-pass limited), and the procedure depends on multiple slightly different alignments of image detail with the sampling grid. Aliasing never helps in reconstructing detail; it only obscures real detail at various lower spatial frequencies.

Cheers,
Bart
hjulenissen
« Reply #75 on: June 06, 2012, 12:51:24 PM »

Quote
I think that is a misunderstanding of how super-resolution works. It's not the aliasing that helps (on the contrary). Super-resolution is useful when the image detail itself is lacking (low-pass limited), and the procedure depends on multiple slightly different alignments of image detail with the sampling grid. Aliasing never helps in reconstructing detail; it only obscures real detail at various lower spatial frequencies.

Cheers,
Bart
This is what the contact person at PhotoAcute software told me:
Quote
Removal of AA-filter should drastically improve the gain of our software. But, a special profile will be required for processing images taken with a camera with removed filter.

Does your Canon 7D already have AA-filter removed?

This is what Wikipedia states:
http://en.wikipedia.org/wiki/Super-resolution
Quote
In the most common SR algorithms, the information that was gained in the SR image was embedded in the LR images in the form of aliasing. This requires that the capturing sensor in the system is weak enough that aliasing is actually happening. A diffraction-limited system contains no aliasing, nor does a system where the total system Modulation Transfer Function is filtering out high-frequency content.

What you are describing cannot be improved using super-resolution, AFAIK, at least not with established, generic algorithms. When you apply proper lowpass filtering in front, the signal is already bandlimited. Shifting the sensor by a tiny amount does not record any new information, and therefore it is hard to see a significant gain from combining multiple images (besides SNR improvements).
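That "nothing new is recorded" claim can be checked numerically with a contrived bandlimited signal: the samples taken after a sub-sample shift are fully predictable from the unshifted samples by sinc interpolation, so the second exposure adds no information.

```python
import numpy as np

fs = 50.0
t = np.arange(-1000, 1000) / fs
# Properly prefiltered "image": all content well below fs/2 = 25.
sig = lambda u: np.cos(2 * np.pi * 3.0 * u) + 0.5 * np.sin(2 * np.pi * 11.0 * u)
x = sig(t)

# Shift the "sensor" by a fraction of a sample pitch and resample.
shift = 0.37 / fs
x_shifted = sig(t + shift)

# Predict the shifted samples purely from the unshifted ones by
# sinc interpolation: they contain nothing that wasn't already captured.
pred = np.array([np.sum(x * np.sinc(fs * (ts - t)))
                 for ts in (t + shift)[900:1100]])
err = np.max(np.abs(pred - x_shifted[900:1100]))
assert err < 0.05   # only sinc-series truncation error remains
```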

-h
« Last Edit: June 06, 2012, 12:54:46 PM by hjulenissen »
Ray
« Reply #76 on: June 07, 2012, 09:42:48 AM »

I'm surprised that none of you very technically savvy people have commented on my experience of seeing very obvious color artifacts through an optical viewfinder when photographing a B&W line test chart from a fairly close distance.

A LiveView image on the camera's LCD screen is straight from the sensor, I believe, so the lack of an AA filter would accentuate any color aliasing visible on the LiveView screen when photographing the regular, artificial patterns of a test chart.

But where does this blaze of color come from which is very noticeable in an optical viewfinder, at a particular distance to a B&W line chart?
MatthewCromer
« Reply #77 on: June 07, 2012, 09:51:17 AM »

Ray,

Perhaps they are color artifacts from cone-cell stimulation in your eyes by a sharp, high-contrast, high-MTF stimulus.
Ray
« Reply #78 on: June 07, 2012, 09:56:24 AM »

Quote
Ray,

Perhaps they are color artifacts from cone-cell stimulation in your eyes by a sharp, high-contrast, high-MTF stimulus.

Or perhaps more likely, some of you guys have never photographed standard test charts.  Grin
BartvanderWolf
« Reply #79 on: June 07, 2012, 11:32:06 AM »

Quote
What you are describing cannot be improved using super-resolution, AFAIK, at least not with established, generic algorithms. When you apply proper lowpass filtering in front, the signal is already bandlimited. Shifting the sensor by a tiny amount does not record any new information, and therefore it is hard to see a significant gain from combining multiple images (besides SNR improvements).

Hi,

Although the posts about Super Resolution are a bit off-topic, the aliasing part is relevant for the D800E.

Maybe this little experiment will shed some (hopefully not false-color) light on the matter. I've taken 18 shots of my resolution target, each displaced horizontally by 15 micron, on my 1Ds3 with its 6.4 micron sensel pitch, at a distance that produced a magnification factor of 0.0324x (1:30.86). This should cover a horizontal offset of the projected image of some 8.75 micron, slightly more than the sensel pitch.

That gives a series of images with only horizontal sub-pixel sampling offsets, while the aliasing in each sub-pixel image is the same in both the horizontal and the vertical direction.
Here is the Super Resolution result from PhotoAcute:
and here one of the input frames, upsampled 2x with ImageMagick:

You can (hopefully) clearly see that the horizontal oversampling at the sub-pixel level made it possible to resolve the vertical spokes without aliasing artifacts, while the horizontal spokes, which mostly had aliasing as their guide to super-resolution (and no vertical sub-pixel offset), still show the false color/aliasing artifacts in their full 'glory', only larger and sharper.

The yellow circle marks the Nyquist frequency of the smaller originals, at 92 pixels, also resized 2x to a 184-pixel diameter. Where the original image didn't quite reach Nyquist, Super Resolution indeed increased the resolution of the original image(s) all the way to their original Nyquist frequency, and upsampled the original 2x in order to display the increased resolution. There are even some hints of aliasing that could be mistaken for detail where the orientation of the features happens to align with the sensel grid.

This hopefully demonstrates that it is the sub-pixel oversampling that leads to the added resolution, not the aliasing artifacts. The reason the PhotoAcute contact said that a non-OLPF-filtered image would help the Super Resolution results is the higher modulation near Nyquist, not that aliasing is helpful; it isn't (in their algorithm), as demonstrated.

When you look beyond the Wikipedia text and read some of the PDFs mentioned as references there, you'll see that sub-pixel sampling is a useful mechanism, whereas aliasing can only help to determine those spatial offsets in very unlikely setups. The aliasing artifacts themselves are not going to help boost the resolution, also because aliasing by definition lands above Nyquist.

Cheers,
Bart
« Last Edit: June 07, 2012, 11:51:04 AM by BartvanderWolf »