Author Topic: Lens outresolving sensor or vice versa - what's the real story?  (Read 5316 times)
AFairley
Sr. Member, Posts: 1127
« on: December 12, 2012, 10:47:48 AM »

On another forum, someone posted that a certain zoom lens "outresolved" the D800E sensor. Since I own both pieces of gear, I replied that I had a prime lens that clearly resolved more detail than the zoom at the same non-diffraction-limited aperture and the same focal length. So I questioned how the zoom could be said to outresolve the sensor, since the sensor was evidently capable of resolving more detail when a different lens was in front of it. The poster responded dismissively with resolution test graphs that apparently measure what the zoom is capable of, but did not answer my question.

Since there are members here who know whereof they speak, I ask the question again: how can a lens be said to outresolve a sensor if the sensor is capable of resolving more detail in a scene with a different lens of the same focal length in front of it? And more generally, what is behind this lens-vs-sensor resolution debate anyway?
Thanks!

Ellis Vener
Sr. Member, Posts: 1722
« Reply #1 on: December 12, 2012, 10:53:41 AM »

Quote from: AFairley
How can a lens be said to outresolve a sensor if the sensor is capable of resolving more detail in a scene with a different lens of the same focal length in front of it?

Obviously, your dismissive friend was talking twaddle. You can look at numbers and graphs all you want, but it's photons hitting sensors that count.

Ellis Vener
http://www.ellisvener.com
Creating photographs for advertising, corporate and industrial clients since 1984.
BJL
Sr. Member, Posts: 5120
« Reply #2 on: December 12, 2012, 12:07:49 PM »

Neither lenses nor sensors have a single resolution level, such as resolving everything coarser than 100 line pairs per mm and nothing finer. Instead, there is a gradual decline in contrast as image features get smaller, often measured by MTF graphs, which roughly show the fraction of contrast that the lens or sensor delivers for image details at various spatial frequencies (lp/mm). The overall performance of a lens-sensor combination, as measured by MTF, is the product of the lens MTF and the sensor MTF.

So even if one lens has more resolution than a sensor, in the sense of a higher MTF curve, another lens with an even higher MTF can still improve the combined lens-sensor MTF, and so improve the overall resolution.
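[Editor's illustration] The product rule above can be sketched numerically. The MTF values below are made-up example curves, not measurements of any real lens or sensor; the point is only that multiplying by a better lens curve raises the system curve at every frequency:

```python
import numpy as np

# Hypothetical MTF curves (example values, not measured data).
freqs  = np.array([10, 20, 40, 80])           # lp/mm at the sensor plane
sensor = np.array([0.95, 0.85, 0.60, 0.25])   # sensor MTF
zoom   = np.array([0.90, 0.75, 0.45, 0.15])   # a decent zoom lens
prime  = np.array([0.97, 0.90, 0.70, 0.35])   # a sharper prime lens

# System MTF is the product of the component MTFs.
system_zoom  = sensor * zoom
system_prime = sensor * prime

for f, a, b in zip(freqs, system_zoom, system_prime):
    print(f"{f:3d} lp/mm: zoom+sensor {a:.2f}  prime+sensor {b:.2f}")
```

The prime raises the combined MTF at every frequency, which is why no single lens ever "uses up" a sensor.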
ErikKaffehr
Sr. Member, Posts: 7234
« Reply #3 on: December 12, 2012, 01:47:48 PM »

Hi,

I agree with Bill's take on this.

A better sensor will always give better results than a lesser one. I'd also add that it is my belief that a higher-resolution sensor responds better to both interpolation and sharpening than a lower-resolution one.

Best regards
Erik



Fine_Art
Sr. Member, Posts: 1054
« Reply #4 on: December 12, 2012, 02:03:33 PM »

It's also possible that a lens is weak, or just lacks contrast, at a certain spatial frequency that happens to be present in the image. Sometimes you see a drop in the MTF curve at one point.

Another possibility is a protective filter on that lens which is not on the other.
IWC Doppel
Jr. Member, Posts: 99
« Reply #5 on: December 15, 2012, 12:56:22 PM »

When we are talking lines per mm, I assume this is at the sensor or film plane?

hjulenissen
Sr. Member, Posts: 1666
« Reply #6 on: December 15, 2012, 01:20:26 PM »

It seems that the hands-on/practical types out there have devised many practical "rules of thumb". This is all well and good, as long as they are used in the proper context.

I am no expert on optics or quantum physics, but I am convinced that rules about the "diffraction limit" are just that: rules of thumb.

-h
ErikKaffehr
Sr. Member, Posts: 7234
« Reply #7 on: December 15, 2012, 03:30:47 PM »

Hi,

Not at all. A lens that is diffraction limited will not improve on stopping down. Technically, you could say that its MTF is dominated by diffraction and not by aberrations.

The term comes from astronomy: with a diffraction-limited lens you can discern the first Airy ring. In practice it works like this: stopping down reduces most of the common aberrations, but it also increases diffraction. Once aberrations are reduced to a certain level, diffraction starts to dominate. The better the lens, the sooner it becomes diffraction limited.
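[Editor's illustration] This trade-off can be sketched with a toy model: aberration blur shrinks as you stop down, while diffraction blur (the Airy diameter) grows linearly with the f-number. The aberration term below is hypothetical, not taken from any real lens; only the shape of the trade-off is the point:

```python
import numpy as np

wavelength_um = 0.55                                # green light
f_numbers = np.array([2.8, 4, 5.6, 8, 11, 16, 22])

diffraction = 2.44 * wavelength_um * f_numbers      # Airy diameter, microns
aberration  = 40.0 / f_numbers                      # hypothetical residual aberration blur

# Combine the two blur contributions in quadrature.
total = np.sqrt(diffraction**2 + aberration**2)
best = f_numbers[np.argmin(total)]
print(f"sharpest aperture in this model: f/{best}")
```

Beyond that sweet spot the lens is diffraction limited: stopping down further only makes things worse.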

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/68-effects-of-diffraction

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/49-dof-in-digital-pictures?start=1

Best regards
Erik



BartvanderWolf
Sr. Member, Posts: 3404
« Reply #8 on: December 15, 2012, 07:01:37 PM »

Quote from: hjulenissen
It seems that the hands-on/practical types out there have devised many practical "rules of thumb". This is all well and good, as long as they are used in the proper context.

I am no expert on optics or quantum physics, but I am convinced that rules about the "diffraction limit" are just that: rules of thumb.

Hi h,

The diffraction limit is not a rule of thumb, it's an absolute physical certainty (for a given wavelength and usually a circular aperture).

The loose use of its definition, when combined with other unknowns and/or Point Spread Functions (PSFs), can become inaccurate if relevant factors are left out of the equation for the full system impact, and thus lead to wrong conclusions. One common mistake is to disregard the effect on different spatial frequencies: the highest spatial frequencies (the finest micro-contrast detail) suffer first, the lower spatial frequencies last several f-stops more before they lose all modulation, and the lowest spatial frequencies, which carry hardly any detail information, suffer least.
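[Editor's illustration] This frequency dependence can be checked against the standard diffraction-limited MTF formula for a circular aperture; the frequencies and f-stops below are just example values:

```python
import numpy as np

def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=550e-6):
    """Diffraction-limited MTF of an ideal lens with a circular aperture."""
    cutoff = 1.0 / (wavelength_mm * f_number)       # cutoff frequency, lp/mm
    x = np.clip(freq_lpmm / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

# High frequencies lose contrast first; low frequencies hold up much longer.
for N in (5.6, 11, 22):
    mtfs = [diffraction_mtf(f, N) for f in (10, 50, 100)]
    print(f"f/{N}: " + "  ".join(f"{m:.2f}" for m in mtfs))
```

At f/22 the 100 lp/mm detail is gone entirely (it is beyond the cutoff), while 10 lp/mm still keeps most of its modulation.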

Cheers,
Bart
Fine_Art
Sr. Member, Posts: 1054
« Reply #9 on: December 15, 2012, 08:30:19 PM »

There are beyond-the-diffraction-limit imaging systems in labs now. No, they won't be commercial products anytime soon.

Fine_Art
Sr. Member, Posts: 1054
« Reply #10 on: December 15, 2012, 08:37:19 PM »

http://nanotechwire.com/news.asp?nid=9315
ErikKaffehr
Sr. Member, Posts: 7234
« Reply #11 on: December 16, 2012, 01:00:40 AM »

Hi,

I think you are mixing things up a little. I have never seen an MTF vs. frequency plot that is non-monotonic (sorry for the term!). What I'd suggest is that you are seeing the MTF vary over the image. One common reason is that the field of focus is often wavy, a side effect of correcting field curvature.

Best regards
Erik


Quote from: Fine_Art
It's also possible that a lens is weak, or just lacks contrast, at a certain spatial frequency that happens to be present in the image. Sometimes you see a drop in the MTF curve at one point.

Another possibility is a protective filter on that lens which is not on the other.

ErikKaffehr
Sr. Member, Posts: 7234
« Reply #12 on: December 16, 2012, 01:02:50 AM »

Yes.

Erik

Quote from: IWC Doppel
When we are talking lines per mm, I assume this is at the sensor or film plane?



Fine_Art
Sr. Member, Posts: 1054
« Reply #13 on: December 16, 2012, 02:21:27 AM »

Quote from: ErikKaffehr
I think you are mixing things up a little. I have never seen an MTF vs. frequency plot that is non-monotonic (sorry for the term!). What I'd suggest is that you are seeing the MTF vary over the image. One common reason is that the field of focus is often wavy, a side effect of correcting field curvature.



I'm not sure what you mean by non-monotonic. Probably a translation difference. Your English is excellent, by the way.

Have a look at the 4th diagram (diffraction MTF), for example:
http://www.optenso.com/optix/ex_diffana.html

The S shape on the red line is typical of mirror lenses, for example. I've seen other MTF charts with serious sudden drops at higher frequencies.
ErikKaffehr
Sr. Member, Posts: 7234
« Reply #14 on: December 16, 2012, 02:29:12 AM »

Hi,

I'm pretty sure you got the non-monotonic part right ;-)

Thanks for the example! Unfortunately I cannot read the legend, but I take it at face value.

Best regards
Erik



IWC Doppel
Jr. Member, Posts: 99
« Reply #15 on: December 16, 2012, 04:20:09 AM »

From Erwin Puts's technical and practical analysis of 35mm film, digital sensors, and lenses, he suggests about 35 MP is the right target for 35mm full-frame digital. As we know, just adding pixels is not always the answer; there is a practical quantity-versus-quality trade-off today.

I think the lens will more often be the limiting factor in 2-3 years; if you look at where Leica is going with their latest lenses, this makes sense.

Trouble is, I love their older glass.
JohnCox123
Guest
« Reply #16 on: December 16, 2012, 07:30:32 AM »

From what I recall from building pinhole cameras, the red and infrared parts of the spectrum were more prone to diffraction due to their longer wavelengths, so I would always use an IR (Red 99) filter over the front of my camera to help limit diffraction. I have no idea if this works, or whether the red filters are less diffraction-prone than the green or blue ones in the Bayer array. If that is the case, it would be interesting to see whether a pseudo-random (X-Trans) array is more prone to diffraction than a regular (Bayer) array.
thierrylegros396
Sr. Member, Posts: 636
« Reply #17 on: December 16, 2012, 09:22:15 AM »

Quote from: BJL
Neither lenses nor sensors have a single resolution level, such as resolving everything coarser than 100 line pairs per mm and nothing finer. Instead, there is a gradual decline in contrast as image features get smaller, often measured by MTF graphs, which roughly show the fraction of contrast that the lens or sensor delivers for image details at various spatial frequencies (lp/mm). The overall performance of a lens-sensor combination, as measured by MTF, is the product of the lens MTF and the sensor MTF.

So even if one lens has more resolution than a sensor, in the sense of a higher MTF curve, another lens with an even higher MTF can still improve the combined lens-sensor MTF, and so improve the overall resolution.

From my recent tests, it appears that the RX100 lens is very good in the center, probably matching the sensor, but really very poor at all corners; so poor that the G15, with its "only 12 MP" sensor, is better at the corners by a fairly large margin!

And the lens on my RX100 has been exchanged; the original one was no better!

So yes, the G series is well known for its very good lenses, but I didn't imagine such a big difference.
hjulenissen
Sr. Member, Posts: 1666
« Reply #18 on: December 16, 2012, 01:14:11 PM »

Quote from: ErikKaffehr
Not at all. A lens that is diffraction limited will not improve on stopping down. Technically, you could say that its MTF is dominated by diffraction and not by aberrations.
The question posed in the first post is (as I understand it): can the "diffraction limit" be used to perfectly predict at what pixel pitch/aperture combination the final resolution is limited only by diffraction (i.e. increasing the number of megapixels will give exactly zero benefit)?

http://coinimaging.com/blog1/?p=139
Quote

The distance between two Airy discs at which they are still considered to be resolved separately is the radius of the disc, also called the Rayleigh criterion. A smaller Airy disc means a smaller disc radius and a higher resolution. This distance is somewhat arbitrary, as there is still a small amount of contrast remaining in the space between the discs at the Rayleigh criterion. If the discs are moved any closer, the remaining contrast between the two objects will completely disappear. The point where all contrast is lost between the adjacent discs is called the Sparrow criterion and is the absolute limit of resolution.

The Rayleigh criterion is the most commonly used measure of resolution:

Airy disc radius = 1.22 * N * light wavelength (N = f-number; the wavelength is commonly taken as 546 or 550 nm, i.e. 0.550 um, a green wavelength)
In a classical-physics, linear, noiseless world, one would expect proper deconvolution to be theoretically able to resolve point sources whose Airy disks overlap to a very high degree (assuming that the blur kernel can be precisely estimated, that there is very little noise, etc.).

In practice this is essentially impossible because of noise and other errors, but that is a different story from "theoretically, you can never move beyond the diffraction limit, no matter what".
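[Editor's illustration] The Rayleigh-vs-Sparrow distinction can be sketched numerically. This approximates each Airy core with a Gaussian of sigma of roughly 0.42 * lambda * N (a common approximation; the exact profile uses a Bessel function) and simply checks whether the summed profile of two point sources still dips in brightness between the peaks:

```python
import numpy as np

lam_N = 1.0                 # lambda * N, arbitrary units
sigma = 0.42 * lam_N        # Gaussian approximation of the Airy core
x = np.linspace(-2.0, 2.0, 2001)

def has_dip(separation):
    """Is there a brightness dip midway between two point-source images?"""
    profile = (np.exp(-(x - separation / 2) ** 2 / (2 * sigma ** 2)) +
               np.exp(-(x + separation / 2) ** 2 / (2 * sigma ** 2)))
    return profile[len(x) // 2] < profile.max() - 1e-12

for name, d in [("Rayleigh (1.22 lambda N)", 1.22),
                ("Sparrow  (0.61 lambda N)", 0.61)]:
    print(f"{name}: dip between peaks = {has_dip(d * lam_N)}")
```

At the Rayleigh separation the dip survives; at the roughly-half Sparrow separation the combined profile is flat-topped, matching the Wikipedia description quoted below.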

Now, my physics understanding does not extend well into quantum physics, and even Huygens and Fresnel are but vague memories to me. So can your claims be supported somewhere in that territory?

See also these forum posts:
http://forums.dpreview.com/forums/post/40935896
http://forums.dpreview.com/forums/post/40992931 (Marianne Oelund)

And this wikipedia entry:
http://en.wikipedia.org/wiki/Sparrow%27s_resolution_limit
Quote
Rayleigh's resolution limit is reached when the two stars are separated by the radius of the Airy disk, but many astronomers say they can still distinguish the two stars even when they are closer than Rayleigh's resolution limit. Sparrow's Resolution Limit improves on this by saying that the ultimate resolution limit is reached when the combined image from the two stars no longer has a dip in brightness between them, but instead has a roughly constant brightness from the peak of one star's image to the other. But because of the extended image, it is still distinguishable from a single star.
Sparrow's resolution limit is about half Rayleigh's resolution limit. For example, for an eight-inch telescope, Rayleigh's resolution limit is 0.70 seconds of arc, but Sparrow's resolution limit is 0.35 seconds of arc.
Sparrow's resolution limit is also used for optical microscopes.

http://www.olympusmicro.com/primer/digitalimaging/deconvolution/deconresolution.html
Quote

It should also be understood that any resolution criterion is not an absolute indicator of resolution, but rather an arbitrary criterion that is useful for comparing different imaging conditions.
...
In some applications, such as localization of a moving object, resolution below the Rayleigh limit is possible. This highlights the fact that resolution is task-dependent and cannot be defined arbitrarily for all situations.

In addition, resolution also depends to a great extent on image contrast, or the ability to distinguish specimen-generated signals from the background.
-h
« Last Edit: December 16, 2012, 01:47:45 PM by hjulenissen »
BartvanderWolf
Sr. Member, Posts: 3404
« Reply #19 on: December 16, 2012, 01:46:14 PM »

Quote from: JohnCox123
From what I recall from building pinhole cameras, the red and infrared parts of the spectrum were more prone to diffraction due to their longer wavelengths, so I would always use an IR (Red 99) filter over the front of my camera to help limit diffraction.

Hi John,

Yes, that would help. The formula (not a rule of thumb, but a physical fact) for the diameter of the Airy 'disk' is approximately:
    2.43934 x lambda x N
where lambda is the wavelength and N is the f-number. The fractions of the total power contained within the first, second, and third dark rings (disk diameters) of the diffraction pattern are 83.8%, 91.0%, and 93.8% respectively, for the multipliers 2.43934, 4.46626, and 6.47663.
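[Editor's illustration] Plugging those multipliers into the formula, here for green light at f/8 (example values for lambda and N):

```python
wavelength_um = 0.55   # green light
N = 8                  # f-number

# (multiplier for the ring diameter, % of total energy enclosed)
rings = [(2.43934, 83.8), (4.46626, 91.0), (6.47663, 93.8)]
for k, (mult, energy) in enumerate(rings, start=1):
    d = mult * wavelength_um * N
    print(f"dark ring {k}: diameter {d:.1f} um, {energy}% of total energy enclosed")
```

So at f/8 the first dark ring of the Airy pattern is already about 10.7 microns across, larger than two D800E pixels.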

However, for resolution we also have to factor in the relative weight of the different wavelengths in the luminance of the signal. For the Human Visual System (HVS), resolution in Luminance matters much more than in Chrominance. Chrominance typically fluctuates much less than Luminance in real-world scenes; just check the channels of an image in the Lab or HSV/HSL colorspaces.

The common choice for a wavelength of 555 nanometres is related to the peak of luminous efficiency of the human eye. This also corresponds reasonably well with the common calculation for relative Luminance from Red, Green, and Blue channels in an RGB colorspace (with e.g. the ITU-R BT.709 primaries):
    Y = 0.2126 R + 0.7152 G + 0.0722 B
That weighting shows that Red is the second most important contributor to luminance, but only at a fraction of the Green contribution. It also depends, of course, on the colors of the subject we are photographing.
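[Editor's illustration] A minimal sketch of that weighting, using the Rec. 709 coefficients quoted above:

```python
def rec709_luminance(r, g, b):
    """Relative luminance from linear RGB, ITU-R BT.709 weighting."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(rec709_luminance(1.0, 0.0, 0.0))   # pure red: a modest contribution
print(rec709_luminance(0.0, 1.0, 0.0))   # pure green: dominates luminance
print(rec709_luminance(1.0, 1.0, 1.0))   # white: the weights sum to 1
```

Green carries over 70% of the luminance signal, which is why the choice of about 555 nm for diffraction calculations is a reasonable one.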

Quote from: JohnCox123
I have no idea if this works, or whether the red filters are less diffraction-prone than the green or blue ones in the Bayer array.

The diffraction that matters for photographic resolution originates in the lens we use. The microlenses and/or color filters of the Bayer CFA only concentrate or filter the total luminous flux (after lens diffraction) for each sensor element, with little further effect on resolution. An interesting observation is that the luminance resolution of the RGB channels after demosaicing is almost identical, because demosaicing relies largely on luminance to reconstruct the two missing channels' data.

Having said that, diffraction only represents an upper limit when the rest of the system is perfect. As soon as the lens is not, the combined MTF will never reach that level; it will always be a bit worse. However, when diffraction becomes the dominant factor, the combined MTF will come close to the theoretical limit. A very good lens will just get a bit closer.

Cheers,
Bart