Author Topic: Shorten the flange distance, then tilt micro-lens? why not just keep it longer  (Read 8721 times)
scooby70
Full Member

Posts: 214


« Reply #20 on: November 11, 2013, 08:40:23 AM »

Glen,

1. Use a Canon body.

2. Use a Canon body.

3. Use a Canon body.

Job done. ;)

There's also the issue of bulk and weight.

I have a 5D which I hardly ever use as I prefer using a CSC. The Sony A7 I've ordered will give me better ultimate IQ than I get from both my current CSC and 5D in a much more compact and lighter body and lens combo than my Canon gear.
ErikKaffehr
Sr. Member

Posts: 7403


« Reply #21 on: November 11, 2013, 02:02:08 PM »

I found out that the Alpha 99 I have also has offset micro-lenses. I guess that may help a bit with large-aperture lenses, too.

Best regards
Erik


Quote from: scooby70
There's also the issue of bulk and weight.

I have a 5D which I hardly ever use as I prefer using a CSC. The Sony A7 I've ordered will give me better ultimate IQ than I get from both my current CSC and 5D in a much more compact and lighter body and lens combo than my Canon gear.

Christoph C. Feldhaim
Sr. Member

Posts: 2509


There is no rule! No - wait ...


« Reply #22 on: November 11, 2013, 02:15:35 PM »

The main advantage of this short flange distance/exit pupil to sensor distance thing is the ability to use Biogon (symmetric) design wide-angles. Because of their symmetry, these lenses self-correct many aberrations (especially distortion) much better than Distagon (inverted telephoto) designs.

In times of lens-profile-based autocorrection of vignetting, CA, distortion and purple fringing in Raw developers, it can be argued whether this design is still direly needed; if anything, the opposite is true: symmetric short-focal-length designs cause a whole lot of problems on digital sensors. Some lens/digital back combos simply don't work because of this, even if you take correction shots for LCC profiles.

However, until this technology became available, a short exit pupil to sensor (film) distance made a LOT of sense in terms of image quality.
« Last Edit: November 11, 2013, 02:17:17 PM by Christoph C. Feldhaim »

BJL
Sr. Member

Posts: 5129


« Reply #23 on: November 11, 2013, 03:18:39 PM »

Quote from: Christoph C. Feldhaim
The main advantage of this short flange distance/exit pupil to sensor distance thing is the ability to use Biogon (symmetric) design wide angles.
These days, a short flange distance is not only for short-focal-length symmetric lenses: some of the best near-telecentric designs also have rear elements very close to the focal plane, despite having a high exit pupil, so they are not like the classic inverted telephoto (retro-focus) designs first developed for movie cameras and then for SLRs. For example, the Sony RX1 reportedly has its rear lens elements very close to the sensor, as did its ancestors from a decade ago, the Sony F707, F717 and F828; those old cameras definitely did not use offset micro-lenses.
hjulenissen
Sr. Member

Posts: 1678


« Reply #24 on: November 11, 2013, 03:24:51 PM »

Obviously, any dedicated mirrorless (MILC) lens _could_ include the equivalent of a converter bolted on. Thus, anything that is possible (optically) when you design a DSLR camera + DSLR lens is also possible when you design a mirrorless camera + mirrorless lens. What you gain is the possibility of designing lenses that could never have been used on a DSLR because of the mirror: lenses that include optical elements very close to the sensor.

It is safe to assume that having more freedom to design something is never negative. It seems that in this case, for certain designs, it allows some features that are positive for image quality, compactness etc. But if you take advantage of those features, you also get some drawbacks.

Why is it that film could accept wide angles of incoming light while digital cannot? Is this a fundamental property of CCD/CMOS sensors, or is it some trade-off (perhaps with micro-lenses/sensitivity)?

-h
« Last Edit: November 11, 2013, 03:37:38 PM by hjulenissen »
Christoph C. Feldhaim
Sr. Member

Posts: 2509


There is no rule! No - wait ...


« Reply #25 on: November 11, 2013, 03:34:05 PM »

Quote from: hjulenissen
Why is it that film could accept wide angles of incoming light while digital cannot? Is this a fundamental property of CCD/CMOS sensors, or is it some trade-off (perhaps with sensitivity)?



From http://www.luminous-landscape.com/tutorials/vignetting.shtml
Quote
The second issue is that the photo sites (individual pixels) on a sensor do not lay on the surface of the chip, but rather, in shallow wells. This means that the light hitting the ones toward the corners of the image area lose some photons, because these are blocked by the sides of the well. Less photons, less exposure. Less exposure of the corners vs the center = vignetting.
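The well-shading effect that quote describes can be put into rough numbers with a minimal geometric sketch. All dimensions here are hypothetical, not taken from any real sensor: a sidewall of depth d shadows a strip of width d·tan(θ) of the well floor when light arrives θ degrees off the sensor normal.

```python
import math

def well_illumination_fraction(angle_deg, well_width_um, well_depth_um):
    # A sidewall of depth d shadows a strip d*tan(theta) of the well floor
    # when light arrives theta degrees off the sensor normal.
    shadow_um = well_depth_um * math.tan(math.radians(angle_deg))
    return max(0.0, 1.0 - shadow_um / well_width_um)

# Hypothetical 6 um wide photosite sitting in a 2 um deep well:
for theta in (0, 10, 20, 30):
    print(theta, round(well_illumination_fraction(theta, 6.0, 2.0), 2))
```

For this toy geometry, the illuminated fraction falls from 1.0 on-axis to 0.94, 0.88 and 0.81 at 10, 20 and 30 degrees: less light in the corners versus the center, i.e. vignetting, exactly as the quoted tutorial says.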

EinstStein
Sr. Member

Posts: 281


« Reply #26 on: November 12, 2013, 10:48:22 AM »

It is correct that a lens designed for a longer flange distance can be mounted on a short-flange-distance body with a proper mounting adapter, if everything else is equal.
But it is no longer true if the sensor on the short-flange body is not compatible. Tilted or offset microlenses can be one of the causes.
If the body needs offset microlenses, the flange distance is too short. It can still be good if the lens is designed to take this advantage or to tolerate this disadvantage, but it will lose image quality with lenses that were designed for a plain, straight sensor.
It can even cause trouble for lenses designed for offset microlenses with less offset. Actually, a sensor with offset microlenses can be more picky than a straight sensor.
Any compensation carries the risk of overcompensating for lenses that do not need it.
« Last Edit: November 12, 2013, 10:56:04 AM by EinstStein »
Christoph C. Feldhaim
Sr. Member

Posts: 2509


There is no rule! No - wait ...


« Reply #27 on: November 12, 2013, 11:02:23 AM »

Quote from: BJL
These days, a short flange distance is not only for short-focal-length symmetric lenses: some of the best near-telecentric designs also have rear elements very close to the focal plane, despite having a high exit pupil, so they are not like the classic inverted telephoto (retro-focus) designs first developed for movie cameras and then for SLRs. For example, the Sony RX1 reportedly has its rear lens elements very close to the sensor, as did its ancestors from a decade ago, the Sony F707, F717 and F828; those old cameras definitely did not use offset micro-lenses.

Quote from: EinstStein
It is correct that a lens designed for a longer flange distance can be mounted on a short-flange-distance body with a proper mounting adapter, if everything else is equal.
But it is no longer true if the sensor on the short-flange body is not compatible. Tilted or offset microlenses can be one of the causes.
If the body needs offset microlenses, the flange distance is too short. It can still be good if the lens is designed to take this advantage or to tolerate this disadvantage, but it will lose image quality with lenses that were designed for a plain, straight sensor.
Any compensation carries the risk of overcompensating for lenses that do not need it.

Long story short:
A short exit pupil to sensor distance can cause many more problems on digital sensors than on film,
but it is definitely not obsolete, as the example of the Sony RX1 clearly shows.

And now for something completely different ....
http://www.youtube.com/watch?v=KDPTC-yAmgo
:D


scooby70
Full Member

Posts: 214


« Reply #28 on: November 12, 2013, 06:37:56 PM »

Is digital vs. film a fair comparison any more?

Forgive me if I'm on the wrong track, but did we look at film shots so very closely? Was film flat in the camera? Is it fair to complain if something in a digital camera is 1 micron out of alignment and we can only detect it when looking at 400% on screen?

I'm just asking :D
BJL
Sr. Member

Posts: 5129


« Reply #29 on: November 12, 2013, 07:16:17 PM »

Quote from: Christoph C. Feldhaim
Long story short:
A short exit pupil to sensor distance can cause many more problems on digital sensors than on film,
but it is definitely not obsolete, as the example of the Sony RX1 clearly shows.
One more time: "short exit pupil" is not the same as "short back focus", which means "rear lens elements close to the focal plane". A lens design can have a reasonably high exit pupil (a near-telecentric design) even though its rearmost lens element is very close to the focal plane, as illustrated by cameras like the Sony F707 from 2001, which had rear elements very close to the focal plane and no offset microlenses, yet had no major problems with "microlens vignetting" because the lens design had an adequately high exit pupil.

The RX1 lens has a short back focus, but is there any evidence that it has a low exit pupil? If I remember correctly, the lens did not look anything like a symmetrical design in the picture I saw.
« Last Edit: November 12, 2013, 07:36:14 PM by BJL »
EinstStein
Sr. Member

Posts: 281


« Reply #30 on: November 12, 2013, 08:09:34 PM »

Short exit pupil or short back focus alone does not reveal the basic problem. While geometrical optics is a good approximation, the more precise view is wave optics. Wave optics better describes the fact that the image is the integrated result of the light from every point at the "boundary" (Huygens' principle).
Here the boundary can be seen as the rearmost lens element. If the rearmost element is too close to the image plane, there will be severe ray angles, measured from the outermost edge of the lens to the opposite, farthest edge of the sensor. So the ray angle problem is really a function of the distance from the rearmost lens element to the sensor.
A lens design can reduce the effect if the amount of light from the far edge of the lens is weighted and reduced, so that the ray angle problem is suppressed, but this also means an inferior lens design: basically, the diameter of the rear element can be reduced because it is, practically speaking, redundant.

BJL
Sr. Member

Posts: 5129


« Reply #31 on: November 12, 2013, 08:27:37 PM »

Quote from: EinstStein
If the rearmost element is too close to the image plane, there will be severe ray angles, measured from the outermost edge of the lens to the opposite, farthest edge of the sensor. So the ray angle problem is really a function of the distance from the rearmost lens element to the sensor.
Not necessarily: you assume that some significant amount of light has to travel from one edge of the rear element to the opposite edge of the sensor, but the existence of lenses with the exit pupil well inside the lens and the chief rays striking the sensor nearly perpendicularly, despite the rear element being close to the sensor, shows that this is not true. With those lenses, light exiting near one edge of the lens goes almost entirely to that side of the sensor, and not only the chief ray but the whole light cone arriving near an edge of the sensor comes from the part of the rear element near that side.

The difference between wave optics and the geometrical optics approximation is rather unimportant to this topic, and certainly so at large enough apertures (small enough f-stops), where diffraction is a minor effect; diffraction is the main manifestation of the imperfection of the geometrical optics approximation. At small apertures, instead, the light cone (of geometrical optics) is narrow and close to the chief ray, so it is even more true that the entire light cone reaching a point near the edge of the sensor comes from a part of the rear element close to that same edge.

As far as I know, geometric optics and computational ray tracing are still the way that camera lens design is done, but if you have evidence to the contrary, please let me know.

P. S. There are some useful illustrations at http://www.edmundoptics.com/technical-resources-center/imaging/telecentricity-and-telecentric-lenses-in-machine-vision/
Image space telecentricity is what matters for sensors, and figure 3 makes the situation clearest.
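The narrowing of the light cone with f-number mentioned above can be sketched numerically. For an aberration-free lens the half-angle u of the focused cone satisfies tan(u) = 1/(2N); the 15-degree chief ray below is a hypothetical value for a point near the frame edge, not a measurement of any particular lens.

```python
import math

def half_cone_angle_deg(f_number):
    # The focused light cone subtends the exit pupil; its half-angle u
    # satisfies tan(u) = 1 / (2 * f_number) for an aberration-free lens.
    return math.degrees(math.atan(1.0 / (2.0 * f_number)))

# Hypothetical chief ray arriving 15 degrees off-normal near the frame edge:
chief_deg = 15.0
for n in (2.0, 8.0):
    u = half_cone_angle_deg(n)
    print(f"f/{n:g}: rays span {chief_deg - u:.1f} to {chief_deg + u:.1f} degrees")
```

At f/2 the half-cone is about 14 degrees, so rays arrive anywhere from roughly 1 to 29 degrees; at f/8 it shrinks to about 3.6 degrees, so the whole cone hugs the chief ray, which is the point made above.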
« Last Edit: November 12, 2013, 09:10:51 PM by BJL »
EinstStein
Sr. Member
****
Offline Offline

Posts: 281


« Reply #32 on: November 12, 2013, 09:26:29 PM »

Let's look at two examples.
1. Imagine you are standing in front of a 4 m x 4 m glass window to watch the moon. I bet you can't see any difference in clarity if you cover most of the glass and leave only a peephole the size of a quarter coin. You will not experience any ray angle problem.
2. Now imagine you are watching the moon through a high-quality, large telescope; you'll know every mm of the glass matters for the brightness and the clarity of the image. Now try to put that image on a picture.
How do you explain the difference between examples 1 and 2?
The 2nd example shows what matters for a high-quality optical system, while the 1st example is a low-quality optical system.


BartvanderWolf
Sr. Member

Posts: 3637


« Reply #33 on: November 13, 2013, 04:36:38 AM »

Quote from: BJL
Not necessarily: you assume that some significant amount of light has to travel from one edge of the rear element to the opposite edge of the sensor, but the existence of lenses with the exit pupil well inside the lens and the chief rays striking the sensor nearly perpendicularly, despite the rear element being close to the sensor, shows that this is not true. With those lenses, light exiting near one edge of the lens goes almost entirely to that side of the sensor, and not only the chief ray but the whole light cone arriving near an edge of the sensor comes from the part of the rear element near that side.

I fully agree with that. The exit pupil is what matters, and that determines the angle of incidence of the image-forming rays; the microlenses then reduce the angle of incidence even further, to almost perpendicular (to reduce mask shielding and tunneling/crossover effects). Rear element distance is not very relevant (although there may be some correlation with the lens design).

To give an idea of what we're talking about, the exit pupil (approx. 17mm diameter) of my TS-E 24mm is approx. 74mm away from the sensor plane, and the exit pupil of the TS-E 45mm (approx. 25mm diameter) is even approx. 95mm away from the sensor plane (it appears to be near the front of the lens). These will produce image rays that approach perpendicular incidence, and that angle will be hardly affected by the microlenses, regardless of their offset.

The slight radial micro-lens offset towards the center of the sensor array only helps to avoid focused light hitting more than one microlens and would potentially allow a different index of refraction and/or shape of the microlens material. Because of the spherical lens shape, the micro-lenses will redirect/condense the light coming from a variety of angles just fine, all becoming more perpendicular.

Cheers,
Bart
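The exit pupil distances quoted in this post translate directly into corner incidence angles. A quick check, taking the full-frame corner as about 21.6 mm off-axis (half the 43.3 mm diagonal) and the pupil distances as given above:

```python
import math

def chief_ray_angle_deg(image_height_mm, pupil_distance_mm):
    # Chief-ray angle from the sensor normal at a given image height,
    # set by the exit pupil distance (not by the rear element distance).
    return math.degrees(math.atan(image_height_mm / pupil_distance_mm))

corner_mm = 21.6  # full-frame corner, half of the 43.3 mm diagonal
print(round(chief_ray_angle_deg(corner_mm, 74.0), 1))  # TS-E 24mm figure above
print(round(chief_ray_angle_deg(corner_mm, 95.0), 1))  # TS-E 45mm figure above
```

This gives roughly 16 degrees for the TS-E 24mm and 13 degrees for the TS-E 45mm at the extreme corner, i.e. fairly close to perpendicular, consistent with the argument above.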
« Last Edit: November 13, 2013, 09:11:27 AM by BartvanderWolf »
BartvanderWolf
Sr. Member

Posts: 3637


« Reply #34 on: November 13, 2013, 04:40:32 AM »

Quote from: EinstStein
How do you explain the difference between examples 1 and 2?

Micro-lenses are not an optical image forming device, they are condensers that change the angle of already focused incident light to become more perpendicular.

Cheers,
Bart
Jim Kasson
Sr. Member

Posts: 825


« Reply #35 on: November 13, 2013, 10:05:13 AM »

Quote from: BartvanderWolf
Micro-lenses are not an optical image forming device, they are condensers that change the angle of already focused incident light to become more perpendicular.

Bart, this is clearly stated (as usual for you), obvious (now, to me), and something I'd never thought of that way before.  Thank you for this insight.  And thanks for all you do for this forum.

Jim

BartvanderWolf
Sr. Member

Posts: 3637


« Reply #36 on: November 13, 2013, 10:56:36 AM »

Quote from: Jim Kasson
Bart, this is clearly stated (as usual for you), obvious (now, to me), and something I'd never thought of that way before. Thank you for this insight. And thanks for all you do for this forum.

Hi Jim,

You're welcome. I'm just trying to demystify some things, and explain them in terms of practical use.

This link explains the use of offset (no tilt involved) micro-lenses, including a slight color-shift warning for truly telecentric lenses (constant magnification regardless of focus). Most lens designs are not truly telecentric, though.

It's just that we do not know how much of a difference there is between the design of the Sony lenses (and their exit pupil distance) and OEM lenses. So there could be some effect, but there is also such a color cast effect with MF sensors without micro-lenses, and it is handled by a Lens Cast Calibration applied during Raw conversion.

Cheers,
Bart
EinstStein
Sr. Member

Posts: 281


« Reply #37 on: November 13, 2013, 06:31:52 PM »

In digital signal processing terminology, the microlens is the sampler and integrator. It is also OK if you want to see only the integrator part and ignore the sampler; that may better match your view that it is "only the condenser".
Back to your DSP 101 again, if you still have it: check out the windowing effect of the sampler and integrator, and you should find the effect of the window functions. It matters a lot for the recovered signal (read: the final image).
Sorry for the boring engineering stuff.
« Last Edit: November 13, 2013, 06:34:00 PM by EinstStein »
hjulenissen
Sr. Member

Posts: 1678


« Reply #38 on: November 14, 2013, 12:51:05 AM »

Quote from: EinstStein
In digital signal processing terminology, the microlens is the sampler and integrator. It is also OK if you want to see only the integrator part and ignore the sampler; that may better match your view that it is "only the condenser".
Back to your DSP 101 again, if you still have it: check out the windowing effect of the sampler and integrator, and you should find the effect of the window functions. It matters a lot for the recovered signal (read: the final image).
Sorry for the boring engineering stuff.
I actually did "DSP 101", but I must admit that I don't get what you are trying to explain.

I see the sensel as a spatial sampler (turning continuous spatial data into a discrete train of samples). I see the combined motion blur / lens PSF / diffraction / microlens / OLPF / sensel area as a spatial pre-filter/integration step. Are we discussing the same thing?

-h
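The "sensel area as pre-filter" reading above can be made concrete with a minimal sketch. Assuming an idealized square sensel aperture with 100% fill factor and a hypothetical 6 µm pitch: integration over a box is a boxcar filter, and the Fourier transform of a box is a sinc, so the aperture alone passes 2/π ≈ 0.64 of the modulation at the Nyquist frequency.

```python
import math

def box_aperture_mtf(freq_cyc_per_mm, aperture_mm):
    # An ideal square sensel integrates light over a box of width w;
    # the Fourier transform of a box is a sinc, so MTF = |sinc(f * w)|.
    x = math.pi * freq_cyc_per_mm * aperture_mm
    return 1.0 if x == 0 else abs(math.sin(x) / x)

pitch_mm = 0.006                  # hypothetical 6 um pixel pitch, 100% fill
nyquist = 1.0 / (2.0 * pitch_mm)  # ~83.3 cycles/mm
print(round(box_aperture_mtf(nyquist, pitch_mm), 3))  # 2/pi ~ 0.637
```

This is the "window effect" in one number: the sampling aperture alone attenuates detail at Nyquist to about 64%, before the lens PSF, diffraction and OLPF contributions are multiplied in.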
hjulenissen
Sr. Member

Posts: 1678


« Reply #39 on: November 14, 2013, 12:55:32 AM »

Quote from: BartvanderWolf
Micro-lenses are not an optical image forming device, they are condensers that change the angle of already focused incident light to become more perpendicular.

Cheers,
Bart
Is this what you are saying?:


Possibly irrelevant to this discussion:
Would you not say that the multiple sensels reading the signal from the micro-lens array in a plenoptic camera, or in the Canon 70D, are part of an "optical image forming device"?

"The microlens array from a plenoptic camera is placed in front of a 20 pence coin to show scale, and the effect of repetitions that the array gives inside a camera.  Each microlens is 135microns in diameter, with a 0.5mm focal length."
« Last Edit: November 14, 2013, 01:03:57 AM by hjulenissen »