Author Topic: Newbe question 35mm vs MF and dof  (Read 21520 times)
Nick Rains
« Reply #120 on: January 05, 2011, 03:26:12 PM »

In other words ... are you saying that David's DoF is simply the "maximum possible" DoF given a specific capture?

No, the minimum possible.

Nick Rains
Australian Landscape Photographer
www.nickrains.com
iPad Publishing
www.photique.com.au
jeremypayne
« Reply #121 on: January 05, 2011, 03:52:39 PM »

No, the minimum possible.
Ah ... yes ... that's what I meant to say ...  Roll Eyes
ErikKaffehr
« Reply #122 on: January 05, 2011, 05:17:35 PM »

Hi,

The problem with DoF is that we really don't know viewing distance and print size at shooting time. You don't say to the customer/buyer: sorry, the 65 MPixel image was intended for a maximum print size of 8x10"!

The problem with DoF calculations is that they may lead to everything being unsharp. My experiments indicate that loss of sharpness is clearly visible at actual pixels with a CoC of 6 microns on a 6-micron sensor. Whether it would be visible in an A2 print is another question; probably not.

Best regards
Erik


That's what I mentioned as resolution, but not DOF. The calculation of DOF requires a COC. COC depends on (angular) resolution, which involves output magnification and a viewing distance. Different magnification/viewing distance changes DOF. Even after capture, the COC remains a variable, so DOF cannot be a fixed quantity.

I also think that's what he was thinking of; however, DOF is a limiter of resolution, but then so is diffraction. Resolution, or rather MTF, plays a role, but there is no such thing as an intrinsic DOF (one that is supposedly unaffected by magnification). DOF requires a COC in order to calculate it.

Sorry,
Bart


Nick Rains
« Reply #123 on: January 05, 2011, 08:42:50 PM »

Hi,

The problem with DoF is that we really don't know viewing distance and print size at shooting time. You don't say to the customer/buyer: sorry, the 65 MPixel image was intended for a maximum print size of 8x10"!


This is true, and that's why it's a good idea to have a handle on diffraction errors, which obviously modify the smallest CoC that a sensor can resolve. I'd rather have a sharp print with less DOF than a less sharp one with more DOF, but, like you say, this will only really show up in an A2 or bigger print from a high-quality MFDB.

You just do the best you can with the gear you have!
David Klepacki
« Reply #124 on: January 06, 2011, 12:23:41 AM »

The problem with DoF is that we really don't know viewing distance and print size at shooting time. You don't say to the customer/buyer: sorry, the 65 MPixel image was intended for a maximum print size of 8x10"!
What about a customer who says, "Hi, I am a fine art reproduction photographer.  What is the highest resolution theoretically possible from your 65MP back?  Assuming I am using an adequate lens and given that my reproduction printer is capable of 400 dpi maximum, how large would I be able to print and still have my printer dots able to resolve the smallest resolvable features that this 65MP back can theoretically provide?"

The problem with DoF calculations is that they may lead to everything being unsharp. My experiments indicate that loss of sharpness is clearly visible at actual pixels with a CoC of 6 microns on a 6-micron sensor. Whether it would be visible in an A2 print is another question; probably not.
It is physically impossible to resolve a feature smaller than the Nyquist limit will allow.  For a Bayer sensor having a pixel size of 6 microns, this means any feature smaller than about 12 microns is not practically resolvable.  

If you really want to have more accurate values of CoC specific to your camera and lenses (and any raw conversion process), you can empirically determine them by shooting something with measurable length and noting the near-far points of your "acceptable sharpness", and then inverting the DOF equations to compute effective CoCs.
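That inversion can be sketched in a few lines of Python using the standard thin-lens DoF approximations (near ≈ sH/(H+s), far ≈ sH/(H−s), with H = f²/(Nc)). The near/far limits below are hypothetical values chosen only for illustration; they are not measurements from this thread:

```python
def effective_coc_mm(f_mm, n, s_mm, near_mm, far_mm):
    """Invert the approximate thin-lens DoF equations to recover an
    effective CoC from observed near/far limits of acceptable sharpness.
    Returns one estimate from the near limit and one from the far limit."""
    c_from_near = f_mm**2 * (s_mm - near_mm) / (n * near_mm * s_mm)
    c_from_far = f_mm**2 * (far_mm - s_mm) / (n * far_mm * s_mm)
    return c_from_near, c_from_far

# Hypothetical measurement: 150 mm lens at f/8 focused at 3 m, with
# acceptable sharpness judged to run from 2.981 m to 3.019 m.
c_near, c_far = effective_coc_mm(150, 8, 3000, 2981, 3019)
print(round(c_near * 1000, 1), round(c_far * 1000, 1))  # effective CoC in microns
```

With these invented limits the recovered CoC comes out near 6 microns, consistent with the pixel-pitch-sized CoC discussed elsewhere in the thread.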
« Last Edit: January 06, 2011, 12:34:52 AM by David Klepacki »
ErikKaffehr
« Reply #125 on: January 06, 2011, 01:07:04 AM »

Hi,

I made a quite careful experiment on this, and the loss of sharpness is quite visible.

The experiment was done by exactly focusing a test target and then moving the camera to induce larger and larger CoCs. So the only difference between the images is that the camera was moved backward a few centimeters. Distance 3 meters, focal length 150 mm, sensor pitch about six microns.

I'd suggest the problem is that you are thinking in terms of resolution. This is more about MTF. The MTF of the sensor is unaffected by moving the camera, but the MTF of the lens is reduced. So we don't violate the Nyquist limit; we just get a lower MTF at Nyquist.

Best regards
Erik

David Klepacki
« Reply #126 on: January 06, 2011, 01:14:40 AM »

Erik,

The empirical method I gave above for determining effective CoC values will take all MTF effects into account as well. The only thing you have to do is decide what is acceptably sharp to you.
ErikKaffehr
« Reply #127 on: January 06, 2011, 01:39:05 AM »

David,

Can you then explain the differences visible in my previous post? As far as I understand, you are saying that the images in column 2, having a CoC of 1× the pixel pitch, would be indistinguishable from column 1?


If you check out the Canon 135/2 L test on Photozone you can see that the lens is affected by diffraction at f/5.6 (on axis). Pixel pitch on the Canon 5D2 is 6.4 microns, and the Airy disk diameter at f/5.6 is 7.5 microns.

I also checked my DoF calculations against DoFMaster; it gives +/-2 cm at 6 microns and f/8. My offset was 17 mm at f/8 and 3 meters.
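Both figures can be reproduced from the standard approximations (Airy disk diameter d ≈ 2.44·λ·N at λ ≈ 550 nm, and one-sided DoF ≈ N·c·s²/f² for subjects well inside the hyperfocal distance). This is a sketch of that arithmetic, not Erik's actual calculation:

```python
def airy_diameter_um(n, wavelength_nm=550):
    # Diameter to the first zero of the Airy pattern: d = 2.44 * lambda * N
    return 2.44 * wavelength_nm * 1e-3 * n

def one_sided_dof_mm(f_mm, n, s_mm, c_mm):
    # Approximate one-sided DoF, valid when s is much less than the
    # hyperfocal distance: N * c * s^2 / f^2
    return n * c_mm * s_mm**2 / f_mm**2

print(round(airy_diameter_um(5.6), 1))               # ~7.5 microns at f/5.6
print(round(one_sided_dof_mm(150, 8, 3000, 0.006)))  # ~19 mm, i.e. about +/-2 cm
```

The one-sided DoF of roughly 19 mm agrees with the +/-2 cm that DoFMaster reports for a 6-micron CoC at f/8 and 3 meters.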

Best regards
Erik

« Last Edit: January 06, 2011, 01:48:44 AM by ErikKaffehr »

HCHeyerdahl
« Reply #128 on: January 06, 2011, 04:38:10 AM »

Actually, yes there is a simple answer. The S2 has a sensor that is 30mm x 45mm in size. This is 1.25x longer in each linear dimension than full frame 35mm digital.

This means that to get a focal length that shows an equivalent field of view, you will need to use a lens that is 1.25 times longer than its 35mm equivalent. So the proper comparison lens to a 50mm prime would be a 62.5mm lens. So it's not exactly "apples-to-apples" to compare the 50 and the 70.

If you match focal lengths based on the 1.25x rule, shoot the exact same picture with the same settings, and make similar-sized prints (within the resolution limits of the smaller camera), then the Leica will have 1.25x less depth of field. Aperture stops run on a factor of 1.4x, so the rough difference between the two formats will be just under one aperture stop of depth of field. However, this only holds true when you are well inside the hyperfocal distance. If you shoot a subject at or near the hyperfocal distance, the DOF of the Leica will start to be noticeably less than that of the 35mm shot, since the hyperfocal distances for the Leica and the 35mm are not the same (i.e. the 35mm will reach its hyperfocal distance sooner than the Leica).
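The 1.25x bookkeeping can be sketched with the usual hyperfocal formula H = f²/(N·c) + f, scaling the conventional 0.030 mm full-frame CoC by the format factor. These are illustrative values, not figures from the thread:

```python
def hyperfocal_mm(f_mm, n, c_mm):
    # H = f^2 / (N * c) + f
    return f_mm**2 / (n * c_mm) + f_mm

crop = 1.25
# Equivalent fields of view: 50 mm on 35mm FF vs 62.5 mm on the 30x45 mm
# sensor. For same-size prints the acceptable CoC scales with the format.
h_ff = hyperfocal_mm(50, 8, 0.030)
h_mf = hyperfocal_mm(50 * crop, 8, 0.030 * crop)
print(round(h_ff / 1000, 1), round(h_mf / 1000, 1))  # ~10.5 m vs ~13.1 m
```

The larger format's hyperfocal distance is about 1.25x farther at the same f-stop, which is why the 35mm camera "hits" hyperfocal sooner, as described above.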

This article does a good job of summarizing the differences, just use a factor of 1.25x instead of the 1.6x used to compare FF and APS-C.

http://www.bobatkins.com/photography/technical/digitaldof.html

Thanks! That and the link answer my question.  Smiley

Christopher
David Klepacki
« Reply #129 on: January 06, 2011, 11:58:03 PM »

I made a quite careful experiment on this, and the loss of sharpness is quite visible.

The experiment was done by exactly focusing a test target and then moving the camera to induce larger and larger CoCs. So the only difference between the images is that the camera was moved backward a few centimeters. Distance 3 meters, focal length 150 mm, sensor pitch about six microns.

I'd suggest the problem is that you are thinking in terms of resolution. This is more about MTF. The MTF of the sensor is unaffected by moving the camera, but the MTF of the lens is reduced. So we don't violate the Nyquist limit; we just get a lower MTF at Nyquist.

Hi Erik,

Offhand, your CoC values are a little confusing to me.  A value of zero for CoC is physically impossible.  It would mean that a mathematical point actually exists in nature and that you have found a way to measure it.

Nevertheless, I believe I understand what you did.  It looks like you have taken an image of a dollar bill with as close to perfect focus as possible.  Then, you moved the camera backward by some millimeters in order to move the plane of sharpest focus slightly in front of the dollar bill, and wish to know why it is no longer in "acceptably" sharp focus.  The dollar bill should still be within the far limit of your depth of field, and therefore it should still appear "acceptably" sharp, which is not what you are seeing.

There are many factors affecting the underlying CoC of a given camera system that can drastically change the DOF. Some lenses can focus visible wavelengths better than others, and even lenses of the same focal length can have different Airy disk sizes at the same aperture, so you cannot always trust generic values found in tables. In addition, some cameras have additional fixed elements in the optical path, most notably an AA filter. AA filters vary widely from camera to camera and also affect the CoC. And, for cameras with Bayer sensors, there is blur introduced from having to interpolate the majority of the image's pixels, since only one-third of the full color image is actually captured. None of these things are taken into account by any online DOF calculator.
Nick Rains
« Reply #130 on: January 07, 2011, 12:32:24 AM »

And, for cameras with Bayer sensors, there is blur introduced from having to interpolate the majority of the image's pixels, since only one-third of the full color image is actually captured. None of these things are taken into account by any online DOF calculator.


One third of the colour information is captured (in a manner of speaking), but 100% of the luminance values are captured, and this is where the resolution lies. The pixels or sensels are not interpolated, only the colour information shared between the pixels, and that contains very little 'detail'. Think of the difference between the L* channel and the a* and b* channels in Lab mode.

The AA filter obviously adds blur, but the Bayer array? If you took the coloured filters off the sensor, or used a specialist B+W sensor, you'd have the same resolution, would you not?
ErikKaffehr
« Reply #131 on: January 07, 2011, 12:32:46 AM »

Hi,

I calculated how much the camera needed to be moved from "perfect focus" in order to produce CoCs of different sizes. So what I call CoC is the diameter of the cone of light leaving the lens at the focal plane. Obviously this CoC is not based on a real lens; a real lens cannot render a distant point as a point, only as a disk.

Anyway, I didn't move the camera until I saw a visible difference; I calculated how much the camera needed to be moved to achieve a certain CoC (ignoring lens aberrations and diffraction). CoC is not a physical parameter; it's just a number used in DoF calculations.

The reason I was doing these tests was that one user of the Pentax 645D had issues with the camera having perfect focus at infinity but lacking ultimate sharpness on objects 100-200 meters away using a 150 mm lens at f/9.5. So I wanted to find out how much a small defocus actually affects image quality.

Best regards
Erik

BartvanderWolf
« Reply #132 on: January 07, 2011, 05:57:29 AM »

One third of the colour information is captured (in a manner of speaking), but 100% of the luminance values are captured, and this is where the resolution lies. The pixels or sensels are not interpolated, only the colour information shared between the pixels, and that contains very little 'detail'. Think of the difference between the L* channel and the a* and b* channels in Lab mode.

The AA filter obviously adds blur, but the Bayer array? If you took the coloured filters off the sensor, or used a specialist B+W sensor, you'd have the same resolution, would you not?

Hi Nick,

Indeed, almost the same Luminance resolution (only a few percent loss) compared to a monochrome capture. Each Bayer CFA sensel records (some) luminance, but not a full spectrum. The missing spectral data are interpolated from (many) surrounding sensel positions, so ultimately there is high-quality luminance data, and slightly less accurate color data, at each RGB pixel position. The worst possible performance of a Bayer CFA can be expected when trying to resolve between blue and red features, but those spectrally opposite colors are not commonly found side by side in nature.

Cheers,
Bart
David Klepacki
« Reply #133 on: January 07, 2011, 11:09:34 AM »

One third of the colour information is captured (in a manner of speaking), but 100% of the luminance values are captured, and this is where the resolution lies. The pixels or sensels are not interpolated, only the colour information shared between the pixels, and that contains very little 'detail'. Think of the difference between the L* channel and the a* and b* channels in Lab mode.

The AA filter obviously adds blur, but the Bayer array? If you took the coloured filters off the sensor, or used a specialist B+W sensor, you'd have the same resolution, would you not?

Hi Nick,

Indeed, almost the same Luminance resolution (only a few percent loss) compared to a monochrome capture.

Both of you are completely wrong here.

First, of course it is not the Bayer sensor itself that introduces any blur, but rather the interpolation process used to estimate the missing image pixels. There can be large differences in the resulting image acuity due to the various estimation methods, similar to the wide variation of AA filters found in different cameras. If you really want to get into it here, we can start comparing algorithms, from simple bilinear interpolation to more advanced methods such as adaptive homogeneity or projection onto convex sets, which can show quite a range of blur from an identical raw capture. Basically, the source of blur in many of these estimation processes comes from the need to smooth local regions of captured pixel information in order to estimate the missing values more accurately (e.g., no sensor is free of noise).

Also, it is not true that 100% of the luminance values are captured, nor is it true that it is only within a few percent of a monochrome capture.  Rather, only 50% of the luminance values are actually captured in an image, and the remaining 50% must be estimated.  For example, in a 40MP digital back, only 20MP of luminance information is actually captured (green), and the remaining 20MP must be estimated.  The other 50% of the captured information is basically chrominance information (10MP each for red and blue), which also requires their missing values to be estimated (30MP each for red and blue) in order to have the complete picture with 40MP in each of red, green, and blue.
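The channel arithmetic above is simple enough to write down; this is just a sketch of the 2x2 RGGB bookkeeping being described:

```python
def bayer_counts_mp(total_mp):
    # RGGB pattern: half the sensels are green, a quarter each red and blue.
    green = total_mp / 2
    red = blue = total_mp / 4
    return green, red, blue

# A 40MP back captures 20MP of green and 10MP each of red and blue;
# the remaining values at each pixel position must be estimated.
g, r, b = bayer_counts_mp(40)
print(g, r, b)  # 20.0 10.0 10.0
```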

As for the actual sampling resolution of the green channel in a Bayer sensor, the difference is much greater than what Bart claims above. The sampling resolution of the green channel of a Bayer sensor (without any AA filter) is inversely proportional to twice the pixel diagonal spacing. In the case of 6-micron pixels, this amounts to a sampling resolution of approximately 58.9 lp/mm. On the other hand, the sampling resolution of a monochrome sensor (also without an AA filter) is inversely proportional to twice the pixel width. So, in the case of 6-micron pixels, this amounts to approximately 83.3 lp/mm.
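These two figures follow from the Nyquist rate: one line pair needs two samples, the monochrome grid samples at every pixel pitch, and same-color green sensels sit √2 pitches apart along their closest (diagonal) direction. A sketch of that calculation, not a measurement:

```python
import math

def nyquist_lp_per_mm(sample_pitch_um):
    # Nyquist limit: one line pair per two sample spacings
    return 1000.0 / (2.0 * sample_pitch_um)

pitch = 6.0  # microns
mono = nyquist_lp_per_mm(pitch)                  # every sensel sampled
green = nyquist_lp_per_mm(pitch * math.sqrt(2))  # green sensels, diagonal spacing
print(round(mono, 1), round(green, 1))  # ~83.3 and ~58.9 lp/mm
```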

So, I find the luminance sampling resolution of a Bayer sensor to be almost 30% less than that of a monochrome sensor. Unfortunately, Bart's claim of only a few percent difference is unfounded. The estimation of missing pixels varies widely by camera and/or software, and it is a very bold claim to say that your camera's or software's ability to estimate these missing pixels comes within a few percent of a monochrome sensor in all or even most situations.

In fact, the whole idea behind multi-shot backs and scanning backs (and Foveon X3 sensors) is to realize the significant extra resolution that a "monochrome" sampling rate can give you.
« Last Edit: January 07, 2011, 04:27:01 PM by David Klepacki »
BartvanderWolf
« Reply #134 on: January 08, 2011, 10:05:28 AM »

Both of you are completely wrong here.

Great, we're making progress: from the whole scientific world being wrong to only 2 persons.

Quote
First, of course it is not the Bayer sensor itself that introduces any blur, but rather the interpolation process used to estimate the missing image pixels.

So you are claiming that increasing the sampling interval for each color to every other sensel, instead of every sensel position, has no effect on resolution? So we're back at the whole industry being wrong now? It seems some evidence is finally due.

Quote
There can be large differences in the resulting image acuity due to various different estimation methods, similar to the wide variation of AA filters found in different cameras.  If you really want to get into it here, we can start comparing algorithms, from simple bilinear interpolation to more advanced methods such as adaptive homogeneity or projection onto convex sets, which can show quite a range of blur from an identical raw capture.

By all means, enlighten us.

Quote
Also, it is not true that 100% of the luminance values are captured, nor is it true that it is only within a few percent of a monochrome capture.

Luminance is captured at 100% of the sensel positions; bands of color are captured in line with the CFA arrangement. Of course only the part of the luminance that penetrates the CFA is captured and contributes to image forming; that's why more exposure is needed than for a monochrome capture without filters.
 
The earlier remarks/claims have to do with your claims about demosaicing, as in the beginning of your reaction: "the interpolation process used to estimate the missing image pixels". I'll throw in some empirical evidence about that, namely that luminance resolution is only impacted by a few percent by the demosaicing process:
http://www.xs4all.nl/~bvdwolf/main/foto/bayer/bayer_cfa.htm
Nothing fancy, it's just a simple page I threw together almost 7 years ago to prove some nonsense statements wrong. Who could have thought it would still be needed what seems like eons later. Oh well.

Cheers,
Bart
« Last Edit: January 08, 2011, 10:26:26 AM by BartvanderWolf »
David Klepacki
« Reply #135 on: January 08, 2011, 01:45:14 PM »

Great, we're making progress from the whole scientific world is wrong, to only 2 persons.

Look, your understanding of luminance and Bayer sensors is all wrong and is in direct contradiction to the rest of the world, scientific or otherwise.  You can choose to live in this ignorance, or you can choose to expand your knowledge of this area.

So you are claiming that increasing the sampling interval for each color to every other sensel, instead of every sensel position, has no effect on resolution? So we're back at the whole industry being wrong now? It seems some evidence is finally due.

By all means, enlighten us.

I never said any such thing. I am claiming nothing more than that the sampling interval affects resolution, and that the sampling resolution of luminance in a Bayer sensor is much less than the sampling resolution of a monochrome sensor of the same pixel size. You are the one claiming that a Bayer sensor has "almost the same Luminance resolution (only a few percent loss) compared to a monochrome capture". This is a direct quote from your previous post here.

And, if you want to be enlightened, try reading the original patent from Eastman Kodak (U.S. patent 3971065) in the words of Bayer himself.  It would seem that Eastman Kodak has represented a respectable portion of the photographic industry.  Don't you agree that maybe the U.S. Patent Office, Eastman Kodak and the rest of the photographic industry got it right, and that perhaps you, Bart van der Wolf, got it wrong?

Luminance is captured at 100% of the sensel positions, bands of color are captured in line with the CFA arrangement. Of course only that part of luminance that penetrates the CFA is captured and contributes to image forming, that's why more exposure is needed than monochrome capture without filters.

All bogus claims by you.  Luminance is NOT captured at 100% of the sensel positions as you say.  To prove you wrong, I cite the words of Bayer as found in the U.S. patent that I referenced above: 

under SUMMARY OF INVENTION, 2nd paragraph, lines 28-34,

"By arranging the luminance elements of the color image sensing array to occur at every other array position, a dominance of luminance elements is achieved in a pattern which has "

And, Bayer goes on to make explicit his claims about luminance and its relation to the green region in column 6, beginning with line 21,

"What is claimed is:
1. A color imaging device comprising an array of light-sensitive elements, which array includes at least (1) a first type of element sensitive to a spectral region corresponding to luminance, (2) a second type of element sensitive to one spectral region corresponding to chrominance, and (3) a third type of element sensitive to a different spectral region corresponding to chrominance, the three types of elements occurring in repeating patterns which are such that over at least a major portion of said array luminance-type elements occur at every other element position along both of two orthogonal directions of said array.

2. A device in accordance with claim 1 where in said luminance-type elements are sensitive in the green region of the spectrum, and the two types of chrominance elements are sensitive in the red and blue regions of the spectrum, respectively ... "

The earlier remarks/claims have to do with your claims about demosaicing, just like in the beginning of your reaction, "the interpolation process used to estimate the missing image pixels". I'll throw in some empirical evidence about that, namely that luminance resolution is only impacted by a few percent by the demosaicing process:
http://www.xs4all.nl/~bvdwolf/main/foto/bayer/bayer_cfa.htm
Nothing fancy, it's just a simple page I threw together almost 7 years ago, to proof some nonsense statements wrong. Who could have thought it would still be needed what seems like eons later. Oh well.

Again, nothing but bogus claims by you, presented as something scientific when they are not even close to being such. The existing scientific literature on the topic of demosaicing since the Bayer sensor was introduced in 1976 amounts to hundreds of papers in many different languages. Just for concreteness, I will cite one here for you:

"New Edge-Directed Interpolation", by Xin Li and Michael T. Orchard, IEEE Transactions on Image Processing, Vol. 10, No. 10, October 2001.

In this paper (and many, many others), you will find explicit details about the variation of image acuity resulting from different methods of estimating the missing image elements of a Bayer sensor. In the Concluding Remarks of the above paper, the authors support exactly what I have been saying here and state the following:

" We have studied two important applications of our new interpolation algorithm:  resolution enhancement of grayscale images and demosaicking of color CCD samples.  In both applications, new edge-directed interpolation demonstrates significant improvements over linear interpolation on visual quality of the interpolated images."
BartvanderWolf
« Reply #136 on: January 08, 2011, 07:24:12 PM »

All bogus claims by you.  Luminance is NOT captured at 100% of the sensel positions as you say.  To prove you wrong, I cite the words of Bayer as found in the U.S. patent that I referenced above:  

under SUMMARY OF INVENTION, 2nd paragraph, lines 28-34,

"By arranging the luminance elements of the color image sensing array to occur at every other array position, a dominance of luminance elements is achieved in a pattern which has "

And, Bayer goes on to make explicit his claims about luminance and its relation to the green region in column 6, beginning with line 21,

"What is claimed is: 1. A color imaging device comprising an array of light-sensitive elements, which array includes at least (1) a first
type of element sensitive to a spectral region corresponding to luminance, (2) a second type of element sensitive to one spectral region corresponding to chrominance, and (3) a third type of element sensitive to a different spectral region corresponding to chrominance, the three types of elements occurring in repeating patterns which are such that over at least a major portion of said array luminance-type elements occur at every other element position along both of two orthogonal directions of said array.

2. A device in accordance with claim 1 where in said luminance-type elements are sensitive in the green region of the spectrum, and the two types of chrominance elements are sensitive in the red and blue regions of the spectrum, respectively ... "

Are you trying to pull my leg, or do you really not understand that Bryce E. Bayer called the green-filtered sensels "luminance-type elements" because the human medium-wavelength-sensitive cones are the major (though not the only) contributor to luminance resolution?

In fact he explains in the patent:
Code:
Filters which are selectively transparent in the green region of the spectrum are preferably employed in producing
luminance-type elements, and filters which are selectively transparent in the red and blue spectral regions, respectively,
are preferably employed in producing chrominance-type elements. (The term "luminance" is herein used in a broad sense
to refer to the color vector which is the major contributor of luminance information. The term "chrominance" refers to those
color vectors other than the luminance color vectors which provide a basis for defining an image.)

Now, which part of "The term "luminance" is herein used in a broad sense ..." did you not understand? Or are you just quoting selectively to prop up an untenable case?

I'll spell it out for you, he is describing a weighting.

In fact he immediately continues (with one of the parts you conveniently left out):
Code:
In an important alternative for implementation of the invention, three interlaid patterns, (a green-, a red-, and a
blue-sensitive element pattern) are so arranged that green-sensitive elements (serving to detect luminance) occur at every
other array position, with red-sensitive elements alternating with such green-sensitive elements in alternate rows -- as in the
case for the presently preferred implementation. In the remaining element positions, however, blue-sensitive elements alternate
with red-sensitive elements to produce a luminance-dominated image sampling having a disproportion in the chrominance samples
favoring red over blue. With this arrangement, sampling rates for all three basic color vectors are adjusted respective of the
acuity of the human visual system. That is, blue detail, to which the human visual has least resolution, is sampled the least
frequently . . . green detail, to which the human visual system is most responsive, is sampled most frequently.

Why did you not mention the part above where he stresses: "... blue-sensitive elements alternate with red-sensitive elements to produce a luminance-dominated image sampling having a disproportion in the chrominance samples ..."?
Again, he is pointing out a weighting. But then you'll probably ask, what did he know in 1975 ...?

A piece of friendly advice: you should really try to read beyond summaries. That is, if you want to understand what is written.

And as for the October 2001 article (could you not come up with something more recent?) you have cited: do you really want me to point out more of your erroneous interpretations, apparently (again) based on a conclusion rather than an understanding of the real implications? I'm tempted, but what's the use; it would fall on deaf ears. You'll just say that everybody else ("the rest of the world, scientific or otherwise") is wrong, probably even the sources you quoted yourself.

Cheers,
Bart
« Last Edit: January 08, 2011, 09:34:16 PM by BartvanderWolf »
David Klepacki
« Reply #137 on: January 09, 2011, 01:36:27 AM »

Good grief, I did not think it was possible to misinterpret something so clearly articulated by Bayer in a legally worded document, but Bart, you still manage to do it.

All colors have some luminance, so of course technically there must exist some finite luminance at every location of the Bayer sensor, regardless of how small it may be.  However, Bayer clearly delineates between having dominant luminance elements (green) as well as having elements whose luminance values can be relatively negligible in real world images, which he refers to as chrominance elements (red and blue).  If Bayer really believed that luminance was being sampled uniformly at all element locations, he would not have any reason to go through the trouble of explicitly saying things like "arranging luminance elements to occur at every other position" or that his sensor contains color vectors "other than luminance".  

I believe that the source of your misunderstanding is that you confuse the "ubiquitous presence" of luminance in the image with what actually defines its luminance resolution.  In a Bayer sensor, the luminance of an image is sampled independently in three channels (red, green and blue), and the sampling rates are not the same.  The maximum sampling resolution of luminance is that of the green channel, since the green channel occupies 50% of the sensor area, whereas red and blue each occupy only 25%.  However, in no way can these sampling resolutions be combined to match that of a monochrome sensor, as you claim; they do not come close.

As a concrete example, consider the Phase One P45+ back, which has 6.8 micron pixels.  The maximum theoretical sampling resolution of luminance in the green channel is roughly 52 lp/mm, while that of the red and blue channels is just under 37 lp/mm.

Next, consider the Phase One Achromatic+ back, which is identical to the P45+, except that it does not have the Bayer CFA and so is monochrome.  The maximum theoretical sampling resolution of its luminance is roughly 73.5 lp/mm.

Now, your claim that a Bayer sensor has "almost the same Luminance resolution (only a few percent loss) compared to a monochrome capture" amounts to saying that the luminance resolution of the Phase One P45+ should be similar to that of the Phase One Achromatic+ back.  And the only way that can happen is if the missing 78MP of the P45+ color image can be interpolated from its 39MP of actual captured pixels with such precision as to transform its resolution from 37 / 52 / 37 lp/mm in R, G, B to within a few percent of 73.5 / 73.5 / 73.5 lp/mm in R, G, B, or about 71 lp/mm in each color channel.
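The figures above are plain Nyquist arithmetic. As a minimal sketch (my own illustration, not from the article or Phase One; it assumes the 6.8 µm pitch quoted above and treats the Bayer green channel as a diagonal quincunx lattice, so its effective pitch grows by √2):

```python
import math

PITCH_UM = 6.8  # P45+/Achromatic+ pixel pitch, in microns

def nyquist_lp_mm(sample_pitch_um: float) -> float:
    """Nyquist sampling limit (line pairs per mm) for a given sample pitch."""
    return 1.0 / (2.0 * sample_pitch_um / 1000.0)

# Monochrome back: every pixel is a luminance sample.
mono = nyquist_lp_mm(PITCH_UM)                   # ~73.5 lp/mm

# Bayer red/blue: one sample per 2x2 block, so the effective pitch doubles.
red_blue = nyquist_lp_mm(2 * PITCH_UM)           # ~36.8 lp/mm

# Bayer green: quincunx lattice; effective pitch is pitch * sqrt(2)
# along the diagonal.
green = nyquist_lp_mm(PITCH_UM * math.sqrt(2))   # ~52 lp/mm
```

Running this reproduces the roughly 73.5 / 52 / 37 lp/mm figures used in the comparison.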

I believe this claim of yours to be hogwash.  Furthermore, if your claim that Bayer sensors have nearly the same luminance resolution as monochrome sensors were true, then there would be no significant resolution advantage to the Achromatic+.  And yet Phase One does not seem to agree with you and claims that there in fact is, and I agree with Phase One.

There is even an article on Luminous Landscape that can be found here:
http://www.luminous-landscape.com/reviews/cameras/achromatic.shtml

In the above article, Mark Dubovoy and Dr. Claus Molgaard (Chief Technology Officer and VP of Research and Development at Phase One) present detailed evidence where they show that the Achromatic+ monochrome sensor clearly has significantly more resolution than that of the equivalent P45+ sensor that uses a Bayer CFA.

Bart, it is only you who holds beliefs about resolution that fly in the face of what everyone else believes.  Mark Dubovoy does not believe what you claim, nor does Dr. Claus Molgaard or Phase One, nor do I.  Please try to produce some evidence that we are all wrong.
« Last Edit: January 09, 2011, 10:53:06 AM by David Klepacki »
ErikKaffehr
Sr. Member
Posts: 7405

« Reply #138 on: January 09, 2011, 03:15:13 PM »

David,

The reference you give is not very clear on the topic. The principal author (Reichmann?) sees no significant difference. Mark Dubovoy is not that clear either; he says that the achromatic back gets 2/3 of the way to the P65+, and also that the P65+ is better than the P45. Dr. Claus mostly discusses IR sensitivity and also compares resolution, but I fail to see a great advantage in the samples. It is very hard to compare a monochrome sample with a color sample anyway. What parameters were used in the conversion?

May I suggest that, regardless of the Bayer patent, any developer of a raw converter can use the information from the sensor to best advantage: the green channel may be used for luminance, or all three channels.
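As a toy illustration of that point (my own sketch, assuming an RGGB mosaic layout; no shipping raw converter works this simply), the green channel can be filled in at every pixel by bilinear averaging before being used as a luminance estimate:

```python
import numpy as np

def interpolate_green(mosaic: np.ndarray) -> np.ndarray:
    """Bilinear fill of the green channel of an RGGB Bayer mosaic.

    In an RGGB layout, green samples sit where (row + col) is odd.
    At red/blue sites, green is estimated as the mean of the
    in-bounds 4-neighbour green samples.
    """
    h, w = mosaic.shape
    green = np.zeros((h, w), dtype=float)
    for r in range(h):
        for c in range(w):
            if (r + c) % 2 == 1:   # a real green sample
                green[r, c] = mosaic[r, c]
            else:                  # red or blue site: average the neighbours
                vals = [mosaic[rr, cc]
                        for rr, cc in ((r - 1, c), (r + 1, c),
                                       (r, c - 1), (r, c + 1))
                        if 0 <= rr < h and 0 <= cc < w]
                green[r, c] = sum(vals) / len(vals)
    return green
```

Real converters use far more sophisticated, edge-aware interpolation, and may draw on all three channels for the luminance estimate; the sketch only shows why the green plane can be reconstructed everywhere despite being sampled at half the sites.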

It's not obvious to me how lateral chromatic aberration would be handled on an achromatic chip.

Best regards
Erik


David Klepacki
Full Member
Posts: 185

« Reply #139 on: January 09, 2011, 07:26:11 PM »

Erik,

First, we cannot include the P65+ in any comparison with the Achromatic+, since it uses a different sensor with different-sized pixels.  It is the direct comparison of the P45+ and the Achromatic+ that proves wrong the claims Bart van der Wolf has made in this thread.

The LL article is pretty clear in its final conclusion regarding the resolution advantage of the Achromatic+.  Michael apparently did not have a lens that could show this advantage well.  When Mark Dubovoy later did his tests using a Rodenstock HR lens, he was able to show the resolution advantage clearly.  In addition, Dr. Claus Molgaard confirmed Mark's results with yet another lens of high resolving power.

The Phase One website page on the Achromatic+ back (http://www.phaseone.com/en/Digital-Backs/Achromatic/Achromatic-plus-Technologies.aspx) explicitly states its higher resolution and the need for good lenses to realize that higher resolution visually:

"... The lack of filters on the sensor also provides the advantage of higher resolution. To take advantage of the extra resolving power, high-resolution lenses are required ..."


The bottom line here is that Bart has made claims about the luminance resolution of Bayer sensors that are not supported by anyone in the photographic industry.  It is obviously true that Phase One does not agree with him, nor do Mark or I, who have no financial interest in Phase One.

