Author Topic: Bayer Interpolation and Re-sampling question  (Read 4710 times)
xpatUSA
Sr. Member
Posts: 298

« on: September 28, 2013, 09:37:56 AM »

Hello,

I've been a Foveon fan for some five years but have recently acquired a 12MP Micro Four Thirds camera. Having said that, this question is most certainly not "Foveon vs. Bayer: which is better?". It is more a matter of wondering whether a Bayer image is 'improved' in the areas of color accuracy and noise by down-sizing, similarly to the improvement gained in sharpness.

Background: I produce images for viewing on-screen and never print anything; images are invariably re-sampled smaller, unless cropped for illustrative purposes.

The actual question is: In theory, does re-sampling downward alleviate the so-called Bayer-itis, i.e. the uncertainty of correct colors, the poor color resolution, the occasional strange coloration near high-contrast edges, and whatever other mud is thrown at the Bayer algorithm by those not in love with it? (please note that I am not personally decrying Bayer here).

For example, please consider the effect of properly resizing a 4000x3000px image to 1000x750px - a reduction to 25% of the linear dimensions (one sixteenth of the pixels).
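For the noise part of the question, at least, a back-of-envelope sketch suggests what to expect. This is a toy example with invented numbers (a flat gray patch with Gaussian noise, and plain 4x4 box averaging standing in for a proper resampler), not a model of any real camera:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat gray patch with additive Gaussian noise (sigma = 10 levels).
img = 128 + rng.normal(0, 10, size=(400, 300))

# Resize to 25% linear size by averaging 4x4 blocks (a crude box-filter resample).
small = img.reshape(100, 4, 75, 4).mean(axis=(1, 3))

print(round(img.std(), 1))    # ~10.0
print(round(small.std(), 1))  # ~2.5: 16 pixels averaged, noise std down by ~sqrt(16) = 4
```

Real resampling filters weight the pixels unevenly, so the gain is somewhat less than the ideal sqrt(16), but the direction of the effect holds.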
« Last Edit: September 28, 2013, 09:47:20 AM by xpatUSA »

best regards,

Ted
hjulenissen
Sr. Member
Posts: 1678

« Reply #1 on: September 28, 2013, 10:17:11 AM »

The actual question is: In theory, does re-sampling downward alleviate the so-called Bayer-itis, i.e. the uncertainty of correct colors, the poor color resolution, the occasional strange coloration near high-contrast edges, and whatever other mud is thrown at the Bayer algorithm by those not in love with it? (please note that I am not personally decrying Bayer here).
I believe that the CFA is the problem, not the demosaic (although the latter is needed because of the former).

The CFA leads to aliasing. Aliasing can generally spread to any frequency (including DC). Thus, I would assume that the presence of a CFA leads to the possibility of theoretical errors, no matter how far you downsample, as long as the scene + lens + OLPF provides sufficient high-spatial-frequency color-difference detail.

I believe that practice shows this to be a minor problem. Interestingly, any camera, including Foveon or achromatic ones (or any sampling device in general), can produce aliasing if there is sufficient high-frequency content at or above fs/2 and there is not sufficient pre-filtering to suppress that information. Audio recorders usually have sufficient filtering so as, for all intents and purposes, to avoid such aliasing, while cameras do not. Perhaps we will get there with 50MP APS-C sensors (if lenses don't make giant leaps at the same time).
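The point about content at or above fs/2 can be illustrated with a toy example (made-up numbers: a 70 Hz tone sampled at 100 Hz). After sampling, the tone is sample-for-sample identical to a sign-flipped 30 Hz tone, so nothing downstream can tell them apart:

```python
import numpy as np

fs = 100.0            # sample rate; Nyquist is fs/2 = 50
f_in = 70.0           # input frequency above Nyquist
t = np.arange(100) / fs

sampled = np.sin(2 * np.pi * f_in * t)
alias = np.sin(2 * np.pi * (fs - f_in) * t)   # the 30 Hz fold of 70 Hz

# Identical sample values (up to sign): the sampler cannot distinguish them.
print(np.allclose(sampled, -alias))   # True
```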

-h
« Last Edit: September 28, 2013, 10:19:42 AM by hjulenissen »
Floyd Davidson
Full Member
Posts: 174

« Reply #2 on: September 28, 2013, 05:05:14 PM »

I believe that the CFA is the problem, not the demosaic (although the latter is needed because of the former).

The CFA leads to aliasing. Aliasing can generally spread to any frequency (including DC). Thus, I would assume that the presence of a CFA leads to the possibility of theoretical errors, no matter how far you downsample, as long as the scene + lens + OLPF provides sufficient high-spatial-frequency color-difference detail.

The Color Filter Array has nothing to do, directly at least, with aliasing.  Aliasing is the result of sampling.  The indirect effects of having a CFA present are that the sampling rate is therefore different for different colors, and that helps to make moire patterns more visible and more annoying.

Quote
I believe that practice shows this to be a minor problem. Interestingly, any camera, including Foveon or achromatic ones (or any sampling device in general), can produce aliasing if there is sufficient high-frequency content at or above fs/2 and there is not sufficient pre-filtering to suppress that information. Audio recorders usually have sufficient filtering so as, for all intents and purposes, to avoid such aliasing, while cameras do not. Perhaps we will get there with 50MP APS-C sensors (if lenses don't make giant leaps at the same time).

Oversampling is relatively easy for most audio applications, plus electronic filters are now very high quality, which makes audio applications relatively higher quality in that respect than are optical applications.  It is true that 50 MP images will provide a greater degree of oversampling than say 24MP sensors, but realistically it won't be until sensors get up there around 150 to 200MP that anti-aliasing filters will become totally meaningless.

For the OP, downsampling produces sharper edges with more abrupt transitions from one color to another, and the degree of the effect depends on the algorithm used to resample the image. Regardless of how it is done, there is inherently the effect of a low-pass filter that removes all spatial frequencies above 1/2 the new sampling rate. Single edges are sharper, but sequentially repeated fine detail is lost.
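That edge-versus-repeated-detail behavior can be sketched in one dimension (assumption: a plain 4:1 box average standing in for the resampling algorithm). A single step edge survives the downsample; a one-pixel on/off grating averages away to flat gray:

```python
import numpy as np

# A 1-D "image": one sharp step edge, then fine detail alternating every pixel.
row = np.concatenate([np.zeros(32), np.ones(32),   # single edge
                      np.tile([0.0, 1.0], 16)])    # grating at the Nyquist limit

small = row.reshape(-1, 4).mean(axis=1)   # 4:1 box-filter downsample

print(small[6:10])   # [0. 0. 1. 1.]: the single edge is preserved
print(small[16:20])  # [0.5 0.5 0.5 0.5]: the repeated fine detail is gone
```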

hjulenissen
Sr. Member
Posts: 1678

« Reply #3 on: September 28, 2013, 05:26:42 PM »

The Color Filter Array has nothing to do, directly at least, with aliasing.  Aliasing is the result of sampling.  The indirect effects of having a CFA present are that the sampling rate is therefore different for different colors, and that helps to make moire patterns more visible and more annoying.
Quote
If you have a perfectly pre-filtered achromatic sensor with zero aliasing and suddenly insert a CFA filter, you may have aliasing. The CFA filter in that case is the direct cause of aliasing.
Quote
Oversampling is relatively easy for most audio applications, plus electronic filters are now very high quality, which makes audio applications relatively higher quality in that respect than are optical applications.  It is true that 50 MP images will provide a greater degree of oversampling than say 24MP sensors, but realistically it won't be until sensors get up there around 150 to 200MP that anti-aliasing filters will become totally meaningless.
There will be a transition phase.

-h
Floyd Davidson
Full Member
Posts: 174

« Reply #4 on: September 28, 2013, 05:51:34 PM »

If you have a perfectly pre-filtered achromatic sensor with zero aliasing and suddenly insert a CFA filter, you may have aliasing. The CFA filter in that case is the direct cause of aliasing.

That reverses the cause and effect. If the filter is correct for any given sampling rate it will not be correct for any other sampling rate.  Addition of the CFA changes the sampling rate, and therefore the AA filter is no longer correct.  The direct cause is sampling, not filtering.  With or without the CFA, if it is "perfectly pre-filtered" there will be no aliasing.
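The "perfectly pre-filtered means no aliasing" point can be illustrated numerically. A sketch with invented numbers: a tone at 0.3 cycles/sample is perfectly legal at the full rate, but aliases under 2:1 decimation unless the low-pass filter is first re-matched to the new rate (an ideal FFT brick-wall stands in for a realizable filter here):

```python
import numpy as np

n = np.arange(320)
x = np.sin(2 * np.pi * 0.3 * n)    # 0.3 cy/sample: fine at the full rate

# Decimate 2:1 with the old (now mismatched) filtering: folds to 0.4 cy/sample.
bad = x[::2]

# Re-match the AA filter to the new rate first: ideal low-pass at 0.25 cy/sample.
X = np.fft.rfft(x)
X[np.fft.rfftfreq(len(x)) > 0.25] = 0.0
good = np.fft.irfft(X, len(x))[::2]

print(round(np.abs(bad).max(), 2))   # ~0.95: a full-strength alias remains
print(round(np.abs(good).max(), 2))  # 0.0: properly pre-filtered, nothing aliases
```

The cost of the matched filter is visible too: the 0.3 cy/sample detail is simply gone from `good`, which is exactly the resolution trade-off being discussed.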
« Last Edit: September 28, 2013, 06:01:26 PM by Floyd Davidson »

xpatUSA
Sr. Member
Posts: 298

« Reply #5 on: September 28, 2013, 11:22:54 PM »

You Gentlemen are certainly having an interesting discussion. Meanwhile I took some shots of a 1951 USAF target modified to include the six colors. After some severe pixel-peeping I did find a reduction in noise level and more homogeneous color in the patches after a reduction to 25%. Size reduction was done in two steps in Elements 6 using straight bi-cubic. (Bart would not approve, but at least I did not apply CiC's recommended final sharpening step  Huh). I have other reduction algorithms such as Lanczos, Mitchell, et al. in another app, but bi-cubic was good enough for the purpose of the exercise, I thought. I restricted my examination to patches with frequencies below Nyquist.

Take Care,

best regards,

Ted
hjulenissen
Sr. Member
Posts: 1678

« Reply #6 on: September 29, 2013, 12:21:31 AM »

That reverses the cause and effect.
Let us agree to disagree. Life is too short to discuss semantics.
Quote
Oversampling is relatively easy for most audio applications, plus electronic filters are now very high quality, which makes audio applications relatively higher quality in that respect than are optical applications.
If cameras were more like audio A/D converters, I like to think that they would be more Nokia-like: sample at crazy-high rates (41MP?), then downsample using the finest digital filters you can afford. In the process you can have practically any passband response (as long as you have sufficient stop-band attenuation), and the downsampled samples will have a better signal-to-noise ratio due to averaging.
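The SNR part of that argument is easy to sketch (made-up numbers: 16x oversampling, Gaussian noise, and a plain box average standing in for the "finest digital filters"):

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 0.5
# 10,000 trials of forming one output pixel from 16 noisy oversampled values.
oversampled = true_value + rng.normal(0, 0.1, size=(10_000, 16))

downsampled = oversampled.mean(axis=1)   # box average as the digital filter

print(round(oversampled.std(), 3))   # ~0.1
print(round(downsampled.std(), 3))   # ~0.025: sqrt(16) = 4x better SNR
```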

-h
« Last Edit: September 29, 2013, 01:03:49 AM by hjulenissen »
hjulenissen
Sr. Member
Posts: 1678

« Reply #7 on: September 29, 2013, 01:07:14 AM »

After some severe pixel-peeping I did find a reduction in noise level and more homogeneous color in the patches after a reduction to 25%.
You might find this thread interesting:
downsampling reduces noice! - uh noise! - con't.

It only looks at B&W noise levels, though.

Analyzing how the CFA (ideally) affects the raw file is one thing. Analyzing how this fits in with any given real-life demosaic algorithm is going to be harder, as those algorithms tend to be non-linear, signal-adaptive and undocumented.

If you want to figure out how the presence of CFA/Bayer affects real-world images when heavily downsampled, I guess that you have to do real-world processing. The problem with that is that you do not easily get to "switch on/off" the CFA, keeping everything else constant.

I would guess that most real-world issues with the CFA and demosaicing algorithms (and also the most significant differences between demosaic algorithms) happen at high frequencies (and thus would be cancelled by a high-quality downsampling filter). As my reasoning further up hinted, there is at least the theoretical possibility that highly periodic structures (fences, roof tiles, feathers etc.) will alias into low frequencies, producing "falsely colored patches" that cannot (I believe) be corrected by demosaicing and will not disappear with downsampling. In those cases, you would have more low-frequency accuracy with a full-color sensor (or, indeed, with a softer lens!)
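That "periodic structures fold to low frequencies" scenario can be put in toy numbers. Suppose (hypothetically) that scene detail at 0.49 cy/px falls on the red sites of a Bayer mosaic, which sample only every second pixel: in red-sample units that is 0.98 cy/sample, which folds to 0.02 cy/sample, a slow ~100-pixel color beat that even a heavy subsequent downsample keeps:

```python
import numpy as np

x = np.arange(4000)
scene = np.sin(2 * np.pi * 0.49 * x)   # periodic detail (fence, feathers...)

red = scene[::2]                        # red sites: every 2nd pixel
spec = np.abs(np.fft.rfft(red))
peak = np.argmax(spec[1:]) + 1
print(peak / len(red))                  # 0.02: the detail folded to a low frequency

small = red.reshape(-1, 10).mean(axis=1)   # heavy 10:1 downsample afterwards
print(round(np.abs(small).max(), 2))       # ~0.91: the false low-frequency beat survives
```

The green plane, sampled at the full rate, would record the 0.49 cy/px detail correctly, so the surviving beat in red shows up as a slowly varying color error.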

The popularity of Bayer CFAs seems to suggest that this problem is not that large in practice, or at least that the quality vs. price of competing arrangements is considered by most people to be worse. Perhaps because highly periodic variation in color is uncommon, or perhaps because those kinds of errors are easy on the eye.

-h
« Last Edit: September 29, 2013, 01:26:37 AM by hjulenissen »
BartvanderWolf
Sr. Member
Posts: 3631

« Reply #8 on: September 29, 2013, 04:19:27 AM »

If the filter is correct for any given sampling rate it will not be correct for any other sampling rate.  Addition of the CFA changes the sampling rate, and therefore the AA filter is no longer correct.  The direct cause is sampling, not filtering.

Floyd is correct. It is only the fact that we now start subsampling the color bands, Red and Blue even more sparsely than Green, that reintroduces aliasing. We reduce color resolution by sparser sampling, but we only reduce luminance resolution a little, because all sensels still contribute (weighted) to luminance.
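The sampling densities behind that statement can be checked with a toy Bayer mask (the RGGB layout is an assumption; the other phase variants give the same counts). Green occupies half the sites, red and blue a quarter each:

```python
import numpy as np

h, w = 600, 800
rows, cols = np.mgrid[0:h, 0:w]

# Assumed RGGB Bayer tiling:  R G
#                             G B
red   = (rows % 2 == 0) & (cols % 2 == 0)
green = (rows % 2) != (cols % 2)
blue  = (rows % 2 == 1) & (cols % 2 == 1)

print(red.mean(), green.mean(), blue.mean())   # 0.25 0.5 0.25
```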

Quote
With or without the CFA, if it is "perfectly pre-filtered" there will be no aliasing.

Indeed, but you are correct to put "perfectly pre-filtered" in quotes, because in optics it is virtually impossible to pre-filter (low-pass filter) perfectly, since that would cause too much loss of resolution. In addition, we'd need to filter Blue and Red more severely than Green, which adds another practical issue.

As for Ted's question, the downsampling algorithm plays a role in propagating (or even introducing new) artifacts as we down-sample.

If the noise spectrum is uniform, we may not be able to reduce noise when downsampling, but the resampling algorithm can introduce some amount of blur.

False color artifacts resulting from the demosaicing of different sampling densities for Red and Blue versus Green will mostly be visible at the highest spatial frequencies, because the amplitude of the artifacts is relatively high compared to the real signal. So downsampling would in that case have a positive effect, but there are other methods that can reduce the false color artifacts without sacrificing as much resolution.

IMHO the benefits of a Bayer CFA, smaller file size and high recording frequency of subsequent images, often outweigh the drawbacks of slightly reduced luminance resolution and reduced color accuracy of microdetail. That also partially explains the popularity of Bayer CFA based single-shot color capture.

So, when downsampling a 4000x3000px image to 1000x750px, depending on the algorithm used, you will probably reduce some of the micro-detail inaccuracies, provided that their lower-spatial-frequency aliases have a low enough amplitude.

Cheers,
Bart
hjulenissen
Sr. Member
Posts: 1678

« Reply #9 on: September 29, 2013, 06:07:50 AM »

Floyd is correct. It is only the fact that we now start subsampling the color bands, Red and Blue even more sparsely than Green, that reintroduces aliasing. We reduce color resolution by sparser sampling, but we only reduce luminance resolution a little, because all sensels still contribute (weighted) to luminance.
Is it the weapon or the person behind it who causes someone to be injured? We can get into all kinds of philosophical debates over that, but it seems obvious that both are needed. Remove the man or remove the gun, and no one would get hurt.

With no CFA, there would be less aliasing (ideally none in the constructed example). With the CFA, there is more aliasing (depending on scene, lens, etc). The act of introducing a CFA has a causal effect on the degree of aliasing. I don't get why this should cause much debate.

The sampling system (sensor) and the pre-filter stay constant in both cases. The only change is the introduction of a CFA. This CFA alters the effective sampling characteristics of the system by making the achromatic sensor partially blind to certain parts of the spectrum at certain spatial locations. If there is significant high-frequency spatial-chromatic detail, you will get aliasing.
Quote from: BartvanderWolf
Quote from: Floyd Davidson
With or without the CFA, if it is "perfectly pre-filtered" there will be no aliasing.
Indeed, but...
I believe that Floyd here summarizes the question that the OP seems to be concerned about: does the Bayer arrangement affect lower frequencies (such as would be the expected output of a high-quality downscale)?

The answer is yes, it can. If the prefiltering is unaltered when a CFA is introduced, you have the possibility of aliasing. If you introduce the (theoretically necessary) prefilter cutoff, you will (for practically available filters) affect lower frequencies, and you will render the camera far worse than what is possible for full-resolution output.

The theoretical answer is not complete without some practical consideration. I will hazard a guess that you would have to have some quite unusual scene for color aliasing to be significant/disturbing if you downsample a Bayer CFA image by a factor of 2x2 or more. Exotic bird feathers are my best bet.

-h
« Last Edit: September 29, 2013, 06:19:51 AM by hjulenissen »
xpatUSA
Sr. Member
Posts: 298

« Reply #10 on: September 29, 2013, 10:05:10 AM »

Thank you, Gentlemen for the further interesting discussion and thanks also @h for the link.

Here's some results I got with the 1951 target:

Noise (ImageJ)

As can be seen, both the peak-to-peak amplitude and the std. dev. of the noise are reduced on a pixel-peeping basis.

Edge Spread (ImageJ)

The rise is much less in the downsized image but, again, on a pixel-peeping basis.

And compared to a Foveon SD9 low-res image:

Here we see how similar the end results are, with a somewhat reduced detail MTF in the Bayer image and evidence of aliasing in the Foveon image. For your information, the Bayer image started life with a 4.33um pixel pitch; the Foveon (truly binned on-sensor 2x), a whopping 18.24um with no CFA and no 'blur' filter.

A gentle reminder that my target view is around 1000x750px viewed on a 1280x1024px monitor with a 0.294mm dot pitch, so the loss of absolute spatial resolution is both acceptable and expected.

It almost looks like, from an 'image quality' point of view, my recently acquired Panasonic 12MP GH1 was a waste of money but, oh! the convenience, the 10x live manual focusing, the live histogram, the low-ISO performance, etc., etc., ad nauseam  Smiley
« Last Edit: September 29, 2013, 12:01:01 PM by xpatUSA »

best regards,

Ted
Floyd Davidson
Full Member
Posts: 174

« Reply #11 on: September 29, 2013, 12:09:09 PM »

With no CFA, there would be less aliasing (ideally none in the constructed example). With the CFA, there is more aliasing (depending on scene, lens, etc). The act of introducing a CFA has a causal effect on the degree of aliasing. I don't get why this should cause much debate.
In practice, it isn't true.

The AA filter is chosen, for each and every system, depending on the specific sampling rate.  That rate is indeed changed because a CFA is used... but that is true for any mechanism added to encode color information.   Removing the CFA is not a viable way to reduce aliasing and should not be thought of as if it were. 

The CFA (or another mechanism with the same base problems) is required if recording color information is the goal, and one pays the penalties for that.  The base sensor design has to be higher resolution to record color, and the AA filter must be matched,  if all else is to remain the same. 

Quote
The sampling system (sensor) and the pre-filter stay constant in both cases.
The sampling rate necessarily changes when a CFA is used to encode color information.  It is not reasonable to use the same AA filter for different sampling rates.

The difference of whether a CFA exists is a matter of whether color information is to be recorded.  The design is of course optimized for that.  Including a CFA is not based on resolution requirements.

Quote
The only change is the introduction of a CFA. This CFA alters the effective sampling characteristics of the system by making the achromatic sensor partially blind to certain parts of the spectrum at certain spatial locations. If there is significant high-frequency spatial-chromatic detail, you will get aliasing.
But that is also true if the sampling rate is changed for other reasons.  Regardless of the reason, the proper AA filter is required, and it is necessarily optimized for the specific sampling rate.
Quote
I believe that Floyd here summarizes the question that the OP seems to be concerned about: does the Bayer arrangement affect lower frequencies (such as would be the expected output of a high-quality downscale)?

The answer is yes, it can. If the prefiltering is unaltered when a CFA is introduced, you have the possibility of aliasing.
If the system is poorly engineered it will provide poor data.  Mismatched design is not a component problem, it's a design problem.

A much more significant difference between having and not having a CFA is the fact that with it the color information is captured, and without it no color information is available later. The amount of aliasing is of little significance by comparison, assuming one wants a color image!

Quote
If you introduce the (theoretically necessary) prefilter cutoff, you will (for practically available filters) affect lower frequencies, and you will render the camera far worse than what is possible for full-resolution output.
You are ignoring the fact that the "far worse" camera produces far more useful information, granted at lower resolution.
Quote
The theoretical answer is not complete without some practical consideration. I will hazard a guess that you would have to have some quite unusual scene for color aliasing to be significant/disturbing if you downsample a Bayer CFA image by a factor of 2x2 or more. Exotic bird feathers are my best bet.

Downsampling will remove high-frequency components. How the components came to exist initially is not a factor in whether they are removed or not. Hence whether half of those components are real detail and half are aliasing distortion, or some other mix such as 1/4 scene and 3/4 aliasing, makes no difference: they are all removed by downsampling.

Likewise not all aliasing distortion is high frequency, and therefore it is not all removed. For example, if the camera does not even have any AA filtering other than the limited resolution of the lens, there is no reason not to have low-frequency aliasing distortion, and that will not be dramatically changed by downsampling.

Rather obviously the CFA itself is not the issue!  The sensor resolution and how well the AA filter matches it are the operative issues. The CFA is just a detail (in essence, whether recording color information is required or not) that has a known penalty.

xpatUSA
Sr. Member
Posts: 298

« Reply #12 on: September 30, 2013, 02:26:57 AM »

Likewise not all aliasing distortion is high frequency, and therefore it is not all removed. For example, if the camera does not even have any AA filtering other than the limited resolution of the lens, there is no reason not to have low-frequency aliasing distortion, and that will not be dramatically changed by downsampling.

Floyd, please help a relative noob. What form does "low frequency aliasing distortion" take?

@h, you said "Aliasing can generally spread to any frequency (including DC)" - could you explain how that happens and what does DC aliasing look like?


best regards,

Ted
hjulenissen
Sr. Member
Posts: 1678

« Reply #13 on: September 30, 2013, 03:06:58 AM »

@h, you said "Aliasing can generally spread to any frequency (including DC)" - could you explain how that happens and what does DC aliasing look like?
http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

Imagine "sampling" (picking the value at an infinitesimal point) the red waveform above at the points corresponding to the dots. You will notice that there are fewer than 2 samples per period.

As suggested by the dotted line, this leads to an ambiguity: both the dotted line and the red line are "valid" interpretations of the samples, depending on how those samples were acquired. In many sampling systems, the reconstruction stage assumes that the waveform was limited to [0, fs/2), thus the dotted line would be reconstructed, and it would be an erroneous frequency that was not present in the input signal. Depending on the frequency of the input and the sampling rate, the aliased frequency will "fold", producing anything from zero frequency to fs/2.

If you sample the input waveform at exactly 1 sample per period, all samples will have the same value, and depending on the phase offset, you might get a DC value of +A, 0, -A or anywhere in between. If there is the slightest drift in frequency (or sampling period) you would get something "DC-like" that slowly drifts in the range of +/-A. Since this is a very low frequency signal, no amount of lowpass filtering/downsampling can be expected to conceal it.
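The folding behavior described in the two paragraphs above can be condensed into a one-line formula; a small sketch:

```python
def alias_frequency(f, fs):
    """Apparent frequency after sampling at rate fs: f folds into [0, fs/2]."""
    return abs(f - fs * round(f / fs))

fs = 100.0
for f in (10.0, 49.0, 51.0, 70.0, 99.0, 100.0, 101.0):
    print(f, "->", alias_frequency(f, fs))
# 51 -> 49, 70 -> 30, 99 -> 1, and 100 -> 0: exactly 1 sample/period reads as "DC"
```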

Image reproduction adds some complication to the application of this theory. The sensor sensels are not point-samplers, but more like area-integrators. And the OLPF, lens flaws, diffraction etc. lead to an attenuation of those high frequencies, meaning that the problem is smaller than one might think. On the other hand, blue and red samples in a Bayer CFA are taken at 1/2 the horizontal and vertical rate, and tend to be insufficiently filtered for some unusual scenes (as I believe your test chart shows). Also, the reconstruction in monitors is very far from the theoretical ideal.

-h
« Last Edit: September 30, 2013, 03:27:06 AM by hjulenissen »
hjulenissen
Sr. Member
Posts: 1678

« Reply #14 on: September 30, 2013, 03:22:42 AM »

In practice, it isn't true.
It seems that both you and I know our sampling theory. Let us leave it at that.

-h
« Last Edit: September 30, 2013, 03:25:00 AM by hjulenissen »
BartvanderWolf
Sr. Member
Posts: 3631

« Reply #15 on: September 30, 2013, 04:14:39 AM »

Floyd, please help a relative noob. What form does "low frequency aliasing distortion" take?

Hi Ted,

Aliasing forms a lower spatial frequency alias (seemingly larger in size) of detail that is too small to resolve reliably. Have a look at the following test chart shot from one of the older webpages of Norman Koren about MTF and aliasing:


The detail at the right-hand side cannot be resolved and results in larger patterns than the actual detail. The aliases are often also lower in amplitude, so if they are mixed with actual detail of that spatial frequency, the aliasing may be less obvious but still add to or subtract from local contrast or color.

Quote
@h, you said "Aliasing can generally spread to any frequency (including DC)" - could you explain how that happens and what does DC aliasing look like?

As illustrated by the above image, the aliases can have different spatial frequencies depending on the exact sampling density and the spatial frequency of the detail. DC aliasing will just change the overall image brightness.

Cheers,
Bart
« Last Edit: September 30, 2013, 09:40:07 AM by BartvanderWolf »
Floyd Davidson
Full Member
Posts: 174

« Reply #16 on: September 30, 2013, 09:17:18 AM »

Floyd, please help a relative noob. What form does "low frequency aliasing distortion" take?

@h, you said "Aliasing can generally spread to any frequency (including DC)" - could you explain how that happens and what does DC aliasing look like?

Other answers have been technically correct. Let's talk about it in as non-technical terms as we can...

For any given sampling rate (which is set by how close the pixels are to each other on the sensor), all changes in tone value that happen at less than 1/2 the sampling rate will be more or less correctly recorded. Hence if we shoot a picture of a white picket fence that puts 50 pickets per millimeter on the sensor, and the sensor has at least 100 pixels per millimeter, the detail of that fence will be properly recorded. It takes a minimum of two pixels to correctly record a picket, so that will be true for any fence with wider pickets too.

Also, if there are continuous slats in the fence, with no space between them, hence no variation in tones, which makes it a zero spatial frequency, that will be correctly recorded. So anything from zero to half the sampling rate is correctly recorded, producing output data that correctly relates to the tonal variation frequency.

But if the frequency of the detail is higher than 1/2 the sensor's sampling rate, the data will not be correctly recorded and instead will be "aliased" to a lower frequency. As others have shown, that works out to wider pickets in the fence, and fewer of them. The key to how wide they are is the comparison to the sampling frequency. As the spatial frequency of the scene detail goes higher than 1/2 the sensor's sampling rate, the recorded frequency goes down rather than up until the spatial frequency gets to exactly the sensor's sampling rate (where the recorded frequency becomes zero), and then it goes up again. The output domain is only between zero and 1/2 the sampling rate, and as the input frequency continues to go up, the output is aliased in cycles, first down and then up, between 0 and 1/2 the sampling rate.

At each point where the input detail has a frequency that is a multiple of the sampling rate, the output frequency is zero. Zero spatial frequency just means there is no tone change in the data. So a picket fence would not appear as black and white alternating lines, but instead would be a solid grey tone value.

Charts and quoting line pairs per millimeter are great, but an example of the actual effect is interesting too! Let's say you take a picture of a rather long picket fence at an angle, with a short focal length lens to give it maximum perspective distortion. The fence close to you has a rather low spatial frequency, with big wide slats that are far apart. But when projected onto the sensor, the pickets off in the distance are very small and close together, representing a very high spatial frequency. Along the way from close up, where each picket is correctly recorded, to way out in the distance, there will be points (at exact multiples of the sampling rate) where the fence appears not to have pickets and is just a grey board fence. Between those points the pickets will gradually become distinct, and then cycle back to one solid tone with no pickets! (And of course with Bayer-filtered color images the Green channel is sampled at a different rate than the Red and Blue channels, so that channel will have the solid tones at different locations than the other two. That causes the pixel colors to vary with distance, and is what's known as a moiré pattern.)
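The picket-fence sweep can be put in numbers (made-up figures: a sensor sampling at 100 pixels/mm, picket frequency rising as the fence recedes). The apparent frequency cycles down and up, and folds to zero, a plain gray board, wherever the scene frequency hits an exact multiple of the sampling rate:

```python
import numpy as np

fs = 100.0                                   # sensor sampling rate, pixels per mm
pickets = np.array([20, 50, 80, 100, 120, 150, 180, 200], dtype=float)  # per mm

# Fold each scene frequency into the [0, fs/2] output domain.
apparent = np.abs(pickets - fs * np.round(pickets / fs))
print(apparent)   # 20, 50, 20, 0, 20, 50, 20, 0: pickets vanish at multiples of fs
```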


Note that "DC" is a reference to electrical signals, and while it isn't technically valid in this discussion, it serves the purpose by comparing to AC electrical signals in a way that those familiar with electricity will catch in an instant. DC has no voltage amplitude variations, unlike AC. Zero in the spatial frequency domain means no change in tonal amplitude.

xpatUSA
Sr. Member
Posts: 298

« Reply #17 on: September 30, 2013, 09:31:39 AM »

Thanks h, Bart and Floyd,

I had not realized that what was being described was simply the visual appearance (how they look) of artifacts caused by scene spatial frequencies at or above the various Nyquist frequencies (red, blue, green, diagonal, etc.) for the sensor. Pardon my lack of understanding.

There's also a representation of that effect where the MTF curve is shown as a solid line going past 0.5 cy/px up toward 1 cy/px, with a dashed or grayed line folding back in a mirror image from 0.5 cy/px toward 0 cy/px.

« Last Edit: September 30, 2013, 06:36:41 PM by xpatUSA »

best regards,

Ted
hjulenissen
Sr. Member
Posts: 1678

« Reply #18 on: October 02, 2013, 05:03:14 AM »

Although frequency analysis and the Shannon-Nyquist sampling theorem are well-established concepts with numerous practical and theoretical applications, using them for photography can be unintuitive. I believe one reason is that vision has high spatial-domain resolution but low spatial-frequency-domain resolution (the DCT size in JPEG is only 8x8 pixels), while hearing has low temporal-domain resolution but high temporal-frequency resolution (block sizes in audio coding can be thousands of samples).

-h
bjanes
Sr. Member
Posts: 2793

« Reply #19 on: October 02, 2013, 07:12:52 AM »

Thank you, Gentlemen for the further interesting discussion and thanks also @h for the link.

Here's some results I got with the 1951 target:




Another imaging defect resulting from insufficient sampling of a target with discrete lines thicker than the pixel width is ghosting, as illustrated in your shot and below, where the vertical black line has double grey ghosts due to insufficient sampling.

Bill

« Last Edit: October 02, 2013, 07:17:22 AM by bjanes »
