ejmartin


« Reply #60 on: January 18, 2010, 08:10:22 AM » 

Emil,
Thanks for your thoughtful response about the spatial correlations between channels. While I do not dispute anything you wrote, it does raise more questions in my mind. For example, if the red and blue channels were oversampled, relying on the partial correlations would add distortion.

Not sure what you have in mind here. What do you mean by distortion? As the sampling of the red and blue channels improves without limit, the need to rely on the correlation diminishes. Certainly, using the correlation was much more important when we had 6 MP cameras than now at 24 MP.

The degree of correlation must be dependent on the scene. How is that taken into account? What is the criterion for including the correlation?

If the sensor outresolved any lens you could put in front of it by a sufficient factor, then one could do away with AA filters and use very simple interpolation algorithms. Apparently 24 MP on a FF DSLR, or 50 MP on an MFDB, is not enough, as Dick's example shows. As for how it is done, one can for example look at the main algorithm used in dcraw, called Adaptive Homogeneity-Directed (AHD) demosaicing. The idea is to try interpolating the missing information both vertically and horizontally in a local region, then select the interpolation direction which leads to the smoothest result, judged by Lab color differences among the adjacent pixels.
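The direction-selection idea can be sketched in a few lines. The snippet below is a toy illustration, not dcraw's implementation: it fills the non-green checkerboard sites of a mosaic from both a horizontal and a vertical average and picks, per pixel, the direction with the smaller raw gradient. Full AHD instead builds homogeneity maps from Lab color differences; the function name, the green-site layout, and treating the input as a monochrome mosaic are all assumptions of this sketch.

```python
import numpy as np

def green_directional_sketch(mosaic):
    """Toy sketch of AHD-style direction selection on the 'green'
    checkerboard of a mosaic (green assumed where row + col is even;
    input treated as a monochrome mosaic for illustration). Real AHD
    picks the direction whose result is most homogeneous in Lab color
    differences; here we just compare raw local gradients."""
    h, w = mosaic.shape
    green = (np.add.outer(np.arange(h), np.arange(w)) % 2) == 0

    p = np.pad(mosaic, 1, mode='reflect')
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    up, down = p[:-2, 1:-1], p[2:, 1:-1]

    g_h = 0.5 * (left + right)              # horizontal candidate
    g_v = 0.5 * (up + down)                 # vertical candidate

    # Interpolate along the direction with the smaller gradient,
    # i.e., the one expected to give the smoother result.
    prefer_v = np.abs(up - down) <= np.abs(left - right)
    interp = np.where(prefer_v, g_v, g_h)
    return np.where(green, mosaic, interp)
```

On an image containing a sharp vertical edge, the sketch interpolates along the edge rather than across it, so the edge is not smeared; a fixed-direction interpolator would average across the step.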


« Last Edit: January 18, 2010, 08:31:25 AM by ejmartin »


emil



ejmartin


« Reply #61 on: January 18, 2010, 08:12:46 AM » 

Wish someone would write a similar FFT PS plugin for Mac. grrr.




emil



BartvanderWolf


« Reply #62 on: January 18, 2010, 10:41:52 AM » 

If you look closely, it's all checkerboard with just the 3 white lines.

Yes, and not very useful at final output resolution to display even a suggestion of a sine wave pattern. Cliff, with all due respect (and I do mean that), there is a difference between theoretical reconstruction capability and useful output resolution. What you have shown is that one can interpolate and thus reconstruct a signal when a good filter is used, but you cannot make the signal/image look good (like the real structure, e.g. a sine wave) at its native size. One needs to enlarge/interpolate to avoid visual artifacts, but at the same time one reduces the apparent (angular, from a fixed viewing distance) resolution. IOW, we are faced with a tradeoff, especially for on-screen display, not just a single choice. That's why I raised the subject of downsampling on my web page about that subject.

So you can reconstruct up to sqrt(2) higher resolution, superfine detail along the diagonals. Wasn't this what Fuji exploited in their Super CCD sensor?

Fuji had no option to record it otherwise without throwing out recorded resolution. They rotated the sensor layout by 45 degrees, and thus were able to exploit the higher diagonal resolution capability of a regular grid, at the expense of diagonal resolution in the rotated sampling. I think it is a good choice from a resolution/capture perspective, because most natural (and many man-made) objects have a more dominant horizontal/vertical frequency content, due to (resistance to) gravity. Unfortunately it also requires a 2x larger file size to hold the data without loss. What Fuji understood is that for a reliable visual representation, one has to sacrifice some (in their case diagonal) resolution at the native image size/orientation. That is not a bad choice for an image that ultimately needs to be printed (one of the Fuji goals), even if most likely at reduced size.

For a reliable visual presentation of 'problematic structures' we need to sacrifice some potential resolution when viewing at a 100% zoom setting on e.g. an LCD or similar hor/ver grid, as you have implicitly demonstrated. Luckily, many real-life structures are chaotic enough to allow some visual artifacts to go undetected at small reproduction sizes. Enlargements need all the help they can get, though.

Cheers, Bart
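The sqrt(2) figure mentioned above follows directly from the grid geometry: along a 45-degree diagonal, successive lines of samples on a square grid of pitch p sit only p/sqrt(2) apart. A quick check (the pitch value is arbitrary):

```python
import math

p = 1.0                                  # pixel pitch, arbitrary units
f_axis = 1 / (2 * p)                     # Nyquist limit along rows/columns
f_diag = 1 / (2 * (p / math.sqrt(2)))    # sample lines p/sqrt(2) apart on the diagonal
print(f_diag / f_axis)                   # ratio is sqrt(2), about 1.414
```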


« Last Edit: January 18, 2010, 11:10:28 AM by BartvanderWolf »





crames


« Reply #63 on: January 18, 2010, 11:33:09 AM » 

Yes, and not very useful at final output resolution to display even a suggestion of a sine wave pattern.

No question, one is forced to enlarge the image, but when you do, you see the hidden information.

Cliff, with all due respect (and I do mean that), there is a difference between theoretical reconstruction capability and useful output resolution. What you have shown is that one can interpolate and thus reconstruct a signal when a good filter is used, but you cannot make the signal/image look good (like the real structure, e.g. a sine wave) at its native size. One needs to enlarge/interpolate to avoid visual artifacts, but at the same time one reduces the apparent (angular, from a fixed viewing distance) resolution. IOW, we are faced with a tradeoff, especially for on-screen display, not just a single choice. That's why I raised the subject of downsampling on my web page about that subject.

I'm just exploring possibilities, not suggesting that all images should be subjected to reconstruction. The original point I intended to make is that it allows you to see whether aliasing is really present in an image or not. So hopefully our eyeball "aliasing detectors" have now been recalibrated. But I think it's also clear that it shows a potential to recover more detail and quality. Yes, there is a resolution tradeoff. But maybe it's not so much a problem when printing, for example, because it's possible to reconstruct/interpolate by 2x, then print at 720 dpi instead of 360 dpi, and thereby maintain angular resolution along with the potentially reduced artifacts.

Cliff




Cliff



BartvanderWolf


« Reply #64 on: January 18, 2010, 12:06:02 PM » 

Wish someone would write a similar FFT PS plugin for Mac. grrr.

Hi Emil, You could use ImageJ, which is all Java and operates at 32-bit FP accuracy, not just 8-bit. Cheers, Bart


« Last Edit: January 18, 2010, 12:19:28 PM by BartvanderWolf »





ejmartin


« Reply #65 on: January 18, 2010, 12:24:14 PM » 

Hi Emil, You could use ImageJ, which is all Java and operates at 32-bit FP accuracy, not just 8-bit. Cheers, Bart

I do use ImageJ quite a bit for analysis, along with IRIS and Mathematica (the latter has been very handy for algorithm development). But I would love to have something that is easily integrated into an image-processing workflow in Photoshop.


« Last Edit: January 18, 2010, 12:24:59 PM by ejmartin »


emil



BartvanderWolf


« Reply #66 on: January 18, 2010, 01:35:50 PM » 

I do use ImageJ quite a bit for analysis, along with IRIS and Mathematica (the latter has been very handy for algorithm development).

I expected you did, but I wasn't sure. I know you also use Mathematica, but that's to be expected in an academic environment.

But I would love to have something that is easily integrated into an image-processing workflow in Photoshop.

I see, but then wouldn't we all ... Cheers, Bart







joofa


« Reply #67 on: January 18, 2010, 02:48:21 PM » 

Here is the procedure:
1. Do the forward FFT.
2. Increase the canvas size on all sides. The ratio of the new size to the original size is your interpolation factor.
3. Do the inverse FFT.

Zero-padded DFT-based techniques have long been used successfully for higher frequency resolution, e.g., for the separation of sinusoids that are close in frequency. In this case you are applying them in the reverse direction: zero-padding in frequency to get higher resolution in the spatial domain. However, insertion of zeros in one domain only lets you have more resolution in the other domain by interpolation with the existing samples in that domain. No new detail is created; the signal in the other domain only gets "stretched", and in-between points are filled by information from the neighboring samples. And this process is not to be confused with the "aliasing" of the data that can occur during reconstruction by using a reconstruction filter wider than necessary on otherwise alias-free data obtained during sampling.
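The three steps quoted above can be sketched directly with numpy's FFT (1-D for brevity; the 2-D case pads both axes). The one subtlety is the shared Nyquist bin for even-length signals, which must be split in half to keep the result real. The function name is mine:

```python
import numpy as np

def fft_zoom(x, factor):
    """Sinc (Fourier) interpolation of a 1-D signal: forward FFT,
    zero-pad the high frequencies in the middle of the spectrum,
    inverse FFT. A sketch of the procedure described above."""
    n = len(x)
    m = int(round(n * factor))
    X = np.fft.fft(x)
    Y = np.zeros(m, dtype=complex)
    h = (n + 1) // 2                 # bins 0..h-1: non-negative frequencies
    Y[:h] = X[:h]
    Y[m - (n - h):] = X[h:]          # negative-frequency bins at the end
    if n % 2 == 0:                   # even n: split the shared Nyquist bin
        Y[n // 2] = 0.5 * X[n // 2]
        Y[m - n // 2] = 0.5 * X[n // 2]
    # Rescale so sample amplitudes are preserved.
    return np.real(np.fft.ifft(Y)) * (m / n)
```

For an integer factor, the original samples reappear exactly at every factor-th output point; in between, the signal is filled in by the periodic sinc (Dirichlet) kernel, exactly as described.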


« Last Edit: January 18, 2010, 03:19:39 PM by joofa »





crames


« Reply #68 on: January 18, 2010, 09:09:43 PM » 

Zero-padded DFT-based techniques have long been used successfully for higher frequency resolution, e.g., for the separation of sinusoids that are close in frequency. In this case you are applying them in the reverse direction: zero-padding in frequency to get higher resolution in the spatial domain. However, insertion of zeros in one domain only lets you have more resolution in the other domain by interpolation with the existing samples in that domain. No new detail is created; the signal in the other domain only gets "stretched", and in-between points are filled by information from the neighboring samples.

Yes, none of this is new. Do you have any suggestions for eliminating the ripples?




Cliff



joofa


« Reply #69 on: January 18, 2010, 10:19:38 PM » 

Do you have any suggestions for eliminating the ripples?

Off the top of my head, the following methods may be used:

(1) In approximation-based reconstruction, as opposed to interpolation-based reconstruction, the coefficients in the linear combination sum_i (c_i * phi_i), where the phi_i are basis functions represented by the reconstruction kernel, are typically derived for the l_2 space (Hilbert space), for several reasons. However, in the more general Banach space setting with the l_p norm (p >= 1), the error between the reconstructed and the actual signal is a convex function of the coefficients c_i. Note that there is no reason to restrict p to integers; values such as p = 1.4, etc., are fine. It has been observed that l_p with p near 1 gives better ringing suppression. This is a powerful approach; however, in general, computing the coefficients in spaces other than a Hilbert space is not computationally easy.

(2) Local presmoothing of a signal discontinuity (e.g., a sharp edge) before interpolating.

(3) A strictly positive reconstruction function, with no negative lobes. If the reconstruction filter is approximating, the error may be larger.

(4) A hybrid approach, similar to that suggested by Yaroslavsky.

Note: local-windowing-based schemes, which are otherwise good for reducing the error between the reconstructed and the original signal, can help smooth the block discontinuity at the end of each segment of data; however, some ringing may remain because of signal discontinuities (e.g., a sharp edge) elsewhere.
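Point (3) above is easy to demonstrate numerically: a kernel with no negative lobes cannot create new minima or maxima, while a (truncated) sinc rings on a step edge. A small sketch; the direct summation, the triangular kernel, and the test signal are my choices for illustration:

```python
import numpy as np

def interp_with_kernel(x, kernel, factor):
    """Upsample 1-D signal x by an integer factor: place each sample's
    weighted kernel on a fine grid and sum (direct O(n*m) sketch)."""
    t = np.arange(len(x) * factor) / factor     # output positions
    out = np.zeros_like(t)
    for k, xk in enumerate(x):
        out += xk * kernel(t - k)
    return out

tri = lambda d: np.maximum(0.0, 1.0 - np.abs(d))   # positive: no negative lobes
sinc = lambda d: np.sinc(d)                        # ideal filter, truncated here

step = np.r_[np.zeros(16), np.ones(16)]
y_tri = interp_with_kernel(step, tri, 4)
y_sinc = interp_with_kernel(step, sinc, 4)
# y_sinc overshoots above 1 and undershoots below 0 near the edge
# (ringing); y_tri stays within [0, 1] everywhere.
```

The price of the positive kernel is exactly the tradeoff noted in (3): the triangular kernel is only linear interpolation, so high frequencies are attenuated and the reconstruction error away from the edge is larger than sinc's.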


« Last Edit: January 19, 2010, 09:18:41 AM by joofa »





joofa


« Reply #70 on: January 18, 2010, 11:22:05 PM » 

Duplicate.


« Last Edit: January 18, 2010, 11:22:55 PM by joofa »





Jonathan Wienke


« Reply #71 on: January 18, 2010, 11:44:16 PM » 

Yes, none of this is new. Do you have any suggestions for eliminating the ripples?

The solution I devised for suppressing ringing with cubic splines is simple, but seems to be very effective. I established a limit for the spline's z coefficients (as defined here). By limiting z to ±(maximum - minimum) / N, where N is between 8 and 32, ringing and ripples are greatly reduced without affecting spline values in conditions where ringing or ripples are not an issue. When its z coefficients are clamped to zero, the values returned by a cubic spline are identical to linear interpolation, which of course has no issues with ringing. By intelligently limiting z coefficient values, you can alter the behavior of the spline so that it interpolates quasi-linearly in conditions where ringing is problematic (high-contrast edges), without affecting the spline's behavior in conditions where ringing is not an issue.
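The clamping idea can be sketched with a standard natural cubic spline. This is my reconstruction from the description above, not Jonathan's actual code: the coefficient convention is the usual natural-spline one, N = 16 is an arbitrary pick from his 8-32 range, and N = None disables the clamp for comparison.

```python
import numpy as np

def clamped_cubic_spline(y, factor, N=16):
    """Natural cubic spline on integer knots, evaluated on a grid
    `factor` times finer, with the second-derivative coefficients z
    clamped to +/- (max - min) / N as a ringing limiter. Clamping z
    all the way to zero would reduce to pure linear interpolation."""
    y = np.asarray(y, dtype=float)
    n = len(y)

    # Natural-spline tridiagonal system for the second derivatives z.
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                    # natural ends: z = 0
    for k in range(1, n - 1):
        A[k, k - 1], A[k, k], A[k, k + 1] = 1.0, 4.0, 1.0
        rhs[k] = 6.0 * (y[k + 1] - 2.0 * y[k] + y[k - 1])
    z = np.linalg.solve(A, rhs)

    if N is not None:                            # the ringing limiter
        lim = (y.max() - y.min()) / N
        z = np.clip(z, -lim, lim)

    t = np.arange((n - 1) * factor + 1) / factor
    k = np.minimum(t.astype(int), n - 2)         # interval index
    u = t - k                                    # position within interval
    return (z[k] / 6.0 * ((1.0 - u) ** 3 - (1.0 - u))
            + z[k + 1] / 6.0 * (u ** 3 - u)
            + y[k] * (1.0 - u) + y[k + 1] * u)
```

On a unit step, the unclamped spline overshoots by roughly 10%, while the clamped version stays within a fraction of a percent of the [0, 1] range yet still passes through every sample, which matches the behavior described.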







crames


« Reply #72 on: January 19, 2010, 06:07:26 AM » 

The solution I devised for suppressing ringing with cubic splines is simple, but seems to be very effective. I established a limit for the spline's z coefficients (as defined here). By limiting z to ±(maximum - minimum) / N, where N is between 8 and 32, ringing and ripples are greatly reduced without affecting spline values in conditions where ringing or ripples are not an issue.

How well does your algorithm do at avoiding reconstruction error?




Cliff



Jonathan Wienke


« Reply #73 on: January 19, 2010, 08:13:34 AM » 

How well does your algorithm do at avoiding reconstruction error?

Not quite as well as sinc; it has trouble with your synthetic image. But it does a pretty good job handling the air conditioner image: [attachment=19565:reconstruction.png] My algorithm is on the left, nearest-neighbor is on the right.

I have a question for you about Yaroslavsky's discrete-sinc interpolation algorithm. In his fast sinc interpolation paper, on page 15, he states:

Discrete sinc functions sincd and sincd defined by (8.10) and (8.24) are discrete point spread functions of the ideal digital low-pass filter, whose discrete frequency response is a rectangular function. Depending on whether the number of signal samples N is an odd or even number, they are periodic or antiperiodic with period N, as illustrated in Figures 8.2(a) and 8.2(b).

If one were to do the sinc interpolation twice, once with an odd number of samples and once with an even number of samples (perhaps by deliberately padding the ends of the data with a different number of samples), wouldn't it be possible to use this periodic/antiperiodic property to combine the two interpolations and cancel out the ripples? In Yaroslavsky's diagrams, it looks like the ripple patterns surrounding the signal impulses are approximately mirror images of each other. If so, wouldn't this be a more mathematically elegant approach than Yaroslavsky's adaptive switch-to-nearest-neighbor-in-trouble-spots approach?


« Last Edit: January 19, 2010, 08:17:06 AM by Jonathan Wienke »





crames


« Reply #74 on: January 19, 2010, 07:40:12 PM » 

If one were to do the sinc interpolation twice, once with an odd number of samples and once with an even number of samples (perhaps by deliberately padding the ends of the data with a different number of samples), wouldn't it be possible to use this periodic/antiperiodic property to combine the two interpolations and cancel out the ripples?

My guess (without spending the huge amount of time it would take a dabbler like me to really understand it) is that the odd-number case has a symmetrical spectrum, while the even-number spectrum is asymmetrical. Because of the asymmetrical spectrum, filtering in the even case is a compromise, because the effect of the filter won't be symmetrical. There might be a problem getting things to cancel out the way you would want. This is what I'm getting from looking at pages 3 and 7 of his Lecture 4 Selected Topics. Maybe Joofa can shed some light on it.
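The even/odd asymmetry is visible in the DFT bins themselves: for a real signal, bins k and N-k are complex conjugates, so a bin can lack a mirror partner only when k == N-k, i.e., at k = N/2, which exists only for even N. That lone, purely real Nyquist bin is what prevents the even-length spectrum from splitting symmetrically. A quick numpy check (the random test signal is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
for N in (8, 9):
    X = np.fft.fft(rng.standard_normal(N))
    # Bins k and N-k pair off as conjugates; find any bin with no partner.
    lone = [k for k in range(1, N) if k == N - k]
    print(N, lone)                       # prints: 8 [4]   then   9 []
    for k in lone:
        assert abs(X[k].imag) < 1e-9     # the lone Nyquist bin is real
```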




Cliff



ejmartin


« Reply #75 on: January 19, 2010, 08:26:26 PM » 

I don't think changing the number of samples from N to N+1 is going to make much difference as far as ringing/pattern artifacts are concerned. Sinc interpolation is predicated on the assumption that the signal being reconstructed is bandlimited: that it has no signal power at frequencies beyond the Nyquist frequency of the sampling. This works nicely for oscillating patterns like Bart's rings, but not so well for a step edge, which has spectral content at all frequencies; the missing spectrum is what would cancel the ringing, and its absence in the reconstruction leads to the overshoots and undershoots near the edge. Changing the samples from N to N+1 will have very little effect for large N; it's just adding an extra row/column of pixels to the image.
So the rings image will work well because its power spectrum matches the one assumed by the sinc filter; but natural images have quite a different power spectrum, so it's not clear that a sinc-based reconstruction is going to be optimal. It certainly won't be in images with step edges and similar structures.




emil



Jonathan Wienke


« Reply #76 on: January 19, 2010, 11:40:43 PM » 

So the rings image will work well because its power spectrum matches the one assumed by the sinc filter; but natural images have quite a different power spectrum, so it's not clear that a sinc-based reconstruction is going to be optimal. It certainly won't be in images with step edges and similar structures.

Yaroslavsky has a discrete-sinc interpolation algorithm that handles multiple rotations of a scanned text image better than several other common interpolation algorithms. Check out pages 17-21 of http://www.eng.tau.ac.il/~yaro/RecentPubli...ion_ASTbook.pdf for a comparison test.







crames


« Reply #77 on: January 20, 2010, 08:26:42 AM » 

I found some more code, by one of Yaroslavsky's co-authors: Antti Happonen. In addition to Matlab code, this includes C source and a compiled Windows DLL. The various refinements to sinc interpolation appear to be included, such as minimized boundary effects, adaptive sliding windows, etc. Also simultaneous denoising/interpolation, rotating, zooming, etc. I will run some tests on real images when I get a chance.




Cliff



joofa


« Reply #78 on: January 20, 2010, 04:29:37 PM » 

Hi Cliff, I haven't verified, but I think you are right regarding the canceling of ringing using even/odd samples in your post above. On Antti Happonen: I'm not sure about his assertion that the best approximation (least squares) of a possibly non-bandlimited continuous function (in L_2 space) in the space of bandlimited functions represented by shifted sinc functions as a basis, i.e., sum_i (c_i * sinc_i), is given by taking the coefficients c_i to be the sampled values of the actual function. It is not difficult to show that for the best approximation the coefficients are actually obtained by filtering the original function with an ideal low-pass filter (sinc) and then sampling. The only way these statements may be reconciled is if the samples of the original function and the samples obtained after low-pass filtering it with a sinc are the same, which may not be the case for a general L_2 function.


« Last Edit: January 21, 2010, 10:19:25 AM by joofa »





crames


« Reply #79 on: January 24, 2010, 10:07:17 AM » 

I tried out the Yaroslavsky/Happonen sinc interpolation routines on a few images. It turns out that his sliding-window method performs as advertised and is very effective at avoiding ripple artifacts. Unfortunately there is a tradeoff, in that the sliding-window method rolls off the higher frequencies enough that, in the end, it was no better in my tests than conventional methods like bicubic. (The sliding window has other attractive features, such as optional noise reduction, but I did not test that.)

While looking for images to test, I visited an old thread here that was recently revived: http://luminouslandscape.com/forum/index....showtopic=20242 In that thread there is a comparison between the Sigma SD14 and the Canon 50D, where various image defects are blamed on the lack of an AA filter in the Sigma. The bridge comparison is a perfect example to illustrate the point of the OP: that you need to apply a reconstruction filter if you are seeing jaggies, "aliasing," blockiness, etc. Links to the raw files were provided here: http://luminouslandscape.com/forum/index....st&p=338080.

Here is a crop from the Sigma, before and after simple reconstruction by bicubic. (Note that sinc interpolation was not used for any of the following examples.) Clearly, before reconstruction the Sigma is showing all the defects usually attributed to the lack of an AA filter: jaggies, "false detail," blockiness like "tetris pieces," "grid effect," etc. After reconstruction those defects are almost completely eliminated, as we are now closer to seeing the analog truth within the image.

Finally, here are matching pairs of crops comparing the 50D to the SD14 after both have been interpolated to remove sampling artifacts: the Canon by 2x bicubic, and the Sigma by 3.33x (to bring it up to the same scale as the Canon). The Sigma does well considering it has no AA filter and less than 1/3 the pixels.
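The "simple reconstruction by bicubic" step amounts to a smooth enlargement before judging artifacts. A sketch with Pillow; the actual processing chain used for the crops above is not specified, and the function name and the choice of Pillow are mine:

```python
from PIL import Image

def reconstruct_view(im, factor):
    """Enlarge an image with a smooth (bicubic) reconstruction filter
    before judging it for 'aliasing', jaggies, or blockiness, so the
    display grid itself does not masquerade as sensor artifacts."""
    w, h = im.size
    return im.resize((round(w * factor), round(h * factor)), Image.BICUBIC)

# e.g. 2x for the 50D crop and 3.33x for the SD14 crop, to a common scale
```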




Cliff



