Author Topic: Naked sensor  (Read 10469 times)
opgr (Sr. Member, Posts: 1125)
« Reply #40 on: February 19, 2012, 01:26:08 AM »


Okay, thanks.

Attached _DSC0963a: the actual RAW data rendered as grayscale, with contrast and brightness adjusted for easier viewing.

As you can see in that image, there is nothing to suggest the sensor is to blame for the fabricated detail. Having said that, I have to admit it is indeed LR fabricating the errors, possibly because of the relatively dark exposure. The data also seems slightly out of focus in that part of the image, which one might expect to help mitigate aliasing, but in this case sharper is probably better...

Attached _DSC0963b: for reference, a more adequate RAW conversion through a proprietary algorithm, which shows that it is certainly not necessary to fabricate that much detail, and that the soft aliasing can be preserved. The algorithm works equally well on natural images as on these types of test charts.

In that respect it is interesting to see how DxO performs, because that converter works well on artificial test charts but hopelessly messes up natural images.

Regards,
Oscar Rysdyk
theimagingfactory

ErikKaffehr (Sr. Member, Posts: 7406)
« Reply #41 on: February 19, 2012, 01:42:24 AM »

Hi,

Your conversion is better than mine; it even renders the fine lettering under the image readably.

On the other hand, your conversion also contains detail above, say, 120 lp/mm: a line pattern of decreasing frequency, which in my view is a clear example of fake detail, since the test target only increases in frequency, up to 200 lp/mm.

Best regards
Erik


Quote from: opgr on February 19, 2012, 01:26:08 AM

opgr (Sr. Member, Posts: 1125)
« Reply #42 on: February 19, 2012, 03:02:06 AM »

Quote from: ErikKaffehr on February 19, 2012, 01:42:24 AM



I suppose you could call it "fake detail". The data clearly shows moiré: an undulating pattern of increasing frequency that flips to an undulating pattern of decreasing frequency once the Nyquist frequency is reached.
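The fold-back at Nyquist described here is just sampling arithmetic, and can be sketched numerically (the sampling rate of 100 samples/mm is a made-up stand-in for the sensor pitch):

```python
fs = 100.0  # hypothetical sampling rate in samples/mm, i.e. Nyquist = 50 lp/mm

def alias(f):
    """Apparent frequency of a sinusoid of true frequency f sampled at fs."""
    f = f % fs             # sampling cannot distinguish f from f + k*fs...
    return min(f, fs - f)  # ...nor from fs - f (the spectral mirror)

# A chart sweeping upward in frequency appears to sweep up, then back
# down again once it crosses the Nyquist limit:
for f in (30, 45, 55, 70):
    print(f, "->", alias(f))  # 30->30, 45->45, 55->45, 70->30
```

Frequencies above 50 lp/mm come back as their mirror image below it, which is exactly the decreasing line pattern Erik pointed out.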

But I have always thought of images as having a kind of "flow", for lack of a better term. In a picture of a tree, for example, the flow of edges runs from the trunk, along the branches, to the twigs and the leaves. Or the feathers of a bird, perfectly combed and groomed. There is a certain directionality along which the eye expects to find edges, and deviations from it may be detectable even at a subconscious level. So as long as the "faked" detail at least has reasonably correct directionality, it may improve the overall viewing experience while also improving apparent detail.

This is opposed to software that makes significant errors in relatively large detail, like Canon's DPP. That creates a noisy, visible (or at least detectable) grid in the flow, and makes sharpening in post-processing a nightmare.

Perhaps there is a more reasonable compromise for anti-aliasing filters: something that lets the image reconstruction recover the directionality while not creating excessive contrast edges. As a RAW converter writer, I don't want the anti-aliasing filter to try to cope with color aliasing:

1) You can get away with a lot of softening of the color information, which allows anti-aliasing in post-processing.
2) Detail errors occur primarily in the green channel, and only then translate into the color channels.

Instead I want it to cope with the lack of continuous samples in green, as follows:

For every green pixel, you would want to sample a slightly larger area that includes part of the direct neighbors: typically a square rotated 45 degrees that exactly encompasses the original green pixel, but no larger. My gut feeling tells me that would be the optimal anti-aliasing blur. The actual physics involved probably won't allow such sampling, though. I believe this is one of the reasons the original Fuji Super CCD worked so well, as opposed to the new layout they came up with, which is not going to solve anything and will likely introduce unwanted color artifacts. (I suspect they will soon find all kinds of color smearing on thin dark lines against light backgrounds: trees against a sky, colored text on an opposing color background...)
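As a software stand-in for that rotated-square aperture (for illustration only, not how an optical filter actually works): the sqrt(2) x sqrt(2) diamond fully covers the centre pixel plus a quarter of each of its four edge neighbours, so after normalising by the covered area of 2 it acts like a small convolution kernel. A sketch, using a plain 2-D array to stand in for the green plane:

```python
import numpy as np

# Area-weighted kernel of a square rotated 45 degrees with diagonal 2:
# it fully covers the centre pixel and a quarter of each edge neighbour.
diamond = np.array([[0.0,  0.25, 0.0],
                    [0.25, 1.0,  0.25],
                    [0.0,  0.25, 0.0]]) / 2.0   # total covered area = 2

def diamond_blur(img):
    """Convolve a 2-D array with the diamond aperture (edges clamped)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += diamond[dy, dx] * padded[dy:dy + img.shape[0],
                                            dx:dx + img.shape[1]]
    return out

flat = np.full((5, 5), 10.0)
print(diamond_blur(flat)[2, 2])   # -> 10.0 (a flat field stays flat)
```

The kernel weights sum to 1, so the blur preserves overall brightness while softening exactly the diagonal-neighbour transitions that cause green-channel aliasing.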







Regards,
Oscar Rysdyk
theimagingfactory

Dale Villeponteaux (Full Member, Posts: 191)
« Reply #43 on: February 19, 2012, 02:22:09 PM »

As ignorant as I am, I should be barred from reading technical threads.  Wasn't it one of Walt Kelly's "Pogo" characters who said, "Sometimes I don't understand you.  And I'm glad!"  My head is swimming; I'm going to lie down.

At least I'm clearer about the extent of my ignorance.  Thanks.
Dale V.

A modest man, with much to be modest about

opgr (Sr. Member, Posts: 1125)
« Reply #44 on: February 19, 2012, 03:45:07 PM »

As usual I was too lazy to produce a picture and started writing my obligatory 1000 words. Obviously that requires even more energy, so I stopped short at 20 or so. So here is a picture instead.

If the green squares represent the green pixels to be captured, then I would like to actually capture the equivalent of the dotted squares.

Light capture in a camera at that scale is probably some complex round Gaussian function, and I have no idea what is involved. It may well be, for example, that an anti-aliasing filter simply can't be scaled that precisely, so you may have to choose between "no anti-aliasing" and "a little too much". But purely on intuition I would consider this the optimal amount of blurring for the specific task of debayering.

Regards,
Oscar Rysdyk
theimagingfactory

opgr (Sr. Member, Posts: 1125)
« Reply #45 on: February 19, 2012, 03:55:19 PM »

And an example of flow:

Attached: a bird's feathers rendered with Canon's DPP. You can clearly see the grittiness of incorrect directional interpolation.

Regards,
Oscar Rysdyk
theimagingfactory

hjulenissen (Sr. Member, Posts: 1678)
« Reply #46 on: February 19, 2012, 03:57:26 PM »

Quote from: opgr on February 19, 2012, 03:45:07 PM
Would that be your ideal green-channel blurring, or your ideal blurring for all channels?

Does the figure suggest a rotated 2x2 rectangular filter? I would think a smooth response would be preferable: either a Gaussian or some kind of windowed sinc.

-h

opgr (Sr. Member, Posts: 1125)
« Reply #47 on: February 19, 2012, 04:03:24 PM »

And the res chart for Canon DPP faithfully shows where that grittiness originates.
Also note the edges and corners of the black square.

Regards,
Oscar Rysdyk
theimagingfactory

opgr (Sr. Member, Posts: 1125)
« Reply #48 on: February 19, 2012, 04:13:44 PM »

Quote from: hjulenissen on February 19, 2012, 03:57:26 PM

My ideal for just the green channel, as I think color aliasing is far less of a problem and more easily dealt with in the RAW converter, because you can get away with a lot of blurring of the color information before it becomes objectionable.

I would prefer a single black dotted square (size = sqrt(2) x sqrt(2)); the pattern was created to show the fit.

Normally, when scanning truly adjacent pixels, you would likely have some overlapping Gaussian function. (I suppose it would be the equivalent of "umfelt" in the old Chromagraph scanners.) I don't know what the ideal amount is, but I would like to multiply that ideal spread by sqrt(2).
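Worth noting about the sqrt(2) scaling: because Gaussian variances add under convolution, widening the spread by sqrt(2) is equivalent to applying the original Gaussian a second time. A numerical sketch:

```python
import numpy as np

def gauss1d(sigma, radius=8):
    """Discrete, normalised 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

sigma = 1.0
wider = gauss1d(sigma * np.sqrt(2.0))                # spread scaled by sqrt(2)
twice = np.convolve(gauss1d(sigma), gauss1d(sigma))  # original blur, applied twice

# Variances add under convolution: sigma^2 + sigma^2 = (sqrt(2)*sigma)^2,
# so the two kernels agree to within sampling/truncation error.
mid = len(twice) // 2
print(np.allclose(wider, twice[mid - 8:mid + 9], atol=1e-4))  # True
```

So "multiply the ideal spread by sqrt(2)" can also be read as "blur by the ideal amount twice", which may be easier to realise optically.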

Regards,
Oscar Rysdyk
theimagingfactory

hjulenissen (Sr. Member, Posts: 1678)
« Reply #49 on: February 20, 2012, 01:45:30 AM »

Quote from: opgr on February 19, 2012, 04:13:44 PM
Do you know the sensel fill factor with and without microlenses? I don't think you can automatically assume that a single sensel captures light across the whole area suggested by the pixel pitch.

Even with a square sensor PSF and a very good lens, you would still get "jagged edges" and other artifacts attributable to improper pre-filtering.

-h

opgr (Sr. Member, Posts: 1125)
« Reply #50 on: February 20, 2012, 04:48:25 AM »

Quote from: hjulenissen
Do you know the sensel fill factor with and without microlenses? I don't think you can automatically assume that a single sensel captures light across the whole area suggested by the pixel pitch.

I don't think so either. Even the microlenses may not fill the entire area of the pixel pitch, and if they do, they still can't provide an optimal Gaussian spread. So you need some kind of additional filter on top of the sensor that spreads the light by some optimal amount.

Now, I am presuming there is always some kind of filter stack on top of the sensor, such as an infrared filter and/or an anti-glare layer, so there is at least a minimal amount of spread. The questions under consideration are these:

1) how much spread is optimal?

2) is an anti-aliasing filter able to provide that specific spread?


Way back in my advertising prepress days, we had specialized scanner operators for B&W newspaper ads. The one thing that set them apart from the other scanner operators was their specific skill in controlling "umfelt". The best scanner operators in general were the ones who knew how to select the correct umfelt for both the input and the output medium, which happens to be one of those skills that no stinking 10,000 hours of practice is going to teach you. Or maybe it would, but you can't expect someone to have that many hours under their belt before becoming a skilled craftsman. So only the most talented scanner operators did the newspaper imagery.

The best operator incidentally happened to be a woman, and if it is such a hard-to-find intuitive skill, then maybe that explains why it is so hard to decide what the optimal anti-aliasing filter should or should not do in an even more complex context. It may well be that the more technically inclined scientists working on these problems aren't the right people to judge the resulting pixels in the end product. But given the discussions and opinions among professional photographers, I am not sure they are equipped to give the scientists the correct feedback either.

Regards,
Oscar Rysdyk
theimagingfactory

NikoJorj (Sr. Member, Posts: 1063)
« Reply #51 on: February 20, 2012, 06:07:35 AM »

I recall doing some resolution comparisons with those two cameras some years ago, and came to the conclusion that any increase in pixel count of less than 50% is probably not worth bothering with; therefore I would never upgrade a camera based solely on an increase in pixel count of less than 50%, and perhaps not even 50%.
Aaaamen!
See also http://www.luminous-landscape.com/essays/sensor-design.shtml in the same vein.

Nicolas from Grenoble
A small gallery

hjulenissen (Sr. Member, Posts: 1678)
« Reply #52 on: February 20, 2012, 07:56:11 AM »

Quote
1) how much spread is optimal?
Image scaling is really similar to image sampling, except that it all takes place in software, so the choice of reconstruction filter is very flexible. I believe it is generally accepted that a lanczos2/lanczos3 kernel is close to the ideal trade-off among linear kernels.

Of course there may be other linear kernels that are slightly better, and for every source/destination pair there is probably a specific kernel that is better still. Finally, non-linear operations are a superset of linear operations, so expect some (possibly small) gain by venturing there.
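For reference, the lanczos-a kernel mentioned here is just a sinc windowed by a wider sinc; a minimal sketch:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a kernel: sinc(x) windowed by sinc(x/a), zero outside |x| < a."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)  # np.sinc is the normalised sin(pi*x)/(pi*x)
    return np.where(np.abs(x) < a, out, 0.0)

print(lanczos(0.0))  # -> 1.0: full weight at the sample itself
print(lanczos(1.0))  # zero (up to rounding) at every other integer offset
```

Because the kernel is 1 at the sample and vanishes at all other integer offsets, an unscaled image passes through unchanged; between samples it approximates the ideal sinc while keeping a finite support of 2a taps.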
Quote
2) is an anti-aliasing filter able to provide that specific spread?
No. To my knowledge they are limited to linear functions, positive-only coefficients, and a limited set of PSFs. The Nikon PR material suggests that the D800 has an AA filter whose PSF looks like:
x x
x x
In other words, the image is doubled spatially, once vertically, then horizontally.
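A digital stand-in for that four-spot PSF, to make the effect concrete (the equal 0.25 weights are my assumption of an even beam split, and of course the real filter acts on the light before sampling, not on pixels):

```python
import numpy as np

def aa_filter(img):
    """Apply a four-spot beam-splitter PSF: each point is spread into a
    2x2 cluster of quarter-intensity copies (edge rows/cols clamped)."""
    padded = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    return 0.25 * (padded[:-1, :-1] + padded[:-1, 1:] +
                   padded[1:, :-1] + padded[1:, 1:])

# A single bright pixel becomes four 0.25s in a 2x2 cluster:
spot = np.zeros((4, 4)); spot[1, 1] = 1.0
print(aa_filter(spot))
```

This is simply a 2x2 box convolution, which nulls the response exactly at the horizontal and vertical Nyquist frequencies while passing lower frequencies with only mild attenuation.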

Quote from: opgr on February 20, 2012, 04:48:25 AM
I think it is very interesting to compare the knowledge of two such related but slightly different domains.

-h
« Last Edit: February 20, 2012, 07:59:09 AM by hjulenissen »

ndevlin (Sr. Member, Posts: 511)
« Reply #53 on: February 20, 2012, 08:32:36 PM »


Why don't we reason this one backwards: Nikon has spent an enormous amount of R&D money and a similar amount of production phasing/planning/implementation money, to produce an identical product to the mass-market D800, but for the lack of the AA filter.

The D800E commands a paltry $300 premium.

How many people will buy a D800E who would not have bought a D800 if the D800E had never existed or been mentioned?  Reason tells me that's a fairly low number. 

So...Nikon has done something expensive, for a limited market gain, at a modest price premium. 

Now does anyone really think Nikon did this without having seen a compelling visual results case for it?

Unless the accounting department at Olympus is now running product development group at Nikon, the only logical conclusion is that Nikon saw sufficiently compelling results to produce two variants, trusting that real-world users would see enough difference to buy a lot of 800Es to make the exercise worthwhile.

The testing sure will be fun :-) 

- N.

Nick Devlin   @onelittlecamera

BernardLanguillier (Sr. Member, Posts: 7975)
« Reply #54 on: February 20, 2012, 08:44:37 PM »

Quote from: ndevlin
Unless the accounting department at Olympus is now running product development group at Nikon, the only logical conclusion is that Nikon saw sufficiently compelling results to produce two variants, trusting that real-world users would see enough difference to buy a lot of 800Es to make the exercise worthwhile.

Another daring (and complementary) theory might be that they simply listened to their customers?  Grin

Things never happen by chance in Japan. The release by Pentax, Ricoh, Fuji and now Nikon of devices without an AA filter in a very short time frame is probably not a coincidence. Although I have no proof of this, there is a high probability that these companies decided a few years back to join forces on research into AA-filter-less sensors and the image processing they require.

Past public conversations with Nikon people make me think that some inside the company are very strongly opposed to AA-filter-less devices, because they go against the core Nikon philosophy since the beginning of digital: to make digital as transparent as possible for users coming from film, meaning a digital experience devoid of issues that were not present with film. They are really concerned about the support cost of dealing with users complaining about digital artifacts.

So the reason they decided to deliver both options with the D800 is probably that they could not reach internal agreement on a single option... and they did have the means to offer both to their customers.

Cheers,
Bernard

A few images online here!

BJL (Sr. Member, Posts: 5129)
« Reply #55 on: February 20, 2012, 09:31:53 PM »

From a crass ROI perspective, all it would take to justify Nikon's decision to offer the D800E is evidence of sufficient demand at a sufficiently high price, without Nikon needing to know or care whether that demand is rationally based. A purist rejection of any resolution loss, regardless of side effects on IQ, is at least a plausible source of such demand.

I will again try to turn this around: Nikon and all other DSLR makers routinely go to the extra expense of fitting AA filters. What does that say about their judgement of the "compelling visual case" for having one?

ErikKaffehr (Sr. Member, Posts: 7406)
« Reply #56 on: February 20, 2012, 10:38:14 PM »

Hi,

What surprises me is that they essentially seem to use a dual layer of birefringent material, with the second layer inverting the effect of the first, instead of just removing both layers. I presume they want to keep the thickness and composition of the optical package in front of the sensor the same.

Best regards
Erik


Quote from: BJL on February 20, 2012, 09:31:53 PM

BJL (Sr. Member, Posts: 5129)
« Reply #57 on: February 21, 2012, 03:12:05 PM »

Quote from: ErikKaffehr
What surprises me is that they essentially seem to use a dual layer of birefringent material, with the second layer inverting the effect of the first, instead of just removing both layers. I presume they want to keep the thickness and composition of the optical package in front of the sensor.
Yes, that is the only sense I can make of this unique approach to the absence of low-pass filtering: it avoids the need for different versions of other components that were designed with the other optical effects of those birefringent layers taken into account.

ErikKaffehr (Sr. Member, Posts: 7406)
« Reply #58 on: February 22, 2012, 12:04:39 AM »

Hi,

I guess it does not take much research effort to rotate a slice of a lithium niobate monocrystal 90 degrees and flip sides ;-)

Doing what Nikon does may be smart, but it doesn't require enormous R&D. Some things do take a lot of R&D, lenses for sure; I have great respect for Nikon and the others developing all the different technologies in our cameras. I'm especially impressed by the camera ASICs that can process 4-10 images per second. Try to do that on a hex-core PC!

I'm a little bit tired of blaming everything on R&D. Prices are set by supply and demand, and by what customers are willing to pay.

Best regards
Erik

Quote from: ndevlin on February 20, 2012, 08:32:36 PM

« Last Edit: February 22, 2012, 09:55:05 PM by ErikKaffehr »

hjulenissen (Sr. Member, Posts: 1678)
« Reply #59 on: February 22, 2012, 02:48:07 AM »

Quote from: ErikKaffehr
I'm especially impressed by the camera ASICs that can process 4-10 images per second. Try to do that on a hex-core PC!
I imagine that would be possible, especially if you assume that most high-powered systems include a reasonably flexible, powerful GPU.

The reason PC software appears slower may be that the demand there is for slightly higher quality and flexibility, or lower development time, rather than extreme frame rates.

Now, running a hex-core PC with a 150W Nvidia GPU off a regular DSLR battery while processing 10 raw frames per second would be hard.

-h
