Author Topic: Resolution and aliasing challenge
BartvanderWolf
Sr. Member
Posts: 3746


« Reply #60 on: October 27, 2011, 06:11:53 PM »

I am a bit unsure how to interpret the results and how to share them. Cropped 1:1 TIF exports are ~2.6MB and there are 15 or so files.

You typically want to compare, say, 200x200-pixel center crops of the 'stars'. There you look for asymmetry of the blur in the center. As suggested, a high-quality JPEG is sufficient for sharing. Interpretation becomes much easier when you add a concentric circle of 92 pixels over the center of the star (e.g. in Photoshop I make a separate transparent layer with only a circle, by adding a stroke to a circular selection, and copy that layer to the other files).

When there is no asymmetry, there is no issue and you can measure the diameter in pixels of the central blur spot. The smaller the blur spot, the higher the resolution; it takes only a little defocus to increase the blur spot diameter. When approaching the center from the outside, there comes a point (the resolution limit) where the light/dark contrast becomes medium gray (or multicolored, or takes on a maze-like structure); sometimes you see that the lines switch places (change phase), and ultimately they deviate from their straight paths as hyperbolas (= aliasing). Inside the 92-pixel circle there can only be aliased information, even though it sometimes looks like the original data.
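To make the "approaching the center" behaviour concrete, here is a hedged sketch (not Bart's method): a synthetic sinusoidal star with a crude 5x5 box blur standing in for the lens + AA filter, where the modulation measured along a ring collapses as the ring radius, and thus the local cycle width in pixels, shrinks toward the center. The 36-cycle count and the blur kernel are illustrative assumptions, not the real target's parameters.

```python
import numpy as np

# Hypothetical star target: 36 cycles around the circumference (the real
# printed target has 144; fewer cycles keep the example small).
N_CYC = 36
SIZE = 401
c = SIZE // 2
yy, xx = np.mgrid[0:SIZE, 0:SIZE]
star = 0.5 + 0.5 * np.cos(N_CYC * np.arctan2(yy - c, xx - c))

# Crude stand-in for lens blur + AA filter: a separable 5x5 box blur.
k = 5
blurred = star
for axis in (0, 1):
    blurred = sum(np.roll(blurred, s, axis) for s in range(-(k // 2), k // 2 + 1)) / k

def ring_modulation(img, r, n=2048):
    """Amplitude of the N_CYC harmonic sampled along a circle of radius r."""
    ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ys = np.round(c + r * np.sin(ang)).astype(int)
    xs = np.round(c + r * np.cos(ang)).astype(int)
    # Project the ring samples onto the expected angular frequency.
    return 2 * np.abs(np.mean(img[ys, xs] * np.exp(-1j * N_CYC * ang)))

m_far = ring_modulation(blurred, 150)   # ~0.04 cycles/px: well resolved
m_near = ring_modulation(blurred, 12)   # ~0.48 cycles/px: near Nyquist
```

The outer ring retains most of its modulation while the inner ring, sampled near Nyquist, loses almost all of it; on a real capture the residue inside that radius is the aliased "original-looking" detail Bart describes.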

On a regular sensor grid, the diagonal resolution is commonly up to 30% higher than the horizontal/vertical resolution, except for some Fuji cameras, which have a rotated sensor grid.

Cheers,
Bart
hjulenissen
Sr. Member

Posts: 1683


« Reply #61 on: October 30, 2011, 03:08:26 PM »

Camera: Canon EOS 7D
ISO speed: 100
Shutter: 1/1.3 sec
Aperture: f/7.0
Focal length: 116.0 mm
Lens: Canon 70-200mm f/4.0L IS

Image read using dcraw with no demosaic (-D), linear 16-bit output (-4), written as TIFF (-T):
>>dcraw -D -4 -T IMG_4315-1.CR2

Applied black-point/white-point assuming a flat spectrum. Estimated the circle center manually and drew a circle around it at 92 pixel diameter. Included images are a central crop and a scaled-down version of the entire image. It is evident now that I had uneven lighting (light source at lower left).

I am kind of curious whether the PSF can be estimated directly when you have the analytic function that generated the reference output plus the sensor reading. With a good estimate of the PSF under optimally focused conditions for a given camera and a set of lenses, one might speculate about how its AA-filter works and how ideal images should optimally be sharpened.

-h
« Last Edit: October 30, 2011, 03:16:25 PM by hjulenissen »
BartvanderWolf
Sr. Member

Posts: 3746


« Reply #62 on: October 30, 2011, 07:36:24 PM »

Quote
Camera: Canon EOS 7D
ISO speed: 100
Shutter: 1/1.3 sec
Aperture: f/7.0
Focal length: 116.0 mm
Lens: Canon 70-200mm f/4.0L IS

Image read using dcraw with no demosaic, no gamma, 16 bits:
>>dcraw -D -4 -T IMG_4315-1.CR2

Hi -h,

This means that the lens was used at an aperture where resolvable detail is restricted by the diffraction of a nominal f/7.1 aperture.

The Raw conversion settings kept the image in linear (1.0 gamma) 16-bit/channel space, but without white balancing.

Quote
Applied black-point/white-point assuming a flat spectrum. Estimated the circle center manually and drew a circle around it at 92 pixel diameter. Included images are a central crop and a scaled-down version of the entire image. It is evident now that I had uneven lighting (light source at lower left).

I'll assume that this black/white-point setting took care of the white balancing.

What I don't get is where the elliptical distortion comes from. Hence, I can't comment on the suggested better-than-Nyquist performance.

Quote
I am kind of curious whether the PSF can be estimated directly when you have the analytic function that generated the reference output plus the sensor reading. With a good estimate of the PSF under optimally focused conditions for a given camera and a set of lenses, one might speculate about how its AA-filter works and how ideal images should optimally be sharpened.

Well, you 'know' the input signal (the print of the target) and the output signal, so it's possible to make a model that derives the PSF needed to turn one into the other. Whether that is of any use depends on the tools one has to reconstruct the original input signal based on the PSF: an application that accepts a user-supplied PSF is required to exploit such info.
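As a generic sketch of that known-input/known-output idea (not Bart's proprietary method): if the ideal image and the aligned observation are both available, a regularized spectral division recovers a PSF estimate. The random "ideal" image, the Gaussian PSF and the epsilon value are all illustrative assumptions.

```python
import numpy as np

# With an aligned ideal image and its blurred observation,
# observed = ideal (convolved with) psf, so dividing spectra estimates the PSF.
rng = np.random.default_rng(1)
ideal = rng.standard_normal((64, 64))

# Fabricate the observation with a known Gaussian PSF (circular convolution,
# so the FFT convolution identity is exact here).
yy, xx = np.mgrid[-32:32, -32:32]
psf_true = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf_true /= psf_true.sum()
observed = np.real(np.fft.ifft2(np.fft.fft2(ideal) *
                                np.fft.fft2(np.fft.ifftshift(psf_true))))

# Wiener-style estimate: output spectrum over input spectrum, with a small
# epsilon guarding near-zero frequencies of the input.
F_in, F_out = np.fft.fft2(ideal), np.fft.fft2(observed)
psf_est = np.fft.fftshift(np.real(
    np.fft.ifft2(F_out * np.conj(F_in) / (np.abs(F_in)**2 + 1e-6))))
```

On real data the epsilon (or a full noise-spectrum term) matters much more, since sensor noise rather than numerical round-off dominates the weak frequencies.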

What IMHO is probably the easiest approach to derive the PSF is to use the slanted-edge features of the target, in order to approximate the horizontal/vertical Edge Spread Functions (ESFs). The slanted edges of the printed target need to be calibrated (using the gray scale) to the (presumed linear) gamma of your image. Then one needs to model a PSF that reproduces the observed slope of the oversampled ESFs, which could lead to a 2-dimensional model of the PSF that's reasonably close to reality, assuming there is no motion blur involved.
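The oversampled-ESF step is the standard ISO 12233-style trick, which can be sketched generically (this is not Bart's specific method; the 5-degree tilt and Gaussian system blur are assumptions for the synthetic edge):

```python
import numpy as np
from math import erf

# A slightly tilted edge lets each row see the edge at a different sub-pixel
# phase, so binning pixels by their distance to the edge line yields an
# oversampled Edge Spread Function (ESF).
H = W = 64
tilt = np.deg2rad(5.0)                       # assumed edge tilt
yy, xx = np.mgrid[0:H, 0:W].astype(float)
# Signed distance of each pixel center from the edge line through the middle.
dist = (xx - W / 2) * np.cos(tilt) - (yy - H / 2) * np.sin(tilt)

sigma = 1.2                                  # assumed Gaussian system blur
edge = 0.5 * (1.0 + np.vectorize(erf)(dist / (sigma * np.sqrt(2))))

# Bin onto a 4x oversampled grid along the edge normal (central strip only).
OS = 4
mask = np.abs(dist) < 8
bins = np.floor(dist[mask] * OS).astype(int)
bins -= bins.min()
esf = np.bincount(bins, weights=edge[mask]) / np.bincount(bins)

# The Line Spread Function (a 1-D slice of the PSF) is the ESF derivative.
lsf = np.diff(esf)
```

Doing this for a horizontal and a vertical edge gives the two 1-D slices from which a 2-D PSF model can be fitted, as described above.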

I'm sorry that I can't reveal more explicit details (although all disclosed details were pertinent to the solution), but it took me a while to figure things out, and it resulted in a proprietary method that I probably want to patent/commercialise.

Cheers,
Bart
Fine_Art
Sr. Member

Posts: 1094


« Reply #63 on: October 30, 2011, 08:48:34 PM »

Quote
Camera: Canon EOS 7D
ISO speed: 100
Shutter: 1/1.3 sec
Aperture: f/7.0
Focal length: 116.0 mm
Lens: Canon 70-200mm f/4.0L IS

Image read using dcraw with no demosaic, no gamma, 16 bits:
>>dcraw -D -4 -T IMG_4315-1.CR2

Applied black-point/white-point assuming a flat spectrum. Estimated the circle center manually and drew a circle around it at 92 pixel diameter. Included images are a central crop and a scaled-down version of the entire image. It is evident now that I had uneven lighting (light source at lower left).

I am kind of curious whether the PSF can be estimated directly when you have the analytic function that generated the reference output plus the sensor reading. With a good estimate of the PSF under optimally focused conditions for a given camera and a set of lenses, one might speculate about how its AA-filter works and how ideal images should optimally be sharpened.

-h

What is this? There is no way the blue circle is 92 pixels in diameter. If you are getting clean lines all around inside of 92 pixels, it is the angular equivalent of saying you are getting more than 5000 lines of resolution on a sensor 5000 pixels wide. It is nonsense; something is wrong with the test.
hjulenissen
Sr. Member

Posts: 1683


« Reply #64 on: October 31, 2011, 01:16:18 AM »

92 pixels was the radius, not the diameter. My bad.
Cem
Newbie

Posts: 28



« Reply #65 on: October 31, 2011, 03:36:23 AM »

Hi -h,

Quote
92 pixels was the radius, not the diameter. My bad.
Yes indeed. The idea behind the 92 pixel diameter is that it gives roughly 288 pixels around the circumference (π × diameter). Since the target contains 144 cycles, the Nyquist limit of 0.5 cycles per pixel corresponds to exactly 288 pixels of circumference.

BTW, at what distance did you shoot the target? The distance should have been 25-50 times the focal length, so I would recommend shooting at around 3 meters for this focal length of 116mm. However, looking at the pixel dimensions of the target in the whole image you have posted, I have the impression that your shooting distance was much closer. Is that so? I might be overlooking something, of course.
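The arithmetic behind the 92-pixel circle and the distance recommendation can be checked quickly (a trivial sketch of the numbers stated above):

```python
import math

# The target carries 144 cycles around its circumference; at the sensor's
# Nyquist limit of 0.5 cycles/pixel those need 288 pixels of circumference.
cycles = 144
nyquist = 0.5                       # cycles per pixel
circumference = cycles / nyquist    # 288 pixels
diameter = circumference / math.pi  # ~91.7 -> the "92 pixel" circle

# Recommended shooting distance: 25-50x the focal length.
focal_mm = 116.0
d_min_m = 25 * focal_mm / 1000      # 2.9 m
d_max_m = 50 * focal_mm / 1000      # 5.8 m
```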
Kind Regards,

Cem
hjulenissen
Sr. Member

Posts: 1683


« Reply #66 on: October 31, 2011, 04:17:45 AM »

Quote
Hi -h,
Yes indeed. The idea behind the 92 pixel diameter is that it gives roughly 288 pixels around the circumference (π × diameter). Since the target contains 144 cycles, the Nyquist limit of 0.5 cycles per pixel corresponds to exactly 288 pixels of circumference.
So in the figures I posted, one should really think of a circle at half the radius I drew (that is what I have done now).
Quote
BTW, at what distance did you shoot the target? The distance should have been 25-50 times the focal length, so I would recommend shooting at around 3 meters for this focal length of 116mm. However, looking at the pixel dimensions of the target in the whole image you have posted, I have the impression that your shooting distance was much closer. Is that so? I might be overlooking something, of course.
I estimate the distance to be ~3-4 meters. I do have a 1.6x crop sensor.

-h
hjulenissen
Sr. Member

Posts: 1683


« Reply #67 on: October 31, 2011, 04:24:16 AM »

Quote
This means that the lens was used at an aperture where resolvable detail is restricted by the diffraction of a nominal f/7.1 aperture.
I shot all of my lenses at f/7.1, f/5.6 and the largest aperture available. I have so far only looked into this file, which was chosen because it had the most recent time-stamp :-)
Quote
The Raw conversion settings kept the image in linear (1.0 gamma) 16-bit/channel space, but without white balancing.

I'll assume that this black/white-point setting took care of the white balancing.
I wanted dcraw for its file-format decoding but nothing else. Then I found a suitable "black" spot in the image and a suitable "white" spot. I took smoothed readings of red, green1, green2 and blue in the black vs. the white region, and then applied a subtraction and a division to the raw Bayer sensel values to stretch the channel-dependent nominal integer range from 2048 to 6600 into a nearly channel-independent range from 0 to 1. Of course, there are some remaining spots where one can see remnants of the Bayer pattern, but judging from my histograms, it seems fairly OK.
Quote
What I don't get is where the elliptical distortion comes from. Hence, I can't comment on the suggested better-than-Nyquist performance.
See my other posts - I got a factor of two wrong.

-h
hjulenissen
Sr. Member

Posts: 1683


« Reply #68 on: October 31, 2011, 02:52:18 PM »

To make things clear: when I said "blackpoint" and "whitepoint", what I meant was that I made a 4-element vector based on the values of red, green1, green2 and blue in manually selected "black" and "white" patches, and from that I found what to subtract from, and divide, the color channels by to make them span the nominal range 0...1. Due to uneven illumination some parts end up slightly above 1. Due to specular reflections in the Scotch tape, some parts are a lot higher - I am clipping those. For instance:
rg1g2b_sub = [2187 2235 2219 2132]
rg1g2b_div = [4491 5580 5614 2097]
out_image  = (in_image - rg1g2b_sub) ./ rg1g2b_div   % element-wise, per Bayer channel
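The same per-channel stretch can be written as a small Python/numpy sketch; the RGGB layout is an assumption (the real channel order depends on the camera's CFA), and `mosaic` stands in for the undemosaiced dcraw output:

```python
import numpy as np

# Fake undemosaiced Bayer data in the camera's nominal integer range.
rng = np.random.default_rng(0)
mosaic = rng.integers(2048, 6600, size=(8, 8)).astype(float)

sub = np.array([2187.0, 2235.0, 2219.0, 2132.0])  # black levels (R, G1, G2, B)
div = np.array([4491.0, 5580.0, 5614.0, 2097.0])  # white-minus-black ranges

out = np.empty_like(mosaic)
# Each channel occupies one position of the repeating 2x2 Bayer cell (RGGB assumed).
for ch, (dy, dx) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    out[dy::2, dx::2] = (mosaic[dy::2, dx::2] - sub[ch]) / div[ch]
```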

The first attachment shows one image histogram after this adjustment.

The other 3 attachments show a crop of the central part of the test pattern with an updated circle of 92 pixels diameter at f/4.0, f/5.6 and f/7.1. I see now that the image centre is somewhat off in the f/4 and f/5.6 images.

I think that focusing was quite difficult. Perhaps the best thing to do would be to use remote-control software, zoom in, and step the focus motor one step at a time?
« Last Edit: October 31, 2011, 02:54:01 PM by hjulenissen »
hjulenissen
Sr. Member

Posts: 1683


« Reply #69 on: October 31, 2011, 02:59:39 PM »

For the heck of it, here is the f/4.0 in-camera jpeg (compare with the first crop in my previous post).
Fine_Art
Sr. Member

Posts: 1094


« Reply #70 on: October 31, 2011, 03:30:55 PM »

There is no way that is an in-camera JPEG either: the pixels have different sizes! You have a magical camera that can change the dimensions of its pixels? I think not. What is it really?
hjulenissen
Sr. Member

Posts: 1683


« Reply #71 on: October 31, 2011, 04:16:17 PM »

Quote
There is no way that is an in-camera JPEG either: the pixels have different sizes! You have a magical camera that can change the dimensions of its pixels? I think not. What is it really?
Not sure what you mean, but I can assure you that I have (to the best of my abilities) provided what I claim to have provided. User error is always a possibility...

The in-camera JPEG is 5184x3456 pixels; the raw file is 5202x3464 as provided by dcraw. I crop a rectangle at the same offset from the upper-left corner (1,1) of both (1145:2762, 1611:3186), which explains the visible offset between the files, render it to the display at some resolution (nearest-neighbour resampling), and save the result as a PNG.

-h
« Last Edit: October 31, 2011, 04:18:28 PM by hjulenissen »
Fine_Art
Sr. Member

Posts: 1094


« Reply #72 on: October 31, 2011, 04:38:48 PM »

If you zoom in on the image there are square pixels, flat rectangular pixels and tall rectangular pixels. I have never seen an image program do that before; I have never used dcraw, so I am ignorant of its abilities.
