Pages: « 1 2 [3]
 Author Topic: Megapixels and print size, a small experiment  (Read 13050 times)
BartvanderWolf
Sr. Member

Offline

Posts: 3909

This approach could produce a simple rule of thumb, e.g. in my case something like 105 PPI at 1 metre distance, 52.5 PPI at 2 metres, 210 PPI at 50 cm, etc., and double that PPI for higher-contrast detail, Vernier acuity, and sharpening.

To expand a little further for an 'average' person (which I'm not ) with 1 arc minute acuity (20/20 vision), and for the theoretically best possible visual acuity of 0.4 arc minutes (due to the size limits of the human eye's cones):

Rule of thumb for the required PPI for viewing (large format) output at a given distance:
• 1 arc minute at 1 metre equals 87.32 PPI.
• 0.4 arc minutes at 1 metre equals 218.3 PPI.
Divide by the viewing distance in metres to find the required minimum PPI.

For the metrically challenged this would become:
• 1 arc minute at 1 foot equals 286.48 PPI.
• 0.4 arc minutes at 1 foot equals 716.2 PPI.
Divide by the viewing distance in feet to find the required minimum PPI.
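The rule of thumb follows directly from the geometry of the viewing angle, and is easy to sanity-check in a few lines of Python (the function name is my own, not from the thread):

```python
import math

def required_ppi(distance_m, acuity_arcmin=1.0):
    """Minimum PPI so that one pixel subtends `acuity_arcmin` arc minutes
    at `distance_m` metres viewing distance."""
    # Size subtended by the acuity angle at the viewing distance, in metres
    feature_m = distance_m * math.tan(math.radians(acuity_arcmin / 60.0))
    # Convert metres-per-pixel to pixels-per-inch (1 inch = 0.0254 m)
    return 0.0254 / feature_m

print(round(required_ppi(1.0), 2))       # 87.32  (1 arc minute at 1 m)
print(round(required_ppi(1.0, 0.4), 1))  # 218.3  (0.4 arc minutes at 1 m)
print(round(required_ppi(0.3048), 2))    # 286.48 (1 arc minute at 1 foot)
```

The computed values match the figures quoted above, and dividing by the viewing distance in metres (or feet) is equivalent to passing that distance as the first argument.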

This is before upsampling to the printer driver's native resolution (to avoid low quality printer driver interpolation, and to allow output sharpening at the printer's native output resolution).

I still recommend using twice that minimum PPI if one wants to exploit output sharpening to its maximum potential, and to make sure that the most critical high-contrast subjects (those requiring Vernier acuity) are rendered with very high quality.

Here is a nice demonstration of what Vernier acuity is capable of. The best I can achieve with that test is 0.02 pixels at half a metre viewing distance, which for my display resolution translates to some 3183 PPI at 1 metre, i.e. some 25x the display PPI, and some 14.6x the maximum spatial resolution limit of 0.4 arc minutes. Therefore, the above recommendation of using 2x the PPI that the rule of thumb suggests is not unrealistic.

Cheers,
Bart
 Logged
hjulenissen
Sr. Member

Offline

Posts: 1713

Quote
Here is a nice demonstration of what Vernier acuity is capable of. The best I can achieve with that test is 0.02 pixels at half a metre viewing distance, which for my display resolution translates to some 3183 PPI at 1 metre, i.e. some 25x the display PPI, and some 14.6x the maximum spatial resolution limit of 0.4 arc minutes. Therefore, the above recommendation of using 2x the PPI that the rule of thumb suggests is not unrealistic.
I had never heard of Vernier acuity before; thanks for the link.

Not sure that I understand the reasoning, though. The demo seems to indicate that a properly anti-aliased edge can be positioned to within subpixel accuracy. That is fully in line with sampling theory (i.e. reconstruction by a sinc shifted by subpixel amounts). How can this be used as an argument for higher resolution? It shows that if there were such edges in the scene, you would still be able to position them correctly to within subpixel accuracy, provided that good sampling/resampling was used throughout.
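A quick numerical sketch of what I mean (toy numbers, nothing rigorous): sample a smooth, anti-aliased edge on an integer pixel grid, reconstruct it with Whittaker-Shannon (sinc) interpolation, and the subpixel edge position falls right out:

```python
import numpy as np
from math import erf

true_edge = 10.37            # subpixel edge position, in pixel units
xs = np.arange(0, 41)        # coarse integer sample grid
# Smooth (anti-aliased) edge: effectively band-limited for a width of ~1.5 px
samples = np.array([0.5 * (1 + erf((x - true_edge) / 1.5)) for x in xs])

# Sinc reconstruction on a fine grid (Whittaker-Shannon interpolation)
fine = np.linspace(5, 15, 2001)
recon = np.array([np.sum(samples * np.sinc(t - xs)) for t in fine])

# Locate the 50% crossing of the reconstructed edge
i = np.argmin(np.abs(recon - 0.5))
print(f"recovered edge position: {fine[i]:.2f}")  # close to 10.37
```

The edge position is recovered to a few hundredths of a pixel from unit-spaced samples, which is the sampling-theory argument in a nutshell.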

-h
 Logged
BartvanderWolf
Sr. Member

Offline

Posts: 3909

I had never heard of Vernier acuity before; thanks for the link.

Not sure that I understand the reasoning, though.

Just have a look at a Vernier caliper scale. That should explain the concept. Human vision can resolve relative offsets between lines at more than 10x the resolution of simple spatial acuity.

Quote
The demo seems to indicate that a properly anti-aliased edge can be positioned to within subpixel accuracy. That is fully in line with sampling theory (i.e. reconstruction by a sinc shifted by subpixel amounts). How can this be used as an argument for higher resolution? It shows that if there were such edges in the scene, you would still be able to position them correctly to within subpixel accuracy, provided that good sampling/resampling was used throughout.

The point being that we need enough pixels to position the anti-aliased edges accurately, even if we can no longer resolve the details themselves with our eyes. Edges at an angle will look even less jagged. Our eyes can detect relative displacement at more than 10x finer resolution than the detail size itself.

Cheers,
Bart
 « Last Edit: March 15, 2013, 10:40:26 AM by BartvanderWolf » Logged
hjulenissen
Sr. Member

Offline

Posts: 1713

Just have a look at a Vernier caliper scale. That should explain the concept. Human vision can resolve relative offsets between lines at more than 10x the resolution of simple spatial acuity.
That part I got.
Quote
The point being that we need enough pixels to position the anti-aliased edges accurately, even if we can no longer resolve the details themselves with our eyes. Edges at an angle will look even less jagged. Our eyes can detect relative displacement at more than 10x finer resolution than the detail size itself.
Yes, but if we can detect "blobs" shifted at e.g. 1/10th pixel accuracy when captured and rendered using relatively large pixels, why does this tell us that we need to capture and render them using any smaller pixels? The filtering used in front of the camera sensels, in any image scaling, and in the printer/display rendering are all more or less crude approximations to the ideal filters (some cruder than others). This experiment does not prove (AFAIK) that the edge has to be rendered as an infinitely sharp edge, does it?

This reminds me of a debate about the supposed inadequacy of the CD format.

Audiophile: "humans are capable of detecting relative delay between right/left channels of less than 1/44100 second, therefore CD sucks".

Rationalist: "but the sampling theorem tells us that if one channel is delayed by a fraction of a sample, this can be recorded and recreated very accurately"
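The audio case is easy to verify numerically: delaying a band-limited tone by a fraction of a sample with a windowed-sinc interpolator reproduces the analytically delayed signal almost exactly (a toy sketch, not a rigorous proof):

```python
import numpy as np

fs = 44100.0
f = 1000.0                       # 1 kHz tone, well below fs/2
n = np.arange(512)
x = np.sin(2 * np.pi * f * n / fs)

# Fractional delay of 0.25 samples via a windowed-sinc interpolator
d = 0.25
taps = np.arange(-32, 33)
h = np.sinc(taps - d) * np.hamming(len(taps))   # shifted sinc, Hamming window
y = np.convolve(x, h, mode="same")

# Compare with the analytically delayed tone (ignoring filter edge regions)
ref = np.sin(2 * np.pi * f * (n - d) / fs)
err = np.max(np.abs(y[64:-64] - ref[64:-64]))
print(f"max error: {err:.2e}")   # small compared to the signal amplitude
```

The quarter-sample delay is recovered to well below audible accuracy, with no need for a higher sample rate.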

-h
 Logged
BartvanderWolf
Sr. Member

Offline

Posts: 3909

That part I got. Yes, but if we can detect "blobs" shifted at e.g. 1/10th pixel accuracy when captured and rendered using relatively large pixels, why does this tell us that we need to capture and render them using any smaller pixels?

Because real detail is more accurate than interpolated detail?

Quote
The filtering used in front of the camera sensels, in any image scaling, and in the printer/display rendering are all more or less crude approximations to the ideal filters (some cruder than others). This experiment does not prove (AFAIK) that the edge has to be rendered as an infinitely sharp edge, does it?

The experiment shows that Vernier acuity is more than 10x higher than normal spatial acuity (e.g. Landolt C). Sharpness is something else.

Quote
This reminds me of a debate about the supposed inadequacy of the CD format.

It reminds me more of the difference between a low pixel count sensor with or without AA-filter, or resampling with BiLinear or BiCubic filters. It's about accuracy.

Cheers,
Bart
 Logged
hjulenissen
Sr. Member

Offline

Posts: 1713

Because real detail is more accurate than interpolated detail?
Interpolation is required by the sampling theorem for recreating a general waveform. As long as the waveform is limited to <fs/2, it can, in principle, be perfectly recovered. Notions of "real detail" vs "interpolated detail" do not fit very well with my understanding of sampling theory.

If your "blob" is several pixels wide, then shifted (e.g. using a fractional-delay windowed sinc) by a subpixel amount, it can still be within fs/2, and accordingly its continuous counterpart can be perfectly estimated from its finite set of samples. Resampling it at a higher rate may be a work-around for the deficiencies of a particular system, but I see no general reason to do so.

-h
 Logged
BartvanderWolf
Sr. Member

Offline

Posts: 3909

Interpolation is required by the sampling theorem for recreating a general waveform. As long as the waveform is limited to <fs/2, it can, in principle, be perfectly recovered. Notions of "real detail" vs "interpolated detail" do not fit very well with my understanding of sampling theory.

Hi,

Then let's try this, as a practical example:

The image at the top shows 4 lines, 4, 3, 2, and 1 pixel wide, and is divided into 4 horizontal sections. The 2nd and 3rd sections from the top are each displaced by 1 pixel relative to the section above; the 4th is not, so the offset of the 3rd versus the 4th section is 2 pixels. I then downsampled the image to 25% of its original size and added it underneath on a gray background.

We can still see the displacements, which are by now one quarter and one half of a pixel. However, the 'one quarter pixel wide line' at the right is more accurate and doesn't seem to jump as much as the fatter lines. Also compare the 1-pixel-wide line at the top right of the upper image with the 1-pixel-wide line at the left of the bottom image. That's where the additional resolution can help. If we hadn't recorded the detail with an accuracy of 1/4th of a pixel, we could still see the displacement due to Vernier acuity, but it would look more jagged, less accurate.

That's why I suggest using some additional resolution capability, in order to better exploit the capabilities of the printer. That also opens the possibility of higher-quality sharpening. But we do not need much more than what we can resolve by eye at the minimum anticipated viewing distance.
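For anyone who wants to reproduce something like this numerically, here is a rough sketch of the idea (my actual test image and downsampling filter differ; a simple triangle low-pass stands in here, and the line is stored as intensity 1 on a 0 background to keep the arithmetic simple):

```python
import numpy as np

# A 1-pixel-wide vertical line whose position shifts by 1 pixel per
# horizontal section, then downsampled to 25%.
h, w, sec = 64, 64, 16
img = np.zeros((h, w))
for s in range(4):
    img[s * sec:(s + 1) * sec, 30 + s] = 1.0

# Proper 4x downsampling needs a low-pass first; a 7-tap triangle filter
# is roughly the right width for 4x decimation.
tri = np.array([1., 2., 3., 4., 3., 2., 1.]); tri /= tri.sum()
blurred = np.apply_along_axis(lambda r: np.convolve(r, tri, mode="same"), 1, img)
small = blurred[::4, ::4]

# The 1-pixel input shifts survive as 1/4-pixel shifts of the intensity
# "centre of mass", even though the output grid is 4x coarser:
coms = []
for s in range(4):
    row = small[s * 4, :]                 # one row from each section
    com = np.sum(np.arange(w // 4) * row) / np.sum(row)
    coms.append(com)
    print(f"section {s}: line centre ~ {com:.2f} output pixels")
```

The printed centres step by 0.25 output pixels per section: the subpixel position is encoded in the grey levels of the downsampled line, which is exactly what Vernier acuity lets us see.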

Cheers,
Bart
 « Last Edit: March 16, 2013, 07:39:58 AM by BartvanderWolf » Logged
hjulenissen
Sr. Member

Offline

Posts: 1713

Thank you for taking the time to demonstrate your point.

Now, the lines are not band-limited, and the resampling procedure is unknown. In practice, most cameras/lenses are not capable of producing such step functions. What happens if you band-limit the original image to fs/2, then resample using something like Lanczos2/3?

The display filtering (a boxcar filter for most of us) is one big deviation from ideal theory, unless you have a "retina"-branded display where the errors are shifted to frequencies where they probably do not matter. But I imagine that printers behave very differently: they have access to a (comparatively) continuous space in which to place the dots; only dynamic-range demands mean that they have to dither the quantization error over some area.

-h
 Logged
BartvanderWolf
Sr. Member

Offline

Posts: 3909

Thank you for taking the time to demonstrate your point.

Now, the lines are not band-limited, and the resampling procedure is unknown. In practice, most cameras/lenses are not capable of producing such step functions. What happens if you band-limit the original image to fs/2, then resample using something like Lanczos2/3?

Here is the original from the earlier post, and a band-limited version of it, which was then downsampled with Lanczos3:
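The procedure can be sketched in code; in this toy reconstruction (not the exact processing used for the posted images) the band-limiting happens implicitly, because the Lanczos3 kernel is stretched by the downsampling factor before decimation:

```python
import numpy as np

def lanczos3(x):
    """Lanczos3 kernel: sinc(x) * sinc(x/3) for |x| < 3, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def downsample(signal, factor):
    """Resample to len/factor using a Lanczos3 kernel stretched by `factor`
    (the stretch is what band-limits the signal before decimation)."""
    n_out = len(signal) // factor
    out = np.empty(n_out)
    src = np.arange(len(signal))
    for i in range(n_out):
        centre = (i + 0.5) * factor - 0.5      # output pixel centre, input coords
        wgt = lanczos3((src - centre) / factor)
        out[i] = np.sum(signal * wgt) / np.sum(wgt)
    return out

# A hard (not band-limited) edge, as in the test image above
edge = np.zeros(64); edge[32:] = 1.0
small = downsample(edge, 4)
print(np.round(small[6:10], 3))   # the edge lands on smooth intermediate values
```

The hard edge comes out of the 4x reduction as a gradual transition spanning a couple of output pixels, whose grey values encode the edge position, with the mild ringing that a Lanczos kernel produces on a step.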

Cheers,
Bart
 « Last Edit: March 16, 2013, 10:53:11 AM by BartvanderWolf » Logged