Author Topic: D800 hyperbole  (Read 24216 times)
Christoph C. Feldhaim
Sr. Member

Posts: 2509


There is no rule! No - wait ...


« Reply #120 on: March 28, 2012, 02:58:01 AM »

The keyword is angular velocity.
If you have a horizontal resolution of lets say 4000 px for a 12 MP camera and a lens with a viewing angle of 20° (FF 100 mm equivalent) you will get problems at least when camera movements of 20°/4000px = 0.005° (one pixel wide) get recorded, probably earlier. With a shutter speed of 1/100 s, which would be in the traditional "safe" zone a horizontal camera movement of  0.005°/(1/100s) = 0.5°/s would cause one pixel wide movement blur, which would result in half the resolution or a fourth the MP, here 12MP-> 3MP. To me this appears to be a  pretty slow angular speed to cause movement blur. If the resolution gets higher the problem gets worse with the square root of the resolution gain in MP. So 4 times the resolution (here from 12 to 48 MP) would only "need" the 0.25°/s angular speed to quarter the MP.

hjulenissen
Sr. Member

Posts: 1713


« Reply #121 on: March 28, 2012, 05:12:27 AM »

Quote
Sampling, aliasing, quantisation etc. are relevant in many fields, and in creative audio I can see how the artefacts could be problematic. But as a genuine question, why does this even matter to most photographers, except as a fun discussion? :-)
Aliasing at capture usually does not affect audio recordings in a perceptible way, at least not for sensible equipment built in the last 20 years.
Aliasing at capture can affect image recordings in a perceptible way, especially for AA-less cameras.
Quote
As a second question, does aliasing introduce detail or just smear what is there? (I think I understand the concepts of sampling, aliasing and Bayer-matrix construction, and how sampling is an issue in, for example, audio or scientific measurements, but I can't get my head around how it applies at the camera sensor/RAW-conversion level.) Any explanation or links would be good, thanks.
When sampling with significant aliasing, you capture "something". This something is a function of the scene and the camera, but it is ambiguous: two (or very many) quite different scenes can generate the exact same raw file. Since the raw developer has no information other than the raw file, which of those scenes would you like it to render?

I think that map-making is a good analogy. Imagine making a topographic map that is to be represented as 1 km x 1 km squares ("pixels"). How would you like to calculate each square's value? By measuring the elevation above sea level with a GPS (or something similar) placed at the exact middle of each square? By averaging the elevation of all points inside the square? Or would you want to smooth even more?
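A minimal 1-D sketch of that ambiguity (a toy example, not camera code): two quite different "scenes" that produce exactly the same point samples.

```python
# Two different signals whose point samples are identical, so nothing
# downstream (the "raw developer") can tell them apart.
import numpy as np

fs = 10.0                                # samples per unit length
x = np.arange(16) / fs                   # sample positions ("pixel centers")

scene_a = np.sin(2 * np.pi * 2.0 * x)    # 2 cycles/unit: below Nyquist (5)
scene_b = np.sin(2 * np.pi * 12.0 * x)   # 12 cycles/unit: far above Nyquist

print(np.allclose(scene_a, scene_b))     # True -- identical "raw files"

# Averaging over each square instead of point-sampling (the map analogy)
# would attenuate the 12-cycle scene and break the tie, at the cost of blur.
```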

-h
MikeMac
Newbie

Posts: 31


« Reply #122 on: March 28, 2012, 10:58:23 AM »

Quote from: hjulenissen on March 28, 2012, 05:12:27 AM
When sampling with significant aliasing, you capture "something". [...] I think that map-making is a good analogy.
Thanks for the reply, I need to think about this a bit more :-)
BJL
Sr. Member

Posts: 5182


« Reply #123 on: March 28, 2012, 12:24:00 PM »

Quote from: hjulenissen on March 28, 2012, 05:12:27 AM
I think that map-making is a good analogy. [...]
This may be the difference between the theoretical case of aliasing, where sampling means measurement at single discrete instants in time or single points in space [your "midpoint of the square"], and the case of photography, which is more like averaging light levels over each photosite [your "averaging the elevation of all points inside the square"]. Isn't that some kind of low-pass filtering in itself?

But color filter arrays mess this simple view up, and maybe the fact that each color is sampled over only 1/2 to 1/4 of the area is the main villain. Could the examples of luminosity aliasing in nearly monochrome subjects be due mainly to the luminosity values produced by demosaicing being based mostly on data from green pixels, so that there are gaps in the spatial coverage of those "luminosity" measurements?
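As a quick illustration of those coverage fractions (a toy sketch of a generic Bayer tile, not any particular sensor's layout):

```python
# In a Bayer mosaic, green sits on a quincunx covering half the sites;
# red and blue cover a quarter each, so every color is sampled with gaps.
import numpy as np

tile = np.array([["R", "G"],
                 ["G", "B"]])
mosaic = np.tile(tile, (3, 3))           # a 6x6 patch of the CFA
for ch in "RGB":
    print(ch, f"{(mosaic == ch).mean():.0%} of sites")  # R 25%, G 50%, B 25%
```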
hjulenissen
Sr. Member

Posts: 1713


« Reply #124 on: March 29, 2012, 03:23:27 AM »

Quote from: BJL on March 28, 2012, 12:24:00 PM
Isn't that some kind of low-pass filtering in itself? [...]
Yes, integrating the signal over a square is low-pass filtering it. It is not a very efficient low-pass filter, though. For a monochrome sensor with a 100% fill factor or perfect microlenses, the "integrate all light within a pixel" picture might be right. For a color-filtered sensor with imperfect microlenses and a fill factor below 100%, it is not.
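To put a rough number on "not very efficient" (a sketch assuming an ideal square pixel aperture; the fill factors are illustrative):

```python
# MTF of an ideal box aperture of width w is |sinc(f * w)|.
# Even at 100% fill it still passes ~64% contrast at the Nyquist frequency.
import numpy as np

pitch = 1.0                              # pixel pitch (arbitrary units)
nyquist = 1 / (2 * pitch)                # highest representable frequency

for fill in (1.0, 0.5):                  # linear fill factor of the aperture
    w = fill * pitch
    mtf = abs(np.sinc(nyquist * w))      # np.sinc(x) = sin(pi x)/(pi x)
    print(f"linear fill {fill:.0%}: MTF at Nyquist = {mtf:.2f}")
# linear fill 100%: MTF at Nyquist = 0.64
# linear fill 50%:  MTF at Nyquist = 0.90
```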

It is a question of degree: how much aliasing and how much passband blurring will there be if I do operation "X" on my camera/scene combination? Removing the AA filter will (everything else being equal) tend to increase aliasing and increase passband sharpness.

-h
BJL
Sr. Member

Posts: 5182


« Reply #125 on: March 29, 2012, 09:26:35 AM »

Thanks "h", that seems to confirm my rough reasoning that the use of a color filter array makes aliasing far worse. This is supported I suppose by the worse aliasing hat happens in video made from still sensors where only a selection of the photosites are read at all, so that it is closer to the "point samples" considered in the simplest mathematical models of sampling and aliasing.

So it would be nice if someone could produce an "X3" technology (all color information measured at each spatial location) which works better than the somewhat flawed, noise-prone Foveon implementation. Then the AA filter would be far less needed, or could at least have a lighter touch. I have read of patents on several alternative approaches to X3 from several major sensor makers, using stacked color filters and such, but at most these have been deployed in small, special-purpose sensors.
« Last Edit: March 29, 2012, 01:53:34 PM by BJL »
hjulenissen
Sr. Member

Posts: 1713


« Reply #126 on: March 30, 2012, 12:52:54 AM »

Thanks "h", that seems to confirm my rough reasoning that the use of a color filter array makes aliasing far worse. This is supported I suppose by the worse aliasing hat happens in video made from still sensors where only a selection of the photosites are read at all, so that it is closer to the "point samples" considered in the simplest mathematical models of sampling and aliasing.

So it would be nice if someone could produce an "X3" technology (all color information measured at each spatial location) which works better that the somewhat flawed, noise prone, Foveon implementation. Then the AA filter would be far less needed, or could at least have a lighter touch. I have read of patents on several alternative approaches to X3 from several major sensor makers, using stack of color filters and such, but at most these have been deployed in small, special purpose sensors.
I agree that a Foveon-type sensor would be less prone to aliasing-induced artifacts at a given sensel pitch, but it would still show luminance aliasing.

I believe that, spatially, much the same characteristics can be achieved with traditional Bayer technology if the sensel density can be made some factor X higher.

I speculate that this is the reason we don't see these exotic designs on the market: it is simply easier and less expensive to keep shrinking current methods than to do revolutionary things at a sufficiently small spatial scale and at modest cost. Perhaps this trend will continue until we hit some hard quantum limit?

I am only addressing spatial behaviour here; noise, saturation, color response etc. are also interesting.

-h
(edit: fix my quotes)
« Last Edit: April 03, 2012, 02:45:54 AM by hjulenissen »
MikeMac
Newbie

Posts: 31


« Reply #127 on: April 03, 2012, 02:12:19 AM »

Quote from: hjulenissen on March 30, 2012, 12:52:54 AM
I speculate that this is the reason we don't see these exotic designs on the market: it is simply easier and less expensive to keep shrinking current methods [...]

Is this why some of the older MF backs used to have a 3-shot mode? I think that was the name: three shots were taken, each with a different colour filter in front of the sensor, and then the shots were combined.
marcmccalmont
Sr. Member

Posts: 1734



« Reply #128 on: April 03, 2012, 02:20:17 AM »

Quote from: hjulenissen on March 30, 2012, 12:52:54 AM
I am only addressing spatial behaviour here; noise, saturation, color response etc. are also interesting. [...]

I hope some day we can not only count photons at a photosite but also measure the frequency of the light, doing away with color filters. Years ago there was an idea of small piezoelectric "spikes" as photosites; these would vibrate at the frequency of the light hitting them. If you could read out the frequency and voltage of each "piezo-spike", the problem would be solved.
Marc

Marc McCalmont