Author Topic: Best downsize to reduce noise?  (Read 9057 times)
ondebanks
Sr. Member

Posts: 805


« Reply #60 on: January 17, 2012, 11:59:14 AM »

I think of noise as variance, or standard deviation, or some statistical measure of variation in image values (take your pick).  The 1-pixel image has zero variation.  Hence zero noise.

Same thing happens in the real world when you view something with texture (like a rock or brick wall) and then move away from it.  Eventually you will not see the texture anymore and you'll just see a flat tone.

"The 1-pixel image has zero variation.  Hence zero noise." - Actually, it has noise.
A measurement does not have to have spatial extent (more than 1 pixel, i.e. sampling in image space) for there to be both noise and signal present. In fact, information theory says that there must be noise present.

If I mask off all except 1 pixel on an imaging sensor (let's keep it simple and give it perfect 100% quantum efficiency), and then fire exactly 10,000 photons at that pixel, and it spits out a count of 10,003 electrons...is there still "zero noise"? No...readout noise has added a random extra component, 3 electrons in this case.

But you are right to "think of noise as variance, or standard deviation, or some statistical measure of variation in image values". Where is the variation here? It's temporal, not spatial. If I keep repeating the 10,000 photons experiment, I will keep getting different values - in a statistical distribution, the standard deviation of which is the camera's readout noise.

More realistically, I won't be able to release precisely 10,000 photons every time. Any light source will emit photons per Poisson statistics, so there will also be variations (of the order of the square root of 10,000 = 100) in the photon count reaching the sensor each time. Now I will get output counts like 9913, 10045, 10024, 9935, 10000, 9989, ...a broader distribution, due to the two sources of noise, one external and one internal to the single pixel.
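A quick numerical sketch of that two-source experiment (illustrative only: the 10,000-photon flux and 3-electron readout noise are just the figures from the example above, and the Poisson arrivals are approximated by a Gaussian, which is accurate at this signal level):

```python
# Single-pixel exposure: Poisson photon arrivals plus Gaussian readout noise.
import random
import statistics

random.seed(42)

MEAN_PHOTONS = 10_000   # true underlying flux per exposure
READ_NOISE = 3.0        # readout noise in electrons (rms)
N_EXPOSURES = 5_000

def one_exposure():
    # Gaussian approximation to Poisson(10000): sigma = sqrt(10000) = 100.
    photons = random.gauss(MEAN_PHOTONS, MEAN_PHOTONS ** 0.5)
    return photons + random.gauss(0.0, READ_NOISE)

samples = [one_exposure() for _ in range(N_EXPOSURES)]
print(round(statistics.mean(samples)))    # near 10000
print(round(statistics.pstdev(samples)))  # near sqrt(10000 + 3**2), i.e. ~100
```

Note that the spread is dominated by shot noise here: the 3 electrons of readout noise add in quadrature, nudging the standard deviation only slightly above 100.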


Noise simply means that you cannot exactly measure the correct amount of a signal. There are plenty of light detectors (like photomultiplier tubes) which give no spatial information - they are essentially single-pixel devices. If these gave measurements which were truly noiseless, then using one of these on a telescope would allow me to perfectly measure the brightness of every source in the night sky, right out to the dimmest and furthest galaxies. Such magic would make doing astronomy trivial! Alas, physics says it ain't so...

Ray
Logged
hjulenissen
Sr. Member

Posts: 1615


« Reply #61 on: January 17, 2012, 01:47:27 PM »

Quote from: ondebanks
"The 1-pixel image has zero variation.  Hence zero noise." - Actually, it has noise. [...] Noise simply means that you cannot exactly measure the correct amount of a signal.

You put it more clearly than me: temporal and spatial components of noise.

Is non-zero-mean measurement error noise? Is a deterministic bias noise?

Can we make strict definitions that clearly separate errors that are a linear function of input (e.g. diffraction), from non-linear errors (sensor saturation), from noise (photon shot-noise)?

-h
ejmartin
Sr. Member

Posts: 575


« Reply #62 on: January 17, 2012, 02:16:28 PM »

"The 1-pixel image has zero variation.  Hence zero noise." - Actually, it has noise.


Yes, but it doesn't look noisy -- even if you look close up :D

emil
BJL
Sr. Member

Posts: 5060


« Reply #63 on: January 17, 2012, 02:24:34 PM »

And if you are not standing close enough (or the photo is not enlarged enough) to see the reduction in resolution, then you are not able to perceive the full resolution of the original either...nor its higher noise at the pixel level - I guess this is what you mean by "would the visual system do the basic averaging work of down sampling".
This is the scenario of interest to me: the OP was about situations where 24MP is more than enough, and I will take that to mean situations where the proposed lower pixel count of 6MP or whatever provides sufficient resolution. For example, something like viewing the original 6000x4000 and downsampled 3000x2000 versions as 12"x8" prints from a distance of 20", or 6"x4" prints from 10", so with still a healthy 5000 "pixels per viewing distance" for the downsampled version.

So my follow-up question is: once the lower-pixel-count version has as much resolution as the eye can use, would our visual system's blurring of the finer detail in the 24MP image do about as good a job of noise reduction?
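For what it's worth, the averaging step itself is easy to check numerically. A naive 2x2 block average (a crude stand-in both for downsampling and for the visual system's blur) halves the standard deviation of uncorrelated per-pixel noise; the sketch below uses a synthetic flat frame with made-up numbers:

```python
# Sketch: 2x2 block averaging halves the std of uncorrelated per-pixel noise.
import random
import statistics

random.seed(0)
W = H = 200
MEAN, SIGMA = 100.0, 10.0  # flat frame with Gaussian noise, sigma = 10

img = [[random.gauss(MEAN, SIGMA) for _ in range(W)] for _ in range(H)]

def bin2x2(m):
    """Average each 2x2 block of pixels into one output pixel."""
    return [[(m[2*y][2*x] + m[2*y][2*x+1] + m[2*y+1][2*x] + m[2*y+1][2*x+1]) / 4.0
             for x in range(len(m[0]) // 2)] for y in range(len(m) // 2)]

def std(m):
    return statistics.pstdev(v for row in m for v in row)

small = bin2x2(img)
print(round(std(img), 1), round(std(small), 1))  # roughly 10.0 and 5.0
```

Averaging four independent samples divides the noise standard deviation by sqrt(4) = 2, which is exactly the "basic averaging work of down sampling" being discussed.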


Aside: one subtle point: our eyeballs jiggle slightly for purposes like edge detection, making it unreliable to judge the eye's resolving power from rod-cone density alone. That is, the human eye operates in a way that enhances _edge sharpness_, possibly at the expense of _resolving power_ on lower-contrast details.
LKaven
Sr. Member

Posts: 767


« Reply #64 on: January 17, 2012, 05:20:21 PM »

"The 1-pixel image has zero variation.  Hence zero noise." - Actually, it has noise.

You've also moved into the /semantic/, or /representational/ theory of information in an interesting way here.  Since that single bit is not reliably caused to be "1" by signal alone, it may be said to have a conjunctive content.

The question is framed this way: if you take a photograph of a homogeneous black field with a single-pixel, one-bit camera, and the camera registers a "1", then what is the semantic or representational content of that "1"?

Framed this way, you can explain the fact that "it doesn't look noisy" (as Emil observes) with recourse to the semantic content.  The "1" bit does not mean just "black expanse" because of the underlying noise content which only by chance did not predominate.

LKaven
Sr. Member

Posts: 767


« Reply #65 on: January 17, 2012, 05:42:50 PM »

Aside: One subtle point: our eye-balls jiggle slightly for purposes like edge-detection, making it unreliable to judge the eye's  resolving power from rod-cone density alone. That is, the human eye operates in a way that enhances _edge-sharpness_, possibly at the expense of _resolving power_ with lower contrast details.

Yes, feature detectors in the brain for edge-and-orientation detection were the subject of Hubel and Wiesel's Nobel Prize-winning work. Edges in various orientations are some of the most salient things in our visual field, and singular disruptions to a pattern are salient. This was tested in the cat striate cortex, but one expects the same is true for humans.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/

ondebanks
Sr. Member

Posts: 805


« Reply #66 on: January 17, 2012, 06:00:10 PM »

Quote from: LKaven
You've also moved into the /semantic/, or /representational/ theory of information in an interesting way here. [...]

Hey Luke, go easy on me. I'm just a physicist. We don't do semantics. ;)

Ray
ErikKaffehr
Sr. Member

Posts: 6891


« Reply #67 on: January 18, 2012, 11:50:52 PM »

Hi,

So what you say is that 9913 = 10045 with some probability ;-)

You may also miss the issue, we can very effectively reduce noise by binning all pixels into one pixel. With 24 MP and 10000 photons per pixel SNR will be around 489000, pretty good. Of course some MTF will be lost in the process. Honestly, all MTF will be lost, but in marketing terms it's just a minor loss, like a few percent.
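For anyone checking Erik's arithmetic: the shot-noise-limited SNR of the single binned super-pixel is simply the square root of the total photon count.

```python
# Shot-noise-limited SNR after binning a whole 24 MP frame into one "pixel".
import math

pixels = 24_000_000
photons_per_pixel = 10_000
total_photons = pixels * photons_per_pixel

# Poisson statistics: signal N, noise sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
snr = math.sqrt(total_photons)
print(round(snr))  # 489898, i.e. "around 489000"
```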

We then expand the pixel using a decent algorithm like GF, which actually creates new detail, and print at A2 at 720 PPI.

The resulting image will be a great piece of art with a wide range of possible interpretations. I cannot see any problems, except some singularities.

We have seen very nice pictures produced with a similar algorithm from NASA, clearly indicating that the technique actually works:

[image: Kepler 22b]
The above image correctly depicts the planet Kepler 22b with a probability in the interval [0,1].

Best regards
Erik


Quote from: ondebanks
"The 1-pixel image has zero variation.  Hence zero noise." - Actually, it has noise. [...] Now I will get output counts like 9913, 10045, 10024, 9935, 10000, 9989, ...a broader distribution, due to the two sources of noise, one external and one internal to the single pixel. [...]

« Last Edit: January 19, 2012, 12:04:29 AM by ErikKaffehr »

ondebanks
Sr. Member

Posts: 805


« Reply #68 on: January 19, 2012, 04:44:18 AM »

Quote from: ErikKaffehr
You may also miss the issue, we can very effectively reduce noise by binning all pixels into one pixel. [...] The above image correctly depicts the planet Kepler 22b with a probability in the interval [0,1].

:D :D Very good, Erik! Of course there are also those CSI-style TV/movie blunders where they can somehow magically take a low-res still of a crowd and "enhance...ok, zoom in to that person...enhance...zoom in on his hand...enhance...yes, just that knuckle...enhance...enhance more...zoom again...there! see? there's at least a milligram of the victim's blood dried onto that hair follicle!"

Unfortunately this goes back to "Blade Runner", which is otherwise an outstanding movie.

So what you say is that 9913 = 10045 with some probability ;-)

Not quite: 9913 can only equal 9913. What one can say is that 9913 and 10045 are both independent attempts (samples) to measure the underlying mean flux rate of 10000 per exposure. Or that 9913 and 10045 are both drawn from an approximately Poisson distribution with a mean of 10000 and a standard deviation of slightly more than 100 (including a small amount of read noise).

In normal circumstances, you don't know that the true underlying flux rate is 10000 photons per exposure (if you knew, why would you bother trying to measure it and keep getting it slightly wrong?). But you really want to come as close as possible to finding out that rate; the closer you get, the more you've reduced the effect of noise. And you do know that the maximum-likelihood estimate (MLE) of this rate is simply the average of all the samples you've measured - so the more samples you obtain, the closer your MLE gets to the true value, and the better the signal-to-noise.
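That improvement is easy to demonstrate with a toy simulation: the error of the sample-mean estimate shrinks like 1/sqrt(n). (A Gaussian stand-in for the Poisson flux; the seed and trial counts are arbitrary.)

```python
# Error of the sample-mean estimate of a 10,000-count flux shrinks ~ 1/sqrt(n).
import random
import statistics

random.seed(1)
TRUE_RATE = 10_000.0

def exposure():
    # Gaussian approximation to Poisson(10000): sigma = sqrt(mean) = 100.
    return random.gauss(TRUE_RATE, TRUE_RATE ** 0.5)

def mle_error(n, trials=300):
    """Std of the n-sample-average estimator, over many repeated experiments."""
    estimates = [statistics.fmean(exposure() for _ in range(n))
                 for _ in range(trials)]
    return statistics.pstdev(estimates)

for n in (1, 100):
    print(n, round(mle_error(n), 1))  # errors near 100 and near 10
```

With one sample the estimate is off by about 100 counts rms; averaging 100 samples brings that down to about 10, a tenfold gain in signal-to-noise for a hundredfold increase in data.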

Ray
