Author Topic: remove the Bayer data and quadruple your resolution in B&W?  (Read 6262 times)
bwana (Full Member, Posts: 183)
« on: March 14, 2012, 09:05:41 AM »

As my understanding of digital photography is still new and coarse, please be patient with me. The image sensor of a digicam sits behind a Bayer filter. Each image pixel is derived from four photosites, each sitting behind a colored filter, so that a pixel is constituted by 4 photosites: 2 green, 1 red and 1 blue. This pixel is then the fundamental data unit transmitted to the PC. When displaying to a screen (which consists of red, green and blue phosphors, or another type of color element), the pixel is broken down into color elements by the video driver, and these are sent to the screen for display.

What if the raw image file was used so that the data from each photosite was converted directly to a pixel? Of course, all color information would be lost, since the only data each photosite can yield is luminance. But for black and white photography, only luminance is needed. The computer would then receive data from 4x as many pixels, since we have changed the mapping from 4 photosites -> 1 pixel to 1 photosite -> 1 pixel.

I tried to read about raw file formats and TIFF, but the literature is rather opaque. Is this concept valid or flawed?
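Just to make the idea concrete, here is a Python/NumPy sketch of the two mappings (the mosaic values are made up and an RGGB layout is assumed):

```python
import numpy as np

# Made-up 4x4 RGGB raw mosaic: one number per photosite.
raw = np.array([
    [ 10, 200,  12, 198],
    [201,  50, 199,  52],
    [ 11, 202,  13, 196],
    [203,  51, 197,  49],
], dtype=float)

# The idea in the post: treat every photosite value directly as a
# grayscale pixel, keeping the full photosite count.
naive_bw = raw.copy()                                    # 4x4 "pixels"

# The 4-photosites-per-pixel picture instead collapses each 2x2
# RGGB quartet into one value, leaving a quarter as many pixels.
quartet_bw = raw.reshape(2, 2, 2, 2).mean(axis=(1, 3))   # 2x2 "pixels"

print(naive_bw.shape, quartet_bw.shape)  # (4, 4) (2, 2)
```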
deejjjaaaa (Sr. Member, Posts: 743)
« Reply #1 on: March 14, 2012, 11:54:30 AM »

Quote
The computer would then receive data from 4x as many pixels since we have now changed the mapping from 4 photosites -> 1 pixel to 1 photosite -> 1 pixel.

Actually (in general, certain special demosaicking methods excluded) one sensel contributes to 4 resulting pixels, not to just 1 pixel...
bwana (Full Member, Posts: 183)
« Reply #2 on: March 14, 2012, 12:55:19 PM »

I think of the sensor as the cookie dough and the pixel as the cookie cutter. To make properly shaped cookies, you can't overlap the cookie cutter placements. However, I get the sense that you are saying that pixels are derived from overlapping photosite sampling. In other words, are you saying that each photosite contributes data to ALL the surrounding pixel groups? I don't really know what is actually done during demosaicing in terms of the specific algorithm. Even so, that would diminish resolution, since you never have a 1 photosite -> 1 pixel mapping.
EricV (Full Member, Posts: 128)
« Reply #3 on: March 14, 2012, 01:14:42 PM »

When a digital color image is mapped from raw pixels to processed pixels, the mapping is 4 raw pixels (R, G, B, G) -> 4 processed pixels (RGB, RGB, RGB, RGB). Yes, there is necessarily some loss in resolution due to the color interpolation, but less than a factor of four. If you simply treat each raw pixel value as a luminance rather than a color, you will get very strange effects. For example, a bright red apple will light up only one pixel out of four, which is hardly the result you want. To really achieve better resolution for B&W, the proper solution is to remove the Bayer filter, so that every pixel sees all colors.
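The red-apple effect is easy to simulate (a toy sketch: the 4x4 sensor, RGGB layout and ideal filters are all assumptions):

```python
import numpy as np

# A pure red patch: RGB = (1, 0, 0) at every point of a toy 4x4 sensor.
h, w = 4, 4
scene = np.zeros((h, w, 3))
scene[..., 0] = 1.0  # red channel only

# RGGB colour filter array: each photosite samples a single channel.
cfa = np.zeros((h, w), dtype=int)  # 0=R, 1=G, 2=B
cfa[0::2, 1::2] = 1  # G on even rows
cfa[1::2, 0::2] = 1  # G on odd rows
cfa[1::2, 1::2] = 2  # B

# Each photosite records only the channel its filter passes.
raw = np.take_along_axis(scene, cfa[..., None], axis=2)[..., 0]
print(raw)
# Only the R photosites (1 in 4) record light; read as plain luminance,
# the red patch renders as a mostly dark checkerboard.
```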
bwana (Full Member, Posts: 183)
« Reply #4 on: March 14, 2012, 02:31:00 PM »

OK, I see your point: yes, the image might look peppered with luminance fluctuations that would ruin the extra resolution. OTOH, if you knew how much luminance each color of the Bayer filter absorbed, you could compensate for that. For example, assume the green Bayer filter absorbs 10%, and the red and blue 20% each. You could adjust the luminance measured by each photosite according to the Bayer filter in front of it. This seems straightforward; it would be cool if I could write a JavaScript implementation, so that it could run in a browser.
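A sketch of that compensation idea, using the made-up absorption figures above (10% green, 20% red/blue) on a single quartet looking at a spectrally neutral (gray) subject:

```python
import numpy as np

# Made-up transmissions from the post: G passes 90%, R and B pass 80%.
gain = {0: 1 / 0.8, 1: 1 / 0.9, 2: 1 / 0.8}  # 0=R, 1=G, 2=B

# One RGGB quartet recording a neutral subject of true luminance 100:
raw = np.array([[80., 90.],
                [90., 80.]])
cfa = np.array([[0, 1],
                [1, 2]])

compensated = np.empty_like(raw)
for ch, g in gain.items():
    compensated[cfa == ch] = raw[cfa == ch] * g

# All four sites recover ~100 -- but only because the light is neutral;
# for colored light each filter absorbs a different, unknown fraction.
print(compensated)
```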
« Last Edit: March 14, 2012, 04:34:47 PM by bwana »
John Nollendorfs (Sr. Member, Posts: 308)
« Reply #5 on: March 14, 2012, 05:07:32 PM »

Or, you could just average all the luminance values, but the end result would hardly qualify as increased resolution. The problem with your suggestion, of course, is the Bayer filter itself. Since it is integral to the manufacturing process of the sensor, it can't be removed. Spend your time dreaming about a 41 MP phone camera instead! ;-)
bwana (Full Member, Posts: 183)
« Reply #6 on: March 14, 2012, 05:35:42 PM »

Quote
Or, you could just average all the luminance values, ...

What? I don't get it; I said nothing about averaging the luminance values. My point is that the averaging used in the Bayer demosaicing algorithm is what decreases resolution. By eliminating averaging and going to 1 photosite -> 1 pixel, we increase resolution.
Nigel Johnson (Full Member, Posts: 124)
« Reply #7 on: March 14, 2012, 07:37:37 PM »

Quote
…if you knew how much luminance each color of the Bayer filter absorbed, you could compensate for that. For example, assume the green bayer filter absorbs 10%, the red 20% and the blue 20%. You could adjust the luminance measured by each photosite according to the bayer filter in front of it. So, this seems straightforward…

The thing that you are missing is that the absorption of each of the filters depends upon the colour of the light at each pixel. For example, the green filter lets most green light pass through but little magenta light; therefore a low luminance at a pixel behind a green filter can be caused by, amongst many other possibilities, a dark green or a bright magenta. Similar problems apply to the red and blue filters. Thus, for a single pixel, luminance cannot be calculated without knowing the colour of the light (to be fully accurate, without knowing the filter's spectral response, the pixel's spectral response, and the spectrum of the light falling on the pixel). The demosaicing process uses data from several pixels to estimate the colour and luminance at each pixel; for most sensors this process also has to account for the anti-aliasing filter, which spreads the incident light between adjacent pixels.
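The dark-green/bright-magenta ambiguity can be put into numbers with a toy three-band model (all filter responses and light values here are invented):

```python
# Toy green-filter response over three coarse bands (R, G, B) -- invented numbers.
green_filter = [0.05, 0.90, 0.05]

def green_site_reading(light_rgb):
    # The photosite records one scalar: filter response times light, summed.
    return sum(f * l for f, l in zip(green_filter, light_rgb))

dark_green     = [0.0, 0.10, 0.0]   # dim, purely green light
bright_magenta = [0.9, 0.00, 0.9]   # bright red+blue, no green at all

print(green_site_reading(dark_green))      # ~0.09
print(green_site_reading(bright_magenta))  # ~0.09 -- the same reading
```

Two very different lights, one indistinguishable scalar: the single photosite cannot tell them apart.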

Regards
Nigel

P.S. Whilst sensors without Bayer filters are available for scientific imaging purposes, I do not believe that any normal photographic cameras are currently available without such filters, although such cameras have been offered in the past. They tended to sell in small numbers and to be expensive, and their utility has declined with higher-resolution, more sensitive sensors and better demosaicing. (The sensitivity comment is because omitting the Bayer filter allowed higher sensitivity, which was important when sensors were very noisy at even low ISO.)
bwana (Full Member, Posts: 183)
« Reply #8 on: March 14, 2012, 08:42:13 PM »

Thanks for the example and for helping me follow through on my flawed logic. Put another way, if I shine red light through a Bayer filter, I will not get a homogeneous gray tone on the sensor; the filter absorbs different light frequencies differently. I would need to know the frequency of the light to adjust the luminance of what's coming through each of the individual Bayer filter elements. And with a real image, with all those different light frequencies at the different locations in the image, that's impossible.

Even if I assumed that the light incident on the Bayer filter was 6500 K (when in fact it is not), the calculated luminance would appear as a speckled image.
AlanPezzulich (Newbie, Posts: 19)
« Reply #9 on: March 14, 2012, 10:23:25 PM »

Look at it this way. Each photosite produces one pixel. Only one color is measured at the site. The other two colors are calculated from adjacent photosites.
BartvanderWolf (Sr. Member, Posts: 3454)
« Reply #10 on: March 15, 2012, 03:50:11 AM »

Quote
Even if I assumed that incident light on the bayer filter was 6500 K (when in fact it is not), the net result of the calculated luminance would appear as a speckled image.

The net result is not an image ... it's a collection of color-filtered sample data. The band-pass samples are complemented with the color info that was not sampled at that position, in a process called Bayer CFA demosaicing. Therefore each sample position ends up with a full RGB representation, partially based on measured data and partially based on advanced interpolation from many (sometimes more than 8) of the immediately surrounding data samples.

Luminance interpolation can be very accurate because each sample position records some luminance, whereas chroma interpolation has reduced accuracy due to the lower sampling density. Fortunately, chroma information in real scenes doesn't fluctuate as rapidly as luminance information, so a Bayer CFA can produce very good images with high luminance resolution (the more important feature for the human visual system).
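To illustrate why the green (luminance-dominant) plane interpolates so well, here is a much simpler sketch than the advanced methods described above: plain bilinear filling of green at the R/B sites, which works because green is sampled at half of all sites in an RGGB mosaic (synthetic data, assumed layout):

```python
import numpy as np

# Synthetic 8x8 RGGB mosaic: 0=R, 1=G, 2=B at each photosite.
cfa = np.tile(np.array([[0, 1],
                        [1, 2]]), (4, 4))
rng = np.random.default_rng(0)
raw = rng.random(cfa.shape)

# Green is sampled at half of all sites -- the densest channel.
assert (cfa == 1).mean() == 0.5

# Bilinear sketch: estimate green at R/B sites from the 4-neighbour greens.
g_vals = np.where(cfa == 1, raw, 0.0)
g_mask = (cfa == 1).astype(float)
pad_v = np.pad(g_vals, 1)
pad_m = np.pad(g_mask, 1)
nsum = pad_v[:-2, 1:-1] + pad_v[2:, 1:-1] + pad_v[1:-1, :-2] + pad_v[1:-1, 2:]
ncnt = pad_m[:-2, 1:-1] + pad_m[2:, 1:-1] + pad_m[1:-1, :-2] + pad_m[1:-1, 2:]
green_full = np.where(cfa == 1, raw, nsum / np.maximum(ncnt, 1.0))

# Measured greens pass through untouched; the rest are estimates.
print(green_full.shape)  # (8, 8)
```

The R and B planes, sampled at only a quarter of the sites each, must be interpolated over larger distances, which is exactly the lower chroma sampling density mentioned above.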

Cheers,
Bart
« Last Edit: March 15, 2012, 06:32:05 PM by BartvanderWolf »
Nigel Johnson (Full Member, Posts: 124)
« Reply #11 on: March 15, 2012, 03:39:57 PM »

Quote
…i need to know the frequency of the light so i can adjust the luminance of what's coming through each of the individual bayer filter-lets.

A further complication is that very little natural light is monochromatic (i.e. contains light of only a single frequency). Most light contains a mixture of many frequencies, either varying smoothly in intensity with frequency (e.g. natural light, incandescent 'tungsten' bulbs) or having a very spiky spectrum with most energy coming from a few frequencies (e.g. sodium vapour lights, most fluorescent lights); much rarer are monochromatic light sources (e.g. lasers, most LEDs). In addition, objects illuminated by the light reflect the spectrum with varying degrees of smoothness, so the light picked up by the camera contains varying proportions of different frequencies. The light from a highly saturated colour tends to have most of its energy close to a single frequency (only a fully saturated colour would show light of a single frequency, and these are extremely rare), whilst a less saturated colour has a much wider spread of frequencies for a single colour (an example can be seen on the Cambridge in Colour site at http://www.cambridgeincolour.com/tutorials/color-perception.htm, which also includes other useful tutorials).
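The spectrum problem can also be put into toy numbers (a coarse five-band model; every response and spectrum value here is invented): two utterly different spectra can produce nearly the same photosite reading.

```python
# Toy 5-band model: a photosite reading is (roughly) the filter response
# times the light spectrum, summed over wavelength. All numbers invented.
bands_nm   = [450, 500, 550, 600, 650]          # coarse wavelength grid
green_resp = [0.10, 0.50, 0.90, 0.40, 0.05]     # toy green-filter response

smooth_spectrum = [0.4, 0.5, 0.6, 0.7, 0.8]     # smooth, incandescent-like
spiky_spectrum  = [0.0, 0.0, 0.0, 3.0, 0.0]     # one band, sodium-lamp-like

def reading(response, spectrum):
    return sum(r * s for r, s in zip(response, spectrum))

print(reading(green_resp, smooth_spectrum))  # ~1.15
print(reading(green_resp, spiky_spectrum))   # ~1.20 -- nearly the same scalar
```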

Regards
Nigel
Nigel Johnson (Full Member, Posts: 124)
« Reply #12 on: March 15, 2012, 03:46:41 PM »

For those who don't count in smileys, the third sentence in Bart's message should have read:

Therefore each sample position will end up with a full RGB representation, partially based on measured data and partially based on advanced interpolation from many (sometimes more than 8) of the immediately surrounding data samples.

Regards
Nigel

(Bart, not certain if you are aware that ticking the 'Don't use smileys.' check box under 'Additional Options…' in the Post Reply window prevents text being converted to smileys.)
BartvanderWolf (Sr. Member, Posts: 3454)
« Reply #13 on: March 15, 2012, 06:30:44 PM »

Quote
(Bart, not certain if you are aware that ticking the 'Don't use smileys.' check box under 'Additional Options…' in the Post Reply window prevents text being converted to smileys.)

Hi Nigel,

Thanks for catching that. I just didn't have/take the time to read my own response.

Cheers,
Bart
bwana (Full Member, Posts: 183)
« Reply #14 on: March 15, 2012, 07:31:33 PM »

Thank you all for your extensive and complete replies. So, is there a piece of software that will let me see what is on the image sensor, unprocessed? I am just curious. I tried ImageJ (from the NIH), but it doesn't read common raw photo file formats.
ejmartin (Sr. Member, Posts: 575)
« Reply #15 on: March 15, 2012, 11:18:08 PM »

Quote
thank you all for your extensive and complete replies. so, is there a piece of software that will let me see what is on the image sensor, unprocessed? I am just curious. I tried imageJ (from the NIH) but it doesnt read common raw photo file formats.

http://www.rawdigger.com/
emil
bwana (Full Member, Posts: 183)
« Reply #16 on: March 16, 2012, 08:00:29 AM »


Thank you, ejmartin. I've been looking for something like this for a while, to avoid the layers of crap processing that obscure the original data from the sensor. In addition, the intelligent and honest presentation of the histogram and its limitations is enlightening. Perhaps it is the Russian psyche that enables such deep thinking and clear analysis, something that millions of native English speakers who inhabit the net have not matched. Or perhaps my knowledge is so limited that I do not know whether these thoughts have been articulated previously. Another source I find enlightening at an introductory level is Cambridge in Colour, as well as specific discussions by unique individuals on this site, such as yourself.
joofa (Sr. Member, Posts: 485)
« Reply #17 on: March 16, 2012, 02:46:51 PM »

Quote
thank you all for your extensive and complete replies. so, is there a piece of software that will let me see what is on the image sensor, unprocessed? I am just curious. I tried imageJ (from the NIH) but it doesnt read common raw photo file formats.

Photoshop Mac users can use a freely available Raw Import plugin from the website in my signature that will let you (1) see the actual raw data (2) separate out the raw data into 4 R,G,G,B images, or (3) do a rudimentary demosaic outside of ACR. For more information please see the following link:

http://www.luminous-landscape.com/forum/index.php?topic=20388.msg483963#msg483963
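For the curious, option (2), splitting a mosaic into its four CFA planes, is also easy to sketch in NumPy with array slicing (stand-in data; RGGB layout assumed):

```python
import numpy as np

# Stand-in 4x4 raw mosaic (RGGB layout assumed).
raw = np.arange(16, dtype=float).reshape(4, 4)

# Each CFA plane is every other row and column of the mosaic, so each
# plane holds a quarter of the photosites at half the linear resolution.
r_plane  = raw[0::2, 0::2]
g1_plane = raw[0::2, 1::2]
g2_plane = raw[1::2, 0::2]
b_plane  = raw[1::2, 1::2]

print(r_plane.shape)  # (2, 2)
```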

Joofa
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
Damir (Full Member, Posts: 173)
« Reply #18 on: March 17, 2012, 06:17:02 AM »

Quote
As my coarse understanding of digital photgraphy is still new, please be patient with me. The image sensor of a digicam sits behind a bayer filter. Each image pixel is derived from four photosites. Each photosite behind a colored filter so that a pixel is constituted by 4 photosites-2 green, 1 red and 1 blue. This pixel is then the fundamental data unit transmitted to the pc. When displaying to a screen, (which consists of red, green and blue phosphors - or other type of color element) the pixel is broken down into color elements by the video driver. These are then sent to the screen for display.

What if the raw image file was used so that the data from each photosite was converted to a pixel. Of course, all color information would be lost since the only data each photosite could yield is luminance. But for black and white photography, only luminance is needed. The computer would then receive data from 4x as many pixels since we have now changed the mapping from 4 photosites-> 1 pixel to 1 photosite -> 1 pixel.

I tried to read about raw file formats, TIFF, but the literature is rather opaque. Is this concept valid or flawed?

It would be valid only if there were no Bayer filter. The Bayer filter already distorts the data: for example, there is almost no signal in the blue channel if you take a photo of a yellow flower, so the information at a blue photosite has to be reconstructed from the neighbouring red and green photosites. That is why you need to demosaic.
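In toy numbers (all values invented): a blue photosite under yellow light records almost nothing, so its value has to come from its neighbours.

```python
# Yellow light is roughly red + green with almost no blue (invented numbers).
yellow = {'R': 0.90, 'G': 0.80, 'B': 0.02}

# A blue photosite under the yellow flower records nearly nothing:
blue_site_reading = yellow['B']

# A crude stand-in for demosaicing: estimate the missing signal at the
# blue site from the neighbouring red/green measurements instead.
neighbour_estimate = (yellow['R'] + yellow['G']) / 2

print(blue_site_reading, neighbour_estimate)  # ~0.02 vs ~0.85
```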
« Last Edit: March 17, 2012, 06:18:48 AM by Damir »
t6b9p (Newbie, Posts: 34)
« Reply #19 on: March 17, 2012, 10:58:21 AM »

Quote
The thing that you are missing is that the absorption of each of the filters is dependent upon the colour of the light at each of the pixels.

True, but if you work with 830 nm B&W IR, things change; this is a topic I have been interested in for a while with regard to 830 nm IR-converted DSLRs. Beyond this wavelength the Bayer dyes respond much more uniformly to changing wavelength as they become more transparent to IR. The suggestion of a "fixed" ratio correction for the Bayer mosaic in an undemosaiced image seems more of a possibility in this application.
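A sketch of that fixed-ratio idea (the per-dye IR transmissions below are invented; the point is only that near-uniform dye response makes a constant per-position gain plausible):

```python
import numpy as np

# Invented, near-uniform 830 nm transmissions for the RGGB dye positions.
ir_trans = np.array([[0.95, 0.97],
                     [0.97, 0.93]])
ir_gain = 1.0 / ir_trans  # fixed per-position correction ratios

# A flat IR exposure of strength 100 reaching the CFA...
scene = np.full((4, 4), 100.0)
mosaic = scene * np.tile(ir_trans, (2, 2))  # ...as recorded through the dyes

# Apply the fixed ratios directly to the undemosaiced mosaic.
corrected = mosaic * np.tile(ir_gain, (2, 2))
print(np.allclose(corrected, 100.0))  # True
```

Unlike in visible light, no knowledge of the scene colour is needed here, because (by assumption) the dye transmissions no longer depend on the incoming spectrum.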