Author Topic: RawDigger 0.9.18 is out, with sRAW analysis and TIFF export  (Read 13529 times)
Iliah (Sr. Member, Posts: 410)
« on: July 16, 2013, 02:40:43 AM »

Download

Changes

New features

Data export as TIFF
(Menu -> File -> Export TIFF)
This feature allows exporting an RGB rendition, a composite RAW, or a single RAW channel as a TIFF file.
For a more detailed discussion, please see the Data Export section in the manual.
Better analysis of the Canon sRAW data format

New settings:
Preferences -> Data Processing -> Show YCbCr data for Canon sRAW files
Preferences -> Data Processing -> Do not interpolate Cb/Cr channels data (Canon sRAW)
The first setting switches off the conversion of YCbCr into RGB for the Canon sRAW format, while the second switches off the interpolation of the Cb/Cr channel data.

New setting
Preferences -> Display Options -> Display RGB Render in RAW colors
This setting switches off the conversion of the RGB render into sRGB, so the output remains in the camera "color space".
For RAW data containing four different colors (RGBE, CMYG) this also switches off the RGB render display. TIFF export for such RAW files is still possible and results in a four-channel TIFF.

Added camera support for:

Baumer TXG14
Canon EOS C500
Nikon D5200
OmniVision OV5647 (Raspberry Pi)
Panasonic DMC-GF6
Samsung NX300, NX1100, NX2000
Sony NEX-3N

Minor changes:

CGATS output reworked to include additional data and for better standard conformance
« Last Edit: July 16, 2013, 02:44:42 AM by Iliah »

BartvanderWolf (Sr. Member, Posts: 3568)
« Reply #1 on: July 16, 2013, 03:29:35 AM »

Hi Iliah,

Thanks for this useful application, which helps analyze Raw data files and gain a better understanding of what's going on under the hood.

If I may, there is one (optional) statistical metric that could be a useful addition for geeks.

When a shot is made of a featureless, uniformly lit surface, subtracting the two Green-filtered sensel responses in each [R,G1,G2,B] set can provide useful info. If the subject and lighting are uniform, the two green sensels should give the same output (except for noise). If they don't, and the difference is measured over a larger area, a calibration bias may be detected (their average values differ). When the per-channel standard deviations are combined in quadrature and the shot noise is then subtracted, the read noise can be determined pretty accurately from a single shot. The subtraction eliminates pattern noise, because pattern noise is not random; only the random noise remains. It can also detect outliers, such as hot or dead Green-filtered sensels.
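The idea can be sketched numerically. This is a toy model on synthetic data; the level, the pattern term, and the sigma values are made-up assumptions, not measurements from any camera:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model of a uniform patch: a fixed pattern shared by both green
# channels (standing in for PRNU / calibration structure), plus
# independent random (read + shot) noise in each channel.
mean_level = 1000.0
pattern = rng.normal(0, 5.0, size=(200, 200))       # identical in G1 and G2
sigma_random = 20.0                                 # per-channel random noise
g1 = mean_level + pattern + rng.normal(0, sigma_random, size=(200, 200))
g2 = mean_level + pattern + rng.normal(0, sigma_random, size=(200, 200))

# Each channel's std is inflated by the pattern; the G1-G2 difference
# cancels the pattern, leaving sqrt(2) times the per-channel random noise.
per_channel_std = g1.std()                          # pattern included
random_noise = (g1 - g2).std() / np.sqrt(2)         # pattern removed, ~20
print(round(per_channel_std, 1), round(random_noise, 1))
```

With real files, g1 and g2 would be the two green CFA planes of the same uniform area; subtracting the (known) shot noise in quadrature from random_noise would then leave the read noise.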

It could also be used to indirectly improve the uniformity of scene lighting, by minimizing the difference between the two green-channel averages; other channels could be used for that as well.

Comparing spatial differences between sensels within a channel could also give a measure of focus quality on highly detailed subjects, or in focus-bracketing sequences.

One could also think about implementing a noise-spectrum measurement, for detecting on-sensor or in-ADC-pipeline noise reduction.

Estimated White-balance Raw channel multipliers would also be useful.

Just some thoughts for when you run out of ideas. ;)

Thanks again for making RawDigger available.

Cheers,
Bart
Iliah (Sr. Member, Posts: 410)
« Reply #2 on: July 16, 2013, 03:42:50 AM »

Dear Bart,

> If the subject and lighting is uniform, then the two green sensels should give the same output (except for read noise)

Let's consider this screenshot: Canon 1DMkIV, fairly uniform light. I'm under the impression that there may be other reasons for the difference in the green channels, not just different read noise. How can we check?
BartvanderWolf (Sr. Member, Posts: 3568)
« Reply #3 on: July 16, 2013, 04:43:48 AM »

Dear Bart,

> If the subject and lighting is uniform, then the two green sensels should give the same output (except for read noise)

Let's consider this screenshot, Canon 1DMkIV, fairly uniform light. I'm under impression that there may be some other reasons for difference in green channels, not just different read noise. How to check?

Hi Iliah,

The G1 and G2 averages are pretty close, so assuming the lighting was uniform (and there was no surface structure!), directly neighboring sensels should subtract to zero (or to a small constant bias), except for 'noise'. So the approximate read+photon(!) noise is sqrt(20.0^2 + 19.5^2) / sqrt(2) = 19.75. Since we can know the photon-noise level if we know the gain used to produce this average level, we'd need to subtract the shot noise to be left with only the read noise. I'm confident you guys can figure that out. Otherwise we'd just get the random noise (without pattern noise, which subtracts to an average of zero), almost as accurately as by subtracting different exposure pairs, and that also allows quantifying the PRNU noise (for the channels involved).
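The quadrature arithmetic above, as a quick check:

```python
from math import sqrt

# The per-channel sigmas read off the screenshot; the G1-G2 difference has
# sigma sqrt(s1^2 + s2^2), and dividing by sqrt(2) gives the (average)
# per-channel random noise.
s1, s2 = 20.0, 19.5
per_channel = sqrt(s1**2 + s2**2) / sqrt(2)
print(round(per_channel, 2))  # 19.75
```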

I've made a simple ImageJ script available (via this thread) that uses a Raw data dump from DCRaw to separate the channel data into separate files, and optionally subtract the G1 and G2 channels. That can IMO produce very interesting information.

Of course the spatial displacement of the 4 channels should be taken into account when image detail is involved, but for G1/G2 analysis they are spatially very close (1.4x the sensel pitch diagonally and 2x orthogonally, compared to 2.8x and 2x for R and B), so especially in uniform (defocused) areas it's basically the quantified electronic differences that we measure (with a minimum of risk from e.g. amp glow, vignetting, or light fall-off). Between different color channels there are of course also the White-balance differences to take note of.

Hope that helps.

Cheers,
Bart
« Last Edit: July 16, 2013, 05:11:26 AM by BartvanderWolf »

opgr (Sr. Member, Posts: 1125)
« Reply #4 on: July 17, 2013, 01:21:16 AM »

Okay, how about this:

1. Suppose dG = G1-G2

2. Compute sqrt(avg(dG^2))

This should likely be a good estimator of the total noise. Note that you don't want avg(dG), which just equals avg(G1) - avg(G2) and would always return ~zero given enough pixels.

Then, to separate shot noise and read noise:
Take several measurements at different ISOs and plot a curve. The curve should decrease as ISO decreases, not to zero, but to the read-noise value.
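The two steps above, sketched on synthetic data (the level and noise values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic G1/G2 samples from a uniform patch, independent noise only.
sigma = 20.0
g1 = 1000.0 + rng.normal(0, sigma, 100_000)
g2 = 1000.0 + rng.normal(0, sigma, 100_000)
dG = g1 - g2

mean_dG = dG.mean()               # ~0: positive and negative differences cancel
rms_dG = np.sqrt(np.mean(dG**2))  # ~sigma*sqrt(2): the noise survives squaring
print(round(mean_dG, 2), round(rms_dG, 2))
```

Dividing rms_dG by sqrt(2) gives the per-channel figure; repeating the measurement at several ISO settings then traces out the curve, whose floor is the read noise.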
Regards,
Oscar Rysdyk
theimagingfactory

Iliah (Sr. Member, Posts: 410)
« Reply #5 on: July 17, 2013, 01:44:37 AM »

Can you verify that avg(g1) - avg(g2) is zero?
BartvanderWolf (Sr. Member, Posts: 3568)
« Reply #6 on: July 17, 2013, 02:00:15 AM »

Okay, how about this:

1. Suppose dG = G1-G2

2. Compute sqrt(avg(dG^2))

This should likely be a good estimator for total noise. Note, you don't want to take avg(dG) which just equals avg(G1)-avg(G2) because that would always return ~zero given enough pixels.

Then to separate shot noise and read noise:
Take several measurements for different iso, create a curve. The curve should reduce as iso reduces, not to zero, but to the read-noise value.

Hi Oscar,

Yes, that's the basic idea. It also allows separating the random (shot+read) noise from the pattern noise (PRNU) when we subtract it from the total noise. So one measurement on a single file can give us (with a bit of extra work and a few files) the main components that make up the total noise. That could provide a lot of insight.

Basically it could also be used for the Red and Blue channels of a common Bayer CFA arrangement, but those filtered sensels are spatially a bit more distant diagonally, so they may be a bit more sensitive to picking up uneven lighting, vignetting, light fall-off, and e.g. sensel-aperture mask-shading effects near the corners. However, RawDigger also has to consider alternative filter arrangements, which complicates the programming effort, I realize that.

Of course it is also possible to fit a gradual slope across the sampled area and calibrate it out, so the actual noise measurements gain even a bit more accuracy if the sample is not taken from the image center; it could additionally produce a Lens Cast Calibration (LCC) file, although that might be a bit outside the scope of what RawDigger is envisioned to do.

Interesting stuff.

Cheers,
Bart
alextutubalin (Newbie, Posts: 13)
« Reply #7 on: July 17, 2013, 02:03:02 AM »

Can you verify that avg(g1) - avg(g2) is zero?

I've downloaded a sample Sony A77 image from the imaging-resource site: ISO 200, several resolution targets, and two Color Checkers on a gray field.

The avg(g1) - avg(g2) is definitely not zero. For a 250x250 selection: G1avg = 1616, G2avg = 1626. Another selection: 1576/1586 (both selections are on a gray area).

Also, the difference varies :) with color:
 A2 (orange) patch on the Color Checker image (same camera, same sample image): 1372/1367
 A3 (deep blue) patch: 714/727 (a 2% difference!)
 B3 (green) patch: 1525/1526 (practically the same on green!)
 F1 (blue) patch: 3019/3038

So it looks like the two greens are slightly different in color response too.
 
« Last Edit: July 17, 2013, 03:25:43 AM by alextutubalin »

Alex Tutubalin
RawDigger: http://www.rawdigger.com
LibRaw: http://www.libraw.org

opgr (Sr. Member, Posts: 1125)
« Reply #8 on: July 17, 2013, 02:14:38 AM »

I've downloaded sample Sony A77 image from imaging-resource site. ISO200, several resolution targets and two Color Checkers on gray field.

The avg(g1) - avg(g2) is definitely not zero. For  250x250 selection: G1avg = 1616, G2avg = 1626. Another selection: 1576/1586.

Also, the difference varies :) with color:
 A2 (orange) patch on color checker image (same camera, same sample image): 1372/1367
 A3 (deep blue) patch: 714/727 ( 2% difference!)
 B3 (green) patch: 1525/1526 (practically same on green!)
 F1 (blue) patch: 3019/3038

So, it looks like two greens are slightly different in color response too
 

There are probably differences between the two green channels because of different spill-overs, different read-out channels, and simple statistics. Please compute the confidence interval of the difference. Like I said, the difference is approximately zero given enough pixels. It is not exactly zero, but I believe it is not a statistically relevant parameter. Taking the absolute value of the difference is far more relevant and interesting.




Regards,
Oscar Rysdyk
theimagingfactory

opgr (Sr. Member, Posts: 1125)
« Reply #9 on: July 17, 2013, 02:16:30 AM »

I've downloaded sample Sony A77 image from imaging-resource site. ISO200, several resolution targets and two Color Checkers on gray field.


Sony did have that funky sensor once with different colors for G1 and G2. Is the A77 one of those?
Regards,
Oscar Rysdyk
theimagingfactory

Iliah (Sr. Member, Posts: 410)
« Reply #10 on: July 17, 2013, 02:23:15 AM »

Sony did have that funky sensor once with different colors for G1 and G2. Is the A77 one of those?

No, it is not that extreme. But generally, in my testing, those two green channels can't be treated as equal.
alextutubalin (Newbie, Posts: 13)
« Reply #11 on: July 17, 2013, 02:27:34 AM »

Sony did have that funky sensor once with different colors for G1 and G2. Is the A77 one of those?

The 'funky' RGBE (Emerald) sensor was in the Sony F828 camera.

In the A77 the G1/G2 values are very close, but there is a systematic difference:
 on blue Color Checker patches, G1 is lower than G2 (up to a 1.5-2% difference)
 on green patches, G1 and G2 are nearly the same
 on yellow (and red) patches, G2 is lower than G1

Alex Tutubalin
RawDigger: http://www.rawdigger.com
LibRaw: http://www.libraw.org

alextutubalin (Newbie, Posts: 13)
« Reply #12 on: July 17, 2013, 02:57:46 AM »

There are probably differences in the two green channels because of different spill-overs, different read-out channels, and simple statistics. Please compute the confidence interval of the difference.

(Standard) deviation on relatively small samples (100x100 or 250x250) is due to photon noise (for 1000 electrons generated at a gain of 1e-/ADU, the deviation will be about 32).

So, statistically speaking, you need to shoot many samples and then average them (subtract an averaged dark frame, and do the other things astrophotographers do with their shots).

Anyway, for the Sony A77 (and for the Leica M9, just tested) the G1/G2 difference is systematic. It is very easy to see: just take a Color Checker shot (or download a ready image from Imaging Resource), run RawDigger, use the Grid Tool to capture the Color Checker data, export it to an Excel CSV file, and explore.
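A minimal sketch of the "export and explore" step, using a hypothetical CSV layout (the real Grid-tool export has more columns) and the A77 patch averages quoted earlier in the thread:

```python
import csv
import io

# Hypothetical per-patch CSV in the spirit of RawDigger's Grid-tool export;
# the G1/G2 averages are the ones reported above for the A77 sample.
data = """patch,G1,G2
A2 (orange),1372,1367
A3 (deep blue),714,727
B3 (green),1525,1526
F1 (blue),3019,3038
"""

ratios = {}
for row in csv.DictReader(io.StringIO(data)):
    g1, g2 = float(row["G1"]), float(row["G2"])
    ratios[row["patch"]] = 100.0 * (g2 - g1) / g1  # signed G2-vs-G1 difference, %

for patch, pct in ratios.items():
    print(f"{patch}: {pct:+.1f}%")
```

The signed percentages make the color dependence obvious at a glance: positive on the blue patches, near zero on green, negative on orange.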
Alex Tutubalin
RawDigger: http://www.rawdigger.com
LibRaw: http://www.libraw.org

BartvanderWolf (Sr. Member, Posts: 3568)
« Reply #13 on: July 17, 2013, 03:15:48 AM »

So, it looks like two greens are slightly different in color response too.

Hi Alex,

That's not uncommon. I see a similar discrepancy with my 1Ds3: its G1 sensels on average give up to a 1.605 ADU higher response than the G2 sensels, also in the image center. That is already an interesting observation, because it means that Raw converters can treat the G1 and G2 samples as separate channels or as a single channel, with different effects. For example, DCRaw and RawTherapee let the user choose whether the green sensels are used as the same channel or as different channels. Relatively poor G1/G2 calibration can e.g. cause luminance noise in smooth gradients (e.g. sky), so it is a useful metric by itself.

There can also be other biases introduced by e.g. vignetting, or by sensel-mask shading and tunneling effects as we get away from the center of the image, but for modestly sized areas these will not impact the noise analysis too much.

However, the averages over a reasonably sized sample area are usually not miles apart, which keeps the noise analysis based on their difference very useful and accurate enough: a given standard deviation sitting on top of a mean offset of 2 produces pretty much the same amplitudes.

Cheers,
Bart
BartvanderWolf (Sr. Member, Posts: 3568)
« Reply #14 on: July 17, 2013, 03:22:55 AM »

Can you verify that avg(g1) - avg(g2) is zero?

Hi Iliah,

Ideally, for noise analysis, one would like to subtract two Raw files in their entirety, just as astronomers do, to calibrate for noise per sensel and to eliminate the influence of PRNU. That would ensure that only random noise and exposure inaccuracies (shutter time and sticky aperture blades) could be present at the sensel level. Unfortunately, we do not always have exposure pairs available for analysis.

So, as a second-best option, we could consider the G1/G2 statistics, but as always we need to understand what we are looking at.

Cheers,
Bart
Iliah (Sr. Member, Posts: 410)
« Reply #15 on: July 17, 2013, 03:24:55 AM »

Dear Bart,

I would suggest trying to determine how exposure affects the G1/G2 ratio. Say we have a reasonably stable light source, like a halogen bulb, and a long lens covered with a styrofoam cup. Let's say the aperture is fixed 2-3 stops down from wide open, and only the shutter speed varies. If folks here are interested in shooting those and processing them, we will all have some numbers to evaluate.
opgr (Sr. Member, Posts: 1125)
« Reply #16 on: July 17, 2013, 04:22:05 AM »

Variance is a bitch, and reducing variance takes exponential effort. One possible way to see how safe your estimate really is: plot the average difference against the sample count. It should vary wildly initially and then vary less and less as the count increases. That gives a useful view of where the variance becomes irrelevant for your purposes, and where you can say with reasonable confidence what the average will be.

Another option would be to simply take the minimum difference of the 4 direct neighbours of any given pixel within a single CFA plane. This is additionally useful if you later want to use the differences for edge detection: an edge occurs at the maximum of those 4 differences, so that maximum should at least exceed the average Dmin to be considered relevant.
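The minimum-of-4-neighbours idea, sketched on a synthetic CFA plane (the plane is pure random data here, so Dmin is noise-dominated by construction; on a real image, Dmax would jump at edges):

```python
import numpy as np

rng = np.random.default_rng(1)
plane = 1000.0 + rng.normal(0, 20.0, (64, 64))  # one CFA plane (e.g. G1 only)

# Absolute differences to the 4 direct neighbours; the border row/column
# is cropped for brevity.
c = plane[1:-1, 1:-1]
diffs = np.stack([
    np.abs(c - plane[:-2, 1:-1]),   # up
    np.abs(c - plane[2:, 1:-1]),    # down
    np.abs(c - plane[1:-1, :-2]),   # left
    np.abs(c - plane[1:-1, 2:]),    # right
])
dmin = diffs.min(axis=0)   # robust, noise-dominated statistic
dmax = diffs.max(axis=0)   # edge-dominated on detailed subjects
print(round(dmin.mean(), 1), round(dmax.mean(), 1))
```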

Another thought: while the DCRaw code is an accomplishment in itself, the coding style is frankly horrible. It might very well introduce integer errors where we least expect them. For example, I see both of the following being used, and they can yield different results for signed integers:

(a+b)>>1
(a+b) / 2
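For what it's worth, the two forms really do disagree once a negative value sneaks in: in C, integer / truncates toward zero (since C99), while >> of a negative signed value is implementation-defined (most compilers emit an arithmetic, i.e. flooring, shift). A quick check, using Python ints, whose >> floors like that common C behaviour:

```python
from math import trunc

# Python's >> on negative ints floors (matching the arithmetic shift most
# C compilers emit); math.trunc mimics C's truncate-toward-zero division.
a, b = -3, 0
shift = (a + b) >> 1        # floor(-1.5) = -2
c_div = trunc((a + b) / 2)  # C-style (a+b)/2: trunc(-1.5) = -1
print(shift, c_div)         # -2 -1
```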
Regards,
Oscar Rysdyk
theimagingfactory

alextutubalin (Newbie, Posts: 13)
« Reply #17 on: July 17, 2013, 04:46:14 AM »

Variance is a bitch, and reducing variance takes exponential effort. One possible way to see how safe your estimate really is: plot the average difference in relation to count. It should vary wildly initially, and then vary less and less as the count increases. You would get a useful view of where variance becomes irrelevant for your purposes, and where you can relatively confidently say what the average will be.
I think it is better (and more scientific) to use a t-statistic criterion. According to it, for 10k pixels (a 200x200 Bayer sample) with an average value of 1000 and the same standard deviation of 33 in both samples (photon noise), a 1-level difference is significant at the 0.9 confidence level, and a 2-level difference is enough for 0.99 confidence.
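These significance figures can be reproduced with a plain two-sample z calculation (a sketch assuming, as stated, equal sample sizes and equal standard deviations):

```python
from math import sqrt

# Two samples of n sensels each, same standard deviation sd.
n, sd = 10_000, 33.0
se = sd * sqrt(2.0 / n)  # standard error of the difference of the means

z1 = 1.0 / se            # z-score of a 1-ADU mean difference
z2 = 2.0 / se            # z-score of a 2-ADU mean difference
# Compare against two-sided critical values: 1.645 (90%), 2.576 (99%).
print(round(z1, 2), round(z2, 2))  # 2.14 4.29
```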

For example, I see both of the following being used and they can yield different results for signed integers:

(a+b)>>1
(a+b) / 2


In RAW data all values are non-negative and relatively small (10-16 bits), way smaller than the largest possible signed 32-bit integer.
With these restrictions, both averaging methods produce the same results.

« Last Edit: July 17, 2013, 05:14:16 AM by alextutubalin »

Alex Tutubalin
RawDigger: http://www.rawdigger.com
LibRaw: http://www.libraw.org

Iliah (Sr. Member, Posts: 410)
« Reply #18 on: July 17, 2013, 05:14:44 AM »

Variance is a bitch

Dear Oscar,

This discussion may remain a bit theoretical without experiments and measurements to support or reject the theories. The experiments for this particular problem are not very difficult, but they are very instructive. I think it would be really good to study real data.
opgr (Sr. Member, Posts: 1125)
« Reply #19 on: July 17, 2013, 05:22:30 AM »
ReplyReply

I think, it is better (and more scientific) to use t-statistics criterion. According to it, for 10k pixels (200x200 bayer sample) with average value 1000 and same standard deviation 33 (photon noise), 1-level difference is significant for 0.9 level of confidence and 2-level difference is enough for 0.99 confidence.

Yes, with two caveats:
1. You have to take the stddev of the actual samples (not a theoretical value derived from photon noise).
2. The criterion is meant for truly uncorrelated data; I don't know whether it can be used indiscriminately on correlated data, which is what you are trying to establish: you are basically trying to answer the question "is there a systematic deviation between the green channels?" with some confidence level.

I wonder whether it is worth the effort, since we are not so much interested in that difference as in the actual noise, possibly measured from the separate CFA planes, which also eliminates any cross-channel correlation.

(Just wondering out loud, I am in no way dismissing the effort and usefulness of RAWDigger and all of its statistics. Besides, my statistics knowledge is too rusty to be trusted anyway.)




In RAW data all values are positive and relatively small (10-16 bit), way smaller than largest possible signed 32-bit integer. With this restrictions both averaging methods will produce same results.

Yes, except that the lossless RAW compression of some manufacturers represents difference signals, which obviously can be negative. That eliminates one of the restrictions mentioned. If DCRaw is doing the conversion to RGB and doing so incorrectly… (Not to mention incorrect rounding on the in-camera encoding chip, to name just one of many other pitfalls.)

The point, I suppose, is this: there is a "certain" amount of "uncertainty" in the numbers you use for statistics. If what you are trying to measure falls too far below that uncertainty, you should consider whether measuring it is worth the effort. You may draw incorrect conclusions because there might not be enough precision available in your data. The precision problem is not your fault. Drawing the wrong conclusions, however...


 
Regards,
Oscar Rysdyk
theimagingfactory
