Author Topic: DxO Sensor Mark  (Read 15165 times)
Peter van den Hamer
Newbie
Posts: 43
« Reply #40 on: February 02, 2011, 04:39:24 PM »

I don't think there is really a useful quantitative measure of chroma noise for raw data.  One doesn't really have chroma data until after demosaic and transform to a color space, and that depends on a number of other factors (demosaic algorithm, transform method and input profile used, for example).

Why do you think that the color sensitivity metric is measured before demosaicing and transform to a standard color space? The article you quoted on color sensitivity and color filter response curves suggested to me that the numbers are computed in a standard color space like sRGB. Why else would an ill-conditioned 3x3 color transformation matrix decrease the color sensitivity score (as in the Canon 500D) if the number of discernable colors is estimated in the color space of the raw file?

Peter
ejmartin
Sr. Member
Posts: 575
« Reply #41 on: February 02, 2011, 09:24:39 PM »


Why do you think that the color sensitivity metric is measured before demosaicing and transform to a standard color space? The article you quoted on color sensitivity and color filter response curves suggested to me that the numbers are computed in a standard color space like sRGB. Why else would an ill-conditioned 3x3 color transformation matrix decrease the color sensitivity score (as in the Canon 500D) if the number of discernable colors is estimated in the color space of the raw file?

Peter

Looking at the DxO documentation, it's not at all clear what they are measuring, but you are right -- the score could only depend on the matrix transform if they were measuring color sensitivity within the sRGB gamut.  However, given the way they do all their other measurements, I suspect the result is calculated from the raw data rather than via demosaic etc.  From the error ellipsoid in the raw data, one can transform it to an error ellipsoid in a standard color space and determine the number of such ellipsoids that fit inside sRGB gamut.
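For intuition, the ellipsoid-counting idea can be sketched numerically. The matrix and noise levels below are made-up illustrative values (not measurements of any real camera), and a unit RGB cube stands in for the sRGB gamut:

```python
import numpy as np

# Hypothetical 3x3 camera-to-sRGB matrix (illustrative values only).
M = np.array([[ 1.8, -0.6, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.1, -0.5,  1.4]])

# Assumed per-channel noise std devs in the raw RGB data (independent channels,
# so the raw-space error ellipsoid has a diagonal covariance).
sigma_raw = np.array([0.010, 0.008, 0.012])
cov_raw = np.diag(sigma_raw ** 2)

# Transform the raw error ellipsoid to sRGB: cov' = M cov M^T.
cov_srgb = M @ cov_raw @ M.T

# Ellipsoid volume scales with sqrt(det(cov)); count how many such ellipsoids
# fit in the gamut (the unit RGB cube stands in for the sRGB gamut here).
ellipsoid_volume = (4.0 / 3.0) * np.pi * np.sqrt(np.linalg.det(cov_srgb))
n_colors = 1.0 / ellipsoid_volume
print(f"distinguishable colors (order of magnitude): {n_colors:.2e}")
```

A matrix with large coefficients inflates the transformed covariance and so lowers the count, which is consistent with the Canon 500D example mentioned earlier.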
emil

Peter van den Hamer
Newbie
Posts: 43
« Reply #42 on: February 03, 2011, 04:11:50 AM »


Looking at the DxO documentation, it's not at all clear what they are measuring, but you are right -- the score could only depend on the matrix transform if they were measuring color sensitivity within the sRGB gamut.  However, given the way they do all their other measurements, I suspect the result is calculated from the raw data rather than via demosaic etc.  From the error ellipsoid in the raw data, one can transform it to an error ellipsoid in a standard color space and determine the number of such ellipsoids that fit inside sRGB gamut.

So we agree on the need to do color space conversion and above all the need to avoid 'etc' (e.g. sharpening, noise reduction and whatever a real world raw converter might do).

Hence my attempt to explain the metric by linking it to the term "chroma noise". "Color sensitivity" or, worse, "portrait use case" don't give much of a hint.

The demosaicing indeed sounds unnecessary because there is no spatial information in the signal. But they need to estimate per-pixel noise levels starting from Bayer matrix data. A quick and dirty approach would be to demosaic. But I guess you are right (given their "science style") that they measure noise in each color plane and then compensate for the differences in resolution between the various color planes. Thanks.
« Last Edit: February 03, 2011, 03:42:44 PM by Peter van den Hamer »

ejmartin
Sr. Member
Posts: 575
« Reply #43 on: February 03, 2011, 08:49:48 AM »


The problem with using demosaic is that it adds a whole host of poorly controlled variables.  Demosaic error will be a substantial contributor to chroma noise, since it is a mis-estimation of missing colors.  Just look at the variety of algorithms that have been coded into dcraw/libraw.

BTW, there is a second possibility for such a measure -- convert to Lab rather than sRGB using a matrix transform.  Then instead of determining the number of distinguishable colors within the sRGB gamut, one could determine the number of colors within the 'camera gamut', i.e. the image of the XYZ parallelepiped bounded by the camera primaries.  The latter would measure the total number of colors the camera can reproduce, rather than just the number within the sRGB gamut.  Typical 'camera gamuts' defined this way are closer to ProPhoto RGB.
emil

Ernst Dinkla
Sr. Member
Posts: 2725
« Reply #44 on: February 04, 2011, 09:42:26 AM »


Peter,

A nice explication of the DxO numbers and a good defence here in the forum. As far as I can follow the subject :-) You deserve more than an aluminum medal I would say.

On the good color accuracy of Capture One as mentioned in the comments: isn't a sensor without an anti-aliasing filter the best base to start from, and the more likely cause of that reputation? In that sense the K5 sensor should be quite capable too, a camera I recommended (before I discovered your article here) to an artist with a tight budget who wanted to document his paintings. Whatever the value of the DxO categories (landscape, sports, etc.), for the gamut volume/color distinction the term "reproduction photography" would have been more adequate than "portrait".


With kind regards, Ernst Dinkla

New: Spectral plots of +230 inkjet papers:
http://www.pigment-print.com/spectralplots/spectrumviz_1.htm

ErikKaffehr
Sr. Member
Posts: 6918
« Reply #45 on: February 05, 2011, 02:15:37 AM »


Hi,

I agree with all your points. Regarding color accuracy, much depends on color processing: it's about the sensor, raw processing, color profiles and so on. I'd suggest that using a Macbeth ColorChecker is a good idea. In my Windows days I used a program called Picture Window Pro that could use a ColorChecker to do accurate color matching.

http://dl-c.com/content/view/47/74/


Best regards
Erik

Peter,

A nice explication of the DxO numbers and a good defence here in the forum. As far as I can follow the subject :-) You deserve more than an aluminum medal I would say.

On the good color accuracy of Capture One as mentioned in the comments: isn't a sensor without an anti-aliasing filter the best base to start from, and the more likely cause of that reputation? In that sense the K5 sensor should be quite capable too, a camera I recommended (before I discovered your article here) to an artist with a tight budget who wanted to document his paintings. Whatever the value of the DxO categories (landscape, sports, etc.), for the gamut volume/color distinction the term "reproduction photography" would have been more adequate than "portrait".


With kind regards, Ernst Dinkla

New: Spectral plots of +230 inkjet papers:
http://www.pigment-print.com/spectralplots/spectrumviz_1.htm


Peter van den Hamer
Newbie
Posts: 43
« Reply #46 on: February 05, 2011, 06:38:39 AM »


On the good color accuracy of Capture One as mentioned in the comments: isn't a sensor without an anti-aliasing filter the best base to start from, and the more likely cause of that reputation?

Nice to hear from you again.

Would removal/omission of an anti-aliasing filter (as in Phase One backs) improve color accuracy? In theory it shouldn't for larger areas: the filter just blurs the image slightly to remove details that the sensor couldn't resolve, and a blurred color test patch should have the same color as a sharp test patch. But aliasing with a low-res Bayer matrix sensor can give weird colors on finely striped details under special conditions.

From what I read, it sounds like the color accuracy of a Phase One back and a Capture One raw converter "just" reflect a lot of attention to accurate profiling of the camera. It probably helps if the entire workflow is supported by one vendor. Maybe they calibrate individual backs in the factory. Undoubtedly some of the legend is also just good marketing, given that user "error" can easily screw up color accuracy ;-)

In that sense the K5 sensor should be quite capable too, a camera I recommended (before I discovered your article here) to an artist with a tight budget who wanted to document his paintings. Whatever the value of the DxO categories (landscape, sports, etc.), for the gamut volume/color distinction the term "reproduction photography" would have been more adequate than "portrait".

Just to avoid any misunderstandings: DxOMark Sensor does not directly measure color accuracy, but it does measure color noise.
Color accuracy is probably more about doing things right (e.g. illuminant, homogeneous lighting, Raw vs JPG, printer profiles, screen profiles) than about having the best equipment. That said, the K5 (or D7000 or A580) is currently state of the art for its price. I can imagine that X-Rite's ColorChecker Passport would help for reproductions, although a graphic artist may only want to take a snapshot with flash and use the JPG straight from the camera. Until you convince them to calibrate their monitor, the rest can wait.
bjanes
Sr. Member
Posts: 2714
« Reply #47 on: February 05, 2011, 10:37:49 AM »


It is the number of distinguishable colors in the raw data output:

http://dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/Color-sensitivity

basically the number of ellipsoids the size of a noise std dev deltaR*deltaG*deltaB that fit within the 'gamut' of the camera (the overall range of R,G,B it can record).

DxOMark does not fully document many of its methods, but they do state that the data are input into DxO Analyzer, and one can glean a lot of information from the documentation for that product. For example, color sensitivity is plotted as ellipses on a CIE Lab chart. As Emil has stated, the color sensitivity of the camera depends on how many of these ellipses (actually ellipsoids, in a three-dimensional plot) fit into the camera space.



How much sensitivity is needed depends on the color discrimination of the human visual system, which can be shown by MacAdam ellipses. The plot here is in the CIE xyY space at a given luminance. The CIE xyY space is not perceptually uniform, and the greens are exaggerated. Again, an ellipsoid-fitting calculation could be done to determine the number of colors.



So it is a measure of color richness. As a practical matter, it should be combined with information from the metamerism index (the degree to which the subspace spanned by the CFA spectral responses overlaps with that of the CIE standard observer) as well as the information that DxO measures on the map from the camera's color primaries to sRGB primaries. For instance, if there are large coefficients in the latter, one will have larger chroma noise when mapping to standard output color spaces; see

http://dxomark.com/index.php/en/Our-publications/DxOMark-Insights/Canon-500D-T1i-vs.-Nikon-D5000/Color-blindness-sensor-quality

Basically large coefficients in the color matrix lead to amplification of chroma noise. 
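The amplification effect is easy to demonstrate numerically. The two matrices below are invented for illustration (neither is a real camera profile); the one with large off-diagonal coefficients visibly inflates the noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-pixel raw noise around a neutral gray (assumption: independent
# Gaussian noise of equal strength in R, G and B).
noise = rng.normal(0.0, 0.01, size=(100_000, 3))

# Two invented color matrices: a mild, well-conditioned one, and one with the
# large coefficients that strongly overlapping CFA responses would require.
M_mild  = np.array([[ 1.2, -0.1, -0.1],
                    [-0.1,  1.2, -0.1],
                    [-0.1, -0.1,  1.2]])
M_large = np.array([[ 2.5, -1.2, -0.3],
                    [-1.0,  2.8, -0.8],
                    [ 0.2, -1.5,  2.3]])

# Apply each matrix and compare the resulting per-channel noise.
std_mild  = (noise @ M_mild.T).std(axis=0).mean()
std_large = (noise @ M_large.T).std(axis=0).mean()
print(f"noise after mild matrix:              {std_mild:.4f}")
print(f"noise after large-coefficient matrix: {std_large:.4f}")
```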

Color accuracy is demonstrated by a CIE Lab plot at a given luminance, similar to what is done in Imatest:



Actually, the Phase One P65+ has the same problem in the red channel as the Canon 500D analysis to which Emil refers: it has nearly as much response to green as red.



The D3x red channel has a better response and its matrix coefficients are smaller. However, the greater sensor area of the P65+ results in less chrominance noise, and its print color sensitivity is slightly better than that of the D3x, while the D3x has a better per-pixel (screen) sensitivity. Whether such differences can be perceived in prints is open to question.

Regards,

Bill



Peter van den Hamer
Newbie
Posts: 43
« Reply #48 on: February 10, 2011, 05:57:43 PM »


I did some minor maintenance on the diagrams in the DxOMark Sensor article:
  • new test data for the Olympus PEN EPL2 and Olympus XZ-1 (Feb 4th 2011)
  • existence of 80 MPixel cameras (extended MPixel scale)
  • shifted Launch Date scale (to prepare for new models)
  • added Canon 600D and Canon 1100D and Phase One IQ180 and Leaf Aptus-II 12 (also 80 MPixel) to Figure 3

Note that both Canon cameras and both 80 MPixel medium-format models have not been tested yet by DxOMark, so you will only encounter them in Figure 3.

Peter
BartvanderWolf
Sr. Member
Posts: 3012
« Reply #49 on: February 11, 2011, 05:36:00 AM »


I did some minor maintenance on the diagrams in the DxOMark Sensor article:

Hi Peter,

Thanks for that. There is however something that 'annoys' me, since it may mislead some casual readers.

Your comparison between coarser or denser sampling suggests that the signal to noise statistics are basically identical. You also state that "This gets us back to “smaller pixels give higher noise levels per-pixel”. But per-sensor-pixel noise is the wrong metric for prints (or, for that matter, any other way to view an image in its entirety)".

The "any other way to view an image in its entirety" only applies to identical-area measurements and same-size (downsampled) output (and disregards the effect of non-linear gamma conversion of noise). It does not hold when the additional sensor density is required and used for producing larger output (a major reason why I would consider buying a system with a higher MP count such as an MF back). Then the reduced per-pixel DR due to smaller charge capacity will hurt image quality. Sensels will either saturate, or shadows will drown in noise, when the scene offers common luminance ranges. Downsampling saturated or read-noise-dominated sensels will not produce the same statistical S/N ratio as a larger sensel would.

That's why the DxO DR data are given for both downsampled (identical output/"print" size), and per sensel (native size) scenarios. Those who need large format output should seriously look at the per sensel data rather than the downsampled "Print" metrics.

Cheers,
Bart
Peter van den Hamer
Newbie
Posts: 43
« Reply #50 on: February 11, 2011, 11:42:14 AM »


"Thanks for that. There is however something that 'annoys' me, since it may mislead some casual readers."

It is indeed tricky material. It is easy to compare things in the wrong way and then reach very wrong conclusions, especially when the math is not entirely intuitive.

You seem to disagree with a key conclusion, so it may help to walk through the intermediate steps explained in the article. So let's see what we agree/disagree on, step by step:

1) If you need a bigger print, you may need more MPixels than for a smaller print. I think we agree there. Some people want razor-sharp large prints; fine (technically I don't care if it is a real need or not). Then a 40, 60 or 80 MPixel camera could help. A 10 MPixel camera will simply lose some information when the scene contains enough high-frequency information (Nyquist).

2) DxOMark Sensor scores are basically only about noise and dynamic range. Resolution may be relevant, but is an independent measurement. DxO says: if you need/desire a particular number of MPixels, just filter out any cameras that don't have that. For any camera that has enough resolution, you then compare the noise as follows...

3) To compare noise and dynamic range of cameras, you have to convert them to the same resolution. Say you need 20 MPixels, but want to compare a 40 MPixel camera to a 160 MPixel camera: noise levels or dynamic range per pixel/sensel simply give a misleading answer. Per-sensel measurements would certainly tell you that the 160 MPixel camera has higher per-sensel noise. In reality the 160 MPixel camera may have better, worse or equal noise to the 40 MPixel camera when compared at the same resolution. This "same resolution" could be 20 MPixels or 40 MPixels, etc. Your point that smaller pixels will hold less charge is correct, but the conclusion that they therefore have a worse dynamic range no longer holds when you scale from per-pixel to per-image.

4) From your text I think you may believe (3) and the scaling formulas explained in the article when applied to "downsampling". Although I deliberately avoided formulas, the math for this is in my article and in a more technical white paper by DxO itself.

5) "Upsampling" (uprezzing in Michael-speak) is not relevant if you only consider cameras that have enough resolution. So in practice, you could avoid having to worry about it. One could, if somebody really wants this, extend the story to cover upsampling as well as downsampling, but the story becomes really tricky. You can upsample without increasing per-pixel noise simply by duplicating pixels or by linear interpolation, but then you get suspicious artefacts in the image. The trick to make it look natural is to add just the right amount of noise (see stories about advanced sharpening techniques) to mask this. Actually, the fact that DxOMark has a few 4 and 6 MPixel cameras that are scaled to 8 MPixel shows that they use their scaling formula for both downscaling and upscaling. My gut feeling is that this is technically/scientifically fair, but to be sure you need to read up on pretty theoretical information theory.
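For what it's worth, the scaling in step 3 can be written down in a few lines. This is my reading of the normalization idea (uncorrelated noise, so averaging k pixels improves SNR by sqrt(k)); the SNR figures are invented for illustration:

```python
import math

def normalized_snr_db(snr_screen_db, n_pixels, n_ref=8e6):
    """Scale per-pixel ("screen") SNR in dB to a fixed reference resolution.
    Assumes uncorrelated noise, so averaging k pixels gains sqrt(k) in SNR."""
    return snr_screen_db + 20.0 * math.log10(math.sqrt(n_pixels / n_ref))

def normalized_dr_stops(dr_screen_stops, n_pixels, n_ref=8e6):
    """The same correction expressed in stops (factors of two) for dynamic range."""
    return dr_screen_stops + math.log2(math.sqrt(n_pixels / n_ref))

# Invented example: a 160 MPixel sensor with 4x the pixel count of a 40 MPixel
# one on the same sensor area has ~6 dB worse per-pixel SNR (quarter-size
# pixels), yet lands at roughly the same normalized ("print") SNR.
print(f"40 MP:  {normalized_snr_db(32.0, 40e6):.2f} dB")
print(f"160 MP: {normalized_snr_db(26.0, 160e6):.2f} dB")
```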

To get back to a more practical level:
Those who need large format output should seriously look at the per sensel data rather than the downsampled "Print" metrics.

I really don't recommend this unless all the cameras you are comparing happen to have the same resolution. You have to compensate for resolution differences to compare noise levels. If you want to see resolution itself, check resolution numbers or, preferably, real resolution measurements that include the optics (lens, CFA, AA filter).

That's why the DxO DR data are given for both downsampled (identical output/"print" size), and per sensel (native size) scenarios.

Interesting question. Why provide per-pixel measurements if Peter (and I think also DxO) don't recommend using them for comparison purposes? As far as I know they don't even give you an easy way to compare per-pixel ("screen") data. I consider the per-pixel data to be raw data: useful for manufacturers or others (like me) who want to check the computations.

Peter

PS: I ignored your "non-linear gamma correction" as I don't think it impacts the discussion. You can do gamma correction after generating the appropriate output resolution. There should only be minor differences if you do the corrections in the "wrong" order.
joofa
Sr. Member
Posts: 485
« Reply #51 on: February 11, 2011, 12:22:16 PM »



3) To compare noise and dynamic range of cameras, you have to convert them to the same resolution. Say you need 20 MPixels, but want to compare a 40 MPixel camera to a 160 MPixel camera: noise levels or dynamic range per pixel/sensel simply give a misleading answer. Per-sensel measurements would certainly tell you that the 160 MPixel camera has higher per-sensel noise. In reality the 160 MPixel camera may have better, worse or equal noise to the 40 MPixel camera when compared at the same resolution. This "same resolution" could be 20 MPixels or 40 MPixels, etc. Your point that smaller pixels will hold less charge is correct, but the conclusion that they therefore have a worse dynamic range no longer holds when you scale from per-pixel to per-image.


In talking about "per-image" statistics, what is the notion of "noise" when the "signal", and how that signal is affected by whichever methodology is chosen to do the resampling, is not even considered in your analysis above? Can this type of noise be treated in isolation from the signal?

Some related readings:

http://forums.dpreview.com/forums/read.asp?forum=1022&message=37680938
http://forums.dpreview.com/forums/read.asp?forum=1012&message=37572900


Joofa
« Last Edit: February 11, 2011, 12:24:23 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Peter van den Hamer
Newbie
Posts: 43
« Reply #52 on: February 11, 2011, 04:35:42 PM »


In talking about "per-image" statistics, what is the notion of "noise" when the "signal", and how that signal is affected by whichever methodology is chosen to do the resampling, is not even considered in your analysis above? Can this type of noise be treated in isolation from the signal?

I looked at both postings, but not the entire threads. So I am trying to guess what you mean, and where you might be going with this.

Q: Can noise levels be measured using regular photos, e.g. of a cat? It would be hard to distinguish signal from noise, wouldn't it?
A: That's not how the DxOMark measurements were done.
DxOMark's "Protocol" documentation says that they measure noise using RAW images of neutral-density filters (= patches) that are backlit by a large diffuse light source. You can measure noise by looking at spatial (repeatable) variation, i.e. fixed-pattern noise, and by looking at temporal fluctuations (temporal noise) when you take many images of the identical source. As far as I know (I asked them in an e-mail) their published numbers are FPN and temporal noise added up. This means that in theory one image suffices.
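A rough sketch of that FPN/temporal separation, with arbitrary illustrative noise magnitudes (a real measurement protocol is far more careful than this toy simulation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 64 exposures of a flat patch: a fixed per-pixel offset (the FPN)
# plus independent temporal noise per exposure. Both magnitudes are invented.
fpn = rng.normal(0.0, 2.0, size=(128, 128))                    # fixed-pattern part
frames = 500.0 + fpn + rng.normal(0.0, 5.0, size=(64, 128, 128))

# Temporal noise: variation of each pixel across exposures, averaged over pixels.
temporal = frames.std(axis=0).mean()
# FPN: variation of the per-pixel mean (temporal noise averages away) across the sensor.
fixed_pattern = frames.mean(axis=0).std()

print(f"temporal noise ~ {temporal:.2f}, FPN ~ {fixed_pattern:.2f}")
```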

Q: But could it be done with a real photo, e.g. of a statue of a cat?
A: I wouldn't. And DxOMark Sensor doesn't.
You would need to take multiple images to be able to distinguish noise (variation) from signal (average). But this would measure less noise than what DxO defines as noise (because you would miss FPN like dark-current non-uniformity and photo-response non-uniformity). And a detailed scene would make the setup unnecessarily sensitive to vibrations and drift: you would see fake noise at sharp edges.

Q: When noise measured at one resolution is scaled to a reference resolution, is this sensitive to the scaling algorithm?
A: No. In DxOMark's procedure they measure noise of a 20 MPixel sensor at 20 MPix (MSensel) resolution. Then the resulting signal-to-noise ratio is corrected using a simple theoretical model. So there is no rescaling algorithm involved. In my article, I provide one or two examples of this that I checked by hand.

Q: Would you get the exact same results if you took the test image, rescaled it and then measured noise? In other words is the "simple theoretical model" accurate?
A: The model used for scaling corresponds to what a simple binning algorithm would do (e.g. replacing 2x2 small pixels by one fat one), assumes a competent implementation (e.g. doing the measurements and calculations with enough precision), and assumes Poisson noise with no correlation between pixels. It should thus be pretty accurate for the photon shot noise and dark-current noise. The scaling may not apply to the FPN, but FPN scaling cannot be predicted anyway, and it should be smallish. So the model is pretty accurate. And significantly, the model doesn't need to be fully accurate: it is just meant to provide a handicap that compensates for resolution differences, not to accurately simulate actual devices.
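That binning model is easy to check with a quick simulation, assuming pure Poisson noise on a uniform patch (the idealized case the model covers; the mean count of 1000 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson "photon" counts for a uniform patch (assumed mean of 1000 per sensel).
raw = rng.poisson(1000, size=(512, 512)).astype(float)
snr_per_sensel = raw.mean() / raw.std()

# 2x2 binning: replace each block of four small sensels by their sum ("one fat one").
binned = raw.reshape(256, 2, 256, 2).sum(axis=(1, 3))
snr_binned = binned.mean() / binned.std()

print(f"per-sensel SNR: {snr_per_sensel:.1f}")   # ~ sqrt(1000) ~ 31.6
print(f"binned SNR:     {snr_binned:.1f}")       # ~ 2x higher, as the model predicts
```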

Q: Do the numbers based on images of test patches have relevance to a real scene? Like Schroedinger's cat or the statue of a cat or somebody's cat?
A: Yes. The overall behavior of a sensor is reasonably well understood. Just like you can characterize the noise in an audio amplifier, rather than having to measure noise specifically when playing Beethoven sonatas or even Neil Young.

Q: But what if the test patches do not generate homogeneous light patches on the sensor? How to deal with offset (=blackpoint) in the processing?
A: Measurement setups are indeed never perfect. These are serious issues. I mentioned those kinds of problems in an earlier posting.
Engineers and scientists will point out that numerous questions remain about test details for any precision measurement: e.g. light source homogeneity, light source stability, finite test patch size, vignetting, dust on the source, dust on the optics. I can assure you (I worked for years in labs) that precision measurements are a major headache. Some of these issues are nowadays covered by international standards for which the experts jointly develop measurement protocols. DxOMark is active in some of these committees (source: LinkedIn and private communications). And DxO says that outside engineers regularly get to see the setup and discuss the procedures used. This is normal in engineering: if you challenge my measurement results, I either need to exhaustively document the measurement details for you to review, or you send in experts to see if they can find a flaw in the measurements. You can bet that a major manufacturer will contact DxOMark whenever their products get lower scores than hoped for.
You can do a rough check yourself by examining the slopes of various graphs against theory. But the data seems good enough for comparing sensors. And checking for ever more subtle pitfalls in measurements is best left to the manufacturers who hope to see performance increases (that are increasingly hard to measure) in their latest designs.

By the way, http://peter.vdhamer.com/2010/12/25/harvest-imaging-ptc-series/ is a posting about how to measure noise in sensors. I just summarized the material. The source is Albert Theuwissen, an expert on sensor design and modeling.
joofa
Sr. Member
Posts: 485
« Reply #53 on: February 11, 2011, 04:55:12 PM »


Peter,

The point is that one can live in a world of theory: photon shot noise, pixel-level DR measurement, etc. The real world does not necessarily operate like this all of the time. At the end of the day, the usual problem is that we have a single image, and all our notions of signal, noise, and SNR, i.e. of "image-level" statistics, must come from the image samples. If the image has FPN buried in it then so be it. Can't do anything about that. What has happened has happened.

The sensor people have a very different goal in life. They can even spend their life with a single pixel as all that matters to them is the notion of DR, noise etc. on that pixel. But that is not necessarily the goal of photography and any image processing applied to it.

Either we admit that we have too little theory to talk meaningfully about image-level statistics, or we try to interpret image-level statistics in a different light: one that takes the usual "sensor-level" quantities such as shot noise, read noise and DR as its starting point and initial conditions, and builds from them a theory that explains things in practice.

Sincerely,

Joofa
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Peter van den Hamer
Newbie
Posts: 43
« Reply #54 on: February 11, 2011, 06:54:36 PM »


At the end of the day, the usual problem is that we have a single image, and all our notions of signal, noise, and SNR, i.e. of "image-level" statistics, must come from the image samples. If the image has FPN buried in it then so be it. Can't do anything about that. What has happened has happened.

The sensor people have a very different goal in life. They can even spend their life with a single pixel as all that matters to them is the notion of DR, noise etc. on that pixel. But that is not necessarily the goal of photography and any image processing applied to it.

Joofa,

There are indeed people who look at noise and DR top down (users, people who review cameras) and bottom up (experts in the underlying technology). I looked into both because the bottom-up expertise can help find problems in the top-down experience. And vice versa.

DxOMark seems pretty close to your ideal of system-level testing: they test unmodified production cameras by literally taking pictures of gray charts under varying ISO settings. This creates raw files written to memory cards. The patches on the gray charts happen to be round and glass rather than rectangular and printed on paper. But that is because they need to sometimes measure at higher precision than was needed for yesterday's cameras.

Another thing that may bug people is that they report resolution and noise results quite separately from each other: they are 2 different benchmarks (one does ONLY noise, the other does MAINLY resolution). It certainly deviates from the more subjective approach of taking an image of a tree and concluding "the 80 MPixel back is a lot better than my 60 MPixel back".

The final point that could seem strange is that they show all their data as numbers. You don't get to see the actual patches like on other review sites. I can imagine that this doesn't appeal to some types of users. For quick-and-dirty choices, DxOMark's top-level score should be more than enough. But they certainly don't provide test pictures taken in the field like, say, Photozone.de does to supplement its graphs. DxOMark's strategy is apparently to link to other reviews that specialize in that kind of thing.

Peter
« Last Edit: February 12, 2011, 09:46:08 AM by Peter van den Hamer »

Peter van den Hamer
Newbie
Posts: 43
« Reply #55 on: February 12, 2011, 11:33:16 AM »


I fixed some Greek letters that had dropped out during the text transfer. If you find any other errors, please let me know.
ErikKaffehr
Sr. Member
Posts: 6918
« Reply #56 on: February 12, 2011, 02:09:53 PM »


Hi,

The problem with sample images is that very few are usable. Very few images have in-focus details in the corners. Even if the images are OK, it may be impossible to compare two different lenses if subjects or conditions are not identical.

Best regards
Erik

Joofa,

There are indeed people who look at noise and DR top down (users, people who review cameras) and bottom up (experts in the underlying technology). I looked into both because the bottom-up expertise can help find problems in the top-down experience. And vice versa.

DxOMark seems pretty close to your ideal of system-level testing: they test unmodified production cameras by literally taking pictures of gray charts under varying ISO settings. This creates raw files written to memory cards. The patches on the gray charts happen to be round and glass rather than rectangular and printed on paper. But that is because they need to sometimes measure at higher precision than was needed for yesterday's cameras.

Another thing that may bug people is that they report resolution and noise results quite separately from each other: they are 2 different benchmarks (one does ONLY noise, the other does MAINLY resolution). It certainly deviates from the more subjective approach of taking an image of a tree and concluding "the 80 MPixel back is a lot better than my 60 MPixel back".

The final point that could seem strange is that they show all their data as numbers. You don't get to see the actual patches like on other review sites. I can imagine that this doesn't appeal to some types of users. For quick-and-dirty choices, DxOMark's top-level score should be more than enough. But they certainly don't provide test pictures taken in the field like, say, Photozone.de does to supplement its graphs. DxOMark's strategy is apparently to link to other reviews that specialize in that kind of thing.

Peter
Peter van den Hamer
Newbie
Posts: 43
« Reply #57 on: February 12, 2011, 04:50:44 PM »


FYI: I fixed a typo in Figures 2 and 3 (Powershot G7 -> G9).
I also added 11 new cameras to Figure 3. These are important cameras that haven't been tested (yet).
The fact that they are listed is not an indication that all will be tested. We can guess that several (e.g. the Canon 600D) will be tested, while some may not be (the Hasselblads? the Leica S2).

[Update: DxO wrote in a forum that they would like to get hold of (rent/borrow) a Leica S2]

Panasonic / Lumix DMC GF2 / 12.1 MPixel
Canon / EOS 1100D / 12.2
Fujifilm / FinePix X100 / 12.3
Olympus / SP 610 UZ / 14.0
Samsung / NX 11 / 14.6
Canon / EOS 600D / 17.9
Leica / S2 / 37.5
Hasselblad / H4D-50 / 50.1
Hasselblad / H4D-60 / 60.0
Phase One / IQ180 / 80.0
Leaf / Aptus-II 12 / 80.0

« Last Edit: February 13, 2011, 04:23:54 AM by Peter van den Hamer »

joofa
Sr. Member
Posts: 485
« Reply #58 on: February 12, 2011, 06:10:04 PM »

Peter,

I understand what you are saying and appreciate the effort regarding measuring "sensor-level" statistics. But my point was that such numbers don't necessarily describe "image-level" statistics. The way I see it we should make a distinction between the two.

Joofa
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Peter van den Hamer
Newbie
Posts: 43
« Reply #59 on: February 13, 2011, 04:34:38 AM »


I understand what you are saying and appreciate the effort regarding measuring "sensor-level" statistics. But my point was that such numbers don't necessarily describe "image-level" statistics. The way I see it we should make a distinction between the two.

If that means taking an arbitrary photo and measuring the noise, you will get stuck unless the subject is well defined (patches, test chart): as you seem to suspect, you cannot distinguish between signal and noise if you can't accurately predict the signal. So it sounds like a dead end for automated/objective testing. And taking a photo of a well-defined subject (patches, test chart) is what DxO is doing.

What may bother you is that a human (as opposed to software) can sometimes compare two images and tell you which camera is better. The problem is that humans compare the signal+noise to their expectation of the signal. They know whether or not a cat's fur looks grainy.

Peter