Author Topic: DxO marks  (Read 13020 times)
ErikKaffehr
Sr. Member
Posts: 7332
« Reply #20 on: December 17, 2012, 01:50:41 PM »

Hi,

The "print" mode essentially does something pretty similar.

Best regards
Erik


Quote
"Would it be possible to plot DxO score versus total sensor area (pixel count x pixel size)? That might show a pretty high correlation."
EricV
Full Member
Posts: 129
« Reply #21 on: December 17, 2012, 02:25:46 PM »

Sorry, I meant sensor charge capacity, not area.  Charge capacity includes both pixel count and pixel full well depth (which is of course related to pixel area, but not quite the same), in the way that matters for noise in a final print.
Peter van den Hamer
Newbie
Posts: 43
« Reply #22 on: December 17, 2012, 03:07:25 PM »


Thank you for an excellent post. One area that was not covered is the difference between pixel binning in the sensor (hardware binning) and binning post capture by downsampling the image (software binning). At very low levels of illumination where read noise predominates over photon noise, software binning can not make a small pixel sensor perform as well as a larger pixel sensor with the same total sensor area.

Thanks. Actually there is mention (also in the older article) of this charge binning technique. See endnotes 44 and 45 and the associated body text:

Quote
"Note that although this scaling story holds for photon shot noise and dark current shot noise, other noise sources don’t necessarily scale the same way. In particular, some very high-end CCDs can use a special analog trick (“charge binning”) to sum the pixels, thus reducing the amount of times that a readout is required. This would reduce temporal noise by a further sqrt(N) where N is the number of pixels that are binned. Apart from the fact that only exotic sensors have this capability (Phase One’s Pixel+ technology), DxOMark’s data suggest that this extra improvement doesn’t play a significant role."
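As a rough sanity check on that sqrt(N) claim, here is a minimal Python sketch of the two binning styles; the 3 e- read noise figure is an illustrative assumption, not a measurement from any real sensor:

```python
import math

def binned_read_noise(n_pixels, read_noise_e, charge_binning):
    """Read-noise contribution (in electrons) when summing n_pixels.

    Software binning sums n_pixels independently read-out values, so
    their read noise adds in quadrature; charge binning sums the charge
    in the analog domain, so only one readout is needed.
    """
    if charge_binning:
        return read_noise_e                        # a single readout
    return read_noise_e * math.sqrt(n_pixels)      # n independent readouts

# Binning 4 pixels that each carry 3 e- read noise:
software = binned_read_noise(4, 3.0, charge_binning=False)  # 6.0 e-
hardware = binned_read_noise(4, 3.0, charge_binning=True)   # 3.0 e-
# charge binning wins by sqrt(4) = 2x, matching the sqrt(N) in the quote
```
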
Peter van den Hamer
Newbie
Posts: 43
« Reply #23 on: December 17, 2012, 03:47:14 PM »


The label Portrait did not cover the content well, we all agree. But I considered it a good measurement for art reproduction jobs with ample (full spectrum) light allowing optimal settings for all components, including a low ISO setting. These are the jobs where you would expect color calibration, profiling from camera to print, and larger print sizes, the last of which reveals chroma noise when present and could influence a 1:1 reproduction of colors (with the original still there to check against). The dynamic range of a sensor will not be challenged when reproducing reflective art originals, so it is of less importance. Maybe "Portrait" could be replaced with "Still Life" or "Reproduction".

I agree that the Color Sensitivity benchmark should be very suitable for art reproduction work. But that is a niche for most of us. You might call it "studio". In the text I questioned whether people can actually see chroma noise under these conditions. I have no proof other than that people can hardly see the luminance noise at medium gray at 100 ISO, and that chroma noise at the pixel level should be even harder to see. Anybody feel like generating some test images (of gray patches) using MatLab to simulate what the latest cameras can achieve?
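For anyone who wants to try this, here is a minimal Python sketch (standard library only, no MatLab needed) that generates noisy gray-patch samples. The 38 dB SNR at 18% gray and 100 ISO is my assumption for a roughly current camera, not a measured value:

```python
import random

def gray_patch(n_samples, mean_level, snr_db, seed=42):
    """Simulate luminance samples of a uniform gray patch whose noise
    standard deviation corresponds to the given SNR (in dB)."""
    sigma = mean_level / (10 ** (snr_db / 20))
    rng = random.Random(seed)
    return [rng.gauss(mean_level, sigma) for _ in range(n_samples)], sigma

samples, sigma = gray_patch(100_000, 118.0, 38.0)  # 18% gray on a 0..255 scale
mean_est = sum(samples) / len(samples)
std_est = (sum((s - mean_est) ** 2 for s in samples) / len(samples)) ** 0.5
# sigma is only ~1.5 levels out of 118, which is why such noise is so hard to see
```
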

The Print "filter" on the data may not reflect daily practice for camera users who work with large format printers. I have not checked it again recently, but I got the impression some years ago that the print filter may have become too conservative over time. The filter brings sensor qualities much closer to one another than what large prints can reveal.

The print-mode data is just a way to normalize pixel-level noise to a common resolution. I agree that if you print large enough, you can do some serious pixel peeping - although that is easier and cheaper to do by clicking on "100%" screen viewing. I guess DxOMark considered pixel peeping more common on screens - even by non-photographers. Only some photographers (the nerdier ones?) inspect a print with their nose touching the print Wink

You probably already know this, but (to be on the safe side) this particular DxOMark benchmark data does not cover the contribution of sensor resolution to print quality. So if you pixel peep, this benchmark tells you what you will see when you are admiring the creamy richness of noise-free bokeh gradients Tongue
Peter van den Hamer
Newbie
Posts: 43
« Reply #24 on: December 17, 2012, 04:23:05 PM »


Would it be possible to plot DxO score versus total sensor charge capacity (pixel count times pixel full well capacity)?  That might show a pretty high correlation.

The total charge capacity should be a good indicator of signal to noise ratio in highlights (a somewhat academic number as we wouldn't really see that without extreme post-processing). The dynamic range and low-light ISO benchmarks also depend on noise in the lowlights/shadows.
Ray
Sr. Member
Posts: 8878
« Reply #25 on: December 17, 2012, 04:53:55 PM »


I agree that the Color Sensitivity benchmark should be very suitable for art reproduction work. But that is a niche for most of us. You might call it "studio". In the text I questioned whether people can actually see chroma noise under these conditions. I have no proof other than that people can hardly see the luminance noise at medium gray at 100 ISO, and that chroma noise at the pixel level should be even harder to see. Anybody feel like generating some test images (of gray patches) using MatLab to simulate what the latest cameras can achieve?

The print-mode data is just a way to normalize pixel-level noise to a common resolution. I agree that if you print large enough, you can do some serious pixel peeping - although that is easier and cheaper to do by clicking on "100%" screen viewing. I guess DxOMark considered pixel peeping more common on screens - even by non-photographers. Only some photographers (the nerdier ones?) inspect a print with their nose touching the print Wink

You probably already know this, but (to be on the safe side) this particular DxOMark benchmark data does not cover the contribution of sensor resolution to print quality. So if you pixel peep, this benchmark tells you what you will see when you are admiring the creamy richness of noise-free bokeh gradients Tongue

Peter,
There are a few issues here which could do with more clarification.

Graphs are designed to highlight differences. That's their purpose. If one camera has just 0.1EV difference in DR compared with another camera, a graph is designed to clearly show it. That's fine. I have no objection.

However, it is important also to know the practical significance of such differences on the print or on the monitor at specific degrees of enlargement.

For example, the normalised print size of 8"x12" that DXO use is rather small, for good reasons no doubt, so no interpolation is required for the smallest resolution cameras that have been tested, such as the Canon 10D.

Now, we all know that different RAW converters produce slightly different results. One converter may apply greater default noise reduction which is beyond the user control, and another may produce slightly sharper default results but with greater noise.

Likewise, one particular interpolation algorithm may produce more detailed images than another.

The question that I have is this: just how reliable are these 'normalised' results at 8"x12" when images are interpolated for the purpose of making much larger prints, using the same RAW converter?

I assume that differences due to the different handling of different brands of RAW images by the same converter will exist, but they may be negligible in practice. Would you agree?

Another issue which I think requires more clarification is the practical significance of those value differences shown on the graphs, whether they be dB, bits or EV.

I gather from DXO’s articles that a difference of less than 1 bit in Color Sensitivity, and a difference of less than 0.5EV in DR may not be noticeable. We need to elaborate on such issues. How does print size affect such assessments as to what’s noticeable and/or significant?
Peter van den Hamer
Newbie
Posts: 43
« Reply #26 on: December 17, 2012, 05:05:16 PM »


One very small quibble: when DR drops by 1EV for every doubling of ISO, that is hardly impressive -- it is exactly what you would get by not changing ISO and simply underexposing more and more, then multiplying the raw image by factors of two as needed.

Changing the ISO is just a term for underexposing and compensating in the camera (via analog and sometimes digital scaling). Your "exactly what you would get" only applies to an ideal case. Approximating the ideal case is IMO pretty impressive.

The ability to effectively adjust ISO, in a way which improves image quality compared to underexposing, would show up as a DR curve which drops less than 1 EV per ISO doubling.

Sounds like you are asking for something physically impossible at high ISO? See Figure 7 for an example: the 1Dx at ISO 51200 does break the law, but cheats a bit (according to DxOMark) by starting to apply extra "smoothing", aka noise filtering, at ISO 51200. At low ISO, you get less than a 1 EV increase per ISO halving, but this is because you are saturating against the maximum signal to noise ratio which the unamplified ("base ISO") sensor can handle. So at low ISO, although you do drop less than 1 EV per ISO doubling, this corresponds to weaker low ISO DR.

So I think my original statement is quite accurate, although I might alternatively have phrased it as "1 EV DR increase per ISO halving" rather than "1 EV DR decrease per ISO doubling".
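A toy two-noise-source model reproduces both regimes discussed here: less than 1 EV lost per doubling at low ISO, approaching exactly 1 EV at high ISO. The electron counts below are purely illustrative assumptions, not any specific camera:

```python
import math

def dr_stops(iso, full_well_e=60_000.0, pre_amp_noise_e=3.0,
             post_amp_noise_e=12.0, base_iso=100):
    """Engineering dynamic range (stops) for a simple sensor model.

    pre_amp_noise_e:  read noise injected before the ISO amplifier
    post_amp_noise_e: read noise injected after it, referred back to
                      the input at base ISO (so it shrinks with gain)
    """
    gain = iso / base_iso
    clip_e = full_well_e / gain                   # highlight clipping point
    read_e = math.hypot(pre_amp_noise_e, post_amp_noise_e / gain)
    return math.log2(clip_e / read_e)

low_iso_drop = dr_stops(100) - dr_stops(200)      # well under 1 stop
high_iso_drop = dr_stops(6400) - dr_stops(12800)  # essentially 1 stop
```

Note how the sub-1-EV slope at low ISO comes from post-amplifier noise dominating there, exactly the "saturating against the base ISO limit" effect described above.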
Fine_Art
Sr. Member
Posts: 1087
« Reply #27 on: December 17, 2012, 08:03:24 PM »


Ray,

"Product of" would obviously be wrong, but I guess you meant "function of".
But that claim is like saying "for a given lens, increasing sensor resolution will not reduce overall image sharpness" - which is kind of hard to disagree with  Smiley

So, given that you recommend upgrading to a high res sensor to utilize the full capabilities of lenses (which I kind of did myself: 6 MPix APS-C 10D to 21 MPix FF 5D2), let's check how much this helps using your own example: a Canon 50mm/1.4 lens, used on full-frame cameras, and measuring the max/max/max resolution using DxOMark data.

This gives 55 line_pairs/mm on a 12.7 MPix Canon 5D. Assuming the lens could keep up with the increased resolution of the 5D2 (there is no DxO lens data for 5D3 yet), it would give 71 lp/mm (= 55 lp/mm * sqrt(21.1/12.7)).

Firstly, the highest resolution for any lens on any camera measured by DxOMark so far is only 67 lp/mm. So we can't expect 71 lp/mm for a lowly Canon 50/1.4 (successor is expected). Instead, the measured data for the 50mm/1.4 lens on the 5D2 is 63 lp/mm. This is a 15% increase compared to the three years older 5D design.
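The sqrt scaling used above is easy to reproduce; the numbers below are the ones quoted in this post:

```python
import math

def ideal_lpmm(measured_lpmm, mpix_old, mpix_new):
    """Best-case lp/mm if the lens fully kept up with the sensor:
    linear resolution scales with the square root of the pixel count."""
    return measured_lpmm * math.sqrt(mpix_new / mpix_old)

ideal = ideal_lpmm(55.0, 12.7, 21.1)   # 5D -> 5D2 upgrade, ~71 lp/mm
measured = 63.0                        # DxOMark value for the 50/1.4 on the 5D2
# the lens delivers a ~15% gain where ~29% was theoretically available
```
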

This confirms your (somewhat unrefutable) claim that higher resolution sensors contribute to higher resolution images. But it also shows that lenses don't really keep up with sensor resolution increases:
  • decreasing the pixel pitch by 30% (= increasing the MPixel count by 75%) only results in a 15% linear resolution increase.
  • some of that 15% increase is probably due to unrelated camera improvements. Compare the pricey 85mm/1.4D on the Nikon D300s versus the D700: both have 12 MPixel sensors, yet there is an 8% lp/mm overall resolution difference (AA filter? crosstalk? processing?). So a modern 12 MPixel full-frame camera would presumably give better results than the old Canon 5D design, and the contribution of sensor resolution to the 15% measured overall resolution improvement might be below 10%. We could test this as soon as DxOMark tests the lens on the 5D3 (newer, roughly the same resolution as the 5D2).
  • the 5D had huge pixels for its time. The 8 MPixel 350D was already out. If you scale the full-frame 5D down to 1.6x APS-C, you get a mere 5 MPixel camera. So the 5D had unusually low resolution for its time. A simplistic calculation says the Canon 5D cannot outresolve 59 lp/mm: (59 lp/mm × 2 pixels/lp × 24 mm) × (59 × 2 × 36 mm) ≈ 12 MPixels.
  • the single figure resolution numbers provided by DxOMark are at each lens' best aperture, at the best zoom setting, and in the middle of the image. This max/max/max measurement obviously flatters the true capabilities of the lens.
  • we are doing the exercise at full-frame. Many people have smaller sensors. An APS-C sensor has 1.5 or 1.6 times smaller pixels than a full-frame sensor with the same resolution. This means that "increasing resolution to get the most out of your lenses" will probably give even less benefit for APS-C or smaller cameras. Arguably you need more than a 12 MPixel medium format camera, but these are not for sale, and users are less likely to crop there.

The story that if you migrate from APS-C to full-frame, that you should increase resolution if you plan to crop back to APS-C makes a lot of sense, assuming that you owned an FX lens on a DX camera. Incidentally, the 36.6 MPixel D800(E) and the 16 MPixel D7000 both have a pixel pitch of 4.8-ish micrometer. The numbers are so alike, it may not be a coincidence (some product manager said "scale D7000 sensor to full-frame!"). The blue scaling line in Figure 6 shows that the D800 even performs pretty comparably to 2 (actually 2.25) D7000 sensors tiled side-by-side.

Peter

Great article Peter.

I have to disagree with Ray. You can't just increase sensor resolution forever, expecting the lenses to handle it. The camera system will only perform as well as its weakest part. It so happens that lenses are still way ahead of FF sensors. All you have to do is look at the pixel size of the cheap compacts; the lenses on our SLR systems are of course as good as or better than those on the compacts. So Ray is correct for now, yes, but the terminology assumed lenses can handle anything, which is not right.

A Bayer system would hit the wall at around 2x the red wavelength: 720 nm ≈ 0.7 microns, so about 1.4 microns. Most DSLR sensors are 5 to 6 microns for FF or 4 to 5 microns for APS-C. There is still free resolution for now.
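The arithmetic behind that "wall" can be sketched as follows; the 6 and 4.5 micron pitches are the typical values assumed in the post:

```python
RED_WAVELENGTH_UM = 0.72                 # 720 nm expressed in microns
bayer_wall_um = 2 * RED_WAVELENGTH_UM    # claimed limit: ~1.44 micron pitch

def linear_headroom(pitch_um):
    """How much further the pixel pitch could shrink (linearly)
    before hitting the claimed Bayer limit."""
    return pitch_um / bayer_wall_um

ff_headroom = linear_headroom(6.0)    # typical full-frame DSLR pitch
apsc_headroom = linear_headroom(4.5)  # typical APS-C pitch
# both can still shrink 3-4x linearly, i.e. roughly 10-17x in pixel count
```
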
dreed
Sr. Member
Posts: 1223
« Reply #28 on: December 17, 2012, 10:44:44 PM »


For further reading on the topic of sensor noise ...

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/
Ray
Sr. Member
Posts: 8878
« Reply #29 on: December 17, 2012, 11:31:26 PM »


Great article Peter.

I have to disagree with Ray. You can't just increase sensor resolution forever, expecting the lenses to handle it. The camera system will only perform as well as its weakest part. It so happens that lenses are still way ahead of FF sensors. All you have to do is look at the pixel size of the cheap compacts; the lenses on our SLR systems are of course as good as or better than those on the compacts. So Ray is correct for now, yes, but the terminology assumed lenses can handle anything, which is not right.

A Bayer system would hit the wall at around 2x the red wavelength: 720 nm ≈ 0.7 microns, so about 1.4 microns. Most DSLR sensors are 5 to 6 microns for FF or 4 to 5 microns for APS-C. There is still free resolution for now.

I think you've got things back to front here when you write; "You can't just increase sensor resolution forever, expecting the lenses to handle it."

Lenses do not handle sensor resolution. It's the other way round. A lens will always project an image of a certain quality and resolution, depending on the aperture used, regardless of the quality of the sensor.

However, something is always lost as the sensor records the projected image from the lens, even when it's a low quality lens and high pixel-count sensor.

If the lens used is a high quality lens, then of course more is lost in the recording process than would be the case using a low quality lens with the same sensor.

However, setting aside semantics, you are correct that one cannot expect unlimited increases in sensor resolution to continue to provide increased resolution from the same lens. As I mentioned in my reply to Peter, there's a situation of diminishing returns that applies so that at some point any increase in the resolution of the recorded image due to increases in sensor resolution will be so small that it will be unnoticeable in practice.

You can test this for yourself if you've not discarded all your earlier, low resolution cameras, and you don't even have to acquire a low-resolution lens for the purpose. Any 35mm format lens can be considered low resolution from F11 to F32, if it stops down that far.
EricV
Full Member
Posts: 129
« Reply #30 on: December 17, 2012, 11:39:37 PM »


Changing the ISO is just a term for underexposing and compensating in the camera (via analog and sometimes digital scaling). Your "exactly what you would get" only applies to an ideal case. Approximating the ideal case is IMO pretty impressive.
Digital scaling, either in camera or in post-processing (multiplying raw image integer values on a computer long after the exposure), is exact and requires no impressive electronics.  The impressive electronics for ISO control is analog scaling which multiplies signal and shot noise more than readout noise, improving the slope of the curve at low ISO (where readout noise is significant).
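EricV's distinction can be sketched with a small noise model; the electron counts are my illustrative assumptions, not measurements:

```python
import math

def shadow_snr_db(signal_e, gain, analog_gain,
                  pre_amp_noise_e=3.0, post_amp_noise_e=12.0):
    """SNR (dB) of a small signal under analog vs. digital gain.

    Analog gain amplifies the signal, its shot noise, and the pre-amp
    read noise before the post-amp read noise is added; digital gain
    scales signal and all noise alike, so it changes nothing.
    """
    shot_e = math.sqrt(signal_e)
    pre_e = math.hypot(shot_e, pre_amp_noise_e)
    if analog_gain:
        total = math.hypot(gain * pre_e, post_amp_noise_e)
        return 20 * math.log10(gain * signal_e / total)
    total = math.hypot(pre_e, post_amp_noise_e)
    return 20 * math.log10(signal_e / total)   # gain cancels out entirely

analog = shadow_snr_db(100, 8, analog_gain=True)
digital = shadow_snr_db(100, 8, analog_gain=False)
# the 100 e- shadow signal comes out several dB cleaner with analog gain
```
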
ErikKaffehr
Sr. Member
Posts: 7332
« Reply #31 on: December 17, 2012, 11:42:59 PM »


Hi,

It's different. Sensors with on chip converters seem to have very different characteristics compared with sensors with off chip converters.

Best regards
Erik

Quote
"Digital scaling, either in camera or in post-processing (multiplying raw image integer values on a computer long after the exposure), is exact and requires no impressive electronics.  The impressive electronics for ISO control is analog scaling which multiplies signal and shot noise more than readout noise, improving the slope of the curve at low ISO (where readout noise is significant)."
Fine_Art
Sr. Member
Posts: 1087
« Reply #32 on: December 18, 2012, 01:20:16 AM »


I think you've got things back to front here when you write; "You can't just increase sensor resolution forever, expecting the lenses to handle it."

Lenses do not handle sensor resolution. It's the other way round. A lens will always project an image of a certain quality and resolution, depending on the aperture used, regardless of the quality of the sensor.

However, something is always lost as the sensor records the projected image from the lens, even when it's a low quality lens and high pixel-count sensor.

If the lens used is a high quality lens, then of course more is lost in the recording process than would be the case using a low quality lens with the same sensor.

However, setting aside semantics, you are correct that one cannot expect unlimited increases in sensor resolution to continue to provide increased resolution from the same lens. As I mentioned in my reply to Peter, there's a situation of diminishing returns that applies so that at some point any increase in the resolution of the recorded image due to increases in sensor resolution will be so small that it will be unnoticeable in practice.

You can test this for yourself if you've not discarded all your earlier, low resolution cameras, and you don't even have to acquire a low-resolution lens for the purpose. Any 35mm format lens can be considered low resolution from F11 to F32, if it stops down that far.


Ok. I was pretty sure you knew what you were talking about; I just thought the wording was not right.
BJL
Sr. Member
Posts: 5124
« Reply #33 on: December 18, 2012, 09:35:49 AM »


The impressive electronics for ISO control is analog scaling which multiplies signal and shot noise more than readout noise, improving the slope of the curve at low ISO (where readout noise is significant).
That is only impressive as a solution to the problem of the part of the read noise that enters the signal after that analog amplification is applied, such as noise that enters during transportation of the analog signal across the edge of the sensor and then off the sensor to an off-board ADC. More impressive still is avoiding that part of the noise entirely by doing the ADC earlier, at the edge of the sensor, reducing sensor noise to mostly just the dark noise from within the photosites themselves. Some hallmarks of this avoidance of read noise from "downstream" sources are that the sensor noise is lower, and scales in the same way as signal and photon shot noise, so that SNR scales exactly with Exposure Index(*). These desirable characteristics are shown by many of the newer sensors used by Sony, Nikon, Olympus and Panasonic.


(*) A little rant on misuse of jargon: can we please stop using "ISO" as the name for multiple related but different measurements?

The "ISO" dial on a camera is used in adjusting exposure index, and is usually based on the ISO-defined "ISO Standard Output Sensitivity" Ssos or perhaps the "ISO Recommended Exposure Index" Srei, not other ISO defined quantities such as the rarely-used "ISO noise based speed" measure Sn40 and Sn10, or the "ISO saturation speed" Ssat (used by DxO). Most modern cameras should indicate in the EXIF which of Ssos or Srei is used.

See the CIPA document http://www.cipa.jp/english/hyoujunka/kikaku/pdf/DC-004_EN.pdf in which the ISO sensitivity standards for "Standard Output Sensitivity" and "Recommended Exposure Index" were originally described (and which is free, unlike the official ISO documents like 12232). Some readers might want to skip the technical details and look at the Overview on page (i) and the Explanation on page 20, where the disadvantages of using "ISO saturation speed" as a sensitivity measure are mentioned.
« Last Edit: December 18, 2012, 09:55:41 AM by BJL »
NikoJorj
Sr. Member
Posts: 1063
« Reply #34 on: December 18, 2012, 09:58:33 AM »


First, many thanks to the author for this thorough insight into DxOMark!

(*) A little rant on misuse of jargon: can we please stop using "ISO" as the name for multiple related but different measurements?
I completely agree: as many sensitivity measurements coexist in the same norm, it would be nice to know whether one is looking at a saturation-based, SNR-based, or gray-level-based figure, or a REI (which is a cool acronym for a completely arbitrary number just made to sound good in tech specs Grin ).

Quote
[...] where the disadvantages of using "ISO saturation speed" as a sensitivity measure are mentioned.
The disadvantages are the same for all output-based methods: they take into account a rendered image, and are biased by rendering choices such as tone curve, highlight reconstruction, and so on.
 
A saturation-based method makes much more sense to my eyes if it uses saturation of the raw file, as I understand DxOMark does.
I'd even be tempted to think that it is the only unambiguous method for determining sensitivity, not being influenced by rendering choices! But at the least, it's the most relevant one for someone shooting raw, even if it falls outside the ISO norm.
« Last Edit: December 18, 2012, 10:04:53 AM by NikoJorj »

Nicolas from Grenoble
A small gallery

fike
Sr. Member
Posts: 1373
Hiker Photographer
« Reply #35 on: December 18, 2012, 10:26:52 AM »


The one aspect of the sensor quality race that I feel is missed is when you actually want to USE all those additional pixels on the new sensors.

I like to print LARGE!  Really LARGE.  Things like 72" x 48" are awesome, and it is what I do.  For me resolution matters...mostly in how many or few frames I need to include in my mosaic panoramic image to get somewhere between 240 DPI and 300 DPI where I like to print.  The normalization of resolution is necessary for comparison when folks won't use all that excess resolution.  For this reason, the screen measurements are slightly more helpful to me.

The other area where that resolution is NOT excess (and thus resolution normalization isn't helpful) is when you want to achieve a long reach with your gear...wildlife photography for example. This is the reason I have, for years, kept APS-C.  With the extra-dense pixel resolution I get additional reach out of my 400 mm lens.  In Nikon world, the D800 has made that reasoning mostly obsolete with a cropped sensor pixel pitch on a full-frame sensor. 

One of the key factors that the article clearly explains is that noise doesn't go up with resolution, because it is expressed as a ratio: signal to noise. When you downsample (normalize the resolution for comparison), the higher resolution cameras have generally come out marginally ahead of prior sensor generations. This normalization makes things easy and clear for noise comparison, but not when you really want all those pixels. What you begin to realize is that with higher resolution sensors, you can no longer print noise-free at 240 DPI but instead have to move up to 300 DPI. This means newer pixels aren't as useful at the per-pixel level, even though in the end they do help me get to larger prints, just not in a linear fashion.
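The downsampling bonus described here is just the sqrt rule applied to DxOMark's 8 MPix print reference; a quick sketch (sensor sizes chosen as examples):

```python
import math

def print_snr_gain_db(sensor_mpix, reference_mpix=8.0):
    """SNR improvement (dB) from downsampling to the print reference,
    assuming the per-pixel noise is uncorrelated between pixels."""
    return 20 * math.log10(math.sqrt(sensor_mpix / reference_mpix))

gain_36 = print_snr_gain_db(36.3)   # D800-class sensor: ~6.6 dB bonus
gain_12 = print_snr_gain_db(12.1)   # older 12 MPix sensor: ~1.8 dB bonus
# the high-res sensor gets the bigger normalization bonus, but it
# evaporates as soon as you print large enough to use every pixel
```
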
Fike, Trailpixie, or Marc Shaffer
marcshaffer.net
TrailPixie.net
I carry an M43 ILC, a couple of good lenses, and a tripod.

BJL
Sr. Member
Posts: 5124
« Reply #36 on: December 18, 2012, 10:30:46 AM »


... REI (which is a cool acronym for a completely arbitrary number just made to sound good in tech specs Grin ).
The disadvantages are the same for all output-based methods, they take into account a rendered image, and are biased by rendering choices : tone curve, highlight reconstruction, and so on.
 
A saturation-based method makes much more sense to my eyes if it uses saturation of the raw file, as I understand DxOMark does.
I'd even be tempted to think that it is the only unambiguous method for determining sensitivity ...
The REI serves one purpose, probably of little interest to us though: getting out-of-the-camera JPEGs that look reasonable when using fancy multi-zone automated metering systems in casual point-and-shoot photography. My guess is that DSLRs and other system cameras use SOS, not REI.

Saturation-based makes sense for one purpose: wanting to know how much highlight headroom you have in the raw files if you expose "on meter", with no exposure compensation. In that case, the situation is roughly that:
- if Ssat = SOS, the raw files have 1/2 stop of highlight headroom: parts of the scene at metered average luminosity are placed 1/2 stop below 18% of the maximum raw level.
- if Ssat < SOS, you get more raw highlight headroom (with no exposure compensation). For example, if the difference is one stop, like the "ISO" dial giving SOS=400 but DxO giving Ssat=200, then you get 1 1/2 stops of raw highlight headroom with no exposure compensation.

On the other hand, when comparing noise level and DR at elevated exposure index, only SOS makes sense, because it compares at the same shutter speed when using the same aperture ratio in the same lighting. Shifting the DR and SNR 18% curves to the left (and thus down) because the raw files have more than 1/2 stop of highlight headroom, as DxO does, makes no sense when comparing low-light performance, so those graphs are best read by pushing the dots at ISO 200, 400, etc. back into alignment. Fortunately it seems that DxO's "full SNR" curves are labelled with the camera's "ISO speed" settings of 200, 400, etc., so those can be compared without adjustment.
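The headroom rule of thumb above can be expressed as a two-line helper; the 1/2 stop baseline is taken from the post, the rest is simple arithmetic:

```python
import math

def raw_headroom_stops(sos, ssat):
    """Approximate raw highlight headroom when exposing on meter:
    1/2 stop when Ssat == SOS, plus log2(SOS/Ssat) extra stops."""
    return 0.5 + math.log2(sos / ssat)

equal_case = raw_headroom_stops(400, 400)   # Ssat == SOS -> 0.5 stops
lower_case = raw_headroom_stops(400, 200)   # Ssat one stop lower -> 1.5 stops
```
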
« Last Edit: December 18, 2012, 10:38:43 AM by BJL »

Fine_Art
Sr. Member
Posts: 1087
« Reply #37 on: December 18, 2012, 11:32:22 AM »


Good timing on this new DxO report:

"The example above is based on data from DxOMark's database of test results for more than 2,700 camera and lens combinations. These tests reveal that, on average, about 45% of the resolution is lost due to lens defects."

http://www.dpreview.com/news/2012/12/17/dxomark-introduces-perceptual-mpix-score-for-lenses
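Taken at face value, that average loss figure implies a quick rule of thumb; note this is a fleet-wide average from the quote, not a prediction for any particular lens:

```python
def avg_perceptual_mpix(sensor_mpix, avg_loss_fraction=0.45):
    """Average Perceptual MPix implied by the quoted 45% loss figure."""
    return sensor_mpix * (1 - avg_loss_fraction)

p_mpix = avg_perceptual_mpix(21.1)   # ~11.6 P-MPix for a 21.1 MPix sensor
```
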
Peter van den Hamer
Newbie
Posts: 43
« Reply #38 on: December 18, 2012, 04:54:54 PM »


I like to print LARGE!  Really LARGE.  Things like 72" x 48" are awesome, and it is what I do.  For me resolution matters...mostly in how many or few frames I need to include in my mosaic panoramic image to get somewhere between 240 DPI and 300 DPI where I like to print.  The normalization of resolution is necessary for comparison when folks won't use all that excess resolution.  For this reason, the screen measurements are slightly more helpful to me.

Ernst Dinkla said something similar, and he also has huge printers (A0, I guess).

Firstly, DxOMark Camera Sensor doesn't factor in sensor resolution. You are supposed to keep an eye on that requirement yourself.

In print mode, the normalization makes sure that you compare sensors with different resolutions by viewing them in the same way. For example, viewing the print (or a fixed section of the print) at a distance proportional to the size of the print: you view the same information at constant opening angle.

As mentioned in the article, it is easiest to explain when you downscale the higher resolution to the lower one. But I think it also applies when upscaling, provided your algorithm tries hard to avoid adding uncorrelated noise in the "made up" pixels.

If you print higher resolution images at larger sizes (e.g. maintaining a constant DPI value) while viewing at a constant distance, screen mode is probably indeed more suitable.
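The constant-opening-angle idea can be sketched as follows: if viewing distance grows in proportion to print size, the same pixels always span the same visual angle (the sizes below are example values):

```python
import math

def pixels_per_degree(print_height_in, viewing_distance_in, pixels_high):
    """Pixel density per degree of visual angle when viewing a print."""
    angle_deg = 2 * math.degrees(math.atan(print_height_in / 2 / viewing_distance_in))
    return pixels_high / angle_deg

# Doubling the print size AND the viewing distance leaves the
# angular detail unchanged -- this is what print mode assumes:
small = pixels_per_degree(12, 18, 2400)
large = pixels_per_degree(24, 36, 2400)
```
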

Peter
« Last Edit: December 19, 2012, 03:56:10 PM by Peter van den Hamer »

qwz
Jr. Member
Posts: 82
« Reply #39 on: December 20, 2012, 01:54:22 PM »


Very detailed explanation, thanks.
But the article says practically nothing about colour.
DxO gives colour scores with decimal places (strangely expressed in 'bits', even though a bit is a binary digit!), but they appear to be a simple measure of the noise in the three colour channels, nothing more.
In practical terms, only the blue channel is really relevant, because it is the least sensitive in all modern cameras.
Moreover, DxO calls these strange scores meaningful for portraits! But nobody shoots portraits while inspecting the raw data channel by channel.

DxO simply fails to say anything useful about colour, and I wonder why. Because people buy megapixels and noise levels?

Turning back to colour: it would be more interesting, and much more critical, to know firstly the spectral transmission of the lens (DxO being the only company testing both lenses and sensors), and secondly the actual spectral sensitivity of the sensels (and the colour gamut of cameras).
These differ a lot between vendors and models (but not between samples of this expensive stuff :-).

Software correction fundamentally cannot fix this problem (short of turning the photographer into a painter ;-), because there are many (actually an infinite number of) different stimuli in the real world that produce the same R-G-B values on a trichromatic camera sensor.
Of course, improving tonal resolution can help, but not much (modern cameras already use at least 12-bit processing). The problem arises because the spectral sensitivity of the colour filter array on the sensels is not ideal: each channel differs somewhat (or a lot) from the spectral sensitivities of the human eye (and from other sensors too).

Do the DxO scores say anything about this? Nope. Pity.
Do we have many other ways to measure the noise of digital cameras with sufficient practical accuracy? Yes, on many websites (and we can shoot test samples ourselves, or ask people to share raw files).
And, finally, what is so "exciting" about DxO scores, a simple website database interface giving us the illusion of knowledge? Not much.