Pages: « 1 2 [3] 4 »
Author Topic: Why are only MF 16 bit?  (Read 16972 times)
wildlightphoto
« Reply #40 on: December 02, 2011, 06:23:34 AM »

...Graphs tend to counteract that human flaw...

And graphs can be misleading when used out of context or when used to tell an incomplete story.
theguywitha645d
« Reply #41 on: December 02, 2011, 07:59:30 AM »

And graphs can be misleading when used out of context or when used to tell an incomplete story.

You can't blame the data. The problem is in the interpreter, the human interpreter.
Graham Mitchell
« Reply #42 on: December 02, 2011, 08:03:48 AM »

Graham -

If you have a chance, could you shoot the same scene at ISO 400, to see what shows up?

Hi Geoffrey,

I shot the same scene again with both cameras, at ISO 400. Note that there was no pushing this time - just shooting a plain ISO 400 image with matching histograms. The Canon had less noise initially, but I'm pretty sure that Canon applies noise reduction in camera before the raw file is saved. There are tell-tale signs of NR. I still had to add chroma NR to the Canon file, as the colour patches were a bit ugly.

With the Leaf, noise reduction must be done manually. Here is the end result. By the time I had matched the Canon's noise level, the Leaf still had more detail. I expect the Canon would pull ahead at ISO 800 and above. (Leaf on the right. Click http://bit.ly/u4fRp9 to see the image directly in your browser; LL scales large images down, so this will probably not be displayed at 100%.)



« Last Edit: December 02, 2011, 08:07:02 AM by Graham Mitchell »

Graham Mitchell - www.graham-mitchell.com

wildlightphoto
« Reply #43 on: December 02, 2011, 08:04:37 AM »

You can't blame the data. The problem is in the interpreter, the human interpreter.

Absolutely true.  In my day job I'm writing engineering software, I work with technical issues daily, and I see every day how easy it is to misuse or misinterpret data if the Big Picture isn't considered.  This forum has some of the most obsessive number-worshiping I've seen anywhere but lacks the Big Picture vision: "do you like the photos?"
theguywitha645d
« Reply #44 on: December 02, 2011, 09:33:58 AM »

Absolutely true.  In my day job I'm writing engineering software, I work with technical issues daily, and I see every day how easy it is to misuse or misinterpret data if the Big Picture isn't considered.  This forum has some of the most obsessive number-worshiping I've seen anywhere but lacks the Big Picture vision: "do you like the photos?"

I absolutely agree. Data without context is like owning a boat without access to water: it might be very nice, but it is ultimately useless.
PierreVandevenne
« Reply #45 on: December 02, 2011, 10:08:29 AM »

Here is my partial list of the Image Quality Chain:
Lens Hood / Flare > Lens coating > lens > aperture/shutter > body's internal blackness > IR filter > microlenses > AA filter (or lack thereof) > sensor size > sensor pixel type > readout speed > sensor-to-AD-convertor path, A/D convertor (both bit depth and quality) > heat sinking / cooling > raw file compression > black calibration > in camera raw data manipulation > characteristic curve > ICC profile > demosaic algorithm > deconvolution algorithm > noise reduction type > up-res or down-res algorithm > sharpening
Any one of the above can influence the final image. It is a system, and no one component is as important as the overall system.

Quite a good list indeed. A/D converter issues, read noise, and well capacity are still worth discussing from time to time IMHO, as they allow debunking of certain advertising claims.
deejjjaaaa
« Reply #46 on: December 02, 2011, 10:10:26 AM »

but I'm pretty sure that Canon applies noise reduction in camera before the raw file is saved.

Can you post the Canon's raw file, if you don't mind?
wildlightphoto
« Reply #47 on: December 02, 2011, 10:27:31 AM »

Quite a good list indeed. A/D converter issues, read noise, and well capacity are still worth discussing from time to time IMHO, as they allow debunking of certain advertising claims.

The supposed debunking is equally misleading if it doesn't consider the entire imaging chain.
ondebanks
« Reply #48 on: December 02, 2011, 10:39:18 AM »

The supposed debunking is equally misleading if it doesn't consider the entire imaging chain.

Not necessarily. If it can be shown that any point in the imaging chain makes it physically impossible for the final output to be as claimed, then there is no need to consider the rest of the chain.

Ray
deejjjaaaa
« Reply #49 on: December 02, 2011, 11:04:29 AM »

Not necessarily. If it can be shown that any point in the imaging chain makes it physically impossible for the final output to be as claimed, then there is no need to consider the rest of the chain.

Ray

Maybe our "16-bit" friends really need 16 bits to encode the noise because their raw converters can't dither properly and instead rely on the noise in the data - or the noise from their sensors in fact has some non-random pattern, and they really need it for some kind of denoising.
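To illustrate the dithering point: a properly dithered requantizer preserves sub-LSB tonal information on average, so extra bits that merely encode noise buy nothing that dither cannot provide. Here is a toy Python sketch; the step size and signal level are arbitrary illustration values, not tied to any real camera:

```python
import random

def quantize(x, step):
    """Truncate x down to a multiple of step (an undithered requantizer)."""
    return step * int(x // step)

def quantize_dithered(x, step, rng):
    """Add one LSB of uniform dither before truncating, so that the
    *average* output tracks signal levels finer than one step."""
    return quantize(x + rng.uniform(0.0, step), step)

rng = random.Random(42)
step = 16.0      # e.g. requantizing 16-bit-style data down by 4 bits
signal = 100.5   # a level that falls between two quantization steps

n = 200_000
plain = sum(quantize(signal, step) for _ in range(n)) / n
dithered = sum(quantize_dithered(signal, step, rng) for _ in range(n)) / n

# Undithered truncation is stuck at 96.0; the dithered average recovers ~100.5
print(plain, round(dithered, 2))
```

Averaged over many samples, the dithered output converges on the true level even though each individual output is still coarsely quantized.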
wildlightphoto
« Reply #50 on: December 02, 2011, 11:43:46 AM »

Not necessarily. If it can be shown that any point in the imaging chain makes it physically impossible for the final output to be as claimed, then there is no need to consider the rest of the chain.

And bees can't fly.  I know this doesn't satisfy the left brain but assumptions often predispose the outcome and IMHO the best way to compare imaging systems is to use them as they'd typically be used and compare the final results.

What I know is that clients, gallery owners, and random visitors to my website make a point of commenting on the clarity, color richness and gradation, and detail of my photos, both in print and on the web, compared with their own and with other gallery prints.  I can assure you that my meager processing skills and very basic software are not an advantage.  My camera uses a 16-bit ADC.  Perhaps under ideal conditions that's irrelevant but under less-than-ideal conditions when I have to push the files' limits it falls apart much less often than files from 12- and 14-bit cameras.  Would your left brain like to explain this?
ErikKaffehr
« Reply #51 on: December 02, 2011, 01:23:04 PM »

Hi Doug,

This is not really a response to your posting, more like a reflection on many of the recent postings.

The question in the OP was "Why are only MF 16 bit?", and the answer to that question is clearly that no CCD- or CMOS-based camera intended for photography utilizes more than 14 bits.

Another question is why MF gives better results than smaller formats. The two obvious reasons are that MF sensors collect more photons and that their lenses are less stressed for MTF. Neither of these relates even remotely to the number of bits in the ADC. Another factor is that DSLRs as a rule have an OLP filter, which makes them need more sharpening, and sharpening enhances noise.

Finally, comparisons are often made against Canon DSLRs, either the 5D II or the 1Ds III, two cameras with very high readout noise. One poster, Marc McCalmont, who owns both a P45+ and a Pentax K5, found that the Pentax actually had better image quality, at least in some sense. He has now upgraded to an IQ180 on Alpa (I think).

http://www.luminous-landscape.com/forum/index.php?topic=50895.0

Marc has made many good contributions to this forum.

I don't have a good explanation for why your DMR back is so good, but it probably has little to do with 16 bits in the ADC. If you really want to find out how many bits it is actually using, download a copy of Imatest and make a good exposure of a Stouffer wedge with DR 4.1.
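The 14-bit point can be checked with back-of-envelope arithmetic: a sensor's engineering dynamic range, in stops, is log2(full-well capacity / read noise), and digitizing much finer than that only records noise. A small Python sketch using hypothetical, era-plausible electron counts (illustration values, not the measured specs of any named camera):

```python
import math

def useful_adc_bits(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full well / read noise).
    An ADC much deeper than this only digitizes noise."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical, era-plausible figures (electrons), not any camera's specs:
sensors = {
    "DSLR with noisy readout": (52_000, 25),
    "MF CCD back":             (60_000, 15),
    "low-read-noise CMOS":     (45_000, 3),
}
for name, (fw, rn) in sensors.items():
    stops = useful_adc_bits(fw, rn)
    print(f"{name}: ~{stops:.1f} stops -> {math.ceil(stops)} bits suffice")
```

Even the best assumed figure here lands just under 14 stops, which is consistent with the claim that nothing on the market actually needs more than a 14-bit ADC.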

Best regards
Erik

Actually, bees can fly. This piece of urban legend comes from people applying data and assumptions incorrectly. There is no scientific evidence to suggest the bee is an aerodynamic impossibility.

What do you mean by "less than ideal"?
« Last Edit: December 02, 2011, 10:48:28 PM by ErikKaffehr »

hjulenissen
« Reply #52 on: December 02, 2011, 04:14:03 PM »

Absolutely true.  In my day job I'm writing engineering software, I work with technical issues daily, and I see every day how easy it is to misuse or misinterpret data if the Big Picture isn't considered.  This forum has some of the most obsessive number-worshiping I've seen anywhere but lacks the Big Picture vision: "do you like the photos?"
I disagree. I find the technical discussions on this site interesting and sober with regard to their real-life importance.
ondebanks
« Reply #53 on: December 02, 2011, 04:37:56 PM »

And bees can't fly.  I know this doesn't satisfy the left brain but assumptions often predispose the outcome and IMHO the best way to compare imaging systems is to use them as they'd typically be used and compare the final results.

What I know is that clients, gallery owners, and random visitors to my website make a point of commenting on the clarity, color richness and gradation, and detail of my photos, both in print and on the web, compared with their own and with other gallery prints.  I can assure you that my meager processing skills and very basic software are not an advantage.  My camera uses a 16-bit ADC.  Perhaps under ideal conditions that's irrelevant but under less-than-ideal conditions when I have to push the files' limits it falls apart much less often than files from 12- and 14-bit cameras.  Would your left brain like to explain this?

Doug,

I'm not sure if you were addressing that question to me specifically, since at no point in this thread have I disagreed with using 16 bits in an ADC. (It may be oversampling, but that does no harm, and may do some good; speaking of "push the files' limits", look over at a problematic image I posted in the Astrophotography and MFD thread today to see why I do advocate at least 14 bits).
I was making a more general point; sort of restating the old adage that "you can't make a silk purse out of a sow's ear". The final elements in the imaging chain can be out of this world, but they won't help if there is something mediocre further upstream. That's all.

Your photos are indeed lovely. I totally agree with those (bcooter, Yair, et al.) who maintain that as long as the photo works, it matters little how it was made. But as you say yourself, and as bcooter illustrated in his final image, sometimes the camera tech determines whether or not the photo can be made to work, or made at all. That's why we need to have threads like this - it's not for the sake of "obsessive number-worshipping" as someone said above: it's simply so that we can make educated choices, to make our photos work in challenging light.

I speak from experience: to a certain extent I made an uneducated choice about MFD (there simply was no information out there on my application), but I've learnt a huge amount since, approaching it as I would approach a new research topic, and now I make a point of trying to educate others about imaging technology in general.

So, which 16-bit camera do you use?

Ray
LKaven
« Reply #54 on: December 02, 2011, 09:14:26 PM »

I recently acquired a Leica S2. On a recent trip to the Eastern Sierras (California), I was fortunate enough to witness an incredible sunset, and in particular the "afterglow" over the mountains. The sky was painted with shades of red, yellow, pink, and orange, more vivid and varied than I have ever seen. Along with the S2, I had a Nikon D700. I set both up at the same time and took repeated images. The D700 could not capture the colors to the extent that the Leica S2 did. The range of colors, the tonal gradations, the gradual shifts from one color to the next, were clearly superior on the S2 vs. the D700. It was visible on my calibrated monitor, and even more so in a print.

Indeed, throughout this trip, the "micro contrast" and tonal range were clearly superior on the MF S2 compared with the D700. Whether it was rocks, desert sand dunes, or salt crystals on the salt flats (Death Valley), there was a clear distinction. I do not mean merely in terms of resolution, but in the fine tonal contrast that lends "texture" and a 3-D appearance to the objects in the photo.

Obviously, this is NOT a scientific study, but it was a side-by-side comparison. I cannot explain why there is such a distinction, whether it is CMOS vs. CCD, 14 vs. 16 bit, or the algorithms used to interpret the collected photons. It is just an observation. When I show the images to colleagues, they too can identify the S2 vs. the D700 images.

Yes, but the same comparison between the S2 and the D3x would come out more even-handed. The D3x, with the 24.5 MP Sony Exmor on board, has that kind of color gradation and polish, and stunningly noise-free capture. I'd expect the newest Sony 35mm full-frame sensors to be very good indeed.
LKaven
« Reply #55 on: December 02, 2011, 09:19:57 PM »

And bees can't fly.  I know this doesn't satisfy the left brain but assumptions often predispose the outcome and IMHO the best way to compare imaging systems is to use them as they'd typically be used and compare the final results.

What I know is that clients, gallery owners, and random visitors to my website make a point of commenting on the clarity, color richness and gradation, and detail of my photos, both in print and on the web, compared with their own and with other gallery prints.  I can assure you that my meager processing skills and very basic software are not an advantage.  My camera uses a 16-bit ADC.  Perhaps under ideal conditions that's irrelevant but under less-than-ideal conditions when I have to push the files' limits it falls apart much less often than files from 12- and 14-bit cameras.  Would your left brain like to explain this?
There are no 16-bit cameras made today.  Nor 15-bit cameras.

If you are using a large-sensor camera and working at moderate print sizes, you will get the best of that surface area, yielding a high number of samples for each unit area of the final image.  The high fidelity doesn't come from having a high bit-depth sample, but from having an abundance of samples from a large sampling area to average together.
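The "abundance of samples" argument can be made concrete: averaging N independent noisy samples cuts the noise by sqrt(N), so every 4x increase in samples per unit of final image area buys roughly one effective bit. A small Monte Carlo sketch in Python (the noise level is an arbitrary illustration value):

```python
import random
import statistics

def noise_of_mean(n_samples, sigma, trials=2000, rng=random.Random(1)):
    """Std dev of the mean of n_samples noisy readings of a constant signal."""
    means = [
        statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_samples))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

sigma = 8.0  # per-sample noise, arbitrary units
for n in (1, 4, 16):
    print(f"{n:2d} samples: ~{noise_of_mean(n, sigma):.2f} "
          f"(theory {sigma / n ** 0.5:.2f})")
# Each 4x increase in samples halves the noise: one extra effective bit.
```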
ErikKaffehr
« Reply #56 on: December 02, 2011, 10:45:57 PM »

Hi!

Glad to hear that you enjoy your Leica S2. Just a few comments.

To begin with, CCD vs. CMOS doesn't play any role for color. Color is determined by the CFA (color filter array) and post-processing.

There are a lot of terms describing image quality that are less than well defined, and unfortunately it's not always clear what is meant by the wording. For instance, you use the term "microcontrast". I have been told that it means MTF at high spatial frequencies.

I'd just point at a few things:

- The Leica S2 has 40 MPixels and some of the best lenses ever made, so high MTF at the pixel pitch is to be expected.
- You are comparing it with a 12 MP camera intended for low-light shooting. To make a correct comparison you would need to upscale the Nikon image by a factor of two (if you don't, the printer driver will), and that is a lot. The D700 is intended for low light, so its CFA is probably optimized for high ISO rather than for good color separation.
- Your Leica S2 has no OLP (optical low-pass) filter. The D700 is OLP-filtered, so it will need more sharpening.
- The Nikon D700 is not really a low-noise, high-DR camera; the D3X is much more relevant.

All the above said, your findings are expected. Even if you had used a D3X the Leica S2 would be better, but I have little doubt that an S3 or S4 with 80 MPixels using a Sony Exmor-type sensor would be far superior to the S2. These sensors have several thousand on-chip ADCs and very low read noise.

The reasons that the Leica S2 is "better" than the Nikon D3x are:
- A larger sensor that collects more photons
- Higher resolution in pixels
- Probably better lenses, though that may vary from lens to lens and from sample to sample
- Quite possibly a CFA that gives better color separation

Best regards
Erik

I recently acquired a Leica S2. On a recent trip to the Eastern Sierras (California), I was fortunate enough to witness an incredible sunset, and in particular the "afterglow" over the mountains. The sky was painted with shades of red, yellow, pink, and orange, more vivid and varied than I have ever seen. Along with the S2, I had a Nikon D700. I set both up at the same time and took repeated images. The D700 could not capture the colors to the extent that the Leica S2 did. The range of colors, the tonal gradations, the gradual shifts from one color to the next, were clearly superior on the S2 vs. the D700. It was visible on my calibrated monitor, and even more so in a print.

Indeed, throughout this trip, the "micro contrast" and tonal range were clearly superior on the MF S2 compared with the D700. Whether it was rocks, desert sand dunes, or salt crystals on the salt flats (Death Valley), there was a clear distinction. I do not mean merely in terms of resolution, but in the fine tonal contrast that lends "texture" and a 3-D appearance to the objects in the photo.

Obviously, this is NOT a scientific study, but it was a side-by-side comparison. I cannot explain why there is such a distinction, whether it is CMOS vs. CCD, 14 vs. 16 bit, or the algorithms used to interpret the collected photons. It is just an observation. When I show the images to colleagues, they too can identify the S2 vs. the D700 images.

ejmartin
« Reply #57 on: December 03, 2011, 12:29:04 AM »

In regard to the 5D2, its poor shadows have multiple causes. Some of the major ones have to do with high base-ISO read noise (the 27-electron figure quoted above is really poor; the D3x has more like 6 electrons of read noise, a two-stop advantage), with a large amount of pattern noise, which throws off the demosaic as well as making the image look like it was shot through a burlap sack, and finally with poor color separation, especially between the green and red color filters, requiring color amplification (and therefore color noise amplification) when the input profile is applied. This latter effect should not be underestimated; here is a comparison of the 5D2 before and after application of a simple matrix profile:
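The profile-driven noise amplification described above follows directly from linear algebra: if the per-channel input noise is independent and equal, each output channel's noise is scaled by the L2 norm of the corresponding row of the colour matrix, so poorly separated filters (which need large off-diagonal corrections) amplify chroma noise. A sketch with hypothetical matrices, not measured camera profiles:

```python
import math

def channel_noise_gain(matrix):
    """With independent, equal per-channel input noise, a 3x3 colour matrix
    scales each output channel's noise by the L2 norm of its row."""
    return [math.sqrt(sum(c * c for c in row)) for row in matrix]

# Hypothetical matrices for illustration, not measured camera profiles:
well_separated = [[1.10, -0.05, -0.05],
                  [-0.05, 1.10, -0.05],
                  [-0.05, -0.05, 1.10]]
poorly_separated = [[1.90, -0.80, -0.10],   # large off-diagonal terms are
                    [-0.90, 2.00, -0.10],   # needed to untangle overlapping
                    [0.00, -0.40, 1.40]]    # filter responses

for name, m in (("good separation", well_separated),
                ("poor separation", poorly_separated)):
    print(name, [round(g, 2) for g in channel_noise_gain(m)])
# Poor separation roughly doubles the chroma noise before any NR is applied.
```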

emil

joofa
« Reply #58 on: December 03, 2011, 01:18:55 AM »

There are a lot of terms describing image quality that are less than well defined, and unfortunately it's not always clear what is meant by the wording. For instance, you use the term "microcontrast". I have been told that it means MTF at high spatial frequencies.

Unfortunately, it is not always feasible for ordinary users to determine things such as MTF, "microcontrast", etc.: special test charts, methodology, and software have to be used. And still, what about the real images you have acquired, i.e., images of landscapes, cats, oranges, etc., and not test charts in a controlled setting? To complicate matters further, there is arguing over what constitutes lens sharpness, pixel pitch, FOV, and what not. At the end of the day it is the image we are after, and it would be useful to have notions of image quality based purely on the pixel data, detached from sensor pitch, lens, aperture, image display size, etc.

Realizing the vagueness and difficulty associated with this paradigm, I developed a measure of image detail, JIDM, which a user can simply run through Photoshop (or ImageJ); it gives you a number in the [0-1] range, where higher means more detail. As an example, see below:

The good thing is that one can select an area of an image using the Photoshop marquee tool, and it will do the detail-measure analysis only in that area. You can download it freely from my website. At this stage it is Photoshop CS3 and Mac OS 10.6 only. It is not perfect, and there is room for improvement, but it helps me in some of my analyses, and maybe you will find it useful or have suggestions for improvements.

And BTW, while you are there, you might like to get hold of the Mac versions of the FFT/IFFT plugins, which are quite helpful in certain situations.

Sincerely,

Joofa
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ErikKaffehr
« Reply #59 on: December 03, 2011, 01:45:40 AM »

Hi,

Thank you for sharing the plugin. I did not test it because I use a different OS, a different version of Photoshop, and so on.

I absolutely agree that the image is what photography is about. On the other hand, understanding the technical aspects enables us to make educated choices when spending our money, which used to be a finite resource for most of us, and also to make the best use of our investment.

Best regards
Erik


Unfortunately, it is not always feasible for ordinary users to determine things such as MTF, "microcontrast", etc.: special test charts, methodology, and software have to be used. And still, what about the real images you have acquired, i.e., images of landscapes, cats, oranges, etc., and not test charts in a controlled setting? To complicate matters further, there is arguing over what constitutes lens sharpness, pixel pitch, FOV, and what not. At the end of the day it is the image we are after, and it would be useful to have notions of image quality based purely on the pixel data, detached from sensor pitch, lens, aperture, image display size, etc.

Realizing the vagueness and difficulty associated with this paradigm, I developed a measure of image detail, JIDM, which a user can simply run through Photoshop (or ImageJ); it gives you a number in the [0-1] range, where higher means more detail. As an example, see below:

The good thing is that one can select an area of an image using the Photoshop marquee tool, and it will do the detail-measure analysis only in that area. You can download it freely from my website. At this stage it is Photoshop CS3 and Mac OS 10.6 only. It is not perfect, and there is room for improvement, but it helps me in some of my analyses, and maybe you will find it useful or have suggestions for improvements.

And BTW, while you are there, you might like to get hold of the Mac versions of the FFT/IFFT plugins, which are quite helpful in certain situations.

Sincerely,

Joofa