Author Topic: Is the D800E expected to have noticeable better IQ than the D800?  (Read 14869 times)
nma
« Reply #40 on: February 19, 2012, 12:10:58 PM »

BJL: I completely agree with you; oversampling is the way to go. In fact, I would go a bit further: if it were practical, digital imaging could be improved by increasing sampling to the point that the optical (lens) components were the fundamental limit on image quality. If this could be achieved, issues like aliasing could be made negligible. This solution would not necessarily sentence us to living with huge files: the oversampled raw image could be processed in the camera and downsampled to increase SNR at the pixel level, reducing the size to more manageable proportions.

The approach of oversampling, followed by filtering then down sampling is used in digital audio to improve the performance of CD playback. Why not in digital photography?
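To make the audio analogy concrete, here is a toy numpy sketch (the 16x oversampling factor, the signal level of 100, and the read-noise level of 8 are arbitrary illustrative assumptions, not any camera's real parameters): a box low-pass filter followed by decimation cuts the noise by sqrt(16) = 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: a constant "scene" value of 100, sampled 16x
# above the target rate, with additive read noise of std 8 per sample.
oversample = 16
samples = 100.0 + rng.normal(0.0, 8.0, size=1000 * oversample)

# Low-pass filter + downsample in the simplest form: a box average
# over each non-overlapping group of 16 samples, then keep one value.
downsampled = samples.reshape(-1, oversample).mean(axis=1)

print(round(samples.std(), 1))      # noise at the oversampled rate, ~8
print(round(downsampled.std(), 1))  # ~2, i.e. 8/sqrt(16): 4x better SNR
```

The same sqrt(N) argument is why averaging N photosites into one output pixel raises pixel-level SNR, provided the filtering happens before decimation.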

BartvanderWolf
« Reply #41 on: February 19, 2012, 12:52:19 PM »
Quote

The approach of oversampling, followed by filtering then down sampling is used in digital audio to improve the performance of CD playback. Why not in digital photography?

Because there's a difference between sampling a few channels per unit time, versus tens of millions ...?

Cheers,
Bart

Scott O.
« Reply #42 on: February 19, 2012, 01:15:35 PM »

There are just too many questions about the D800E for me to be able to make a decision without seeing meaningful comparisons from real people who are using them. If I have to choose between models before I see those comparisons (due to my position on B&H's waiting list), I will choose the D800. After all, the D800 will probably be ever so much sharper than anything else we now have...

hjulenissen
« Reply #43 on: February 19, 2012, 01:26:29 PM »
Quote

Because there's a difference between sampling a few channels per unit time, versus tens of millions ...?

Cheers,
Bart
CD does 44,100 samples per second at 16-bit precision.

Photography does tens of megapixels of samples per frame at ~14-bit precision.

I think those are the most valid dimensions to compare.

I believe Dr. Eric Fossum has suggested sensors capable of registering (close to) every single photon hitting the sensor. If that is ever possible, you would be limited only by the physics of light. And you would have a very high spatial resolution, very low intensity-resolution signal whose noise characteristics may or may not make it similar to the flopped SACD format that was proposed as an upgrade to CD a few years ago.

-h

LKaven
« Reply #44 on: February 19, 2012, 03:34:56 PM »
Quote

CD does 44,100 samples per second at 16-bit precision.

Photography does tens of megapixels of samples per frame at ~14-bit precision.

I think those are the most valid dimensions to compare.

I believe Dr. Eric Fossum has suggested sensors capable of registering (close to) every single photon hitting the sensor. If that is ever possible, you would be limited only by the physics of light. And you would have a very high spatial resolution, very low intensity-resolution signal whose noise characteristics may or may not make it similar to the flopped SACD format that was proposed as an upgrade to CD a few years ago.

I agree. I think we are progressing from intelligible pixel-level information to a level where pixels are considered in a more abstract information-processing model. In other words, as pixels become smaller, they undergo a qualitative shift from being intelligible as picture elements to being unintelligible as picture elements, but useful as information.

Fossum's "Jot" sensor imagines a sensor in which each photosite captures at most a single photon. In what he calls the "sub-diffraction layer" there is still an abundance of veridical information that can be exploited by means known and unknown.
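The statistics behind such a binary-photosite sensor can be sketched with a toy simulation (the jot count of 4096 per output pixel and the 20% detection probability are made-up illustrative numbers, not Fossum's figures): summing many 1-bit jots recovers a graded intensity whose noise is the binomial shot-noise floor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up parameters: 4096 single-bit "jots" contribute to one output
# pixel, and each jot has a 20% chance of detecting a photon.
jots_per_pixel = 4096
p_photon = 0.2

# Simulate 1000 output pixels of a uniform scene: each jot reports 0 or 1.
jots = rng.random((1000, jots_per_pixel)) < p_photon
pixel_values = jots.sum(axis=1)   # graded intensity from 1-bit samples

print(pixel_values.mean())  # ~819 (= 4096 * 0.2)
print(pixel_values.std())   # ~25.6 = sqrt(4096 * 0.2 * 0.8), shot-noise floor
```

This is the sense in which individual jots are unintelligible as picture elements but useful as information: only their aggregate is photographically meaningful.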

Tying it all together is the notion (which we've just started to explore in recent threads) that there is an information-preserving property in the oversampling methods that is not present in the non-oversampling methods, and that this is evident even at smaller viewing sizes.  Identifying what those properties are and what information is preserved -- and how it could be best preserved -- would be a useful undertaking.

Luke

BJL
« Reply #45 on: February 19, 2012, 06:01:12 PM »
Quote

CD does 44,100 samples per second at 16-bit precision.

Photography does tens of megapixels of samples per frame at ~14-bit precision.

I think those are the most valid dimensions to compare.
Yes, so we might have a way to go. However, photography does have the advantage of massive parallelism, as in Sony's column-parallel approach. Each column of a D800 sensor gives about 5,000 photosites to be handled by the ADC at the bottom of the column, so even in some imagined 60 fps super-resolution video, that is only about 300,000 samples per second per ADC, far lower than off-board ADCs manage now. Read-out beyond the ADCs might need Gb/s digital signal handling, but that is not so hard these days for digital signal transmission and storage.
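As a back-of-envelope check of those figures (assuming the D800's roughly 7360 x 4912 photosite layout and the hypothetical 60 fps readout mentioned above, not a real camera mode):

```python
# Per-column ADC load: one ADC reads one column of ~4912 photosites.
rows_per_column = 4912
fps = 60                                   # imagined super-res video rate
samples_per_adc = rows_per_column * fps
print(samples_per_adc)                     # 294720 samples/s per ADC

# Aggregate bandwidth past the ADCs at 14 bits per sample:
total_gbps = 7360 * 4912 * fps * 14 / 1e9
print(round(total_gbps, 1))                # ~30.4 Gb/s of digital readout
```

So each column ADC runs at a very modest rate, while the aggregate stream past the ADCs is indeed a multi-Gb/s digital transport problem.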

The bigger problem for now is per-photosite read noise: it needs to be kept safely below photon shot noise in the photosites over the "photographically interesting" range of subject brightness levels, and that gets harder with very many, very small photosites.

Note: with the idea that the outputs from individual cells on a sensor are not "picture elements" in themselves, but "atoms" combined in large numbers into photographically significant output, I prefer to speak of the sensor's cells as "photosites" rather than "pixels"; the pixels used to produce the final displayed image will likely be constructed from cell-level signals in subsequent processing. Actually, they always are, with demosaicing, moiré removal and such.

LKaven
« Reply #46 on: February 19, 2012, 10:10:24 PM »
Quote

Note: with the idea that the outputs from individual cells on a sensor are not "picture elements" in themselves, but "atoms" combined in large numbers into photographically significant output, I prefer to speak of the sensor's cells as "photosites" rather than "pixels"; the pixels used to produce the final displayed image will likely be constructed from cell-level signals in subsequent processing. Actually, they always are, with demosaicing, moiré removal and such.

Right, it would help to keep the semantic levels distinct.

NikoJorj
« Reply #47 on: February 20, 2012, 02:23:57 AM »
Quote

Check out: http://echophoto.dnsalias.net/ekr/index.php/photoarticles/49-dof-in-digital-pictures?start=1

The top right image would be considered sharp by most DoF tables. The left column shows the effects of diffraction, and in my view it's quite obvious that we start losing sharpness already at f/8.

That said, sharpening plays an important role: http://echophoto.dnsalias.net/ekr/index.php/photoarticles/49-dof-in-digital-pictures?start=2
Many thanks for those very useful tests!
IMHO, it's evident that the diameter of the Airy disk does not correlate with that of a misfocus-generated circle of confusion - and, as well illustrated in the deconvolution thread, most diffraction effects can be very decently cured with appropriate sharpening.

Back on topic, I'd see three arguments pro-OLPF:
- even in landscape use, here in Europe there can always be a tile roof or stone wall to present a nasty repetitive pattern at a nasty frequency,
- the effects of OLPF blurring are more easily overcome in processing than some of the aliasing artifacts,
- anyway, if you want a very high quality print, you have to keep the output ppi above a certain threshold (around 300 ppi?), so that pixel-level artefacts and/or lack of texture are mostly hidden by the printing process and not exposed in plain sight.

nma
« Reply #48 on: March 01, 2012, 10:39:37 AM »
Quote

Because there's a difference between sampling a few channels per unit time, versus tens of millions ...?

Cheers,
Bart

Hello Bart,

Are we overtaken by events? The announcement of the Nokia 41 MP camera phone seems to be just the development I was suggesting. From http://europe.nokia.com/pureview: "How good can a pixel be? It's not about the amount of pixels, it's what you do with them. Nokia PureView imaging technology can distil 7 pixels into 1 for stunningly sharp and clear 5 MP photos that are easy to share."

BartvanderWolf
« Reply #49 on: March 01, 2012, 11:22:29 AM »
Quote

Hello Bart,

Are we overtaken by events? The announcement of the Nokia 41 MP camera phone seems to be just the development I was suggesting. From http://europe.nokia.com/pureview: "How good can a pixel be? It's not about the amount of pixels, it's what you do with them. Nokia PureView imaging technology can distil 7 pixels into 1 for stunningly sharp and clear 5 MP photos that are easy to share."

It is unlikely that low-pass filtering is applied (because that is a computationally intensive operation), other than relying on diffraction and optical aberrations. Binning of pixels can be done relatively fast (maybe on-chip), but the result will suffer from aliasing artifacts. We'll see what kind of quality the actual production phonecams produce.

It is no surprise that sooner or later someone would actually implement such simplified oversampling technology, since Dr. Eric Fossum already discussed that direction in 1995, and Nokia/Texas Instruments' implementation is not even close to his updated (2008) vision of single-photon detectors (jots).

This lecture explains a bit more about the 0.1 micron pitch jots towards the end of the one-hour talk, from the inventor of the CMOS image sensor.

Cheers,
Bart

opgr
« Reply #50 on: March 01, 2012, 12:28:02 PM »
Quote

It is unlikely that low-pass filtering is applied (because that is a computationally intensive operation), other than relying on diffraction and optical aberrations. Binning of pixels can be done relatively fast (maybe on-chip), but the result will suffer from aliasing artifacts. We'll see what kind of quality the actual production phonecams produce.

I have to disagree with you strongly.

1. Low-pass filtering is absolutely not a computationally intensive operation. Most practical filters can be decomposed into a horizontal and a vertical pass, making them very easy to apply in processing pipelines.

2. Binning is exactly equivalent to one such low-pass filter (a block filter).

3. The amount of aliasing will depend on the amount of **overlap** chosen in the filtering. Suppose you want to turn every 2x2 block of sensels into a single resulting pixel. You can instead average 4x4 sensels for every output pixel, with adjacent blocks overlapping. This wider, overlapping average reduces aliasing artifacts in exactly the way one wants.
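A minimal numpy sketch of this contrast (the 64x64 test pattern and the 4x4 window advanced by 2 sensels are illustrative choices, not any camera's actual pipeline): both methods decimate by 2, but the overlapped windows form a wider low-pass filter, so a pattern just above the output Nyquist frequency aliases with roughly half the amplitude.

```python
import numpy as np

def bin_2x2(img):
    # Non-overlapping 2x2 binning: a 2x2 box filter + decimation by 2.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def bin_overlapped(img, win=4, step=2):
    # Average win x win windows advanced by `step` sensels: adjacent
    # output pixels share input sensels, giving a wider low-pass
    # before the same 2x decimation.
    h, w = img.shape
    out = np.empty(((h - win) // step + 1, (w - win) // step + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i*step:i*step+win, j*step:j*step+win].mean()
    return out

# Test pattern: a horizontal cosine with a 3-sensel period, i.e. above
# the Nyquist limit of the 2x-decimated output, so it must alias.
x = np.arange(64)
img = np.tile(np.cos(2 * np.pi * x / 3), (64, 1))

# Residual energy of the aliased pattern (a flat field would give 0):
print(round(bin_2x2(img).std(), 2))        # ~0.35: strong alias survives
print(round(bin_overlapped(img).std(), 2)) # ~0.18: about half the alias
```

The loop form is for clarity; a real pipeline would implement the same overlapped average as a separable horizontal-then-vertical filter, per point 1.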

BartvanderWolf
« Reply #51 on: March 01, 2012, 02:44:12 PM »
Quote

I have to disagree with you strongly.

1. Low-pass filtering is absolutely not a computationally intensive operation. Most practical filters can be decomposed into a horizontal and a vertical pass, making them very easy to apply in processing pipelines.

On a Bayer CFA sensor? Sure, the demosaicing could be dumbed down by not calculating the missing spectral bands, but still. Nokia say they added a separate processor for the pixel scaling, which hands off the result to the image processor. That doesn't sound like a simple column-and-row process, does it?

Quote
2. Binning is exactly equivalent to one such low-pass filter (a block filter).

A block filter on a Bayer CFA? It would require a significantly more complex readout than that, especially when digital zoom is involved. It will be interesting to see what the battery life is when using the camera a lot; the second processor must use some power, it would seem.

Quote
3. The amount of aliasing will depend on the amount of **overlap** chosen in the filtering. Suppose you want to turn every 2x2 block of sensels into a single resulting pixel. You can instead average 4x4 sensels for every output pixel, with adjacent blocks overlapping. This wider, overlapping average reduces aliasing artifacts in exactly the way one wants.

Yes, I'm well aware of what could be done, what would be optimal, and how much the various concessions to the optimal solution would save in processing overhead. What we do not know is which concessions were actually made. I also know that Texas Instruments is not new to the field of DSP, so maybe they found an elegant and energy-efficient solution to produce those 5-8 MP output images. The 'full resolution' image would require regular demosaicing, and is probably not that high resolution, especially after noise reduction. The fixed-focus lens (5 elements in 1 group, with all lens surfaces aspherical), while a bit bulky on a phone, should make a good contribution to image quality though.

Cheers,
Bart

hjulenissen
« Reply #52 on: March 02, 2012, 05:28:12 AM »
Quote

I have to disagree with you strongly.

1. Low-pass filtering is absolutely not a computationally intensive operation. Most practical filters can be decomposed into a horizontal and a vertical pass, making them very easy to apply in processing pipelines.
41 million sensels at 12-14 bits each is still a considerable amount of data to transfer from sensor to memory and to make one or more passes across, even if the filter itself is simple.

In a large-scale custom design like this, it might make more sense to do the appropriate amount of pre-filtering in the lens/OLPF, and then do binning within the sensor, so that externally it appears to be an 8-megapixel sensor (or whatever).

Nokia have previously introduced new ideas with "depth-coding" fixed-focus lens/sensor cameras. I expect them to do something clever here as well. Too bad that everyone seems hung up on the "41 MP" headline rather than the real perceptible IQ (which might be very good for a cellphone).

-h

Johnny_Boy
« Reply #53 on: March 04, 2012, 10:46:51 PM »

Guys, you lost me at "hello" :D
(Wow, a lot of technical details. So D800E should have better IQ than D800? :) )

marcmccalmont
« Reply #54 on: March 04, 2012, 11:54:24 PM »
Quote

Guys, you lost me at "hello" :D
(Wow, a lot of technical details. So D800E should have better IQ than D800? :) )
OK, the simple answer is: if your subject is a bush, yes, the D800E would have better IQ; if your subject is a businessman in a suit behind a screen door in a brick building, the D800 would be the better choice.
Marc :)

BernardLanguillier
« Reply #55 on: March 05, 2012, 06:18:33 PM »

More D800-D700 high ISO samples comparisons:

http://nikonrumors.com/2012/03/05/another-nikon-d700-vs-nikon-d800-high-iso-comparison.aspx/

It looks like the D800 is about one stop better once resized down to 12 MP.

That is without applying Topaz, of course... Considering the progress made in three years by noise-reduction packages, it can probably be said fairly confidently that the D800 will be close to two stops better in terms of actual prints, compared to the days when the D700/D3 were released.

Cheers,
Bernard

julianv
« Reply #56 on: March 08, 2012, 11:59:34 PM »

Comments from Nikon on the moiré issue, and a couple of additional D800 vs D800E crops for comparison.

http://www.nikonusa.com/Learn-And-Explore/Nikon-Camera-Technology/gy43mjgu/1/Moire-and-False-Color.html

I don't see a lot of difference between the comparison images.  If this is really the story, I don't understand why Nikon would go to the trouble of producing two different versions.  Perhaps the advantages of the E are more apparent in RAW.

Mulis Pictus
« Reply #57 on: March 09, 2012, 12:20:26 PM »
Quote

I don't see a lot of difference between the comparison images.  If this is really the story, I don't understand why Nikon would go to the trouble of producing two different versions.  Perhaps the advantages of the E are more apparent in RAW.

For some reason NikonUSA.com put scaled-down crops there - which is nonsense when you want to compare resolution. One of the comparison images at its original size is used in the article on mansurovs.com here:

http://mansurovs.com/wp-content/uploads/2012/02/Nikon-D800-vs-D800E-Sharpness.jpg

(whole article http://mansurovs.com/nikon-d800-vs-d800e)

When you look at the original D800E photo at http://chsvimg.nikon.com/lineup/dslr/d800/img/sample02/img_05_l.jpg, you can see that the crops at nikonusa.com are indeed scaled down. Hopefully that's only a mistake and not an intention to drive people away from the D800E ;-)

Scott O.
« Reply #58 on: March 09, 2012, 01:44:41 PM »

Anyone else notice the fairly limited depth of field at f/8?

Ray
« Reply #59 on: March 09, 2012, 10:13:34 PM »
Quote

Comments from Nikon on the moiré issue, and a couple of additional D800 vs D800E crops for comparison.

http://www.nikonusa.com/Learn-And-Explore/Nikon-Camera-Technology/gy43mjgu/1/Moire-and-False-Color.html

I don't see a lot of difference between the comparison images.  If this is really the story, I don't understand why Nikon would go to the trouble of producing two different versions.  Perhaps the advantages of the E are more apparent in RAW.

In that Nikon article on moiré, Nikon mentions three times that the increase in resolution from the D800E is slight. If one compares the crops that are claimed to be 100%, in the link provided by Mulis Pictus, http://mansurovs.com/wp-content/uploads/2012/02/Nikon-D800-vs-D800E-Sharpness.jpg , one can see that the increase in resolution is indeed slight... very slight.

If this example that Nikon has provided is a fair and typical example of what to expect from the D800E, then all I can say is that the benefits of having no AA filter are truly trivial. When examining those crops at 200%, I feel like I'm comparing a 15 MP sensor with a 16 MP sensor, both of which have equally strong AA filters.

Perhaps Nikon have decided to offer a version of the D800 without an AA filter purely for market-research purposes. They want to find out just how big the market may be for an AA-less 35mm-format camera. They are charging more for the D800E to ensure they don't make a loss if only six people buy the model. ;D