Author Topic: Is This The End Game?  (Read 17147 times)
Quentin
Sr. Member
Posts: 1123
« on: July 17, 2005, 04:43:39 AM »

Quote
For those deeply in the topic I recommend the scientific study
over Bayer array based digital capture vs. film, on both
resolution and dynamic range, at ClarkVision.  You can start
from the equivalent digital resolution chart at:

http://clarkvision.com/imagedetail/film.vs.digital.1.html

which suggests that Michael's projection that the P45 would provide
scanned 8x10 film quality is highly improbable.

Leping Zha
Landscape Photographer and Ph.D. in Physics
www.lepingzha.com
Yes, but in my opinion this comparison is a load of bull.  

Quentin, BA (Hons), MSc, MCIArb, CNI, Freeman of the City of London and so on (all genuine)...

Quentin Bargate, ARPS, Author, photographer entrepreneur and senior partner of Bargate Murray, Law Firm of the Year 2013
BernardLanguillier
Sr. Member
Posts: 8387
« Reply #1 on: July 18, 2005, 03:23:26 AM »

Quote
I'd rather wait a couple of years, save the kidney, and get it for 10K. It looks like prices are dropping fast on high-end digitals. Well, maybe not that high end, so you got me there. But at least for the professional-quality DSLRs that were 6K two years ago.
Besides, other manufacturers will probably release soon after Phase One backs based on the same sensors at significantly lower prices...

Mamiya being a good candidate... provided they manage to release the first generation to start with... :-)

Regards,
Bernard

A few images online here!
Mark D Segal
Contributor
Sr. Member
Posts: 7126
« Reply #2 on: July 18, 2005, 03:42:28 PM »

I agree with those who say that nothing counts more than actual experience and real prints of real photographs meant to be enjoyed (i.e. this eliminates line charts). I believe the weakest link in the digital chain is now the color gamut of ink jet printers when using archival pigmented inks on matte surfaces - even though that too has improved by leaps and bounds over the past several years - but there is more to go. As for lenses, used at their optimal f/stops, one doesn't need lenses costing in the thousands to get EXCELLENT visible resolution with film or digital.

I can get very acceptable results from film and from digital, and I am working on both at the same time, but digital wins hands-down - to the extent my "goal" is to get my prints from color negatives looking as clean, sharp and well balanced as my prints from a Canon 1Ds. Quite a challenge. Let us compare the workflow for ROUTINE images that require the least amount of post capture repair work:

Film:
-clean the negative
-do a pre-scan
-fine-tune the scanner software till the image seems OK
-do the final scan
-open the image in photoshop
-apply Neat Image to get rid of the grain (even 100ASA film)
-careful spotting to get rid of all the crud step one failed to achieve;
-use PK Capture Sharpen to rescue lost acutance from above steps;
-fiddle with color balance and contrast in Photoshop (scanning software never perfect);
-final sharpen and print.
(Total Time: about 45 minutes to an hour per picture).

Digital:
-download the flash card;
-adjust and convert in camera RAW;
-PK Capture Sharpen;
-a contrast tweak in Photoshop;
-set image size for printing;
-final sharpen and print.
(Total Time: about 2 minutes, or less).

Comparing results, the 1Ds will still outperform the most carefully executed film scan using slow color film that has excellent shadow detail and a Minolta Dimage 5400 PPI scanner. I know this is an old story by now, but from my personal, immediate - NOW for NOW experience - that's just how it is.

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #3 on: July 19, 2005, 02:16:18 AM »

Quote
Quote
The lack of detail in some white checks would, to me, indicate a bit of overexposure.  
Jonathan! Overexposing an image! Do you realize what you are saying? Is this even conceivable   .
Actually, that image is about 1/3 of a stop over what I would have ideally preferred; the ACR conversion exposure setting was -0.65, while I prefer it to be about -0.30 for the absolutely best results. But most of the clipping cropped up when converting to sRGB for web display, which is why I prefer 16-bit ProPhoto or ARGB for printing.

Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #4 on: July 19, 2005, 07:25:30 PM »

Quote
Quote
That's where the quibbling starts to crop up; information and data are not quite the same thing. Data can contain information, but if there is less information than data, the data can be compressed down to approximately the size of the actual information it contains.
Jonathan,

You are correct, but this distinction isn't really relevant to the discussion at hand, is it? The same gap between data and information will theoretically be present on Foveon and Bayer sensors, right?
No, it is relevant; Foveon sensors generate 3x the data of a Bayer sensor with the same pixel count, but not 3x the actual image information. So the gap between data and information is wider with a Foveon sensor than a Bayer sensor; a Foveon sensor outputs 300% of the data of a Bayer sensor, but only 130% (approximately) of the actual image information. That is why a Foveon sensor will output more detailed images than a Bayer sensor with the same pixel count, but they're not three times better than the Bayer image. As processing and interpolation techniques improve this gap will narrow, but never quite close.

jcarlin
Newbie
Posts: 31
« Reply #5 on: July 20, 2005, 11:31:29 PM »

Bernard,
Jonathan mentioned information theory, and I just thought I would complete his thought mathematically. The mean squared error introduced by the bit quantization that happens after a given operation is

E = (D^2)/12

where D = 1/(2^n)
where n is the number of bits.

This means that the mean squared error from one 8-bit operation is about 65,000X (2^16) greater than the error from one 16-bit operation.  In practice they're both pretty small.
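John's ratio can be checked numerically; a minimal sketch (assuming a unit-range signal, with E the mean squared quantization error from the formula above):

```python
def mean_sq_quant_error(bits):
    """E = D^2 / 12, where D = 1/2^n is the quantization step size."""
    step = 1.0 / (2 ** bits)
    return step ** 2 / 12.0

ratio = mean_sq_quant_error(8) / mean_sq_quant_error(16)
print(ratio)  # -> 65536.0, i.e. (2^16 / 2^8)^2, the "~64000X" figure
```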

Also, when comparing Bayer vs. Foveon, earlier posters correctly pointed out the difference between information and data.  The fact that there isn't really an independent RGB value for each pixel in a Bayer array, and the fact that we can nonetheless produce perfectly usable images from one, points directly to the conclusion that a Foveon array probably doesn't deliver that much more information.  In practice, exactly how much more information a Foveon array delivers could probably be figured out by examining the assumptions behind Bayer interpolation.


John
med007
Full Member
Posts: 110
« Reply #6 on: July 25, 2005, 03:01:09 AM »

Deleted by Asher

Journeys to the Masterpiece
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #7 on: August 16, 2005, 10:51:18 PM »

Digicams are already pushing the limits of smallest usable sensor pixel size; the pixels in current 8MP models such as the Sony F828 are about 2x the wavelength of light. IMO this is one reason why the megapixel race in that category has slowed; to pack in more pixels requires a physically larger sensor, with the attendant larger lens, all of which increases cost and weight. If you do that, you might as well upgrade to a DSLR. Just as an interesting theoretical exercise, a 60MP full-frame sensor (~9487x6325 resolution) would be far enough beyond the capabilities of any currently available glass as to not need an AA filter, and could still have a pixel pitch of 3.794 microns. If you downsampled to 15MP in-camera, you'd have enough input pixels for each output pixel that color interpolation artifacts would be pretty much non-existent, and noise could take a pretty sharp drop, too. I'd say that would likely represent a practical upper limit for a 24x36mm-format camera unless there are some really amazing advances in lens technology.
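As a quick sanity check on those numbers, here is a back-of-the-envelope sketch (my assumptions: a 3:2 aspect ratio and a 36mm sensor width; the 60MP figure is Jonathan's hypothetical):

```python
import math

pixels = 60e6                         # hypothetical 60MP full-frame sensor
width_px = math.sqrt(pixels * 3 / 2)  # 3:2 aspect ratio -> ~9487 px wide
height_px = width_px * 2 / 3          # -> ~6325 px tall
pitch_um = 36_000 / width_px          # 36mm width in microns / pixel count

print(round(width_px), round(height_px), round(pitch_um, 3))
# -> 9487 6325 3.795
```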

BJL
Sr. Member
Posts: 5182
« Reply #8 on: September 09, 2005, 05:46:07 PM »

Quote
Photonic noise is the square root of the total number of photons impinging upon the photodetector. Of 16 photons impinging upon the 8 micron photodetector, 4 will be noise. Total noise for that area of sensor is 25%.

If we cover the same area with 4x4 micron photodetectors, each photodetector will receive 4 photons, two of which are noise. Total noise for that area of sensor is 4x2=8 photons. Ie., 50% noise.
Wrong, or at least irrelevant, because you ignore that noise is a mixture of positive and negative variations around the "true" value. When signals are merged, there is some cancellation of positive and negative noise values, so total noise increases less than in proportion to the number of signals combined. For the common and simple case of uncorrelated noise, noise levels combine in root-mean-square fashion, and so total noise grows as the square root of the number of values combined.

Let me rework your example of using either one big photosite or four smaller photosites to gather light from a given part of the image, and then combining (binning) the four small photosite signals to get the same resolution as the big-photosite sensor.

Say the larger photosites, receiving light from a subject at a certain illumination level, should gather 16 photons, but due to noise the resulting electron count is "16 plus or minus 4", meaning that various photosites receive an average of 16 photons each, with fluctuations above and below that value of standard deviation sqrt(16)=4.

If the subject is instead photographed with a sensor whose photosites are one quarter the area, each will give an average count of four, with standard deviation sqrt(4)=2.  If the signals from the four small photosites covering the same part of the subject as one big photosite are combined (binned), the average signals simply add, to a total of 16, while the four standard deviations (noise levels) of 2 combine as follows:
sqrt(2^2 + 2^2 + 2^2 + 2^2) = sqrt(16) = 4,
EXACTLY the same signal and noise standard deviation as if you had used one bigger photosite to start with.
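That root-mean-square argument can be checked with a quick Monte Carlo sketch (a hypothetical illustration of mine, modeling photon noise as Poisson, which is the standard assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # simulated exposures of one image patch

# One big photosite: mean 16 photons per exposure.
big = rng.poisson(16.0, size=n)

# Four quarter-area photosites (mean 4 photons each), binned by summing.
binned = rng.poisson(4.0, size=(n, 4)).sum(axis=1)

print(big.mean(), big.std())        # ~16, ~4
print(binned.mean(), binned.std())  # ~16, ~4 -- same signal, same noise
```

Both versions come out with the same mean and the same standard deviation, exactly as the sqrt(2^2+2^2+2^2+2^2) arithmetic predicts.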


Actually, this fancy mathematics is not needed in the case of photon noise, which you should remember is variation in the light arriving at the sensor, not something caused by the sensor itself. Clearly, whether you produce each "big pixel" by counting the light arriving at a certain part of the sensor with one big photosite, or use four smaller photosites and then combine the totals into a single output "big pixel" value, the total light received will be the same, and thus the variation between neighboring big-pixel values will also be the same.

Thus, as far as photon noise goes, aggregating data from more, smaller photosites gives the same S/N ratio as if fewer, larger ones were used.
This also works if the aggregation is done by printing the smaller pixels at higher pixel density to get the same image size, and viewing from a distance at which the lower-pixel-count image is not visibly pixelated: the smaller pixels will then be too small to resolve, and so get visually averaged by the eye. Conversely, if the smaller pixels are big enough to resolve, their worse per-pixel noise might be detected, but the alternative evil with bigger pixels is visible pixelation!
Gary Ferguson
Sr. Member
Posts: 540
« Reply #9 on: July 16, 2005, 09:43:22 AM »

News of a 39MP medium format back makes me wonder if the digital growth curve, at least in terms of pixel count, is starting to flatten out. I'm finding that with a Canon 1Ds Mk II it's the available wide-angle lenses and my ability to hand hold that's the limiting factor, not the sensor.

Anyone any thoughts? Are we approaching the pixel count end game, and where's the practical limits for 35mm and MF?
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #10 on: July 17, 2005, 01:03:03 AM »

Quote
For those deeply in the topic I recommend the scientific study
over Bayer array based digital capture vs. film, on both
resolution and dynamic range, at ClarkVision.
Clark's spouting the same old silliness that film snobs have been using to claim that film is better than digital, claims that have been thoroughly refuted in practical experience for several years now. He claims that 16 Bayer megapixels are needed to match 35mm Velvia slides, which is rather easily debunked. The 11MP 1Ds is capable of matching or beating the best drum-scanned 6x7cm medium format film images, and the 16.7MP 1Ds-MkII is even better. (See this article for a direct comparison.) If 11MP can hold its own against 6x7 film, then obviously it is easily capable of surpassing 35mm film of any persuasion. This is not theory, but the direct result of observation and comparison, and in my case, of shooting over 50,000 frames with my 1Ds. Comparing the overall image quality of those frames to 35mm film shot through the same lenses with a film body attached, it's obvious that the 1Ds is far better than 35mm film in every respect.

Good science can accurately predict the results of real-world applications of theories and premises. Clark's science (or at least his math) is severely flawed, because real-world comparisons between digital and film deviate dramatically from his claims and predictions. He is not a credible source of information or of good scientific analysis.

Guest
« Reply #11 on: July 18, 2005, 07:15:56 AM »

Folks...

The point that Jonathan makes is quite correct.

Each grain of silver in a fine grain film is 1 to 2 microns in size, while a digital sensor may be 5-8 microns. One would assume from this that film can outresolve digital.

But, any individual grain can either be on or off, black or white. It takes 30-40 grains in a random clump to properly reproduce a normal tonal range. On the other hand each individual pixel can record a full tonal range by itself.

So a 100% MTF target will be recorded better by film (because the edges are either black or white), but with anything photographed in the real world, digital's advantage in this respect immediately becomes clear.
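Michael's grain-clump point can be sketched with a toy model (the numbers here are illustrative assumptions of mine, not measurements): each grain is binary, so a clump of n grains can only render n+1 distinct tones, and reproduces a mid-gray only on average.

```python
import random

random.seed(1)

def clump_tone(t, n):
    """Render tone t with n binary grains: the fraction that 'develop'."""
    return sum(random.random() < t for _ in range(n)) / n

# With 36 grains per clump, a 50% gray comes out right on average...
samples = [clump_tone(0.5, 36) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(mean)  # ~0.5

# ...but any single clump can only take one of 37 values (0/36 .. 36/36),
# whereas a single sensor pixel records a near-continuous tonal value.
```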

It always amazes me when people defend theoretical positions which are clearly contradicted by reality. Working photographers with experienced eyes know what they are seeing, and so do their hyper-critical clients who are paying the bills. When someone tells them that the evidence of their eyes is wrong, all one can do is smile and shake one's head. The sad part, though, is when people who don't have the direct personal experience to contradict the theoreticians are intimidated into believing them.

Michael
filmless
Newbie
Posts: 20
« Reply #12 on: July 18, 2005, 03:21:24 PM »

Michael posed the question "Kodak or Dalsa?" on the new 39MP sensor. I do not have any information from Phase One, but I do know another digital back manufacturer has been testing a Kodak 39MP chip of late, so odds certainly favor Phase One using the same Kodak chip.

Tim Palmer
Capture Integration
Mark D Segal
Contributor
Sr. Member
Posts: 7126
« Reply #13 on: July 19, 2005, 07:52:13 AM »

Samir, if we must, go to Photodo (albeit 5 years since the last updating) and look at their ratings for Leica, Canon and Nikon lenses. The Leitz Summicron 50mm f/2 - an old warrior from decades ago - has a rating of 4.8. There are only 2 Canon "L" lenses and NO Nikon lenses that approach or equal that rating.

Dr. Zha, with all due respect to your expertise in medical imaging and lab-like testing, which I shall never have, let us confine ourselves to non-medical photographs - and in that context the comments you made in your last post simply confirm what I was saying about the role of post-capture processing. Thank you. To add a bit of insight to this aspect of the discussion, anyone using PK Sharpener Pro will know that with that tool and a bit of imagination you can achieve just about any "film-like feel" or any other "feel" in relation to sharpening or softening, by choosing the appropriate option and knowing how to move an opacity slider.

As for grain and dust removal at the scanning stage - I suspect that most of the qualities you ascribe to the Imacon in this respect are software related, though never having used one, and not knowing much about its underlying technology, that is just a deduction on my part. My experience indicates one can use much cheaper scanning solutions and produce scans that minimize grain and retain superb detail, though it perhaps takes more time and ancillary software than what you describe. I do not try to deal with grain and dust at the scanning stage, because, not owning an Imacon, I can do it more efficiently and with more process control at the post-scanning stage.

Jonathan - absolutely - unless it is for effect, I also don't see the point of adding noise and grain to otherwise clear, clean photographs. Why muck up what digital technology is allowing us to escape from?

Going back to the initiating comments in this discussion thread, Gary Ferguson asked a question and Michael took a cogent stab at an answer, which sounds OK until one starts to think about how many such predictions in the past have been stood on their heads. But what could be even more interesting here is for contributors to take a long look down the whole chain of digital imaging, from capture to final print, and ask what is the weakest link in that chain. Every time I am about to make a print and I press "CTRL Y" (Soft Proof), I think I know the answer, because it hits me in the face straight from the monitor. But I'd be interested in seeing what others think about "the weakest link" - apart from the tired and specious comparisons between a 1Ds and a Mk II, or a D2x and a 1Ds Mk II, this lens versus that lens, or scanning film versus digital.

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
BernardLanguillier
Sr. Member
Posts: 8387
« Reply #14 on: July 19, 2005, 06:13:22 PM »

Quote
(1) Foveon (or its senior scientists) have never claimed anything like a 3x resolution advantage for the Foveon sensor over a Bayer. Don't confuse what Foveon fanatics say with what the company actually says.
Hi there,

I know they didn't, but I never intended to write that they did.

Cheers,
Bernard

A few images online here!
BernardLanguillier
Sr. Member
Posts: 8387
« Reply #15 on: July 20, 2005, 11:37:54 PM »

Thanks for the information John.

Regards,
Bernard

A few images online here!
Ben Rubinstein
Sr. Member
Posts: 1733
« Reply #16 on: July 22, 2005, 10:08:05 AM »

Quote
That means that the lowest-order bits have to be some combination of guesswork and garbage.

Would that be the reason why the underexposed areas suffer from noise and lack of detail and resolution?

BJL
Sr. Member
Posts: 5182
« Reply #17 on: August 18, 2005, 10:39:07 AM »

To Jonathan in particular,
   thanks for a lot of useful contributions in this thread, whose end seems not to be in sight.


A couple of comments.

a) I am glad that Jonathan might be joining my club advocating "pushing pixels smaller than lenses can resolve to eliminate aliasing ("oversampling"), then using downsampling or more selective noise reduction processing as needed to control visible print noise."

b) That average of 8 bits per pixel of information content is interesting, and makes sense; for one thing, a typical mid-tone pixel near "18% gray" has three or four leading binary zeros.

c) blurring (AA, OOF) does not necessarily lose information, but sometimes mostly just redistributes it, by averaging over nearby pixels. Thus careful "deconvolution" can rearrange the information back closer to where it originally was: some of the processing Jonathan describes and illustrates is not necessarily guilty of losing information at each step.

d) not all visual data are equally useful to the eyes viewing a photograph, and most of the extra information provided by an X3 (Foveon) sensor is of far less significance than that already gathered with a Bayer CFA sensor. This is because the extra data relate to color information at the extremes of the visible spectrum (red, blue), while luminosity and mid-spectrum (green) is more important to the eye.

e) I think Ray is onto something important about Foveon sensors and noise: the lower two color layers receive significantly attenuated light, and all three layers detect a broad spectral mix of colors that has to be "deconvolved" to produce RGB output. That makes the noise problems distinctly worse for a Foveon photosite than for a Bayer CFA photosite of the same size.

f) Perhaps we should ignore posters who spout anti-digital cliches, including ones from the old "CD vs LP" wars.


BJL, Ph. D. in and professor of Applied Mathematics, author of various publications in physics/optics journals, and cynic about people who try to bolster their arguments by flaunting academic credentials
Ray
Sr. Member
Posts: 8943
« Reply #18 on: August 30, 2005, 09:47:25 PM »

Quote
I am not at all sure, but I believe that you are partly right; total "dark noise" is dominated by read out noise rather than sensor dark current noise, except in fairly long exposure times where dark frame subtraction becomes useful for cancelling part of the dark current noise.
Well, better than being completely wrong. :-)

I agree. It would be unreasonable to expect read-out noise to be independent of pixel size. There's probably some variation, but it's not proportional. That's really my point.

Perhaps more significant is the increasing role of photonic noise, as a percentage of total picture noise, as the pixel size decreases. Although, again I'll have to rely on your mathematical expertise to confirm this. But as I understand it, an area on the sensor consisting of one 8 micron pixel would be subject to less photonic noise than the same area covered by 4x4 micron photodetectors.

The reasoning is as follows. I'll use unrealistically small numbers for the sake of simplicity.

Photonic noise is the square root of the total number of photons impinging upon the photodetector. Of 16 photons impinging upon the 8 micron photodetector, 4 will be noise. Total noise for that area of sensor is 25%.

If we cover the same area with 4x4 micron photodetectors, each photodetector will receive 4 photons, two of which are noise. Total noise for that area of sensor is 4x2=8 photons. Ie., 50% noise.

Now, I'm not sure this is flawless reasoning. There might well be some other probabilistic theories at work.

The other significant thing about photodetectors is that they are 3-dimensional, hence the analogy of buckets of water when describing electronic charge in photodetectors, or 'well capacity'.

In a situation where the number of pixels on the sensor exceeds the maximum resolving capacity of the lens, so that we don't need an AA filter, we have presumably dispensed with microlenses. The naked sensor is covered with just a clear piece of glass. How do the photons reach into the well - which has to be just as deep with the smaller pixel to maintain well capacity - through an unavoidably narrower aperture?

Note: I'm using pixel and photodetector interchangeably because in the Bayer type system there appears to be no distinction.
Guest
« Reply #19 on: July 16, 2005, 10:43:18 AM »

Interesting question.

My guess is that 35mm will top out at 22MP, and 645 format at 39MP.

Even today the 16MP Canon 1Ds MKII is pushing up against the performance limits of current lenses. A new generation of digitally optimized primes, or high-end zooms, could support 22MP, but beyond that the actual size of individual pixels will drop too low, and the laws of physics mean that the S/N ratio will render higher resolutions impractical. Such lenses wouldn't be cheap, either.

With medium format the same applies, because to go much beyond the sensor resolution of the P45 would make the individual photo sites too small. It remains to be seen, but my guess is that the P45 will push even the superb Zeiss lenses on a Contax to their limits.

That isn't to say that for competitive reasons some chip maker might not come up with denser sensors, but given that a P45 now allows us to make 24x36" prints without ressing up, I think we're near the end-game, as you point out.

But there are many other games to be played, including sensitivity, on-chip noise reduction and so forth, so the end of history isn't quite in sight yet.

Michael