Pages: « 1 ... 3 4 [5] 6 »
Author Topic: Is This The End Game?  (Read 16286 times)
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #80 on: July 22, 2005, 09:40:04 AM »

That approach makes sense; a 60MP Bayer sensor would certainly be far enough beyond the capability of any current 35mm-format lens to eliminate the need for an AA filter. After doing color interpolation, lens corrections, and noise reduction, downsampling to 50% of the original pixel dimensions would leave one with an extremely high-quality 15MP image relatively free of digital processing artifacts.
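The noise side of that downsample-after-oversampling argument is easy to sketch. This is a toy simulation with made-up numbers, not real sensor data: averaging each 2x2 block of a noisy capture (a crude 2:1 linear downsample) cuts per-pixel noise roughly in half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated flat gray patch with additive sensor noise (arbitrary figures).
h, w, sigma = 1000, 1000, 8.0
patch = 128.0 + rng.normal(0.0, sigma, (h, w))

# 2:1 downsample by averaging each 2x2 block (a crude box filter).
down = patch.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(round(patch.std(), 2))  # ~8.0
print(round(down.std(), 2))   # ~4.0: averaging 4 samples halves the noise
```

A real raw converter would use a better low-pass filter than a box average, but the square-root noise statistics are the same.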
Logged

Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #81 on: August 18, 2005, 08:36:17 PM »

Quote
To Jonathan in particular,
  thanks for a lot of useful contributions in this thread, whose end seems not to be in sight.
You're welcome.

Quote
a) I am glad that Jonathan might be joining my club advocating "pushing pixels smaller than lenses can resolve to eliminate aliasing ("oversampling"), then using downsampling or more selective noise reduction processing as needed to control visible print noise."

As long as dynamic range is not objectionably compromised by doing so, and the chip can be read out fast enough that the extra photosites don't objectionably slow down the frame rate, I'm in favor of that as a strategy. I think we're a few generations of sensors and supporting electronics (DIGIC V, anyone?) away from that being a viable approach, but will go out on a limb and predict that such a state of affairs is less than 10 years away. Contemplate, if you would, a 200MP 4x5 full-frame back...
Logged

jcarlin
Newbie
Posts: 31
« Reply #82 on: July 26, 2005, 04:28:08 PM »

Quote
Quote
That means that the lowest-order bits have to be some combination of guesswork and garbage.

Would that be the reason why the underexposed areas suffer from noise and lack of detail and resolution?
In short, yes.
Logged
Ben Rubinstein
Sr. Member
Posts: 1733
« Reply #83 on: July 16, 2005, 08:32:07 PM »

Forget anything else, why doesn't someone come through with a decent solution to expanded DR in the highlights? As MR has pointed out, it's not necessarily about the pixel count any more, even at the high end.
Logged

Ray
Sr. Member
Posts: 8883
« Reply #84 on: July 17, 2005, 07:46:27 AM »

Quote
Good science can accurately predict the results of real-world applications of theories and premises. Clark's science (or at least his math) is severely flawed because real-world comparisons between digital and film deviate dramatically from Clark's claims and predictions. He's not an example of a credible source of information or good scientific analysis..
I never realised that Clark had such little credibility amongst you professionals. I wonder why he goes to so much trouble providing sample images which clearly back up his claims. Does he have shares in film manufacturing companies? Is he just on an ego trip trying to be controversial, or is he just incompetent?

There's a tutorial on Michael's site here by Miles Hecker and Norman Koren. Do they also lack credibility?

You will notice from this tutorial that Fuji Velvia, despite its reputation as a sharp film, is not a high resolution film. It's an oversaturated, overly contrasty film that is brilliant up to around 20-30 lp/mm, then takes a steep dive.

At 50 lp/mm, slightly above the resolution limit of the 1Ds, the MTF of Fuji Velvia is an unimpressive 35%. Even a good 35mm lens is not going to be too hot at 50 lp/mm, and an MF 6x7cm lens will be even worse.
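To put those numbers together: to a first approximation the MTFs of the components in an imaging chain multiply. Using Ray's 35% Velvia figure and an assumed 50% lens figure (both at 50 lp/mm; the lens number is illustrative, not measured), the combined contrast is small:

```python
# System MTF at a given spatial frequency is, to first order, the product
# of the component MTFs. Both figures below are illustrative.
lens_mtf = 0.50   # assumed: a good 35mm lens at 50 lp/mm
film_mtf = 0.35   # Fuji Velvia at 50 lp/mm, per the post above

system_mtf = lens_mtf * film_mtf
print(f"{system_mtf:.1%}")  # 17.5% -- very little usable contrast on film at 50 lp/mm
```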

We don't generally get MTF charts of lenses at 50 lp/mm because they would be just too embarrassing. You'd get Canon wide angle lenses that had zilch MTF at the edges. Not good for sales.

My guess is, by using Fuji Velvia as the film of choice for a shoot-out between the 1Ds and the Pentax 6x7, you guarantee that nothing much beyond 40 lp/mm on the film is relevant. Maybe there will be something there at 50 lp/mm, but barely visible through the miasma of grain.

The fact is, in such comparisons the digital camera is at a disadvantage because you can't change the sensor. You're stuck with it. But you can change the film in the old fashioned camera.

Unless there's an extremely wide gulf in quality between the two cameras being compared, you can get almost any results you want with the film camera by choosing an appropriate film.

If someone would be prepared to ship me their old 1Ds, I can almost guarantee that I could demonstrate that my old Canon 50E 35mm camera can produce greater resolution, using the appropriate film of course.
Logged
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #85 on: July 17, 2005, 05:10:11 PM »

Quote
The differential factors are:

1. Subjects. In the studio digital is fine. With a lot of low
  contrast foliage in landscapes, digital simply breaks
  down because of its low-pass filtering and noise reduction
  (especially with Canon).

2. Printing size. At 13x19 there is little difference to
  95% of viewers (although not me), but at 30x40
  those of digital origin are full of artifacts while those
  from 4x5 film originals can still be examined with a
  magnifying glass for details from the long tail of
  the film's extended MTF curve.
Regarding point 1: It's only true if you don't know how to properly post-process the files. I shoot landscape and other stuff with lots of fine detail on a regular basis, and digital (1Ds and 1D-MkII) captures much more fine detail than I ever got from 35mm film.

Regarding point 2: This is utterly ridiculous; you're comparing 35mm digital and 4x5 film, and concluding film is better. If you're going to compare apples to apples, compare the 1Ds and 35mm film. 4x5 film has nearly 15X the image recording area of the 24x36mm frame size of the 1Ds, and if you're using a reduced-format DSLR like the 20D, D70, etc. the mismatch becomes even more ridiculously lopsided.

As to the rest, what you're saying is that you prefer the look of film's image artifacts to digital's obviously higher image fidelity; and you're also confusing the obvious film grain in the 6x7 scans with true detail. Your statement that "Too little noise in landscape and nature photography creates a sense of steadiness, unnatural and lack of depth or dimensionality" is simply absurd. When I look at a waterfall or tree with my eyes, I don't see noise patterns of film grain or anything else, I see the tree or waterfall. Film grain and sensor noise are aberrations introduced by the camera that veil and obscure the underlying image, and most people agree that the less such obscurations intrude into the final image, the better, unless their introduction is done deliberately for a creative effect. As a professional photographer, I prefer to start with an image that is as faithful to the original scene as possible, and add only whatever creative effects I deem appropriate to the image, rather than have them imposed on me by the idiosyncrasies and limitations of the image capture device.

As to digital lacking "depth" or "dimensionality", again that's a failure to understand the difference between digital capture and film, and how to properly post-process digital images to bring out their best.

As to citing the DPReview forums as a source of reliable and accurate information, I'd recommend not doing that in the future, as the DPReview forums contain more misinformation and outright foolishness than pretty much anywhere else on the web. It's the Weekly World News of photography. The main site is accurate and well-informed, but the forums are beyond help.
Logged

BernardLanguillier
Sr. Member
Posts: 7975
« Reply #86 on: July 18, 2005, 08:16:13 AM »

Quote
Each grain of silver in a fine grain film is 1 to 2 microns in size, while a digital sensor may be 5-8 microns. One would assume from this that film can outresolve digital.

But, any individual grain can either be on or off, black or white. It takes 30-40 grains in a random clump to properly reproduce a normal tonal range. On the other hand each individual pixel can record a full tonal range by itself.
Michael,

Although I agree with your conclusions, isn't it true that, film being multi-layered, each grain is effectively a 3x (or 4x) binary device?

This could probably be more or less directly compared to a Foveon device, but with most sensors being Bayer-based, their actual resolution is 3 times lower than their naming would suggest.

Please correct me if I am wrong.

Regards,
Bernard
Logged

A few images online here!
Bobtrips
Sr. Member
Posts: 679
« Reply #87 on: July 18, 2005, 05:52:32 PM »

Quote
To my eyes the image is FULL of interpolation artifacts and
gross oversharpening.  They tend to hide the low contrast
details in hairs and foliage, and make them jump out
suddenly when the local contrast passes a threshold.
Leping -

How about helping me out here.  

What are you calling "interpolation artifacts and gross oversharpening"?
Logged
Bobtrips
Sr. Member
Posts: 679
« Reply #88 on: July 18, 2005, 08:41:15 PM »

Quote
I see what leping is talking about. The image has a slightly brittle, overly bright and contrasty appearance as opposed to the more mellow and natural, smooth gradations one would expect with MF film. The image 'jumps out' at you in a startling fashion.
So one would need to apply a bit of blurring to the image and decrease the contrast to make it more film like?
Logged
Ray
Sr. Member
Posts: 8883
« Reply #89 on: July 19, 2005, 12:12:43 AM »

Quote
Clark stated clearly that there are huge resolution gaps
between B&W, color negative, and pro slide films, and I
agree both the D2x and 1DsII are beyond the 645 print
film level. However for Velvia it is very different. What
made me speak was nothing but Michael's stretch that
the P45 will beat scanned 8x10 in all cases.
Well, once again I would agree with much of lepingzha's observations, but the counterbalance has to be made.

Velvia should produce very impressive results with 8x10. Within the resolution limits of most 8x10 shots at say f64, Velvia actually boosts the contrast. If the large format lens has an MTF of say 50% at 15 lp/mm, Velvia will make it 55% or more.

Subtle improvements can be made at great cost, whether it's film or digital. Digital is winning because over all, taking all costs into consideration, it's a more economic medium.

As Bill Clinton would have said, "It's the economy, stupid".

I've been through an obsessive stage of Hi Fi fascination. I know how expensive it is to get marginal increases in sound fidelity.

Ultimately there's no point. In the medical arena there's always a point because lives are at stake.
Logged
BernardLanguillier
Sr. Member
Posts: 7975
« Reply #90 on: July 19, 2005, 08:52:59 AM »

Quote
"but the information captured is without any possible doubt 3 times less."

Bernard,

This is absolutely not the case. A Bayer matrix reduces resolution by a maximum of 30%. This is basic digital imaging 101. Please do some reading before making categorical statements which are seen to be untrue by anyone who has a bit of exposure to the available literature.

Michael
Michael,

Thanks for your feedback. I don't think that you read me well enough though.

Pure information theory dictates that a Foveon sensor with 3 million pixels has 9 million photosites, which is 3 times more than a 3 million pixel sensor using Bayer interpolation. It will therefore deliver 3 times more information.

I am fully aware that the resolution will not be 3 times higher, since resolution is very much influenced by the tone captured in each and every one of the 3 million pixels of the Bayer sensor, just as it is by the Foveon sensor. But I didn't write "resolution", I wrote "information", which is the same as "data" in my mind.

Within its range of sensitivity (which will of course be lower), the Foveon sensor has the clear potential to offer much better color purity and far fewer artifacts thanks to this. To what extent this will be visible remains to be proven, but I never wrote or implied anything on this, did I?

Anyway, the underlying point is that resolution by itself is not a good enough metric to define the amount of information captured by a digital imaging sensor. Another 101, I guess...

Regards,
Bernard
Logged

A few images online here!
etmpasadena
Jr. Member
Posts: 86
« Reply #91 on: July 19, 2005, 12:43:38 PM »

Real Quick:

(1) Foveon (or its senior scientists) have never claimed anything like a 3x resolution advantage for the Foveon sensor over a Bayer. Don't confuse what Foveon fanatics say with what the company actually says.
(2) Foveon has only made three main claims with regard to its X3 chip: (a) greater per-pixel sharpness (they don't use the word resolution); (b) almost 100% immunity from moiré; (c) more pure color (whatever that means).

People can argue about color. But comparing an SD9/10 against a D30 will show the sharpness difference. And you can shoot fabrics all day without worrying about moire.

People should keep in mind that Foveon's first single capture chip was produced in 1999 (with intellectual work done much earlier). That's a long time ago. That basic 1999 design was what was shopped around to the camera companies. I happen to have photos from that prototype. I can assure you that in 1999/2000 it was way ahead of what the Bayer camp had. Of course Foveon still needed about 1.5 years to refine and produce their first SD9 chip. Of course now that Bayer has advanced as it has the advantages of X3 technology probably make less sense to most photographers. But back in 1999 it really did make sense.

I should add that in February of 2005 ISO came up with its definition of "pixel". The new definitions don't change things much or clarify anything regarding how CFA/Foveon/Fuji sensors count pixels or photosites. But they do say that while pixels can be counted, resolution can only be measured, and the two are not the same. Of course on this board we all know that. But it's nice for it to be official.
Logged
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #92 on: July 20, 2005, 09:33:18 PM »

Quote
I think Bernard has a good point about image manipulation causing a loss of 'real' information. It's not about rounding errors in floating point arithmetic. Suppose you take a picture of a test chart using a 12mp camera ( say a 1Ds ) and a 3mp camera ( say a D30 ). Now downsample the 1Ds image 2:1 to make a 3mp image. Then subtract this from the D30 image. What we have is then an 'error image' for the D30. You could measure the magnitude of this error as the standard deviation of the difference over all the pixels.
That would be totally useless, because if you didn't do the exact same sharpening, curves, creative tweaks, and other processing to both images, there would be a difference as a result, and you'd have no real way of proving which camera was "right". You can make a difference mask, but that cannot tell you which image is right, or even if either image is right; it can only tell you how much the two images differ.

The entropic data loss Bernard was referencing is all about rounding errors, and functions where multiple input values can result in the same output value. Once data has passed through such a function, it cannot be known with certainty which input value was responsible for the given output value. This does start in the lowest-order bits, and gradually works its way into the higher-order bits as more edits (curves, levels, and suchlike) are performed.

For example, if you're doing a curve adjustment in 8-bit mode where levels 8, 9, 10, and 11 all get mapped to the output value of 5, you have just destroyed the least significant 2 bits of deep shadow values because you are mapping 4 input values to 1 output value. As you continue to perform more edits, you're gradually corrupting and destroying the information in progressively higher-order bits, but it's not a completely linear progression. IIRC, if you have a sequence of edits that each cause 2 bits worth of entropic loss, you need to do two edits to lose 3 bits worth of information, four edits to lose four bits, sixteen edits to lose five bits, and the progression continues with each additional bit requiring the square of the previous bit's number of edits to be destroyed.

Most Photoshop edits cause 2 bits of entropic loss or less, so when editing in 8-bit mode this loss can become significant enough to manifest as visible artifacts, usually banding or posterization. But when editing in 16-bit mode, you can do many more edits with a higher entropic loss per edit before visible degradation occurs, because the entropic degradation always affects the lowest-order bits first, and the image information always lives in the highest-order bits. So if you can add extra bits to the data, even if it's simple zero padding (which is what happens when you convert an 8-bit file to 16 bits), you can edit in 16-bit mode and destroy less real image information while doing so.
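The many-to-one mapping is easy to see in a few lines. This sketch applies a hypothetical gamma-2 darkening curve (not any particular Photoshop curve) at 8-bit precision and counts how many distinct levels survive:

```python
# Hypothetical 8-bit "curve": gamma-2 darkening, out = (x/255)^2 * 255,
# quantized back to an integer. Many shadow inputs collapse to one output.
def curve_8bit(x, gamma=2.0):
    return int((x / 255.0) ** gamma * 255.0 + 0.5)

out = [curve_8bit(x) for x in range(256)]

print(out[8], out[9], out[10], out[11])  # all four inputs collapse to one level
print(len(set(out)))                     # far fewer than 256 distinct output levels
```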

Here's a practical application of this information theory crap, an experiment you can perform for yourself to see all this in action. Open a JPEG image that is already reasonably well-processed (my little girl pic would be a fine candidate), convert it to 16-bit RGB mode, and do a series of random curve and/or level adjustments to screw it up and then put it right again. An easy one would be a level adjustment where you don't change anything but the gamma control (the third slider, in the middle). Do 10 random gamma tweaks between 0.5 and 2.0, with the last one or two designed to return the image as closely as possible to its original state. Then convert back to 8-bit mode, while recording these tweaks as an action. Save the tweaked image in a new file.

Now reopen the original image, leave it in 8-bit mode, and run the action you just recorded. Save as a third copy. Now open the copy that was tweaked in 16-bit mode and compare its appearance to the one that was tweaked in 8-bit mode. Both files had the exact same number of bits destroyed by the level tweaks, but the bits destroyed in the file tweaked in 16-bit mode were the zero bits padded onto the real image information when converting from 8-bit to 16-bit (and thus were no real loss), while the bits destroyed in the file tweaked in 8-bit mode were actual image information, resulting in a visible degradation of image quality. While you're at it, compare histograms.
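For anyone without Photoshop handy, the same experiment can be simulated in numpy: a chain of gamma tweaks whose exponents multiply back to 1.0 (so the edits undo each other exactly in continuous math), re-quantized after every step at 8-bit and then at 16-bit precision. The gamma values are arbitrary; any residual error is pure quantization loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gamma tweaks whose exponents multiply back to 1.0.
gammas = [0.6, 1.8, 0.9, 1.2]
gammas.append(1.0 / np.prod(gammas))

img = rng.integers(0, 256, 100_000) / 255.0  # a random 8-bit "image", normalized

def edit_chain(x, bits):
    levels = 2 ** bits - 1
    y = x.copy()
    for g in gammas:
        y = np.round(y ** g * levels) / levels  # re-quantize after every edit
    return y

err8 = np.abs(edit_chain(img, 8) - img).max() * 255.0   # worst error, in 8-bit levels
err16 = np.abs(edit_chain(img, 16) - img).max() * 255.0

print(err8, err16)  # the 8-bit chain loses whole levels; 16-bit stays near-lossless
```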
Logged

Mark D Segal
Contributor
Sr. Member
Posts: 6931
« Reply #93 on: July 21, 2005, 07:43:47 AM »

Jonathan, John Carlin, perhaps you can help me with this conundrum (at least in my mind). Notwithstanding all the erudite technical background in this thread, I am still having trouble seeing how demosaicking algorithms impact image resolution. I think this thread started on the theme of where sensor and lens technology will end up in respect of maximizing attainable resolution. I sense that the concepts of bit depth, image compression, and demosaicking algorithms are being commingled. I can see the relevance of bit depth and data compression in a discussion about apparent resolution (i.e. printed image detail), but with today's advanced algorithms for demosaicking color data it is less clear to me how this impacts apparent resolution.
Logged

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
lester_wareham
Full Member
Posts: 116
« Reply #94 on: July 22, 2005, 04:36:28 AM »

Quote
News of a 39MP medium format back makes me wonder if the digital growth curve, at least in terms of pixel count, is starting to flatten out. I'm finding that with a Canon 1Ds Mk II it's the available wide-angle lenses and my ability to hand hold that's the limiting factor, not the sensor.

Anyone any thoughts? Are we approaching the pixel count end game, and where's the practical limits for 35mm and MF?
I guess for 35mm, 25MP is probably the maximum useful image file size in terms of extracting detail. 16MP must already be in the diminishing-returns region.

However, higher resolution sensors would permit less harsh anti-alias filters, or even none. This would then be followed by good-quality digital low-pass filtering and downsampling to the required image size.

The advantage would be no need to sharpen to compensate for the anti-alias filter.

This approach has been used for years in digital signal conversion (CD players, satellite receiver front ends); it's called oversampling. You still retain all the advantages of low noise in the downsampled image.

This could be how things develop, marketing-wise.
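The audio analogy can be demonstrated in a few lines: sample a smooth signal 4x denser than needed, add noise, then digitally low-pass filter and decimate. A simple moving average stands in for a proper reconstruction filter, and the signal and noise figures are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# A smooth "scene" sampled 4x denser than needed, plus sensor noise.
n, factor, sigma = 4096, 4, 0.2
t = np.linspace(0.0, 1.0, n)
signal = np.sin(2 * np.pi * 3 * t)
noisy = signal + rng.normal(0.0, sigma, n)

# Digital low-pass (a simple moving average) followed by 4:1 decimation.
kernel = np.ones(factor) / factor
filtered = np.convolve(noisy, kernel, mode="same")
decimated = filtered[::factor]

noise_before = (noisy - signal).std()
noise_after = (decimated - signal[::factor]).std()
print(noise_before, noise_after)  # noise roughly halves; the slow signal survives intact
```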
Logged
RichardChang
Newbie
Posts: 28
« Reply #95 on: August 09, 2005, 01:43:54 AM »

Where will the game end?  The game won't end; technology will provide more and more, because that's what technology does.  Canon can't sell the same old stuff, but they can always sell new stuff.

Where will sensors top out?  They may not, at least anytime near term.  Our society's consumption of technology would have to say "Enough!" for it to top out.  I don't think that will happen anytime soon.

As to the comparison between MF backs and 35mm sensors, it's likely that the sensor technology will evolve to be roughly equivalent, even though the camera-back sensors currently have a signal-to-noise advantage.  At the end of the day, the MF backs will have a minimum of double the image area of 35mm, so a digital back will deliver twice the file size and roughly 1.4x the linear resolution, with everything else being equal.  The difference should be remarkably like the difference between 35mm and 120 film.

Richard Chang
Logged
Ray
Sr. Member
Posts: 8883
« Reply #96 on: August 18, 2005, 11:57:55 PM »

Quote
BJL, Ph. D. in and professor of Applied Mathematics, author of various publications in physics/optics journals, and cynic about people who try to bolster their arguments by flaunting academic credentials
Good point! I always like to judge the merits of an argument on the validity of the points made, rather than the academic qualifications of the person making the points.

When I'm incapable of understanding the point, then I'm in a quandary.
Logged
Ray
Sr. Member
Posts: 8883
« Reply #97 on: September 09, 2005, 08:38:43 PM »

Quote
Quote
Wrong or at least irrelevant, because you ignore that noise is a mixture of positive and negative variations around the "true" value, so that when signals are merged, there is some cancellation of positive and negative noise values, and total noise increases less than in proportion to the number of signals combined.

BJL,
Okay, so you have just articulated the other probabilistic theory I was referring to. I understand the logic of this, but I still have a problem with your distinction between 'wrong' and 'irrelevant'.

Binning serves a very useful purpose. You can reduce both 'read-out' noise and photonic noise, but at the expense of resolution. This is basically the same advantage that any sensor has with larger pixels, except binning gives you the choice on the same sensor.

If we are talking about photodetectors no smaller than the lens in use can resolve, then my example is not only relevant but 'not wrong'.

However, there is an important concept here, in your argument, that's relevant to oversampling. Lots of little pixels, smaller than the lens can resolve, will not produce more overall photonic noise, for the reasons you've just explained. But there could be an increased read-out noise problem, unless of course you start binning the pixels.

Is there any advantage to binning, over a single pixel the same size as the cluster, with regard to chromatic aberration and birefringence?
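BJL's cancellation point in the quote is just independent noise adding in quadrature, which a quick simulation shows (the signal and noise figures are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sum N photosite signals: the mean signal adds linearly, but independent
# noise partially cancels and adds in quadrature, growing only as sqrt(N).
n_trials, N, signal, sigma = 200_000, 4, 100.0, 10.0
samples = signal + rng.normal(0.0, sigma, (n_trials, N))
binned = samples.sum(axis=1)

print(binned.mean())  # ~400: signal scales as N
print(binned.std())   # ~20 (= 10 * sqrt(4)), not 40: partial cancellation
```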
Logged
samirkharusi
Full Member
Posts: 196
« Reply #98 on: July 16, 2005, 11:09:18 PM »

Physics says that it'll always be a trade-off between effective ISO and resolution. The technology at any point in time just shifts the limiter from the sensor to the lens and vice versa. I am very happy with my Canon primes with focal lengths from 100mm upwards; they can be used wide open even in astrophotography. On a 1Ds (never mind a 1DsII) I have to stop down the shorter primes quite substantially to get satisfactory astrophotography results, e.g. the 50/1.4 has to be used at f5.6, the 28/1.8 at f8... I think Canon really has to redesign their shorter primes if they wish to jack up to 40+ megapixels on a 35mm format sensor. I do not see that their current short lenses can justify 40 megapixels.

I believe that a diffraction-limited f8 lens (of any focal length) may justify pixels as small as 2 microns square (for Nyquist critical sampling). Smaller than that and there is not much to be gained. I.e., if the pixels had sufficient sensitivity, then a perfect f8 lens would be served very amply by a maximum of 200 megapixels on a 35mm format sensor. There's still lots of playroom left, but I think currently the limitation is in the shorter lenses. The pixels also need enhancement in sensitivity, so it remains a race. I once did an experiment on using Nyquist (that 200 megapixel equivalent) and much greater over-sampling to see if one gains anything by going in excess of Nyquist on planetary imaging:
http://www.geocities.com/ultimaoptix/sampling_saturn.html
Personally, for landscapes I'd settle for a small set of f8 diffraction-limited primes and a 100 to 200 megapixel 35mm format sensor.
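The arithmetic behind the "2 micron / 200 megapixel" figure can be checked directly. This sketch assumes green light at 550 nm and the usual incoherent diffraction cutoff of 1/(lambda*N); it lands in the same ballpark as the post's round number.

```python
# Back-of-envelope diffraction math for an ideal f/8 lens on a 24x36mm sensor.
wavelength_mm = 550e-6   # assumed: green light, 550 nm
f_number = 8.0

# Incoherent diffraction cutoff frequency: 1 / (lambda * N), in cycles per mm.
cutoff_lp_mm = 1.0 / (wavelength_mm * f_number)   # ~227 lp/mm

# Nyquist: two samples per cycle, so pixel pitch in microns is:
pitch_um = 1000.0 / (2.0 * cutoff_lp_mm)          # ~2.2 um

megapixels = (36e3 / pitch_um) * (24e3 / pitch_um) / 1e6
print(round(cutoff_lp_mm), round(pitch_um, 1), round(megapixels))  # ~227, ~2.2, ~179
```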
Logged

Bored? Peruse my website: Samir's Home
samirkharusi
Full Member
Posts: 196
« Reply #99 on: July 18, 2005, 09:28:03 AM »

Quote
This could probably be more or less directly compared to a Foveon device, but with most sensors being Bayer-based, their actual resolution is 3 times lower than their naming would suggest.

Please correct me if I am wrong.

Regards,
Bernard
This is a myth perpetuated by the Foveon crowd. A Bayer array does NOT lower resolution by a factor of 3x, EXCEPT if you are imaging in primary blue or primary red. With any normal mixed-color subject or lighting there is remarkably little loss of resolution. It's actually very easy to verify by just checking out the resolution charts that DPReview publishes for all the DSLRs. I have found that typically, in white light, the combination of the Bayer array and the anti-aliasing filter in a Canon DSLR lowers resolution to roughly 85% of what the pixel pitch should be capable of. I have obtained that percentage from my own chart testing, and it seems to agree with DPReview's measurements. Whether that 85% is due to the Bayer array or the anti-aliasing filter I cannot say categorically. Nevertheless, when you remove the anti-aliasing filter from a Canon DSLR (I have a 20D with the filter removed) the image sharpness at 1:1 display "looks" enhanced. I have not bothered to verify whether it just "looks" sharper or actually resolves more; I suspect a bit of both. I have not chart-tested the 20D without the anti-aliasing filter; perhaps one day when I am really, really bored...
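As a back-of-envelope check of that 85% rule, take the 20D's approximate geometry; the 22.5 mm sensor width and 3504-pixel count are nominal figures, not measurements from this thread.

```python
# Rough resolution estimate for a Canon 20D from pixel pitch, applying the
# ~85% Bayer + anti-aliasing filter factor described above (all figures nominal).
sensor_width_mm, pixels_across = 22.5, 3504

pitch_um = sensor_width_mm * 1000.0 / pixels_across   # ~6.4 um
nyquist_lp_mm = 1000.0 / (2.0 * pitch_um)             # ~78 lp/mm at the pitch limit
measured_lp_mm = 0.85 * nyquist_lp_mm                 # ~66 lp/mm after Bayer + AA losses

print(round(pitch_um, 2), round(nyquist_lp_mm), round(measured_lp_mm))
```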
Logged

Bored? Peruse my website: Samir's Home
