Pages: « 1 2 [3] 4 »
Author Topic: Moore's law for cameras  (Read 17623 times)
bjanes
« Reply #40 on: July 16, 2009, 09:25:57 PM »

Quote from: Ray
Yes. It is interesting, and I tend to agree with Ctein that one can't apply simple mathematical formulas to describe reality. Calculations of Airy disc size equated to pixel size don't tell the whole story.

With regard to measurement and numbers, Lord Kelvin summed up the situation over 100 years ago:

"In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be." [PLA, vol. 1, "Electrical Units of Measurement", 1883-05-03]

Quote from: Ray
One concern I have, is that increasing pixel count on the same size sensor tends to increase total read noise, because more pixels have to be read.

Well stated, Ray. One way to reduce read noise is pixel binning, where 4 pixels are combined into one super pixel and read out with only one read noise. This can only be done in hardware and, until recently, has been limited to monochrome sensors. The newest Phase One cameras have Sensor+ technology, which extends the process to color. With the 60 MP sensor, one can read out the pixels individually or use binning at the press of a button; with 4:1 binning, one still gets a very usable 15 MP.

Like all MFDBs, the Phase One uses a CCD, and I do not know whether this binning is possible with CMOS. One can downsample in Photoshop, but that is averaging after the fact: one still has 4 read noises rather than one when obtaining the data.
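Bill's distinction between hardware binning and averaging in software can be sketched numerically. A minimal Python sketch, with assumed (illustrative) signal and read-noise values rather than real Phase One figures:

```python
import math

def snr(signal_e, read_noise_e):
    """Shot-noise-limited SNR with read noise, all in electrons."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# Illustrative numbers (assumptions, not Phase One specs):
# 100 e- of signal per pixel in a shadow area, 15 e- read noise per readout.
per_pixel, read = 100, 15

# Hardware 4:1 binning: charge from 4 pixels is summed on-chip,
# then digitized with a SINGLE readout -> one read noise.
hw_binned = snr(4 * per_pixel, read)

# Software averaging (e.g. downsampling): 4 separate readouts,
# so read noise adds in quadrature -> effectively 2x the read noise.
sw_averaged = snr(4 * per_pixel, 2 * read)

print(f"hardware binning SNR: {hw_binned:.1f}")   # 16.0
print(f"software average SNR: {sw_averaged:.1f}") # 11.1
```

The gap only matters in the shadows; at high signal levels shot noise dominates and the two approaches converge.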

Bill
bradleygibson
« Reply #41 on: July 16, 2009, 10:30:01 PM »

Quote from: bjanes
With regard to measurement and numbers, Lord Kelvin summed up the situation over 100 years ago:

"In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be." [PLA, vol. 1, "Electrical Units of Measurement", 1883-05-03]

That is a great quote, Bill, thanks--I'd not heard it before.

-Brad

Nemo
« Reply #42 on: July 17, 2009, 05:54:31 AM »

Quote from: Ray
I think a step up from the 21mp of the 5D2 to just 30mp would be too little. 40mp would be better. If such a sensor were back-illuminated to enable the use of larger photodiodes, had no AA filter which would also reduce costs as well as improve resolution, had a few panchromatic pixels to further improve low-noise performance, I might not be able to resist buying such a camera, if the price were right.  
 
Whatever happened to that Kodak invention where half the pixels of the Bayer-type array were replaced with panchromatic pixels?

There are problems with color interpolation. That design was aimed at small CCDs for phone cameras; in large, low-resolution sensors it can bring severe problems with color interpolation. With pixel binning you gain light-gathering efficiency but reduce the final image dimensions (the final number of pixels). The "panchromatic" design tries to gather more light while leaving the image dimensions untouched. That may be good for some applications, and not so good for others.
« Last Edit: July 17, 2009, 07:21:14 AM by Nemo »
dalethorn
« Reply #43 on: July 17, 2009, 06:39:47 AM »

Quote from: bradleygibson
That is a great quote, Bill, thanks--I'd not heard it before.
-Brad

"The shortest distance between two points is a straight line - or the line that's straightest under the circumstances." - Henry Kloss
Ray
« Reply #44 on: July 17, 2009, 10:31:40 PM »

Quote from: Nemo
There are problems with color interpolation. That design was aimed at small CCDs for phone cameras; in large, low-resolution sensors it can bring severe problems with color interpolation. With pixel binning you gain light-gathering efficiency but reduce the final image dimensions (the final number of pixels). The "panchromatic" design tries to gather more light while leaving the image dimensions untouched. That may be good for some applications, and not so good for others.

When the design was first announced, the problems of color interpolation were of course raised by many. Kodak countered with the argument that more sophisticated algorithms would largely take care of this. I believe the arrangement of pixels was such that every panchromatic pixel adjoined, either by an edge or a corner (a square pixel has 8 edges and corners), at least one of each of the 3 color-filtered pixels.

If half the total number of pixels on, say, a 5D3 sensor are panchromatic, then a 5D3 image, consisting of double the pixel count of a 5D2 image, would still retain the same amount of color information as a 5D2 image, regardless of any improvement in interpolation algorithms. Is this not the case?
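Ray's adjacency claim can be checked mechanically for any candidate layout. The tile below is purely hypothetical (it is not Kodak's published pattern); the sketch simply verifies that, in this layout, every panchromatic pixel touches at least one R, G and B pixel by an edge or corner:

```python
# Hypothetical 4x4 tile, repeated periodically: 'P' = panchromatic,
# 'R'/'G'/'B' = color-filtered. (Illustrative only -- not Kodak's
# actual published arrangement.)
TILE = ["PGPR",
        "BPGP",
        "PRPB",
        "GPBP"]
N = 4

def neighbors(i, j):
    """Colors of the 8 edge- and corner-adjacent cells (wrapping the tile)."""
    return {TILE[(i + di) % N][(j + dj) % N]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)}

pan = [(i, j) for i in range(N) for j in range(N) if TILE[i][j] == "P"]
ok = all({"R", "G", "B"} <= neighbors(i, j) for i, j in pan)
print(len(pan), "panchromatic of", N * N, "- all see R, G and B:", ok)
```

For this tile the check passes: half the pixels are panchromatic and each one adjoins all three filter colors.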
Rob C
« Reply #45 on: July 18, 2009, 03:54:44 AM »

Quote from: Ray
When the design was first announced, the problems of color interpolation were of course raised by many. Kodak countered with the argument that more sophisticated algorithms would largely take care of this. I believe the arrangement of pixels was such that every panchromatic pixel adjoined, either by an edge or a corner (a square pixel has 8 edges and corners), at least one of each of the 3 color-filtered pixels.

If half the total number of pixels on, say, a 5D3 sensor are panchromatic, then a 5D3 image, consisting of double the pixel count of a 5D2 image, would still retain the same amount of color information as a 5D2 image, regardless of any improvement in interpolation algorithms. Is this not the case?




I'm trying hard, Ray, but what the hell are you guys talking about?

Rob C

Eric Myrvaagnes
« Reply #46 on: July 18, 2009, 09:12:38 AM »

Quote from: Rob C
I'm trying hard, Ray, but what the hell are you guys talking about?

Rob C

In plain English, Rob, I think they're trying to say, "It's crackers to slip a rozzer the dropsy in snide." I hope that clarifies it.

-Eric Myrvaagnes

http://myrvaagnes.com  Visit my website. New images each season.
Ray
« Reply #47 on: July 18, 2009, 10:47:43 AM »

Quote from: Rob C
I'm trying hard, Ray, but what the hell are you guys talking about?

Rob C

Why, Rob, we're merely trying to predict the possible benefits of increased pixel count. Some think we're close to the end of the road, and others think there's a way to go.
Nemo
« Reply #48 on: July 18, 2009, 01:19:36 PM »

Quote from: Ray
When the design was first announced, the problems of color interpolation were of course raised by many. Kodak countered with the argument that more sophisticated algorithms would largely take care of this. I believe the arrangement of pixels was such that every panchromatic pixel adjoined, either by an edge or a corner (a square pixel has 8 edges and corners), at least one of each of the 3 color-filtered pixels.

If half the total number of pixels on, say, a 5D3 sensor are panchromatic, then a 5D3 image, consisting of double the pixel count of a 5D2 image, would still retain the same amount of color information as a 5D2 image, regardless of any improvement in interpolation algorithms. Is this not the case?

Right. That is a different possibility: by increasing the total number of pixels in a "panchromatic" sensor, you keep the color information. There are different combinations at hand: Fuji is experimenting with pixel binning, Ricoh with multiple exposures, Sigma with the Foveon... Fuji also has interesting patents on multilayer sensors. Let's see.

Nemo
« Reply #49 on: July 19, 2009, 02:35:54 PM »

Quote from: Ray
I think a step up from the 21mp of the 5D2 to just 30mp would be too little. 40mp would be better.

I would bet on 30-something MP from Canon, with an improved CMOS architecture. Who knows, but they will jump well past the 20MP mark...
samirkharusi
« Reply #50 on: July 20, 2009, 07:10:54 AM »

Quote from: Nemo
I would bet on 30-something MP from Canon, with an improved CMOS architecture. Who knows, but they will jump well past the 20MP mark...
In high resolution planetary imaging there is a simple rule of thumb for capturing all the detail that your optical system is capable of delivering: use Nyquist critical sampling, 2 pixels across the Full Width at Half Maximum (FWHM). For small telescopes the FWHM is determined by the diffraction limit (the FWHM of the Airy disc); for larger telescopes it is determined by atmospheric seeing. For diffraction, Nyquist sampling is achieved when the focal ratio (f-number) is 4 times the pixel width in microns (roughly). Lens is diffraction-limited at f8? Use pixels that are 2 microns wide. In practical planetary imaging one changes the focal ratio (using tele-extenders) to match one's pixels, rather than the other way around. E.g., the Canon 600mm/4.0L IS lens is almost good enough to be called diffraction-limited on-axis. So, to Nyquist-sample it, one needs to tele-extend it so that it operates at f24 on 6-micron pixels (I achieved f28 by using a 5x tele-extender + a 1.4x). It compares quite well with an astronomical telescope when shooting Saturn; a comparison is given here:
http://www.samirkharusi.net/televue_canon.html
Discussion on Nyquist sampling in planetary imaging given here, with examples:
http://samirkharusi.net/sampling_saturn.html
These principles have long been well established. So the "ultimate" smallest useful pixel size, based purely on diffraction, will be roughly 2 microns for lenses diffraction-limited at f8. That's around 200 megapixels on a 35mm-format chip. We have a very, very long way to go, and that's for f8... When pixels get cheap, the rules change to overkill, and overkill for f8 diffraction-limited optics begins at about 200 megapixels on 35mm format.

Will we ever get there? Do people actually "need" 200 megapixels to achieve their desired print sizes? A very few, yes. For most, one would expect something under 50 megapixels to be adequate for A4 or A3 prints. Obviously for them, the vast majority, chips smaller than 35mm format, combined with superb, smaller lenses, will make more sense. Will prints continue to be the end-game for consumers? Dunno. Perhaps an HD TV display (2 megapixels, achievable by a camera phone) will be good enough for Joe Public.
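As a quick sanity check of the arithmetic above (the rule of thumb and the f8 figure are from the post; the script just turns the crank):

```python
def nyquist_pixel_um(f_number):
    # Rule of thumb from the post: f-number ~ 4 x pixel width in microns,
    # so the Nyquist pixel pitch is roughly N / 4 microns.
    return f_number / 4.0

def megapixels_35mm(pixel_um):
    # A 36 x 24 mm frame expressed in pixels of the given pitch.
    return (36_000 / pixel_um) * (24_000 / pixel_um) / 1e6

pitch = nyquist_pixel_um(8)  # lens diffraction-limited at f8
print(f"{pitch} um pixels -> {megapixels_35mm(pitch):.0f} MP on 35mm")
# -> 2.0 um pixels -> 216 MP on 35mm (the "roughly 200 MP" in the post)
```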

Bored? Peruse my website: Samir's Home
bjanes
« Reply #51 on: July 20, 2009, 07:45:41 AM »

Quote from: samirkharusi
In high resolution planetary imaging there is a simple rule of thumb for capturing all the detail that your optical system is capable of delivering: use Nyquist critical sampling, 2 pixels across the Full Width at Half Maximum (FWHM). For small telescopes the FWHM is determined by the diffraction limit (the FWHM of the Airy disc); for larger telescopes it is determined by atmospheric seeing. For diffraction, Nyquist sampling is achieved when the focal ratio (f-number) is 4 times the pixel width in microns (roughly). Lens is diffraction-limited at f8? Use pixels that are 2 microns wide. In practical planetary imaging one changes the focal ratio (using tele-extenders) to match one's pixels, rather than the other way around. E.g., the Canon 600mm/4.0L IS

I enjoyed reading this excellent analysis. However, for terrestrial photography under field conditions one is often limited by camera shake and focusing error. If you are hand holding and shooting in less than ideal conditions, how much resolution can be achieved? Noise (photon and read) is also a consideration. 2 micron pixels are used in camera phones and P&S cameras, but not in dSLRs or MFDBs, where larger pixels yield better compromises among the factors discussed.
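Bill's point can be made concrete with the common approximation that independent blur sources add roughly in quadrature. The numbers below are illustrative assumptions, not measurements:

```python
import math

def total_blur_um(*components_um):
    # Rough approximation: independent blur sources add in quadrature.
    return math.sqrt(sum(c * c for c in components_um))

diffraction = 5.0   # ~Airy-disc FWHM at f8, in microns (rough)
pixel       = 2.0   # blur contribution of a 2-micron pixel
tripod   = total_blur_um(diffraction, pixel, 0.5)    # negligible shake
handheld = total_blur_um(diffraction, pixel, 15.0)   # assumed 15 um of shake

print(f"tripod:   {tripod:.1f} um")    # ~5.4 um: pixel pitch barely matters
print(f"handheld: {handheld:.1f} um")  # ~15.9 um: shake dominates everything
```

Under these assumptions, once shake is several times the pixel pitch, shrinking the pixels further buys almost nothing.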

Bill
feppe
« Reply #52 on: July 20, 2009, 07:58:20 AM »

Quote from: bjanes
I enjoyed reading this excellent analysis. However, for terrestrial photography under field conditions one is often limited by camera shake and focusing error. If you are hand holding and shooting in less than ideal conditions, how much resolution can be achieved? Noise (photon and read) is also a consideration. 2 micron pixels are used in camera phones and P&S cameras, but not in dSLRs or MFDBs, where larger pixels yield better compromises among the factors discussed.

Exactly. And as almost every review of a 20+ megapixel camera claims, we are already near or at the resolving capacity of the current lens generation.

So while a 35mm-size sensor might theoretically support 200 megapixels, the real-world maximum resolution seems to be somewhere between 20 and 40 megapixels with today's tech and real-world limitations. Lens technology in particular has not kept up with Moore's Law.

Nemo
« Reply #53 on: July 20, 2009, 09:58:40 AM »

Quote from: samirkharusi
In high resolution planetary imaging there is a simple rule of thumb for capturing all the detail that your optical system is capable of delivering: use Nyquist critical sampling, 2 pixels across the Full Width at Half Maximum (FWHM). For small telescopes the FWHM is determined by the diffraction limit (the FWHM of the Airy disc); for larger telescopes it is determined by atmospheric seeing. For diffraction, Nyquist sampling is achieved when the focal ratio (f-number) is 4 times the pixel width in microns (roughly). Lens is diffraction-limited at f8? Use pixels that are 2 microns wide. In practical planetary imaging one changes the focal ratio (using tele-extenders) to match one's pixels, rather than the other way around. E.g., the Canon 600mm/4.0L IS lens is almost good enough to be called diffraction-limited on-axis. So, to Nyquist-sample it, one needs to tele-extend it so that it operates at f24 on 6-micron pixels (I achieved f28 by using a 5x tele-extender + a 1.4x). It compares quite well with an astronomical telescope when shooting Saturn; a comparison is given here:
http://www.samirkharusi.net/televue_canon.html
Discussion on Nyquist sampling in planetary imaging given here, with examples:
http://samirkharusi.net/sampling_saturn.html
These principles have long been well established. So the "ultimate" smallest useful pixel size, based purely on diffraction, will be roughly 2 microns for lenses diffraction-limited at f8. That's around 200 megapixels on a 35mm-format chip. We have a very, very long way to go, and that's for f8... When pixels get cheap, the rules change to overkill, and overkill for f8 diffraction-limited optics begins at about 200 megapixels on 35mm format. Will we ever get there? Do people actually "need" 200 megapixels to achieve their desired print sizes? A very few, yes. For most, one would expect something under 50 megapixels to be adequate for A4 or A3 prints. Obviously for them, the vast majority, chips smaller than 35mm format, combined with superb, smaller lenses, will make more sense. Will prints continue to be the end-game for consumers? Dunno. Perhaps an HD TV display (2 megapixels, achievable by a camera phone) will be good enough for Joe Public.

Photo lenses aren't telescopes. The example based on the Canon 600mm f/4 is a good one, but most photo lenses aren't telephoto designs either. Wide-angle lenses, zooms, macro lenses, etc. put many problems on the lens designer's table, even more so if those retrofocus or vario designs have to be "fast" within size, cost and operational (AF) constraints. On the other hand, the type of detail and the capture device are very important: a Bayer sensor introduces several constraints, and typical low-contrast detail in photographs isn't like bright spots on a dark background.

A 200MP sensor is a possibility... sometime, in the future. But right now, say in a 2-year timeframe, what can we expect? In the photographic industry, I think the 35mm format will bring more resolution to its sensors, with the MF makes as the point of reference. Some time ago 22MP was the exclusive territory of MF cameras, and then Canon jumped into the battle. So I expect competition from the 35mm format (from Canon at least) in the 33-39MP domain, the current exclusive territory of MF cameras. Do 50MP or 60MP cameras make any sense? Considering prints, yes, but only for a few professionals. It is a very, very small market. Alternatives to prints? Web? TV? Cinema? Even lower resolution is needed there!

So there are cost (supply) variables at play and technical considerations (like diffraction), but also demand considerations. For large parts of the professional photographic market (reportage, fashion, advertising), how much is needed, even allowing a wide margin? A professional buys a 50MP Hasselblad only if it makes some difference. So I think there is a near limit for practical reasons based on demand, not technical reasons. The old argument was: we can increase pixels at no cost, free, so why not do it? We all had this discussion years ago, but now the situation is different. Currently we have 50-60MP cameras, and 20MP cameras are normal in the prosumer segment (Canon 5D Mark II, Sony A900). So we are talking about further increases... The point is that they make only a marginal difference to the "product" professionals sell to their clients (photos), so the industry will look for alternative ways of providing better tools, for a price. Maybe not more pixels, but the same number of pixels with more quality, more detail per pixel (the Bayer mosaic!), etc.
« Last Edit: July 20, 2009, 12:09:34 PM by Nemo »
BJL
« Reply #54 on: July 20, 2009, 10:25:12 AM »

I agree with much of what Nathan Myhrvold says, in terms of reasons for having sensors that go significantly beyond the resolution limits of most or all lenses.

But I see no basis for this claim:
"Over time those sensors will get much cheaper and that will drop camera prices. ... A second effect is that Moore’s law also makes physically larger sensors cheaper."
I can find no evidence for this persistent claim that technological progress is driving a substantial downward trend in the price of making a sensor of a given size, like 24x36mm or 42x56mm. This is especially so for devices larger than what all recent fabrication equipment (steppers) is optimized for. All the many stepper models introduced in the last five years or more have a maximum field size of at most 26x33mm (most have exactly that field size), and this is too small for making sensors in the traditional "film formats" except with stitching. Of the two steppers with a field size larger than 26x33mm ever offered, one is discontinued (Nikon made it) and the other is an old Canon model with a minimum feature size of 500nm (cf. the new 34nm process!). That 500nm is too large for the pixel sizes of modern SLR sensors, since for CMOS sensors the minimum feature size needs to be about 1/20th or less of the cell width. Kodak might use that stepper to make its 50x50mm KAF-4301 and KAF-4320 sensors, but those are 4MP sensors with huge 24 micron pixels, for medical and scientific imaging.

Increases in sales volume and improved economies of scale have probably helped bring large sensor prices down compared to five or ten years ago, but even that trend seems to have slowed or bottomed out three years or more ago. The Canon 5D was $2700 in the US by early 2006, the Canon 5DMkII is no cheaper now, and that despite the added competition from Nikon and Sony.


P.S. I also doubt that there is much interest in images with the combination of 100MP+ imagery and the very low DOF coming from the large apertures needed to control diffraction. F/4 in 35mm format enlarged enough and viewed closely enough to see the details of a 100MP image will have far stronger visible OOF effects and far less DOF than F/4 viewed "normally".
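The stitching point above is simple geometry; a sketch using the 26 x 33 mm field size cited in the post (the sensor dimensions are nominal):

```python
# Maximum stepper field size cited in the post: 26 x 33 mm.
FIELD = (26, 33)

def fits(sensor_mm, field_mm=FIELD):
    """A die fits if its sorted dimensions fit the sorted field dimensions."""
    (sw, sh), (fw, fh) = sorted(sensor_mm), sorted(field_mm)
    return sw <= fw and sh <= fh

sensors = {"APS-C (15.8 x 23.6)": (15.8, 23.6),
           "35mm full frame (24 x 36)": (24, 36),
           "MF back (42 x 56)": (42, 56)}
for name, dims in sensors.items():
    print(name, "-> single exposure" if fits(dims) else "-> needs stitching")
```

Only APS-C fits in one exposure; 24x36 misses by 3 mm on the long side, which is why the "film formats" require stitched exposures on current steppers.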
« Last Edit: July 20, 2009, 10:26:32 AM by BJL »
ErikKaffehr
« Reply #55 on: July 20, 2009, 01:38:22 PM »

Hi,

My take is that price per square inch may go down, but only slowly. We have essentially seen this with prices dropping on both APS-C and full-frame sensor based cameras. I have the impression that an APS-C size sensor cost a fortune six years ago and just a couple of hundred dollars today, but I don't see price/performance changing at the rate we associate with Moore's law, for the very same reasons you mention.

In my humble view we still get gains from shrinking the sensel sizes, but they may be diminishing, in the sense that the rate of improvement is slowing down. We may see a trend toward smaller sensor sizes like APS-C and 4/3. One problem with APS-C is that very few lenses are really optimized for it, that is, optimally corrected at large apertures: many decent lenses, but very few really excellent ones. Olympus actually seems to make excellent designs for its 4/3 cameras, but its low-pass filtering is said to be a bit too aggressive.

Another observation is that we may also need better photographers. Utilizing the performance hiding in all those multi-megapixel SLRs takes some craftsmanship.

Best regards
Erik

Quote from: BJL
I agree with much of what Nathan Myhrvold says, in terms of reasons for having sensors that go significantly beyond the resolution limits of most or all lenses.

But I see no basis for this claim:
"Over time those sensors will get much cheaper and that will drop camera prices. ... A second effect is that Moore's law also makes physically larger sensors cheaper."
I can find no evidence for this persistent claim that technological progress is driving a substantial downward trend in the price of making a sensor of a given size, like 24x36mm or 42x56mm. This is especially so for devices larger than what all recent fabrication equipment (steppers) is optimized for. All the many stepper models introduced in the last five years or more have a maximum field size of at most 26x33mm (most have exactly that field size), and this is too small for making sensors in the traditional "film formats" except with stitching. Of the two steppers with a field size larger than 26x33mm ever offered, one is discontinued (Nikon made it) and the other is an old Canon model with a minimum feature size of 500nm (cf. the new 34nm process!). That 500nm is too large for the pixel sizes of modern SLR sensors, since for CMOS sensors the minimum feature size needs to be about 1/20th or less of the cell width. Kodak might use that stepper to make its 50x50mm KAF-4301 and KAF-4320 sensors, but those are 4MP sensors with huge 24 micron pixels, for medical and scientific imaging.

Increases in sales volume and improved economies of scale have probably helped bring large sensor prices down compared to five or ten years ago, but even that trend seems to have slowed or bottomed out three years or more ago. The Canon 5D was $2700 in the US by early 2006, the Canon 5DMkII is no cheaper now, and that despite the added competition from Nikon and Sony.


P.S. I also doubt that there is much interest in images with the combination of 100MP+ imagery and the very low DOF coming from the large apertures needed to control diffraction. F/4 in 35mm format enlarged enough and viewed closely enough to see the details of a 100MP image will have far stronger visible OOF effects and far less DOF than F/4 viewed "normally".

cmi
« Reply #56 on: July 20, 2009, 02:14:28 PM »

I would like to add that the game could change with the advent of much more powerful processors and better realtime data-processing pipelines. (There is stuff in the works; I just can't find the links or remember the names.) When you can acquire, store, and, most importantly, process a 1000 MP image in the blink of an eye and handle it like today's 200KB JPEG, of course everybody would use it.
Alan Goldhammer
« Reply #57 on: July 20, 2009, 02:21:45 PM »

I don't think we can predict cost with any reliability. When the first PC was introduced over two decades ago, I observed that a good solid desktop unit cost about $2500-3000. For several years after that, new, improved models came out (more memory, 20 MB hard drives, etc.) but the cost was still in that neighborhood. All of a sudden there were tremendous leaps in technology (when was the last time you had a hard drive fail?) and costs shrank dramatically. Now we talk about business desktop models in the $400-600 range with much more power, etc. I suspect the same thing will happen in the sensor arena. A more pertinent question is what it will mean for photographers. My Nikon D300 gives wonderful results for the work I do. I don't print larger than 10.5 x 16, and the clarity of these images is outstanding. If I were into panoramic printing I might want more out of a sensor. Lens design, as everyone has noted, is limiting, and to me the major problem is not being able to preserve this kind of quality when stopping down past f8, which limits control of depth of field when one wants it. That limit aside, we are presented with great hardware and software technologies that allow us to go much further than in the days of wet-chemistry photography.

ErikKaffehr
« Reply #58 on: July 20, 2009, 03:14:57 PM »

Hi,

"when was the last time you had a hard drive fail?"

Three days ago. Actually I have had a lot of disk failures, like two or three a year. The most recent one was a LaCie external disk which has seen very little use. On the other hand I have had very few crashes on OEM disks; I guess computer manufacturers buy disks from series that are "proven" and tested. Keeping temperatures down is probably also very important. I had a RAID server running on six 250 MByte disks without a single failure, but I had three 5" fans in that box, and the temperature inside was always around 35 degrees C.

Best regards
Erik



Quote from: Alan Goldhammer
I don't think we can predict cost with any reliability. When the first PC was introduced over two decades ago, I observed that a good solid desktop unit cost about $2500-3000. For several years after that, new, improved models came out (more memory, 20 MB hard drives, etc.) but the cost was still in that neighborhood. All of a sudden there were tremendous leaps in technology (when was the last time you had a hard drive fail?) and costs shrank dramatically. Now we talk about business desktop models in the $400-600 range with much more power, etc. I suspect the same thing will happen in the sensor arena. A more pertinent question is what it will mean for photographers. My Nikon D300 gives wonderful results for the work I do. I don't print larger than 10.5 x 16, and the clarity of these images is outstanding. If I were into panoramic printing I might want more out of a sensor. Lens design, as everyone has noted, is limiting, and to me the major problem is not being able to preserve this kind of quality when stopping down past f8, which limits control of depth of field when one wants it. That limit aside, we are presented with great hardware and software technologies that allow us to go much further than in the days of wet-chemistry photography.

riverpeak
« Reply #59 on: July 20, 2009, 11:51:19 PM »

I guess I'll add my 2 cents worth.  

I don't think we have yet hit the limit of "Moore's law" for cameras with respect to sensor pixel density. We'll find a use for the extra pixel density, even if it doesn't necessarily increase the effective resolution of the final pictures.

So it's not just about getting higher-resolution pictures. One very interesting and promising technology is the "plenoptic" camera, which I think could become one of the biggest "killer-app" features in future digital cameras if they get it to work. Such cameras will benefit most from high-pixel-density sensors; in fact, a 100-megapixel sensor would probably be considered an "enabling technology". A plenoptic camera, as I understand it, allows the user to set a picture's focus AFTER the picture is taken, using post-processing. Pictures would generally be taken at or near maximum aperture (like f2 to f4), which is less affected by diffraction at the higher pixel density (though diffraction is a problem that still doesn't go away). Once the picture is taken, the user can then adjust the focus and depth of field in post-processing. So here we may have a perfectly good new use for a 100-megapixel sensor on a DSLR.

Here are some links that describe the plenoptic camera. Most of what I have learned about these cameras comes from a paper published by researchers at Stanford University.

http://www.digitalcamerainfo.com/content/S...mercialized.htm
http://www.refocusimaging.com
http://graphics.stanford.edu/papers/lfcamera/

If you want to really get technical, you can read the technical report:  http://graphics.stanford.edu/papers/lfcame...mera-150dpi.pdf

I won't pretend to understand the real details of this new technology, but a point I'd like to make is that with a 100-megapixel camera, the final picture may not necessarily be a 100-megapixel image; it may be something significantly less, like 10 megapixels (or less). The extra sensor pixels would give the user not more usable resolution, but the ability to adjust focus and depth of field after taking the picture. The picture would still have to be focused, when taken, to something close to what the photographer wanted. But this would be pretty awesome for sports photography, where one could set precise focus on the stitching of a baseball, or a player's eyes, or both, at the photographer's discretion, after the fact.
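riverpeak's 100 MP to roughly 10 MP figure is consistent with the basic light-field tradeoff: spatial resolution is divided by the number of directional samples taken under each microlens. A toy sketch (the 3 x 3 sampling grid is an assumption for illustration, not a figure from the Stanford paper):

```python
def plenoptic_output_mp(sensor_mp, angular_samples):
    # Basic light-field tradeoff: each microlens spends `angular_samples`
    # sensor pixels on direction, dividing spatial resolution accordingly.
    # (Simplified: ignores the super-resolution tricks of later designs.)
    return sensor_mp / angular_samples

# Hypothetical 100 MP sensor with a 3x3 block of pixels under each microlens:
out = plenoptic_output_mp(100, 3 * 3)
print(f"{out:.1f} MP spatial output")  # ~11.1 MP from a 100 MP sensor
```

More refocusing range needs more angular samples, which costs more spatial resolution; that is exactly why very high pixel counts are the enabler here.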
« Last Edit: July 21, 2009, 12:02:14 AM by riverpeak »