Pages: « 1 ... 5 6 [7] 8 »
Author Topic: What chance has Sigma's SD1?  (Read 26930 times)
hjulenissen
Sr. Member
Offline

Posts: 1666


« Reply #120 on: June 01, 2011, 03:00:58 AM »

The trouble is, this "comparison" requires making quite the assumption - that is, that the only difference between the two sensors is pixel count. In fact, we're talking about sensors two generations apart. Differences in sensor technology, demosaicing algorithms, and so forth may have more to do with any increase in resolution than pixel count does, for all we know.
I assume that the test was carried out using raw files, ruling out demosaicing as an unknown in your list. Differences in microlenses, AA filter etc. could very well cause some of the differences. It is often claimed that diffraction is a "hard" limit, pointing to some web calculator or simplified formula. Upon new camera releases from Canon/Nikon it is often claimed that one will have to use f/4.0 to get any improvement over the previous generation. This measurement shows that not to be the case with Canon's 8MP vs 15MP sensors, and my bet is that something similar will be seen at the next jump in sensor resolution: it is a soft limit where improvements get smaller and smaller compared to the cost (cost in the widest sense).
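This "soft limit" claim can be sketched numerically: model the system MTF as the product of the ideal circular-aperture diffraction MTF and the square-pixel (sinc) MTF, then find where it drops to 50%. The pixel pitches (roughly 8MP vs 15MP APS-C), the 550 nm wavelength and the f/8 aperture below are illustrative assumptions, not measured values for any real camera.

```python
import numpy as np

WAVELENGTH_MM = 550e-6          # assumed green light, 550 nm

def diffraction_mtf(f, n_stop):
    """MTF of an ideal circular aperture at spatial frequency f (cy/mm)."""
    fc = 1.0 / (WAVELENGTH_MM * n_stop)      # diffraction cutoff frequency
    x = np.clip(f / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

def pixel_mtf(f, pitch_mm):
    """MTF of a 100%-fill square pixel aperture (normalized sinc)."""
    return np.abs(np.sinc(f * pitch_mm))

def system_mtf50(pitch_mm, n_stop):
    """Bisect for the frequency (cy/mm) where the combined MTF falls to 0.5."""
    lo, hi = 0.0, 1.0 / (WAVELENGTH_MM * n_stop)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if diffraction_mtf(mid, n_stop) * pixel_mtf(mid, pitch_mm) > 0.5:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative pitches: ~6.4 um (8 MP APS-C) vs ~4.7 um (15 MP APS-C), at f/8.
coarse = system_mtf50(0.0064, 8)
fine = system_mtf50(0.0047, 8)
```

Under these assumptions the finer pitch still resolves more at f/8, but the gain is smaller than the pitch ratio - diminishing returns, not a hard wall.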
Quote
Unless we can compare two sensors from the same generation in the same format using the same technology from the same manufacturer with different pixel counts, we're comparing apples and oranges.
Would have been nice, but it probably won't happen. So when something is hard to measure directly, what are we to do? Proclaim that it cannot be known and that everyone's guess is equally good, or try to measure and model what we can?
Quote
We also don't have the "lines" related in any way to what the maximum resolution as limited by diffraction might be (both cameras may be below it, for example, which means you could see no limitation in these "tests," even though it would apply if the cameras performed well enough for the limitations to be seen).
It might be that both cameras are limited by something other than diffraction. In that case, claiming that the "diffraction limit" is a real problem for those cameras becomes dubious.
Quote
Quite frankly in the digital age I'm not so confident that "lines" are actually being resolved, as opposed to being interpolated, so I'm not especially convinced by such "data." I'm skeptical of something so easy for square pixels to artificially replicate through algorithm guesswork (i.e., straight lines) as being a meaningful subject of comparison - a more challenging, non-linear subject is more likely to be a realistic test of real-world resolving power.
I don't subscribe to the idea of "interpolating to create real detail". But measuring monochrome targets of regular lines, then using MTF50 to say something about sharpness, is certainly a simplified measure that lends itself well to manufacturer "cheating" (or to the test conductor/interpreter ignoring essential mechanisms).

An alternative would be a 2-D randomized pattern: measuring the 2-D response to a "noise" input, or (even better) comparing the captured image to the known reference.
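A minimal sketch of the random-target idea, with a synthetic Gaussian blur standing in for the camera (the blur model and its width are assumptions, not a claim about any real system): since the reference pattern is known, the 2-D frequency response falls straight out of the ratio of output to input spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.standard_normal((256, 256))       # known random test target

# Simulate the camera as a Gaussian blur (an assumption standing in for
# lens + AA filter + sensor), applied in the frequency domain.
fy = np.fft.fftfreq(256)[:, None]
fx = np.fft.fftfreq(256)[None, :]
sigma = 1.5                                  # blur width in pixels (assumed)
otf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
captured = np.fft.ifft2(np.fft.fft2(ref) * otf).real

# Empirical MTF: ratio of output to input magnitude spectra, radially binned.
num = np.abs(np.fft.fft2(captured)).ravel()
den = np.abs(np.fft.fft2(ref)).ravel()
r = np.hypot(fx, fy).ravel()
bins = np.linspace(0.0, 0.5, 25)             # up to Nyquist (0.5 cy/px)
idx = np.digitize(r, bins)
mtf = np.array([(num[idx == i] / den[idx == i]).mean()
                for i in range(1, len(bins))])
# mtf starts near 1 at low frequency and falls toward Nyquist
```

With a real capture, the same spectral division would also expose demosaicing and sharpening artifacts that a straight-line chart can hide.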
Logged
hjulenissen
Sr. Member
Offline

Posts: 1666


« Reply #121 on: June 01, 2011, 04:01:11 AM »

I think the already existing format sizes shouldn't be decreased simply to deal with a fabrication cost issue which is a temporary aspect of a relatively new technology.
I think that anything that improves the quality-to-cost ratio should and will be considered. Larger sensors have always been, and will probably always be, disproportionately more expensive than small ones. For film cameras the one-time cost of the "sensor" was zero (though it did cost something per shot). For a DSLR, the sensor seems to be a large fraction of the cost.
Quote
If someone were to suggest to you that it were a great idea to carry around medium format equipment to shoot on a 24x36 sensor, would you agree with that notion, or dismiss it? That is the same thing, to me, as the idea of carrying equipment sized for a 24x36 format to shoot in a less-than-half-frame format, i.e., APS-C.
Well, a Canon 1100D with an 18-55mm EF-S IS and a 55-250 EF-S is not "24x36 equipment". The body will accept 24x36-type lenses, but will only use a cropped portion of the image circle. The EF-S lenses cannot be used on 24x36mm bodies, and their cost/size/weight is probably reduced as a result. Someone with more technical insight than me might claim that the EF/EF-S lens mount makes it harder to construct good wide-angle lenses for 1.6x crop than an imaginary mount tailor-made for 1.6x crop would, but I think that's about it. Experience seems to suggest that there are good, sensibly priced wide-angle crop lenses, although we will never know if they could have been better/cheaper with a different mount.

Purchasing a 5D with a 24-105mm f/4.0 L IS would have been out of the question for most beginners 5 years ago. Film may have been out of the question too. A 350D with a normal zoom would have been a reasonably good entry into DSLRs, from which they could carry over flashes, some lenses etc. into FF if and when they felt the urge.
Quote
I do think 24x36 happens to be the "sweet spot" in terms of size/weight/cost of equipment, and that replacing film with digital sensors does nothing to change that fact.
I think that film/sensor is part of the optimization, and that you are wrong. 35mm may have been the sweet spot partially because of the cost/performance of film. With digital, the cost/performance is radically different, and it is reasonable to expect that if one could design from scratch, other trade-offs would make more sense. I don't think there were ever film cameras using the miniature sensor size of today's compact/cell-phone cameras yet with the cost, resolution and speed of today's small cams. Perhaps we will never see 8x10 digital cameras (at least at non-NASA prices) even though the format made sense (for some) on film.
Quote
The array of available optics for the 24x36 format is second to nothing,
Agreed. But I believe that has not always been the case. If e.g. mFT grows really popular, I would expect a bunch of lenses (although perhaps mostly lower-cost ones).

In the end, I think 35mm may be a good trade-off for you (and quite a few others). I think that many of the absolute arguments ("bigger is better") could be used for MF against 35mm equally well. So is 35mm "bad" because MF is larger? Of course not, just like 1.5x/1.6x crop is not "bad" just because 35mm is bigger.
« Last Edit: June 01, 2011, 04:15:59 AM by hjulenissen » Logged
BartvanderWolf
Sr. Member
Offline

Posts: 3462


« Reply #122 on: June 01, 2011, 04:38:20 AM »

The trouble is, this "comparison" requires making quite the assumption - that is, that the only difference between the two sensors is pixel count. In fact, we're talking about sensors two generations apart. Differences in sensor technology, demosaicing algorithms, and so forth may have more to do with any increase in resolution than pixel count does, for all we know. Unless we can compare two sensors from the same generation in the same format using the same technology from the same manufacturer with different pixel counts, we're comparing apples and oranges.

The test result referenced by Emil is normalized as far as possible. It uses a method to determine the resolution based on a robust MTF determination (essentially equivalent to the ISO procedure for testing spatial resolution in digital cameras). The inevitable differences that remain are that we use different camera systems, each with its particular AA-filter configuration.

Quote
We also don't have the "lines" related in any way to what the maximum resolution as limited by diffraction might be (both cameras may be below it, for example, which means you could see no limitation in these "tests," even though it would apply if the cameras performed well enough for the limitations to be seen).

While the presentation used doesn't show the relation of diffraction to limiting resolution, the same test procedure I mentioned above does show that in a different graphical chart that the Imatest program can produce. Unfortunately, it is not easy (if at all possible) to present that in a summary chart for multiple apertures. What the charts do show is the effect of diffraction on the MTF50 metric as an approximation of perceived sharpness.

Quote
Quite frankly in the digital age I'm not so confident that "lines" are actually being resolved, as opposed to being interpolated, so I'm not especially convinced by such "data." I'm skeptical of something so easy for square pixels to artificially replicate through algorithm guesswork (i.e., straight lines) as being a meaningful subject of comparison - a more challenging, non-linear subject is more likely to be a realistic test of real-world resolving power.

The resolving power is not determined with a line pattern; it is merely a (numerically converted) reference to other ISO test charts that do use hyperbolic line patterns for visual interpretation. The underlying spatial resolution metric is based on an MTF determination, and more specifically the spatial resolution at 50% MTF response. That is by no means close to the limiting resolution these systems can resolve, but rather a reasonable indication of perceived resolution. As much as an MTF curve is already a simplified representation, a single point on the MTF curve is even more so.

The important point is that we do not compare based on whether a line pattern is resolved or not, but on the gradual reduction of contrast as we approach the limiting resolution, and we stop almost halfway to pick a reference point that is still very well resolved and indicative of perceived resolution.

The fact that we compare actual cameras and lenses unfortunately makes a purely theoretical isolation of a single factor more difficult, because all other factors are not exactly equal. Nevertheless, there is strong enough evidence for us to draw careful conclusions. One such conclusion is that increased sampling density will improve absolute resolution (although it may also influence other characteristics such as dynamic range). Obviously, output magnification will inversely impact resolution, so larger sensor arrays benefit from that. Another conclusion is that diffraction blur has a negative impact on per-pixel resolution, but there is no hard limit. However, it is possible to indicate, for a given sampling density, from which f-number onward diffraction will visibly impact that per-pixel resolution. Smaller sensels will be hurt at wider apertures already. How that impacts the combined effect of the whole system can only be seen at the final output size.

Cheers,
Bart
Logged
joofa
Sr. Member
Offline

Posts: 486



« Reply #123 on: June 01, 2011, 09:24:05 PM »

a more challenging, non-linear subject is more likely to be a realistic test of real-world resolving power.

Yes. However, resolution measurement metrics would need to be devised for such "real world" imagery, if that is what you mean. The good thing is that one can devise such measures, though. For example, the image below measures the "sharpness difference" between several pairs of images using JISR.



Variations of such approaches can be used if one wants alternate notions of such metrics, which work even on real images, compared to "legacy" stuff such as line charts, MTF50, etc.
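As a rough illustration of a chart-free metric (JISR itself is not spelled out in this thread, so this is a generic stand-in, not Joofa's method): even something as crude as mean gradient energy yields a full-image sharpness score that can compare pairs of real images.

```python
import numpy as np

def sharpness(img):
    # Mean gradient energy: a crude no-chart sharpness score (NOT JISR).
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx * gx + gy * gy))

rng = np.random.default_rng(1)
scene = rng.standard_normal((128, 128))      # stand-in for a "real" image
# A 2x2 box blur as a stand-in for a softer capture of the same scene.
blurred = 0.25 * (scene
                  + np.roll(scene, 1, axis=0)
                  + np.roll(scene, 1, axis=1)
                  + np.roll(np.roll(scene, 1, axis=0), 1, axis=1))
delta = sharpness(scene) - sharpness(blurred)   # positive: scene is sharper
```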

Joofa
« Last Edit: June 01, 2011, 09:32:54 PM by joofa » Logged

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
Plekto
Sr. Member
Offline

Posts: 551


« Reply #124 on: June 01, 2011, 11:54:40 PM »

Also, those resolution tests should clearly be in *color*. The biggest advantage of the Sigma sensor is that all of the colors have identical resolution. If they get the color issues tweaked or worked out (possibly through in-camera processing or similar), it could really be a stunning piece of technology.

But the price...  Nobody's going to buy it.
Logged
Aku Ankka
Newbie
Offline

Posts: 25


« Reply #125 on: June 02, 2011, 01:59:56 AM »

Also, those resolution tests should clearly be in *color*.  The biggest advantage of the Sigma sensor is that all of the colors have identical resolution.  IF they get the color issues tweaked or worked out (possibly through in-camera processing or similar), it could really be a stunning piece of technology.

But the price...  Nobody's going to buy it.


The resolution of the Foveon is actually not quite identical for all the colors, due to differing S/N of the readings of the different layers. Basically, red-heavy data has slightly lower resolution in principle; not necessarily under ideal conditions, but in the shadows and near the edges and corners of the sensor.

The relatively low resolution of Bayer-equipped sensors for red/blue is largely a myth. First, the human eye is far more sensitive to luminance variations than to changes in color; second, the color filters have quite significant overlap (see DxO data for individual sensors); third, in real life almost all object data is formed from rather broad spectral information - for example, the light reflecting from red flowers tends to contain quite a significant amount of green and/or blue as well. All this makes it much easier for a modern demosaicing algorithm to reconstruct the image with significant accuracy regardless of the perceived color of the subject.

If there is a significant difference between a Foveon and a Bayer sensor of similar pixel count, the real culprit is not the Bayer CFA, but the anti-alias filter.

As for the color issues of the Foveon - they cannot be solved with any amount of processing. It would take several more readout points per pixel, making the sensor even more expensive to manufacture and creating even more data to be moved, stored and processed, just to gain parity with the competition.
Logged
jeremyrh
Full Member
Offline

Posts: 243


« Reply #126 on: June 02, 2011, 12:16:16 PM »

I have no experience with foveon sensors but aren't we as photographers trying to capture what the eye sees?

Yeah, but the eye is connected to a brain, which does a lot of processing that a humble camera cannot, so mimicking the physiology of the eye in a camera sensor is not necessarily the best way to go.
Logged
ejmartin
Sr. Member
Offline

Posts: 575


« Reply #127 on: June 02, 2011, 12:49:25 PM »

I have no experience with foveon sensors but aren't we as photographers trying to capture what the eye sees?

Yeah, but the eye is connected to a brain, which does a lot of processing that a humble camera cannot, so mimicking the physiology of the eye in a camera sensor is not necessarily the best way to go.

Actually, the physiology of the eye is much closer to Bayer than to Foveon.
Logged

emil
joofa
Sr. Member
Offline

Posts: 486



« Reply #128 on: June 02, 2011, 01:42:16 PM »

so mimicking the physiology of the eye in a camera sensor is not necessarily the best way to go.

This is true if one knows what the intended applications are. If the criterion is to mimic the eye, then the CFAs used on many Bayer-type sensors have a closer response to the human eye than a "bare" Foveon sensor. However, "mimicking" the eye has not always been the goal of engineering applications based upon color vision, in devices ranging from NTSC color TV to more modern stuff such as certain REC specifications on how to do color space transformations. The underlying idea in such situations has been to forgo mimicking the eye if an otherwise "incorrect" set of parameters resulted in a more pleasing or technically more manageable image.


Joofa
« Last Edit: June 02, 2011, 01:44:23 PM by joofa » Logged

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
jeremyrh
Full Member
Offline

Posts: 243


« Reply #129 on: June 02, 2011, 02:53:39 PM »

so mimicking the physiology of the eye in a camera sensor is not necessarily the best way to go.

This is true if one knows what the intended applications are. If the criterion is to mimic the eye, then the CFAs used on many Bayer-type sensors have a closer response to the human eye than a "bare" Foveon sensor. However, "mimicking" the eye has not always been the goal of engineering applications based upon color vision, in devices ranging from NTSC color TV to more modern stuff such as certain REC specifications on how to do color space transformations. The underlying idea in such situations has been to forgo mimicking the eye if an otherwise "incorrect" set of parameters resulted in a more pleasing or technically more manageable image.


Joofa

Logged
Christoph C. Feldhaim
Sr. Member
Offline

Posts: 2508


There is no rule! No - wait ...


« Reply #130 on: June 02, 2011, 03:59:15 PM »

so mimicking the physiology of the eye in a camera sensor is not necessarily the best way to go.
Actually mimicking the eye would
- cause halos (lateral inhibition)
- require different pixels for color and for luminosity (cones and rods)
- require concepts of shape to be represented within camera intelligence
(primary sensory fields in the brain are strongly influenced by higher centres)
- require a sharp center (fovea) and a rapid falloff toward the corners
- require a blind spot (optic nerve exit)
Logged

joofa
Sr. Member
Offline

Posts: 486



« Reply #131 on: June 02, 2011, 04:19:09 PM »

so mimicking the physiology of the eye in a camera sensor is not necessarily the best way to go.

This is true if one knows what the intended applications are. If the criterion is to mimic the eye, then the CFAs used on many Bayer-type sensors have a closer response to the human eye than a "bare" Foveon sensor. However, "mimicking" the eye has not always been the goal of engineering applications based upon color vision, in devices ranging from NTSC color TV to more modern stuff such as certain REC specifications on how to do color space transformations. The underlying idea in such situations has been to forgo mimicking the eye if an otherwise "incorrect" set of parameters resulted in a more pleasing or technically more manageable image.


Joofa

The last time I got this much of my response struck out was when I was in school. I thought I had passed that terrible phase of my life.  Huh

Anyway, the answer is still what it was before: "This is true if one knows what the intended applications are."

Joofa

Logged

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
joofa
Sr. Member
Offline

Posts: 486



« Reply #132 on: June 02, 2011, 04:20:24 PM »

Actually mimicking the eye would
- cause halos (lateral inhibition)
- require different pixels for color and for luminosity (cones and rods)
- require concepts of shape to be represented within camera intelligence
(primary sensory fields in the brain are strongly influenced by higher centres)
- require a sharp center (fovea) and a rapid falloff toward the corners
- require a blind spot (optic nerve exit)

What is meant by "mimicking" the eye here is not the complete visual processing as it happens in the human visual system, a lot of which is still not fully known. What is meant is the "low-level" or signal-acquisition stage - that is why the argument stays around the color response of the Bayer CFA versus the Foveon. Furthermore, "luminosity", as in daylight, is mostly derived from cones, not rods. Rods do provide an achromatic signal, but it contributes little under daylight conditions.

"Higher cognitive representations" such as shape / object identification, etc., that you mention, are not a concern in the lower-level processing that is being talked about in "mimicking the eye" in this thread.

Joofa
Logged

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
jeremyrh
Full Member
Offline

Posts: 243


« Reply #133 on: June 03, 2011, 12:00:52 AM »

mimicking the physiology of the eye in a camera sensor is not necessarily the best way to go.



The last time I got this much of my response struck out was when I was in school. I thought I had passed that terrible phase of my life.  Huh

Anyway, the answer is still what it was before: "This is true if one knows what the intended applications are."

Joofa


Clearly it is not. Note that I said it "is not necessarily the best way to go", which is (maybe trivially) correct regardless of any application.

Anyway, enough pedantry.

The larger point is that the human visual system is dominated by a super-sophisticated processing system which can overcome severe shortcomings in the acquisition stage that would be disastrous in a camera.
Logged
hjulenissen
Sr. Member
Offline

Posts: 1666


« Reply #134 on: June 03, 2011, 12:22:32 AM »

What is meant by "mimicking" the eye here is not the complete visual processing as it happens in the human visual system, a lot of which is still not fully known. What is meant is the "low-level" or signal-acquisition stage - that is why the argument stays around the color response of the Bayer CFA versus the Foveon. Furthermore, "luminosity", as in daylight, is mostly derived from cones, not rods. Rods do provide an achromatic signal, but it contributes little under daylight conditions.

"Higher cognitive representations" such as shape / object identification, etc., that you mention, are not a concern in the lower-level processing that is being talked about in "mimicking the eye" in this thread.

Joofa
I think that if one gives the concept a little thought, it becomes clear that a camera "mimicking the eye" is nothing to strive for. A much more sensible goal would be a camera that, upon reproduction, allows a human observer an experience that mimics "having been there". My point is that even if my sight has limitations (e.g. limited resolution), I have no desire for my camera to have the same limitation if those two limitations add up.

-h
Logged
joofa
Sr. Member
Offline

Posts: 486



« Reply #135 on: June 03, 2011, 12:34:39 AM »


The larger point is that the human visual system is dominated by a super-sophisticated processing system which can overcome severe shortcomings in the acquisition stage that would be disastrous in a camera.

As I mentioned above, higher-level semantic processing in human vision should not be conflated with the lower-level signal acquisition of a camera system, unless it is clear what the intent is. The usual application in photography, as I understand it, is acquiring pleasing images, aided by some simple tasks such as color corrections, profile assignments, etc. And, judging by people's responses, the visual appeal or difference between the Bayer CFA and the Foveon is important to many. However, processing at this level does not require higher-level human visual processing as in "image/scene understanding", if that type of processing is what you are talking about when you say "super-sophisticated processing system". And where higher-level scene understanding is involved - e.g., hierarchical semantic relationships of primitive image elements to each other, shape recognition, object identification, motion analysis, computer vision, etc. - I believe the difference between Bayer CFA and Foveon color response is perhaps not always important. For example, a challenging task in computer vision is object recognition, say just identifying whether a particular picture has a cat in it. How much does it matter, under ordinary imaging conditions, whether the picture was taken with a Bayer CFA or a Foveon sensor camera?

Joofa  
« Last Edit: June 03, 2011, 12:59:56 AM by joofa » Logged

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
Christoph C. Feldhaim
Sr. Member
Offline

Posts: 2508


There is no rule! No - wait ...


« Reply #136 on: June 03, 2011, 01:12:06 AM »

What is meant by "mimicking" the eye here is not the complete visual processing as it happens in the human visual system, a lot of which is still not fully known. What is meant is the "low-level" or signal-acquisition stage - that is why the argument stays around the color response of the Bayer CFA versus the Foveon. Furthermore, "luminosity", as in daylight, is mostly derived from cones, not rods. Rods do provide an achromatic signal, but it contributes little under daylight conditions.
"Higher cognitive representations" such as shape / object identification, etc., that you mention, are not a concern in the lower-level processing that is being talked about in "mimicking the eye" in this thread.
Joofa
Everything I was mentioning are things happening in the retina and in the first-level representation in the brain.
So this indeed is the very basic level of signal acquisition.
E.g., there are optical illusions like this one:

(image linked from 4.bp.blogspot.com/_t7lQZ9jD6C4/SQuDwiAkcXI/AAAAAAAAAAM/xE9DNM_hWt4/s320/300px-Kanizsa_triangle.jpg)

where the shape of the triangle (which doesn't exist in the image) is fully represented in the first-level brain area, which is directly supplied by the optic nerve.
Even at the very basic level, optical concepts come into play.
The luminosity information from the rods plays an important role when it's dark, since they are more sensitive.
As a consequence, our color recognition suffers greatly when it's dark.

Of course you might leave out that first-level brain thing; however, what I wanted to stress is that mimicking the eye might be less desirable than it looks at first sight.
« Last Edit: June 03, 2011, 01:13:57 AM by Christoph C. Feldhaim » Logged

jeremyrh
Full Member
Offline

Posts: 243


« Reply #137 on: June 03, 2011, 03:43:10 AM »

I'm regretting resurrecting this topic, and I apologise to Joofa if my responses seem aggressive. I think we are mostly saying the same thing, in fact, but my original comment was in response to this:

I have no experience with foveon sensors but aren't we as photographers trying to capture what the eye sees?
I understand this sensor mimics color film's layers but the eye has individual cones (RGB) and rods (B&W). It seems to me a sensor with individually colored photosites would mimic the eye better?
I always thought if the RGB sites were arranged in a pseudo random pattern (latin square) vs Bayer pattern we might have something. Other than mimicking film I don't see the advantage of a foveon sensor


which seems to suggest that it is desirable to make a sensor that physically resembles the human eye in some sense, which I think is a mistake.
Logged
BartvanderWolf
Sr. Member
Offline

Posts: 3462


« Reply #138 on: June 03, 2011, 07:37:48 AM »

which seems to suggest that it is desirable to make a sensor that physically resembles the human eye in some sense, which I think is a mistake.

I'd say that it would be more useful to have a sensor that can output something resembling what the human eye saw, with minimal processing. That doesn't necessarily mean that the sensor's response should resemble our eye's, although it likely would help.

Cheers,
Bart
Logged
ejmartin
Sr. Member
Offline

Posts: 575


« Reply #139 on: June 03, 2011, 09:13:15 AM »

Not sure about that, Bart.  Red-green separation in the color response of the eye is quite poor, having evolved quite late in our evolutionary history; if I'm not mistaken, this is related to red-green color-blindness issues when one is on the tail of the distribution of color visual response. Having a CFA with even the normal human response would, I suspect, lead to a lot of chroma noise when one attempts to separate the strongly overlapping color channels.

I thought one of the things people liked about MFDBs is their willingness to sacrifice some QE for good color separation between the channels. I think the engineering challenge in digital photography is really quite different from that of human vision, so I wouldn't necessarily expect the solution embodied in the latter to be optimal for the former.
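The chroma-noise worry can be made concrete with a toy unmixing calculation. The two 3x3 mixing matrices below are hypothetical, not measured CFA responses for any real sensor:

```python
import numpy as np

# Hypothetical 3x3 spectral mixing matrices (rows: R, G, B channels).
well_separated = np.array([[1.00, 0.10, 0.02],
                           [0.10, 1.00, 0.10],
                           [0.02, 0.10, 1.00]])
strongly_overlapping = np.array([[1.0, 0.7, 0.3],
                                 [0.7, 1.0, 0.7],
                                 [0.3, 0.7, 1.0]])

def noise_gain(mix):
    """Per-channel RMS amplification of unit sensor noise after unmixing.

    Color separation applies inv(mix); for independent unit-variance noise
    the output standard deviation per channel is the row norm of the inverse.
    """
    return np.linalg.norm(np.linalg.inv(mix), axis=1)
```

The more the channel responses overlap, the larger the entries of the inverse, so the same sensor noise is amplified more by color separation - which is one way to read the chroma-noise penalty of eye-like, strongly overlapping filters.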
Logged

emil