Author Topic: Canon Medium Format Camera  (Read 52680 times)
Gordon Buck
« Reply #20 on: October 16, 2006, 11:34:24 AM »

Would it be possible to combine two full frame 35mm sensors (nominally 24x36mm) to get 48x36mm and use that double sensor in a new camera that would accept existing 35mm lenses as well as a series of new (“double full frame”) lenses?  

Wouldn’t existing 35mm lenses produce a 43mm circular image on such a double sensor?  That circular image could be cropped to square as well as the traditional 35mm in either portrait or landscape orientation and other aspect ratios – without rotating the camera body!   (Would existing “full frame” 35mm lenses then be considered to have a 1.4 crop factor when used on the doubled sensor?)

By using two existing (or already developed) sensors, there should be some economies of scale as well as savings in developing a new, larger sensor.

Seems like a camera for the double sensor would be only an inch or so taller than existing full frame digital cameras.

It would not be necessary to purchase new lenses in order to get started with the new system but, of course, the new “double full frame” line of lenses would be very tempting!

pss
« Reply #21 on: October 16, 2006, 11:51:14 AM »

The rep I spoke with made it very clear that there would be no MF sensor, nothing close to what is being offered by the DMF makers right now; the size of the mirror would make the camera body too big. The size he was talking about was more like a 10-20% larger chip, which would be able to handle the 20-25 MP Canon wants to achieve in the near future.
I am not making this stuff up, and I am not writing what I want to see; I am only passing on what I heard.

RedRebel
« Reply #22 on: October 16, 2006, 11:52:10 AM »

The next rumour is that Canon has bought Hasselblad to implement their new medium and large format sensors. No need to design new lenses, except for a 24-105 f2.8 IS.

ps. This is a joke, of course, although I wouldn't be surprised if Canon did such a thing; Sony also entered the DSLR market by taking over Konica Minolta.
Graham Mitchell
« Reply #23 on: October 16, 2006, 11:54:18 AM »

Quote
Wouldn’t existing 35mm lenses produce a 43mm circular image on such a double sensor?  That circular image could be cropped to square as well as the traditional 35mm in either portrait or landscape orientation and other aspect ratios – without rotating the camera body!   (Would existing “full frame” 35mm lenses then be considered to have a 1.4 crop factor when used on the doubled sensor?)

This sounds a bit like the Sinar modular system. The camera has to be modular to allow the user to swap out various lens mounts. The price would go up considerably and the sales volume would be tiny. This is not really Canon territory, and I don't see them bringing out a modular camera.

It is hard to compare a 24x36mm sensor with a 36x48mm sensor because the aspect ratio is different. You will get different crop factors depending on which dimension you use for the calculation.
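For what it's worth, the diagonals involved are easy to check with a few lines of Python; this is plain geometry, not any manufacturer's specification:

```python
import math

def diagonal(width_mm, height_mm):
    """Diagonal of a rectangular sensor in mm."""
    return math.hypot(width_mm, height_mm)

ff = diagonal(36, 24)    # classic 35mm full frame: ~43.3 mm
dff = diagonal(48, 36)   # hypothetical "double full frame": 60.0 mm

# A 35mm lens only has to cover the ~43.3 mm image circle, so on the
# larger sensor it falls well short of the 60 mm diagonal.
print(round(ff, 1), round(dff, 1), round(dff / ff, 2))  # 43.3 60.0 1.39
```

The 1.39 diagonal ratio is presumably where the "1.4" figure in the quoted post comes from; comparing widths gives 48/36 ≈ 1.33 and heights 36/24 = 1.5, which is exactly the ambiguity described above.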

Finally, why bother with two sets of lenses when you could just use the 645 format and get the max IQ out of each shot?

If Canon does anything at all (which I really doubt) it will be something like the Mamiya ZD. However, they don't have the lenses, so unless they can use an existing lens mount it's just not going to happen. They would need a full lens selection to compete, and that is simply too expensive for any likely return in such a small market.

Graham Mitchell - www.graham-mitchell.com
BJL
« Reply #24 on: October 16, 2006, 02:42:31 PM »

Quote
Would it be possible to combine two full frame 35mm sensors (nominally 24x36mm) to get 48x36mm and use that double sensor in a new camera that would accept existing 35mm lenses as well as a series of new (“double full frame”) lenses? 

...

By using two existing (or already developed) sensors, there should be some economies of scale as well as savings in developing a new, larger sensor.
No.

Look at it this way: this is an obvious idea that has been floated in forums many times over the years, so if it worked, Kodak and Dalsa would have thought of it and done it years ago, greatly reducing the cost of their MF sensors.

I conclude that it cannot be done, at least not with good results.

(In fact I believe that it has been tried, sort of, with a beam-splitting mirror to spread the light over two sensors side by side, with a gap between them as needed to connect their wiring. The product did poorly, and the idea has not been tried again.)
rainer_v
« Reply #25 on: October 17, 2006, 01:54:37 AM »

Quote
No.

Look at it this way: this is an obvious idea that has been floated in forums many times over the years, so if it worked, Kodak and Dalsa would have thought of it and done it years ago, greatly reducing the cost of their MF sensors.

I conclude that it cannot be done, at least not with good results.

(In fact I believe that it has been tried, sort of, with a beam-splitting mirror to spread the light over two sensors side by side, with a gap between them as needed to connect their wiring. The product did poorly, and the idea has not been tried again.)

Did you ever see an MF sensor? If so, you can easily see that they are stitched.

rainer viertlböck
architecture photographer
munich / germany

www.tangential.de
eronald
« Reply #26 on: October 17, 2006, 02:05:14 AM »

Quote
Did you ever see an MF sensor? If so, you can easily see that they are stitched.

This is done on the silicon wafer during each processing step. If you will, it's like printing several negatives across a big sheet of photo paper: Click, move paper, click, and so forth. You only develop and fix once regardless of the number of exposures onto the sheet. The comparison is appropriate: The process is called photolithography.

Edmund
rainer_v
« Reply #27 on: October 17, 2006, 03:06:59 AM »

Quote
This is done on the silicon wafer during each processing step. If you will, it's like printing several negatives across a big sheet of photo paper: Click, move paper, click, and so forth. You only develop and fix once regardless of the number of exposures onto the sheet. The comparison is appropriate: The process is called photolithography.

Edmund

These parts of the sensor can be read out by different electronic amplifiers; this is probably what Leaf does with its new, faster sensors. It is also the sensor stitch which leads to the "centerfold issue" of the Dalsa sensors, although it can be removed electronically, as Sinar and Brumbaer are doing; Leaf hasn't found the code for that yet....
yaya
« Reply #28 on: October 17, 2006, 03:22:06 AM »

Quote
These parts of the sensor can be read out by different electronic amplifiers; this is probably what Leaf does with its new, faster sensors. It is also the sensor stitch which leads to the "centerfold issue" of the Dalsa sensors, although it can be removed electronically, as Sinar and Brumbaer are doing; Leaf hasn't found the code for that yet....

Rainer, you should listen to Edmund; the guy knows a thing or two about making sensors and silicon wafers.
There is no "stitching" done where you believe you see it on the sensor. Maybe Stephan H. can explain to you how his software removes the centrefold (not electronically).

Best regards

Yair

Creo UK Ltd., a subsidiary of Kodak.
---------------------------------------
Yair Shahar | Leaf EMEA | Regional Manager |
mob: +44 77 8992 8199 | yair.shahar@kodak.com | www.leaf-photography.com

Please note my email address has changed to yair.shahar@kodak.com; please update your contacts. Thanks!
eronald
« Reply #29 on: October 17, 2006, 05:37:18 AM »

Quote
These parts of the sensor can be read out by different electronic amplifiers; this is probably what Leaf does with its new, faster sensors. It is also the sensor stitch which leads to the "centerfold issue" of the Dalsa sensors, although it can be removed electronically, as Sinar and Brumbaer are doing; Leaf hasn't found the code for that yet....

-----
Yair:
 I actually wonder whether the chip couldn't be used in single-readout mode by changing the firmware. This would mean a slower readout, which is no problem for people who do architecture and wide angle work; they don't care about speed.

If it can be done with the current hardware design, this might be a quick and permanent fix for the centerfold problem - if not maybe it should be incorporated as an option in the next camera hardware iteration. It might even make economic sense as it would be able to "save" some chips with faulty or severely mismatched readouts and sell them as slower units.

Could you pass this message on to the dev team, please?

------

Rainer,

 You are right that pixels are moved to the closest edge for readout, but AFAIK this is done for speed and noise limitation; it has little to do with the mask stitching itself. However, you are also right that the multiple readouts are probably what creates the centerfold issue.

 This issue must be clearly known to the chip designers, because it's obvious that readout amps located at the periphery will mismatch on such a huge chip, if only because of external temperature gradients. Hence they must have considered that for chips within certain tolerances the mismatch can be compensated in software. Readout mismatch issues are not new; Canon (the original 1D) and Nikon have seen similar problems on some models.
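To illustrate the kind of software compensation being suggested here (purely a hypothetical sketch, not Leaf's, Sinar's, or Brumbaer's actual algorithm), a per-half gain could be estimated from the rows adjacent to the seam and applied to one half:

```python
def equalize_halves(img):
    """img: list of rows (lists of floats), seam across the middle.

    Estimates the gain mismatch between the two halves from the two
    rows adjacent to the seam (on a smooth subject they should match)
    and rescales the bottom half accordingly.
    """
    mid = len(img) // 2
    top_edge = sum(img[mid - 1]) / len(img[mid - 1])
    bottom_edge = sum(img[mid]) / len(img[mid])
    gain = top_edge / bottom_edge
    return img[:mid] + [[px * gain for px in row] for row in img[mid:]]

# A toy 4x4 "frame" whose bottom half reads ~3% hot, as if one
# readout amplifier had slightly higher gain:
frame = [[100.0] * 4, [100.0] * 4, [103.0] * 4, [103.0] * 4]
fixed = equalize_halves(frame)  # both halves now sit at ~100.0
```

A real correction would of course have to be far more careful: the seam rows are not guaranteed to image a smooth subject, and the mismatch need not be a pure gain.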

 I hope that at some point some expert will come in to explain the design tradeoffs; in the meantime, full technical documentation of the Aptus chips is available on the Dalsa site.


Edmund
yaya
« Reply #30 on: October 17, 2006, 05:55:01 AM »

Quote
-----
Yair:
 I actually wonder whether the chip couldn't be used in single-readout mode by changing the firmware. This would mean a slower readout,  no problem  for people who do architecture and wide angle work - they don't care about speed.

If it can be done with the current hardware design, this might be a quick and permanent fix for the centerfold problem - if not maybe it should be incorporated as an option in the next camera hardware iteration. It might even make economic sense as it would be able to "save" some chips with faulty or severely mismatched readouts and sell them as slower units.

could you pass this message on to the dev team please ?

------

Rainer,

 You are right that pixels are moved to the closest edge for readout, but AFAIK this is done for speed and noise limitation, it has little to do with the mask stitching itself.  However you are also right that the multiple readouts are probably what creates the centerfold issue.

 This issue must be clearly known to the chip designers, because it's obvious that readout amps located at the periphery will mismatch on such a huge chip - if only because of external temperature gradients. Hence  they must have considered that for chips in certain tolerances the mismatch can be compensated by software.  Readout mismatch issues are not new, Canon (the original 1D) and Nikon have seen similar problems on some models.

 I hope that at some point some expert will come in to explain the design tradeoffs; in the mean time, full technical documentation of the Aptus chips is available on the Dalsa site.
Edmund

Alright I'll try to explain it one more time:

The centrefold issue has NOTHING to do with multiple readouts. In fact, the readouts on the Dalsa sensors are located on the short side.
Using a single readout will certainly slow the capture rate, but it will not resolve this issue.

We currently have a working software solution for correcting this issue. I promised to post an official statement last week and I will do so as soon as it is ready for posting, should happen in the next 2-3 days.

Thanks

Yair
eronald
« Reply #31 on: October 17, 2006, 06:09:44 AM »

Quote
Alright I'll try to explain it one more time:

The centrefold issue has NOTHING to do with multiple readouts. In fact, the readouts on the Dalsa sensors are located on the short side.
Using a single readout will for sure slow the capture rate but will not resolve this issue.

We currently have a working software solution for correcting this issue. I promised to post an official statement last week and I will do so as soon as it is ready for posting, should happen in the next 2-3 days.

Thanks

Yair

Sorry Yair,
I obviously misread the Dalsa docs. That's what you get for being out of practice.
Edmund
khwanaon
« Reply #32 on: October 17, 2006, 09:28:55 AM »

Quote
Alright I'll try to explain it one more time:

The centrefold issue has NOTHING to do with multiple readouts. In fact, the readouts on the Dalsa sensors are located on the short side.
Using a single readout will for sure slow the capture rate but will not resolve this issue.

We currently have a working software solution for correcting this issue. I promised to post an official statement last week and I will do so as soon as it is ready for posting, should happen in the next 2-3 days.

Thanks

Yair

Alright Yair, then what is causing the centerfold issue, if it isn't the multiple readout? I am really curious to know.

thanks,
Aon
khwanaon
« Reply #33 on: October 17, 2006, 10:29:14 AM »

Quote
Alright Yair, then what is causing the centerfold issue, if it isn't the multiple readout? I am really curious to know.

thanks,
Aon

It seems that nobody knows exactly where this problem comes from.

Aon
brumbaer
« Reply #34 on: October 17, 2006, 10:29:18 AM »

The sensor is a single "chip".

The sensor is designed as a portrait sensor, the number of "active pixels" is 4992 x 6668.

There are some additional pixels which are of no concern for this matter.

Panels
When you hold the sensor up to the light and tilt it, you will see that the light is reflected unevenly. You will be able to make out 6 areas: 2 rows of 3 columns (portrait mode).

This has to do with the manufacturing process. Whatever the reason for those areas is doesn't matter; what's important is that they exist. On page 9 of the data sheet the stitching effect is specified as typically 1% and at most 3%.

Readout
The sensor has 4 (not 2) readouts which are located in the corners (not sides).

When an image is taken, all pixels are captured simultaneously. So the data in the sensor (so to speak) is identical regardless of the number of readouts used.

Once captured the data can be read out using 1, 2 or 4 readouts.

If you use 1 readout, the first sensor line is copied into an output register and the pixels are shifted out one after the other, as soon as the register is empty (all pixels of the line are shifted out) the next line is loaded into the register and shifted out and so on.

If you use 2 readouts you can use the same register, but the register will shift out 2 pixels per clock, one at each "end": the left half of the pixels from one readout pin, the other half from the second pin. Note that the right-hand pixels are shifted out in reverse order, outside pixels first so to say.

Alternatively you can use 2 registers. The top half of the lines (one after the other, of course) is loaded into the top register and the bottom half into the bottom register; both registers are shifted out at the same time.
Again, lines go out in outside-first order: the top register shifts out lines 0, 1, ... and the bottom register shifts out lines 6667, 6666, ... (talking visible lines only).

With 4 readouts both variants are combined.

Please remember that the sensor data is identical regardless of the readout mode used.

So if there is a stitching error of 3% it will be there no matter how many readouts you use, but ...

If you use more than one readout, you will add an additional error for the separate signal paths.

I haven't found a specification how much the amplifiers of the readouts differ and of course I can't say how much the external amplifiers differ.

This might add up to the problem, but it's difficult to guess how much. An educated guess would be easier if I knew which readout variant is used (1 readout, 2 readouts / 1 register, or 2 readouts / 2 registers).

But what can be said is that if the problem is caused, or at least increased, by the readout mode, it would have to be the 2 readouts / 2 registers mode (or the 4 readout mode, which I doubt is used).

It has to be 2 readouts, because 1 readout doesn't add different errors to the top and bottom halves; and it would have to be 2 registers, because the error usually shows as a difference between the top and bottom halves of the image (remember, it's a portrait sensor), and to get different signal paths for the top and bottom half you have to use separate readouts for the lines.

So the sensor is not stitched in the classical two parts sense but it is stitched in the sense of areas that differ.
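The readout orders described above can be sketched on a toy 4 x 6 "sensor". This is purely schematic (the real register widths and clocking are in the Dalsa data sheet), but it shows the key point that the captured values are identical whichever mode is used:

```python
def one_readout(sensor):
    # Single readout: line after line, left to right.
    return [px for line in sensor for px in line]

def two_readouts_one_register(sensor):
    # One register shifted out at both ends: left half of each line in
    # normal order from one pin, right half outside-first from the other.
    left, right = [], []
    for line in sensor:
        mid = len(line) // 2
        left.extend(line[:mid])
        right.extend(reversed(line[mid:]))
    return left, right

def two_readouts_two_registers(sensor):
    # Two registers: top half of the lines from one readout, bottom half
    # (outside lines first) from the other.
    mid = len(sensor) // 2
    top = [px for line in sensor[:mid] for px in line]
    bottom = [px for line in reversed(sensor[mid:]) for px in line]
    return top, bottom

# Tiny 4-line x 6-pixel "sensor"; pixel value encodes row and column.
sensor = [[10 * r + c for c in range(6)] for r in range(4)]

# However the data is shifted out, the captured values are the same:
top, bottom = two_readouts_two_registers(sensor)
assert sorted(top + bottom) == sorted(one_readout(sensor))
```

Any per-path amplifier mismatch would therefore be an artifact of the readout electronics, not of the values sitting in the sensor.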

Two sensor stitching

Any sensor has a frame of some sort, because a sensor cannot be manufactured in such a way that the sensor pixels sit directly up against any edge of the chip.
So you will always have a gap, and the gap is very large compared to the size of a single pixel.
Assuming that nobody wants to live with a multi-pixel gap in the center of his images, the optical path would have to be split so that one part of the light hits one sensor and the other part the other sensor.
That sounds rather difficult, especially if the solution has to work for wide angle and tele lenses alike. It doesn't make the design any simpler that the "splitter" should not reduce the amount of light hitting the sensor.
Not to speak of the problem of making the two sensors match in color and brightness rendition.

Regards
SH
khwanaon
« Reply #35 on: October 17, 2006, 10:43:49 AM »

Quote from: brumbaer,Oct 17 2006, 10:29 PM
The sensor is a single "chip".

The sensor is designed as a portrait sensor, the number of "active pixels" is 4992 x 6668.

Thanks for this precise and clear answer! It's appreciated.

Aon
yaya
« Reply #36 on: October 17, 2006, 01:11:12 PM »

Thank you Stephan for the explanation, you've put it down in much better words than I could have done.

Yair
ericstaud
« Reply #37 on: October 17, 2006, 01:35:14 PM »

Hi Stephen and Yair,

Is there a configuration that would eliminate the centerfold effect with the current Dalsa sensor? If there is, would there be trade-offs, like a slower capture rate?

-Eric
brumbaer
« Reply #38 on: October 18, 2006, 01:17:51 AM »

It's difficult to say.

I do not have enough data about the frequency and kind of this problem on a single sensor.

Does it exist all the time ?

Does it show all the time (which is not the same as existing) ?

Does it change with camera, lens, subject, shifting or lighting, and if it does, in which way?

What readout mode is currently used ?

That is the first set of questions that has to be answered. The answers will help to judge the complexity of a hardware solution.

Regards
SH