Pages: « 1 ... 3 4 [5] 6 »
Author Topic: Leica M Monochrom review  (Read 19335 times)
Guillermo Luijk
Sr. Member
Posts: 1291
« Reply #80 on: May 14, 2012, 02:09:46 AM »

But you are unusual and special, Guillermo. Grin Most of us are not concerned with such finer technical points, and what may be technically possible if we take a lot of trouble, especially when such achievements have certain disadvantages, perhaps in respect of the attractiveness of the review image on the camera's LCD screen, for example.

Ray, there is no problem at all in having a beautifully white-balanced review image on your LCD (the camera's JPEG, for instance) together with a RAW histogram and a clipping/underexposure warning. The displayed image is never RAW anyway (it has to be demosaiced, for instance), so there is no need for greenish images:



So we have the best of both worlds: a natural-looking image and accurate exposure information. This IS possible, and it IS easy to achieve; camera makers simply don't focus on the fine RAW shooter. If you are not concerned with accurate RAW exposure, then you are simply not a fine RAW shooter.
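Just to make the mechanism concrete, here is a minimal sketch (the mosaic data, the white-balance gains, the 12-bit white level and the RGGB phase are all invented for the example, not taken from any real camera): white balance touches only the copy destined for the LCD, while the clipping warning comes from the untouched raw values.

```python
import numpy as np

# Hypothetical 12-bit RGGB mosaic; the data and gains are invented for the demo.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(64, 64))           # undemosaiced sensor values

# White-balance gains touch only the *display* copy sent to the LCD...
preview = raw.astype(float)
preview[0::2, 0::2] *= 2.0                           # R photosites (RGGB phase)
preview[1::2, 1::2] *= 1.5                           # B photosites
preview = np.clip(preview / 4095.0, 0.0, 1.0)        # normalized preview image

# ...while the exposure warning is computed from the untouched raw numbers.
clipped_fraction = float(np.mean(raw >= 4095))
print(f"clipped photosites: {clipped_fraction:.2%}")
```

The two outputs are independent: applying stronger gains to `preview` changes what the LCD shows but cannot change `clipped_fraction`, which is exactly the decoupling being argued for.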

Anyway, I just wanted to point out that RAW histograms are possible, no matter whether the sensor is CFA or monochrome.

Regards
« Last Edit: May 14, 2012, 04:00:41 AM by Guillermo Luijk »

grzybu
Newbie
Posts: 45
« Reply #81 on: May 14, 2012, 04:34:54 AM »

I would love to have a RAW histogram in camera, even if it's available only in review mode.
RAW developers can show RAW histograms, and it should be really easy to code, especially if you keep the four colours separate, with the two greens distinct. I don't see any reason why it's harder than making a histogram from a JPEG file. WB is not important, because I want to see a histogram of the RAW data, and those are just numbers.
It's just that producers are lazy and don't want to see how users struggle with UniWB, etc., to get something closer to a RAW histogram.
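For what it's worth, the "easy to code" claim seems right. Here is a sketch under assumed conventions (an RGGB layout and a 12-bit sensor; real cameras differ): each CFA plane is pulled out by strided slicing and binned directly, with the two greens kept separate and no white balance anywhere.

```python
import numpy as np

def raw_histograms(raw, bits=12, bins=64):
    """Per-channel histograms of an RGGB Bayer mosaic, greens kept separate.

    No demosaicing and no white balance: each plane is extracted by
    strided slicing and the raw numbers are binned as-is.
    """
    planes = {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
    edges = np.linspace(0, 2 ** bits, bins + 1)
    return {name: np.histogram(p, bins=edges)[0] for name, p in planes.items()}

# Synthetic 64x64 mosaic: each plane contributes 32*32 = 1024 samples
raw = np.random.default_rng(1).integers(0, 4096, size=(64, 64))
hists = raw_histograms(raw)
print({name: int(h.sum()) for name, h in hists.items()})
```

Collapsing the two greens into one trace, as suggested later in the thread, is just `hists["G1"] + hists["G2"]`.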
hjulenissen
Sr. Member
Posts: 1683
« Reply #82 on: May 14, 2012, 04:50:06 AM »

I would love to have a RAW histogram in camera, even if it's available only in review mode.
Agreed
Quote
RAW developers can show RAW histograms, and it should be really easy to code, especially if you keep the four colours separate, with the two greens distinct.
I tend to think that the gain of the two green channels is a complexity that I don't want to think about when using my camera. If there is some slight deviation, I'd rather see them bundled together in one histogram.
Quote
I don't see any reason why it's harder than making a histogram from a JPEG file. WB is not important, because I want to see a histogram of the RAW data, and those are just numbers.
It's just that producers are lazy and don't want to see how users struggle with UniWB, etc., to get something closer to a RAW histogram.
They are selling in-camera JPEGs to most customers; for low-end to mid-range products that functionality is probably the most important one, and re-using it for high-end products is basically free. Canon and Nikon seem to be putting a lot of R&D and marketing effort into producing the "best" JPEG files, meaning that they are unlikely to jeopardize their investment.

Any raw feedback would (probably) have to be developed especially for niche cameras, and for a niche audience. Soccer mums might think that the additional flexibility clutters the user interface. So while the requested functionality might be really simple, it might not make sense for the major players to include it - especially as long as Lu-La readers & friends continue to purchase products despite the lack of raw histograms.

-h
dreed
Sr. Member
Posts: 1260
« Reply #83 on: May 14, 2012, 06:20:18 AM »

Quote
Agreed. I tend to think that the gain of the two green channels is a complexity that I don't want to think about when using my camera. If there is some slight deviation, I'd rather see them bundled together in one histogram. They are selling in-camera JPEGs to most customers; for low-end to mid-range products that functionality is probably the most important one, and re-using it for high-end products is basically free. Canon and Nikon seem to be putting a lot of R&D and marketing effort into producing the "best" JPEG files, meaning that they are unlikely to jeopardize their investment.

Are photographers at the Super Bowl or Olympics going to be interested in raw histograms or fiddling around with raw conversion?

When you're in a situation where seconds matter in getting a picture out of the camera and onto a web page, JPEG is always going to be part of the answer.

Similarly, if you've just been out to a gig or a party and you get home with a camera full of photos that you want to upload to facebook, etc, do you want to mess around with raw conversion?

Quote
Any raw feedback would (probably) have to be developed especially for niche cameras, and for a niche audience. Soccer-mums might think that the additional flexibility clutters the user interface. So while the requested functionality might be real simple, it might not make sense for the major players to include it - especially as long as the lu-la readers & friends continue to purchase products despite the lack of raw histograms.

Do you count landscape and street photography as "niche"?
hjulenissen
Sr. Member
Posts: 1683
« Reply #84 on: May 14, 2012, 06:57:20 AM »

Are photographers at the Super Bowl or Olympics going to be interested in raw histograms or fiddling around with raw conversion?
Most of them: no. That was what I was trying to say in my post.
Quote
Do you count landscape and street photograph as "niche"?
Not necessarily, but I count those who care about ETTR as a minute niche compared to all of the Japanese and Germans purchasing expensive cameras to take snaps of their pets, family and occasional holiday trips using AE and JPEG exclusively.

Do you think that all landscape and street photographers work in raw format and obsess about the noise levels found in the darkest parts of their images? I don't. I am sure you could find any number of photographers producing more interesting images than mine who haven't got the faintest idea what ETTR is or how many stops of usable DR their camera has.

-h
« Last Edit: May 14, 2012, 07:02:58 AM by hjulenissen »
grzybu
Newbie
Posts: 45
« Reply #85 on: May 14, 2012, 07:12:45 AM »

Most EVIL cameras have a live histogram, but it's luminance only, so it's not perfectly usable. I don't think soccer mums are enabling this to take snapshots, but it's available for those who care. The same could be done with a RAW histogram in review mode. Just let users enable it and take responsibility for the results Wink
It wouldn't cost much to add such a feature, and it wouldn't make the camera any worse.
dreed
Sr. Member
Posts: 1260
« Reply #86 on: May 14, 2012, 10:51:32 AM »

Quote
Most of them: no. That was what I was trying to say in my post. Not necessarily, but I count those who care about ETTR as a minute niche compared to all of the Japanese and Germans purchasing expensive cameras to take snaps of their pets, family and occasional holiday trips using AE and JPEG exclusively.

Do you think that all landscape and street photographers work in raw format and obsess about the noise levels found in the darkest parts of their images? I don't. I am sure you could find any number of photographers producing more interesting images than mine who haven't got the faintest idea what ETTR is or how many stops of usable DR their camera has.

I'd always assumed that they would ... I suppose that's rather silly of me.
image66
Full Member
Posts: 122
« Reply #87 on: May 14, 2012, 11:32:51 AM »

So much of this discussion is centered around the resolution issues involved with the Bayer array. But in order to give quantifiable numbers for comparison, we really do need to be VERY specific about the demosaicing algorithm being used.

In a handful of algorithms, green alone is used to determine luminance. In others, each channel is used at its own pixel location. In yet others three pixels are averaged together, while once in a while an algorithm actually uses four.

Since we photographers here on Luminous Landscape are the smartest kids in the classroom, I personally would prefer that when we toss these numbers out we identify just which conversion algorithm we are using. It's the little things like this that make a big difference in backing up our conclusions.

We ARE the smart kids, right?

Ken N.
Rob C
Sr. Member
Posts: 12213
« Reply #88 on: May 14, 2012, 11:47:38 AM »

Quote
So much of this discussion is centered around the resolution issues involved with the Bayer array. But in order to give quantifiable numbers for comparison, we really do need to be VERY specific about the demosaicing algorithm being used.

In a handful of algorithms, green alone is used to determine luminance. In others, each channel is used at its own pixel location. In yet others three pixels are averaged together, while once in a while an algorithm actually uses four.

Since we photographers here on Luminous Landscape are the smartest kids in the classroom, I personally would prefer that when we toss these numbers out we identify just which conversion algorithm we are using. It's the little things like this that make a big difference in backing up our conclusions.

We ARE the smart kids, right?

Ken N.

Nope; it's the docs, dentists, lawyers and even accountants who are the smart kids... photographers are those too dumb to do anything else. (So I was led to believe shortly before I took it up.)

;-)

Rob C

joofa
Sr. Member
Posts: 488
« Reply #89 on: May 14, 2012, 11:31:53 PM »

Quote
Michael wrote:
Engineering a monochrome sensor equipped camera isn't simply a matter of removing the Bayer array. Though based on the M9 sensor, a significant amount of reengineering at the chip level was required
This is absolutely false. If you knew what you were talking about, instead of just repeating what someone at Leica told you, you'd know that. Of course, I would be delighted if you would tell of one single thing that would need to be reengineered (at the "chip" level - I assume you mean the sensor, but maybe something else).

Several differences can be mentioned here. I shall list only two, among others, that I have encountered personally during the making of digital cameras. I make no claim that they have any relevance to the Leica camera, but they do answer your request for a single thing that would need to change - which I interpret as the claim that the state of affairs should not change if you substitute a monochrome sensor for a color sensor. (1) A sensor typically has more pixels than the quoted output resolution. There are several uses for having a slightly larger grid than the announced resolution; among other things, it lets one move the output rectangle around a little, for several benefits that I won't go into here. On a monochrome sensor one can simply offset the grid by a single pixel. On a color Bayer sensor, however, offsetting by one pixel changes the phase of the RGGB pattern, and the camera firmware must take corresponding action to account for that. (2) Many sensors provide analog gain that can be set to different values for R, G and B. On a monochrome sensor there is a single number. Again, the firmware has to account for this accordingly.
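Point (1) can be illustrated with a toy sketch (the RGGB base pattern and the `cfa_phase` function are invented for illustration, not taken from any real firmware): offsetting the readout window by one pixel re-labels every channel, which is bookkeeping a monochrome sensor never needs.

```python
import numpy as np

def cfa_phase(y0, x0):
    """Return the 2x2 CFA pattern seen by a readout window offset by
    (y0, x0) pixels on a sensor whose full-grid pattern is RGGB."""
    base = np.array([["R", "G"], ["G", "B"]])
    return "".join(base[(y0 + dy) % 2, (x0 + dx) % 2]
                   for dy in (0, 1) for dx in (0, 1))

print(cfa_phase(0, 0))  # RGGB -- the nominal pattern
print(cfa_phase(0, 1))  # GRBG -- one-pixel horizontal shift re-labels channels
print(cfa_phase(1, 0))  # GBRG -- one-pixel vertical shift
```

On a monochrome sensor every offset of the window yields the same (single-channel) interpretation, so no such re-labelling logic exists.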

Admittedly, these are minor differences, and the overall design of a camera should not change a whole lot between a color and a monochrome version. But again, you asked for a single difference, however small it may be.


Quote
Your lack of knowledge is clear here as well ...

It is quite easy to pick on many (most??) articles for technical correctness. Guessing from the tone of your attack, I went back and read Michael's essay. To me, the points you have mentioned are too minor to detract from the overall message of the essay. The aim is to get the overall gist correct on an informal Internet forum, in the context in which things are written. I shall give you two examples again.

(1) On LL, DPR, and elsewhere, how many times have "knowledgeable" people stated that to get the overall system MTF one multiplies the MTFs of the lens, the sensor, this, that, and what not? Technically, however, that is incorrect unless properly accounted for, because multiplying MTFs in this manner requires the system to be shift-invariant, which the "MTF" of the sensor is not. Of course, people have noticed that the sensor response is not shift-invariant: the sensor's output for alternating, thin-enough white and black lines depends upon the registration of the lines with the pixels. At certain displacements, white falls on one pixel and black on the next, giving a contrasty image; with a slight displacement, say half a pixel, each pixel sees part of a black line and part of a white line, giving a less contrasty, more uniformly gray image. This is layman's speak for the non-shift-invariance of the sensor response mentioned above: the MTF itself becomes a function of the displacement in this scenario. So, while people are aware of the effect, it is often not incorporated into the definition and determination of MTF, though it is possible to do so. One might therefore attack the "product of MTFs" assertion when it is made without proper qualification.
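The alternating-lines example can be reproduced numerically. This is a sketch under simplifying assumptions (ideal box pixels of unit width, a black/white grating with a period of two pixel pitches, no lens blur): the measured Michelson contrast swings between 1 and 0 purely as a function of the sub-pixel phase, which is exactly the shift-variance described above.

```python
import numpy as np

def sampled_contrast(phase, period=2.0, n_sub=1000):
    """Michelson contrast of a black/white line pattern after integration
    over ideal unit-width box pixels.  `period` is the grating period in
    pixel pitches; `phase` shifts the grating by a sub-pixel amount."""
    values = []
    for p in range(8):                               # a short row of pixels
        x = p + (np.arange(n_sub) + 0.5) / n_sub     # sub-samples inside pixel p
        white = ((x + phase) % period) < (period / 2.0)
        values.append(white.mean())                  # box-pixel integration
    values = np.array(values)
    return (values.max() - values.min()) / (values.max() + values.min())

for phase in (0.0, 0.25, 0.5):
    print(f"grating shift {phase:.2f} px -> contrast {sampled_contrast(phase):.2f}")
```

With the grating edges aligned to pixel boundaries (shift 0) each pixel is all white or all black, so contrast is 1; shifted by half a pixel, every pixel integrates to the same mid-gray and contrast collapses to 0. A single "sensor MTF" number cannot describe both cases.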

(2) Open almost any book, Internet article, etc., and stare at the xy chromaticity diagram. Do you see what is wrong? I shall leave that as a homework exercise  Grin.

Quote
Also with all due respect, you may well talk to all kinds of experts, but that does not mean that you are an expert in this field (digital imaging, including sensor operation). You are not.

I hope you get the point by now.

Sincerely,

Joofa
« Last Edit: May 28, 2012, 02:04:40 AM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
hjulenissen
Sr. Member
Posts: 1683
« Reply #90 on: May 15, 2012, 02:46:01 AM »

Several differences can be mentioned here...
Aku claimed that there were no re-engineering changes needed at the sensor level. Your reply talked only about changes needed at the firmware level. I don't see how your post disputes any of Aku's claims.

I am guessing that any trivial change may lead to all kinds of extra costs. A "new product" might have to be certified for electrical hazards and RFI. It might need a rewrite of user manuals and a redesign of packaging. Any change to the GUI/firmware could mean re-staffing software developers and changing source code that may not have been touched or tested since the launch of the M9. Purchasing an even smaller batch of a specialized image sensor might not be easy to negotiate. And in the end, Leica does not do this to be kind, but because they believe the difference between their costs and the selling price makes it worthwhile.
Quote
It is quite easy to pick on many (most??) articles for technical correctness. ...
I think it is important that _if_ some photography reviewer wants to talk about sensels and other technical details, he/she should get it right. If they don't want to talk about nitty-gritty technical details, that is fine too.

-h
« Last Edit: May 15, 2012, 02:47:47 AM by hjulenissen »
joofa
Sr. Member
Posts: 488
« Reply #91 on: May 15, 2012, 02:55:49 AM »

Aku claimed that there were no re-engineering changes needed on the sensor level. Your reply talked only about changes needed on firmware level. I don't see your post disputing any of Akus claims?

You can try to be a smart alec and find all sorts of ifs and buts. The point, as I see it, is that Michael implied that some differences between monochrome and color cameras can have an effect on availability, timing, etc. You can try to narrow it down to a sensor-versus-firmware difference and nitpick. For the cameras we have designed, the firmware was mostly part of the camera itself. So, for me, in a broader sense, the question is the difference between color and monochrome camera design, not necessarily between color and monochrome sensors.


Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
hjulenissen
Sr. Member
Posts: 1683
« Reply #92 on: May 15, 2012, 03:17:34 AM »

You can try to be a smart alec, and find all sorts of ifs and buts.
Not trying to be anything; just discussing the matter at hand. I suggest you do the same. Michael said "something". Aku interpreted this as "A" and claimed that "A" was false. You claimed that "B" was true. B != A.
Quote
The point, as I see it, is that Michael implied that some differences between monochrome and color cameras can have an effect on availability, timing, etc. You can try to narrow it down to a sensor-versus-firmware difference and nitpick. For the cameras we have designed, the firmware was mostly part of the camera itself. So, for me, in a broader sense, the question is the difference between color and monochrome camera design, not necessarily between color and monochrome sensors.
So if Michael had stated more generally that a monochrome version of a CFA camera may be more expensive and complicated to manufacture than one might initially think, both you and I would agree with that. Claiming that "a significant amount of reengineering at the chip level" was required is more concrete.

I do believe that Aku should have phrased his objections differently.

-h
joofa
Sr. Member
Posts: 488
« Reply #93 on: May 15, 2012, 03:31:27 AM »

Not trying to be anything except discuss the matter at hand. I suggest you do the same. Michael said "something". Aku interpreted this as "A", and claimed that "A" was false. You claimed that "B" was true. B != A.

No, there is no Boolean (or any other) algebra involved here. From your response, I take it that you seem never to have been involved with a camera design. Even simple logistical reasons could make "A" false. I shall give you a personal example again. Some of the monochrome and color cameras that we have produced, which otherwise shared the same "public consumption" specs such as resolution, pixel size and sensor form factor, were based upon the availability of sensors from the manufacturer that differed in things such as the number of output channels to read on the sensor - which doesn't matter to the consumer. You can read all sorts of camera experts on LL and DPR who think that a camera has few parts other than a sensor and an ADC, but at least for us there were other chips that provide timing and control signals to the sensor, among other things. A difference in the number of readout channels alone made us design things differently. Again, you can go ahead and nitpick: why didn't we use monochrome and color sensors that were identical in everything, including the readout channels mentioned above? However, as I said before, we don't live in a world of armchair sensor and/or camera design experts for whom everything is possible on paper. The logistics and availability of certain sensor types were reason enough for us.

Sincerely,

Joofa
« Last Edit: May 15, 2012, 02:26:50 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
sandymc
Sr. Member
Posts: 269
« Reply #94 on: May 15, 2012, 04:36:58 AM »

I think there's a certain amount of missing the point going on here.

What got people excited - anyway, what got me to respond above - were not the inaccuracies in the original article. Everybody makes mistakes, and everybody who writes has to simplify occasionally to get the message across. Sometimes that simplification means that what gets written is mostly true, but not true in every possible condition. The "problem", if you will, was Michael's response (message #16), where he apparently defended the absolute accuracy of what was written, and did so not on the facts but by reference to nameless experts. Yes, Michael was provoked by language that shouldn't have been used, but you have to separate the facts from the emotion.

Sandy

« Last Edit: May 15, 2012, 05:27:32 AM by sandymc »
Jim Pascoe
Sr. Member
Posts: 824
« Reply #95 on: May 15, 2012, 05:09:13 AM »

I think the problem is that Michael is trying to write articles that are aimed at the 'informed, intelligent' photographer, and not at scientists.  Listening to Michael talking on his various tutorials he comes across to me as a man who knows he has limitations on the deep scientific and engineering side of photography (as do almost all of us), and is trying to understand as much as is necessary to get the best out of cameras as tools.  The article in question was billed as a "Hands on First Impressions Report", and went into just enough technicalities to point out the differences between the M9m and the M-Mono.  It was never meant to be a treatise on the chip level engineering involved, and to be quite honest I interpret engineering to involve hardware and software anyway.
The thread started out with a few points about the article until Aku's rude posting questioning Michael's credentials as a technical writer. Well, if you want to discuss the fine points of sensor design, perhaps you should post on a forum aimed at technical writing, or at the very least have the decency to start a new thread on LL entitled "M9 sensor engineering", and then start out in a manner not designed to antagonise the author. Of course, Aku is much cleverer than Michael and is obviously quite keen to make sure we all know this. As this is primarily a website about photography, perhaps he would be good enough to post a link to a gallery of his work so that we can all have a look and judge for ourselves.

I'm sure Michael must be a lot thicker-skinned than me, but posts like this would make my blood boil.

Jim
hjulenissen
Sr. Member
Posts: 1683
« Reply #96 on: May 15, 2012, 07:29:42 AM »

No, there is no Boolean (or any other) algebra involved here. 
I am not a camera designer, and have never claimed to be one.

Aku talked about the sensor; you responded by talking about firmware. You may now claim that sensor and firmware really are the same, or that it is difficult to draw the line, or that camera design is far too complex for us mere mortals ever to understand. Fine. My point was that your initial post was a very poor counterargument to Aku's post. Insinuating that my motivation for participating in the discussion is any less honorable than yours does not change my view (lines such as "smart alec", "go ahead and nitpick", "armchair sensor designer").

-h
FranciscoDisilvestro
Sr. Member
Posts: 549
« Reply #97 on: May 15, 2012, 07:45:51 AM »


(2) Open almost any book, Internet article, etc and stare at the xyz chromaticity diagram. Do you see what is wrong? I shall leave that as a homework exercise  Grin.


Since I'm very interested in this subject, could you please explain what is wrong?

Regards,

joofa
Sr. Member
Posts: 488
« Reply #98 on: May 15, 2012, 02:43:58 PM »

Insinuating that my motivation for participating in the discussion is any less honorable than yours does not change my view (lines such as: "smart alec", "go ahead and nitpick", "armchair sensor designer").

Sorry for the harsh tone. To be honest, the "armchair designer" comment was not directed at you at all, because I know that you are among the few on LL and DPR who are usually technically correct and have a sound background. I had some other people in mind  Grin but let's not get into that. While I did feel that you and Aku were needlessly picking on a technical point that perhaps does not deserve so much attention, let's bury the hatchet and move on.

Since I'm very interested in this subject, could you please explain what is wrong?

Yes, the diagrams are usually drawn in a way that gives an incorrect notion of distance in the color space, IMHO.
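A small numerical illustration of that point (the chromaticity pairs are picked arbitrarily, and u'v' is itself only approximately uniform, so this is indicative rather than exact): two pairs of points separated by the same distance on the xy diagram land at very different separations in the CIE 1976 u'v' uniform-chromaticity diagram.

```python
import math

def xy_to_uv(x, y):
    """CIE 1976 u'v' uniform-chromaticity coordinates from CIE xy."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Two pairs separated by the SAME distance (0.05) on the xy diagram:
green_pair = ((0.30, 0.60), (0.30, 0.65))   # greenish region
blue_pair  = ((0.15, 0.06), (0.15, 0.11))   # bluish region

d_green = dist(xy_to_uv(*green_pair[0]), xy_to_uv(*green_pair[1]))
d_blue  = dist(xy_to_uv(*blue_pair[0]),  xy_to_uv(*blue_pair[1]))
print(f"u'v' distance, green pair: {d_green:.3f}")
print(f"u'v' distance, blue pair:  {d_blue:.3f}")
```

Equal steps on the xy diagram come out several times larger apart in u'v' in the blue region than in the green, which is the misleading notion of distance being alluded to (MacAdam's ellipse measurements make the same point experimentally).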
« Last Edit: May 15, 2012, 02:54:27 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
FranciscoDisilvestro
Sr. Member
Posts: 549
« Reply #99 on: May 16, 2012, 03:35:13 AM »


Yes, the diagrams are usually drawn in a way that gives an incorrect notion of distance in the color space, IMHO.

OK, thanks - they are not perceptually uniform, and one can draw wrong conclusions. Anyway, this is too OT, so I'll leave it.
