Playdo
Newbie
Offline
Posts: 35


« on: October 18, 2009, 07:47:54 PM » 

I have a few questions about RAW in relation to colour.
Photos from different cameras have a different colour character to them. To my eyes, Canon's colours are softer, more natural and cooler, whereas Nikon's colours tend to be louder, more vibrant and colourful. From what I gather, RAW is a totally unedited (and editable) form of capture. If we take these two brands (Nikon and Canon), their RAW presets differ slightly from one another, so there will be a difference in colour between them at default settings (I believe).
If the colour in a photo is comprised only of hue and saturation values (is that right?), then each of these cameras' colours can be matched identically through RAW. Am I right here?
If so, how difficult is it to match another camera's colour through RAW, and what is the best way of doing this?







Jonathan Wienke


« Reply #1 on: October 18, 2009, 09:40:48 PM » 

What you're seeing is nothing more than differences in the default color profiles each RAW converter is using. If you use Adobe Camera Raw to convert your RAW files, you can adjust the conversion color profiles to match the colors from multiple cameras, even if they are different brands and models. Google "DNG Profile Editor" for more information.







ejmartin


« Reply #2 on: October 19, 2009, 07:25:52 AM » 

There is one effect that can differ substantially between camera brands, and that is the spectral response of the color filters on the sensor: the strength with which they filter different wavelengths of light in each of the three color bands. This leads to quantitative differences in the RAW data. Tools such as the profile editor that Jonathan mentioned can do a reasonable job of matching the colors from different cameras, but ultimately the results will differ to some extent, simply because there are not enough degrees of freedom in the color model to adjust all wavelengths individually to compensate for the differences in the color filters.


« Last Edit: October 19, 2009, 07:26:35 AM by ejmartin »





Playdo


« Reply #3 on: October 19, 2009, 08:24:25 AM » 

Thanks for the advice. I'm asking because I'd like to get Canon colours from a Nikon. There's something about Canon landscape photographs that I find really apparent, and I don't know if it's purely down to the colour profile, but I don't think I've seen a landscape photo from a Nikon camera with that same look. There's a natural earthiness about it, e.g. http://www.flickr.com/photos/jimgoldstein/...57594358475765/ I just wanted to know if it is possible without it being too difficult. If anyone has the same opinion and can link me to Nikon landscapes with the same look, that would be good.







walter.sk


« Reply #4 on: October 19, 2009, 08:58:28 AM » 

Quote from: Playdo
Thanks for the advice. I'm asking because I'd like to get Canon colours from a Nikon. There's something about Canon landscape photographs that I find really apparent, and I don't know if it's purely down to the colour profile, but I don't think I've seen a landscape photo from a Nikon camera with that same look. There's a natural earthiness about it, e.g. http://www.flickr.com/photos/jimgoldstein/...57594358475765/ I just wanted to know if it is possible without it being too difficult.

While that is a beautiful image, it is an sRGB image processed for the web, and it is so reduced in quality from the original that I would find it impossible to judge what camera it came from. I also doubt that the maker simply converted it from RAW using default settings, with no tweaking and no post-processing. There are so many variables involved in the final production of an image, starting with the light at the source, that the very small differences in profile from one good camera to another have very little influence on the final results.

It is also not possible to know exactly what you are perceiving about the specific differences, but if you have a good Nikon and Nikon glass you could try various tweaks to produce the effect you are looking for. If you really want to be obsessive about it, rent a Canon (which model?) and a Canon lens (lenses also vary in contrast and color rendition depending on the model), shoot the same scene from the same viewpoint, and then compare images. You would then be able to identify, if not quantify, what the differences are. I have Canon equipment, and sometimes the grass looks greener on the other side of the street (Nikon). But a selective increase of saturation easily takes care of that.







Jonathan Wienke


« Reply #5 on: October 19, 2009, 01:45:58 PM » 

Quote from: ejmartin
Tools such as the profile editor that Jonathan mentioned can do a reasonable job of matching the colors from different cameras, but ultimately the results will differ to some extent, simply because there are not enough degrees of freedom in the color model to adjust all wavelengths individually to compensate for the differences in the color filters.

True, but in practice the residual differences between cameras can be made small enough that you won't be able to identify the source camera merely by looking at a single image, and often you won't even be able to tell consistently when comparing images side by side. You can also tweak profiles to make your own color "look" if no manufacturer's default palette is quite what you want.







madmanchan


« Reply #6 on: October 19, 2009, 01:50:52 PM » 

Quote from: Playdo
I'm asking because I'd like to get Canon colours from a Nikon.

If you are using Camera Raw / Lightroom, try Tutorial 3 of the DNG Profile Editor documentation: http://labs.adobe.com/wiki/index.php/DNG_P...l_base_profiles
As Emil said, there are technical limitations due to the different color filters used in the different cameras. So in practice you'll be able to match the results for some set of colors, under some set of lighting conditions, but not others.







Playdo


« Reply #7 on: October 19, 2009, 04:30:51 PM » 

OK, thanks for all the info.







joofa


« Reply #8 on: October 19, 2009, 06:25:26 PM » 

Quote from: madmanchan
As Emil said, there are technical limitations due to the different color filters used in the different cameras. So in practice you'll be able to match the results for some set of colors, under some set of lighting conditions, but not others.

In my understanding, the mismatch for certain "sets" arises from insufficient coverage by the training samples (upon which the model parameters are estimated) in spanning the color space in a meaningful way. At the end of the day, it is a problem of finding a transformation from one set of data to another. (Simple examples are 3x3 matrices to convert from one color space to another; more complex would entail a LUT; even more complex would entail ..., the list goes on.) However, it appears Emil is not considering the full scope of the complexity here when he made the comment about increasing the dimensionality of color, mentioning the "freedoms in the color model". Without explicit structure on such a higher-dimensional space, estimation/regression/optimization of the unknown parameters becomes very complex to solve. Indeed, one already has a problem satisfying all "sets" in a lower-dimensional color space (say 3D), as you mention; if one goes on to a much higher-dimensional space, does the number of training samples scale appropriately?


« Last Edit: October 20, 2009, 10:27:20 AM by joofa »





Jonathan Wienke


« Reply #9 on: October 19, 2009, 06:53:11 PM » 

All that aside, my experience color-matching several different cameras (2 Canon and 5 Nikon DSLRs, an Olympus digicam, and a Hasselblad medium format) worked pretty well. You could see slight differences side by side, but nothing pronounced enough to distinguish any particular camera by its color rendering.







ejmartin


« Reply #10 on: October 19, 2009, 07:35:32 PM » 

I've not researched color theory as thoroughly as I have other aspects of image processing, so I'm happy to learn.
My take on it is that if we divide the visible spectrum into a set of sufficiently narrow bands (say 5-10 nm wide, so that the spectrum is approximately constant in each band), then via such binning of a continuous function space we get a finite-dimensional approximation to spectral response. The response of sensors is linear, so we can treat the space as a linear vector space (of about 30 or more dimensions). Different CFA spectral responses define three-dimensional subspaces of this high-dimensional vector space, in that the spectral response functions of the three "primary colors" in the CFA define basis vectors spanning a 3D subspace of the larger vector space of spectral data, and any response of the sensor is a linear combination of those three primaries (let's ignore demosaic issues and concentrate on patches of uniform tonality). The CIE standard observer defines a "preferred" linear subspace corresponding to human vision, while the CFA spectral responses of any given camera define color primary response subspaces for that camera. For instance, metamerism is a degeneracy whereby different colors in the larger 30-dimensional spectral response space project onto the same values of the three primaries of the color model. Change the spectral response of the primaries and the space of metamers changes; this will be one difference between cameras with different CFA filters.
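This binned-spectrum picture can be sketched numerically. The toy model below uses 30 spectral bins and three invented Gaussian filter curves (not real camera data); it shows that any spectrum shifted by a vector in the null space of the 3x30 response matrix is a metamer, producing identical camera responses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 30 spectral bins, 3 CFA response curves as rows of R.
# The filter curves are invented Gaussians, purely for illustration.
n_bins = 30
wavelengths = np.linspace(400, 700, n_bins)

def gaussian_band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

R = np.stack([gaussian_band(600, 40),   # "red" filter
              gaussian_band(540, 40),   # "green" filter
              gaussian_band(460, 40)])  # "blue" filter

# Any vector in the null space of R changes the spectrum without
# changing the camera response at all.
_, _, Vt = np.linalg.svd(R)
null_vec = Vt[3]  # orthogonal to all three filter rows

spectrum_a = np.abs(rng.normal(size=n_bins))
spectrum_b = spectrum_a + 0.5 * null_vec  # a metamer of spectrum_a

resp_a = R @ spectrum_a
resp_b = R @ spectrum_b

print(np.allclose(resp_a, resp_b))          # identical responses
print(np.allclose(spectrum_a, spectrum_b))  # but different spectra
```

Two cameras with different filter rows have different null spaces, hence different sets of metamers, which is the degeneracy described above.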
If one has a training set of data, say a color chart under some chosen set of illuminants, one can try to find a linear transformation between the 3D vector spaces that minimizes a mean square error over the training set. It is my understanding that this is what the DNG Profile Editor attempts to do (at least, that is what previous profiling programs such as Thomas Fors' and others tried to do). The degree to which CFA responses of different cameras can be made to match will depend on the degree to which their 3D primary color subspaces "overlap" in the larger 30+ dimensional space of spectral response, in a measure given by populating the space with training data. If the 3D subspaces are "far apart" in the larger space, then it will be hard to match their outputs.
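That least-squares fit can also be sketched with toy numbers. Here two cameras are simulated with slightly different invented filter curves, "photograph" the same 24 random training patches, and a 3x3 matrix mapping camera A's responses to camera B's is fit by least squares; the residual is small but nonzero, since the two 3D subspaces do not coincide:

```python
import numpy as np

rng = np.random.default_rng(1)

n_bins, n_patches = 30, 24  # e.g. a 24-patch color chart
wl = np.linspace(400, 700, n_bins)

def filters(centers, width):
    """Stack three Gaussian filter curves (toy stand-ins for CFA data)."""
    return np.stack([np.exp(-0.5 * ((wl - c) / width) ** 2) for c in centers])

R_a = filters([600, 540, 460], 40)  # camera A CFA (invented)
R_b = filters([610, 530, 465], 35)  # camera B CFA, slightly different

spectra = np.abs(rng.normal(size=(n_bins, n_patches)))  # training patches
A = R_a @ spectra  # 3 x 24 responses from camera A
B = R_b @ spectra  # 3 x 24 responses from camera B

# Least-squares fit of the 3x3 matrix M such that M @ A ~= B.
M, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
M = M.T

residual = np.linalg.norm(M @ A - B) / np.linalg.norm(B)
print(f"relative residual: {residual:.4f}")
```

The residual never reaches zero for distinct filter sets, which is the numerical face of "not enough degrees of freedom in the color model".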
I am puzzled as to what the advantage of a LUT is, given that the responses at the RAW level are linear. Can anyone enlighten me? Does it have to do with nonlinearities induced by the optimization measure imposed by the training set (i.e., it may be more important to match certain colors in the space than others; memory colors, etc.)?
If the above viewpoint is not sufficiently sophisticated, I hope I am able to grasp a more accurate one, so please feel free to improve my understanding.


« Last Edit: October 19, 2009, 07:39:09 PM by ejmartin »





joofa


« Reply #11 on: October 20, 2009, 10:39:49 AM » 

In addition to the known problems of higher dimensionality, the sparsity of data points, which becomes more pronounced in higher dimensions, also starts causing problems. Any clustering operation needs to come with a metric, a notion of distance. However, L_p distances for p>2 start to degenerate at dimensionalities as low as around 10. Even L_1 and L_2 (Euclidean) distances, and min. mean square error as their variant, have problems. Control engineers have known this for a long time. Database people have come to realize that nearest-neighbor index searches in relatively high dimensions degenerate in performance to a brute-force search over all items!
However, that does not mean the sky has fallen. With the right structure in the higher dimensionality, things can be worked out. Among other things, dimensionality reduction is always an option; though starting directly with 3D color data in the first place is itself a form of dimensionality reduction, already in place.
The bottom line is that as far as statistical reasoning goes, the problem statement is simple. Here is Canon color data and there is Nikon color data; just find a transformation that converts one to the other. We may not need to bother about the frequency response of the CFA; the problem can be treated as agnostic to that. But, as mentioned before, as the dimensionality of the points increases, finding that transformation becomes more elusive, as the notion of distance, the sparsity of the space, and the right number of training samples start causing problems.
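The distance-degeneracy point is easy to demonstrate with a toy experiment (uniform random points, an assumption of this sketch, not anything from the thread): the contrast between the farthest and nearest neighbor of a query point collapses as dimension grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def distance_contrast(dim, n_points=500):
    """Ratio of farthest to nearest Euclidean distance from one query
    point to a cloud of uniform random points in [0,1]^dim."""
    points = rng.random((n_points, dim))
    query = rng.random(dim)
    d = np.linalg.norm(points - query, axis=1)
    return d.max() / d.min()

low = distance_contrast(3)     # large contrast in 3D
high = distance_contrast(100)  # contrast collapses toward 1 in 100D
print(low, high)
```

In low dimensions the nearest neighbor is much closer than the farthest; in high dimensions all distances concentrate around the same value, which is why least-squares fitting and nearest-neighbor reasoning get harder there.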


« Last Edit: October 20, 2009, 11:08:38 AM by joofa »





sandymc


« Reply #12 on: October 20, 2009, 11:21:04 AM » 

Quote from: ejmartin
If one has a training set of data, say a color chart under some chosen set of illuminants, one can try to find a linear transformation between the 3D vector spaces that minimizes a mean square error over the training set. It is my understanding that this is what the DNG Profile Editor attempts to do (at least, that is what previous profiling programs such as Thomas Fors' and others tried to do).

I'm afraid it's more complicated than that. The DNG Profile Editor actually doesn't touch the color matrix. By modern standards, the color matrix is a crude way to set color; it only has 9 variables to adjust. What modern Adobe camera profiles have in them are HueSat maps: basically, they can take any point in a three-dimensional color space and map it onto any other point. What the Profile Editor does is generate a two-dimensional map based on the colors in the calibration chart. The built-in camera profiles in Lightroom or ACR use full three-D maps. If you want more technical details, I've written about profiles here: http://chromasoft.blogspot.com/2009/02/adobehuetwist.html
Sandy
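To make the idea of a hue/sat table concrete, here is a minimal sketch in the spirit of such maps; it is not Adobe's implementation, and the grid, its resolution, and all entry values are invented. Each cell of a small (hue, saturation) grid stores a hue shift in degrees and a saturation scale, looked up with bilinear interpolation:

```python
import numpy as np

# Toy 2D hue/sat map: grid entries are (hue shift in degrees, sat scale).
# All values here are invented for illustration.
hue_divs, sat_divs = 6, 4
hue_shift = np.zeros((hue_divs, sat_divs))
sat_scale = np.ones((hue_divs, sat_divs))
hue_shift[1, :] = 5.0   # nudge one hue band by +5 degrees
sat_scale[:, 3] = 1.10  # boost highly saturated colors by 10%

def apply_map(hue_deg, sat):
    """Bilinear lookup of (new_hue, new_sat) for one HSV-style color."""
    h = (hue_deg % 360) / 360 * hue_divs
    s = np.clip(sat, 0, 1) * (sat_divs - 1)
    h0, s0 = int(h) % hue_divs, int(s)
    h1, s1 = (h0 + 1) % hue_divs, min(s0 + 1, sat_divs - 1)
    fh, fs = h - int(h), s - s0
    def bilerp(t):
        return ((1 - fh) * (1 - fs) * t[h0, s0] + fh * (1 - fs) * t[h1, s0]
                + (1 - fh) * fs * t[h0, s1] + fh * fs * t[h1, s1])
    return hue_deg + bilerp(hue_shift), sat * bilerp(sat_scale)

print(apply_map(90.0, 0.5))  # lands partly in the shifted hue band
```

The point of such a table over a 3x3 matrix is exactly the one Sandy makes: every region of color space gets its own local adjustment, rather than nine global coefficients.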







joofa


« Reply #13 on: October 20, 2009, 02:10:41 PM » 

Quote from: sandymc
I'm afraid it's more complicated than that. The DNG Profile Editor actually doesn't touch the color matrix. By modern standards, the color matrix is a crude way to set color; it only has 9 variables to adjust. What modern Adobe camera profiles have in them are HueSat maps: basically, they can take any point in a three-dimensional color space and map it onto any other point. What the Profile Editor does is generate a two-dimensional map based on the colors in the calibration chart. The built-in camera profiles in Lightroom or ACR use full three-D maps.

Technically, what Emil is saying regarding the min. mean square derived linear transformation is still correct if the underlying variables are jointly Gaussian and the transformation is not restricted to a 3x3 matrix. What you are describing seems like a 3D/2D LUT to me. I think the way interpolation in such LUTs is typically implemented can be reduced to a matrix multiplication (of more than 9 elements) by defining a certain ordering on the vertices of the LUT, and matrix multiplication is a linear transformation.
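The "LUT interpolation is locally a linear operation" claim can be illustrated with trilinear interpolation in a small 3D LUT (the table values below are random toy data): within each cell, the output is a convex combination of the 8 surrounding vertex entries, i.e. a linear function of the stored vertex values.

```python
import numpy as np

rng = np.random.default_rng(7)

# A small 3D LUT: maps RGB in [0,1]^3 to an output RGB via an N^3 grid.
N = 5
lut = rng.random((N, N, N, 3))  # random toy table

def trilinear(lut, rgb):
    """Trilinear interpolation: a convex combination of 8 grid vertices."""
    g = np.clip(np.asarray(rgb, dtype=float), 0, 1) * (N - 1)
    i0 = np.minimum(g.astype(int), N - 2)  # lower corner of the cell
    f = g - i0                             # fractional position in the cell
    out = np.zeros(3)
    for corner in range(8):
        idx = [(corner >> k) & 1 for k in range(3)]
        # Weights are nonnegative and sum to 1 over the 8 corners.
        w = np.prod([f[k] if idx[k] else 1 - f[k] for k in range(3)])
        out += w * lut[i0[0] + idx[0], i0[1] + idx[1], i0[2] + idx[2]]
    return out

x = trilinear(lut, (0.3, 0.7, 0.5))
print(x.shape)  # one interpolated RGB triple
```

Because the 8 weights depend only on position within the cell, the map is linear in the vertex values, matching the observation that it can be written as a (large, sparse) matrix multiplication.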







bjanes


« Reply #14 on: October 20, 2009, 02:38:32 PM » 

Quote from: ejmartin
I've not researched color theory as thoroughly as I have other aspects of image processing, so I'm happy to learn.
Different CFA spectral responses define three-dimensional subspaces of this high-dimensional vector space, in that the spectral response functions of the three "primary colors" in the CFA define basis vectors spanning a 3D subspace of the larger vector space of spectral data, and any response of the sensor is a linear combination of those three primaries (let's ignore demosaic issues and concentrate on patches of uniform tonality). The CIE standard observer defines a "preferred" linear subspace corresponding to human vision, while the CFA spectral responses of any given camera define color primary response subspaces for that camera. For instance, metamerism is a degeneracy whereby different colors in the larger 30-dimensional spectral response space project onto the same values of the three primaries of the color model. Change the spectral response of the primaries and the space of metamers changes; this will be one difference between cameras with different CFA filters.

As far as I can discern, Emil has a good grasp of the situation, and the key criterion that color filters in cameras do not meet is the linearity condition, which is explained somewhat further on the DxO web site: "The underlying physics is that a sensor can distinguish exactly the same colors as the average human eye, if and only if the spectral responses of the sensor can be obtained by a linear combination of the eye cone responses. These conditions are called Luther-Ives conditions, and in practice, these never occur. There are objects that a sensor sees as having certain colors, while the eye sees the same objects differently, and the reverse is also true." The color filters need not have the same response as the eye, but if the linearity condition could be met, it would be possible for a camera to have an exact tristimulus match to the eye. A 3x3 matrix could then convert from the camera response to the tristimulus XYZ of human vision.
Because the condition is not met exactly, an exact match is not possible and the matrix coefficients are obtained by a least squares fit. Eric Chan is expert in this area and perhaps he could comment.
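The Luther-Ives condition is straightforward to test numerically: project each camera curve onto the span of the observer's color matching functions and look at the residual. Both sets of curves below are invented Gaussian stand-ins, not real measured CMFs or CFA data; the point is only the mechanics of the check.

```python
import numpy as np

# Toy check of the Luther-Ives condition: are the camera's three CFA
# response curves a linear combination of the observer's color matching
# functions? All curves here are invented Gaussians for illustration.
wl = np.linspace(400, 700, 31)

def g(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

cmf = np.stack([g(600, 45), g(550, 45), g(450, 45)])     # stand-in CMFs
camera = np.stack([g(605, 38), g(545, 42), g(455, 40)])  # stand-in CFA

# Least-squares projection of each camera curve onto the span of the CMFs.
coeffs, *_ = np.linalg.lstsq(cmf.T, camera.T, rcond=None)
residual = np.linalg.norm(cmf.T @ coeffs - camera.T) / np.linalg.norm(camera)

print(f"relative residual: {residual:.3f}")
```

A residual of exactly zero would mean the condition holds and a single 3x3 matrix gives an exact tristimulus match; any nonzero residual means some colors can only be matched approximately, which is the situation described in the quote.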







ejmartin


« Reply #15 on: October 20, 2009, 02:46:24 PM » 

Quote from: joofa
In addition to the known problems of higher dimensionality, the sparsity of data points, which becomes more pronounced in higher dimensions, also starts causing problems. Any clustering operation needs to come with a metric, a notion of distance. However, L_p distances for p>2 start to degenerate at dimensionalities as low as around 10. Even L_1 and L_2 (Euclidean) distances, and min. mean square error as their variant, have problems.
The bottom line is that as far as statistical reasoning goes, the problem statement is simple. Here is Canon color data and there is Nikon color data; just find a transformation that converts one to the other.

So if I understand correctly: if I take the 3D subspace spanned by the primaries of camera A, and I take the 4096^3 points (12-bit RAW data), perhaps after a gamma transformation, and I want the closest point in 16-bit sRGB space (or, for that matter, in the 3D subspace spanned by the primaries of camera B), then I may not want a linear transformation but a LUT that tells me where to map each of those 4096^3 points. A linear map is not likely to generate the closest point in the other space for a given point in the source space; rather, it will do so only on average over all the 4096^3 points (or over the number of points one chooses to match on a color chart, say).


« Last Edit: October 20, 2009, 02:53:17 PM by ejmartin »





joofa


« Reply #16 on: October 20, 2009, 02:46:29 PM » 

Quote from: bjanes
matrix coefficients are obtained by a least squares fit.

BJanes, please read my earlier response regarding higher dimensionality. When you say "least squares fit", there is always a notion of distance (of course there is; after all, it is the error distance that you are trying to minimize in the least squares sense), and I mentioned before that the "goodness of fit" depends upon the dimensionality of the space taken together with the size of the training population and any detrimental effects of sparsity.







ejmartin


« Reply #17 on: October 20, 2009, 04:37:01 PM » 

Quote from: joofa
BJanes, please read my earlier response regarding higher dimensionality. When you say "least squares fit", there is always a notion of distance, and I mentioned before that the "goodness of fit" depends upon the dimensionality of the space taken together with the size of the training population and any detrimental effects of sparsity.

I'm still not sure it's as bad as you're making it out to be. Any given triplet of primaries spans a 3D subspace, so the maps between color spaces are maps between low- (three-) dimensional hyperplanes in some higher-dimensional ambient space. The near-degeneracy of distance in such a situation is much less severe than between groups of points spread uniformly in the higher-dimensional space.


« Last Edit: October 20, 2009, 04:37:37 PM by ejmartin »





bjanes


« Reply #18 on: October 20, 2009, 04:51:32 PM » 

Quote from: joofa
BJanes, please read my earlier response regarding higher dimensionality. When you say "least squares fit", there is always a notion of distance, and I mentioned before that the "goodness of fit" depends upon the dimensionality of the space taken together with the size of the training population and any detrimental effects of sparsity.

Joofa, despite all of your qualifications, the details of which I don't understand, the correction matrix is still obtained by a least squares fit, and this can be done by the user with Imatest.







joofa


« Reply #19 on: October 20, 2009, 04:59:21 PM » 

Quote from: ejmartin
I'm still not sure it's as bad as you're making it out to be. Any given triplet of primaries spans a 3D subspace, so the maps between color spaces are maps between low- (three-) dimensional hyperplanes in some higher-dimensional ambient space. The near-degeneracy of distance in such a situation is much less severe than between groups of points spread uniformly in the higher-dimensional space.

No, I'm not saying that the situation is always bad. In my very first response I mentioned that the number of training samples should scale with the increase in dimensionality. However, for relatively high dimensions a huge number is required, and of course a single Macbeth chart is not going to help here. If you are able to satisfy this requirement, things may seem to settle down, at least in a certain way. However, there is a catch: the number of model parameters relative to the training samples should not be too large either, otherwise it will also cause errors in the color transformation between the two sets of data, for a different reason. Determining the right number of model parameters is a widely recognized problem area.

Quote from: ejmartin
A linear map is not likely to generate the closest point in the other space for a given point in the source space,

In general, if the data are jointly Gaussian, then the best candidate in the min. mean square error sense is obtained by a linear operation. Furthermore, if the error is assumed to have equal variances in all dimensions, then the linear operator boils down to the usual linear orthogonal projection operator. If the data are not jointly Gaussian, then the solution is the nonlinear conditional expectation.


« Last Edit: November 02, 2009, 12:49:38 AM by joofa »





