joofa


« Reply #20 on: October 20, 2009, 05:08:48 PM » 

"Joofa, despite all of your qualifications, the details of which I don't understand, the correction matrix is still obtained by a least squares fit and this can be done by the user with Imatest."

Of course Imatest can do it, since the dimensionality is low enough (3 or 4, whereas Emil was suggesting around 30) and a single Macbeth chart or the newer 150-patch digital chart may suffice as training samples. I only glanced quickly over the Imatest approach, so I could be wrong, but I see a potential problem with it. It would appear they perform the least squares solution in the RGB domain, which is not perceptually uniform. They should have used a more perceptually uniform space, say Lab, and a better distance metric, say CIEDE2000. However, one can't take a direct least squares solution in the Lab domain, as the coefficients we want are in the RGB domain. The optimization problem is therefore harder: the coefficients to be found live in the RGB domain, but the distance calculation happens in, say, Lab space. BTW, we played around with Imatest extensively a while ago when we were trying to determine optimum white balance coefficients and found its matrix to be not good enough, so we opted for a more extensive optimization along the lines described above.
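A minimal sketch of this kind of fit: the matrix coefficients live in RGB, but the objective is a distance measured in Lab. Everything here is an illustrative assumption, not Imatest's (or our) actual method: the patch values are synthetic, the RGB-to-Lab conversion uses sRGB/D65 primaries with no gamma, and the simpler 1976 ΔE stands in for CIEDE2000 for brevity.

```python
import numpy as np
from scipy.optimize import minimize

# A linear-RGB -> XYZ matrix (sRGB/D65 primaries), used only to reach Lab.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
WHITE = M_RGB2XYZ @ np.ones(3)          # white point of this RGB space

def rgb_to_lab(rgb):
    """Linear RGB -> CIE Lab (no gamma step; enough for a toy objective)."""
    xyz = (rgb @ M_RGB2XYZ.T) / WHITE
    f = np.where(xyz > (6/29)**3, np.cbrt(xyz), xyz/(3*(6/29)**2) + 4/29)
    return np.stack([116*f[..., 1] - 16,
                     500*(f[..., 0] - f[..., 1]),
                     200*(f[..., 1] - f[..., 2])], axis=-1)

rng = np.random.default_rng(0)
target = rng.uniform(0.05, 0.95, size=(24, 3))     # 24 "reference" patches
M_true = np.array([[0.90, 0.08, 0.02],             # hypothetical camera crosstalk
                   [0.05, 0.85, 0.10],
                   [0.02, 0.12, 0.86]])
camera = target @ np.linalg.inv(M_true).T          # what the camera "recorded"

def mean_delta_e(m_flat):
    """Coefficients are a 3x3 RGB matrix; the error is mean Lab distance."""
    corrected = np.clip(camera @ m_flat.reshape(3, 3).T, 0, None)
    return np.mean(np.linalg.norm(rgb_to_lab(corrected) - rgb_to_lab(target),
                                  axis=1))

res = minimize(mean_delta_e, np.eye(3).ravel(), method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000,
                        "xatol": 1e-10, "fatol": 1e-12})
print("mean dE: identity =", mean_delta_e(np.eye(3).ravel()),
      " fitted =", res.fun)
```

Because the objective is nonlinear once it is measured in Lab, there is no closed-form solution as there would be for an RGB-domain least squares fit; a general-purpose optimizer (here Nelder-Mead) has to walk the nine matrix coefficients.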


« Last Edit: October 20, 2009, 05:16:25 PM by joofa »





madmanchan


« Reply #21 on: October 20, 2009, 08:11:19 PM » 

The problem with trying to make one camera match another camera is that sensors record post-projected RGB triplets, rather than a spectrum per pixel. This makes color matching impossible for some sets of colors and illuminants, regardless of how advanced the profile technology is.
Consider this scenario. Suppose you photograph two color patches (e.g., different shades of red). With camera A, the two patches are recorded as distinct colors. With camera B, the two patches are recorded as the same color (i.e., the spectral radiances are projected, through the color filters of camera B, to the same RGB triplet values). There is nothing one can do, via a profile, to make camera B's results the same as camera A's. Whatever the profile does to the first color patch, it will also do to the second color patch, since they have the same starting values. The key distinguishing characteristic between the two patches (i.e., the difference in spectral radiance) has been lost. It was never recorded by the camera.
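Eric's scenario can be demonstrated numerically. The Gaussian filter curves below are toy stand-ins, not measured camera data: a perturbation built in the null space of camera B's three response rows changes the spectrum, and changes camera A's recorded triplet, without changing camera B's triplet at all.

```python
import numpy as np

wl = np.linspace(400, 700, 31)            # visible range in 10 nm bands

def gauss(center, width):                 # toy filter response, not a real camera
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

cam_a = np.stack([gauss(450, 30), gauss(540, 35), gauss(610, 30)])  # 3 x 31
cam_b = np.stack([gauss(460, 45), gauss(550, 50), gauss(600, 45)])

s1 = 0.5 + 0.3 * np.sin(wl / 40.0)        # an arbitrary smooth spectrum
# A perturbation orthogonal to all three of camera B's response rows:
v = gauss(500, 12) - gauss(620, 12)
d = v - cam_b.T @ np.linalg.solve(cam_b @ cam_b.T, cam_b @ v)
s2 = s1 + 0.5 * d                         # a metamer *for camera B*

print("camera B difference:", np.abs(cam_b @ s1 - cam_b @ s2).max())  # ~0
print("camera A difference:", np.abs(cam_a @ s1 - cam_a @ s2).max())
```

Since s1 and s2 produce identical triplets on camera B, no profile applied to B's output can separate them, yet camera A records them as different colors.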
In general, profile builders optimize color profiles using various metrics and various training data. Even though a matrix may seem relatively crude (with only six degrees of freedom, once the white point is chosen), you might be surprised how far a good matrix can take you.







ejmartin


« Reply #22 on: October 20, 2009, 08:55:49 PM » 

"Of course Imatest can do it, since the dimensionality is low enough (3 or 4, whereas Emil was suggesting around 30) and a single Macbeth chart or the newer 150-patch digital chart may suffice as training samples."

I suspect you might be misinterpreting what I wrote (apologies if not). The space of spectral responses is a function space and thus formally infinite-dimensional. Of course, in the real world, if spectra don't have sharp features then one can get by with a discretization of that space, integrating spectral responses over narrow windows; this approximates the infinite-dimensional function space as a finite-dimensional linear vector space. This is in fact the way spectral response data are often reported (e.g., in CIE tables), as responses over 5 or 10 nm bands. Thus the CIE standard observer response functions are three vectors in a 30+ or 60+ dimensional space (however many 5 or 10 nm bands there are in the visible spectrum). These vectors span a 3d hyperplane in a 30+ dimensional linear space. Any other set of spectral response functions spans an in general distinct 3d hyperplane. Any individual color patch on a ColorChecker chart is a point in the high-dimensional space; it has distinct orthogonal projections P_a and P_b onto the hyperplanes spanned by the primaries of different color models (say a camera color response and the CIE XYZ responses). I was then imagining that the linear map one might use is the 3x3 matrix that minimizes the distance d(P_a, P_b) between the two projected points, averaged over a training set consisting of the color patches one wants to match as closely as possible. EDIT: This last is a bit garbled; I meant that one could choose a matrix transform M(P_a) of the RGB coordinates of model A into the RGB coordinates of model B that minimizes d(M(P_a), P_b) over a training set.
Note that this is a distance within model B's 3d hyperplane, so it does not need the full ambient high-dimensional space. Now, nonlinearities might be introduced if the distance function is nonlinear, such as the CIE ΔE metric, since that doesn't preserve the linearity of the spectral responses; that might make a nonlinear map between hyperplanes more appropriate, such that one considers small patches of the two hyperplanes in question (of the two color spaces) and does a piecewise linear map on the patches. I imagine that would lead to some sort of lookup table. But note that in what I was suggesting, the map from color space A to color space B is locally a map between 3-dimensional spaces; the high-dimensional space is only an intermediate construct to visualize what is happening with the spectral response.
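Emil's construction can be sketched with stand-in data: random non-negative vectors in a 31-band space replace real response curves and patch spectra (an illustrative assumption throughout). Each patch spectrum is projected into the two 3-vector coordinate systems, and an ordinary least squares fit then finds the 3x3 matrix M minimizing the average squared d(M(P_a), P_b) over the training set.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands = 31                                  # 10 nm bands across the visible range

# Stand-in spectral response rows (3 x 31) for two color models:
A = np.abs(rng.normal(size=(3, n_bands)))     # "camera" model a
B = np.abs(rng.normal(size=(3, n_bands)))     # "standard observer" model b

patches = np.abs(rng.normal(size=(50, n_bands)))   # toy training-patch spectra

P_a = patches @ A.T     # each patch's 3-vector coordinates in model a
P_b = patches @ B.T     # ... and its coordinates in model b

# Closed-form least squares: find X (3x3) with P_a @ X ~ P_b, so M = X.T
X, *_ = np.linalg.lstsq(P_a, P_b, rcond=None)
M = X.T

rel_err = np.linalg.norm(P_a @ M.T - P_b) / np.linalg.norm(P_b)
print("relative residual of the best 3x3 map:", rel_err)
```

The residual is generally nonzero precisely because the two triples of response vectors span different 3d hyperplanes of the band space. With a Euclidean metric the fit is closed-form; a ΔE-style metric would instead require the iterative optimization discussed earlier in the thread.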


« Last Edit: October 21, 2009, 10:20:12 AM by ejmartin »


emil



joofa


« Reply #23 on: October 20, 2009, 10:13:44 PM » 

"I suspect you might be misinterpreting what I wrote (apologies if not)."

Well, I always misinterpret anything ;) BTW, I must make my position clear: despite the technical difficulties I have enumerated for higher-dimensional data, we are totally for higher-dimensional imaging, i.e., hyperspectral imaging. Please see the following image: http://www.djjoofa.com/data/images/hyperspectral.jpg

The link shows 3 actual images out of the 22 images I took at 10 nm increments from 470 nm to 680 nm, with a bandwidth of 10 nm each, and the corresponding reconstruction of the Macbeth chart from the actual hyperspectral data. We believe hyperspectral imaging will make big contributions in certain fields. These are not synthetic images; they are actual images acquired using a sophisticated frequency-selective imaging device.

"Any individual color patch on a ColorChecker chart is a point in the high-dimensional space; it has distinct orthogonal projections P_a and P_b onto the hyperplanes spanned by the primaries of different color models (say a camera color response and the CIE XYZ responses). I was then imagining that the linear map one might use is the 3x3 matrix that minimizes the distance d(P_a, P_b) between the two projected points, averaged over a training set consisting of the color patches one wants to match as closely as possible."

Yes, that is true for deterministic data. Correlation-ergodic stochastic data approximates a similar response in the limit, and, as I mentioned before, if such data is jointly Gaussian then such linearity results, and in the case of equal-variance errors, orthogonal projection results. If the data is distributed differently, then for minimum mean square error the solution lies in the conditional expectation.

"But note that in what I was suggesting, the map from color space A to color space B is locally a map between 3-dimensional spaces; the high-dimensional space is only an intermediate construct to visualize what is happening with the spectral response."
Thanks for the clarification. I see your point.


« Last Edit: October 20, 2009, 10:24:38 PM by joofa »





Ray


« Reply #24 on: October 20, 2009, 10:37:46 PM » 

"The underlying physics is that a sensor can distinguish exactly the same colors as the average human eye, if and only if the spectral responses of the sensor can be obtained by a linear combination of the eye cone responses. These conditions are called LutherIves conditions, and in practice, these never occur. There are objects that a sensor sees as having certain colors, while the eye sees the same objects differently, and the reverse is also true." Bill, There's something in such satements that cause a niggling worry, that something is not quite right. You talk about what the eye sees as being a clearly defined, consistent and uniform concept, as though it's a manufactured product according to strict quality control. This is clearly not true. What the eye sees in this context, is surely an artificial construct. Every human is unique, including their eyesight capabilities. It is claimed that some females are actually tetrachromats. Color blindness is a wellknown condition that affects some people in an obvious way, but must surely affect others in a less obvious way to such a lesser extent that they are not even aware they are slightly color blind. If a manufactured lens varied in quality as much as human eyesight, there'd be an outrage at the poor quality control.







ejmartin


« Reply #25 on: October 20, 2009, 11:04:39 PM » 

"If a manufactured lens varied in quality as much as human eyesight, there'd be an outrage at the poor quality control."

As I get older, I keep wondering if I can upgrade to a newer model, or at least a refurb.




emil



Ray


« Reply #26 on: October 20, 2009, 11:20:55 PM » 

"As I get older, I keep wondering if I can upgrade to a newer model, or at least a refurb."

Are you ready for a cataract operation? My partner had one recently. She no longer needs spectacles, although they are still useful, because an artificial lens does not have the flexibility of a natural lens. But definitely an improvement. How her spectral response has changed, I wouldn't have a clue.







bjanes


« Reply #27 on: October 21, 2009, 09:39:26 AM » 

"There's something in such statements that causes a niggling worry that something is not quite right. You talk about what the eye sees as being a clearly defined, consistent and uniform concept, as though it were a manufactured product made under strict quality control. This is clearly not true. What the eye sees in this context is surely an artificial construct. Every human is unique, including in their eyesight capabilities. It is claimed that some females are actually tetrachromats. Color blindness is a well-known condition that affects some people in an obvious way, but it must surely affect others to such a lesser extent that they are not even aware they are slightly color blind."

Ray, an excellent point, but I think the DXO paper was referring to the CIE standard observer. The molecular basis of color vision is being elucidated, and the Online Mendelian Inheritance in Man (OMIM) web site has some excellent if technically dense articles. The Wikipedia article on color blindness is more easily understood. In summary, the color response of the red, green and blue cones is determined by the spectral absorption characteristics of the photopigments, and the amino acid sequences of these pigments can be studied. Even among individuals with "normal" color vision there are variations in the red and blue pigments, and there are 2 types of normal color vision according to the 'green point,' i.e., the point at which the subject sees pure green, and 2 types according to the 'blue point.' Males can be G1/B1, G1/B2, or G2/B2; females can be of 6 genotypes (OMIM 303900). The red and green genes are on the X chromosome, of which males have only one copy. Females have 2 copies, of which only one is active in any given cell (the other copy is randomly inactivated). This random inactivation can result in tetrachromacy in females. Furthermore, classical Mendelian genetics assumes that there is only one genetic locus for each gene pair; it is now known that multiple copies of the green gene locus can exist in one person, further complicating matters.

I'm not certain that every human is unique in their color vision, but there is a great deal of variability, and a practical color model for photography cannot take all of this variation into account.

References: OMIM 303900, OMIM 303800, Wikipedia.







bjanes


« Reply #28 on: October 21, 2009, 09:44:16 AM » 

"Are you ready for a cataract operation? My partner had one recently. She no longer needs spectacles, although they are still useful, because an artificial lens does not have the flexibility of a natural lens. But definitely an improvement. How her spectral response has changed, I wouldn't have a clue."

Ray, I may be old fashioned, but I am relieved to learn that your partner is a she.







bjanes


« Reply #29 on: October 21, 2009, 09:49:56 AM » 

"Of course Imatest can do it, since the dimensionality is low enough (3 or 4, whereas Emil was suggesting around 30) and a single Macbeth chart or the newer 150-patch digital chart may suffice as training samples. I only glanced quickly over the Imatest approach, so I could be wrong, but I see a potential problem with it. It would appear they perform the least squares solution in the RGB domain, which is not perceptually uniform. They should have used a more perceptually uniform space, say Lab, and a better distance metric, say CIEDE2000. However, one can't take a direct least squares solution in the Lab domain, as the coefficients we want are in the RGB domain. The optimization problem is therefore harder: the coefficients to be found live in the RGB domain, but the distance calculation happens in, say, Lab space. BTW, we played around with Imatest extensively a while ago when we were trying to determine optimum white balance coefficients and found its matrix to be not good enough, so we opted for a more extensive optimization along the lines described above."

Joofa, I did not understand that Emil was suggesting 30 or more dimensions; the math and statistical analysis used by you two is beyond my comprehension, at least without a great deal of study (which I am not prepared to do at this time). With respect to Imatest, I do think Norman is using CIE Lab, since the optimization parameters involve the L* parameter.







joofa


« Reply #30 on: October 21, 2009, 11:02:17 AM » 

"With respect to Imatest, I do think Norman is using CIE Lab, since the optimization parameters involve the L* parameter."

Hi BJanes, you are right. I went back and checked the Imatest page again at the link you sent earlier. It would appear that their methodology is right, viz., to move the coefficients of the matrix in RGB space but calculate the distance in Lab space. Sorry for my misunderstanding.


« Last Edit: October 21, 2009, 11:09:56 AM by joofa »





bjanes


« Reply #31 on: October 21, 2009, 11:31:45 AM » 

"In general, profile builders optimize color profiles using various metrics and various training data. Even though a matrix may seem relatively crude (with only six degrees of freedom, once the white point is chosen), you might be surprised how far a good matrix can take you."

Eric, correct me if I am wrong, but I was under the impression that ACR uses a matrix for the basic color matching function and the DNG profiles then superimpose a LUT type of profile. Intuitively, this would make sense: get as good a match as possible with a matrix and then refine it with a LUT type profile.







madmanchan


« Reply #32 on: October 21, 2009, 08:52:32 PM » 

Yes, you are right, Bill. The matrix does quite a lot of the work, and then the LUTs can be used to deal with problem colors, or to nail specific colors.

The closer the sensor's color filter responses are to being within a linear transformation of the human LMS cone responses, the more a simple matrix suffices to make highly accurate scene-referred profiles. Unfortunately, all of the technical mumbo jumbo simply means that a simple profile is capable of reproducing the scene colorimetry well; that by itself does not guarantee a positive reception from the photographer. The next step is to account for human preference (i.e., visual preference), and this is something that the film designers spent years working on.
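The matrix-then-LUT pipeline can be sketched as follows. The matrix, the hue anchors, and the shift/scale values below are all invented for illustration (this is not Adobe's actual profile format): the 3x3 matrix does the bulk of the correction, and a small interpolated hue/saturation table nudges specific colors afterward.

```python
import colorsys
import numpy as np

# Hypothetical camera -> working-space matrix: does most of the work.
M = np.array([[ 1.80, -0.60, -0.20],
              [-0.30,  1.50, -0.20],
              [ 0.00, -0.40,  1.40]])

# A tiny "LUT": at a few hue anchors, shift the hue and scale the saturation.
hue_grid  = np.linspace(0.0, 1.0, 7)
hue_shift = np.array([0.000, 0.010, -0.020, 0.000, 0.015, 0.000, 0.000])
sat_scale = np.array([1.00, 1.05, 0.95, 1.00, 1.10, 1.00, 1.00])

def apply_profile(rgb):
    """Matrix stage first, then a LUT stage interpolated in HSV."""
    r, g, b = np.clip(M @ np.asarray(rgb, dtype=float), 0.0, 1.0)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + np.interp(h, hue_grid, hue_shift)) % 1.0    # nudge problem hues
    s = min(1.0, s * np.interp(h, hue_grid, sat_scale))  # nail specific chromas
    return colorsys.hsv_to_rgb(h, s, v)

print(apply_profile([0.40, 0.30, 0.20]))
```

The design point is that the LUT only has to encode small residual corrections, so a coarse table with smooth interpolation suffices once the matrix has done the heavy lifting.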







Ray


« Reply #33 on: October 22, 2009, 06:40:40 PM » 

"Ray, I may be old fashioned, but I am relieved to learn that your partner is a she."

Have no fear of me, Bill. If I have any regrets in life, it would be that I never managed to acquire a harem of a thousand beautiful women, sire a thousand children, and have 10,000 grandchildren. I guess I just didn't quite have what it takes. Too bad!







joofa


« Reply #34 on: October 23, 2009, 10:47:29 AM » 

"Even though a matrix may seem relatively crude ..."

The apparent simplicity of a linear model such as a matrix may turn out to be more useful in higher dimensions (and Emil wants 30 ;)) than in 3-D color, because of the possible correlations among the color data dimensions. Linearity can help reduce certain requirements on training samples from exponential to polynomial (e.g., for 3-D data such as RGB, the data model can be taken to be linear combinations of R, G, B, R^2, G^2, B^2, RG, GB, RB, R^3, ..., etc.; note that the problem is still linear, as it is linear in the coefficients), and may even further reduce to linear in the dimensionality if powers such as R^2, RG, etc., are not considered. It might happen that higher-dimensional data is distributed with a lower intrinsic dimensionality, and then linear models may have even more appeal. (In many domains the SNR gain due to the number of samples, N, and the number of parameters to be estimated, p, is given as N/p.)

As far as LUTs are concerned, which may perhaps be embedded in profiles, what is the complexity of constructing a 30-dimensional LUT? Note that the partitions of the LUT (hypercubes if it has a regular structure) may not be of equal volume, so as to better cover the data correlation, intrinsic dimensionality and sparsity of the 30-dim color space.

All of this discussion regarding higher-dimensional color is not academic hair-splitting. The problem is real, since hyper/multispectral imaging is beginning to show its power in fields such as medical imaging, material inspection, remote sensing, etc., and it will spill over into the digital cinematography and photography domains as well. What are companies such as Adobe and Apple doing to prepare their products for such color data?
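The "linear in coefficients" point can be made concrete. Below, a degree-2 polynomial expansion of synthetic RGB data (all values invented for illustration) is fitted by ordinary least squares exactly as a plain matrix would be; the design matrix simply has 9 columns instead of 3.

```python
import numpy as np

def poly_features(rgb):
    """R, G, B, R^2, G^2, B^2, RG, GB, RB: nonlinear in the data,
    but the fitted model remains linear in its coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r, g, b, r*r, g*g, b*b, r*g, g*b, r*b], axis=-1)

rng = np.random.default_rng(3)
camera = rng.uniform(0.0, 1.0, size=(150, 3))        # toy measured patches
M_toy = np.array([[0.90, 0.10, 0.00],
                  [0.05, 0.90, 0.05],
                  [0.00, 0.15, 0.85]])
# Toy targets with a mild quadratic term a plain 3x3 matrix cannot capture:
target = np.clip(camera @ M_toy.T + 0.05 * camera**2, 0.0, 1.0)

X9 = poly_features(camera)                           # 150 x 9 design matrix
W9, *_ = np.linalg.lstsq(X9, target, rcond=None)     # polynomial fit
W3, *_ = np.linalg.lstsq(camera, target, rcond=None) # plain 3x3 matrix fit

err9 = np.linalg.norm(X9 @ W9 - target)
err3 = np.linalg.norm(camera @ W3 - target)
print("3x3 matrix residual:", err3, " degree-2 residual:", err9)
```

Because the 3-term model is nested inside the 9-term model, the polynomial residual can never be worse on the training data; whether the extra terms generalize is a separate question, which is where the training-sample-count concern above comes in.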


« Last Edit: October 23, 2009, 11:09:58 AM by joofa »





crames


« Reply #35 on: October 23, 2009, 06:09:24 PM » 

For those like me who are trying to understand the advanced color topics in this thread, here is one of the better introductions, available for free on the web: Jon Hardeberg's thesis. It's also available in print from Amazon (but the color illustrations are poorly reproduced in b/w). Happy reading! Cliff




Cliff



madmanchan


« Reply #36 on: October 23, 2009, 06:16:16 PM » 

joofa, all I meant to say in my post was that some more recent sensors are not too far off from satisfying the Luther-Ives condition, and hence good scene-referred profiles can be obtained with a simple 3x3 matrix.







joofa


« Reply #37 on: October 24, 2009, 12:54:46 AM » 

"joofa, all I meant to say in my post was that some more recent sensors are not too far off from satisfying the Luther-Ives condition, and hence good scene-referred profiles can be obtained with a simple 3x3 matrix."

Hi Eric, true. A sufficient condition to satisfy Luther-Ives is that the human visual response be contained within the sensor's spectral response space. It would appear to me that going to higher dimensions increases the likelihood of that happening, though of course it is not guaranteed. If the interpretation of Luther-Ives is that some colors distinct to human vision lose that discrimination because of the projection into the sensor space, then there may also be distinct colors that a sensor can distinguish but that project to the same color in the human visual response space.

Interestingly, if the basis vectors that span the camera subspace are not "rigid and fixed" in their directions and can be molded like a spline, then even a 1-D mapping similar to a "space-filling curve", which maps close higher-dimensional points to nearby points in 1-D, would be an interesting candidate; among other things, it may be relatively immune to the metamerism that projection-based approaches on rigid hypersurfaces, which define the human/sensor color response, can't escape. But, of course, I am not aware of any camera that projects color vectors onto a space-filling curve!


« Last Edit: October 24, 2009, 01:04:44 AM by joofa »





ErikKaffehr


« Reply #38 on: November 03, 2009, 12:23:50 AM » 

"For those like me who are trying to understand the advanced color topics in this thread, here is one of the better introductions, available for free on the web: Jon Hardeberg's thesis. It's also available in print from Amazon (but the color illustrations are poorly reproduced in b/w). Happy reading! Cliff"

Thanks! Interesting! Erik







