Pages: [1] 2 »
Author Topic: Spectro Software Feature Request  (Read 8476 times)
Jonathan Wienke · Sr. Member · Posts: 5759
« on: May 12, 2009, 08:45:13 AM »

I had an idea the other day that I think would improve the quality of printer profiles, but could be applied to other devices as well. The general idea is to specify a desired level of profile accuracy, and then after printing and measuring a standard patch chart, dynamically create, print, and measure a second patch chart. Both the number of patches and the RGB/CMYK values of the patches in the second chart would be calculated to fill in the widest gaps in the measurements from the first patch chart, so that the final profile (made from measurements of both charts combined) has a high probability of meeting the specified accuracy standard. The procedure would work as follows:

1. Start out by printing and measuring a standard TC918 patch chart.

2. Do an analysis of the measured patches to identify areas where the measured patches have the greatest separation--measurements that have the greatest distance in 3D color space to their nearest neighbors.

3. Generate and print the second patch chart with RGB/CMYK values selected to fill in the widest measurement gaps, so that the specified profile accuracy (say 1 DeltaE) can be achieved with a >90% probability.

4. After waiting for the patch charts to dry properly, re-measure both the first and second chart, and generate the final profile from both sets of measurements combined.

The second patch chart would automatically focus on areas where the device is the most non-linear (where a small change in the RGB or CMYK values sent to the device causes a large change in the measured color value). This could be helpful when profiling monitors as well as printers; monitors often have significant non-linearities in the deep shadows and highlights. When profiling a monitor, the second set of RGB values could be calculated and measured immediately: a second phase would start after a brief pause to analyze the first set of measurements and select the supplementary RGB values to measure.
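As a rough sketch of the gap analysis in step 2 (all names here are invented for illustration; measurements are assumed to arrive as Lab triples paired with the device RGB values that produced them):

```python
import numpy as np

def widest_gaps(device_rgb, measured_lab, n_new):
    """Find the patches sitting in the sparsest regions of the measured
    Lab cloud and propose new device values midway between each such
    patch and its nearest measured neighbor."""
    lab = np.asarray(measured_lab, dtype=float)
    rgb = np.asarray(device_rgb, dtype=float)
    # Pairwise Euclidean distances in Lab (a stand-in for Delta E 1976)
    d = np.linalg.norm(lab[:, None, :] - lab[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = d.min(axis=1)                    # nearest-neighbor distance per patch
    worst = np.argsort(nearest)[::-1][:n_new]  # patches with the widest gaps
    partners = d[worst].argmin(axis=1)         # their nearest neighbors
    # Second chart: device values midway between each sparse pair
    return (rgb[worst] + rgb[partners]) / 2.0
```

A real implementation would also clamp the proposed values to the device gamut and deduplicate them, but the nearest-neighbor distance is the core of the idea.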

X-Rite? ColorEyes? Bueller? Bueller?

Scott Martin · Sr. Member · Posts: 1312
« Reply #1 on: May 12, 2009, 10:56:53 AM »

Have you ever used the 2-step profiling process in Monaco Profiler or ColorMunki? It's fantastic, especially for devices that aren't greyscale balanced. MP has had this available for almost a decade now, but it's surprising how often it's overlooked.

Jonathan Wienke · Sr. Member · Posts: 5759
« Reply #2 on: May 12, 2009, 11:25:36 AM »

I've only used EyeOne and Spyder.

Ethan_Hansen · Full Member · Posts: 114
« Reply #3 on: May 12, 2009, 12:05:46 PM »

Jonathan,

Scott beat me to it. What you describe is similar to the two-step process used by several applications, most recently the ColorMunki. A preliminary target is used to characterize the printer, a second to build the profile. The first such application I am aware of was Franz and Dan's ProfileCity suite from 1999 or 2000. MonacoProfiler came quickly thereafter, followed by Argyll, ColorVision (now Datacolor) and Fuji. The goal of all these products was to create a final profiling target whose colors were equally distributed in the printer's color space. This makes the calculations easier and, as you note, can improve profile accuracy. Basing the second (and possibly third, fourth, etc.) target(s) on a desired output accuracy is an intriguing idea, however.

Of the profiling applications on the market, most require that the target colors be either distributed evenly numerically (i.e. the initial target values are evenly spaced in the n-dimensional color space being profiled) or distributed fairly evenly in the printer color space (the output value spacing). ProfileMaker and Argyll allow some additional flexibility in color spacing, but it is distinctly possible to overload areas and throw the calculations off.

We took a slightly different approach for the profiles we build. Our base RGB target contains around 1100 color patches, some defining an evenly spaced color cube, the rest distributed across areas that printers typically have trouble in. We also include secondary fields of several hundred other patches that may or may not be needed. We then throw computational horsepower at the problem. We have the luxury of only using our code in-house, so there is no need to support systems other than servers stuffed with CPUs and memory. That lets us implement algorithms that would take far longer than most people are willing to wait.

We find that there are two primary areas of concern from a profile perspective and one for measurement. The profiling problems stem from using the profile as a printer linearization tool. As you noted, if the color patches are spaced too far apart, interpolation errors can reduce profile accuracy. This is worst if the printer output has a sudden discontinuity occurring in a measurement gap. Inkjets tend to be the worst offenders here, although we see the same behavior with some silver-halide photo lab printers. A good calculation can detect that such a discontinuity exists, but the only solution is a profiling target with better coverage.
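As a hedged illustration of that detection step (the function and the 3x-median threshold are invented for this sketch, not the calculation Ethan describes), a discontinuity along a measured ramp can be flagged by comparing each step against the typical step size:

```python
import math

def find_discontinuities(ramp_lab, threshold=3.0):
    """Flag positions along a measured ramp where the Lab jump between
    consecutive patches greatly exceeds the median step size, hinting
    at a device discontinuity hiding in a measurement gap."""
    steps = [math.dist(a, b) for a, b in zip(ramp_lab, ramp_lab[1:])]
    median = sorted(steps)[len(steps) // 2]
    return [i for i, s in enumerate(steps) if s > threshold * median]
```

A flagged index says only where extra patches are needed; as noted above, the fix is a target with better coverage there, not a cleverer interpolation.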

A different issue arises in color areas where the printer or driver collapses a wide range of input values into a narrow output band (Epson, here's looking at you). Print output tends to "smudge," for lack of a better term. Good target resolution helps here as well, although the profile calculations play a more significant role.

Finally, there is the matter of the instrument used for the measurement. Hand-held i1 or ColorMunki scans are insufficiently accurate to make these sorts of profiling heroics worthwhile. The instrument drift between calibration cycles is too large, and measurement resolution and accuracy are not up to the task. The ColorMunki is small, fast, and convenient, but its absolute measurement error is over 3x higher than other instruments (Spectrolino, iCColor, and the i1iSis) provide. Short-term repeatability, which governs the consistency of patch-to-patch measurements, is over 5x worse. You are not going to make a profile with 1 Delta E accuracy using an instrument that, at best, is +/-0.6 dE and on some colors well north of +/-1 dE. The ColorMunki's spectral range also falls short at both the red and blue ends. The i1 fares better in this regard, but even when mounted on the i1iO table, accuracy suffers compared to more proficient instrumentation.

Jonathan Wienke · Sr. Member · Posts: 5759
« Reply #4 on: May 12, 2009, 12:34:04 PM »

Quote from: Ethan_Hansen
Jonathan,

Scott beat me to it. What you describe is similar to the two-step process used by several applications, most recently the ColorMunki. A preliminary target is used to characterize the printer, a second to build the profile. The first such application I am aware of was Franz and Dan's ProfileCity suite from 1999 or 2000. MonacoProfiler came quickly therafter, followed by Argyll, ColorVision (now Datacolor) and Fuji. The goal of all these products was to create a final profiling target whose colors were equally distributed in the printer's color space. This makes the calculations easier and, as you note, can improve profile accuracy. Basing the second - and possibly third, fourth, etc. -- target(s) on a desired output accuracy is an intriguing idea, however.

Oh well, at least I have confirmed the idea is sound. I was just throwing out the 1 DeltaE accuracy figure as an example. Whatever figure is chosen should obviously be selected based on the accuracy of the measuring device.

digitaldog · Sr. Member · Posts: 9190
« Reply #5 on: May 12, 2009, 12:58:09 PM »

Quote from: Jonathan Wienke
Oh well, at least I have confirmed the idea is sound. I was just throwing out the 1 DeltaE accuracy figure as an example. Whatever figure is chosen should obviously be selected based on the accuracy of the measuring device.

Yes, it's useful in the context of the device and two sample measurements.

Keep in mind, all this talk of deltaE is pretty simplistic in terms of imagery. It's a useful metric for defining differences between two solid colors. Far more useful is the new metric Henry Wilhelm has designed, called iStar. It is based on, and weighted by, how we view images. You can find more info on his web site. I've been using it a great deal for a project, sending data to Henry to churn up. Point is, don't put more weight on deltaE values than what they provide: a very simple measure of difference for a single set of colors. Yes, having thousands, and seeing the average deltaE, standard deviation etc., tells us a lot about the accuracy of an output device against input values. But it doesn't tell us anything useful about images, where the deltas are weighted the same when in fact we view differences quite differently depending on where in color space they fall.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

Ethan_Hansen · Full Member · Posts: 114
« Reply #6 on: May 12, 2009, 02:33:29 PM »

Quote from: digitaldog
Yes, its useful in the context of the device and two sample measurements.

Keep in mind, all this talk of deltaE is pretty simplistic in terms of imagery. Its a useful metric for defining differences in two solid colors. Far more useful is the new metric Henry Wilhelm has designed call iStar. It is based, weighted on how we view images. You can find more info on his web site. I've been using it a great deal for a project, sending data to Henry to churn up. Point is, don't put a lot of weight on detlaE values over what they provide; a very simple measure of difference of a single set of colors. Yes, having thousands, and seeing the average deltaE, standard dev etc tells us a lot about certain accuracy of output device and input values. But it doesn't tell us anything useful about images, where the delta's are weighted the same when in fact, we view differences quite differently depending on where in color space they fall.

I don't follow you. Wilhelm's iStar is designed to give a numerical value to his permanence ratings, allowing a metric to track how a particular print changes over time. The iStar formula breaks out hue, tone, and chroma as well as adds in a print contrast term. Hue, tone, and chroma are lumped together in Delta E (dE) formulas. Print contrast is valuable if one is tracking an individual print, but has little to do with evaluating profile accuracy. The iStar software has helpful reporting features, indicating whether tone or hue dominates the color shift as well as which colors shift the most. Similar information can be extracted for dE evaluations -- format your data sensibly and view in MeasureTool or ColorThink.

Saying that Delta E values provide "a very simple measure of difference of a single set of colors" is misleading. They provide a very precise measure of difference over any number of colors. The original Delta E metric is indeed simple: one dE difference means two colors are separated by one CIELAB value; e.g. L*a*b* (0, 0, 0) to (1, 0, 0). Each dE unit was further intended to be the minimum visible color difference. CIELAB was designed to be a uniform color space, where the spacing between each value was visually identical. Unfortunately, that did not work out perfectly; CIELAB is not a perceptually uniform color space. A dE (original 1976 version) of 3 is not visible in saturated yellow, while your eyes can distinguish midtone grays falling half a dE apart.
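For concreteness, the original metric is just Euclidean distance in CIELAB, which is a one-liner (function name invented for the sketch):

```python
import math

def delta_e_1976(lab1, lab2):
    """Original Delta E: Euclidean distance between two L*a*b* values."""
    return math.dist(lab1, lab2)
```

So `delta_e_1976((0, 0, 0), (1, 0, 0))` gives 1.0, the example above. The later dE-1994 and dE-2000 formulas keep the same inputs but re-weight lightness, chroma, and hue differences to better match perception.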

In the years since the original 1976 specification there have been a number of revisions to the dE equation. The latest is dE-2000, which does a commendable job of treating color shifts equally across the visible range. This is exactly what we want when evaluating profile accuracy.

A valid point that Wilhelm makes about iStar is that dE breaks down for large shifts. My copy of Wyszecki and Stiles states that above 10 dE-1994 units, human vision no longer sees color differences linearly (the book was published in 2000, so no word on the limits of dE-2000; both models are primarily concerned with smaller shifts). Wilhelm's fading work requires tracking color shifts of larger magnitudes than this, so a modified metric is essential.

Getting back to printer profiling, having models that are only accurate up to 10 dE is no problem. A 10 Delta E-2000 color shift is not subtle. Make an inkjet print on glossy paper using a matte paper profile and driver settings and you will have an average dE-2000 of around 8. I think we want better profiles than that. Evaluating profiles based on dE-2000 is useful, valuable, and informative. This should be checked both for the profile itself (push numbers through to self-check the output and input sides) as well as on actual prints. The first check is whether the dE distribution is Gaussian. If so, mean dE and standard deviation are all one needs. If the errors are non-Gaussian, this points to a profile construction problem or, going back to Jonathan's original post, a target that did not provide sufficient resolution into the behavior of the printer. A quick check will highlight the problem areas and a new target can be generated. Damn. Numbers actually are useful.
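A minimal sketch of that self-check (the 1.0 skewness cutoff is an arbitrary illustration, not a standard threshold):

```python
import statistics

def de_report(delta_es):
    """Summarize a profile-verification run: mean and standard deviation,
    plus a crude skewness test to flag non-Gaussian error distributions
    that suggest a profile construction or target coverage problem."""
    mean = statistics.fmean(delta_es)
    sd = statistics.stdev(delta_es)
    skew = sum(((x - mean) / sd) ** 3 for x in delta_es) / len(delta_es)
    return {"mean": mean, "stdev": sd, "non_gaussian": abs(skew) > 1.0}
```

A symmetric error cloud passes; a long one-sided tail of big errors trips the flag and points at the color regions the target under-sampled.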

digitaldog · Sr. Member · Posts: 9190
« Reply #7 on: May 12, 2009, 03:00:19 PM »

Quote
I don't follow you. Wilhelm's iStar is designed to give a numerical value to his permanence ratings, allowing a metric to track how a particular print changes over time.

That's initially why Henry started it, but it's evolved far more. He's working on using it, as I said, as a metric for differences in how we perceive images.

Quote
Saying that Delta E values provide "a very simple measure of difference of a single set of colors" is misleading. They provide a very precise measure of difference over any number of colors.

Solid colors, yes, but it tells us little if anything about color appearance, nor is the color model it's based on, with its various warts, a color appearance model. It's useful, no question. But there are a slew of areas where it's not to be used as a definitive point of reference.

Probably one of the best posts ever on these issues is from the late Bruce Fraser. Note too that the work Henry is doing is an attempt to provide better data for imagery:


CIE colorimetry is a reliable tool for predicting whether two given solid
colors will match when viewed in very precisely defined conditions. It is
not, and was never intended to be, a tool for predicting how those two
colors will actually appear to the observer. Rather, the express design goal
for CIELab was to provide a color space for the specification of color
differences. Anyone who has really compared color appearances under
controlled viewing conditions with delta-e values will tell you that it
works better in some areas of hue space than others.

When we deal with imagery, rather than matching plastics or paint swatches,
a whole host of perceptual phenomena come into play that Lab simply ignores.

Simultaneous contrast, for example, is a cluster of phenomena that cause the
same color under the same illuminant to appear differently depending on the
background color against which it is viewed. When we're working with
color-critical imagery like fashion or cosmetics, we have to address this
phenomenon if we want the image to produce the desired result -- a sale --
and Lab can't help us with that.

Lab assumes that hue and luminance can be treated separately -- it assumes
that hue can be specified by a wavelength of monochromatic light -- but
numerous experimental results indicate that this is not the case. For
example, Purdy's 1931 experiments indicate that to match the hue of 650nm
monochromatic light at a given luminance would require a 620nm light at
one-tenth of that luminance. Lab can't help us with that. (This phenomenon
is known as the Bezold-Brucke effect.)

Lab assumes that hue and chroma can be treated separately, but again,
numerous experimental results indicate that our perception of hue varies
with color purity. Mixing white light with a monochromatic light does not
produce a constant hue, but Lab assumes it does -- this is particularly
noticeable in Lab modelling of blues, and is the source of the blue-purple
shift.

There are a whole slew of other perceptual effects that Lab ignores, but
that those of us who work with imagery have to grapple with every day if our
work is to produce the desired results.

So while Lab is useful for predicting the degree to which two sets of
tristimulus values will match under very precisely defined conditions that
never occur in natural images, it is not anywhere close to being an adequate
model of human color perception. It works reasonably well as a reference
space for colorimetrically defining device spaces, but as a space for image
editing, it has some important shortcomings.


Quote
In the years since the original 1931 specification there have been a number of revisions to the dE equation. The latest is dE-2000. which does a commendable job of treating color shifts equally across the visible range. This is exactly what we want when evaluating profile accuracy.

Better, not exactly what we want. And you know why all the updates....


Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

MHMG · Sr. Member · Posts: 623
« Reply #8 on: May 12, 2009, 06:06:59 PM »

Quote from: Ethan_Hansen
I don't follow you. Wilhelm's iStar is designed to give a numerical value to his permanence ratings, allowing a metric to track how a particular print changes over time.

The I* metric was developed initially with image permanence research in mind, but it has the capability to score on a percentile basis any loss of color and tonal accuracy between a reference image and a comparison image.  While the reference may be a printed image before fading, and the comparison may be the same printed image after fading or other aging tests, the reference/comparison pair can just as easily be an original print versus a copy or proof print. Or the reference can be the color data of a source digital file while the comparison image is measured as the actual colors and tones posted on an electronic display. Thus, the I* metric has strong applicability to initial image quality studies as well as image permanence studies.

Full chroma weighting and near neighbor image contrast evaluation are two necessary features of a good color and tonal accuracy metric when evaluating real image content rather than just two side-by-side colors which have no contextual significance other than the fact that they are slightly different colors. Delta E and its various flavors (delta E 2000, etc.) possess neither capability. The I* metric is not just about tracking large changes between a reference image and its comparison image that overwhelm the perceptual scaling significance of delta E models. If, for example, spatial information content in an image is recorded by only small L* value differences between neighboring image elements (a shallow tonal gradient), and those subtle L* variations expand or contract when the reference image is reproduced, then the information content contained in that area of the image is also compromised (ah, those darned highlight, midtone, and shadow details).   For an introduction to the I* metric that explains these considerations in greater detail, please visit the documents page of the AaI&A website and download the article entitled "An Introduction to the I* Metric".

http://www.aardenburg-imaging.com/documents.html


Best regards,

Mark
http://www.aardenburg-imaging.com

digitaldog · Sr. Member · Posts: 9190
« Reply #9 on: May 12, 2009, 07:15:55 PM »

Quote from: MHMG
For an introduction to the I* metric that explains these considerations in greater detail, please visit the documents page of the AaI&A website and download the article entitled "An Introduction to the I* Metric".
http://www.aardenburg-imaging.com/documents.html

Excellent, thanks for the link. I started reading and have to say my initial post ("Far more useful is the new metric Henry Wilhelm has designed, called iStar") deserves to be corrected: it should of course be credited to none other than Mark H. McCormick-Goodhart. My apologies.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

MHMG · Sr. Member · Posts: 623
« Reply #10 on: May 13, 2009, 06:47:59 AM »

Quote from: digitaldog
Excellent, thanks for the link, I started reading and have to say my initial post:" Far more useful is the new metric Henry Wilhelm has designed call iStar" deserves to be corrected and of course credited to none other than Mark H. McCormick-Goodhart. My apologies.

Andrew,

You are not the one who should apologize. It is perfectly understandable that you would have assumed the I* metric is 100% WIR because the WIR website presents it that way. Giving proper credit where credit is due is a simple code of ethics to follow. It doesn't cost a penny, but speaks volumes about personal integrity.

Like CIELAB itself, the mathematics of the I* metric are open source, i.e. non-proprietary, and were published in November of 2004. The functions are relatively easy to program into a spreadsheet application like Excel, which is how I use the I* metric in my digital print research at AaI&A. That the I* metric is superior to color difference models when evaluating changes in image color and tone is trivial to demonstrate with a few simple image manipulations in Lab mode in Photoshop, although I imagine the color science community may want to hold out for a more formal psychophysical study before giving I* its full blessing. Dedicated software applications like WIR-iStar that can calculate I*color and I*tone scores are a logical step forward in the evolution of the I* metric. I think the I* metric could be a really useful color tool as a plug-in for Photoshop or added to X-Rite's MeasureTool, etc., and I'd be interested in collaborating with any programmers who'd like to explore other possibilities. I'm truly pleased to see that the I* metric is finally beginning to gather some interest in the imaging community.

best regards,

Mark
http://www.aardenburg-imaging.com

Davi Arzika · Newbie · Posts: 2
« Reply #11 on: May 13, 2009, 06:56:02 AM »

Quote from: Onsight
Have you ever used the 2 step profiling process in Monaco Profiler or ColorMunki? It's fantastic, especially for devices that aren't greyscale balanced. MP has had this available for almost a decade now but it's surprising how it's overlooked.

Scott, what do you mean by the 2-step profiling process in Monaco Profiler? Can you elaborate more about this? I have Monaco Profiler but have never tried this process. Shame on me :-)

digitaldog · Sr. Member · Posts: 9190
« Reply #12 on: May 13, 2009, 08:13:33 AM »

Quote from: Davi Arzika
Scott, What do you mean about 2 step profiling process in Monaco Profiler? Can you elaborate more about this? i have Monaco Profiler but never try this process. Shame on me :-)

There's a linearization-then-profile option, but this is not the same as the iteration process in ColorMunki. My NDAs don't let me go into more detail, but the chief color scientist at X-Rite has provided pretty impressive functionality in this newer process, using a tiny number of patches in comparison to other packages. As for CM profile quality, while we could look at numbers alone, I can say that profiles I built in early testing were on par, at least when ink hit paper, with an iSis and a 2700-plus-patch target in ProfileMaker Pro.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

digitaldog · Sr. Member · Posts: 9190
« Reply #13 on: May 13, 2009, 08:14:55 AM »

Quote from: MHMG
Dedicated software applications like WIR-iStar that can calculate I*color and I*tone scores are a logical step forward in the evolution of the I* metric. I think the I* metric could be a really useful color tool as a plug-in for Photoshop or added to Xrite's Measure tool, etc., and I'd be interested in collaborating with any programmers who'd like to explore other possibilities.

Let's talk off-list. I think that would also be useful, and I've got the ears (along with Karl Lang, who says hello) of both teams who could implement this.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

Scott Martin · Sr. Member · Posts: 1312
« Reply #14 on: May 13, 2009, 09:17:42 AM »

Quote from: Davi Arzika
Scott, What do you mean about 2 step profiling process in Monaco Profiler? Can you elaborate more about this?

MP allows one to print a small "linearization" target, measure it, and it then creates a customized profiling target that takes into consideration the non-linear (or native tonal response curve) nature of that device.

Most MP users use ColorPort for target generation and measurement and simply take the final measurements into MP for profile generation. So, in CP, generate, print and measure a "Linearization 40 step" target. Then create a new target with one of the "XRite Profile" options that are compatible with MP, check the Customize button, check the Linearization button, and select the measurement file for the Lin target you've just measured. When you hit OK you'll see the generated target color patches change into customized target colors.
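Conceptually, the customization MP performs can be sketched like this (invented names, not MP's actual algorithm; assumes a monotonically increasing measured gray ramp):

```python
def linearized_levels(device_in, measured_l, n):
    """Given a measured gray ramp (device values and the L* each produced),
    return n device values whose *measured* L* values are evenly spaced,
    so the profiling target samples the device's output space uniformly."""
    lo_l, hi_l = measured_l[0], measured_l[-1]
    targets = [lo_l + (hi_l - lo_l) * i / (n - 1) for i in range(n)]
    out, j = [], 0
    for t in targets:
        # advance to the measurement pair that brackets the target L*
        while j < len(measured_l) - 2 and measured_l[j + 1] < t:
            j += 1
        a, b = measured_l[j], measured_l[j + 1]
        frac = 0.0 if b == a else (t - a) / (b - a)
        out.append(device_in[j] + frac * (device_in[j + 1] - device_in[j]))
    return out
```

A device that crushes a wide range of inputs into a narrow tonal band ends up with its new patches concentrated in that band, which is the point of the two-step process.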

This two-step process is unnecessary for well-behaved, grey-balanced processes like today's modern inkjet printers and papers. The process is best suited for processes that aren't as well grey balanced, like silver halide printers and solvent and UV-curable printers that use RIPs that don't perform a very good linearization prior to profiling.

I was told I was the only person outside of X-Rite who provided product testing and feedback when the ColorMunki was being produced in early 2007. The Munki's new engine represents an evolution of thought and process from MP's approach. We focused carefully on silver halide and dye-sub photo printers when testing the new engine, as they are some of the most challenging and poorly behaved devices to try to profile. The Munki proved to work fantastically well, superior to all other packages in this context, and it did so with remarkably few patches. A little more control over Perceptual rendering (like what MP has) would be nice, as would the ability to use high-end spectros.

Anyway, I hope this helps. Davi, what printer are you considering using this process with? It's important not to go through the extra hassle if you don't have to. In fact, 2 step profiling with MP can perform worse than a 1 step profile in the wrong context.


Ethan_Hansen · Full Member · Posts: 114
« Reply #15 on: May 13, 2009, 02:23:12 PM »

Quote from: Onsight
Anyway, I hope this helps. Davi, what printer are you considering using this process with? It's important not to go through the extra hassle if you don't have to. In fact, 2 step profiling with MP can perform worse than a 1 step profile in the wrong context.

This is an important point. MP's linearization works, with some caveats, for CMYK profiling. If you are building RGB profiles, the multiple steps required in the translation can indeed backfire. Generally speaking, the more an RGB-driven printer needs linearization, the less well MP's lin-tool works. Also, if the paper substrate contains significant OBA levels, fugeddaboutit.

Scott Martin · Sr. Member · Posts: 1312
« Reply #16 on: May 13, 2009, 03:09:13 PM »

Quote from: Ethan_Hansen
Generally speaking, the more an RGB-driven printer needs linearization, the less well MP's lin-tool works.
Hmm, I'm not sure if I understand you. If a device is well linearized then 1 step profiling works great. If a device doesn't have a good linearization then a 2 step profile will really help overcome a poor linearization. Perhaps that is what you are saying too.

Quote from: Ethan_Hansen
Also, if the paper substrate contains significant OBA levels, fugeddaboutit.
UV-filtered devices solve any problems with OBAs, and surprisingly, actually solve problems with some papers that don't.

tived · Sr. Member · Posts: 691
« Reply #17 on: May 13, 2009, 10:51:37 PM »

Quote from: Onsight
Hmm, I'm not sure if I understand you. If a device is well linearized then 1 step profiling works great. If a device doesn't have a good linearization then a 2 step profile will really help overcome a poor linearization. Perhaps that is what you are saying too.


UV filtered devices solve any problems with OBAs, and surprisingly, actually solve problems with some papers that don't.

Hi guys,

What an interesting thread! I don't have anything to contribute, but I do have a question.

In order to achieve the best possible profiles, what would be the recommended hardware and software? I have been told by others that the i1 Pro has issues with specular highlights reflecting off textured paper. Can anyone confirm or deny this?

Where would one start, without having to spend $10k+, but still get very good profiles? Yes, it is probably a Color Management 101 question. I have just finished preparing images in CMYK for a press in a different country. The hard part for me was not knowing my end points/target, such as how black, and how much the colors would shift. Maybe this is just something that comes with experience?

Yes, they did supply a profile (Fogra uncoated 29L), but being a visual person in a technical world, it helps me when I actually have a visual; luckily I had gotten it right.

But I would like to be able to better predict what the outcome is going to look like, in the comfort of my own office. Not that I mind flying :-)

Thanks very much, and keep this thread going, it's very interesting! Thanks all.

Henrik
« Last Edit: May 13, 2009, 10:53:08 PM by tived »

Scott Martin · Sr. Member · Posts: 1312
« Reply #18 on: May 14, 2009, 10:37:41 AM »

Quote from: tived
In order to archive the best possible profiles, what would be the recommended hard ware and software.
Hard to say without knowing more about your equipment, usage and demands for quality. My gut feeling is that some onsite color management work with color-by-the-numbers prepress workflow training might not only be beneficial but could cost less than getting into professional profiling equipment. You might look into what options you have for this in your area.

papa v2.0 · Full Member · Posts: 197
« Reply #19 on: May 15, 2009, 05:49:25 AM »

Hi

I* sounds quite interesting. So how would it be used in the real world? Is it a proposed metric for image difference between original and reproduction?

I have read the published paper but am still a bit confused.