Do you include some sort of viewing standard when calculating the I* value?
I see that the system is image-dependent, but is there a mechanism for reporting which elements of the image are in error? If, for example, I* reported an 85% colour accuracy, which colours are out? (Is it the sky, the model's jacket, or the product colour?)
If, at the end of the day, the reproduction goal is to produce a 'pleasing' reproduction of the scene (for argument's sake) and not a colorimetric reproduction, how would I* fit in?
The I* metric uses the CIELAB color model as its underlying architecture, so viewing standards are handled by the illuminant assumption you make when measuring the Lab values before the I* math is applied.
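To make that concrete, here is a minimal sketch of where the illuminant assumption enters: the standard CIE XYZ-to-L\*a\*b\* conversion is always relative to a chosen reference white, so the same measurement yields different Lab values under D50 versus D65. (This is just the standard CIE conversion, not any part of the proprietary I* math; the sample XYZ value is illustrative.)

```python
# Standard CIE white points (2-degree observer); the choice of white point
# is the "illuminant assumption" referred to above.
D50 = (96.422, 100.0, 82.521)
D65 = (95.047, 100.0, 108.883)

def xyz_to_lab(xyz, white):
    """Standard CIE XYZ -> L*a*b* conversion relative to a reference white."""
    def f(t):
        # Cube-root curve with the linear toe below (6/29)^3
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

# The same measured XYZ sample gives different a*/b* values under D50 vs. D65,
# which is why the viewing/illuminant assumption must be fixed up front.
sample = (41.24, 21.26, 1.93)  # illustrative XYZ triple (a saturated red)
lab_d50 = xyz_to_lab(sample, D50)
lab_d65 = xyz_to_lab(sample, D65)
```

Lightness (L\*) is unchanged because both white points normalize Y to 100, but the a\* and b\* components shift with the illuminant.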
We still don't have artificial intelligence algorithms that can take an original scene and capture and process it for color and tone in a way that pleases everyone (though many digital camera companies do have proprietary ways to produce "pleasing" color that they think the majority of their customers will like "out of the box"). In fact, it's obviously an impossible task to please everyone. Long live custom layer edits in Photoshop! You may like skin tones warm in a particular scene, for example, while I may prefer them cooler.

Anyway, I* enters the workflow at the point where you have decided what a good source image should be. That image becomes your reference image for the I* calculations. In image permanence testing, for example, the assumption is that the original print (whether it's the most pleasing print or not) contains the color and tonal qualities you are trying to preserve. One could find a situation, for example, where a print that is too dark fades and lightens in a way that becomes more pleasing to most observers over time, before it fades too far and becomes less pleasing than the original. The I* metric would not score it as getting "better" and then getting worse. The reference print was dark to begin with, so accurate color and tone scores mean it should stay that way.
Thus, the basic assumption with I* is that you now want to bolt down your preferred color and tonal relationships and reproduce them with as much colorimetric accuracy as possible. Once you've got your aim-point reproduction in mind and have edited your preferences into your digital image file, I* can take it from there and tell you downstream how the subsequent reproduction choices are stacking up in terms of retaining the chosen color and tonal quality. In other words, once you've created your "pleasing" image, the whole process from that point on becomes one of colorimetric matching that is as accurate as possible. There's the rub: it's often not possible, so I* can tell you how far you have strayed and, as you suggest, even which localized parts of the image are suffering more in the reproduction than others. The I* analysis is all about color and tone distributions in an image. Specific images have specific colors. They get sampled as an ordered array of locations (i.e., spatial frequency analysis) within the image and then summed and averaged by the I* method to produce the overall score.
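The sample-then-average idea can be sketched in a few lines. The actual I* math is not reproduced here; the plain CIE76 delta E stands in as the per-location difference measure, and the grid step and data layout are illustrative assumptions.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def average_difference(reference, reproduction, step=2):
    """Sample both images on the same ordered grid of locations and
    average the per-location color differences into one overall number.
    `reference` and `reproduction` are same-shaped 2-D grids of Lab triples."""
    diffs = [
        delta_e76(reference[y][x], reproduction[y][x])
        for y in range(0, len(reference), step)
        for x in range(0, len(reference[0]), step)
    ]
    return sum(diffs) / len(diffs)
```

Because the two images are sampled at identical locations, the summary number reflects how well the reproduction holds the reference's color and tone relationships, rather than comparing a handful of isolated patches.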
The whole "pleasing versus accurate" endeavor is why, for example, many printmakers build a "master" file with carefully chosen edits that gives them their "ideal" color and tone, then turn on soft proofing in Photoshop and add final edit layers to try to "pop" the profile-translated color and tone back into better visual alignment with the source image before committing to printer output. An I* plug-in in Photoshop, for example, could help to objectively guide your visual edits by telling you when you are getting closer or farther away. It's possible that the I* metric could even give profiling applications feedback that could help produce a "smart" CMM. There's a lot of research potential for further development of the I* metric. We've just scratched the surface.
Finally, just to make sure I've fully answered your question about tracking specific regions of interest in the image with the I* metric: a robust I* software application can do what you are suggesting and track specific colors, or even localized areas within the image, and give you selective I* scores for those specialized regions. Some "colorgeek" apps have this image-specific tracking capability now, but only using delta E. The WIR iStar comparative image analysis software was designed to perform this type of evaluation using the I* math, as well as making the conventional delta E methods available to the user.
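Region-of-interest scoring is the same averaging idea restricted to a mask. Again this is only a sketch with CIE76 delta E standing in for the I* math; the mask representation (a 2-D grid of booleans, one per sampled location, e.g. "sky" or "jacket") is an assumption for illustration.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def region_score(reference, reproduction, mask):
    """Average color difference over only the locations where mask is True.
    All three arguments are 2-D grids of the same shape."""
    diffs = [
        delta_e76(reference[y][x], reproduction[y][x])
        for y in range(len(mask))
        for x in range(len(mask[0]))
        if mask[y][x]
    ]
    return sum(diffs) / len(diffs) if diffs else 0.0
```

Running `region_score` once per labeled mask yields per-region reports of the "the sky is off by this much, the jacket by that much" variety, alongside the whole-image score.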