Thanks for your thoughtful reply.
What I am actually doing is photographing/scanning art. Much of it is larger than my scanner, so I have to scan the art in segments. With my camera, I often photograph it in segments to get better resolution (to a camera, a 2in x 2in painting filling the image area looks the same as a 4ft x 4ft painting).
I am using Photoshop Photomerge to put the segments of the paintings together. I'm using VueScan (as SilverFast has simply priced itself out of my market and provides crummy support for the price, and I enjoy nearly instant personal replies from Ed Hamrick at VueScan). VueScan does output a "raw" file, which, as you have commented, I assume is simply an image file with more information (this is the "extra" information I refer to and don't fully understand). It outputs the file as a TIFF.
I have presumed, as you suggest, that Photoshop does some processing of the image when it photomerges the files, which seems to keep their "appearance" intact. I'm curious about what data, which might be used for further processing, is lost from the image file in Photomerge. My understanding of RAW is that much more of the "raw" data picked up by the scanner/camera is included in the file, while with non-raw files the data is processed in the camera/scanner and less of the raw data is included in the image file. This provides more information for the image editing process. If that is actually the case, I wondered (per my last post) what happens to the "extra" data in the original "raw" files when they are photomerged.
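As I understand it, the "extra" data is largely a matter of bit depth: a 16-bit channel can record 65,536 tonal levels, while an 8-bit channel records only 256. A tiny sketch of that idea (pure illustration with a synthetic gradient, not tied to how VueScan or Photoshop actually handle files):

```python
# Illustration only: how many distinct tonal levels survive
# a 16-bit -> 8-bit conversion. The gradient is synthetic,
# standing in for a smooth tonal ramp in a scan.
levels16 = list(range(0, 65536, 7))   # a smooth ramp of 16-bit values
levels8 = {v >> 8 for v in levels16}  # the same ramp stored as 8-bit

print(len(set(levels16)))  # 9363 distinct 16-bit values
print(len(levels8))        # only 256 distinct 8-bit values
```

Once those intermediate levels are gone, no later edit can bring them back, which is why heavy corrections on 8-bit files tend to show banding.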
For instance, if I do color correction after the merge, will I have less image data to work with than if I process the segments before the merge? It is, of course, not really possible to process the segments individually and expect them to "fit" after the merge. I guess I could process one in, say, ACR and apply the same changes to the other two, if it were critical to do the processing before the merge. Would this confuse Photoshop when it tries to do a pixel-by-pixel merge?
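The process-one-and-apply-to-the-rest idea rests on a simple property: the same adjustment applied to the same input value always yields the same output value, so overlapping pixels in adjacent segments still match after processing. A minimal sketch of that (assuming 8-bit grayscale values; `apply_gamma` is a hypothetical stand-in for whatever adjustment ACR would apply):

```python
def apply_gamma(pixels, gamma):
    """Apply the same tone curve to a list of 0-255 pixel values."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# Two segments that share an overlapping pixel value (117).
segment_a = [10, 200, 117]
segment_b = [117, 64]

adj_a = apply_gamma(segment_a, 0.8)
adj_b = apply_gamma(segment_b, 0.8)

# Identical processing keeps the overlap identical, so a
# pixel-by-pixel merge should still line up.
print(adj_a[2] == adj_b[0])  # True
```

In principle, then, identical settings applied to every segment shouldn't throw off the alignment, though any per-image "auto" adjustment would break this.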
I have actually been processing my RAW images in Aperture, which doesn't differentiate them from ordinary TIFFs the way ACR/Photoshop does. It seems to simply apply its own algorithm for the camera used to process the image, preserving, I assume, the "extra" RAW data in the file somehow. Aperture always saves the original file, also saving the image after each processing session as a "version." To photomerge my raw camera images, I open them in Photoshop from Aperture (they are already called TIFF files) and merge them, saving the merge back into Aperture. Photoshop decides somehow what to do with the "extra" RAW data when I save the file. It is no longer, of course, a RAW file in the way a RAW file is when it comes from the camera.
What haunts me is what happens in this process to the "extra" data in a RAW file that extends the range (or character) of adjustments that can be made to that file when it is processed in Photoshop or Aperture.
Trying to photograph art, where literal reproduction of colors is a priority, has been a challenge in a world where everything seems to process images as though they were snapshots, using preconceived notions of what makes a snapshot "pleasant" to look at (the word "pleasant" came from an X-Rite professional describing the goal X-Rite seeks in programming their printer calibration software like i1Pro). With art, artists often seek color effects that may be "unpleasant" in a typical photograph. Thus, I am often seeing odd color shifts when scanning/photographing/printing artwork. I often see the same colors shifted differently in different images. I presume this results from something in an ICC profile or some hardware device that thinks certain colors should be shifted differently depending on their context (or it may simply think that certain colors are not pleasant to the eye). Go figure...
To me, RAW is pretty confusing. I just bought "The Digital Negative" by Jeff Schewe. Maybe that will help me understand it all better.