Thanks for the link. Interesting to see the extent to which deconvolution sharpening can help (I don't use this type of sharpening...yet, but that's a story for another day).
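For anyone curious what deconvolution sharpening actually does under the hood, here's a minimal sketch of Richardson-Lucy deconvolution in Python. It assumes the blur kernel (PSF) is known in advance; the function name, iteration count, and flat initial guess are illustrative choices, not any particular raw converter's implementation.

```python
# Minimal Richardson-Lucy deconvolution sketch (illustrative only).
# Assumes a float image scaled to [0, 1] and a known, normalized PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """Iteratively estimate the sharp image from a blurred image and PSF."""
    estimate = np.full(blurred.shape, 0.5)   # flat initial guess
    psf_mirror = psf[::-1, ::-1]             # mirrored PSF for the update step
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / (reblurred + 1e-12)              # guard divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```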
Yes, it also applies to color. The diffraction pattern can be thought of as a non-uniform blur filter. The pixel(s) near the center of the blur pattern will dominate the summed result, and surrounding pixels will have a weighted contribution.
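To make that concrete, here's a rough sketch in Python of such a kernel. The size and scale values are made up for illustration; a real PSF also depends on wavelength, f-number, and aperture shape.

```python
# Sketch of a diffraction-style blur kernel: a dominant central spike with
# faint surrounding rings, normalized so the weights sum to 1 (no light is
# created or destroyed). Size and scale are arbitrary illustration values.
import numpy as np
from scipy.special import j1               # first-order Bessel function

def airy_kernel(size=15, scale=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r = np.hypot(xx, yy) / scale
    r[size // 2, size // 2] = 1e-9         # avoid 0/0 at the exact center
    kernel = (2 * j1(np.pi * r) / (np.pi * r)) ** 2   # Airy intensity profile
    return kernel / kernel.sum()

psf = airy_kernel()
print(psf[7, 7], psf.sum())   # central weight dominates; weights sum to 1.0
```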
Is it the case that even though the blur pattern is non-uniform, each pixel is the centre of its own blur pattern - the 'spike' in the diagram in Erik's post above - and therefore the effect of each pixel on its neighbours is uniform across the sensor? Presumably this does not hold true at sensor edges (the summed result is not equal to that from 'balanced' pixels, i.e. pixels remote from an edge)?
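One way I tried to picture this numerically (a sketch only; the zero padding used by the convolution is a crude stand-in for a real sensor edge, since in reality light from just outside the frame still diffracts onto edge pixels):

```python
# Sketch: the same normalized kernel applied at every pixel (shift-invariant
# blur), with zero padding as a crude stand-in for the sensor edge.
import numpy as np
from scipy.signal import fftconvolve

# Peaked, normalized 5x5 kernel standing in for the diffraction 'spike'.
psf = np.array([[0, 1, 2, 1, 0],
                [1, 3, 6, 3, 1],
                [2, 6, 20, 6, 2],
                [1, 3, 6, 3, 1],
                [0, 1, 2, 1, 0]], dtype=float)
psf /= psf.sum()

field = np.ones((50, 50))                        # uniform field of 1.0
blurred = fftconvolve(field, psf, mode='same')   # zero padding at borders

print(blurred[25, 25])   # interior pixel: ~1.0, losses exactly replaced
print(blurred[0, 0])     # corner pixel: < 1.0, some weight falls off the edge
```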
It's not so much a dulling effect as a blending effect with nearby colors. If the colors are similar, then not much will change...
The first sentence I understand, and in the context of differently coloured pixels adjacent to each other, also your second sentence. However, your use of "much" has confused me somewhat. Consider the following thought experiment:
Say we shoot a theoretical pure white background, taking a series of images where the camera is progressively stopped down but the shutter speed is adjusted to return the same exposure (assume ISO is held constant and that noise is non-existent). Is it correct that we would find the images tending to grey at smaller apertures? If so, then given that in this example "the colors [of adjacent pixels] are similar" (identical, actually), why would "not much...change"?
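Trying to reason it through numerically with a made-up 1-D kernel (illustrative values only):

```python
# 1-D sketch: a normalized blur kernel applied to a uniform white signal
# versus a signal with an edge. Kernel weights are made up for illustration.
import numpy as np

kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])     # peaked, sums to 1

white = np.ones(12)
print(np.convolve(white, kernel, mode='valid'))  # all 1.0: no greying at all

step = np.array([1.0] * 6 + [0.0] * 6)           # boundary between two tones
print(np.convolve(step, kernel, mode='valid'))   # blending only near the edge
```

If I follow the logic, everything a pixel loses to its neighbours is exactly replaced by what it gains from them when the field is uniform, so white would stay exactly white and the blur would only show where neighbouring values differ. Is that right?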
Green wavelengths contribute most to the luminance of a scene, so the wavelength-dependent diffraction of those wavelengths will dominate our impression of contrast and color-purity loss.
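To put rough numbers on the wavelength dependence (example values only), the first Airy minimum falls at r = 1.22 * wavelength * N for f-number N:

```python
# Sketch: radius of the first Airy minimum, r = 1.22 * wavelength * f-number.
# f/16 and the wavelengths below are illustrative values only.
f_number = 16
for name, wavelength_nm in (("blue", 450), ("green", 550), ("red", 650)):
    radius_um = 1.22 * (wavelength_nm / 1000) * f_number   # micrometers
    print(f"{name:>5}: first Airy minimum ~ {radius_um:.1f} um")
```

At f/16 that works out to roughly 9 um for blue up to about 13 um for red - on the order of a few pixel pitches on typical sensors, with red smeared the most.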
Yes, this makes sense to me.