I'd say it's harder. It's one thing to alter scaling; colour, I think, is more complicated, as there's no single answer like "scale everything up by a factor of 2" or whatever.
Yes, but how would you know what was a legacy application? If an application developer is concerned enough to set a colour legacy bit, they probably already do colour management.
The assumption behind Apple's high-dpi stuff seems to be that legacy applications are hard-wired for a given moderate dpi (say, 96 dpi). When the OS knows that the display is actually much denser (>200 dpi), it presents a low-resolution virtual display to applications so that they can render into a frame where one pixel has a traditional size, then scales this buffer to fit the physical panel. Applications that want to access the true capabilities of the display have to take active steps (raise a flag, choose a new API, whatever). Old applications that would have done well on high-dpi displays but are never updated by their developers are out of luck.
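Roughly, the compositor ends up doing something like this (a toy sketch in Python; scale_buffer and the tiny buffers are made up for illustration, and I use nearest-neighbour where a real compositor would use proper filtering):

```python
# Sketch of the "virtual display" idea: a legacy app draws into a buffer at
# a traditional logical resolution, and the compositor scales that buffer up
# to the physical panel. All names here are hypothetical, not a real OS API.

def scale_buffer(src, factor):
    """Nearest-neighbour upscale of a row-major pixel buffer (list of rows)."""
    dst = []
    for row in src:
        scaled_row = [px for px in row for _ in range(factor)]
        dst.extend([scaled_row[:] for _ in range(factor)])
    return dst

# Tiny stand-ins: the legacy app believes the display is, say, 1920x1080
# (~96 dpi), while the panel is actually 3840x2160 (>200 dpi), so the
# compositor presents the app's buffer at 2x. Apps that opt in would get
# the full-resolution surface directly instead.
logical = [[(30, 30, 30)] * 8 for _ in range(4)]   # imagine 1920x1080
physical = scale_buffer(logical, 2)                # imagine 3840x2160
assert len(physical) == 8 and len(physical[0]) == 16
```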
I am suggesting the same thing for color management. Make a clean start. The OS does color management such that, by default, all applications render to sRGB (or some approximation of it). Applications that want to access the true display capabilities have to take active steps in order to do so. This means that users clinging to Photoshop CS3 will be out of luck (though they might be given the option of some manual white-listing).
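The default path might look something like this (a minimal sketch; the wide_gamut_aware flag, srgb_to_panel and the identity stand-in matrix are all hypothetical, and a real OS would pull the matrix from a measured or ICC profile rather than hard-code it):

```python
# Sketch of "sandboxed sRGB by default": apps that don't opt in get their
# output treated as sRGB and converted to the panel's native response; apps
# that raise a (hypothetical) wide-gamut flag bypass the conversion.

def srgb_eotf(c):
    """Standard sRGB transfer function: encoded [0,1] -> linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_panel(rgb, panel_matrix, panel_gamma=2.2):
    """Convert one sRGB pixel to native panel drive values."""
    lin = [srgb_eotf(c) for c in rgb]
    # panel_matrix maps linear sRGB to the panel's linear primaries; in
    # reality it would come from certification data or a measured profile.
    native = [sum(m * c for m, c in zip(row, lin)) for row in panel_matrix]
    return [max(0.0, min(1.0, n)) ** (1.0 / panel_gamma) for n in native]

def compose(app, pixel, panel_matrix):
    if app.get("wide_gamut_aware"):   # opted-in app manages colour itself
        return pixel
    return srgb_to_panel(pixel, panel_matrix)

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # stand-in for the real matrix
print(compose({"wide_gamut_aware": False}, (0.5, 0.5, 0.5), IDENTITY))
```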
My reasoning is that the majority of computer users don't care much about colors. They are used to semi-predictable sRGB and accept it. You really don't want to do anything to upset 95% of your customers (see what happened to Microsoft when they figured that mouse-and-keyboard users did not matter to Windows 8 ). The remaining 5% is still a substantial (and vocal) set of users, and you want to make them happy as well, so try to make stuff work for them. Some subset of that subset (say, 5% of the 5%) care a lot about colors but for some reason won't update their applications. They are the ones who will have to suffer some inconvenience in order to make the bulk of users happy.
But some applications will do colour management internally without using WCS, and there's probably no way Windows can tell that. Applying a "sandbox sRGB assumption" would be completely wrong in that case.
And why is this such a bad thing? If you write applications for an OS, you either follow the rules or suffer the consequences. The OS makes no warranty that backwards compatibility will be maintained forever. In fact, Apple seems to be quite aggressive about breaking whatever they feel is needed in order to make their product more stable, tested and user-friendly. MS seems to be more conservative, maintaining basic backwards compatibility for a long time (applications are king).
By the way, EDID colour space values read from a monitor are often wholly incorrect. Some monitors return the values of the sRGB primaries in the EDID - even for wide-gamut monitors.
This is a real problem. How do you know the response of a device if it lies to you, and the user cannot be expected to purchase a $150 colorimeter and operate it correctly? I do know that MS (perhaps Apple as well) does certification testing, so one possibility would be that an MS/Apple-certified display would have an EDID that (within some accuracy spec) described the measured response. A simpler (and perhaps more short-sighted) solution would be for displays to offer an AdobeRGB (or some other pre-defined response) mode that would be sufficiently accurate for many displays and users. Like how Adobe & friends measure and maintain a database of camera responses (due to unfriendly camera manufacturers), MS/Apple _could_ do something similar for displays, but it would be cumbersome, and at the very least the OS would have to have access to the display's "raw" mode. I have often thought that X-rite should be very happy that so many users purchase expensive colorimeters when many of them would perhaps be well served by a single high-quality measurement done once per display model.
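For reference, the primaries in question sit in bytes 25-34 of the EDID base block as 10-bit fractions, so reading them out and spotting a "copied sRGB" reply is easy enough, if the OS were inclined to be suspicious. Toy Python below; the offsets follow the EDID 1.3/1.4 layout as I understand it, and the tolerance heuristic is mine, not anything that actually ships:

```python
# Read the chromaticity coordinates from a 128-byte EDID base block and flag
# the "lying EDID" case: a reply that matches the sRGB primaries to the bit
# is plausible for a genuine sRGB panel, but a red flag on a wide-gamut one.

def edid_chromaticities(edid: bytes):
    """Return {name: (x, y)} from an EDID base block (10-bit fixed point)."""
    lo_rg, lo_bw = edid[25], edid[26]   # packed low 2 bits of each value
    lows = [(lo_rg >> 6) & 3, (lo_rg >> 4) & 3, (lo_rg >> 2) & 3, lo_rg & 3,
            (lo_bw >> 6) & 3, (lo_bw >> 4) & 3, (lo_bw >> 2) & 3, lo_bw & 3]
    # bytes 27..34 hold the high 8 bits of Rx, Ry, Gx, Gy, Bx, By, Wx, Wy
    vals = [((edid[27 + i] << 2) | lows[i]) / 1024.0 for i in range(8)]
    return {"red": (vals[0], vals[1]), "green": (vals[2], vals[3]),
            "blue": (vals[4], vals[5]), "white": (vals[6], vals[7])}

SRGB = {"red": (0.640, 0.330), "green": (0.300, 0.600),
        "blue": (0.150, 0.060), "white": (0.3127, 0.3290)}

def looks_like_copied_srgb(chroma, tol=0.002):
    """True if every reported primary sits within tol of the sRGB values."""
    return all(abs(chroma[k][0] - SRGB[k][0]) <= tol and
               abs(chroma[k][1] - SRGB[k][1]) <= tol for k in SRGB)
```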
I am guessing that in many cases, and for many years, the middle layer would essentially be a no-op. The application would be deemed "legacy" and the display would be deemed "untrustworthy", falling back to an assumed-sRGB/untagged source and an assumed-sRGB/untagged destination: safer to do nothing than to do the wrong thing. It would, however, give display manufacturers the opportunity to get their act together and sell wide-gamut displays that cause less frustration for (occasionally) casual users such as myself.
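The decision logic of that middle layer would be trivial, something like this (all names hypothetical):

```python
# Sketch of the conservative fallback described above: with an untagged
# ("legacy") source and an untrusted display profile, both sides collapse
# to assumed-sRGB and the transform degenerates to a no-op.

SRGB_PROFILE = "sRGB"

def effective_transform(source_profile, display_profile, display_trusted):
    src = source_profile or SRGB_PROFILE            # untagged -> assume sRGB
    dst = display_profile if display_trusted else SRGB_PROFILE
    if src == dst:
        return None                                 # no-op: safest default
    return (src, dst)                               # hand off to a real CMM

# Today's common case: legacy app, EDID not believed -> nothing happens.
assert effective_transform(None, "panel-native", display_trusted=False) is None
# The hoped-for case: certified wide-gamut panel, still-legacy app ->
# sRGB content gets mapped into the panel's native gamut.
assert effective_transform(None, "panel-native", display_trusted=True) \
    == ("sRGB", "panel-native")
```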
While waiting for this development, my family continue watching video on either:
1) The 27" Dell screen with horrible colors
2) The 20" 9:16 (portrait) sRGB screen, where 16:9 videos are reduced to postage-stamp size