Why not have the camera capture two exposures on the same sensor, one right after the other, without the mirror moving between them? Perhaps the user sets two sensitivity levels, one for each exposure (for example, ISO 100 and ISO 800), and the camera then selects the best pixels from either exposure and combines them into a final saved image, using a technique similar to the one GLuijk describes in the "Zero Noise technique" thread.
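For what it's worth, the selection logic could be fairly simple. Here's a rough sketch (a hypothetical illustration, not GLuijk's actual code) assuming two linear-space captures of the same scene three stops apart: prefer the higher-ISO frame for its cleaner shadows unless a pixel is clipped, and fall back to the low-ISO frame for the highlights. The names, the 0.0 to 1.0 pixel scale, and the clip threshold are all my own assumptions:

```python
EV_GAP = 3          # ISO 100 -> ISO 800 is three stops of gain
GAIN = 2 ** EV_GAP  # the bright frame records 8x the value for the same light
CLIP = 0.98         # hypothetical threshold: values above this count as blown

def merge_zero_noise(dark, bright):
    """Merge two exposures of the same scene, pixel by pixel.

    dark:   low-ISO frame, linear values 0.0-1.0, highlights intact
    bright: high-ISO frame, same scene, cleaner shadows but clipped highlights
    Returns a merged frame normalized to the dark frame's exposure scale.
    """
    merged = []
    for d, b in zip(dark, bright):
        if b < CLIP:
            # Shadows/midtones: use the bright frame, scaled back down
            # to the dark frame's range.
            merged.append(b / GAIN)
        else:
            # Highlights: the bright frame is clipped, so keep the
            # dark frame's clean value.
            merged.append(d)
    return merged
```

In practice you'd blend across the threshold rather than switch abruptly, but this shows the basic per-pixel decision.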
That could work; I haven't thought through GLuijk's process enough yet. It's starting to seem that anything requiring two moments in time to form one image (either two exposures, or one analysis moment and one exposure moment) will produce edge artifacts: either the camera or the subject will move between the two moments, and abnormal exposures will occur along those edges.
There might be specific types of photography where adjusting the sensitivities of individual pixels could be used, but they'd be niche cases. We'll just need pixel sites that inherently have 14 or 16 bits of latitude!