If the (potentially filtered) signal that enters the point-sampler is allowed to take the shape of either sinusoid, we cannot decide from the samples which one it actually was, and therefore we cannot faithfully recreate the waveform.
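To make that concrete, here is a minimal sketch (toy numbers of my own, not from any figure above): two sinusoids, one below and one above Nyquist, that produce identical point samples, so the samples alone cannot decide between them.

```python
import numpy as np

# Toy sampling rate; Nyquist = fs/2 = 4 cycles per unit length.
fs = 8.0
f_low, f_high = 3.0, fs - 3.0   # 3 cycles/unit and its alias partner at 5

t = np.arange(16) / fs          # point-sample positions
print(np.allclose(np.cos(2 * np.pi * f_low * t),
                  np.cos(2 * np.pi * f_high * t)))  # True: indistinguishable
```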
But that's where you lose me; I don't understand what you mean by that. The blue line is what can be reconstructed if the higher-frequency detail is not resolved (beyond Nyquist, or low-pass filtered). The blue line could originate from any signal frequency, even real detail at that frequency.
So what is needed is more sub-pixel samples (effectively a higher sampling frequency, although with an area-aperture sampling device, not point sampling). It's not the lower-frequency alias that helps, but the denser sampling. As said, the aliasing, when identified as such (which may be hard, because it looks like any other real detail), can help in pinpointing the amount of sub-pixel displacement of the samples. However, as the literature references show, the more successful SR approaches are spatial-domain, sub-pixel-sampling-based ones. PhotoAcute apparently uses a warping variation which reveals clues about the displacement of the sub-samples, and thus doesn't suffer from lens distortions as much as some other solutions do. Such spatially variant solutions do require some extra processing power.
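To illustrate why it's the denser sampling that helps, here is a minimal 1-D sketch (the rates are my own toy choices, and point samples stand in for the area-aperture case): a single exposure at rate fs cannot tell f from fs-f, but interleaving a second, half-sample-shifted exposure doubles the effective rate and removes the ambiguity.

```python
import numpy as np

fs = 8.0
f = 5.0                        # above the single-exposure Nyquist of fs/2 = 4
t0 = np.arange(16) / fs        # sample positions of the first exposure
t1 = t0 + 0.5 / fs             # second exposure, half-sample (sub-pixel) shift
t = np.sort(np.concatenate([t0, t1]))  # interleaved: effective rate is 2*fs

# One exposure cannot distinguish 5 cycles from its 3-cycle alias...
print(np.allclose(np.cos(2*np.pi*f*t0), np.cos(2*np.pi*(fs - f)*t0)))  # True
# ...but the interleaved set (Nyquist now 8) can.
print(np.allclose(np.cos(2*np.pi*f*t), np.cos(2*np.pi*(fs - f)*t)))    # False
```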
If we filter the signal, or can make assumptions about the signal such that one of those sinusoids is allowed but the other is not, then we can recreate the waveform from the samples. A given set of samples generally corresponds to an infinite set of possible waveforms, but pre-filtering reduces that set to (ideally) exactly one waveform.
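As a sketch of that ideal case (my own toy example, assuming a perfectly band-limited input): once every frequency above Nyquist is excluded, Whittaker-Shannon sinc interpolation recovers exactly one waveform from the samples.

```python
import numpy as np

fs = 8.0
f = 3.0                                  # band-limited: below Nyquist fs/2 = 4
n = np.arange(-200, 201)                 # sample indices, generous support
x = np.cos(2 * np.pi * f * n / fs)       # the point samples

t = np.linspace(-0.5, 0.5, 101)          # reconstruction positions
recon = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])
# Near-zero residual (only from truncating the sinc sum): recovery is unique.
print(np.max(np.abs(recon - np.cos(2 * np.pi * f * t))))
```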
Yes, but we cannot recreate the higher-frequency signal from the lower-frequency one, because the lower frequency might well be the correct waveform (or an alias of any number of possible higher frequencies). Sub-pixel spatial displacement is a requirement for achieving Super Resolution; aliasing is not (unlike what the Wiki page suggests). In practice we will have a mix of both in our sub-images, because the AA-filters in our cameras are not perfect.
Yet another way to paraphrase it: some frequencies wrapping over into other frequency bands does no harm if there are no other signals in that band to interfere with, and if we know which frequencies wrapped where.
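For reference, where a given frequency wraps to is easy to compute; this small helper (my notation, not from the thread) folds any input frequency into the baseband [0, fs/2]:

```python
# Fold a frequency f into the baseband [0, fs/2] after point sampling at fs.
def folded(f: float, fs: float) -> float:
    return abs(f - round(f / fs) * fs)

print(folded(5.0, 8.0))   # 3.0: a 5-cycle signal masquerades as 3 cycles
print(folded(11.0, 8.0))  # 3.0: so does an 11-cycle one
```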
But that's the problem. Human vision is pretty good at picking out aberrant information when it doesn't fit a pattern, but an automatic system has no clue about what to expect or not; all data is treated as relevant, even noise.
It seems to me that you don't get the theoretical point that I am making, and I think our discussions would be more fruitful if we could agree on it first.
Indeed, but I'm afraid I still don't get your point. My point is that aliases and real detail produce the same output signal after sampling. Only by sub-pixel sampling (thus resolving higher spatial frequencies than the Nyquist frequency of a single image) can original detail be identified and aliasing in the sub-images eliminated. My test above should illustrate that: only where the sub-pixel sampling increased the Nyquist frequency (in the horizontal direction, thus mostly benefitting vertical feature orientations) will the aliasing artifacts be reduced.
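A geometric sketch of that directional effect (stand-in arrays, not the actual test images): interleaving two exposures shifted half a pixel horizontally doubles the horizontal sampling rate only, so the vertical Nyquist frequency, and hence detail with horizontal orientation, is unchanged.

```python
import numpy as np

h, w = 4, 4
img0 = np.zeros((h, w))          # stand-in for the unshifted exposure
img1 = np.ones((h, w))           # stand-in for the half-pixel-shifted exposure

combined = np.empty((h, 2 * w))
combined[:, 0::2] = img0         # original pixel centers
combined[:, 1::2] = img1         # centers shifted half a pixel horizontally
print(combined.shape)            # (4, 8): horizontal rate doubled, vertical unchanged
```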
And all of this has only partial impact on single-shot lens tests, with or without an OLPF.