I do not see where you get that "5% can see better" number: a 95% confidence interval does not single out 5% of viewers; it just means the estimation procedure misses the true value about 5% of the time, in some unspecified way.
To me, the BBC testing has the same defect as the Sony white paper: it judges the benefits of increased resolution in moving-picture display by what the eye can distinguish in side-by-side comparisons of stationary images. It seems very clear to me that moving images have lower resolution needs; think how much worse a frame grab from video looks than the video it comes from: pixelation and screen-door effects are far less visible when the images are in motion and you have only 1/24th of a second or less to see each frame before the transition to the next frame adds "motion blur".
What we really need is an ABX test: people watch the same video first at one of 2K or 4K [A], then at the other [B], then are shown a random one of the previous two [X] and asked to match X to A or B.
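To make that concrete, here is a rough sketch of how such an ABX protocol would be scored; the function names and the random-guessing "viewer" are my own hypothetical illustration, not anything from the BBC or Sony tests. The key point is that if viewers genuinely cannot tell 2K from 4K, their matches should not beat a coin flip:

```python
import random
from math import comb

def abx_trials(viewer, n_trials, rng):
    """Run n ABX trials. Each trial, X is secretly A or B;
    `viewer` is a callable returning its guess ('A' or 'B')."""
    correct = 0
    for _ in range(n_trials):
        x = rng.choice(['A', 'B'])   # hidden assignment of X
        correct += (viewer(x) == x)
    return correct

def p_value_at_least(k, n):
    """One-sided binomial p-value: P(>= k correct out of n)
    under the null hypothesis of pure guessing (p = 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

rng = random.Random(42)
# A viewer who cannot tell 2K from 4K effectively guesses at random
# (it receives x but ignores it):
guesser = lambda x: rng.choice(['A', 'B'])
correct = abx_trials(guesser, 20, rng)
```

With 20 trials, a viewer would need about 15 or more correct matches (p < 0.05) before we could claim they really see a difference.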
I am inclined to hypothesize that the most relevant experimental data we have so far on moving-picture resolution needs are from the previous unpublished experiments mentioned in the introduction to the BBC document (my underlining):
These were carried out with a television display, using television signals representing edges varying in step size, mean brightness and hue. The results indicated that a rise angle of around 1.5 to 3 minutes of arc was just perceptible.
That would make 1920x1080 HD good enough unless the viewing distance drops below about 1 to 2 picture heights, i.e. about 0.6 to 1.2 picture widths, or 0.5 to 1 times the screen diagonal.
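The arithmetic behind those thresholds is easy to check; here is a sketch, assuming a 16:9 display and that the just-perceptible rise angle corresponds to one scan line of the display:

```python
from math import pi, sqrt

def max_distance_heights(lines, rise_angle_arcmin):
    """Viewing distance, in picture heights, at which one scan line of a
    `lines`-line display subtends `rise_angle_arcmin` minutes of arc.
    From farther away, extra resolution would not be perceptible."""
    angle_rad = rise_angle_arcmin / 60 * pi / 180
    # Small-angle approximation: (height of one line) / distance = angle
    return (1 / lines) / angle_rad

# 1080-line HD against the BBC's 1.5 to 3 arc-minute rise-angle threshold:
d_far  = max_distance_heights(1080, 1.5)   # roughly 2.1 picture heights
d_near = max_distance_heights(1080, 3.0)   # roughly 1.1 picture heights

# Unit conversions for a 16:9 display:
heights_per_width    = 9 / 16                 # 1 width = 16/9 heights
heights_per_diagonal = 1 / sqrt(1 + (16/9)**2)
```

Dividing 1 to 2 picture heights by 16/9 gives roughly 0.6 to 1.2 picture widths, and dividing by the diagonal length gives roughly 0.5 to 1 diagonals, matching the figures above.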
Sure, people will look at computer screens this close for short periods of time, but how many will watch a movie at home from this close range?
Anyway, I am fairly sure that computer screens (and even iPad screens) will soon go beyond 1920x1080p, and there will be 4K samples around to download and do our own ABX testing with. Perhaps we should pause this theoretical debate until more experimental data arrive.