There does seem to have been some improvement with the recent updates.
I doubt we'll see anything really dramatic until (if?) someone at Adobe manages to utilise the currently unused power of GPUs for processing, as they did so successfully in Premiere Pro CS5.
GPU processing tends to give a "two orders of magnitude" speedup only for carefully selected floating-point, infinitely threadable scientific tasks, in scenarios unfairly rigged by Nvidia/AMD. Think physical modelling, Monte Carlo simulations and the like.
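To make "infinitely threadable" concrete, here's a toy sketch (plain C++, my own illustration, nothing to do with any actual Adobe code) of a Monte Carlo estimator where every sample is independent of every other, which is exactly the shape of workload GPUs are good at:

```cpp
#include <cstdio>
#include <random>

// Monte Carlo estimate of pi. Every sample is computed independently,
// so the loop could be sliced across thousands of GPU threads with no
// communication between them: the "embarrassingly parallel" ideal.
int main() {
    const long n = 10'000'000;
    std::mt19937_64 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    long inside = 0;
    for (long i = 0; i < n; ++i) {
        double x = u(rng), y = u(rng);   // one sample, no dependency on the others
        if (x * x + y * y < 1.0) ++inside;
    }
    std::printf("pi ~= %f\n", 4.0 * inside / n);
    return 0;
}
```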
For more realistic tasks the speedup tends to be far more modest, while the increased complexity, the potential for nasty bugs, the larger test matrix, the difficulty of hiring developers and so on can be significant.
If the pipeline is fixed-point and hard to thread into hundreds of largely independent tasks, the speedup may be negligible.
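For contrast, here's a hypothetical example of the opposite shape: a one-pole IIR smoothing filter, where each output depends on the previous one, so the loop carries a serial dependency that no amount of GPU hardware removes:

```cpp
#include <cstdio>

// One-pole IIR smoothing filter: out[i] depends on out[i-1], so the
// loop cannot simply be sliced into hundreds of independent threads.
int main() {
    const int n = 8;
    double in[n] = {1, 2, 3, 4, 5, 6, 7, 8};
    double out[n];
    const double a = 0.9;   // feedback coefficient
    out[0] = in[0];
    for (int i = 1; i < n; ++i)
        out[i] = a * out[i - 1] + (1.0 - a) * in[i];   // needs out[i-1] first
    for (int i = 0; i < n; ++i) std::printf("%.3f ", out[i]);
    std::printf("\n");
    return 0;
}
```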
Did anyone try running Lightroom on a 16-core x86 system? It is significantly easier to thread most applications for that hardware. If Adobe didn't bother with, or wasn't able to exploit, moderately large numbers of x86 cores, I don't see them doing anything useful on GPUs.
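To show how low the bar is on the CPU side, here's a rough sketch of splitting an image-style buffer across however many cores the machine has (a made-up per-pixel operation, not Lightroom's actual internals):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical per-pixel operation, standing in for a develop-module step.
static void brighten(std::vector<float>& px, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo; i < hi; ++i)
        px[i] *= 1.1f;
}

int main() {
    std::vector<float> pixels(16'000'000, 0.5f);   // fake 16-megapixel image
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    std::size_t chunk = pixels.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t lo = c * chunk;
        std::size_t hi = (c + 1 == cores) ? pixels.size() : lo + chunk;
        pool.emplace_back(brighten, std::ref(pixels), lo, hi);   // one thread per core
    }
    for (auto& t : pool) t.join();
    std::printf("processed %zu pixels on %u threads\n", pixels.size(), cores);
    return 0;
}
```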