Saw this on F-Stoppers and followed the link in the article to the Cornell site.
I can't see this as being overly practical.
The photographer needs to walk around the scene or subject with a flash in hand, taking upwards of 100 images and lighting different areas of the scene or subject differently. The amount of time that would take would, I think, be inordinate. It would also require, unless the camera could be triggered wirelessly, an assistant to fire the camera while the photographer walked around with the flash, or vice versa.
Then the photographer needs to load these images into software created by people at Cornell, which creates three composite images with different lighting scenarios that the photographer can adjust.
The computing power necessary to ingest over 100 RAW or converted images must be vast. On a standard desktop with 4 or 8 cores and 8 to 32 GB of RAM, ingesting, mixing and processing 100+ images would take a long time, would it not? Imagine 100 images from, say, a D800 at 100 MB each.
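As a rough sanity check of that memory worry (my own back-of-envelope numbers, not from the article: the D800's 7360 x 4912 resolution is from Nikon's spec sheet, and I'm assuming each frame is held as 16-bit RGB once demosaiced):

```python
# Back-of-envelope RAM estimate for holding 100 decoded D800 frames.
WIDTH, HEIGHT = 7360, 4912        # Nikon D800 sensor resolution (~36 MP)
BYTES_PER_PIXEL = 3 * 2           # assumed: 3 channels x 16 bits after demosaicing
N_IMAGES = 100

per_image_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
total_bytes = per_image_bytes * N_IMAGES

print(f"~{per_image_bytes / 1e6:.0f} MB per decoded frame")
print(f"~{total_bytes / 1e9:.0f} GB to hold all {N_IMAGES} frames in RAM")
# → ~217 MB per decoded frame
# → ~22 GB to hold all 100 frames in RAM
```

So even on a 32 GB machine the full set barely fits uncompressed, which suggests the software would have to stream or tile the images rather than hold them all at once.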