Author Topic: Computational Lighting - Practical?  (Read 861 times)
RFPhotography
Guest
« on: August 28, 2013, 12:08:34 PM »

Saw this on F-Stoppers and went to the link in the article on the Cornell site.

I can't see this as being overly practical. 

The photographer needs to walk around the scene or subject, flash in hand, taking upwards of 100 images and lighting different areas of the scene/subject differently.  The amount of time that would take would, I think, be inordinate.  And unless the camera could be triggered wirelessly, it would require an assistant to fire the camera while the photographer walked around with the flash, or vice versa.

Then, the photographer needs to load these images into software created by people at Cornell which creates three composite images with different lighting scenarios that the photographer can adjust. 

The computing power necessary to ingest over 100 RAW or converted images must be vast.  On a standard desktop with 4 or 8 cores and between 8 and 32 GB of RAM, ingesting, mixing and processing 100+ images would take a large amount of time, would it not?  100 images from, say, a D800 at 100 MB each?
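A rough back-of-envelope suggests why (my own numbers, purely illustrative: the 36 MP figure is the D800's sensor resolution, the rest are assumptions):

```python
# Back-of-envelope memory estimate. Assumed figures, not measurements:
# a D800 frame is ~36.3 MP; assume 3 channels after demosaicing and
# 32-bit float working precision, one exposure per flash position.
MEGAPIXELS = 36.3e6
CHANNELS = 3
BYTES_PER_SAMPLE = 4
N_IMAGES = 100

per_image_bytes = MEGAPIXELS * CHANNELS * BYTES_PER_SAMPLE
total_bytes = per_image_bytes * N_IMAGES

print(f"Per image:  {per_image_bytes / 2**30:.1f} GiB")
print(f"All in RAM: {total_bytes / 2**30:.1f} GiB")

# Holding every frame in memory at once would need roughly 40 GiB,
# well beyond a typical 8-32 GB desktop, so any such tool would have
# to stream or tile the data rather than load everything at once.
```

So the raw arithmetic backs up the concern: the full set won't fit in RAM on an ordinary machine.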

Thoughts?
K.C.
Sr. Member
****
Offline

Posts: 650


« Reply #1 on: August 30, 2013, 08:03:38 PM »

"The present version is still not production ready, Bala said, but “we hope to make it available as a prototype,” she said, and it will probably become part of an Adobe product, such as Photoshop or Lightroom.

The research was supported by the National Science Foundation and Adobe."

So this is the attempted development of a product done under the guise of research.  I know a few people who have learned how to write effective grant proposals and make a very good living doing it.
hjulenissen
Sr. Member
****
Offline

Posts: 1615


« Reply #2 on: September 05, 2013, 04:04:58 AM »

I want to put my camera with a fish-eye exactly where some model in some landscape looks the best. Snap.

Then I want a semi-spherical arrangement of RGB LEDs to reproduce that same image onto a central point.  Place the same model in the same relative spot, and you ought to have similar lighting.  Every time.
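Just to sketch the idea in code (all names here, and the equidistant fisheye model r = f * theta, are my own assumptions for illustration, not any real product's API):

```python
import math

# Toy sketch of the dome idea: sample a captured fisheye environment
# image at the direction of each LED, so the dome replays the scene's
# incident light onto the central spot where the model stands.

def fisheye_pixel(azimuth, zenith, width, height, fov=math.pi):
    """Map a direction (azimuth, zenith angle in radians) to (x, y)
    in an equidistant fisheye image centered on the optical axis."""
    f = (min(width, height) / 2) / (fov / 2)   # pixels per radian
    r = f * zenith
    x = width / 2 + r * math.cos(azimuth)
    y = height / 2 + r * math.sin(azimuth)
    return int(round(x)), int(round(y))

def led_colors(image, led_directions):
    """image: image[y][x] -> (r, g, b); led_directions: list of
    (azimuth, zenith) tuples, one per LED in the dome."""
    h, w = len(image), len(image[0])
    colors = []
    for az, zen in led_directions:
        x, y = fisheye_pixel(az, zen, w, h)
        x = min(max(x, 0), w - 1)   # clamp to the image bounds
        y = min(max(y, 0), h - 1)
        colors.append(image[y][x])
    return colors

# Tiny synthetic 8x8 "capture": brighter toward the image center.
img = [[(255 - 20 * max(abs(x - 4), abs(y - 4)),) * 3
        for x in range(8)] for y in range(8)]
print(led_colors(img, [(0.0, 0.0), (0.0, math.pi / 4)]))
```

In practice you would average a patch of pixels per LED rather than point-sample, but the mapping is the whole trick.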

-h
AFairley
Sr. Member
****
Offline

Posts: 1100



« Reply #3 on: September 05, 2013, 04:14:18 PM »

Wow, technology is so great!  Now, not only can you spray hundreds of images without any clue as to composition or framing and see if one of them ends up working, you don't have to worry about the lighting either.

RFPhotography
Guest
« Reply #4 on: September 06, 2013, 06:48:59 AM »

Quote from: AFairley on September 05, 2013, 04:14:18 PM
Wow, technology is so great!  Now, not only can you spray hundreds of images without any clue as to composition or framing and see if one of them ends up working, you don't have to worry about the lighting either.

Well, not quite.  In fact, you have to spend a great deal of time lighting from different angles.  Oloneo has something similar in its PhotoEngine software.  With theirs you need to take only up to 6 shots, each with different lighting turned on/off, and can then adjust the mix in their software.  This isn't new technology.  The way Cornell is implementing it is new, but it's the practicality of the Cornell solution that I question.
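The underlying trick, as I understand it, is simply that light adds linearly: the differently-lit exposures can be recombined as a weighted sum, with the weights acting like virtual dimmers.  Roughly (my own toy illustration, not Oloneo's or Cornell's actual code):

```python
# Relighting by linear mixing. Each exposure captures the scene lit by
# one source; scaling and summing them simulates dimming those sources.

def mix_lighting(exposures, weights):
    """exposures: list of images (image[y][x] -> float luminance);
    weights: per-exposure dimmer values. Returns the relit image."""
    h, w = len(exposures[0]), len(exposures[0][0])
    out = [[0.0] * w for _ in range(h)]
    for img, wgt in zip(exposures, weights):
        for y in range(h):
            for x in range(w):
                out[y][x] += wgt * img[y][x]
    return out

# Two 2x2 "exposures": key light only, fill light only.
key = [[4.0, 0.0], [0.0, 0.0]]
fill = [[1.0, 1.0], [1.0, 1.0]]
relit = mix_lighting([key, fill], [0.5, 2.0])  # dim key, boost fill
print(relit)
```

The math is trivial; the practical question, as above, is capturing and pushing 100+ full-resolution exposures through it.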