Author Topic: 3D what and how  (Read 3979 times)
KevinA
Sr. Member
Posts: 898
« on: February 28, 2012, 01:32:17 PM »

I know it could be a passing fad.
But what are the ins and outs of it? You can buy various consumer cameras that claim to shoot 3D, so let's take them at face value. What next? You have all this 3D video: how do you edit it, and is there more than one way to output it?
Will it view as 3D on any 3D TV, or are there Sony 3D and Panasonic 3D systems that are incompatible with each other?
I'm slightly curious about it. I did see some really effective 3D at a show, much better than anything I've seen on the big screen.

Kevin.
Rhossydd
Sr. Member
Posts: 1888
« Reply #1 on: February 28, 2012, 04:37:03 PM »

There's a pretty high level of standardisation within the 3D market, so there shouldn't be any major compatibility issues between brands.

With respect to editing and distribution: there are several NLEs that happily handle 3D streams. The editor needs an awareness of the issues surrounding 3D to make a decent edit, but they're straightforward to understand. Distribution will need to be either Blu-ray or streamed (broadcast); again, not much of an issue.

The real trip-up point for 3D is acquisition. If you want to arrive at the sort of high-quality 3D that doesn't give you a headache after 15 minutes, you need to take a huge amount of care setting up the rig and lining up the cameras. Then it needs to be used carefully, with a good idea of how the end product will be viewed. The small subtleties of set-up make all the difference between an eye-popping gimmick and a final product you can sit, watch and enjoy.
The domestic cameras really aren't good enough and let the possibilities of the technology down.

I spent three full days at Sony's UK headquarters attending their 3D production course last year; it was highly technical and intensive. It gave me a half-decent start in understanding the tech involved, but I'd still say I'd only scratched the surface of the subject.

Paul
fredjeang
Guest
« Reply #2 on: February 28, 2012, 05:20:58 PM »
ReplyReply

Examples of editing (8 tutorials): http://www.youtube.com/watch?v=fFTzkNtzB7s&feature=related
KevinA
« Reply #3 on: February 29, 2012, 03:45:27 AM »

I'm assuming the set-up is down to the convergence angle of the two streams. I'm shooting aerials, so my distance is fairly well set. Would the amount of zoom change that? I suspect not.
I've shot still 3D from the air, but to get any great effect the separation needed to be in feet, not a few inches. I had wondered about playing with a consumer 3D camera, or even Sony's DEV-5 Digital Recording Binoculars. It's as much a curiosity thing on my part; I'm not sure of any commercial application for my clients........yet.
If something like the iPad went 3D I could see a market.
I'm assuming then that the two streams are a separate stereo pair and not a form of anaglyph, and that the end viewing screen determines whether polarised or shutter glasses are needed, not the input format.

Kevin.
Rhossydd
« Reply #4 on: February 29, 2012, 04:14:24 AM »

I'm assuming the set up is down to the convergence angle of the two streams.
There's actually a lot more to it than that. Convergence (and interaxial distance) is a property that's adjusted in real time, varying with the shot, all within the parameters for the whole shoot. Rig set-up involves a much more detailed line-up of the two cameras. It's the small details in line-up that make the difference between being able to watch 3D comfortably for prolonged periods and getting headaches and eye strain.
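As a rough illustration of how convergence, interaxial and focal length interact, here's a simplified parallel-rig model (my own sketch with illustrative numbers, not anything from Sony's course; all names and values are assumptions):

```python
# Simplified parallel-rig model: the convergence plane is set by shifting
# the two images horizontally, so objects at the convergence distance sit
# on the screen plane (zero parallax), nearer objects come out of the
# screen and farther objects recede behind it.

def sensor_disparity_mm(interaxial_mm, focal_mm, conv_dist_mm, obj_dist_mm):
    """Horizontal disparity on the sensor, in mm (positive = behind screen)."""
    return focal_mm * interaxial_mm * (1.0 / conv_dist_mm - 1.0 / obj_dist_mm)

def screen_parallax_mm(sensor_disp_mm, sensor_width_mm, screen_width_mm):
    """Scale sensor disparity up to a physical displacement on the screen."""
    return sensor_disp_mm * screen_width_mm / sensor_width_mm

# 65 mm interaxial, 35 mm lens, converged at 3 m, background at 30 m,
# a 24.9 mm wide sensor shown on a 2 m wide screen:
d = sensor_disparity_mm(65, 35, 3000, 30000)    # ~0.68 mm on the sensor
p = screen_parallax_mm(d, 24.9, 2000)           # ~55 mm on the screen
```

Note that focal length and interaxial both scale the disparity linearly (so zooming does change things), and the screen-width term is why the same footage plays so differently on different displays; the ~55 mm here sits just under a typical 65 mm eye separation, so the background would still fuse.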

Aerials are rather a specific case.
Quote
I'm assuming then that the two streams are a separate stereo pair and not a form of anaglyph, and that the end viewing screen determines whether polarised or shutter glasses are needed, not the input format.
Yes.
hjulenissen
Sr. Member
Posts: 1666
« Reply #5 on: February 29, 2012, 07:05:48 AM »

My Blu-ray player has a setting for how large my TV is; supposedly the 3D playback uses the display size for some rendering purpose.

-h
KevinA
« Reply #6 on: February 29, 2012, 10:08:11 AM »

Quote
There's actually a lot more to it than that. [...] Aerials are rather a specific case. Yes.

Thank you for the replies Paul,
Is the interocular determined by subject distance? Or is it more of a black-art combination of subject size, distance and focal length?

Kevin.
Rhossydd
« Reply #7 on: February 29, 2012, 10:59:18 AM »

Or is it more of a black-art combination of subject size, distance and focal length?
That's about it.
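In practice, the usual starting point for that black art is the much-quoted "1/30 rule": set the interaxial to roughly one thirtieth of the distance to the nearest subject, with a larger divisor for long lenses and big screens. A hedged sketch (the rule is a rough convention, and the numbers below are only illustrative):

```python
def rule_of_thirty_interaxial_mm(nearest_subject_mm, divisor=30.0):
    """Rule-of-thumb interaxial: ~1/30th of the nearest subject distance.
    Long lenses and large screens usually call for a bigger divisor,
    i.e. a smaller interaxial."""
    return nearest_subject_mm / divisor

ia = rule_of_thirty_interaxial_mm(2000)            # subject at 2 m -> ~67 mm
ia_aerial = rule_of_thirty_interaxial_mm(300_000)  # ground at 300 m -> 10 m rig
```

On that reckoning an aerial rig really does want a baseline measured in feet or metres, which matches Kevin's experience with stills.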
fredjeang
« Reply #8 on: February 29, 2012, 12:01:37 PM »

That's about it.
So, if I understand Paul's interesting inputs, it's more critical to plan each scene's details, in the sense that there is one "ideal" combination that has to be re-calibrated case by case?
 
Rhossydd
« Reply #9 on: February 29, 2012, 12:26:02 PM »

it's more critical to plan each scene's details, in the sense that there is one "ideal" combination that has to be re-calibrated case by case?
 
More than that. It may be necessary to adjust interaxial distance and convergence during the shot depending on the action. That's operated by a convergence puller.
This is why high quality 3D production is so spectacularly expensive, loads of highly specialised equipment and highly skilled operators are needed.

Have a look at the Hobbit blog excerpt on 3ality's web site to get an idea of what's involved. http://3alitytechnica.com/index.php
fredjeang
« Reply #10 on: February 29, 2012, 12:40:54 PM »

I saw it some time ago. Impressive.
« Last Edit: February 29, 2012, 01:51:01 PM by fredjeang »
Sareesh Sudhakaran
Sr. Member
Posts: 547
« Reply #11 on: March 05, 2012, 09:58:56 PM »

Is the interocular determined by subject distance? Or is it more of a black-art combination of subject size, distance and focal length?
Kevin.

There is no black art, Kevin. A couple of years ago I spent a lot of time on stereoscopy, even designed a camera prototype. Here are some notes based on your specific case:

1. The interaxial distance is something that can change based on how you want the elements of the scene to be placed in the 'depth' (think: box) you create. You might come across 'rules' such as keeping the interaxial distance greater than the interocular distance for far-away objects. In your case, depending on your height above ground, this might apply. If this is all you're shooting, I suggest a custom-made (two-camera) side-by-side setup. Why custom-made? Because I'm not sure consumer cameras can stretch the interaxial distance to the levels you might need.

2. If you are too high above the ground, you also run the risk of eliminating the stereoscopic effect entirely, just as with the human eye. Beyond a certain distance our stereoscopic ability vanishes, and objects far away (not really that far in terms of feet, mind you) compress to a 2D plane. The same goes for camera lenses. And if you try to force the issue, objects might start to look like miniatures.
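That fall-off can be put in rough numbers: the angular disparity between the eyes shrinks with distance, and once it drops below the eye's stereo acuity the scene flattens. A back-of-envelope sketch (the 65 mm eye spacing and ~20 arc-second threshold are textbook approximations, not measured values):

```python
import math

EYE_SEPARATION_M = 0.065                       # typical adult eye spacing
STEREO_ACUITY_RAD = math.radians(20 / 3600)    # ~20 arc-seconds

def disparity_rad(distance_m, separation_m=EYE_SEPARATION_M):
    """Binocular angular disparity of a point (small-angle approximation)."""
    return separation_m / distance_m

def stereo_limit_m(separation_m=EYE_SEPARATION_M, acuity_rad=STEREO_ACUITY_RAD):
    """Distance beyond which disparity falls below the acuity threshold."""
    return separation_m / acuity_rad

limit = stereo_limit_m()                   # a few hundred metres for natural eyes
hyper = stereo_limit_m(separation_m=10.0)  # a 10 m baseline pushes it much further
```

This is also why forcing a huge baseline makes landscapes look like miniatures: the disparity pattern matches what the eyes would see from much closer.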

3. Another 'rule' professionals follow is to NOT set convergence while shooting, for many technical reasons. I suggest you start reading here: http://www.dashwood3d.com/blog/beginners-guide-to-shooting-stereoscopic-3d/ and if you are really serious you HAVE to study these two books:

  • Foundations of the stereoscopic cinema by Lenny Lipton (probably the Ansel Adams of the stereoscopic world)
  • 3D Movie Making by Bernard Mendiburu

4. As to why stereoscopy hasn't caught on, regardless of the hype, marketing and Hollywood films, the answer is pretty simple: you can only make one kind of 'depth box' for each type of 'screen' (you can make two kinds if you use three cameras, but that is another nightmare altogether). What this means is that if you apply the interaxial distance and convergence calculations (plus a host of other post-production calculations) for a particular screen type, a cinema screen for example, then the 3D won't work for a smaller screen type. What's worse, the distance of the viewer from the screen (and the angle) is also very important. There are too many variables that are beyond the filmmaker's control. Multiplexes usually conform to the SMPTE or THX standard, and the viewing angle and distances are controlled. This allows filmmakers to estimate the 3D experience (just like they do with audio) within acceptable tolerance levels. But at home, all this goes out the window.
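Point 4 is easy to quantify: parallax is baked into the footage as a fraction of image width, so the physical displacement (and therefore the depth) changes with every screen, and positive parallax wider than the viewer's eye separation forces the eyes to diverge. A sketch with illustrative numbers (the screen widths and the 65 mm limit are assumptions):

```python
EYE_SEPARATION_MM = 65.0   # positive parallax beyond this forces divergence

def physical_parallax_mm(parallax_fraction, screen_width_mm):
    """Convert parallax stored as a fraction of image width to millimetres."""
    return parallax_fraction * screen_width_mm

def diverges(parallax_fraction, screen_width_mm):
    """True if background parallax exceeds eye separation on this screen."""
    return physical_parallax_mm(parallax_fraction, screen_width_mm) > EYE_SEPARATION_MM

# The same 1% background parallax:
tv_ok = not diverges(0.01, 1100)       # 50" TV (~1.1 m wide): 11 mm, fine
cinema_ok = not diverges(0.01, 12000)  # 12 m cinema screen: 120 mm, diverges
```

The same numbers graded for a cinema screen would be invisible as depth on a phone, which is the 'one depth box per screen' problem in miniature.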

Add to this the discomfort of wearing glasses, and having enough glasses for everyone watching at home! As far as I know, the only successful market penetration has been in the gaming world, where the experiences are controlled, and the 3D variables can be 'generated and manipulated on the fly' based on the systems used. There are really cool autostereoscopic displays out there, but then again, the quantity of quality content isn't there yet.

What Hollywood studios usually do is make the film for cinema and then 'redo' the 3D in post-production (a compromise at best, since studios can't control the viewer's screen size, and the interaxial and/or convergence values have been set in stone already). Even if one is making 3D for the Internet, DVD or Blu-ray exclusively, the screen size can range from 10" to 60". Lower frame rates don't help either, which is why Peter Jackson roots for 48fps (a compromise for wide release), Cameron fights for 60fps (he hopes enough theaters will be ready by then), and Douglas Trumbull backs 120fps. So far, nobody has an answer to this problem.

5. Add to all this the different technologies out there, from anaglyph to Dolby 3D to RealD to nVidia, ad infinitum. Every technology 'renders' 3D differently, and what works on one might not work on another!

Please understand that my intention is not to discourage you, but to help you realize it's a field where a lot of effort, dedication, study and commitment is involved. You can't fake it, and you can't copy-paste.

If you do manage to wade through the muck and find solutions to your own problems, I think, ultimately, it is a worthwhile endeavor. And it's a lot of fun once you get the hang of it. There are many resources on the internet, and lots of support so you'll never get stuck. I hope this helps to get you started.

Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
Rhossydd
« Reply #12 on: March 06, 2012, 03:14:08 AM »

3. Another 'rule' professionals follow is to NOT set convergence while shooting, for many technical reasons.
Not a 'rule' and most 3D productions DON'T use that approach.
Quote
4. As to why stereoscopy hasn't caught on, regardless of the hype, marketing and Hollywood films, the answer is pretty simple: You can only make one kind of 'depth box' [...]
That's just one reason.
"The" reason is somewhat more fundamental. Current 3D is basically unnatural to see.
The eye/brain perceives 3D from many sources and cues, but one of the key cues is where the eye is focussed and how it changes focus with distance. This is also coupled to the physiology that makes the eyes converge on nearer objects and automatically focus closer. This all happens by reflex and, for the vast majority of people, is impossible to override.
As soon as you need to focus on a fixed plane, i.e. a cinema or monitor screen, those mechanisms stop working. It's also impossible to get secondary 3D cues from the eyes 'ranging' through the scene, as the depth of field is completely fixed, which, again, is unnatural.
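That vergence/accommodation mismatch can be expressed in numbers: focus (accommodation) is locked to the screen while the eyes converge on the virtual object, and measured in diopters (1/metres) vision research such as Shibata et al.'s 'zone of comfort' work suggests mismatches much beyond about half a diopter become uncomfortable. A rough sketch (the 0.5 D threshold is an approximation drawn from that literature):

```python
def va_conflict_diopters(screen_dist_m, virtual_dist_m):
    """Vergence-accommodation mismatch: focus stays at the screen while
    the eyes converge on the virtual object, both expressed in diopters."""
    return abs(1.0 / virtual_dist_m - 1.0 / screen_dist_m)

COMFORT_LIMIT_D = 0.5   # approximate comfort threshold

# Cinema at 15 m: an object popping out to 2 m is only ~0.43 D.
cinema = va_conflict_diopters(15.0, 2.0)
# TV at 2.5 m: an object floating at 0.8 m is ~0.85 D, past the limit.
tv = va_conflict_diopters(2.5, 0.8)
```

This is one reason the same depth budget that works in a cinema can be punishing in a living room.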

Most people can cope with the illusion and see the 3D effect, but very few people find watching 3D comfortable for prolonged periods of time, some will suffer headaches very quickly and some simply can't process it and don't see the 3D at all. The better the attention to detail with camera alignment, convergence and interaxial distance the easier the illusion is to view, but it still remains an illusion of depth.

Maybe at some future date a technology will develop that can actually give real depth information to our brain and is natural to view, then 3D may really take off. For now it's likely to remain just a gimmick to generate more sales.
Craig Murphy
Sr. Member
Posts: 312
« Reply #13 on: March 06, 2012, 03:57:30 PM »

Hugo was awesome! :)

CMurph
KevinA
« Reply #14 on: March 07, 2012, 04:56:46 AM »

Quote
There is no black art, Kevin. [...]

Consider me discouraged!!
It was just a flash of an idea. When I shot 3D stills from the air I made a two-camera setup, but in truth one camera was all that was needed: just firing off a few shots on a run, then aligning the best-suited pair in Photoshop. The distance between shots was feet, not inches.
3D isn't going to work for everyday home use with all that to contend with, too.
I did see some 3D at a show on a monitor stand; the conclusion I came to was that the smaller the screen, the better the 3D. But it did look good, which started me thinking. I've stopped thinking now; it gets me into expensive trouble.
Thanks for all the replies. I am now an armchair dinner-party expert on 3D.
I think to shoot aerial 3D successfully you would need a Cineflex on each end of a boom, synced together; then you could only shoot in one direction, no panning, just zooming, tilting and forward motion. Well out of my league.

Kevin.
Sareesh Sudhakaran
« Reply #15 on: March 07, 2012, 09:12:52 PM »

Quote
Not a 'rule' and most 3D productions DON'T use that approach. That's just one reason.

You're right. If a professional knows what he/she is doing, then it probably is an acceptable approach to set convergence for each shot. And they also get to live with the consequences of that decision.