Author Topic: How does BMPCC video/film compare to Canon DSLR with Magic Lantern raw video?  (Read 19974 times)
bcooter
« Reply #40 on: October 19, 2013, 09:09:37 AM »

Morgan,

You can white-click in REDCINE-X if you shoot with RED, though I find it rarely works for continuity.  I agree with John that it is preferable to shoot a series of scenes with a locked-in color temperature, usually 3200 for tungsten and practical tungsten, 5800 for daylight, but that depends on the camera and scene.

But if you want to click on a grey or white card Apple's 3 way color corrector has a dropper for shadows, midtones and white balance and good auto settings for all three ranges.

ProRes 422 HQ holds up quite well in converting, and we use the 3-way color corrector for basing out the footage prior to the edit.

I also agree with John that ambient color bounce affects a scene.

I just edited a scene where the subjects walk from a daylight room into an area that is mostly tungsten. I balanced out the two clips, then overlaid the adjusted tungsten clip over the adjusted daylight clip and changed the opacity in a ramp-style transition, so the correction from daylight blue to tungsten orange was subtle. But everybody has a different way of working.

One software I've just begun to use is Red Giant's Colorista II.   I find it's excellent for finishing out once the edit is locked, but as soon as we move to 4K editing, I'll probably have to change that.

Not to go off topic, but I assume that DaVinci Resolve 10 is the beginning of making a color suite a full-featured NLE, so some of this is probably moot in a few years.  I think DaVinci will be the bridge for editors who work in FCP 7 and need a faster 64-bit editorial suite but are resistant to FCP X.   (I think I fall in this category.)

IMO

BC

Morgan_Moore
« Reply #41 on: October 19, 2013, 09:57:19 AM »

I know there are a million subtleties to each scene or job - from clinical accuracy to wild "art" - and a million tricks, like grey-balancing outdoors or close to a key light and then shooting in a different location, but overall I think an accurate start point is important. And the smaller your budget, the more you hit odd lights and have to work with them; the bigger your budget, the more you can just do what you want with lights.

As for the suite - I'm a huge fan of Resolve in every aspect but this.

The power windows, tracking and layer opacity make it shine above all other suites IMO

Sam Morgan Moore Cornwall
www.sammorganmoore.com -photography
Sareesh Sudhakaran
« Reply #42 on: October 20, 2013, 01:43:26 AM »

I know there are a million subtleties to each scene or job - from clinical accuracy to wild "art" - and a million tricks, like grey-balancing outdoors or close to a key light and then shooting in a different location, but overall I think an accurate start point is important. And the smaller your budget, the more you hit odd lights and have to work with them; the bigger your budget, the more you can just do what you want with lights.

Or you could string together four Arri L7-Cs and have 'raw' light as well. Paint the scene in real time. Cost to you? $10,000, a van and a gym membership....maybe the occasional visit to the chiropractor as well.

Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
Morgan_Moore
« Reply #43 on: October 20, 2013, 02:01:55 AM »

I made a simple comment suggesting a raw software should have a grey-balance button, and now I'm supposed to be investing $10k in lights?
« Last Edit: October 20, 2013, 02:30:19 AM by Morgan_Moore »

Sareesh Sudhakaran
« Reply #44 on: October 20, 2013, 07:17:36 AM »

I made a simple comment suggesting a raw software should have a grey-balance button, and now I'm supposed to be investing $10k in lights?

Just kidding!

bcooter
« Reply #45 on: October 20, 2013, 09:42:05 AM »

I made a simple comment suggesting a raw software should have a grey-balance button, and now I'm supposed to be investing $10k in lights?

Come on Morgan don't be cheap.

You should look at 30 grand in lights, because you never know what will happen and we always need backups.



IMO

BC

jjj
« Reply #46 on: October 20, 2013, 01:24:22 PM »

Thanks for the scripts, but I don't want to post-process motion in LR or PS - I'd like to use DaVinci, which is awesome apart from a missing "start point"
A film maker colleague recently tried LR instead of DaVinci and was amazed at how much easier it was to use in comparison.

Quote
I cannot see how wanting a start point is in any way controversial
It isn't. Being rude to a helpful poster is however well...rude.

Tradition is the Backbone of the Spineless.   Futt Futt Futt Photography
Morgan_Moore
« Reply #47 on: October 20, 2013, 11:51:00 PM »

LR and C1 certainly make it simpler to get a start point and a single look with a file, but without any motion tracking I can't see how they are really any use at all for processing motion. I'm firmly of the belief that Resolve is by far the best affordable motion colouring suite.

It is never my intention to be rude, especially to those whose work I deeply respect. I do however expect some respect for the stills industry, where people like Cooter are clearly better and more experienced with colour than most of Hollywood put together. I'd like to promote mutual learning and exchange between all those passionate about the creation of good images.


hjulenissen
« Reply #48 on: October 21, 2013, 01:53:08 AM »

So we're in agreement that using "chroma sub-sampling" terminology is not the correct language to use when describing a bayer sensor performance. You're wanting to "compare them" using the wrong terminology.
Well, I never used "chroma sub-sampling" as terminology to describe a Bayer sensor. It seems like you are arguing against a straw man.
Quote
I'm not really disagreeing with the point you're actually trying to make about differences in the way chroma is "captured" but using chroma sub-sampling terminology is the wrong way to make your point, which is I believe, that there are less blue and red pixels relative to green in a given bayer sensor.
The fact is that our camera sensors are (usually) Bayer, and our video storage format is (usually) YCbCr with or without chroma subsampling. Comparing those two _is_ relevant, no matter how this hurts your philosophical view.

People will ask themselves things like:
*Will I lose anything by converting a (Bayer-originated) native 1080p stream to 4:2:2 or 4:2:0?
*Will I lose anything significant?
*My target/editing occurs at 1080p 4:2:2, and color transitions are of utmost importance to me. What sensor resolution would be a (necessary) component in delivering this?

Answering those questions can be hard, and will usually include some "ifs" and "depending". Some idealized ballpark measures can still be handy, though.
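Those ballpark questions can be poked at numerically. Below is a toy numpy sketch (my own illustration, not any codec's actual filter chain) of what a 2x chroma down/up-sample round trip - the kind of thing 4:2:0 storage implies - does to a hard colour edge:

```python
import numpy as np

def subsample_chroma_roundtrip(chroma, factor=2):
    """Box-average a chroma plane down by `factor` in both axes, then
    nearest-neighbour upsample back -- a crude stand-in for 4:2:0 storage."""
    h, w = chroma.shape
    small = chroma.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# A synthetic Cb plane with a hard vertical colour edge.
cb = np.zeros((8, 8)); cb[:, 4:] = 1.0
flat = subsample_chroma_roundtrip(cb)
print(np.abs(flat - cb).max())    # 0.0 -- edge lands on a 2x2 block boundary

# Shift the edge by one pixel and the round trip blurs it.
cb2 = np.zeros((8, 8)); cb2[:, 3:] = 1.0
flat2 = subsample_chroma_roundtrip(cb2)
print(np.abs(flat2 - cb2).max())  # 0.5 -- chroma detail is lost
```

Whether that half-step error is "significant" is exactly the "ifs and depending" part: it only shows up where chroma changes faster than luma does.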
Quote
I'm being pedantic
Frankly, it seems to me that you have just learned something new and want to boast about it.
Quote
about this because your next leap of logic is really a gross simplification and in my opinion gives a misleading idea about Bayer sensors having "half" the chroma resolution.

I see it differently.

Quote
At 1920 x 1080 there are a fixed number of pixels. And this is the point. You're throwing pixel resolution into a discussion about colour fidelity and making a direct leap from the ratio of RGB pixels to "colour" resolution.

The inference of the sensor only having "960x540" worth of blue pixels doesn't make sense, because you never only have the blue channel; nothing we shoot is ever so highly monochromatic. Even blue LEDs emit a range of "blue", and for that matter even the "blue" pixels have a huge overlap of sensitivity to the other colours. If it were that monochromatic, we'd have a very odd-looking image indeed.
Talking about single-colour-channel variation is an extreme case that can enlighten us about the more general behaviour.

Quote
So while I would absolutely accept there *IS* a difference in the chroma resolution, you can't use video encoding terminology as it over simplifies what's actually going on and leads to these kinds of numbers being kicked around and it leads to silly conclusions about cameras not being good enough for chroma keying for example.
I never claimed that any cameras were not good enough for chroma keying. In fact, I believe that all of my claims were fairly accurate, while you have been putting words in my mouth throughout this discussion.
Quote
Yes there is a case for oversampling with Bayer sensors, and that's exactly the thinking behind a camera like the Sony F65 having an 8K sensor for 4K raw files.  Can you name for me another RAW camera that oversamples in this way to address this "problem" of reduced chroma resolution ?  
If anything, I would dare to claim that this "problem of reduced chroma resolution" seems to be overrated*) by many videophiles and enthusiasts. If there are few cameras that oversample (specifically) to overcome this, it could be a reflection that it really does not matter that much for most material. There are other good, technical reasons to oversample that would be outside the scope of our discussion.

I have some knowledge about the technology and theory behind video. I have never touched any of the cameras that you mention, and if you gave one to me I would know nothing about how to produce interesting videos with it.

-h
*)Like stated in my first post: I acknowledge that there are or might be situations where 4:2:0 is a real problem, just as there are situations where the Bayer CFA is a real problem. It is not apparent to me that this occurs very often, and when it occurs, it might be that jumping 1080p 4:2:2->4k 4:2:2 is a better option than jumping 1080p 4:2:2 -> 1080p 4:4:4.
« Last Edit: October 21, 2013, 02:56:26 AM by hjulenissen »
John Brawley
« Reply #49 on: October 21, 2013, 04:06:21 AM »

Well, I never used "chroma sub-sampling" as a terminology to describe a bayer sensor.

I think you do though when you make statements like this...



For camera sensors that have a Bayer CFA at the same pixel count as that of the luma channel, "full"/true 4:4:4 information is not available.



The fact is that our camera sensors are (usually) Bayer, and our video storage format is (usually) YCbCr with or without chroma subsampling.

My point actually is that YCbCr IS chroma subsampling, full stop, even when it's 4:4:4 - even if the ratio is 1:1:1 - because it's no longer RAW sensor data.  It's "encoded" video. In fact it's VIDEO.  The Bayer data isn't video until it gets transcoded into VIDEO, and it's at that point that one chooses the chroma subsampling used, even if it's 4:4:4.

It's no longer RGB. 

It's no longer Raw Bayer sensor. 

The terminology is important.

Chroma-subsampled video = encoded video

All of the variants, be they 4:4:4, 4:2:2, 4:2:0, 4:1:1, 3:1:1 or 22:11:11, are all ways of describing encoded video.

None of those terms are ever used to describe raw bayer data.
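For what it's worth, the J:a:b labels are pure bookkeeping over a J-wide, two-row reference block: `a` chroma samples in the first row, `b` in the second. A tiny sketch (my own, purely illustrative) of the stored chroma fraction each label implies:

```python
def chroma_samples_per_luma(j, a, b):
    """Fraction of chroma samples stored per luma sample, per chroma plane,
    for a J:a:b subsampling label (J-wide, two-row reference block)."""
    return (a + b) / (2 * j)

for label in ["4:4:4", "4:2:2", "4:2:0", "4:1:1", "22:11:11"]:
    j, a, b = map(int, label.split(":"))
    print(label, chroma_samples_per_luma(j, a, b))
# 4:4:4 -> 1.0, 4:2:2 -> 0.5, 4:2:0 and 4:1:1 -> 0.25, 22:11:11 -> 0.5
```

Note that these ratios only describe encoded video containers - which is the point being made: none of them says anything about the Bayer mosaic that fed them.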


People will ask themselves things like:
*Will I loose anything by converting a (Bayer originated) native 1080p stream to 4:2:2 or 4:2:0?
*Will I loose anything significant?
*My target/editing occurs at 1080p 4:2:2, and color transitions are utmost important to me. What sensor resolution would be a (necessary) component in delivering this?

Which are all fine to ask.  But using "encoded video" terminology to compare to a Bayer sensor's RGB pixel ratio is just misleading and a simplification.


I never claimed that any cameras were not good enough for chroma keying.



...Which is not to say that 4:4:4 _never_ have merits. I understand that green-screening have benefits from 4:4:4.

And again, this is my point.  Using 4:4:4 terminology and associating it with "benefiting" chroma keying implies that 4:2:2, the next qualitative metric down from 4:4:4, is somehow inferior.

Of course it is: in keying, 4:4:4 will be an advantage. But you can't quantify Bayer sensor data as having only "4:2:2" worth of chroma resolution, the next step down from 4:4:4.


If there are few cameras that oversample (specifically) to overcome this, it could be a reflection that it really does not matter that much for most material.


I agree.  The cameras I mentioned are at the very top price bracket for any camera, and in the end it doesn't matter very much at all. 

But there is a large difference to me between 4:2:2 and 4:4:4 video in terms of the end result of colour information.

That is my very point in taking you up on the use of this terminology.

There is very little difference to me, though, in cameras that have oversampled Bayer sensors so that I can get what your own mathematics demands as "true" resolution.

4:2:2 vs 4:4:4 = a big difference in end result. 4:2:2 really does have "half" the chroma resolution.

An 8K sensor sampled down to 4K 4:4:4 video vs a 4K sensor sampled to 4:4:4 video makes much, much less of a difference in terms of chroma resolution.  It's certainly not half, which is what your comparison is saying.

By your "comparison" there is half the chroma resolution on the 4K sensor for 4:4:4 video compared to using an 8K sensor, but the end result doesn't actually turn out that way.

It is not apparent to me that this occurs very often, and when it occurs, it might be that jumping 1080p 4:2:2->4k 4:2:2 is a better option than jumping 1080p 4:2:2 -> 1080p 4:4:4.

Agreed, in general, assuming all other metrics are equal. There are very few cameras that can actually do this, though, and bit depth and compression codec factor more heavily most of the time.

jb
jjj
« Reply #50 on: October 21, 2013, 06:35:14 AM »

LR and C1 certainly make it simpler to get a start point and a single look with a file, but without any motion tracking I can't see how they are really any use at all for processing motion. I'm firmly of the belief that Resolve is by far the best affordable motion colouring suite.
One doesn't always need motion tracking, just like one doesn't always need Photoshop's magic wand to select an area to change with LR's different way of working. Sometimes LR is better than DaVinci, just like sometimes a Canon is a better tool than a Hasselblad. Horses for courses and all that. :)

Quote
It is never my intention to be rude, especially to those whose work I deeply respect. I do however expect some respect for the stills industry, where people like Cooter are clearly better and more experienced with colour than most of Hollywood put together. I'd like to promote mutual learning and exchange between all those passionate about the creation of good images.
I certainly agree the raw workflow is superior to a non-raw workflow and find it frustrating to effectively go back in time when grading video footage. But I would hesitate to claim photographers are necessarily better at colouring/grading than Hollywood colourists just because photographers have [in some respects] better tools. Heck, they've been at it longer than us.  :)
It seemed like you were having a go at John because of arguments you may have had elsewhere with other people; I didn't notice anything where he put down photographers. He simply explained a workflow used by a crew of professional film makers.
hjulenissen
« Reply #51 on: October 21, 2013, 06:44:10 AM »

I think you do though when you make statements like this...
What I was trying to explain was that using 6 megasamples to store 2 megasamples will always be a redundant way of storing things*). Equivalently, if you want a 6-megasample file to contain true (unpredictable) information, you need a sensor of at least 6 megasamples (realistically more). Thus, when you store your 1080p Bayer image as 1080p 4:4:4, you are not getting the information that this format (ideally) can convey. Which can be a bad thing, or might not matter.

I never wrote that a 1080p Bayer sensor inherently _is_ 4:2:0, or that it maps 1:1 to the information of a 4:2:0 stream.

*)some caveats that I will add at a later time if the interest is really there
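The megasample bookkeeping above is easy to verify - a back-of-envelope check, assuming a 1920x1080 Bayer sensor with one colour measurement per photosite:

```python
w, h = 1920, 1080
sensor_samples = w * h   # each Bayer photosite measures exactly one colour
stored_444 = 3 * w * h   # 1080p 4:4:4 stores Y', Cb and Cr at full resolution
print(sensor_samples)                # 2073600  (~2.1 megasamples measured)
print(stored_444)                    # 6220800  (~6.2 megasamples stored)
print(stored_444 // sensor_samples)  # 3 -- a 3x redundant container
```

So a 1080p 4:4:4 file fed from a 1080p Bayer sensor carries at most a third of the independent information the container could hold - the redundancy argument in a nutshell.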
Quote
My point actually is that YCbCr IS chroma subsampling full stop, even when it's 4:4:4. Even if the ratio is 1:1:1 because it's no longer RAW sensor data.  
That may be your usage of "chroma subsampling", but it does not correspond with how the rest of the world uses the term.

Would you say that a 16-bit developed still-image *.tiff file is "chroma subsampled because it is no longer RAW"? I don't think anyone would, except possibly a few people who mix up the Bayer CFA with CbCr sampling...
Quote
It's "encoded" video. In fact it's VIDEO.  The bayer data isn't video until it's gets transcoded into VIDEO, and it's at that point one chooses the choma subsampling used, even if it's 4:4:4.
Are you the kind of guy who protests loudly any time anyone talks about a digital image, claiming that "it is only bits & bytes, it is not an image until it is printed"?
Quote
Quote
I never claimed that any cameras were not good enough for chroma keying.
Quote
I understand that green-screening have benefits from 4:4:4.
And again, this is my point.  Using 4:4:4 terminology and associating it with "benefiting" chroma keying means that 4:2:2, the next qualitative metric down from 4:4:4 is somehow inferior.
Both of your quotes from me seem to be accurate and true.

If I were to design a chroma keying algorithm and had the choice between 4:4:4 input or the same input converted to 4:2:2, I would prefer 4:4:4. For some kinds of sources it might not matter much, but it would never hurt (except in bandwidth and processing cost).

Quote
That is my very point at taking you up on the use of this terminology.
I am sorry, but I think that you are taking up the wrong guy, for the wrong reasons with little effect other than wasted bandwidth.

-h
« Last Edit: October 21, 2013, 06:57:54 AM by hjulenissen »
jjj
« Reply #52 on: October 21, 2013, 06:49:39 AM »

I am sorry, but I think that you are taking up the wrong guy, for the wrong reasons with little effect other than wasted bandwidth.
Yup, certainly 'taking up the wrong guy'. hjulenissen doesn't like facts that conflict with his slightly strange world view, so he's best avoided.
Morgan_Moore
« Reply #53 on: October 21, 2013, 07:00:17 AM »

One doesn't always need motion tracking, just like one doesn't always need Photoshop's magic wand to select an area to change with LR's different way of working. Sometimes LR is better than DaVinci, just like sometimes a Canon is a better tool than a Hasselblad. Horses for courses and all that. :)
I certainly agree the raw workflow is superior to a non-raw workflow and find it frustrating to effectively go back in time when grading video footage. But I would hesitate to claim photographers are necessarily better at colouring/grading than Hollywood colourists just because photographers have [in some respects] better tools. Heck, they've been at it longer than us.  :)
It seemed like you were having a go at John because of arguments you may have had elsewhere with other people; I didn't notice anything where he put down photographers. He simply explained a workflow used by a crew of professional film makers.

Now two total threads :) I find nearly every secondary needs motion tracking; I also like the big scopes, nodes and many other bits of Resolve... but motion tracking is the big one vs a stills software. After a lot of experiments, Resolve is now my choice to deliver colour on time and on budget.

A workflow by professional film makers - yes, everyone is welcome to use their system, especially those who make it work wonderfully. But that does not make it the right or correct solution for all, especially when one considers the implications of even a 'monitor' sensitive enough to dial out a filter's green cast 'on set' - basically that is a tethered monitor with AC supply, or at least a multi-k onboard monitor (you can't see shit on a $1k SmallHD in terms of colour - but coming from raw stills I only see on-set colour as a guide, and the SmallHD is good enough for that). Or 'lighting to a certain temperature' - wonderful if you have a van full of HMIs, but not so good for solo efforts.

As for the experience of grading - well, most corporate video looks horrid, and most Hollywood work is graded by professional specialists; neither path really puts the DP in the driving seat in the way that smaller stills houses work, which for me means I have controlled the look from lighting and exposure choice to delivered file on 90% of my jobs for 15 years. Also, not all work is dramatic; imaging work may go from copying artworks and product to any level of dramatic contrivance. The suggestions I make are more towards doing the first end of the scale on time and on budget.

Fundamentally I think we all need/want to become faster, more flexible, better or cheaper... to get more from our budget. I think adding those tools to Resolve would aid that; it is the right thing, the 'normal' thing in raw grading, and of course any user can choose to leave them untouched.





« Last Edit: October 21, 2013, 07:05:03 AM by Morgan_Moore »

bcooter
« Reply #54 on: October 21, 2013, 09:05:59 AM »


As for the experience of grading - well most corporate video looks horrid, most hollywood is graded by professional specialists, .......snip......I make a more towards doing the first end of the scale on time and on budget.

Fundamentally I think we all need/want to become faster/more flexible better or cheaper.. to get more from our budget and I think adding those tools to Resolve would aid that, is the right thing, is the  'normal' thing in raw grading, and of course any user can choose to leave them untouched.

Morgan, on some of this I agree, but I think where some of this falls down is the constant comparison of "Hollywood" production vs what most of us do.

If Hollywood wants an effected, blue-tone look with great deep skintones, they build a set or change a practical location to match, wardrobe for the look and color, test various stocks or digital settings, test through the colorists, and then go forward.

I see it all the time in our work.  A client wants a cool, desaturated look, though the location has brown walls and yellow trim and the wardrobe is bright primary colors, and that will never color as they anticipate.

On a scene like that you can color, mask, matte, select and effect all you want, but a brown scene becomes very global when brought down to deep unsaturated blue.  You can get close to the look, but it's never going to have the exact refinement of a "Hollywood" project that is planned for the look long before the cameras start rolling.

If you've seen the movie "Rush", shot by a very good DP, Anthony Dod Mantle, on what in Hollywood terms is a "low budget" of $38 million (which didn't allow for any over-the-line expenses): to achieve the look he wanted, he had teams of colorists working on site and didn't move off a shot until he was sure he had it.

The movie Gravity took four - let me repeat this, four - years to complete, so in my view any of us comparing our work to budgets that range in the hundreds of millions is not an apples-to-apples comparison.

Few of us have that luxury, and the common thought is shoot what you've got and we'll try to fix it later, which means post work is double the time of the actual pre-production and on-set time.

I do agree that DaVinci is good (though I don't think it's that great), and maybe 10 will be more intuitive. Though if we had a Lightroom interface that worked with layers, had some tracking and keying, and used sliders instead of those silly wheels, we (and even professional colorists) would find it a godsend.

I personally like coloring on the timeline, because I can play and replay and see that my continuous look is true to the story. Though suites like Colorista II are not exactly an easy slam dunk, I find that once we have a locked edit and are ready for finish, staying on the timeline gives me a more cohesive look, even if round-tripping is linear - because round-tripping rarely gives you the sound, the soundtrack, the effects and the titles that rest on the timeline.

In video, or motion, or digital cinema (take your pick of terms), we still have one foot in the new world and one foot in the past, and it's time to step up and find a way to go forward faster and easier.

It's funny - I have a friend and supplier who's a top-drawer effects artist.  At his fingertips he has Flint, Fire, Flame, Smoke, every expensive effects suite ever made, and when we talk about his projects and what he shows on his reel, I ask what something was done in and he routinely says uh... Photoshop... uh, After Effects... uh, Photoshop again...

But watching recent films and expensive television production, I see budget and time (same thing) affecting the look more and more.   I just saw a Hollywood movie last night and the reverse shots of the actors had such different coloration and look, it seemed like they were shot on different sets with different crews. I assume the DP and director wanted a better match, but I bet budget stepped in and said OK, that's good enough, lock it.

IMO

BC

jjj
« Reply #55 on: October 21, 2013, 10:12:37 AM »

After Effects gets used a lot for things like the HUD in Iron Man or green-screen work like in Meet The Fockers - although AE isn't mentioned in this post, Hype specifically mentions using it in lots of features elsewhere, then links to this example of his work.
Morgan_Moore
« Reply #56 on: October 21, 2013, 12:11:19 PM »

Well, if I was looking for examples of colour on a 'budget' (i.e. not building the whole thing from scratch with an art team), I would point to work such as your stills before I would point to the work of many 'videographers' :)





Sareesh Sudhakaran
« Reply #57 on: October 22, 2013, 03:07:13 AM »

Quote
That may be your usage of chroma subsampling, but it does not correspond with how the rest of the world use that word.

Would you say that a 16-bit developed still-image *.tiff file is "chroma subsampled because it is no longer RAW"?

To obtain a Y'CbCr stream you must first sub-sample, always. What you sample is split into luma and chrominance information, which forms part of the core specification of a stream that can properly be labelled Y'CbCr.

John never mentioned a TIFF file, he specifically said "My point actually is that YCbCr IS chroma subsampling full stop..." In this respect he is correct. Scientifically you could sample for anything, even bacteria on the sensors...but in the video world, Y'CbCr is always in reference to sub-sampling of chrominance.

Quote
Using 4:4:4 terminology and associating it with "benefiting" chroma keying means that 4:2:2, the next qualitative metric down from 4:4:4 is somehow inferior.

It is, because you're sampling at half the frequency. Once you've digitized a sampled feed there is no going back. Sampling is always destructive, by definition.

However, the beauty of the solution is that you manage to throw away a third of the data while retaining 100% of the perceptual color information. An algorithm that tries to extrapolate color information from a 4:2:2 signal will always struggle, because it cannot be expected to know the conditions of the sampling (trade secrets, and even mistakes).

I have extensively tested 4:2:2 against 4:2:0 and RGB on many top-class keyers (Primatte, Ultimatte, IBK, alpha, etc.), and I must say that 4:2:2 is great for most tough keys. What I've found is that the 'impossible' keys also struggle at 4:4:4, and you must use multiple methods to pull something out; the results are similar to if you had used 4:2:2.

Quote
If I was to design a chroma keying algorithm and had the choice between 4:4:4 input, or the same input converted to 4:2:2, I would prefer 4:4:4. For some kinds of sources it might not matter much, but it would never hurt (except bandwidth and processing cost).

In fact, you should prefer neither. When working with data it is always preferable to have the most 'virgin' feed possible. In the case of Bayer sensors, that feed is raw plus a supreme understanding of the specific CFA, sensor and lens used. However, no camera manufacturer will divulge many of the details necessary for a third-party algorithm (to say nothing of patent restrictions and licensing). Therefore, if I were to design an algorithm, I would prefer RGB data any time. Without RGB data, you're always a slave to the manufacturer's proprietary RAW processor (Resolve, REDCINE-X, Arri, Sony viewer, and so on). Not that that's a bad thing!

Any subsampling, even 4:4:4, is not preferable to RGB, which is preferable to RAW for unscientific reasons!

Quote
It's funny, I have a friend and supplier that's a top drawer effects artist.  At his fingertip he has used Flint, Fire, Flame, Smoke, every expensive effects suite ever made and when we talk about his projects and what he shows on his reel, I ask what was that done in and he routinely says uh . . . photoshop . . . uh After Effects . . . uh photoshop again . . .

It can edit, it can grade (as well as Resolve and Speedgrade) and it can pull off VFX (as well as Nuke or Shake) like any other app on the planet. Where the Autodesk suite had its advantage was in working with 3D models and 3D space; AE now has that too. It has a tracker, superb masking (better than power windows), rotoscoping and keying controls. There's nothing AE can't do... except maybe audio. In quality, nothing out there is better. And its encoding engine is probably the finest I've seen for mastering.

To stay with the topic of this thread: where AE lags behind Resolve is that Resolve is a better RAW processor for Blackmagic cameras. Also, Resolve 10 beta works and feels much better than Resolve 9 (I only use the Lite version). Within the next week Adobe is expected to release their CinemaDNG update to CC, so things might change.
hjulenissen
« Reply #58 on: October 22, 2013, 02:00:24 PM »

To obtain a Y'CbCr stream you must first sub-sample, always. What you sub-sample are split into Luma and Chrominance information, which forms part of the core specifications of a stream that can be properly labelled Y'CbCr.
"Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance."
http://en.wikipedia.org/wiki/Chroma_subsampling

The conversion of some R'G'B' format to some Y'CbCr format consists of:
1. Apply pre-shift
2. Apply a 3x3 linear (invertible) transform
3. Apply post-shift
4. Subsample chroma to something like 4:2:2 or 4:2:0, or leave it at 4:4:4
5. Quantize and clip

Steps 1-3 are invertible (lossless); steps 4-5 are not. In an actual design the processing may deviate somewhat and the precision may be reduced, but the general black-box behaviour should align with this description.
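The invertibility of steps 1-3 is easy to check numerically. A minimal sketch (the coefficients are BT.709's - my assumption, since no standard is named above; pre/post-shifts omitted for clarity):

```python
import numpy as np

kr, kb = 0.2126, 0.0722           # BT.709 luma coefficients (assumed)
kg = 1.0 - kr - kb
# Rows: Y' = Kr*R' + Kg*G' + Kb*B';  Cb = (B'-Y')/(2(1-Kb));  Cr = (R'-Y')/(2(1-Kr))
M = np.array([
    [kr, kg, kb],
    [-kr / (2 * (1 - kb)), -kg / (2 * (1 - kb)), 0.5],
    [0.5, -kg / (2 * (1 - kr)), -kb / (2 * (1 - kr))],
])

rgb = np.array([0.3, 0.6, 0.1])
ycbcr = M @ rgb                    # step 2: a plain 3x3 linear transform
back = np.linalg.inv(M) @ ycbcr    # ...which inverts exactly
print(np.allclose(back, rgb))      # True: lossless until steps 4-5
```

Only once you subsample the Cb/Cr planes (step 4) or quantize (step 5) does the round trip stop being exact.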
Quote
In fact, you should prefer neither.
That is a nonsensical answer. Given the choice of ice cream or cookies, you prefer chocolate? No meaningful discussion can be based on that logic.

-k
« Last Edit: October 22, 2013, 10:31:21 PM by hjulenissen »
Sareesh Sudhakaran
« Reply #59 on: October 23, 2013, 02:48:37 AM »

Quote
Steps 1-3 are invertible (lossless); steps 4-5 are not. In an actual design the processing may deviate somewhat and the precision may be reduced, but the general black-box behaviour should align with this description.

Quote
That is a nonsensical answer. Given the choice of ice cream and cookies you prefer chocolate?

You are mistaking digitally obtained data for lossless data. 0 = 1, so to revert, 1 = 0. To obtain this data you must first sample, then sub-sample. There are many analog sampling stages before you even begin to see data.

Digitization is lossy by definition.