Author Topic: AVCHD  (Read 3270 times)
fredjeang
Guest
« on: January 02, 2011, 07:09:01 PM »

Hi,

Again, some questions in this obscure, dark, muddy and uncertain universe of codecs.

We compress because otherwise the volume of data would exceed the capacity of the recording medium. Nothing strange so far.
But... where is the full recorded data then?
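To put rough numbers on why every camera compresses: a quick back-of-the-envelope sketch in Python. The figures below (8-bit RGB frames, 25 fps, and AVCHD's nominal 24 Mbit/s ceiling) are illustrative round numbers, not measurements from any particular camera.

```python
# Why lossy codecs exist: uncompressed HD is simply unmanageable.
# All figures here are illustrative round numbers.

width, height, bytes_per_pixel, fps = 1920, 1080, 3, 25   # 8-bit RGB, 25p

uncompressed_bps = width * height * bytes_per_pixel * fps  # bytes/second
gb_per_hour = uncompressed_bps * 3600 / 1e9                # ~560 GB/hour

avchd_mbit = 24                                    # AVCHD's nominal max bitrate
avchd_gb_per_hour = avchd_mbit / 8 * 3600 / 1000   # Mbit/s -> GB/hour, ~11

print(round(gb_per_hour), "GB/hour uncompressed")
print(round(avchd_gb_per_hour), "GB/hour at AVCHD's ceiling")
```

Roughly a 50:1 difference, which is the gap the lossy encoder has to close by throwing information away.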

AVCHD is a highly compressed solution. The problem is that it uses very complex algorithms that slow down editing. But when we transcode to a lossless format, do we recover data? Or is AVCHD in itself a closed format?

Ex: I have a JPEG file, which is already compressed. I would create a TIFF or a PSD for editing, then export again as JPEG. But I do not actually create a raw file from the JPEG... so, if I understand correctly, we transcode AVCHD to edit, but we do not recover any data; we just avoid further damage.

In this case, the word CODEC is a lie, because what actually occurs is not real decompression but simply a "stabilization" (I do not lose more, but what was lost is lost).

That is what I've always understood, but now I have doubts. Am I right in my explanation?

So the only solution is actually RED, if what we want is the full potential of the recorded data.



Second question: with AVCHD Lite, when I compared 720p footage between MJPEG and AVCHD, I always preferred the look of the MJPEG. I know it is an outdated codec, but each time I have the same impression. I'm talking about unedited footage.
I'd like to hear about your experiences.
« Last Edit: January 02, 2011, 07:28:24 PM by fredjeang »
Chris Sanderson
« Reply #1 on: January 02, 2011, 09:09:41 PM »

Hi, Again, some questions in this obscure, dark, muddy and uncertain universe of codecs.

We compress because otherwise the volume of data would exceed the capacity of the recording medium. Nothing strange so far.
But... where is the full recorded data then?

Other than RED - which currently is a RAW solution - the full data is indeed gone. What remains is a good rendition of the original.

AVCHD is a highly compressed solution. The problem is that it uses very complex algorithms that slow down editing. But when we transcode to a lossless format, do we recover data? Or is AVCHD in itself a closed format?

The data is expanded from compressed to less compressed... it is not the original, but the 'good rendition'.

Ex: I have a JPEG file, which is already compressed. I would create a TIFF or a PSD for editing, then export again as JPEG. But I do not actually create a raw file from the JPEG... so, if I understand correctly, we transcode AVCHD to edit, but we do not recover any data; we just avoid further damage.
Pretty much.

In this case, the word CODEC is a lie, because what actually occurs is not real decompression but simply a "stabilization" (I do not lose more, but what was lost is lost).

That is what I've always understood, but now I have doubts. Am I right in my explanation?

So the only solution is actually RED, if what we want is the full potential of the recorded data.

The loss is at the moment of COmpression. There is subsequently no further loss on DECompression. The data is smaller on COmpression and much larger on DECompression. If you want (as we all do!) RAW, RED is a solution.
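The point above can be sketched with a toy quantizer (pure Python, invented for illustration, nothing to do with any real codec): the loss happens exactly once, at encode time, and decoding the same compressed data is perfectly repeatable.

```python
# Toy "lossy codec": quantization throws data away at encode time only.
# The function names and the step size are invented for illustration.

def lossy_encode(samples, step=16):
    """Discard precision: this is the moment where the loss happens."""
    return [s // step for s in samples]

def decode(codes, step=16):
    """Expand back to the original range: no further loss occurs here."""
    return [c * step for c in codes]

original = [3, 17, 40, 200, 255]
codes = lossy_encode(original)
restored = decode(codes)        # a 'good rendition', not the original

assert restored == [0, 16, 32, 192, 240]   # close to, but not, the input
assert decode(codes) == restored           # decoding again loses nothing
```

The COmpression side shrinks the data and destroys some of it; the DECompression side only expands what survived.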


Second question: with AVCHD Lite, when I compared 720p footage between MJPEG and AVCHD, I always preferred the look of the MJPEG. I know it is an outdated codec, but each time I have the same impression. I'm talking about unedited footage.
I'd like to hear about your experiences.

Sorry, no experience with MJPEG.

Christopher Sanderson
The Luminous-Landscape
michael
« Reply #2 on: January 02, 2011, 10:21:56 PM »

Short of buying a RED or a Genesis, the answer is to buy a Nanoflash and attach it to your camera's HDMI or SDI port. This bypasses the camera's codec and allows recording a 100 Mbps or higher signal direct from the camera. Usually this is then recorded as ProRes with 4:2:2 color, usually in 10-bit mode. This is as close to RED raw as you can get in the lower end of the market.

Units like the Nanoflash cost between $2k and $5k, but combined with a decent video DSLR or a camcorder like the new Panasonic AF100 they'll provide superb pro-level quality for well under 10 grand.

Michael
fredjeang
Guest
« Reply #3 on: January 03, 2011, 09:34:27 AM »

Thanks guys.
Yes, I had a look at that Nanoflash. Very interesting solution. The thing is that because of the need for an external LCD, it occupies two ports, so a splitter is needed. And where and how do you attach this device? With gaffer tape?

http://www.amazon.com/ViewHD-Splitter-Certified-Bandwidth-10-2Gbps/dp/B002673EW6/ref=pd_cp_e_1
« Last Edit: January 03, 2011, 09:36:04 AM by fredjeang »
hjulenissen
« Reply #4 on: January 03, 2011, 09:45:31 AM »

In this case, the word CODEC is a lie, because what actually occurs is not real decompression but simply a "stabilization" (I do not lose more, but what was lost is lost).
CODEC means enCOder/DECoder. In this case, it is no lie, because the encoder takes a large video file and makes it smaller (while introducing losses). The decoder takes the smaller file and turns it back into the same _format_ as what you had in the beginning. Nowhere does it say that the process will be without losses. (In fact, there is a category of codecs that introduce no loss; they are called lossless. Typically they generate files that are too large for video capture.)
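A minimal concrete example of the lossless category mentioned above: run-length encoding, a toy codec whose decode returns the input bit-exactly. That exactness is what "lossless" means; codecs like AVCHD trade it away for far smaller files.

```python
# A minimal *lossless* codec: run-length encoding.
# Encode + decode returns exactly the input, which is the defining
# property of a lossless codec. Toy code, for illustration only.

def rle_encode(pixels):
    """Collapse runs of identical values into [value, count] pairs."""
    out = []
    for p in pixels:
        if out and out[-1][0] == p:
            out[-1][1] += 1
        else:
            out.append([p, 1])
    return out

def rle_decode(runs):
    """Expand [value, count] pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 128]
assert rle_decode(rle_encode(row)) == row   # bit-exact round trip: no loss
```

Note that RLE only wins on runs of identical values; on noisy camera data a scheme this simple can even make the file bigger, which is one reason real lossless video files are so large.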
Quote
So the only solution is actually RED, if what we want is the full potential of the recorded data.
I think it pays to be pragmatic. Who cares if data was changed if you cannot see it? If a camera produces good imagery, would you think any less of it if you knew that it was compressed?

There are all kinds of other trade-offs in a camera that may or may not reduce image quality. I would suggest that a complete review of all of those trade-offs is complex and not something that can be settled by studying manufacturers' spec sheets.

-k
fredjeang
Guest
« Reply #5 on: January 03, 2011, 10:15:59 AM »

Part of your post is not how I understand it.
The DECompression is NOT a proper decompression. Take, for example, the Canopus Lossless codec that I use in editing: from what does it generate its editing format? From a file that has already lost data, data that is not recoverable.
So again, the only thing that happens is that you avoid more loss, but you do not get the full data that was recorded in the first place. In that sense, nothing has been recovered, only stabilized.
The only reason to use this is that it does not slow down the workflow.
Unless, as Michael pointed out, you use a Nanoflash device, because it bypasses the in-camera encoding step.

Of course many other factors influence video IQ, but this thread is centered on codecs. Feel free, of course, to share anything that seems relevant to you even if it's not codec-related.
« Last Edit: January 03, 2011, 10:19:53 AM by fredjeang »
bcooter
Guest
« Reply #6 on: January 03, 2011, 11:14:36 AM »



There are all kinds of other trade-offs in a camera that may or may not reduce image quality. I would suggest that a complete review of all of those trade-offs is complex and not something that can be settled by studying manufacturers' spec sheets.



As much as compression, decompression or file size affects a file when you work it deep, in my experience 12-bit vs. 10-bit shows the most change.

A few years ago we had a section of HD footage that was "challenged" due to a drastic change in natural lighting.

We put it into two 10-bit systems and just couldn't save it; then, converting it to a 12-bit system (even though at this point I guess you would call it faux 12-bit), we were able to not only save it but make it look good.

Around this time we shot a project where one of the videographers, working with an HD ENG tape camera, shot it uncompressed (who knows why?). That was a nightmare, as uncompressed HD footage took terabytes to capture, and though we got it out of the system and had it converted to a workable codec, it was not fun.

What would be nice is to have one simple standard from capture to conversion, but with every hardware maker having their own ideas and secret sauce, and every editing system using a different format or wrapper, it seems we will be fighting for some time to match footage and keep everything to one single standard.

The film world has a system where you essentially edit on a 1-light or 3-light proxy, then go back with the EDL and repurpose the footage into whatever format is needed. (That's oversimplified, but close.)

Digital video is very much like the early stages of digital stills: workflow and file size/type are still in the make-it-up-as-you-go stage.

Some DPs like the Panasonic because they believe it has a sharper file than the Canons. Canon applies a lot of smoothing to each frame, I guess to kill noise and artifacts from the line skipping they do to get the data off the chip in a reasonable amount of time and storage space.

The RED seems to produce a much more robust file than any of the digital video footage I've used, but it's still not tack sharp like a still image. The RED is also a system that mimics film workflow more than standard prosumer digital video does.

Regardless of what camera anyone uses, it's still the wild west and probably will be for some time.

IMO

BC


hjulenissen
« Reply #7 on: January 03, 2011, 01:16:45 PM »

Part of your post is not how I understand it.
The DECompression is NOT a proper decompression. Take, for example, the Canopus Lossless codec that I use in editing: from what does it generate its editing format? From a file that has already lost data, data that is not recoverable.
So again, the only thing that happens is that you avoid more loss, but you do not get the full data that was recorded in the first place. In that sense, nothing has been recovered, only stabilized.
The only reason to use this is that it does not slow down the workflow.
Unless, as Michael pointed out, you use a Nanoflash device, because it bypasses the in-camera encoding step.
Not sure that I quite understand you, but most video codecs are what is known as "lossy". This means that the total result of an enCOde-DECode pass is some loss of quality, just like VHS or Beta or MP3. The main reason we still use them is that they allow video to be put on a single flash card instead of a massive rack of RAID hard drives.

The decoder does not try to "stabilize" or "recuperate" anything (not by my definitions, at least). It follows a precise formula written down by some consortium or international organization, saying something like: "if this bit-pattern occurs in the file: '101100111', then generate a pixel at position 42, 666 with YCbCr values [11 23 101]". This example is naive, but hopefully instructive.
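That "precise formula" idea can be sketched as a toy table-driven decoder. The codebook below is invented for this example, but prefix-code tables like it (Huffman-style: short codes for common values, longer codes for rare ones) sit at the heart of real decoders.

```python
# Toy table-driven decoder: a fixed spec maps bit-patterns to pixel
# values, deterministically. The codebook is invented for illustration.

CODEBOOK = {
    "0":   0,      # short code for a common value...
    "10":  128,
    "110": 255,    # ...longer codes for rarer ones (Huffman-style)
}

def decode_bits(bitstream):
    """Consume the stream one prefix code at a time, emitting pixel values."""
    pixels, buf = [], ""
    for bit in bitstream:
        buf += bit
        if buf in CODEBOOK:
            pixels.append(CODEBOOK[buf])
            buf = ""
    return pixels

assert decode_bits("0100110") == [0, 128, 0, 255]
```

There is no judgement or "recuperation" anywhere in the loop: the same bits always produce the same pixels, exactly as the spec dictates.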

-h
Chris Sanderson
« Reply #8 on: January 03, 2011, 01:22:52 PM »

The RED seems to produce a much more robust file than any of the digital video footage I've used, but it's still not tack sharp like a still image. The RED is also a system that mimics film workflow more than standard prosumer digital video does.

Regardless of what camera anyone uses, it's still the wild west and probably will be for some time.

 FWIW - I agree 100%.

Some further thoughts on the language of convergence here.


Christopher Sanderson
The Luminous-Landscape
fredjeang
Guest
« Reply #9 on: January 03, 2011, 02:58:50 PM »

Not sure that I quite understand you, but most video codecs are what is known as "lossy". This means that the total result of an enCOde-DECode pass is some loss of quality, just like VHS or Beta or MP3. The main reason we still use them is that they allow video to be put on a single flash card instead of a massive rack of RAID hard drives.

The decoder does not try to "stabilize" or "recuperate" anything (not by my definitions, at least). It follows a precise formula written down by some consortium or international organization, saying something like: "if this bit-pattern occurs in the file: '101100111', then generate a pixel at position 42, 666 with YCbCr values [11 23 101]". This example is naive, but hopefully instructive.

-h
H,
Yes, you're correct. I used the word "stabilize" as a shortcut, but what I had in mind is that no more data is lost if you use a lossless format. I know it's playing with words. The problem I experienced, like many others, with AVCHD in editing is basically that the algorithms are very complex, so it slows down the workflow.

But again, Michael's Nanoflash solution is a very good one. OK, with $3,000 more on the bill.
« Last Edit: January 03, 2011, 05:08:20 PM by fredjeang »
bradleygibson
« Reply #10 on: January 04, 2011, 01:37:48 AM »

Hi, Fred,

But when we transcode to a lossless format, do we recover data?

As you know, compression saves space. With lossy compression, information is lost at the time of encoding (at the time of initial recording). That lost information can never be recovered, even when transcoding to a lossless format. All that means is that no *further* loss will be incurred. If you transcode losslessly from a lossy source, you'll see whatever image degradation the lossy encoding gave you. It simply won't make it any worse.
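This generation-loss behaviour can be sketched with a toy quantizer (illustrative only, not a real codec): the first lossy pass damages the data, any number of lossless passes afterwards change nothing, and a second lossy pass with different parameters loses more.

```python
# Sketch of generation loss: lossy re-encoding keeps degrading,
# while a lossless transcode of an already-lossy file "freezes" the
# damage without adding to it. Toy quantizer, invented for illustration.

def lossy_roundtrip(samples, step=10):
    """Encode + decode with a quantizing (lossy) codec."""
    return [(s // step) * step for s in samples]

def lossless_roundtrip(samples):
    """Encode + decode with a lossless codec: bit-exact."""
    return list(samples)

src = [7, 23, 41, 99]
gen1 = lossy_roundtrip(src)            # the loss happens here, once

g = gen1
for _ in range(5):                     # five lossless generations...
    g = lossless_roundtrip(g)
assert g == gen1                       # ...change nothing at all

# A second lossy pass with a different step loses more:
assert lossy_roundtrip(gen1, step=7) != gen1
```

This is exactly the JPEG-to-TIFF analogy from the opening post: the TIFF doesn't restore anything, it just stops the bleeding.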

if I understand correctly, we transcode AVCHD to edit, but we do not recover any data; we just avoid further damage.

Edit formats can also be lossy or lossless. What they offer is the ability to quickly render any frame (frame addressability). For technical reasons which all boil down to space and bandwidth, streaming formats such as AVCHD are very difficult for hardware to work with in real time for quick, responsive, frame-accurate rendering. This is an issue that we don't have in the photo domain (as there's only one frame). So your statement is true if you use a lossless editing format, which will require a bigger hard disk, faster PC, etc.

In this case, the word CODEC is a lie, because what actually occurs is no real decompression

It's called a codec because you'd be REALLY unhappy if it was only encoded! If you couldn't decode it, it would be lost! So when choosing your encode format, you want to know that your software of choice can decode it, and possibly even re-encode it, if you happen to choose the same format as your final output format. Usually video devices have decode (playback) capability in addition to encode (record) capability.

Second question: with AVCHD Lite, when I compared 720p footage between MJPEG and AVCHD, I always preferred the look of the MJPEG

The world of video formats is a mess just about beyond compare.  You've got to sort through codecs vs. containers vs. profiles vs. a plethora of variables which make understanding difficult.  At the risk of oversimplifying a bit, AVCHD and AVCHD Lite are names for a limited set of encoding parameters for H.264/MPEG-4.  The full standard is so complex, people needed it narrowed down a little for the real world.  So if you don't like the look of AVCHD Lite, there are some knobs and adjustments that the standard allows (your device may or may not) that might improve it for you.  Or you could step up to AVCHD for more choices.

So the only solution is actually RED, if what we want is the full potential of the recorded data.

I think you're right--raw video (which still undergoes lossy compression, by the way, but it is *much* better than what most of us have had until now) is top-of-the-heap for quality and versatility.

hjulenissen
« Reply #11 on: January 04, 2011, 05:01:09 AM »

I admit I've been playing with words with the DEC part. But I still believe that a word like transcoding is more appropriate.
Reinventing terminology for a field that already has an established terminology seems like a waste of time, especially when you don't (?) seem to fully understand what is technically going on.

A decoder is a decoder. If you don't like lossy video, that is fine, but insisting that a decoder is not really a decoder... why? Insisting that a car is not really a car but an umph-ma-gatzmo simply because you don't like the sound of the word 'car' seems equally wasteful.

-k
Rhossydd
« Reply #12 on: January 04, 2011, 06:17:36 AM »

The ability of Premiere or Avid or Edius to edit native AVCHD is not really usable unless you are running the highest-end computers available.
All editing software will benefit from having the best possible hardware to run it on. Premiere Pro CS5 is a fantastically slick and reliable program to use that actually doesn't need very high-end hardware if you have a supported graphics card in the system (and they aren't that expensive).
I guess you haven't actually used a licensed copy of PP CS5? Making any judgements based on the trial version is pointless, as the real power of PP CS5 is only available in the full licensed version: the trial doesn't include the AVCHD codecs due to licensing issues.
Quote
Now, why edit in AVCHD? Just to avoid the transcoding step and avoid slowing down the workflow. But then applying complex effects in a lossy format doesn't seem to me a good idea. So yes, we are still dependent on batch transcoding tasks.
You edit in AVCHD to keep file sizes small and avoid transcoding losses. With smaller file sizes, there's less data to move around, so one aspect of system performance (disk I/O performance and throughput) is less important than it would be if using large transcoded files.
Effects are only applied when the project is finally rendered, so 'working with a lossy format' isn't an issue.
bcooter
Guest
« Reply #13 on: January 04, 2011, 08:10:24 AM »

Reinventing terminology for a field that allready have established a terminology ........

Sorry for the long reply.

Word to the wise to people moving from stills to motion.

Know your workflow before you shoot.

Whether you're a one-man band that is director, DP, camera operator, editor, colorist and effects specialist, or you outsource everything, know exactly how you plan to shoot, store, preview and deliver.

Digital video is the wild west, and it's not just codecs; it's cameras, what is real 24p and what is faux, what happens with jello-cam, or strobing, or mixed light, or flickering - the list can go on forever.

There is a reason the movie guys test, test and test again, with lenses and lights, in real-world situations. Something that looks simple in stills, like a person walking under a tree, can cause a nightmarish strobing look that is just not usable.

Something that looks like a few kelvin off in color temp can be unfixable within the budget.

Everybody grabs a 5D, shoots with window light, it looks beautiful, they say "ah ha, I've got it down," and then they accept a commissioned project. Of course the commissioned project requires sound, shooting at night, pulling focus from a face to a street, mercury vapors in the background, HMIs in the foreground, and boom . . . the world ends and you've got 2 hours of messy footage that nobody will touch.

Also, do your tests: run footage through your system and find out what codec works in editing, coloring and output. Don't think that high def is high def, because it's not. Don't think that going uncompressed from compressed footage will make it better, because it usually won't.

If you outsource (and I strongly suggest everyone outsource whatever you can afford, at least at first), talk to your editorial house, test with them, have them cut 30 seconds, send it to coloring, have it proofed and output to broadcast AND web standards (sorry there are no real standards) and at that stage you'll have a pretty good idea what it takes to complete a project.  If you have any effects, or even elaborate 3d titles, get those people on board way before you shoot.

Then think about how you shoot, how the clients view as you shoot, (do you want them to see everything or not?).

There is a reason that a 30-second MOS spot takes 10 times the footage of a dialog spot, and a reason that dialog can take 11 takes instead of 2. One more suggestion: if you direct and it's dialog, make sure you hire a sound technician and have him/her run a secondary set of headphones to you as the shoot progresses. You'll learn as much about direction from listening as from viewing.

Honestly, Hollywood has it down.  They know the system, they have a reason for everything they do and what looks like overindulgence is just the standards to produce a professional motion piece.

Right now the powers that be in Hollywood are just ga-ga over the 5D/7D syndrome and, looking at the final balance sheet, can see line items falling like flies. Take out the cable pullers, the dedicated generators, a few grip trucks, crew by the dozens, because those little 5Ds work in room light.

Well, unfortunately the guys that count the money are somewhat wrong, and fail to remember that the reason they got those big offices was listening to real filmmakers who knew how to hire the professional artists and technicians that were perfect for the project.

There is a reason directors scream "don't make me shoot video," but it's coming and nothing is going to stop it, just like in stills. And just like in stills, it will eventually get as good as or better than film production, but until then it's going to be a mess.

So, my view is don't be part of the mess, be the artist that takes it to the highest level.

Please don't misunderstand me, a great film maker can make a 5d work well, and tell the story in a way that respects the viewer, but that's a great film maker.

I know for a while we're all in for it. Just like in stills, where the arguments go on and on that a Canon is as good as a Hasselblad or a Nikon is as good as a Phase, we'll see the same thing about LED lights being as good as HMIs, a 5D being as good as a RED, a RED as good as an ARRI, CS as good as Avid, etc.

And just like in stills, people that know when and what equipment to use that's right for the project will have the most success, regardless of the brand.

The last thing we want to do as still photographers moving to motion is to lower the levels of the art, we want to raise them.

IMO

BC
bradleygibson
« Reply #14 on: January 04, 2011, 08:57:47 AM »

I still believe that a word like transcoding is more appropriate.

Transcoding is the act of decoding from one format and encoding into another.  So it also involves both aspects of a codec, but simply different codecs.  Nothing difficult or convoluted there.  (I agree with others on this thread about using 'personal definitions' of terminology, as it will be harder for others to understand what you are referring to.)
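That definition can be written down directly. The two "codecs" below are trivial stand-ins invented for illustration; the point is only the shape of the operation: transcoding is one decode pass followed by one encode pass.

```python
# Transcoding = decode with codec A, then encode with codec B.
# Both codecs here are toy stand-ins, invented for illustration.

def codec_a_decode(data):
    """Stand-in for the delivery codec's decoder (e.g. the camera format)."""
    return [b - 100 for b in data]

def codec_b_encode(frames):
    """Stand-in for an edit-friendly intermediate codec's encoder."""
    return [f + 1000 for f in frames]

def transcode(data):
    """One decode pass, one encode pass: that is all 'transcode' means."""
    return codec_b_encode(codec_a_decode(data))

assert transcode([100, 150]) == [1000, 1050]
```

If codec A was lossy, its damage is already baked into what `codec_a_decode` returns; whether the transcode adds more damage depends only on whether codec B is itself lossy.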

Now, why edit in AVCHD?

The only (obvious) reasons one might CHOOSE AVCHD as source material that I can think of are:
a) It's the only option your favorite recording device offers
b) To save space (longer record times).

As for editing in AVCHD, agreed, it wouldn't be my first choice either at this point in time.  But if one has the right tools (hardware and software) for the job, "knows" (yeah, right) that it will only go through one generation of editing, and/or is careful about managing quality loss (similar to editing JPEGs), it can be done, of course.

Remember AVCHD was not originally designed as an edit (frame-addressable, bidirectional) format. On the engineering side, folks have been doing backflips to try to make it feel light and responsive, but as you've discovered, the industry hasn't made it all the way there yet. I may be dating myself here, but I remember when JPEG first came out, it took several *minutes* to encode a single megapixel-ish image, around the time of Photoshop 1.0 (the software was called Art Department Professional). At some point in the future, we will have enough computing horsepower that it won't be an issue. But by then, we'll have moved on to new formats, at extremely high resolution (by today's standards), with new encodings, such as raw.

I think you can already see this in RED's offerings and rumored in Canon's upcoming 1Ds Mark IV.

fredjeang
Guest
« Reply #15 on: January 04, 2011, 10:19:34 AM »

All editing software will benefit from having the best possible hardware to run it on. Premiere Pro CS5 is a fantastically slick and reliable program to use that actually doesn't need very high-end hardware if you have a supported graphics card in the system (and they aren't that expensive).
I guess you haven't actually used a licensed copy of PP CS5? Making any judgements based on the trial version is pointless, as the real power of PP CS5 is only available in the full licensed version: the trial doesn't include the AVCHD codecs due to licensing issues. You edit in AVCHD to keep file sizes small and avoid transcoding losses. With smaller file sizes, there's less data to move around, so one aspect of system performance (disk I/O performance and throughput) is less important than it would be if using large transcoded files.
Effects are only applied when the project is finally rendered, so 'working with a lossy format' isn't an issue.
Paul, with all the respect due to your long experience as a cameraman, which I have, this time you're guessing, and you guess wrong.
All my material is fully licensed, except for a few programs that have nothing to do with video.
The issue with AVCHD editing has been widely reported, even by people with the most powerful machines, so this is not just Fred's opinion. If you do not believe me about Edius, try it yourself. It's very simple: I can't edit AVCHD comfortably in CS5 or CS4 (I do not have a latest-generation computer) and I can edit AVCHD comfortably in Edius 6. It's not just an opinion or a guess; it is a fact.

I'm not pro-Edius or anti-Premiere. In fact, I've expressed my enthusiasm for Premiere, which is my main editor. But hey, I'm looking for what works best for me for each task, and that is why I use different NLEs. I'm not married to any brand but use several, and if I had to commit, it would certainly be to Avid (the latest Media Composer is a bombshell) and Autodesk more than to Premiere or Edius.

And this fact has been commented on and confirmed by thousands of users, both pros and amateurs, all over the net.
AVCHD is not a codec I'd use for editing, simply because it slows things down too much due to the complex algorithm it has to render. But it just depends on how one edits and what. If it were an interview, then I'd have no problem with AVCHD, but if you are doing a lot of layers, cross-processing from After Effects and Autodesk then back into the NLE, in real time, that's another story: it simply freezes.
If I do that in Canopus Lossless it works just fine with any NLE. That's why, in my case, I wouldn't edit in AVCHD. And I'm far from being the only one.

Yes, I don't have the latest big power in the studio, true.

I agree with BC: you seem to think that things are sort of established; in fact they are not. This is the jungle, we all know that, and we are all experimenting in it, trying different solutions, keeping some and discarding others. I must say that I'm seeing a lot of TV and movie guys, serious guys, almost as lost as I am with what's going on (I said almost), so I'm very surprised to read such a level of certainty in some posts. But hey, maybe you know a lot more about where all that stuff is going. Personally, I'm making my choices by testing and testing. Sometimes it's clear; other times it's not so clear and requires more time.




Cheers.
« Last Edit: March 25, 2011, 07:26:00 AM by fredjeang »