Couldn’t believe it when I received a tweet back today. Could AJA Kona 3 owners really be in luck with Avid Media Composer? It’s looking good to me!
In the blink of an eye, the release of Final Cut Pro X has caused a ripple in the Matrix so huge, I’m not sure Neo could even fix this catastrophe. But it’s much more than good software gone astray, it’s deeper than infrastructure changes; it’s about the loss of trust, faith and even livelihoods.
The writing was on the wall for some time, actually, but I, like many others, didn’t see it coming—at least not this badly. Let’s go back a few years. Apple’s acquisition of Shake from Nothing Real was huge, and the software was ultimately sold through the Apple Store for a mere $499 (the original price was $9,900 prior to the Apple buy-out). People, myself included, felt that Apple was getting really serious about post-production software. The excitement was huge and people (large companies included) really started to take notice. But by 2009, Shake was quietly removed from Apple’s line-up and officially EOL’ed (End-Of-Life).
This really should have been the first hint that Apple didn’t have a huge interest in the pro apps market. Why they bought these apps out in the first place, I do not know. Even worse, why they EOL’ed an awesome app like Shake is even stranger to me. It’s as if Apple became the black hole of pro apps software—they buy it and kill it.
But Shake isn’t the only app gone astray. It seems that with the FCPX release, Color is the next major pro app to hit the chopping block. I received an NFR (Not For Resale) copy of Final Touch HD directly from Silicon Color (along with some control surface hardware that I still have today) before Apple bought Final Touch and renamed it Color. The first time I used Final Touch HD, my mind was blown, despite the fact that, at the time, the software was in dire need of video card hardware acceleration (which later came about). But like Shake, Apple bought it, and “chopping-blocked” it.
Shall I go on? How about Cinema Tools… Originally developed by my old friends at DigitalFilm Tree under the application name of FilmLogic. If EOL’ed, how will FCPX work with telecine? Oh that’s right, there is no more telecine because it’s not the next wave of the future (sarcasm).
DVD Studio Pro? Bought and gone from Astarte. Next.
Final Cut Server? Eeeeeesh.
QuickTime Player 10 is even a pile compared to QuickTime Player 7. I outline it here in a blog post I coincidentally wrote hours before the release of FCPX.
It’s been reported that Soundtrack Pro might be integrated into Apple’s Logic Studio. But beware: although Logic is still alive, it too was an app Apple originally bought out (from Emagic), and it too, like FCPX, has been simplified down to a degree. At any rate, Soundtrack Pro is nowhere to be seen as a stand-alone, and the current state of Logic Studio is making people scratch their heads.
Seeing the trend here? Seems like Apple’s best pro apps were bought out by other companies, only to be ultimately killed in the end; again, the black hole syndrome.
There are many other examples of EOL’ed software by Apple, but let’s circle back to Final Cut Pro X. It’s pretty clear at this point that Final Cut Pro X isn’t an update to Final Cut Pro 7, and is an entirely new app built from scratch. This would be great if it was built upon the functionality of Final Cut Pro 7, but it’s not, leaving many editors, facilities and studios up in arms as to what to do next.
For us, it looks like we’re going in the direction of Avid Media Composer. This isn’t set in stone until we’re fully installed and have run it through its paces, hopefully with AJA Kona 3 hardware integration in the not-so-distant future. For others, Adobe Premiere seems to be the answer. Both are great apps, and both can continue on where you left off with Final Cut Pro 7, if Final Cut Pro X isn’t working out for you.
But switching non-linear editing applications is only part of the problem, and in some cases, only a small problem by comparison. There are many people that have devoted the last ten years of their careers directly to Final Cut Pro, and as a result of Final Cut Pro X, are going through a major life/career change. These include certified Final Cut Pro trainers, value-added resellers, user group organizations, third-party software/plugin developers, and many more. Many people are questioning their livelihoods because of this radical shift that Final Cut Pro X has taken. I wonder if Apple knew they were going to directly cause such an issue, or if all they were looking at was their bottom line. To me, it shows an enormous amount of arrogance, ignorance, and selfishness from Apple, which is not the Apple I once knew. Is Steve Jobs losing his grasp of his company that badly?
Because of all the shifting and killing of Apple’s pro apps, I have to wonder about the MacPro desktop computers as well. If Apple continues to shrink their pro app offerings, and all of their pro apps are shifting toward over-simplification with less controllability, why then will people need these massive desktop machines, if Apple is more focused on “iProducts,” including the iMac? Do iApp people even fit in the MacPro desktop market? If the MacPro desktops eventually hit the chopping block, then there will be an even larger problem for those of us that rely on powerful MacPro workstations with software licenses tied to OSX. I have Windows machines in my facility, but I still prefer MacPros for the main computer workstations.
So all of this begs the big question: “Can Apple be trusted from here on out?” For me, I will no longer buy any kind of pro app from Apple for my facility, knowing their strong EOL track record—even if an updated Final Cut Studio 3 (based on FCP7) were released. It amazes me that all of my favorite pro apps from Apple are EOL’ed. Literally. For me personally, I no longer care about FCPX or even FCP7 updates. I’ve already started passing up articles on FCPX how-to’s and work-arounds, since I will not be using FCPX in my facility, and FCP7 is EOL’ed.
Because I’m switching to Avid Media Composer, all I care about at this juncture is integrating my current hardware and software with it, and what else I need to change or buy in order to make that happen. It’s a domino effect really, and all because of a 2.5-star, $299 downloadable app from the App Store. Seriously, that $299 app is costing me thousands of dollars, but luckily, none of it is going to the App Store. Just keep making powerful desktops, Apple, so I don’t cut my ties from you completely.
With clients’ ever-growing demand for more creative options from you, their creative go-to person, you might want to seriously consider learning 3D modeling and animation. Today there are powerful and even FREE software packages out there for 3D modeling and animation, with plenty of free tutorials to match.
I remember in the early 1990’s when 3D software was young and basically came down to this: either it was really cheesy looking and labor-intensive, or, if it was nice looking, it was extremely expensive and reserved for an exclusive few. And any which way you looked at it, it was extremely slow, unless you had a $50k SGI workstation. To that end, my limited 3D software choices included Specular Infini-D, Strata Studio Pro, and the short-lived Adobe Dimensions, Adobe’s first and only (feeble) attempt at a dedicated 3D application (which then became a short-lived plugin for Adobe Illustrator). In short, this was both frustrating and limiting. The good thing, however, is that it taught me (i.e., forced me to learn) essential basics for my future 3D foundation, for which I would later be grateful.
Around 1996, I bought our first license of Newtek Lightwave 3D. Finally, a 3D software package that was both affordable and powerful. This allowed me, for the first time, to really model and animate in 3D with features and rendering options that I had only dreamed about before (oooh, soft shadows!). At this point, I could really offer my clients new creative options for their productions. I also added Electric Image Animation System (EIAS), which was (and still is today) solely a 3D animation/rendering package (no modeling), but it gave me a different approach to 3D animation than Lightwave, and was used accordingly, depending on the project. In short, affordable and functional 3D software was finally starting to bloom throughout the marketplace. These were exciting times!
The only problem in the mid-to-late nineties was the limited resources for learning 3D modeling and animation. You either had to enroll in a physical school (and they were few and far between), rely on monthly trade magazines in hopes they’d cover something you could use, mail-order VHS instructional videos, or buy a book on the topic (usually the best option at the time).
Fast-forward to the second decade of the twenty-first century and you literally have a plethora of informational resources on any 3D software package, on any topic, and at any proficiency level, whether it’s online training, YouTube videos, downloadable eBooks, or the traditional print media approach.
But the best part is, if you’re new to the 3D world, you can either try some software for free (either limited in features or limited in time), or you can perpetually use free 3D software, legally, like Blender 3D. Blender is an open source 3D software package (Linux, OSX, Windows) with a strong user-base, with enough online tutorials, videos, and forums to keep you learning until the polygon cows come home. This is a viable way to at least see if you like 3D and want to grow into it and offer it to your clients as a professional service.
For me personally, after fifteen years I still use Newtek Lightwave 3D as my software package of choice, whether for a complex medical animation for FDA approval or a simple packaging animation for UPS. The excitement of starting a new project and producing something from scratch in a “3D world” is something that always keeps things fun. We also use Maxwell Render for some photo-realistic rendering and good old Poser 3D, which we find useful for storyboard production. And there are many other 3D packages out there; Maya, Cinema 4D, 3D Studio Max, Softimage XSI, Houdini, Modo, and Strata 3D are but a few packages that can also cater to your needs. Also check out Bryce 3D for fun 3D landscape and terrain visualization.
And if you’re not big on 3D modeling, but you are interested in 3D animation, then you’re in luck. There are many 3D model resources out there where you can buy and download 3D models for free or for a price (like stock footage libraries). Price usually goes hand-in-hand with the kind of detail that’s involved with the 3D model. Our favorite 3D model library over the years has been Turbo Squid. In most cases, buying a pre-built 3D model is more efficient and economical. Why spend 3 days building a highly detailed model with full texture mapping when one has already been built for you? In reality, the cost is minimal in comparison.
A quick caveat: If you’re completely new to 3D, be prepared to undergo a new way of learning something. Working in 3D is nothing like the way you work in video editing, or even 2D motion graphics for that matter. Even the implementation of pseudo-3D features in compositing applications like Adobe After Effects is completely different than working in true 3D in any other dedicated 3D software suite. It’s a different mind-set and a different approach to the creative process.
But fear not; once you learn the basic foundations of 3D space, modeling, lighting, surfacing, animating and rendering, you’ll literally have a whole new world open to you. I can attest that at least 90% of my fellow editor colleagues, and even motion graphics colleagues (and talented ones at that), do not use true 3D software packages as part of their personal creative toolset. This means more work for me, and can mean more work for you. Admittedly, it does seem to be somewhat of a rare breed for someone to truly immerse themselves in 3D production, but if you desire it and develop the skill set for it, the market is yours for the taking.
Remember, patience is your greatest ally when learning 3D. Before you know it, you’ll be working with fluid dynamics, inverse kinematics, global illumination, and everything else that will knock your clients’ socks off!
It’s official. Today, after 15 years of using NLEs, OneRiver Media officially purchased our first licensed seat of Avid Media Composer. This was actually a long time coming, as we’re constantly in situations where Final Cut Pro 7 has limitations in how we work in the timeline. And don’t get me wrong; we’re not completely jumping ship from Final Cut Pro. We still have several projects that are in Final Cut Pro 7 right now, and may even continue to use Final Cut Pro 7 until we get solid hardware migration working with Media Composer, namely our AJA Kona 3 hardware. We might temporarily use Matrox hardware in the interim and sell the Matrox hardware if and when our beloved Kona 3 cards finally work with Media Composer.
Today’s release of Final Cut Pro X, and the sheer number of features it has dropped, really hit home. Rather than sit and wait, I decided to buy one of the $995 cross-grade offers from VideoGuys.com while they’re still available. Who knows how long the discounted supply, or the offer itself, will last, right? I was certain last Friday was the last day to buy it, so I felt compelled to buy a $995 Media Composer seat today while I still had the second chance.
I’m definitely still interested in what Final Cut Pro X has to offer, more so in the future than the now. Right now it’s just too young as a version 1 release. But in a version or two, I can see some cool things being added (and re-introduced from FCP7). And don’t get me wrong, there are some really cool features that FCPX has now that I wish FCP7 and Media Composer had, namely the magnetic timeline.
It’s been a fun journey though, all the way back from the first release of Final Cut Pro 1 in the late 1990’s. Before Final Cut Pro, we had a Media100 system, whose hardware I was able to sort of hack into the Final Cut Pro 1 software, which made for some fun. Final Cut Pro was a huge step forward for us at the time, and continued to be. I admittedly had strong animosity towards Avid back in those days. I still think they’re a bit of a pompous company, if ya ask me, but at this point, I feel joining the dark side of Media Composer may end up being the best thing for us at this juncture. I know at the very least it won’t hurt us to try, and if it meets or exceeds our expectations, then all the better.
I know Final Cut Pro X will continue to be a powerful and useful tool for many people, and possibly even us, but for now, we’re going to switch gears a bit, and Avid Media Composer is going to be the vehicle that takes us there.
Edit: Follow up blog post here: Is the trust for Apple gone for good?
QuickTime Player 10 (“X”) has been out for over a year now (since the release of Snow Leopard), but some people may not know they can still use QuickTime Player 7 for some much-needed features that QuickTime Player 10 did away with. Maybe the “10” stands for the 10 reasons why I hate QuickTime Player 10. And whatever happened to Players 8 and 9 anyway? The latter is sarcasm… read on for even more sarcasm.
Sarcasm #1: Nothing I love more than overlaying my video playback with an obnoxiously large transport control interface. The Duplo of Legos for interfaces.
Sarcasm #2: Nothing I love more than overlaying my video playback with the window controls at the top of the video.
Sarcasm #3: Nothing I love more than NOT being able to select in and out points of my video with the “I” and “O” keys, and copy-and-paste them into a different QuickTime Player window. You can frame-inaccurately trim videos in Player 10 (Edit > Trim), but you can’t copy/paste the trim into another video. Luscious!
Sarcasm #4: Nothing I love more than NOT being able to export and transcode out of QuickTime Player, even though I have QuickTime Pro installed via Final Cut Pro. Because Compressor should be used for EVERYTHING, right? Of course!
Sarcasm #5: Nothing I love more than NOT being able to copy a frame out of QuickTime Player and pasting that frame into Photoshop or Preview. Who needs to email a reference frame of a video to a client? Not me! Never!
Sarcasm #6: I love hitting Command-J and Player 10 sneeringly ignores my command and doesn’t open up the editable Properties window for changing functions and adding values to the video itself. You’re so naughty, Player 10. I just love that about you.
Sarcasm #7: I love that you can’t use the standard editorial “JKL” keys for scrubbing through video in Player 10. Oh that’s right, I love that you can’t edit and copy/paste in Player 10, so no reason for having JKL keys either. Duh, stupid me!
Sarcasm #8: I love that Player 10 shows LESS information in the Get Info window than Player 7. Less is more, right? Right?
Sarcasm #9: I love that Player 10 doesn’t have “RTZ” (return to zero) and “RTE” (return to end) buttons to quickly jump to the beginning and end of the video. Because spending time finding that tiny scrub button and dragging it back to the beginning is so much fun! They should make a Nintendo Mario game of it!
Sarcasm #10: I love that Player 10 no longer has a stereo spectrum analyzer for viewing audio levels because, who hasn’t come across a time when they’re troubleshooting an audio issue, and the spectrum analyzer quickly confirms that the video does in fact have audio muxed into it? Audio tracks are so over-rated anyway. I mean, if they’re accidentally not there, who cares, right? The Honey Badger definitely doesn’t care.
To solve all these issues, drag-and-drop an old copy of QuickTime Player 7 from an old OSX installation, or download it here from Apple:
It amazes me that Apple dropped all these features and added a goofy interface to QuickTime Player 10, but that’s just it… QuickTime Player 10 is truly just that… a player. So for simple editing and exporting (and sanity), use the good old QuickTime Player 7. Hopefully the release of OSX Lion will bring back some of these much-needed features, as well as ditch that annoying overlaid transport interface.
Resetting your camera’s clip number can really make things easier on your production and the post-production process. This is especially useful when you have a crew member on set that’s recording timecode-based reference notes of the shots for better post-production organization.
When you do get into the post-production stage, digging through timecode can be a little eye-numbing, so quickly identifying the clip you need by clip number first makes things much easier. Once a clip is identified, you can then refer to the timecode reference for the specific shot you need within the recorded clip.
To learn how to use timecode with your HDSLR cameras, check out our recent post on the very topic.
Sadly, it seems as though audio is always on the back burner for a lot of video editors. But as I like to say, good audio WILL make your video LOOK better. It’s true! In this week’s Tuesday Tip, I’d like to reinforce the idea of making your vocal tracks sound better with tools that you already have in your NLE arsenal.
Oftentimes when vocal audio is recorded on set, especially if it was recorded directly from a mic to the camera’s input, the vocal track itself may be a little thin. This isn’t entirely bad, as you don’t want to overly process the incoming sound before it gets recorded into your camera or double-system sound device.
Once your video edit is complete, take a little time and fatten up those vocal tracks a bit by using dynamic compression. Every NLE application has dynamic audio compression plugins. In essence, compression is like an automatic volume control: when the recorded volume is low, the volume is raised, and when the recorded audio is peaking really high, the volume is lowered. Different compression plugins implement different controls and features, but they generally work similarly. And like anything else, some have better compression quality than others.
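If you’re curious what’s happening under the hood of those plugins, here’s a minimal sketch of the core idea in Python. This is illustrative only, not any NLE’s actual algorithm—real compressors add attack/release smoothing, soft knees, and makeup gain to bring the whole track back up after taming the peaks.

```python
# Toy dynamic range compressor: samples above the threshold are pulled
# down toward it, narrowing the gap between quiet and loud passages.
# (Real plugins then apply makeup gain to "fatten" the overall level.)

def compress(samples, threshold=0.5, ratio=4.0):
    """Samples range -1.0..1.0; only 1/ratio of the excess above
    `threshold` passes through."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

quiet, loud = 0.2, 0.9
print(compress([quiet, loud]))  # loud peak squeezed down, quiet sample untouched
```

With a 4:1 ratio, a peak at 0.9 against a 0.5 threshold lands at 0.6, while anything under the threshold is left alone—exactly the “automatic volume control” behavior described above.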
If you’d like to read about this topic in more detail, check out the article I wrote back in 2006 in the premiere issue of the printed Creative Cow Magazine. It may be a few years old, but the article is just as valid and useful today as it was the moment it went to print!
Direct web converted article on Creative Cow:
If you’re interested in shooting your HDSLR cameras with timecode, it’s a lot easier than you might think. This article describes the method I’ve developed and use on sets to have the same running timecode across multiple cameras, double-system sound recording, our wireless digital slate, and the wireless ScriptBoy.
Let me first say this isn’t a replacement for plugins like PluralEyes that automate the synching between multiple cameras and sound. I still use PluralEyes and love it immensely. But there are times you need timecode on set so it can be referred to later on. A prime example is when you have a person on set recording marks on the shot list so that the editor knows which takes were the best, or what pickups to use. The best way is by means of timecode, especially if you have long takes with no cuts and many pickups.
A second disclaimer is that this process is not frame-accurate, but it is very close; usually within a quarter of a second (go to the end of this article for the frame-accurate method). But if your main purpose is to track and match marks against a shot list, then this accuracy is more than adequate. I call this the “soft sync” approach.
So how do we get timecode working with HDSLR cameras? Well, as you probably already know, these HDSLR cameras sadly don’t have TC or genlock ports on them, or you wouldn’t be reading this blog post. So instead, we’ll have to do it virtually. And the timecode itself will be “time of day” (TOD), rather than the typical ascending timecode that starts and stops, akin to ENG style video cameras. And since HDSLR cameras don’t have timecode outputs on them, we’ll need to use a timecode generator unit that will be the master timecode source.
But first, make sure all your cameras on set have the same exact clock setting, down to the second. It’s important that you check and set this every time you start your production day. I’ve noticed our 7D and 5D Mk II drift a few seconds apart after a week or so. I sometimes set the clocks on my HDSLR cameras against an atomic clock I have in the studio. A bit overkill, since technically it only matters that your HDSLR cameras have the same clock value, even if the clock doesn’t match the day’s true time of day. But I like to keep things square, so I match to the atomic clock when I can.
Next, you’ll need to buy yourself a timecode generator. You’ll need to make sure you get one that does SMPTE timecode with some kind of LTC output, and one that will allow you to change between 29.97/30 FPS and 24/23.976 FPS. I purchased a used TimeLine Lynx Timecode Generator from eBay for a mere $20 plus shipping. I have several timecode generators in the facility, most are rack-based units, but I like the Lynx because it’s somewhat compact and has a built-in LED display of the timecode on the unit itself, which I would highly recommend with any timecode generator you end up buying.
You can also check out the app “Jumpstart LTC,” which lets you set and output LTC timecode from your iPhone, iPod touch or iPad.
The next step is to set the timecode generating start point on your TC device, in my case, the Lynx. I personally like to run TOD timecode as military time, so if it’s 7:32:14 PM, I set the Lynx to 19:32:14:00 (your converted HDSLR clips will end up as military time as well, so best to stick to military time). What you want to do is set your timecode generator slightly ahead of the current time of your HDSLR camera(s). So if it’s currently 5:23:04 in real time, you might set the timecode generator to 5:24:00. Once the real time reaches 5:24:00, hit the “start” or “generate” button on your timecode generator as closely as possible, and the timecode on your timecode generator should match that of your HDSLR camera(s). If your timecode generator has an “online” function like my Lynx does, make sure to turn it on or it may drift, especially in 24 FPS; otherwise it’ll count frames to each second instead of maintaining a true second of time. If you do experience some drifting from your timecode generator after an hour or so, simply reset the generator accordingly. Best to keep an eye on things to see how your system behaves. If your TC generator has an online mode, there shouldn’t be an issue with drift over time.
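To see why that “online” mode matters, here’s a quick back-of-the-envelope sketch in Python. The assumption (mine, for illustration) is a generator running at 23.976 fps that naively labels every 24 frames as one second; since each labeled second then takes slightly longer than a real second, the timecode falls behind the cameras’ time-of-day clocks.

```python
# Rough sketch: drift of a free-running (non-"online") generator that
# counts 24 frames per labeled second while actually ticking frames
# at 23.976 fps. Each labeled second takes 24/23.976 real seconds.

REAL_FPS = 24000 / 1001          # 23.976... frames per real second
FRAMES_PER_LABELED_SECOND = 24   # frames the display rolls per second

def drift_seconds(hours):
    """Real seconds the timecode lags the wall clock after `hours`."""
    real_seconds = hours * 3600
    labeled_seconds = real_seconds * REAL_FPS / FRAMES_PER_LABELED_SECOND
    return real_seconds - labeled_seconds

print(round(drift_seconds(1), 2))  # prints 3.6 (seconds behind after one hour)
```

Roughly 3.6 seconds of lag per hour is far worse than the quarter-second “soft sync” tolerance, which is exactly why you either enable online mode or periodically reset the generator.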
So now the timecode generator is essentially the master timecode clock of your HDSLR camera(s). From here, you can hook up whatever timecode slave units you need to it. I usually split the timecode output from the generator to our wireless digital slate, the wireless Script Boy, and to double-system sound recording (like a Fostex PD-6 we’ve used on set). Remember that the LTC output is essentially an analog signal of sound (sounds kind of like a modem). If you get an overly hot signal out of your TC generator’s LTC output, then try using a passive audio attenuator to lower the level. Works like a charm.
Now the cameras, the audio recording device, the digital slate, and the Script Boy all have the same timecode. Once again though, you’ll probably still want to use something like PluralEyes to match the cameras and audio together in post, but now you also have timecode reference for takes, marks, notes and anything else that has been written down in reference to the footage you’re shooting. A major time-saver when it’s time to edit the footage together. And if you’re using Canon cameras, you’ll need to make sure you install the E1 plugin for Final Cut Pro so that it can convert the originally H.264 video file’s TOD stamp into usable timecode.
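The post-side payoff of all this is simple lookup: a noted TOD timecode tells you which clip to open. Here’s a rough sketch of that lookup in Python; the clip names, start times, and durations are made up for illustration, and a real workflow would read them from your ingested clips’ metadata.

```python
# Hypothetical shot-list lookup: given each clip's starting TOD timecode
# and its duration, find which clip contains a timecode noted on set.

def to_seconds(tc):
    """Convert 'HH:MM:SS' (or 'HH:MM:SS:FF') to whole seconds of the day."""
    h, m, s = (int(x) for x in tc.split(":")[:3])
    return h * 3600 + m * 60 + s

clips = [  # (clip name, start TOD, duration in seconds) -- example data
    ("MVI_0001.MOV", "19:32:14", 120),
    ("MVI_0002.MOV", "19:36:40", 300),
]

def find_clip(note_tc):
    t = to_seconds(note_tc)
    for name, start, dur in clips:
        if to_seconds(start) <= t < to_seconds(start) + dur:
            return name
    return None  # note falls between takes

print(find_clip("19:37:05"))  # -> MVI_0002.MOV
```

This is the “soft sync” idea in a nutshell: the note gets you to the right clip instantly, and from there the timecode reference gets you to the right moment within it.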
Oh but wait, there’s more!
There is a second method of using timecode with HDSLR cameras (or any video camera, for that matter). Instead of hand-synching the master timecode generator to the HDSLR cameras like I’ve done, you can alternatively send timecode from your TC generator to all of your devices, including one of your camera’s mic inputs. This method will in fact ensure frame-accurate timecode across all devices, including your HDSLR camera(s); however, you then need to use a plugin for Final Cut Pro like “FCPauxTC” by VideoToolShed to extract the audio timecode and write it into the file’s timecode track. Avid Media Composer has a built-in feature that does the same thing if you’re cutting on that. And if you do decide to send timecode to one of your HDSLR’s mic input channels, check out the compact Lockit Buddy to easily balance LTC levels to your camera. This method definitely requires dual-system sound recording if you need more than one available audio track in your HDSLR camera, since the other audio track is recording the LTC timecode.
So that’s that. You can either “soft sync” your HDSLR cameras together with your other devices, with little conversion work at the NLE ingest stage, or you can get frame-accurate sync with a little extra work at the NLE ingest stage. For means of shot notation and marking, I prefer the “soft sync” approach. For frame-accurate needs, I’d go with the latter. And hey, if nothing else, all the running timecode on set makes you look really cool. Always a good thing when the client is on set making sure their money is being well spent!