Pinnacle Studio 23 Update
The first signs of us moving into the “silly season” have pretty much arrived. Last week Vegas Pro updated and this week the big news is the release of Pinnacle Studio 23.
As is usually the case with a major upgrade, Corel has added some new features to Pinnacle Studio as well as working on a few “behind the scenes” tweaks and fixes.
Light Field Lab said it had closed a $28 million Series A round of venture capital financing to support its development of holographic display technology.
The company said it will use the money to commercialize holographic displays that do not require the use of glasses or helmets, starting with video walls for large-format, location-based entertainment and eventually moving into the consumer electronics market.
The funding round was led by Bosch Venture Capital and Taiwania Capital with strategic investment from Samsung Ventures, Verizon Ventures, Comcast, Liberty Global Ventures, NTT Docomo Ventures and HELLA Ventures, with financial investors including Khosla Ventures, Alumni Ventures Group, R7 Partners and Acme Capital, the company said.
Light Field Lab said its technology also includes content distribution hardware and software, suggesting that new 5G networks to be deployed by Verizon, among others, will be used to deliver content to holographic display devices.
“The industry response has been extremely enthusiastic as demonstrated by the strength of our investors,” said Light Field Lab CEO Jon Karafin in a prepared statement. “We look forward to working with our syndicate of manufacturing, content creation and distribution partners to uncover opportunities and alliances across a range of vertical markets as we take our technology to the next phase.”
Conceptual rendering of Light Field Lab’s technology for rendering a three-dimensional image in front of a viewer. Light Field Lab
Light Field Lab was founded in 2017 by Jon Karafin, Brendan Bevensee and Ed Ibe, three veterans of Lytro Cinema, the ill-fated start-up that debuted a cinema camera based on light-field technology at NAB 2016 before vanishing from the face of the earth early last year.
Today’s news suggests the company’s work is progressing more or less as originally planned. In an April 2017 interview with StudioDaily, Karafin said the company hoped to spend one to two years developing a small-scale prototype that would help it secure venture financing to develop the technology on a larger scale. Late last year, the company announced a technology partnership with OTOY along with commitments from Endeavor and Roddenberry Entertainment to develop original content for holographic display.
From Lacie.com: Well-known photographer and cinematographer Andy Best is doing what most dream of doing. Day to day, Andy passes through many landscapes and photographs breathtaking scenery.
Andy was exposed to the outdoors at a young age, and his appreciation for natural beauty took hold of him and never let go. Andy and his wife, with their dog Sequoia, hit the road and lived life as nomads; they felt that living in a society where success was measured by the things you have was not for them.
Before editors everywhere are insulted at the notion that the software knows what is boring better than they do, it’s important to take a look at what the “boring detector” actually does, and you can do that just by opening the “boring detector” dialog box.
This new tool is really nothing more than a frame counter analyzing your shots based on a frame count. It’s looking for shots that could be either too long (hence, boring) or too short. I’ve seen this new “boring detector” described as a good use of AI in an NLE, but it seems to me it’s just counting frames, not artificially intelligently analyzing anything. Long shots and jump cuts can both be legitimate issues you might want to flag in your cut, but depending on the creative intent the shots might be neither boring nor jump cuts.
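Since the tool amounts to thresholding shot durations, the whole idea fits in a few lines. Here’s a minimal sketch of that kind of logic; the function name and thresholds are mine for illustration, not Resolve’s actual implementation:

```python
# Hypothetical sketch of a duration-based "boring detector":
# flag shots that run too long ("boring") or too short ("jump cut").
# Thresholds are illustrative only.

def flag_shots(shot_frames, fps=24, max_secs=45, min_secs=1):
    """Return (shot_index, label) pairs for shots outside the thresholds."""
    flags = []
    for i, frames in enumerate(shot_frames):
        secs = frames / fps
        if secs > max_secs:
            flags.append((i, "too long"))
        elif secs < min_secs:
            flags.append((i, "too short"))
    return flags

# A 50-second shot and a half-second shot both get flagged;
# the 10-second shot in between passes untouched.
print(flag_shots([1200, 240, 12], fps=24))
```

Notice there is nothing in there about what happens inside the frame, which is the entire problem with calling it a “boring” detector.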
The “boring detector” still has me chuckling just as a concept and framing, even if I can think of a dozen jobs where I would absolutely have used it. Lots of corporate stuff that’s really mechanical anyway.
The thing is, I have no problem with the feature itself… I’m definitely not gonna use it, but I understand who it’s for. But calling it the “boring detector” and saying that editors won’t have to watch their video is just encouraging misconceptions and bad practices.
Whenever you’re looking at something as easy and simple as frame counts and trying to derive something as subjective and complex as emotion and creative intent, you’re bound to insult the editors and directors out there when you call it a “boring detector.” I’m sure this wasn’t what BMD set out to do when they created a truly useful tool for editors, but it’s what they did.
Would Resolve 16 think the legendary Goodfellas tracking shot was boring?
What about the 30-second “refusal” scene in Philadelphia? That’s a shot that still haunts me to this day but I guess the Resolve “boring detector” wouldn’t have flagged that one as it’s set to 45 seconds by default.
And then there’s Rope or the entirety of Russian Ark.
I realize it is a bit of a stretch to compare some shots from great cinema to what feels like an editing feature added to another editing feature, one that may very well have been created to placate an era of low budgets, fast turnarounds and YouTubers. But if Resolve wants to make inroads in ALL areas of media post-production, serious, creative offline storytelling is one of them. I saw a number of high-end feature editorial professionals talking about this thing on social media.
Obviously, a piece of software counting frames is never going to know the creative intent behind why a shot was chosen by the editor (and sometimes director) to last as long as it does. While it may seem like long shots make life easy for an editor, there is, in reality, a lot of time and work that goes into the decision to even include a long shot in an edit.
How long do you hold it?
At what point do you cut (or transition) away from the shot?
Where to go next?
And what is perhaps the biggest question of all … what is happening in the frame during that shot to make sure it isn’t boring?
It’s for those reasons that naming a tool inside a video editing application a “boring detector” caused such offense to editors everywhere. I doubt there was much discussion inside BMD about how such a name would cause a stir. Then again, maybe that was BMD’s creative intent when they named it.
Ran my final cut through Davinci’s Resolves’ new *boring detector* and it highlighted the whole thing. Weird. If I wanted an #abusiverelationship I’d go back to Avid.
Comment on the EOSHD Forum: The Pocket 6K with its Super 35mm sensor is a $2500 upgrade compared to the much cheaper $1200 Pocket 4K. Does it do enough to justify the price increase? Speaking personally, I held off ordering the 6K. Tempting though it may be, Blackmagic doesn’t seem to have solved any of the very basic shortcomings of the Pocket 4K. $2500 is a very different ballpark to $1200, and I expect a battery fit for purpose. I also expect a tilting screen, an EVF and IBIS. These are the bare minimum convenient features for me. Autofocus is another one. …
There’s a market for the Blackmagic Design eGPU Pro, but it’s very specific. Blackmagic Design has updated its external GPU device, replacing the Radeon Pro 580 with a Radeon RX Vega 56. This upgraded model carries the Pro moniker and a US$1199 sticker price, compared to the less expensive US$699 of the base model.
The Short Version
If you have a TB3 capable, entry level Mac (iMac, MBP, Mini), and you need a simple, quick solution for some real GPU power for editing and coloring in Resolve, Premiere or FCPX, then this is worth taking a look at. If you have an LG UltraFine 5K display and a desire to have the least number of cables and a beautiful, whisper-quiet hub on your desk, then even more reason. The Blackmagic eGPU Pro looks and sounds great in a client-facing environment.
If you are not in this situation or you’re trying to spend the least amount of money on an eGPU enclosure, then this is not the machine for you. Like I said, this is for a specific market.
An External GPU and a Hub of Many Ports
The eGPU Pro is, at its core, a graphics card shoved into a hub. It adds much needed speed and performance to an existing computer system. These units have been popular for the last few years because they can breathe performance life back into older machines, or ones that can’t swap out their graphics cards.
Until the last year or so, using one has required some hacking and fiddling to get it to play well in the OSX ecosystem. All that changed with the release of High Sierra. You still had to restart your machine to get it to detect an external GPU, but it was closer to plug and play.
With Mojave, this process is hot-swappable. You can plug and play at will, without losing the 45 seconds it takes you to reboot. The other big innovation that makes an eGPU a reasonable piece of accelerant hardware is Thunderbolt 3. And this is the giant caveat: you need a Thunderbolt 3 capable machine. Really. While there are TB3 to TB2 adapters that will allow you to connect and use an eGPU, you’re cutting your bandwidth in half, which pretty much defeats the purpose of adding an external GPU.
The Blackmagic Design eGPU Pro is for machines that have Thunderbolt 3 (USB-C) and a GPU that’s underpowered and can’t be upgraded. So we’re talking about Apple products. Specifically the entire starting line-up of MacBook Pros in 2016, iMacs starting in 2017, and Minis in 2018. If you have one of these machines with an Intel Graphics GPU, you’re the person who will benefit the most from an eGPU Pro. Especially if you are using that machine to edit and color grade compressed 4K video in Davinci Resolve, FCPX or Premiere.
The eGPU Pro fits so nicely into the Apple OSX ecosystem that it is the only external GPU you can buy directly from Apple. Plugging your entry-level iMac, MBP or Mini into the eGPU Pro will mean playback leaps from 3-4 fps up to realtime while you stack on nodes. It’s the difference between ripping your hair out and getting work done. And it’s also a nice, attractive and quiet hub that will connect to the very popular LG UltraFine 5K display at full resolution with a single USB-C cable. It’s the only eGPU with Intel’s Titan Ridge Thunderbolt 3 controller, which brings DisplayPort 1.4 support to all the TB3 ports.
And that’s also who this is for: the person who owns that 5K LG display and a Mini or MBP and wants to work smoothly in Resolve, FCPX or Premiere for serious creative work. It’s for the person with a pretty great iMac from 2017 that they don’t want to replace yet but who needs some GPU heavy lifting. If you already have a machine with a discrete, higher-end GPU cruising along inside, the performance differences are much less noticeable.
What About Performance on a Pro Machine?
I tested the loaner unit from Blackmagic Design for about two weeks on my entry-level iMac Pro with a Radeon Pro Vega 56, and it provided a nice boost, but I really only noticed the difference when I was stacking a lot of nodes onto h.264 4K footage. My iMac Pro was suddenly a dual-GPU setup. Which was cool, but in terms of actual performance, I saw about a 10-15% increase in render speeds and playback, but only on clips that had multiple nodes and/or a lot of Noise Reduction applied.
This is a significant boost. But if you’re someone buying an iMac Pro, you could just spend another 700 dollars for the Pro Vega 64X GPU, or more RAM and processors.
Why Can’t I Upgrade the Card Inside?
Here is the big complaint leveled at the BMD eGPU Pro: unlike every other eGPU unit on the market, the Blackmagic Design eGPU Pro takes a very Apple-like design approach, in that you can’t swap out the card inside. This renders the unit semi-disposable, since GPUs are a volatile commodity in the economy of computer system building. Today’s top-of-the-line “pro” GPU will become a slow-moving tugboat compared to the options available 18 months from now.
I think that this is largely true in fields like 3D animation and machine learning, and if you’re looking to build a system with lots of raw GPU processing power, this is really the wrong thing to invest your limited resources in. But that power user is not who this unit is aimed at. This is for accelerating Resolve on Mac hardware with USB-C ports. I know, that’s a very specific market. I have no idea how many people that may be, but Blackmagic does a great job of creating relatively affordable, highly specific pieces of hardware for what I imagine are very niche markets. How many people need thousand-dollar color correction control surfaces?
But Why Is It $1200?
You can buy another eGPU enclosure and Radeon RX Vega 56 Pro for about half the cost of the BMD eGPU Pro. So why the premium price?
Because the person who is willing to pay that kind of money doesn’t have the time to fiddle around with shoving a GPU into a controller in a box and troubleshooting anything that might go sideways. Blackmagic Design profiled who I think is the perfect user for the eGPU: the team that did dailies for ROCKETMAN.
They already had an iMac 5K and needed some extra power to iterate looks, create different LUTs and knock out h.264 dailies. A note: that profile covers the base model, not the Pro, but the market is the same.
Say you already own a 2018 Mac mini that you want to set up as a dailies station, or you have a newer MBP and want a more powerful home system. For an additional 2500 bucks, you could add the eGPU Pro and an LG UltraFine 5K display and be in business with a couple of USB-C cables and a visit to the Apple Store.
Are there less expensive options for external GPUs? Absolutely. But the eGPU Pro offers up a beautiful piece of hardware, quiet and cool, with the ease of plug-and-play without the hassle of wondering which piece of your puzzle isn’t working.
Just recently, an amazing thing happened to me. Not one, not two, but three different clients within a two-week period requested that we shoot their project in 4K RAW. Big deal, you say? It actually is a big deal, and in this blog I’m going to focus on why this represents an actual global mind shift, at least for our clients. Frankly, compared to the feedback we were getting before this on what formats our clients wanted us to shoot, the whole thing has left me feeling like I’m living in a parallel universe. It’s like that Seinfeld episode where Elaine meets three doppelgangers for Jerry, George and Kramer who are the same yet completely different in attitude and actions. (If you can’t tell, I’m a Seinfeld fan and go through life assuming that most other people in Western Civilization have also watched the show. Weird, huh?) In the episode, Jerry tells Elaine about the existence of a Bizarro world where everything is the opposite of the reality that you know.
A Change of Attitude
For years, I’ve been trying to convince our clients of the value of shooting their projects in RAW. When I shoot still photography, I have been shooting RAW files for as long as I can remember, but for video, shooting RAW was, until fairly recently, an expensive endeavor in both budget and time. It still is to a point, but the bar has been rapidly falling as media and storage costs drop and capable computers and editing software become more and more common. Our clients’ clients are mostly the studios’ PR/Marketing and Home Entertainment departments, and even today, most of them are very conservative, preferring 1080 over 4K or any greater resolution. We typically shoot a bunch of long interviews for these clients’ projects. A good portion of this footage is shot green screen, so I’ve been trying to get our clients to move to shooting RAW, especially for when we shoot green screen.
Which of these two formats that our camera shoots do you think would be better for shooting great green screen footage?
XF-AVC – UHD (3840×2160) 8-bit 4:2:0 shot at 160 Mbps.
Cinema RAW Light – DCI (4096×2160) 12-bit 4:2:2 shot at 1 Gbps.
As an editor who occasionally composites, the 12-bit footage would allow for pulling much cleaner and smoother composites without a doubt.
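The bit-depth gap behind that claim is easy to quantify: 8-bit video carries 256 levels per channel while 12-bit carries 4,096, sixteen times finer gradation in the green channel you key on. A quick illustration of the arithmetic:

```python
# Tonal levels per channel at each bit depth.
# 8-bit: 2^8 = 256 levels; 12-bit: 2^12 = 4096 levels.
for bits in (8, 12):
    print(f"{bits}-bit: {2 ** bits} levels per channel")

print(f"ratio: {2 ** 12 // 2 ** 8}x")  # 16x finer gradation
```

That extra precision is why edges around fine hair hold up so much better when pulling a key from 12-bit footage.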
The Cost Is Considerable
There’s a considerable cost to shooting RAW footage, though. That cost can be broken down into two categories: media and editing/archival storage costs, and time.
To give you an idea of the media costs that it takes to shoot RAW, of course, it varies with the camera. On the high end, cameras like the Panasonic Varicam 35, the Canon C700FF, the RED lineup and the Arri lineup are all capable of shooting RAW 4K and in some cases, up to 8K.
As an example, if you use the Canon C700FF, the add-on Codex RAW recorder costs you about $7,000. Plus, you need to add on another $7,000 per 2 TB storage drive. And don’t forget another $5,700 for the Codex drive reader. All in, you’ll pay an additional $20k-plus to shoot RAW on that camera. If we go down the line to the C700FF’s little brother, the C200, the economics to shoot RAW change considerably. The C200 shoots a fixed 5:1 compression ratio Cinema RAW Light format to CFast 2.0 cards. In the beginning, a little over a year and a half ago, these cards were pretty expensive, but since then, because there are now so many other cameras that can shoot the same cards, economies of scale have kicked in and you can buy a 256 GB CFast 2.0 card for as little as $149.
If I can buy a 256 GB card for $149, how long a recording will that card hold? With its fixed data rate of 1 Gbps, the C200 will record 34 minutes of DCI 4K to the 256 GB card. A 256 GB SD card in the C200 won’t record 4K RAW, but it will let you record 4K (UHD) XF-AVC at 160 Mbps; that recording will be 8-bit, not 12-bit, and will not work very well for green screen compositing. The XF-AVC recording is 6.2X smaller than the CFast 2.0 recording, though.
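Those figures follow directly from the data rates. A quick back-of-the-envelope check, treating 256 GB as 256 × 8 gigabits and ignoring filesystem overhead:

```python
# Record time on a 256 GB card at the C200's fixed data rates.
card_gb = 256
card_gigabits = card_gb * 8        # 2048 Gb of capacity

raw_rate_gbps = 1.0                # Cinema RAW Light: 1 Gbps
xfavc_rate_gbps = 0.160            # XF-AVC: 160 Mbps

raw_minutes = card_gigabits / raw_rate_gbps / 60
xfavc_minutes = card_gigabits / xfavc_rate_gbps / 60

print(round(raw_minutes), "min of Cinema RAW Light")   # ~34 minutes
print(round(xfavc_minutes), "min of XF-AVC")           # ~213 minutes
print(round(raw_rate_gbps / xfavc_rate_gbps, 2), "x size ratio")
```

The 1 Gbps / 160 Mbps ratio works out to 6.25x, matching the roughly 6.2X difference in file sizes.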
Time Isn’t On Your Side
One other thing you should consider is the time it takes to download and clone these RAW files. To shoot 34 minutes of XF-AVC, I can download the footage to a drive in about 3 to 4 minutes whereas 34 minutes of the Cinema RAW Light footage takes between 24 and 28 minutes on average. You can see how, if you’re shooting hours of long interviews in RAW, it’s easy to fall behind and possibly run out of cards to shoot to. This has been the other major factor that has, until recently, soured our clients on letting us shoot at least some of their projects in RAW.
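Those offload numbers are worth planning around before a long interview day. A rough sketch of the arithmetic, using the article’s figures of roughly 34 minutes of Cinema RAW Light per card and an average offload time per full card; the function name is mine for illustration:

```python
import math

# How many 256 GB cards does a day of RAW interviews fill,
# and how long will offloading them take?
# Figures from the article: ~34 minutes of Cinema RAW Light per card,
# ~26 minutes (midpoint of 24-28) to offload a full card.
def offload_plan(interview_minutes, card_minutes=34, offload_per_card=26):
    cards = math.ceil(interview_minutes / card_minutes)
    return cards, cards * offload_per_card

cards, offload = offload_plan(4 * 60)  # four hours of interviews
print(cards, "cards,", offload, "minutes of offloading")
```

Four hours of interviews fills eight cards and ties up a DIT for over three hours of copying, which is exactly how a shoot runs out of cards.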
A Change of Heart
I attribute our clients’ recent change of heart about letting us shoot at least some of their projects in RAW to a few factors. The first is that storage drive costs have continued to fall. We recommend the Seagate Backup Plus Hub 8 TB drives that have been available at Costco for as little as $129 on sale, less than $17 per TB, which is quite incredible. The drives are name brand, as reliable as anything else on the market, and while not fast enough to serve as good editing drives, they are an excellent value as storage drives to hold client footage. We always insist on a minimum of a double backup for all footage and highly recommend triple backups for crucial projects, with at least one set of media stored at a location separate from the main drive(s). The cost of storing RAW is now pretty minimal for clients.
The other big factor has been simply picture quality. We make sure to light green screen properly, but even with perfect lighting, pulling clean composites can be challenging with blonde hair, thinning hair with the green screen shining through it, the view through the lenses of glasses and other challenges like these. Having 12-bit 4K makes compositing much less of a problem-solving exercise, as the 12-bit signal, when properly exposed, is incredibly robust to work with. Clients have seen the value in better-quality footage and now seem willing to spend the extra time for us to shoot RAW, download it to their media, and for their assistant editors to convert the media to proxies for offline editing.
As a cinematographer, shooting the best quality format and resolution makes me happy because it gives clients the most options to do what they need to with the footage. Shooting RAW makes clients happy because it results in fewer headaches with quality, being able to adjust white balance after the shoot and they can archive essentially what becomes a digital negative, just like we used to do with physical negatives in the days of shooting film. Shooting RAW isn’t the ultimate panacea for all problems, and it’s not right for every workflow, but it’s definitely worth considering if you’re trying to differentiate your work and the value you can add to clients, studios and distributors.