Hello,
Today I’m going to show you a proven method for transcoding 2.8K Anamorphic ProRes shot on the Alexa Mini or ARRI Amira. This particular format throws you some curveballs in the transcoding process that require some in-depth troubleshooting. I’ve figured out a reliable workflow to deal with it, and I’ll present it here in a set-ready format so you can solve the problem when you are under the pump on location.
***If you just want the solution to the problem, please scroll down to the ‘STEPS TO TRANSCODE’ Section.***
FOOTAGE DETAILS
ACQUISITION FORMAT
File: Quicktime .mov Container
Codec: ProRes 4444 XQ
Resolution: 2880 x 2160
Project Frame Rate: 25fps
Embedded Timecode, ARRI Filenaming Convention
TRANSCODE FORMAT
File: Quicktime .mov Container
Codec: ProRes 422 Proxy
Resolution: 1920 x 1080
Project Frame Rate: 25fps
Timecode and Filename Passthrough
SOFTWARE
In this example we’ll be using DaVinci Resolve 12.5.3 for our transcode. The software can be obtained free of charge from Blackmagic Design or directly from the App Store.
LENS SQUEEZE FACTOR
In the Amira/Mini there is an option in ‘Project Settings’ called ‘Lens Squeeze Factor’. This setting writes metadata that tells your computer whether the footage should be desqueezed. The ‘Lens Squeeze Factor’ options are ‘1.0x’, ‘1.3x’ and ‘2.0x’. For spherical lenses you should leave this setting at its default of ‘1.0x’. In the past I found that setting it to ‘2.0x’ would automatically desqueeze anamorphic footage when played back in Quick Look or QuickTime. Recently I have found this is no longer the case: despite being set correctly on camera, the metadata passthrough and implementation does not work. This setting should still be your first line of defence for dealing with anamorphic footage; if it doesn’t work, we deal with it in the transcode.
HOW THE FOOTAGE SHOULD LOOK
AS SHOT ON CAMERA:
This is a photo of the Alexa Mini EVF. It shows the image area within the white framelines, and our recording area is everything we see, including the shaded section. In 2.8K Anamorphic mode the sensor shoots a default aspect ratio of 2.66:1; we were framing for 2.40:1, which is why our framelines crop the recording area on the left and right. For post production finishing we essentially have horizontal racking room should we decide to use it.
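To put figures on that racking room, here is a quick arithmetic sketch (my own illustration, using the format details listed at the top of this post, not something from the camera menus):

```python
# Sketch: how much horizontal racking room a 2.40:1 frame leaves inside
# the 2.66:1 recording area of 2.8K Anamorphic (illustration only).

rec_w, rec_h = 2880, 2160          # sensor recording resolution
squeeze = 2.0                      # 2x anamorphic lenses

desqueezed_w = rec_w * squeeze                 # 5760
recording_ar = desqueezed_w / rec_h            # ~2.667, i.e. 2.66:1

framed_ar = 2.40                               # framelines used on set
framed_w = framed_ar * rec_h                   # desqueezed width of the 2.40:1 frame

racking_room = desqueezed_w - framed_w         # spare width, left and right combined
print(f"Recording AR: {recording_ar:.2f}:1")
print(f"Racking room: {racking_room:.0f} desqueezed px "
      f"({racking_room / squeeze:.0f} px in the squeezed file)")
```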
FOOTAGE DIRECTLY AFTER OFFLOAD:
After you have wrangled the footage from the camera CFast card to your hard drives, this is how it will appear. As you can see it’s squeezed because it was shot on anamorphic lenses, and it’s quite flat because we were recording LogC as opposed to REC709.
INCORRECT TRANSCODE:
If you follow standard transcode procedures your footage will end up looking like this. It’s basically your desqueezed image inside a 1920 x 1080 frame, except that due to some ARRI image processing your footage has a black border around it. If you deliver a transcode like this to Post, I can guarantee they won’t be happy.
CORRECT TRANSCODE:
If you follow my transcode method your footage will appear as follows. It delivers the full gate of your shooting format, essentially the entire 2.66:1 image we saw above. Most post houses and editors prefer their transcodes that way: they have all the information, they know whether there is any room to rack should a boom come into frame, and they can use information outside of the DP’s frame (image area) for VFX tracking, etc.
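As a rough sanity check of what the correct deliverable looks like (my own arithmetic, not an ARRI or Resolve figure), a full 2.66:1 image fitted into a 1920 x 1080 frame letterboxes like this:

```python
# Sketch: where the letterbox bars land when the full 2.66:1 image is
# scaled to fit a 1920 x 1080 transcode (illustration only).

timeline_w, timeline_h = 1920, 1080
image_ar = 2880 * 2 / 2160          # ~2.667, the full-gate desqueezed AR

active_h = timeline_w / image_ar    # height the picture occupies: 720 px
bar_h = (timeline_h - active_h) / 2 # letterbox bar top and bottom: 180 px each

print(f"Active picture: 1920 x {active_h:.0f}")
print(f"Letterbox bars: {bar_h:.0f} px top and bottom")
```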
STEPS TO TRANSCODE:
STEP 01:
Open DaVinci Resolve and Load Project. I have a default project setup specifically for on-set data/transcoding. Important Settings for Transcoding are as follows:
- Master Project Settings -> Timeline Format -> Timeline Resolution: 1920 x 1080 HD
- Master Project Settings -> Timeline Format -> Pixel Aspect Ratio: Square
- Master Project Settings -> Conform Options -> Use Timecode ‘Embedded in the Source Clip’
- Image Scaling -> Image Scaling Preset -> Mismatched Resolution Files ‘Scale Entire Image to Fit’
- Image Scaling -> Output Scaling Preset: Match Timeline Settings
- Image Scaling -> Output Scaling Preset -> Mismatched Resolution Files ‘Scale Entire Image to Fit’
STEP 02:
Make sure you are in the ‘Media Workspace’. Use the window in the top left to navigate to your camera media, select the folder containing your card, right click and ‘Add Folder and SubFolders Into Media Pool’. You should see all camera clips appear in the bottom half of the screen in your ‘Media Pool’.
STEP 03:
Select the clips that you will be transcoding from the ‘Media Pool’. You can use the Keyboard Shortcut ‘Apple + A’ to Select All. In this instance it’s just one clip so I can simply click it. Once all clips are selected, right click and select ‘Clip Attributes’.
In the ‘Clip Attributes’ window, click the dropdown next to ‘Pixel Aspect Ratio’ and select ‘Cinemascope’. The DaVinci Resolve documentation never specifically states what ‘Cinemascope’ is, but I can tell you from experience that it is a 2.0x desqueeze, which is exactly what our anamorphic footage requires.
Once selected, hit OK and you’ll be returned to the ‘Media Workspace’, where you can now see your footage desqueezed in the Preview Window. Please take note of the resolution value next to our clip in the Media Pool: it says 2944 x 2160, and this will be important for solving our black border issue.
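If it helps to picture what that ‘Cinemascope’ setting is doing, here is a small sketch treating it as the 2.0x desqueeze described above (my interpretation, not an official definition from the Resolve documentation):

```python
# Sketch: what a 2.0 pixel aspect ratio does to the stored frame
# (interpreting Resolve's 'Cinemascope' setting as a 2x horizontal stretch).

stored_w, stored_h = 2944, 2160     # resolution Resolve reports for the ProRes file
pixel_aspect_ratio = 2.0            # 'Cinemascope' treated as a 2x desqueeze

display_w = stored_w * pixel_aspect_ratio
print(f"Stored frame: {stored_w} x {stored_h}")
print(f"Displayed as: {display_w:.0f} x {stored_h} "
      f"(AR {display_w / stored_h:.2f}:1, hence the black border)")
```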
STEP 04:
Use the tabs at the bottom of the screen to change to the ‘Edit Workspace’. Here you will select all of your clips from the ‘Media Pool’ and drag them down into the ‘Timeline’. When you drag them in, a new timeline called ‘Timeline 1’ is automatically created, which also appears in your ‘Media Pool’.
STEP 05:
In this step we are going to apply a LogC to REC709 Look Up Table so that the transcode appears as photographed on-set rather than low contrast and desaturated. Use the tabs at the bottom of the screen to change to the ‘Color Workspace’. Here you will see the clips from your ‘Timeline’ in the middle of the screen. Make sure they are all selected, right click and choose ‘3D LUT -> ARRI -> ARRI Alexa LogC to Rec709’. You will see your image preview increase in contrast and saturation, which tells you the LUT has been applied correctly.
STEP 06:
In this step we are going to fix our black border issue. For the maths behind the figures see below; here we’ll just detail how to get results.
In the lower left quadrant of the ‘Color Workspace’ there is a smaller window with tabs such as ‘Camera Raw’, ‘Color Wheels’, ‘Window’, etc. Select your clip and then click on the ‘Sizing’ tab.
This tab contains various sizing adjustments that alter the image. The two we are concerned with are ‘Width’ and ‘Height’. The default setting for both parameters is ‘1.000’, which means no manual resizing is applied and the image is at its default proportions. We essentially need to scale up, or zoom in on, the image to get rid of the black border, and this is where that work is done. Change the value from ‘1.000’ to ‘1.059’ for both ‘Width’ and ‘Height’. You will see your preview image update to reflect a full frame 2.66:1 image as intended.
UPDATE:
It appears that something has changed slightly, either in the way ARRI processes its images in the QuickTime container or in the way Resolve handles them. It has been brought to my attention by a reader, and I’ve also noticed it in more recent transcodes, that the 1.059 scale on the X and Y axes actually crops the image slightly.
Based on some new calculations and a bit of trial and error, I have found that the new value to scale in on both X and Y should be ‘1.023’. Please follow this guide exactly as specified, but use ‘1.023’ rather than the original ‘1.059’ value.
Unfortunately, when you are dealing with multiple clips in your ‘Timeline’ you can’t select them all and apply the ‘Width’ and ‘Height’ adjustments as detailed above; you’d have to do each clip individually. That would be massively time consuming, but fortunately there is a solution. Once you have entered the appropriate ‘Width’ and ‘Height’ values for one clip, press the ‘Create’ button in the top right of the ‘Sizing’ tab. This creates a ‘Format Preset’ containing all of our applied sizing values. Name it something that makes sense to you and hit the ‘Save’ button.
When dealing with multiple clips in the ‘Color Workspace’ you can then select all of the clips that need the sizing transform, right click and select ‘Change Input Sizing Preset -> [the Format Preset we created]’; in my case it is called ‘2.8K_ProRes_Ana’. This applies the preset ‘Width’ and ‘Height’ transformation to all selected clips.
STEP 07:
We are now done with desqueezing, LUTs and fixing the black border, so it’s time to actually transcode. Use the tabs at the bottom of the screen to change to the ‘Deliver Workspace’. The top left corner is where you set up your ‘Render Settings’; thankfully Resolve makes it quite straightforward. Set up your ‘Render Settings’ as required for your specific project.
‘Render Settings’ are broken down into 3 Categories, ‘Video’, ‘Audio’ and ‘File’. I will detail my render settings below as per my transcode format at the very top of this page.
MAIN:
- Location: Desktop
- Render as ‘Individual Clips’
VIDEO:
- Video -> Export Video
- Format: Quicktime
- Codec: Apple ProRes 422 Proxy
- Leave Field Rendering Unchecked
- Resolution: 1920 x 1080 HD
- Advanced Settings: Always Check But Usually Leave at Defaults
AUDIO:
- Audio -> Export Audio
- Codec: Linear PCM
- Channels/Bit Depth: Default (2 Channels/Bit Depth 16)
FILE:
- Filename Uses ‘Source Name’
- Everything Else as Default
Once you are 100% happy with your ‘Render Settings’, press ‘Add to Render Queue’, which adds your timeline to the ‘Render Queue’ on the right of the screen in the ‘Deliver Workspace’. Once it’s in there, press the ‘Start Render’ button and away you go.
STEP 08:
Once your transcode is complete, I always conduct tests to ensure everything has gone as expected. I play the clip back in QuickTime, check that it looks as expected, and listen to confirm the sound has passed through. I then open my original clip and my transcoded clip in Resolve and play them back from the ‘Media Workspace’. Here you can check that the timecode matches between the clips; it is displayed in the top right corner of the image preview window. I also check that the filenames match, which can also be done in the image preview window as seen below. Once the check and test phase is done, you are good to go and the transcode is complete.
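If you’d rather script part of that sanity check than eyeball everything, a minimal sketch along these lines works, assuming ffprobe (part of FFmpeg) is installed. This isn’t part of the Resolve workflow above, just an optional cross-check, and the filenames are placeholders:

```python
# Minimal sketch: cross-check a camera original against its transcode with
# ffprobe (FFmpeg). Filenames below are placeholders, not real clip names.
import json
import subprocess

def probe(path):
    """Return codec, resolution, frame rate and timecode tags for a clip."""
    cmd = [
        "ffprobe", "-v", "error",
        "-show_entries",
        "stream=codec_name,width,height,r_frame_rate:stream_tags=timecode:format_tags=timecode",
        "-of", "json", path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# Substitute your own camera original and transcode paths here.
for label, path in (("source", "A001C001_XXXXXX_R1AB.mov"),
                    ("proxy", "transcodes/A001C001_XXXXXX_R1AB.mov")):
    info = probe(path)
    video = next(s for s in info["streams"] if s.get("width"))   # first video stream
    tc_candidates = [s.get("tags", {}).get("timecode") for s in info["streams"]]
    tc_candidates.append(info.get("format", {}).get("tags", {}).get("timecode"))
    timecode = next((t for t in tc_candidates if t), "n/a")
    print(label, video.get("codec_name"),
          f'{video["width"]}x{video["height"]}', video.get("r_frame_rate"), timecode)
```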
THE MATHS
In Step 03 above we briefly touched on the cause of the black border issue, and in Step 06 I applied a 1.059 scale to the image to solve it. Now it’s time to geek out a little and explain how I obtained the magic number ‘1.059’.
THE PROBLEM:
The recording resolution of the camera is: 2880 x 2160
The resolution of the ProRes Clips straight out of camera are: 2944 x 2160
The camera resolution can be referenced on the home screen of the Alexa Mini/Amira menu; you can see it below where it says 2.8K ‘(2880×2160)’.
The resolution of the ProRes clips can be checked in Resolve, as we touched on before, or by simply using the ‘Get Info’ function in macOS. Select the clip and either right click -> ‘Get Info’ or press ‘Apple + I’. As you can see below, in the ‘More Info’ section the resolution, or ‘Dimensions’, is listed as ‘2944 x 2160’.
Weird, right? That’s an extra 64 pixels on our horizontal image plane. Why, you ask? ARRI explains it perfectly in the Alexa Mini SUP 4.2 Release Notes:
The key words here are ‘padded with black pixels’, ‘flagged in metadata’ and ‘not all tools may respect that information’. In other words, the Alexa Mini/Amira adds extra black pixels around the active image area to make the ProRes codec work with the camera; it notes this in metadata, but not every program respects that metadata. Resolve is one of those programs, which is why we have a black border around our image.
THE MATHS:
As discussed, we use an image scale to solve this problem, essentially blowing up the image so the ‘recording area’ fills the frame and the ARRI-induced black border is removed. The scale is worked out from the aspect ratios you are dealing with and the resolutions being recorded, both of which we know. This was my process:
Recording Area Aspect Ratio: 2.66:1 (2.6666666667)
Resolution From Camera: 2880 x 2160
When we desqueeze this footage, which is to scale it 2.0x horizontally, it makes the resolution from camera ‘5760 x 2160’, further referenced as ‘desqueezed resolution’. The equation here is ‘(2880 x 2) x 2160 = 5760 x 2160’.
Resolution on ProRes File: 2944 x 2160
When we apply the same calculation as above to obtain the ‘desqueezed resolution’ the equation is ‘(2944 x 2) x 2160 = 5888 x 2160’.
Typically, to calculate the aspect ratio of an image you divide the horizontal resolution by the vertical resolution; for Full HD that would be ‘1920 ÷ 1080 = 1.7777777778’. Rounded to simplified form that is 1.78, i.e. the common 1.78:1 aspect ratio, better known as 16:9.
We apply a similar calculation to our ‘desqueezed resolutions’ above.
Desqueezed Camera Resolution: 5760 ÷ 2160 = 2.6666666667
Desqueezed ProRes File Resolution: 5888 ÷ 2160 = 2.7259259259
This indicates that our ‘Desqueezed Camera Resolution’ has a 2.66:1 Aspect Ratio which is as intended, but our ‘Desqueezed ProRes File Resolution’ has a 2.73:1 Aspect Ratio which is not what we were shooting.
So my next step was to find the difference between the two aspect ratios, done by subtracting the ‘Camera Aspect Ratio’ from the ‘ProRes File Aspect Ratio’: ‘2.7259259259 – 2.6666666667 = 0.0592592592’.
This tells us the difference between the ‘actual resolution’ and the ‘intended resolution’ is ‘0.0592592592’, meaning we need to scale our image by that amount to remove the black border. As you’ll remember, Resolve treats the default image scale as 1.000, so to apply that scale we add our value to 1.000: ‘1.000 + 0.0592592592 = 1.0592592592’.
As Resolve works to three decimal places for sizing input, we round ‘1.0592592592’ to ‘1.059’. You then enter that ‘1.059’ value into our ‘Width’ and ‘Height’ fields and voilà, we have our solution as well as the process for figuring out that magic number.
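The whole derivation fits in a few lines of Python if you would rather sanity-check it than work it out on paper. This is just a sketch of the arithmetic above, nothing more:

```python
# Sketch of the derivation above: camera resolution vs. padded ProRes
# resolution, reproducing the original 1.059 figure.

camera_w, camera_h = 2880, 2160   # recording resolution shown on the camera
prores_w, prores_h = 2944, 2160   # resolution of the ProRes file on disk
squeeze = 2.0                     # anamorphic desqueeze factor

camera_ar = (camera_w * squeeze) / camera_h   # 2.6666666667 (2.66:1, as intended)
prores_ar = (prores_w * squeeze) / prores_h   # 2.7259259259 (2.73:1, with padding)

scale = 1.0 + (prores_ar - camera_ar)         # 1.0592592592
print(round(scale, 3))                        # 1.059, as entered in Step 06
```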
UPDATE:
While maths is all well and good for solving many of the problems we face, sometimes it just doesn’t work. When I first tackled this transcoding problem a scale of ‘1.059’ on the X and Y worked perfectly to correct the black border; now it doesn’t. So for future iterations of this transcode task, please use a value of ‘1.023’ on the X and Y scale to ensure you have a full frame anamorphic image with no cropping.
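For what it’s worth, the updated figure lines up almost exactly with the straight ratio of the padded ProRes width to the active camera width. This is my own observation rather than anything from ARRI or Blackmagic, but it may be where ‘1.023’ ultimately comes from:

```python
# Observation (mine, not from the ARRI or Resolve documentation): the updated
# value sits right on the simple ratio of padded width to active width.
padded_w, active_w = 2944, 2880
print(round(padded_w / active_w, 3))   # 1.022, effectively the 1.023 used above
```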
CONCLUSION
I hope you have found this transcoding and problem solving tutorial helpful. My hope is that it gets you out of a tricky and time consuming mathematical exercise on-set when the DP wants to shoot 2.8K Anamorphic ProRes and the PM asks you to provide transcodes for Post Production at very little notice, as has happened to me in the past. If you’ve got any questions or queries, feel free to hit me up in the comments below.
Thanks!
COMMENTS
Thanks for this walk-through. But am I right in assuming that you’re losing some pixels on each side of the frame, as seen above in the screenshot of your test chart?
Hi Tony,
Thanks for touching base. When I initially tackled this transcoding problem the original values specified worked perfectly to solve the black border issue; I used them on many jobs with no problems.
It appears that something has changed, either in the ARRI image processing or in the way Resolve interprets it, both when I compiled the assets for this post and over the past few weeks while I’ve been working.
In future, when completing this transcode task please apply a scale of ‘1.023’ to the X and Y rather than the specified ‘1.059’. This will ensure a full frame anamorphic image with no black borders and no minor image cropping. I have verified this and confirmed that it works.
Please let me know if you have any other issues!
Regards
Brad
Thanks so much for the detailed guide! I kinda got lost on the math, but are you sure you need to resize both the X and Y to get rid of the borders? Or should you just be stretching X? It seems to me that you shouldn’t need to do anything to the Y dimension in an image that is just squeezed horizontally. But after scaling up X and Y, is it still a perfect 2:1 ratio?
Thanks!!
Hi Andrew,
No worries at all, I’m glad you found it helpful. You definitely need to resize both the X and the Y; if you only resize the X it will stretch the image slightly and distort it, so proportions should be constrained when scaling up. I understand the point you are making, but despite the black borders there is no way the images come out of the camera squashed or stretched. The native aspect ratio for the format discussed (2.8K Anamorphic ProRes) is 2.66:1, which is usually cropped at the sides to make 2.39:1. So yes, if you scale up both X and Y you maintain that 2.66:1 native aspect ratio.
Thanks!
Hi Andrew,
Thanks for the timely walk-through! We’re intending to use the camera original ProRes files to edit in Premiere and then take the finished edit to Resolve. From the looks of your workflow applied to Premiere, we just interpret the footage as 2:1 anamorphic and it will then display correctly? To my eyes it seems to be the case.
Then it’s just a matter of scaling it in our sequence to fit? Like the other commenter, I was also confused as to whether we should just be scaling the X axis.
It looks like we’ll be editing in a 2.39:1 sequence, so we will lose some of the image on either side anyway.
Thanks Andrew!
Chris
Hi Chris,
Thanks for reading and reaching out. Just for the record, my first name is Brad; it often gets confused with my surname, Andrew, so all good!
That sounds about right to me. If you’ve got fast enough drives and a decent system, editing 2.8K ProRes natively in Premiere shouldn’t be a problem. As you say, if you shot anamorphic, definitely run a 2x desqueeze and you should be good to go; your eyes will be the best judge. Just remember the native aspect ratio of desqueezed 2.8K Anamorphic is 2.66:1, so if you framed for 2.39:1 you’ll need to zoom in 11.3% to ensure what you see in Premiere matches what you shot, and then you’ll have left to right racking room for corrections. Or set up your sequence for 2.39:1, which it sounds like you have done.
Keep in mind that the aspect ratio scaling mentioned above is different from the black border scaling. Different software handles the black borders differently: if you open the files in Quick Look or QuickTime on a Mac you won’t have black borders, but you will in Resolve. You’ll get different results if you play back in Silverstack, Premiere, VLC, etc., and it also depends on what version you are running. If you are cutting natively in Premiere and you don’t see any black borders, then you shouldn’t need to scale to correct this issue.
To understand the uniform scale for the black border you need to look at some specs. The camera shoots at ‘2880 x 2160’ but the QuickTime container reports ‘2944 x 2160’, which on its own would suggest we only need to scale the X axis. But when you look at the desqueezed image in Resolve, circles still look like circles, and I’m confident the black border runs around the entire image even though you can’t see all of it, as it blends with the slug of the 2.66:1 aspect ratio. There is no way ARRI would record a squashed image out of camera, and if you scale only the X you’ll be distorting your image, which is certainly not desirable.
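To put a rough number on the distortion an X-only scale would introduce (my own back-of-envelope figure, using the resolutions above):

```python
# Back-of-envelope: the distortion an X-only scale would introduce,
# compared with the uniform X/Y scale used in the guide.
padded_w, active_w = 2944, 2880
x_only_stretch = padded_w / active_w
print(f"X-only scaling stretches the image {100 * (x_only_stretch - 1):.1f}% "
      "wider relative to its height; a uniform scale keeps circles circular.")
```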
I hope this helps!
Regards
Brad
I recently shot my first anamorphic project on my Sony F5. We are having post issues regarding transcoding and final output. Do you have any ideas about the scaling when shooting 4K RAW on the Sony F5 with anamorphic lenses in DaVinci? Thanks
Hi Andrew,
Thanks for reaching out. Most of my work deals with Alexas and REDs; I’ve used Sony cameras from time to time but unfortunately can’t shed much light on troubleshooting your transcoding problems.
If the Sony RAW Processing doesn’t do anything weird with pixel aspect ratio or false resolutions then I’d imagine it’d be a 2x Desqueeze and that’s it.
Excellent article! If you were required to have the transcodes at 2.39:1 and/or 2.4:1 instead of the full 2.66:1, what would the input settings be for that?
Thanks
Thanks Simon. Glad that you found it useful!
If you have a native aspect ratio of 2.66:1 you need to scale that in by 11.111% to make it 2.40:1.
I’m on the road at the moment without access to a computer, but you can combine that 11.111% with the scale you are already applying to lose the black border. It shouldn’t be too tricky to figure out.
I’d always be pretty careful with anything like this, as the last thing you want is an incorrect scale and your transcodes representing something other than the DP’s frame.
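If it helps, my guess is that the two scales simply multiply together; treat this as a starting point and verify it against a framing chart rather than taking it as gospel:

```python
# Rough combination of the two scales (verify on a framing chart first):
# the black-border correction on top of the 2.66:1 -> 2.40:1 crop.
border_scale = 1.023                      # updated value from this guide
crop_scale = (2880 * 2 / 2160) / 2.40     # 2.666... / 2.40 = 1.111 (the 11.111%)
print(round(border_scale * crop_scale, 3))  # roughly 1.137
```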
Hmm, I think I’m having this same issue. This article has been super helpful as a reference, so thank you! But if I’m finishing for 2.39, how would I work with this file?
Hey Jesse,
Thanks for reaching out. The instructions in this tutorial will get you an output of the full gate at 2.66:1. If you want to finish 2.39:1 you’d just want to crop in a little bit more, so that you lose your left to right racking room.
I’d import the footage, set it to Cinemascope to desqueeze it and then apply an 11.5% scale to the image. That should give you 2.39:1 as your output aspect ratio. You’ll need to double check it’s all working on your end though; shooting a framing chart would be beneficial. Another way is to make up a 2.39 frame leader, which you can build here: http://www.arri.com/camera/alexa/tools/arri_frameline_composer/ and download as a PNG file, then overlay that on your transcodes to check all is good.
Just wondering: I’ve done this a few different ways on features, based on post house and editorial requests… couldn’t I just create a custom input sizing or output sizing and apply it to the entire project, to save on all those clip attribute steps? Much faster for when you have to add reels and reels of footage all day long.
If that makes sense, would the desqueeze be H: 1.000 W: .694?
Save it as a custom scaling input preset,
then override input scaling and set it to the custom preset.
This would be for a 2x anamorphic project and a 2.39 frame;
the left and right ends will be cropped off at some point down the pipe.
Any input is appreciated.
Hey Ryan.
Thanks for reaching out!
I see what you mean: rather than changing the pixel aspect ratio to Cinemascope, you’d just squish the height of the image to achieve a similar desqueeze effect? The problem I see with that is that it needs to be set to Cinemascope to do the desqueeze at 2x, and then you need to scale the whole image up, as the black border theoretically exists on the left/right/top/bottom. But if that is being sorted further down the pipeline, I think your method would work. I’d just want to double check that, if it’s 2.8K Anamorphic, the transcodes Resolve is spitting out are indeed 2.8K and you’re not losing any resolution or getting weird cropping by using scaling in that way.
How do you suggest applying it to the project by default? In the project settings the only options for input/output scaling are:
-Center crop with no resizing
-Scale full frame with crop
-Scale entire image to fit
-Stretch frame to all corners
When I was testing, these didn’t work for 2.8K Anamorphic, thus I needed the custom scaling, which seems to function at clip level. But if you weren’t worried about the black borders, you could just set Output Scaling to ‘Cinemascope’ and ‘Scale entire image to fit’, which should do the trick.
Hopefully that makes sense!
Hi! My source material is at a resolution of 2880×2160. I changed the clip attributes to Cinemascope and started doing colour correction using masks (Power Windows).
After that, I rendered at the original resolution, but after the render I realised that all my masks had shifted by a certain number of pixels! They are now out of place. I had to redo the colour grading at the original resolution without changing the clip attributes. My settings were Input Scaling: ‘Stretch frame to all corners’; then the masks were displayed correctly after the render. But I have a 1920×1080 monitor and the image is stretched, which the client does not like very much; it’s hard for him to judge the result. What am I doing wrong? How do I work in Cinemascope mode and render at the original resolution so that the masks are displayed correctly? Or how do I configure an HD monitor to display a Cinemascope image when I leave the clip attributes in square mode?
Hello Kirill,
How are you going?
I suspect that if you take 2880×2160 footage, desqueeze it with Cinemascope and export at 2880×2160, you will either be letterboxing your image or hitting some sort of translation problem like the one you describe. Given that your source footage is squeezed, are you intending to output it squeezed or desqueezed?
I wouldn’t worry about your HD Monitor for the time being. Ideally you want the image to display correctly on all monitors. Sounds like the problem is related to the settings within Resolve.
What role does ‘Input Scaling: Stretch frame to all corners’ play in your workflow? I’d be careful with that setting; it sounds like it may be messing with the aspect ratio or not scaling as you intend.
Thanks!
Hi!
Can’t get my head around my issue, maybe you can advise?
So, I shot 2880×2160 with anamorphic lenses. Now I’d like to desqueeze the footage and export it to the biggest possible new QuickTime files. Most of the solutions I’ve found so far are either just HD files or they have black borders above and below.
I’m picturing that I should be able to make new files with a height of 2160? Or have I misunderstood it all?
Thanks for any advice!
Hello Urball,
Thanks for reaching out!
Just wanting to confirm what camera you’re using? If you desqueeze 2880×2160 that would be 5760×2160. You could make a timeline in Resolve that is 5760×2160, import your footage, apply Cinemascope, and in theory it should be represented as full gate within the timeline. Then you export as QuickTime at that desqueezed resolution. The files will be very big, though, and you’ll also need Resolve Studio, as the free version is limited to UHD output.
Hopefully that helps, let us know how you go!
Hey Brad,
Could you give me some advice? My maths for the scaling keeps saying I need to scale it to “1.57”; is that right?
How much should I scale in if my resolution from the camera is 2880 x 2160 and
the resolution of my ARRIRAW file is 3424 x 2202?
We shot it on an ARRI Alexa Mini in 4:3 2.8K (OG 3.4K).
My aspect ratio is 2.39.
I appreciate this post! I usually work with prime lenses.
Thanks,
Leslye