
FFmpeg is frequently used by studios to encode their media, but its documentation is often sparse or cryptic, so arriving at a good starting point is harder than it should be. We aim to provide recommendations for different scenarios, and to document what the different flags are doing, with the goal of making it easier to get to a good baseline.


Overview

We are looking for recommendations for the following:

  • Best color preservation for output to:
    • Web, macOS, iOS and Windows.
    • Common applications: e.g. RV, Nuke.
    • Rec709, P3, Rec2020 and HDR.
    • Web browser: reviewing mp4 in Firefox (using a Firefox plugin).
    • RV
  • Codec recommendations for:
    • Proxy H264 playback, set up for web streaming.
    • Animation/Modelling/Layout movie playback: somewhat lower quality is acceptable, but it should always provide smooth motion.
    • Lookdev/lighting/compositing movie playback: should have excellent color fidelity and minimal encoding artifacts.
      • Should any filmlook be baked in, or should we assume it is always applied during viewing?
      • How much should we be able to adjust color and have the image hold up? (Or rely on exr's for that?)
    • Export to editorial.
    • High resolution or high frame rate, e.g. 4K, 8K, 60fps, 120fps.
    • Stereo or VR.
  • Q: Which container should we be considering: mov, mp4, mxf?

Wherever we use ffmpeg arguments, it would be great to document why we are using them, rather than just ending up with a recipe.

Color Preservation

Testing Methodology

Convert SMPTE color bars to the compressed movie, use ffmpeg to expand it back out, and then compare with OIIO. NOTE: for compression schemes that are not 4:4:4 we may need to mask the transitions.

Test loading the compressed movie into RV, Firefox, VLC, Avid and Resolve to compare the resulting color transformation - not sure if there is a procedural way to run this?

For the tests below we are assuming that other tools are being used (e.g. oiiotool) to convert the rendered frames into an intermediate file (e.g. PNG) in the target color-space. 
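As a point of reference, here is a minimal sketch of the encode-expand-compare round trip described above, assuming OpenImageIO's idiff is available and an ffmpeg/x264 build with 10-bit support. The filenames (smpte_bars.png, bars_test.mp4) are placeholders and the encode settings are illustrative rather than a recommendation.

# 1. Encode ~2 seconds of the still chart, 4:4:4 to avoid chroma subsampling at the bar edges.
ffmpeg -loop 1 -i smpte_bars.png -t 2 \
       -c:v libx264 -preset slow -qp 0 -pix_fmt yuv444p10le \
       -sws_flags spline+accurate_rnd+full_chroma_int \
       -vf "colorspace=bt709:iall=bt601-6-625:fast=1" \
       -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1 \
       bars_test.mp4

# 2. Expand the first frame back out to PNG.
ffmpeg -i bars_test.mp4 -frames:v 1 bars_roundtrip.png

# 3. Compare against the source chart and report the per-channel error.
idiff smpte_bars.png bars_roundtrip.png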

Q: Currently we are focusing just on matching color in vs. out, but we should also test EXR ACEScg input through to the resulting movie. It feels like we should also bless a full pipeline, e.g. the reference "Dailies script": https://github.com/jedypod/generate-dailies

Test Sources

SMPTE test chart: https://commons.wikimedia.org/wiki/File:SMPTE_Color_Bars_16x9.svg

Download image sequences from: https://senkorasic.com/testmedia/

Explore Netflix Open Content: https://opencontent.netflix.com/

Test Results:

taurich.org/encodingTests/results.html

Notes

The big issue here is that, by default, if you start converting images to another format and ffmpeg cannot determine the colorspace, it will assume bt601. Many of the flags below are therefore there to:

A: Tell ffmpeg that the source media is in fact bt709.

B: Add the metadata to the output, so that future conversions also know how to convert it back.

C: Do as clean a conversion from RGB to YUV as possible.
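To make (A) and (B) concrete, here is a minimal sketch that only tags the output stream as bt709; the named values are equivalent to the numeric ones used in the recipes below, and the input pattern and output name are placeholders. This writes metadata only - the actual RGB to YUV conversion (C) is handled by the filter examples in the next section.

# Tag the output as bt709 so downstream tools interpret it correctly.
# This does not change which matrix swscale actually uses for RGB->YUV.
ffmpeg -i frames.%04d.png \
       -c:v libx264 -pix_fmt yuv420p \
       -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range tv \
       tagged_output.mp4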

Example Usages

TODO: specify which build of ffmpeg is being used, and its build flags. (-loglevel trace is useful for seeing exactly what ffmpeg is doing.)

Name: ffmpeg colormatrix
Source: https://trac.ffmpeg.org/wiki/colorspace
ffmpeg flags: -pix_fmt yuv444p10le -sws_flags spline+accurate_rnd+full_chroma_int -vf "colormatrix=bt470bg:bt709" -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1
Description: 8bpc only. The sws_flags are needed for the RGB to YUV conversion.

-color_range 1   # mpeg (limited) range, see libavutil/pixfmt.h in the FFmpeg source
-colorspace 1   # BT709, see libavutil/pixfmt.h
-color_primaries 1   # BT709, see libavutil/pixfmt.h
-color_trc 1   # BT709 Color Transfer Characteristics, see libavutil/pixfmt.h

Name: ffmpeg colorspace
Source: https://trac.ffmpeg.org/wiki/colorspace
ffmpeg flags: -pix_fmt yuv444p10le -sws_flags spline+accurate_rnd+full_chroma_int -vf "colorspace=bt709:iall=bt601-6-625:fast=1" -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1
Description: Supports 10bpc and 12bpc, uses SIMD (faster), and gives better quality than colormatrix.
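For completeness, the colormatrix variant assembled into a complete command might look like the following sketch (input and output names are placeholders; yuv444p rather than yuv444p10le because colormatrix is 8bpc only):

ffmpeg -i frames.%04d.png \
       -c:v libx264 -crf 18 -pix_fmt yuv444p \
       -sws_flags spline+accurate_rnd+full_chroma_int \
       -vf "colormatrix=bt470bg:bt709" \
       -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1 \
       colormatrix_bt709.mp4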


Compression quality

Testing Methodology



Name: colorspace_yuv444p10le
Source: https://trac.ffmpeg.org/wiki/colorspace
ffmpeg flags: -c:v libx264 -preset placebo -qp 0 -x264-params "keyint=15:no-deblock=1" -pix_fmt yuv444p10le -sws_flags spline+accurate_rnd+full_chroma_int -vf "colorspace=bt709:iall=bt601-6-625:fast=1" -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1

Name: Prores 4444
ffmpeg flags: -c:v prores_ks -profile:v 4444 -qscale:v 1 -pix_fmt yuv444p10le -sws_flags spline+accurate_rnd+full_chroma_int -vf "colorspace=bt709:iall=bt601-6-625:fast=1" -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1
Description: -profile:v 4444 is equivalent to -profile:v 4.

Name: shotgun_diy_encode
Source: https://support.shotgunsoftware.com/hc/en-us/articles/219030418-Do-it-yourself-DIY-transcoding
ffmpeg flags: -vcodec libx264 -pix_fmt yuv420p -g 30 -vprofile high -bf 0 -crf 2

Name: Prores 422 HQ
Source: "Some FFmpeg commands I need to remember for converting footage for video editing" (GitHub gist, http://bit.ly/vidsnippets)
ffmpeg flags: -pix_fmt yuv422p10le -c:v prores_ks -profile:v 3 -vendor ap10 -sws_flags spline+accurate_rnd+full_chroma_int -vf "colorspace=bt709:iall=bt601-6-625:fast=1" -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1
Description: The -vendor ap10 flag is only needed if working with Final Cut, but it does no harm otherwise. -profile:v 3 is equivalent to -profile:v hq.
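For reference, the Prores 422 HQ row assembled into a full command for an image-sequence source might look like this sketch (paths and frame rate are placeholders):

ffmpeg -framerate 24 -i frames.%04d.png \
       -c:v prores_ks -profile:v 3 -vendor ap10 \
       -pix_fmt yuv422p10le \
       -sws_flags spline+accurate_rnd+full_chroma_int \
       -vf "colorspace=bt709:iall=bt601-6-625:fast=1" \
       -color_range 1 -colorspace 1 -color_primaries 1 -color_trc 1 \
       editorial_422hq.mov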


VMAF

I did explore using VMAF (Video Multi-Method Assessment Fusion) as a way to quantify the compression; the notes for setting it up are below. However, since we are going with a fairly high compression factor, I think it is probably not going to help us much.

https://github.com/Netflix/vmaf

https://jina-liu.medium.com/a-practical-guide-for-vmaf-481b4d420d9c

https://netflixtechblog.com/toward-a-practical-perceptual-video-quality-metric-653f208b9652

https://ottverse.com/vmaf-ffmpeg-ubuntu-compilation-installation-usage-guide/ - building VMAF on Ubuntu.
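If we do revisit this, a minimal sketch of scoring an encode against its source using ffmpeg's libvmaf filter is below. It assumes an ffmpeg build configured with --enable-libvmaf; the filenames are placeholders, the first input is the distorted/encoded clip and the second is the reference.

# Compute VMAF and write the per-frame scores to a JSON log.
ffmpeg -i encoded.mp4 -i reference.mov \
       -lavfi libvmaf=log_path=vmaf.json:log_fmt=json \
       -f null -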


ffmpeg flags

Codec

x264

x264rgb

Prores

DnxHD

RGB/YCbCr Colorspace Conversion

As a rule of thumb, we would like ffmpeg to do as little as possible in terms of color space conversion, i.e. what comes in goes out. The problem is that most codecs do some sort of RGB to YUV conversion (technically YCbCr). The notable exception is x264rgb (see above; a brief sketch follows below). For more information, see: https://trac.ffmpeg.org/wiki/colorspace and https://ffmpeg.org/ffmpeg-filters.html#colorspace
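A minimal sketch of keeping the data in RGB end to end with libx264rgb, which avoids the RGB to YUV matrix entirely (filenames and frame rate are placeholders; note that player support for RGB H.264 is limited, so this is mostly useful for internal tools):

# Lossless RGB H.264 - no YCbCr conversion is performed.
ffmpeg -framerate 24 -i frames.%04d.png \
       -c:v libx264rgb -preset slow -qp 0 -pix_fmt rgb24 \
       rgb_lossless.mov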

While it is possible to do the RGB to YUV conversion outside of ffmpeg (see later), it is easier to do it inside ffmpeg, using flags like:

-sws_flags spline+accurate_rnd+full_chroma_int -vf "colorspace=bt709:iall=bt601-6-625:fast=1"

The first part, -sws_flags spline+accurate_rnd+full_chroma_int, helps with the YCbCr conversion. It configures the swscale scaler, which has a number of options:

TODO: why spline?

accurate_rnd - enables accurate rounding.

full_chroma_int - enables full chroma interpolation.


The second part, -vf "colorspace=bt709:iall=bt601-6-625:fast=1", converts so that the output is bt709 rather than the default bt601 matrix. iall=bt601-6-625 says to treat all of the input properties (colorspace, primaries and transfer function) as bt601-6-625. fast=1 does a faster conversion that skips the gamma/primary correction; this is mathematically not strictly correct, but it is significantly cheaper.

TODO, CONFIRM: this is to get the conversion to treat the resulting data as bt709 rather than bt601? Why is fast=1 acceptable - surely we do want the full conversion applied?
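Whatever flags we settle on, it is worth verifying that the color metadata actually landed in the output stream. A minimal sketch using ffprobe (the output filename is a placeholder):

# Print the pixel format and the colour tags recorded on the first video stream.
ffprobe -v error -select_streams v:0 \
        -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
        -of default=noprint_wrappers=1 output.mov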


Color Metadata


https://github.com/bbc/qtff-parameter-editor
