TSC Meeting Notes 2024-09-19

Attendance:

Cary Phillips
Christina Tempelaar-Lietz
John Mertic
Joseph Goldstone
Kimball Thurston
Larry Gritz
Nick Porcino
Peter Hillman
Rod Bogart

Community:

Li Ji, ILM
Mathieu Mazerolle, The Foundry
George Tattersall
Mitch Jacobs
Pierre-Anthony Lemieux
Tillman Schmidt
Lutz Latta

Discussion:

  • HTJ2K discussion: Michael Smith and Pierre-Anthony Lemieux joined to discuss their proposal to integrate an HTJ2K codec into OpenEXR; white paper here: https://academysoftwarefdn.slack.com/files/U07N2SPQZL2/F07N2TS2ECS/openexr_ht-v10.pdf

    • HTJ2K is a “high throughput” extension of JPEG 2000 (JPEG 2000 Part 15)

    • Roughly an order-of-magnitude throughput improvement over classic JPEG 2000

    • It’s a codec 

    • International standard with wide adoption: HTJ2K has been adopted by the defense industry, among others

    • Looked into adding it as a new OpenEXR compression method

    • White paper goes over it

    • Opens doors to automatic support for previs, progressive decoding

    • Advantages: used throughout the industry

    • Allows remote browsing of EXR images

    • RGB: The codec decides how the data is laid out: channel order (progression order). Any restrictions there?

    • LG: would need extra API to support that

    • PAL: we’d need the API

    • RGB: but the API would be specific to the codec

    • LG: but you’d ask if it’s available (see the hypothetical capability-query sketch after this list)

    • MS: Is there a mipmap?

    • RGB: Yes, mipmaps, and tiled images (see the tiled/mipmap sketch after this list)

    • HTJ2K white paper:

    • Exrperf utility for measurements

    • Lossless compression. 

    • Test files use float16 (half) images

    • Ran 5 rounds, took the average (a generic timing-loop sketch follows this list)

    • MM (Mathieu Mazerolle): Love the results. We’ve built a GPU-accelerated DWA decoder.

    • MS: all tests are lossless, but lossy is much faster

    • PAL: decoder is faster than realtime, so you overwhelm the memory bus before the CPU.

    • GT: We have our custom GPU stuff

    • KT: the library allows you to push the GPU as far back in the process as possible.

    • RGB: We pull the data to the GPU and decompress there, but we didn’t have a software way of doing the same thing.

    • MS: HTJ2K is much faster than realtime, so you might not even have anything for the GPU to do

    • MS: I think lossy is great, but some people don’t like it. Lossy is just a knob.

    • PAL: Why would someone tile an image?

    • KT: Scanline is about making a scanline compositor, pulling a scanline through the pipe. Tiling is about rendering and texturing.

    • LG: Want to be able to read partial images spatially. In modern rendering, a single frame might be TBs of data across 1000s of EXRs, but only a tiny amount is visible, e.g. a 4 TB texture set with a footprint of 4 GB

    • PAL: J2K has a tiling mechanism.

    • MM: Our compositor is scanline based and typically ingests scanline-oriented EXRs, but our virtual production workflow is to bake tile-based EXRs out from Nuke

    • PAL: So tiling is for random access. HTJ2K has random-access capability; the image can be structured as tiles

    • KT: it’s a hierarchical wavelet codec

    • LI: What are we getting? EXR just becomes a container for J2K?

    • MS: You still get to use EXR: the metadata, deep/not deep

    • PAL: We compiled a DLL of OpenEXR and just dropped it into Resolve, and it reads J2K images. Awesome! EXR has some nice things like data/display windows, but I don’t know to what degree J2K supports arbitrary channels.

    • The "J2K codec" in EXR should handle arbitrary channels, as well as UINT and FLOAT data types, but it doesn't have to use J2K for that. DWA uses ZIP compression for non-RGB channels; J2K could do the same if there isn't a clean way to do that via the J2K library (see the header sketch after this list)

    • The J2K standard supports up to 16,384 channels; each channel can have a different bit depth and a different resolution
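
To make the progression-order exchange above concrete, here is a purely hypothetical sketch of a codec-capability query. Every name in it (CodecCapability, HypotheticalExrReader, supportsCapability, setProgressionOrder) is invented for illustration and does not exist in OpenEXR; it only shows the shape an “ask if it’s available, then use the codec-specific control” API could take.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical capability enum; illustrative only.
enum class CodecCapability
{
    ProgressiveDecode,
    CustomProgressionOrder,
    RegionOfInterestDecode
};

// Hypothetical reader wrapper; not a real OpenEXR class.
class HypotheticalExrReader
{
public:
    explicit HypotheticalExrReader (bool fileUsesHTJ2K)
        : _isHTJ2K (fileUsesHTJ2K)
    {}

    // Generic question the core API could answer for any compression type.
    bool supportsCapability (CodecCapability cap) const
    {
        // In this toy model, only an HTJ2K-compressed file advertises
        // a configurable progression order.
        return _isHTJ2K && cap == CodecCapability::CustomProgressionOrder;
    }

    // Codec-specific control, reached only after a successful capability
    // check, so the core API stays codec-agnostic.
    void setProgressionOrder (const std::string& order)
    {
        if (!supportsCapability (CodecCapability::CustomProgressionOrder))
            throw std::runtime_error ("progression order not supported");
        _order = order;
    }

private:
    bool        _isHTJ2K;
    std::string _order;
};

// Example use: only touch the codec-specific knob when it is advertised.
void
configureReader (HypotheticalExrReader& reader)
{
    if (reader.supportsCapability (CodecCapability::CustomProgressionOrder))
        reader.setProgressionOrder ("RPCL"); // RPCL is one of the JPEG 2000 progression orders
}
```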
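
The mipmap/tiling answers above refer to machinery OpenEXR already has. A minimal sketch using the standard OpenEXR C++ API (error handling omitted; file name, channel name, and tile size are illustrative): write a one-channel file as 64x64 tiles with a full mipmap chain, then read back a single tile of a single level, the kind of partial spatial access discussed.

```cpp
#include <ImfChannelList.h>
#include <ImfFrameBuffer.h>
#include <ImfHeader.h>
#include <ImfTiledInputFile.h>
#include <ImfTiledOutputFile.h>
#include <ImathBox.h>
#include <half.h>
#include <vector>

using namespace Imf;
using namespace Imath;

// Write a single-channel EXR as 64x64 tiles with a full mipmap chain.
void
writeTiledMipmap (const char* fileName, int width, int height)
{
    Header header (width, height);
    header.channels ().insert ("R", Channel (HALF));
    header.setTileDescription (
        TileDescription (64, 64, MIPMAP_LEVELS, ROUND_DOWN));

    TiledOutputFile out (fileName, header);

    for (int level = 0; level < out.numLevels (); ++level)
    {
        int lw = out.levelWidth (level);
        int lh = out.levelHeight (level);

        // Constant gray placeholder pixels for each mip level.
        std::vector<half> pixels (size_t (lw) * lh, half (0.5f));

        FrameBuffer fb;
        fb.insert ("R",
                   Slice (HALF,
                          (char*) pixels.data (),
                          sizeof (half),
                          sizeof (half) * lw));
        out.setFrameBuffer (fb);
        out.writeTiles (0, out.numXTiles (level) - 1,
                        0, out.numYTiles (level) - 1,
                        level);
    }
}

// Read back just one tile of one mip level: spatially partial access,
// without touching the rest of the (potentially huge) image.
void
readOneTile (const char* fileName, int tileX, int tileY, int level)
{
    TiledInputFile in (fileName);

    Box2i dw = in.dataWindowForTile (tileX, tileY, level);
    int   w  = dw.max.x - dw.min.x + 1;
    int   h  = dw.max.y - dw.min.y + 1;

    std::vector<half> pixels (size_t (w) * h);

    // Offset the base pointer so pixel (x, y) in the tile's data window
    // maps to pixels[(y - dw.min.y) * w + (x - dw.min.x)].
    char* base = (char*) pixels.data ()
                 - dw.min.x * sizeof (half)
                 - dw.min.y * size_t (w) * sizeof (half);

    FrameBuffer fb;
    fb.insert ("R", Slice (HALF, base, sizeof (half), sizeof (half) * w));
    in.setFrameBuffer (fb);
    in.readTile (tileX, tileY, level);
}
```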
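
On the measurement methodology (“ran 5 rounds, took average”): the sketch below is not the exrperf utility from the white paper, just a generic timing loop over the standard Imf::RgbaInputFile interface following the same idea of decoding a file several times and averaging wall-clock time; the file path is a placeholder.

```cpp
#include <ImfArray.h>
#include <ImfRgbaFile.h>
#include <ImathBox.h>
#include <chrono>
#include <iostream>

// Decode an EXR 'rounds' times and report the average wall-clock time.
double
averageDecodeSeconds (const char* fileName, int rounds = 5)
{
    using clock  = std::chrono::steady_clock;
    double total = 0.0;

    for (int i = 0; i < rounds; ++i)
    {
        auto t0 = clock::now ();

        Imf::RgbaInputFile in (fileName);
        Imath::Box2i       dw = in.dataWindow ();
        int                w  = dw.max.x - dw.min.x + 1;
        int                h  = dw.max.y - dw.min.y + 1;

        Imf::Array2D<Imf::Rgba> pixels (h, w);
        in.setFrameBuffer (&pixels[0][0] - dw.min.x - dw.min.y * w, 1, w);
        in.readPixels (dw.min.y, dw.max.y);

        auto t1 = clock::now ();
        total += std::chrono::duration<double> (t1 - t0).count ();
    }

    return total / rounds;
}

int
main ()
{
    // Placeholder path; substitute a real float16 test image.
    std::cout << averageDecodeSeconds ("test_half.exr") << " s average\n";
    return 0;
}
```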
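
On data/display windows and arbitrary channels: a minimal sketch of the existing header features a new codec would have to coexist with: distinct data and display windows, extra channels of mixed pixel types (HALF, FLOAT, UINT), and the existing lossy DWAA codec selected. Window coordinates and channel names are illustrative only.

```cpp
#include <ImfChannelList.h>
#include <ImfCompression.h>
#include <ImfHeader.h>
#include <ImathBox.h>

using namespace Imf;
using namespace Imath;

// Build a header with a cropped data window inside a UHD display window,
// RGB plus arbitrary non-color channels of different pixel types, and the
// existing lossy DWAA codec selected ("lossy is just a knob").
Header
makeExampleHeader ()
{
    Box2i display (V2i (0, 0), V2i (3839, 2159));      // full UHD frame
    Box2i data    (V2i (512, 256), V2i (3071, 1791));  // rendered crop

    Header header (display, data);

    header.channels ().insert ("R", Channel (HALF));
    header.channels ().insert ("G", Channel (HALF));
    header.channels ().insert ("B", Channel (HALF));
    header.channels ().insert ("Z", Channel (FLOAT));   // 32-bit depth
    header.channels ().insert ("id", Channel (UINT));   // integer object IDs

    header.compression () = DWAA_COMPRESSION;

    return header;
}
```

Per the DWA precedent noted above, channels like Z and id would not need to go through the J2K path at all; they could fall back to ZIP just as DWA does today.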

  • V3.4 release:

    • KT: fuzz tests caught some bugs

    • Cary will cut the branch on Tuesday and stage the release

    • Cary is preparing release notes