TSC Meeting Notes 2024-02-08
Attendance:
- Cary Phillips
- Christina Tempelaar-Lietz
- John Mertic
- Joseph Goldstone
- Kimball Thurston
- Larry Gritz
- Nick Porcino
- Peter Hillman
- Rod Bogart
Guests:
- Li Ji, ILM
- Christian Wieberg-Nielsen, Colorist, Storyline
Discussion:
    - Deep CVE bug fix - OIIO test suite tests are failing
- Larry will verify further
    - Peter & Nick had discussed a possible change to the core so that every exception can be caught, capturing the true source of an error.
- Christian
- interested in how to get metadata from the camera into OpenEXR
- GitHub security vulnerability reporting
    - Cary: anyone have any insight? The people who filed the CVE had a blocked address, so we did not receive the message; email is not as reliable for CVE reports.
- You have to be an administrator to accept a draft and turn it into a CVE, Cary will look into it further.
- Need multiple administrators
    - Fuzz reports go to openexr.org, but CVE reports go to openexr.com?
- Cary made it all consistent a while back except for the fuzz reports. Should test if the openexr.org address is working.
- Deep CVE bug
- Peter is getting a repro, Kimball was able to repro
- Kimball needs to update the checkfile test to catch the break reported by OIIO
- As a repro, this fails:
        - `iinfo -v --hash --stats testsuite/iinfo/src/tinydeep.exr`
- Kimball: Pointer unpack is causing this issue
    - OpenSUSE had already cherry-picked the fix into their next release, but caught the break in their tests before releasing
    - Deep file limit: 2.5 GB
    - Peter: amplification attacks could be a risk if a small file can cause a lot of memory to be allocated when loading
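Peter's concern can be sketched as a simple pre-allocation guard: before allocating for a deep chunk, compare the claimed unpacked size against a hard limit and against what the file could plausibly contain. All names and numbers here (`allocateDeepBuffer`, `kMaxDeepUnpackedSize`, the expansion factor) are hypothetical illustrations, not the OpenEXR API:

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical cap mirroring the ~2.5 GB deep-file limit mentioned above.
constexpr std::uint64_t kMaxDeepUnpackedSize = 2500ull * 1000 * 1000;

// Sketch: validate a chunk's claimed unpacked size before allocating.
// A tiny file claiming a huge unpacked size is the amplification pattern.
std::vector<char> allocateDeepBuffer (std::uint64_t claimedUnpackedSize,
                                      std::uint64_t bytesRemainingInFile)
{
    if (claimedUnpackedSize > kMaxDeepUnpackedSize)
        throw std::runtime_error ("deep chunk exceeds configured size limit");

    // A compressed chunk cannot legitimately expand without bound; reject
    // claims wildly larger than the data actually present (the factor of
    // 256 is a made-up placeholder, not a real codec bound).
    if (claimedUnpackedSize > bytesRemainingInFile * 256)
        throw std::runtime_error ("implausible expansion ratio");

    return std::vector<char> (static_cast<std::size_t> (claimedUnpackedSize));
}
```

The point is only that the check happens before the allocation, so a malicious file fails cheaply instead of exhausting memory.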
- PR 1616 - automate compression method detection - Phillipe lePrince
- Kimball: shouldn't have automated detection
- Peter: compile time trick, wouldn't have implemented it this way because it's a little difficult to reason about
    - Not built every time you build the library, only when a new compression type is added; done at cmake time, but only if you ask it to.
- Doesn't need to work for everybody
    - Kimball: we went away from having float tables auto-built. We do that for configuration, but we should be against such a mechanism for something that doesn't change very often.
    - Peter: could do it with the CI: generate the files and inject them back into the system
    - But it added 1000 lines of code to save writing 5 lines of code when adding new compression types.
- cmake changes are large
- Peter: could ask to take out the automation, leaving files as is and modifying them by hand
- Would need to add a comment to cpp file as to what needs updating when adding compression type
    - Kimball: add a static assert in compression.cpp or compression.c to check the length of the enum against the compression types table.
    - Peter: the old C interface uses #defines instead of an enum, so it's difficult to check
- Peter: should be able to catch that in the test suite
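Kimball's static-assert idea can be sketched like this; the enum values, table, and names below are hypothetical stand-ins, not OpenEXR's actual compression.cpp:

```cpp
#include <array>
#include <string_view>

// Hypothetical compression enum; the count sentinel must stay last.
enum Compression
{
    NO_COMPRESSION,
    ZIP_COMPRESSION,
    PIZ_COMPRESSION,
    NUM_COMPRESSION_METHODS
};

// Per-method table that must be updated whenever the enum grows.
constexpr std::array<std::string_view, 3> compressionNames = {
    "none", "zip", "piz"
};

// The check Kimball described: if someone adds an enum value without
// updating the table, the build fails here instead of at runtime.
static_assert (compressionNames.size () == NUM_COMPRESSION_METHODS,
               "compressionNames table is out of sync with the Compression enum");
```

As Peter noted, the old C interface uses #defines rather than an enum, so this compile-time check doesn't apply there; that side would have to be caught by the test suite.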
    - Cary: what about Zstd compression in the PR?
    - Peter: the scanline implementation breaks with deep; single scanline would solve it. The compression step is just given raw data and doesn't know which datatype it is dealing with (on the C++ side; it's different on the C side). Maybe special-case handling of the compression type just in the core, then forward it.
- Kimball: we already did that with ... (missed this) , handled in core then forwarded.
- Blosc library performs differently when it knows if it has 4-byte vs 3-byte data.
    - Zstandard and LZMA have discrete-chunk vs. streaming modes; you can keep a little bit of state around that helps streaming. We should make sure we are taking advantage of these capabilities.