TSC Meeting Notes 2024-06-27
Attendance:
Community:
Discussion:
Sparsely attended: public holiday in New Zealand (Kimball, Peter); Cary and Christina are travelling.
RGB asks: is the order of attributes in the file significant? Consensus (LG concurring): we don't think it was ever intended to be significant.
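For context, a minimal sketch using the standard OpenEXR C++ API (the file and attribute names are placeholders, not from the meeting): reading code accesses header attributes by name or by iteration, so it never depends on where an attribute sits in the file.

    // Header attributes are looked up by name; the order in which they were
    // written to the file is not visible to this code.
    #include <ImfInputFile.h>
    #include <ImfHeader.h>
    #include <ImfStringAttribute.h>
    #include <iostream>

    int main ()
    {
        Imf::InputFile     file ("example.exr");   // placeholder path
        const Imf::Header &hdr = file.header ();

        // Walk all attributes; iteration order is an implementation detail.
        for (Imf::Header::ConstIterator i = hdr.begin (); i != hdr.end (); ++i)
            std::cout << i.name () << " : " << i.attribute ().typeName () << "\n";

        // Fetch one attribute by name, wherever it appears in the header.
        if (const Imf::StringAttribute *c =
                hdr.findTypedAttribute<Imf::StringAttribute> ("comments"))
            std::cout << "comments = " << c->value () << "\n";
        return 0;
    }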
Li Ji: discussing benchmarking of the library, trying to organize such an effort, and has academic collaborators. Among other things, this needs a good corpus of EXR files that covers the right use cases; the content must also be open source.
Nick points out that there is also the openexr-images repo, but all agree that it doesn't necessarily cover good benchmarking cases (e.g., no 4K images? the deep image examples are unrealistically simple?).
LG: Wishes there were a freely usable, production-level-complexity test deep image. Nick: Generating images from the ALAB or Intel Moore Lane scenes in DPEL would give production-complexity images for this and other modes (deep, albedo, Cryptomatte IDs, etc.).
General talk: we've never had a performance benchmark for OpenEXR, no organized performance regression testing, and no way to know whether we're improving performance over time (or making it worse) other than anecdotal/accidental observations in studios.
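For illustration only, a minimal sketch of the kind of read-throughput timing such a benchmark could start from, using the RGBA interface and std::chrono (the file path and repeat count are placeholder assumptions, not an agreed-on harness):

    // Time repeated full-image reads of one EXR file and report the mean.
    #include <ImfRgbaFile.h>
    #include <ImfArray.h>
    #include <chrono>
    #include <iostream>

    int main ()
    {
        const char *path = "benchmark_input.exr";   // placeholder test image
        const int   runs = 10;                      // placeholder repeat count

        auto start = std::chrono::steady_clock::now ();
        for (int r = 0; r < runs; ++r)
        {
            Imf::RgbaInputFile file (path);
            Imath::Box2i       dw = file.dataWindow ();
            int width  = dw.max.x - dw.min.x + 1;
            int height = dw.max.y - dw.min.y + 1;

            Imf::Array2D<Imf::Rgba> pixels (height, width);
            file.setFrameBuffer (&pixels[0][0] - dw.min.x - dw.min.y * width, 1, width);
            file.readPixels (dw.min.y, dw.max.y);
        }
        auto stop = std::chrono::steady_clock::now ();

        std::chrono::duration<double> elapsed = stop - start;
        std::cout << "mean read time: " << elapsed.count () / runs << " s\n";
        return 0;
    }

A real regression harness would also need write timings, tiled and deep files, the various compression types, and a fixed set of reference images so numbers can be compared across releases.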
Nick suggests Li Ji start a Google doc to collect thoughts on the "spec" of what we need in benchmark test images and how to generate them. LJ: one already exists: https://github.com/lji-ilm/openexr-notes/blob/main/docs/benchmarkplanning.md