
Attendance:

  • Cary Phillips
  • Christina Tempelaar-Lietz
  • John Mertic
  • Joseph Goldstone
  • Kimball Thurston
  • Larry Gritz
  • Nick Porcino
  • Peter Hillman
  • Rod Bogart

Discussion:

  • Compression issue PR 1604
    • Peter: curious how Blosc deals with half channels. With scanlines, all of one channel is stored first, then the next; not interleaved.
    • Rod: what is the chunk size?
    • Peter: the chunk is the actual payload of the data itself, just the pixels; the header stuff is outside of the compression. If the structure of the data changes, the only way to do that is to write a new compression type and deprecate the old one, which would be unfortunate. So we want to be certain of the structure for the long term.
    • Rod: orthogonal question of compression supporting interleaved data that is GPU ready
    • Peter: could add a compression that gives you back interleaved faster. Haven't looked at whether DWA could be modified so when it's doing its decompression it knows to interleave it. Maybe come up at the BoF because the Foundry is interested in GPU decompression. Might need a group of people specifically looking at how to do GPU with OpenEXR.
    • Cary: might be worth reaching out to people at Foundry to see if they have any strong thoughts on this extension.
    • Peter: Vlad mentioned we could put zstd in without Blosc; not sure of the details. We could take the zstd source code out of Blosc and use it directly. Blosc does some restructuring of the data, so not sure of the full implications. Files might be slightly larger or slightly slower to read, but it might make the library easier to build.
    • Cary: extend the format but don't add complexity to the build process. How do we stage this into a release? It's clear it will not be in time for next year's VFX reference platform. We'll make a 3.3 release with Kimball's most recent changes and Cary's python bindings, then revisit PR 1604 after that. Then we could merge it and let it sit in main until we make a major release, or check it into a branch.
    • Peter: could tell people to pull it from Vlad's fork of the repo. Pulling it into a branch seems like a good way of staging it.
    • Peter: to make it more experimental, we could use one of the flags in the header to indicate it is an experimental file.
    • Cary: Foundry contact?
    • Peter: Phil Parsonage possibly
    • Cary: perhaps we could reach out to them to discuss how to make it as straightforward as possible. 
    • Peter: Kimball might be able to connect with them at SIGGRAPH
    • Peter: SideFX supports writing deep, so we might want to reach out to them. Nuke is the only thing that reads it, but SideFX writes it.
    • Peter: tested against zip but didn't test against the other compression types.
    • Rod: piz has already done a fairly good job but zip ends up being smaller.
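The "restructuring" Blosc does before compressing is a byte shuffle: it groups the first byte of every element together, then the second byte, and so on, which tends to make half-float channel data (whose high bytes vary slowly) much friendlier to a general-purpose codec. A minimal sketch of the idea, using zlib from the Python standard library as a stand-in for the zstd codec Blosc would normally apply after the shuffle:

```python
import struct
import zlib

def byte_shuffle(data: bytes, itemsize: int) -> bytes:
    """Blosc-style shuffle: byte 0 of every element, then byte 1, etc."""
    return bytes(data[j] for i in range(itemsize)
                 for j in range(i, len(data), itemsize))

def byte_unshuffle(data: bytes, itemsize: int) -> bytes:
    """Inverse of byte_shuffle: restore the original element-interleaved order."""
    n = len(data) // itemsize
    out = bytearray(len(data))
    k = 0
    for i in range(itemsize):
        for j in range(n):
            out[j * itemsize + i] = data[k]
            k += 1
    return bytes(out)

# A smooth ramp of 16-bit values stands in for a half-float scanline channel.
scanline = struct.pack("<256H", *range(256))

plain = zlib.compress(scanline, 9)
shuffled = zlib.compress(byte_shuffle(scanline, 2), 9)
print(len(plain), len(shuffled))  # the shuffled buffer typically compresses smaller
```

On data like this, the shuffle collects the near-constant high bytes into one long run, which is roughly why shuffle-plus-codec can beat compressing the raw channel layout directly; it is also why pulling zstd out of Blosc without the shuffle step could cost some compression ratio.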
  • Python binding for deep
    • Cary: could use Peter's eyes on the Python bindings and how they deal with deep. Haven't experimented with the limits of what it can handle in terms of file size.
    • Cary: it reads a deep file and creates a pixel array that holds all the samples. Need to understand the id manifest more to understand what an interface for it in Python would look like.
    • Peter: as long as it can read pixels and store integers as integers. Would be interesting to be able to read the id manifest so it can decode your ids into strings.
    • Cary: you have to read the entire image at once. I'm making it up as I go in the abstract, since I'm not developing it based on a particular use case; it's based on what was there, making it more useful. The module has name recognition going for it, and the flavor it should exhibit is simplicity: just get the data, don't do anything complicated. Nothing about reading by scanline, etc. Good for doing something simple.
    • Peter: a good use of Python is writing your own version of what stdattr does. Would be good to implement a rawcopy, just wrapping the C++.
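As a sketch of the id-manifest decoding Peter describes, assuming a hypothetical representation where a deep pixel is a dict of per-channel sample lists and the manifest is a plain id-to-name mapping (the real binding's types and names may differ):

```python
# Hypothetical shapes only: the actual OpenEXR Python binding API may differ.
deep_pixel = {                 # one deep pixel: ragged per-channel sample lists
    "Z":  [1.5, 2.25, 7.0],    # one depth value per sample
    "id": [1001, 1002, 1001],  # one object id per sample
}

# An id manifest maps integer ids back to human-readable strings.
manifest = {1001: "teapot", 1002: "floor"}

def decode_ids(pixel, manifest):
    """Resolve each sample's id through the manifest, keeping unknown ids raw."""
    return [manifest.get(i, i) for i in pixel["id"]]

print(decode_ids(deep_pixel, manifest))  # → ['teapot', 'floor', 'teapot']
```

The point is just that integers stay integers until the manifest is consulted, so a binding that exposes the manifest as a mapping would let scripts turn per-sample ids into object names without any extra machinery.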