OCIO NanoColor Working Group Meeting Notes
June 10, 2024
Host: Doug Walker
Secretary: Doug Walker
Attendees:
Apologies: Carol Payne
Scene-referred vs. Display-referred sRGB
Doug W.: I made this diagram to explain why the built-in OCIO configs have both a scene-referred and display-referred version of sRGB. Any thoughts?
Nick: I was just dealing with a situation where someone had used a lot of display-referred textures. Doug S.: Yes, sadly, that happens often.
Nick: A lot of the 3D models available for download on the web have this issue. It's 99% of the stuff on the web and yet it's terrible. Doug S.: That would be a good documentation project that could be a sidebar, to help people who are not color scientists to understand what the issue is and how to deal with it. I like the diagram.
Nick: I was working on a project this week and have a new appreciation for why MaterialX overrides the file metadata, since what's in the file is usually wrong.
Inverting a view transform on a texture
Doug W.: So how would we expect this scenario to be handled, if someone has a display-referred texture that they want to convert to a scene-referred rendering space by inverting a DisplayViewTransform? You can do this in Maya. For example, if you're working in advertising and need a logo to be a specific color in the rendered image, this lets you calculate the scene-referred values required to produce it. Thinking about the implications for the color space data model, allowing that capability would require more than just the matrix and exponent transform operators.
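A minimal OCIO sketch of that inversion, with hypothetical color space, display, and view names standing in for a show's actual setup:

```python
import PyOpenColorIO as ocio

# Load whatever config is active; "ACEScg", "sRGB", and the view name
# below are placeholders for the show's actual spaces and Show LUT.
config = ocio.GetCurrentConfig()

dvt = ocio.DisplayViewTransform(
    src="ACEScg",                  # scene-referred rendering space
    display="sRGB",                # display the texture was authored for
    view="ACES 1.0 - SDR Video",   # the view transform (Show LUT)
)

# Inverting the display/view transform maps display-referred values
# back to scene-referred ones (assuming the view transform is invertible).
proc = config.getProcessor(dvt, ocio.TRANSFORM_DIR_INVERSE)
cpu = proc.getDefaultCPUProcessor()

# The scene-referred RGB that will render to this display color:
scene_rgb = cpu.applyRGB([0.18, 0.42, 0.66])
```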
Doug S.: Because this scenario is so incredibly common, my first reaction is to have the display-referred color space in the nanoColor universe. I think it would be worth discussing having a dedicated node in MaterialX to handle this case. Make a one-stop-shop node that users could drop in to convert it to something that is at least sensible to work with. And then there's the user education part to go along with that.
Larry: I have two questions, first, we know we get a lot of things that are represented to us as "sRGB" but I'm unclear on how we discern whether they are scene-referred or display-referred. Something from a calibrated camera is obviously scene-referred, but what about a texture painted by an artist that they think looks right on their display? Doug S.: I think it's mostly a matter of how color-space savvy the tools that the artist is using are. An awful lot of tools out there have no color pipeline between the values in the texture and what shows up on screen. So a lot of it is probably display-referred due to limitations of the tools.
Anders: It may be worth pointing out that 99% of the content out there on the web is pretending to be srgb_texture when in fact it's srgb_display, because no one calibrated their monitors properly when they painted textures, and yet the world hasn't exploded. So do we want to introduce a huge amount of complexity in user-facing applications for people who don't even understand that they should calibrate their monitors? It's of very limited benefit except for people like us who do have stuff calibrated correctly and actually control this stuff.
Thomas: To go back to how the node would work for converting from display to scene, there are a large number of view transforms that could be used. So which one do we pick and why do we pick it?
Doug S.: Is there a situation where srgb_texture and srgb_display are the same?
Anders: What do we expect people to do if they're different? If they've painted something on an uncalibrated monitor, there's nothing they can do. The utility for having the display called out is in an advertising or film pipeline and you know what Show LUT is being used and you need to get an exact RGB value on the screen. But for the general case, there's nothing we can do. I feel like we should be optimizing for the general case rather than the case where people know what they're doing.
Doug W.: To answer Doug Smythe's question, the only case where srgb_texture and srgb_display would be the same is if the Show LUT is an identity transform, essentially there is no view transform, it's only gamma correction. Perhaps in animation that may sometimes be the case, but it's usually not the case in VFX. So if someone does have a specific view transform they want to invert on a texture, how do we envision that would work in USD or MaterialX?
Doug S.: I'm trying to keep my uneducated user hat on. Would it make sense to have three names: srgb_texture, srgb_display, and just srgb? The first two would be for people that know what they need and the third is for Larry's case where they don't know which it is. Does that make sense?
Larry: But that's kind of backwards. If they pulled it off the internet, they have no way of knowing what the view transform is, so the only thing they could do is treat it as scene-referred. Doug W.: But you know what your Show LUT is. Larry: If it came from a calibrated display in the studio. Doug W.: No, I'm saying that you don't need to care what the original view transform was. If the artist needs those display-referred colors to appear in the rendered output for the movie they're working on, it's only their current Show LUT that matters. That's what they would invert.
Nick: One way to explain the difference is: use srgb_texture if you don't know what the display environment was, and srgb_display if you know the Show LUT.
Thomas: We can't make a generic node in MaterialX because we can't know what the view transform is. Nick: Well you would impose that node yourself manually.
Doug S.: Perhaps we could put it in the documentation, "My colors look wrong, what should I do?" and have a couple of things for people to try.
Jonathan: Just taking a step back, one of the big goals of defining a universal color space set for nanoColor is to have a rigorously defined set of spaces that every application can rely upon. And that set doesn't have to cover every color space an artist would work with in a pipeline. It's completely fine for them to be working in a show-specific LUT. But when they store it in a MaterialX or USD file, the expectation is that they would convert those colors into one of the many universal color spaces for storage so that it'll be accessible everywhere in the world and have a consistent visual interpretation in all of those places. So with that in mind, to me it seems that the only value in storing an sRGB display space that is distinct from sRGB texture is if there were actually a known transform between those two spaces. And I'm getting the sense that there isn't a singular transform that goes back and forth between sRGB texture and sRGB display. They're more conceptual distinctions, but they don't necessarily have a different gamut or a different encoding. Let me know if that seems untrue to you all. But if my assumptions are correct, I would argue against adding a second space that effectively in the context of MaterialX and USD would have to be interpreted identically to sRGB texture.
Doug W.: In the context of a specific show, you would know what view transform to use to convert.
Larry: Another one of the goals of nanoColor is to be the thing you can use for those canonical spaces when you don't have a config to fall back on that defines all your show-specific stuff. Maybe that's not the goal of the group, but it's the way I've been thinking about it. That's why you still need full OCIO, right, for when you need the full flexibility.
Doug S.: The original discussion was trying to wrap the nanoColor box around the color spaces that would typically be needed in a rendering pipeline, generally speaking, where super high efficiency and quick evaluation of the transfer of these colors is critical. The one thorn in that side is these sRGB display-referred textures that frequently get brought into various pipelines, especially if we're including web applications as a potential target for the nanoColor library. I think there needs to be something in there.
Jonathan: Am I wrong in my understanding that if you find a texture on the internet, the best you can assume is that it uses the Rec.709 primaries and the sRGB encoding, partially linear, partially gamma? And that that's what applications do, even applications like Photoshop: when you load that texture, they assume it's in that color space if no other information is specified. And that we should likely do the same in nanoColor, and thus in MaterialX and USD.
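For reference, the partially linear, partially gamma encoding Jonathan describes is the IEC 61966-2-1 sRGB transfer function; a minimal decode sketch:

```python
def srgb_decode(v: float) -> float:
    """IEC 61966-2-1 sRGB decoding: a linear segment near black,
    and a 2.4-power segment (with offset) above it."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4
```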
Doug S.: I'm trying to figure out whether there is a sensible way to have a nice little box with this one little lobe sticking out into ugly land, since that case is extremely common to see. And if so, what is it?
Jonathan: Just to give a common example, if you take a material from the Megascans library and it doesn't state color spaces for anything, when we convert those into MaterialX, we tend to assume that the color textures are sRGB texture and that the roughness and metallic textures are just raw values. I think that's as accurate a job as you can possibly do with those textures. Does that seem right to everyone?
Nick: It sounds like you're saying that there's no such thing as srgb_display.
Thomas: Well, I guess it depends how picky you want to be. When we have such textures coming in, we tend to apply, as Doug was saying, the inverse display rendering transform on them, just to expand them into scene space. An alternative is to use, for example, the Adobe ACR camera raw curve, so you undo the contrast typically applied by photographers. Either way, we try to guesstimate what the scene appearance of the texture we picked up off the internet would be. Sometimes they are just right: they are effectively scene-referred, simply gamma-encoded, with no display rendering curve applied. But it's a perilous exercise and there is no one-size-fits-all recipe for it.
Jonathan: Just to focus on a particular example: if we're taking Megascans, Quixel assets from the internet that have no color information attached, those textures are meant to be formatted in such a way that if you drag and drop them into Unreal or into Substance Painter or Mari without any color management at all, you get the expected result. All of those applications assume sRGB texture as the color space if no other information is supplied. So to my mind it would seem appropriate for us to assume that as well in this common case of textures off the web.
Doug S.: It sounds like where we're winding up is there is no srgb_display within nanoColor. If you want to do anything involving that, you need full OCIO.
Larry: I think Nick's comment that there isn't really an srgb_display color space is tautologically true, in the sense that we all know it only makes sense if you know what the DisplayViewTransform is, and that can vary from facility to facility and show to show. By definition, there is no canonical sRGB display. If we're trying to restrict this to the canonical color spaces that mean the same thing to everybody, that necessitates excluding it from this layer of the onion. It has to live somewhere else, in a different layer of the onion. We should only include the things everyone can agree mean the same thing everywhere, and srgb_display can't, because by definition it doesn't.
Doug W.: Well, actually it's the other way around, really. If you go read the sRGB spec, which is an IEC standard, it's a display-referred thing. What it's really standardizing is: if you put these values on a specific monitor, this is the color appearance that it generates. If you download an sRGB thing off the web, the agreement, if it's following the standard, is that if you put it on the monitor, it's going to look this way. That's srgb_display, and it's display-referred; in other words, the color appearance is referenced to a display. In the context of rendering, you're applying the sRGB math to this file, but you're generating a scene-referred thing that's going to go through the rendering process and then have a view transform applied to it to generate the thing that goes to the display that you see. It's really the srgb_texture thing that's always going to look different, because it's always going through a different view transform.
Jonathan: Does that imply that we should simply rename srgb_texture to srgb_display for nanoColor and thus for MaterialX, USD, and so on?
Doug W.: No, srgb_texture is as you said, Jonathan; the example you gave was correct. If you receive something from Megascans and you're going to use it in Unreal Engine or something, you're applying the sRGB transfer function and the specific matrix to get into your rendering space. That is sRGB texture, and I think it's in that core set of texture spaces for a good reason. The reality is that it's specifying a color stimulus in the scene, and how that actually looks to someone on a display is going to vary from show to show based on what view transform gets applied to it. Jonathan: I see what you mean; that helps to clarify.
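A minimal sketch of that srgb_texture interpretation, assuming ACEScg as the rendering space; the matrix is the commonly published Bradford-adapted linear sRGB to ACEScg matrix, rounded to four decimals here for illustration:

```python
import numpy as np

# Commonly published linear-sRGB (Rec.709) -> ACEScg matrix (Bradford CAT);
# treat the exact values as illustrative rather than normative.
SRGB_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
])

def srgb_texture_to_acescg(rgb):
    """Decode the sRGB transfer function, then matrix into the rendering space."""
    rgb = np.asarray(rgb, dtype=np.float64)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return SRGB_TO_ACESCG @ lin
```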
Cuneyt: So the industry has been using sRGB in a non-standard way, generally.
Doug W.: The reason I brought this up was to figure out how it would work for people that want to invert a specific view transform to convert srgb_display back to a scene-referred rendering space. It sounds like what we're saying is that nanoColor would not be able to support that?
Doug S.: Yes, I think we have to say that. Maybe there is a miniColor or something that sits between nanoColor and full OCIO, but there are a lot of issues to figure out, and would we just be reinventing OCIO itself? Whereas if we say that anything display-referred is outside the scope of nanoColor, that's a very clean, clear thing to say. If you really want to work with display-referred colors, you should transform them into one of the canonical nanoColor color spaces before you work with them. Or if you don't really care about it, just call it srgb_texture and you won't be horribly far off.
Doug W.: But we should make sure that applications that do want to support this would have a path or option to use full OCIO rather than nanoColor. Doug S.: Yes, it's part of the texture I/O system, not part of the rendering graph. Before it's a texture file that the rendering graph would access, you would pre-process the texture, so the math on the rendering side is trivial. Larry: Yes, that's common.
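A minimal sketch of that texture I/O pre-processing, assuming the active OCIO config defines srgb_texture and ACEScg (as the built-in configs do via aliases) and that the pixel buffer is packed float32 RGB:

```python
import numpy as np
import PyOpenColorIO as ocio

# One-time conversion at texture I/O: bake srgb_texture pixels into the
# rendering space so the render graph itself does no color math.
config = ocio.GetCurrentConfig()
cpu = config.getProcessor("srgb_texture", "ACEScg").getDefaultCPUProcessor()

pixels = np.zeros((256, 256, 3), dtype=np.float32)  # stand-in for a loaded texture
cpu.applyRGB(pixels)  # converts the packed RGB buffer in place
```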
Anders: Don't most people do that? Thomas: For people in Unreal or Unity, they are painting in the gamma space and the GPU is decoding that for you.
Doug W.: So if I wanted to represent the texture pre-processing in the context of a USD file, is it possible to do that?
Jonathan: We expect the application to perform that conversion upon saving to USD or MaterialX, making it a universal file.
Doug W.: So the application would have to duplicate your textures and convert them to a different color space?
Jonathan: In my 15 years at Lucasfilm, I haven't encountered a case where we wanted to store a texture in anything other than the MaterialX color spaces.
Larry: Jonathan, my experience is the same, but I just want to caution us that everyone on this call is from an enormous studio with a well-established pipeline and color scientists on hand, and so that generalization of "we never see anything outside the set" might not be as universally applicable as our direct experience. Doug W.: Yes.
Thomas: I think people at the smaller studios will be using Mari and Substance Painter with the default OCIO configs, so they're pretty much already in the same space.
Anders: Everything we deal with across quite a wide range of industries and stuff is sRGB. A handful of people, if they want to get really clever, will have stuff in linear ACEScg. But otherwise it's sRGB and that's that.
Custom color spaces
Doug W.: I refined the slides based on the discussion that Nick and I had at last week's meeting. This slide shows the transforms that would be available in nanoColor, since they are all analytically defined, invertible functions and they are needed for the built-in configs. But for the custom color spaces, it would just be the transforms in this inner box. This is consistent with Nick's prototype. It simplifies the color space data model in terms of what information would need to be serialized to define a custom color space. My only concern is whether it becomes a blocker for applications that are using full OCIO rather than nanoColor, for example if they need to send information across the Hydra interface to a renderer.
Nick: I don't think so, because if there's some special thing that needs to cross to the renderer, you would pass that special thing with the name of the OCIO config, as a bit of extra data that Hydra picks out and says, "Oh, hey, here's this string that you want to know about." I think that's pretty much the case today already, where people need to get an OCIO config into their renderer outside of any transformations provided by Hydra.
Anders: We actually can't today. You have to go via the environment variable. Nick: You're talking about the display transform. Anders: I'm talking about anything apart from what the rendering color space is, right? That's the only thing that you can know in Hydra. Anyway, that's tangential.
Nick: The point I'm trying to make is that you would put it in data as a reference to a config and pass it across to Hydra. There wouldn't be an expectation in that case that there's a mathematical path through for nanoColor. It would only be metadata that's sent.
Doug W.: So it would be a color space name, and as long as the renderer has access to the same config as the application that's creating the USD scene, it should work?
Nick: Yes, and as Anders points out, you might have to do some manual work at the moment, if there is a particular bit of data that's not explicitly called out in the specification at this point. But that's the way I would expect it to work.
Doug W.: Good. So if people are in agreement with all of that, then the information that would be serialized for a custom color space is the name string and the transform operators, which would be limited to OCIO's Matrix, Exponent, and ExponentWithLinear. The other piece of information is the reference color space that the transform is relative to. The strawman proposal is that this would be limited to ACES2065-1, the same as the current built-in configs in OCIO. And we may want to allow some optional information to help the scenario where you're starting with an OCIO color space, want to transport it as a custom color space, and then match it up later with an existing OCIO config.
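As a concrete illustration of the strawman, a hypothetical sketch of the serialized form; the field names and structure here are invented for illustration, not a proposed schema:

```python
# Hypothetical serialization of a custom color space per the strawman above.
# The to-reference transform is limited to OCIO's Matrix, Exponent, and
# ExponentWithLinear operators; all field names are illustrative only.
custom_space = {
    "name": "my_custom_space",
    "reference_space": "ACES2065-1",
    "to_reference": [
        # sRGB-style piecewise curve: 4-component gamma and offset (RGBA).
        {"op": "ExponentWithLinear",
         "gamma": [2.4, 2.4, 2.4, 1.0],
         "offset": [0.055, 0.055, 0.055, 0.0]},
        # 4x4 row-major matrix (identity here as a placeholder).
        {"op": "Matrix",
         "matrix": [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]},
    ],
    # Optional hint for matching against an existing OCIO config later.
    "ocio_name_hint": "my_custom_space",
}
```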
Thomas: On the reference space, is there a good reason to have ACES rather than CIE XYZ-D65? The reason I'm asking is that there will be as many reference spaces as there are chromatic adaptation transforms; right now, mainly two. So we really have two versions of ACES2065-1, if you think about sRGB, Display P3, and all the other color spaces that are D65-based. To reduce that, I would say it's better to have a D65-based space, XYZ ideally.
Doug W.: I'm certainly open to allowing that as an option, in fact, that was included in the first version of the slide.
Anders: I prefer XYZ-D65.
Nick: Part of the original nanoColor thesis was to use the SMPTE RP 177-1993 basic equations for color television, which use CIE XYZ-D65 as the reference space.
Thomas: But that's not necessarily D65, right? Those equations will go to whatever illuminant you have. When you go from ACES to something else, if the white point is different, you need to pick a chromatic adaptation transform. I can think of about 10 different matrices to use for that conversion, which makes basically 10 different spaces, effectively.
Doug W.: As Thomas is saying, the SMPTE RP document takes the RGB and white point chromaticities and gives you a matrix from that. What the SMPTE document doesn't specify is what you would do if your source and destination have different white points. If you just followed the SMPTE spec, you would have neutrals in one space mapping to non-neutrals in the other. Thomas: Exactly. White will change color temperature; basically, it wouldn't stay white under the new illuminant, which we don't want. Or we do, but that's another conversation.
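For the notes, a minimal NumPy sketch of both points, using the chromaticities published in the Rec.709 and ACES specs: the RP 177-style derivation of an RGB-to-XYZ matrix, plus the chromatic adaptation step that RP 177 leaves unspecified (Bradford here, though any of the CATs Thomas mentions could be substituted, each yielding a slightly different matrix):

```python
import numpy as np

# Bradford chromatic adaptation matrix; other CAT matrices (CAT02, etc.)
# would give slightly different results, which is exactly Thomas's point.
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def xy_to_XYZ(x, y):
    """Chromaticity (x, y) to a tristimulus vector with Y normalized to 1."""
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def rp177_rgb_to_xyz(primaries, white):
    """Derive an RGB->XYZ matrix from primary/white chromaticities,
    following the SMPTE RP 177 method."""
    P = np.column_stack([xy_to_XYZ(x, y) for x, y in primaries])
    S = np.linalg.solve(P, xy_to_XYZ(*white))  # scale so RGB=(1,1,1) hits white
    return P * S

def bradford_cat(src_white, dst_white):
    """Von Kries-style adaptation in the Bradford cone space, mapping XYZ
    under src_white to XYZ under dst_white."""
    s = BRADFORD @ xy_to_XYZ(*src_white)
    d = BRADFORD @ xy_to_XYZ(*dst_white)
    return np.linalg.inv(BRADFORD) @ np.diag(d / s) @ BRADFORD

# Rec.709 primaries / D65 white, and ACES AP0 primaries / ~D60 white:
REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
AP0 = [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)]
D65, ACES_WHITE = (0.3127, 0.3290), (0.32168, 0.33767)

M709 = rp177_rgb_to_xyz(REC709, D65)
MAP0 = rp177_rgb_to_xyz(AP0, ACES_WHITE)

# Full Rec.709 -> ACES2065-1 matrix, including the CAT step RP 177 leaves open:
M = np.linalg.inv(MAP0) @ bradford_cat(D65, ACES_WHITE) @ M709
```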
Doug W.: It sounds like there's definitely some interest in expanding the list of reference space options. If people are happy with this, I can start drafting up documents. The goal was to have a bunch of deliverables at SIGGRAPH. We need to start writing up these various documents so that we have a chance to review and iterate on them before we show them to a wider audience.
Nick: This was a good discussion. Doug S.: Yes, it's always good when you end up with more clarity at the end of a meeting than when you started.