In the previous episode, we saw that there are many people with different views on the production process. From a purely technical standpoint, let's see how we can transport the color information from one person to another.
What problem are we trying to solve?
We want the people along the chain to see the same thing, which is technically impossible both because of the displays and because of the human visual system. Let's be realistic and stick to the goal of having a consistent perception of the displayed image, which is almost the same thing. This has some cultural aspects, as we'll discuss later, but let's forget about them for now.
Step 1: same machine, different displays
There are different components to get there, so let's start with the basics: you have 2 machines with the same operating system, running the same software on the same hardware, or one machine with 2 monitors. That means you need to make sure the monitors transform the same signal into the same colors: for that you need a calibration solution, with a decent probe (we'll talk about that later) that measures the colors emitted by the displays when sent a stimulus signal. Once you've made the two profiles that describe all the colors the display devices are capable of, you need to match them, and you have options. You already need to make some philosophical choices there: you may want to keep the full quality of the best display and let the lesser one do what it can to match, or reduce the performance of the best display so it matches the lesser one exactly. Are you a freakin' communist? (If you don't know me personally, please don't take that literally.)
Once you’re done, you load the profiles in the system and you feel much better, because in your room with no light and no window, you now have the same color (sensation) when moving your cat picture from the left monitor to the right monitor. Happiness.
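To make the matching part a bit more concrete, here is a minimal sketch, in plain numpy, of re-expressing a color from a wide-gamut display on a narrower one via XYZ. The matrices are the commonly published sRGB and Display P3 ones (rounded), standing in only for the profiles your probe would actually produce:

```python
import numpy as np

# Approximate linear RGB -> XYZ matrices (D65 white), rounded from the
# commonly published values; in practice you'd use your measured profiles.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
P3_TO_XYZ = np.array([[0.4866, 0.2657, 0.1982],
                      [0.2290, 0.6917, 0.0793],
                      [0.0000, 0.0451, 1.0439]])

def p3_to_srgb(rgb_linear):
    """Re-express a linear Display P3 color on an sRGB display (naive clip)."""
    xyz = P3_TO_XYZ @ np.asarray(rgb_linear, dtype=float)
    srgb = np.linalg.inv(SRGB_TO_XYZ) @ xyz
    # The philosophical choice: here we simply clip what the lesser display
    # can't show; reducing the better display instead would avoid the clip.
    return np.clip(srgb, 0.0, 1.0)

print(p3_to_srgb([1.0, 0.0, 0.0]))  # a saturated P3 red lands outside the sRGB gamut
```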

Step 2: same machine, different software
OK, now we're talking. You cut your video in one application, grade it in another one, and then make the deliverables out of the first one. You like adventure. It means that you need consistency between the color interpretations of the same color signal in different applications. That is a very political topic, because there is never just one way of doing things. You may have developers who know enough color science to prefer their own algorithm to the one they could simply copy and paste from StackExchange or Wikipedia. For a while, a well-known grading system company held the opinion that there was a right way of converting DCI-P3 to XYZ, which was arguably better but just not the one picked by the rest of the industry… after a nice discussion it became a checkbox in a configuration menu.
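To see why two implementations can legitimately disagree on something as basic as a DCI-P3 to XYZ matrix, here is a sketch of the textbook derivation from chromaticities; pick the DCI white point or D65 (or add a chromatic adaptation step) and you get a different, equally defensible matrix:

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Textbook derivation of a linear RGB -> XYZ matrix from xy chromaticities."""
    def xy_to_XYZ(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    prim = np.column_stack([xy_to_XYZ(*xy) for xy in primaries])
    scale = np.linalg.solve(prim, xy_to_XYZ(*white))  # make RGB=(1,1,1) hit the white point
    return prim * scale

P3_PRIMARIES = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
DCI_WHITE, D65_WHITE = (0.314, 0.351), (0.3127, 0.3290)

print(rgb_to_xyz_matrix(P3_PRIMARIES, DCI_WHITE))  # DCI-P3, theater white
print(rgb_to_xyz_matrix(P3_PRIMARIES, D65_WHITE))  # same primaries, D65 white
```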
It can be as simple as playing the same video file in different players. You could also have different ways to address the graphics board, using the operating system's color management or bypassing it. A more twisted one, back in the day: when playing the same video in QuickTime Player twice, there was a gamma shift because the first instance was using the graphics card overlay and the second wasn't… I'm sure it made some people quit their jobs. Toodee's done a great job on those little gamma shifts that drive people nuts.
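To give a sense of scale, here is a back-of-the-envelope sketch of such a gamma mismatch, assuming an image encoded for a gamma 1.8 display but decoded by a player that assumes 2.2 (roughly the old Mac vs PC situation):

```python
# A mid-gray pixel encoded for a gamma 1.8 display...
encoded = 0.18 ** (1 / 1.8)
# ...decoded by a player assuming gamma 2.2 instead:
displayed = encoded ** 2.2
print(encoded, displayed)  # ~0.386 encoded, ~0.12 displayed instead of 0.18
```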
Step 3: different operating systems
Talking about the differences between operating systems, we also have driver issues. Gone are the days when Mac, Unix and Windows had different gamma values for the display, but now, with the nascent HDR implementations on the desktop, I think we're in for some serious fun getting things aligned. And then we have the mobile thingies.
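Aligning HDR implementations means everyone agreeing on the exact transfer math. As a sketch, here is the SMPTE ST 2084 (PQ) EOTF; the slightest deviation in these constants shows up directly in nits on screen:

```python
def pq_eotf(signal):
    """SMPTE ST 2084 (PQ) EOTF: non-linear signal in [0, 1] -> luminance in cd/m2."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    e = signal ** (1 / m2)
    return 10000 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)

print(pq_eotf(0.508))  # roughly 100 cd/m2, the good old SDR reference white
```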
Step 4: interpretation
We have files that can't be displayed directly and want a layer of technical color management to travel along with the image so it shows correctly on the screen. This can be useful when displaying images with a large gamut, or encoded in a different way than the display device expects: linear, some kind of log… It can be a manual thing, when your software is not smart enough to recognize what type of file is ingested, or something automatic, triggered by reading some kind of metadata written in the header or in the name of the file. Think about the ICC profile in a JPG picture. Some applications read it, some don't. Same for writing. We rely on proper implementations of the metadata reading and the associated interpretation mechanism.
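As a small illustration of the read-it-or-not problem, here is a sketch using Pillow to check whether a JPG actually carries an embedded ICC profile (the file name is made up):

```python
import io
from PIL import Image, ImageCms

img = Image.open("cat_picture.jpg")      # hypothetical file
icc_bytes = img.info.get("icc_profile")  # None if nothing is embedded

if icc_bytes:
    profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    print("Embedded profile:", ImageCms.getProfileDescription(profile))
else:
    print("No ICC profile: the application will have to guess (probably sRGB).")
```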
Step 5: intentions
We're getting really ambitious: we want to carry some artistic intention along with the picture, on top of the technical color management. That allows, for example, adding some context to a VFX shot: say it's graded to be a day-for-night shot; it's better to have the original ungraded plate to do the comp and to be able to add the look on top when needed. Still, in a lot of cases that workflow is managed by a LUT created by the grading system, which may or may not contain the technical part for the display device. This may lead to issues and discrepancies, either because the VFX and the grading systems don't interpret the LUT the same way, or because the viewing device doesn't match the same target (DCI-P3 vs sRGB, for example). Or a combination of the two. Or something else…
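Since so much of this travels as LUT files, here is a minimal sketch of reading a .cube 3D LUT and applying it with a crude nearest-neighbour lookup. Notice how much is left to interpretation (input domain, shaper, index order), which is exactly where applications end up disagreeing:

```python
import numpy as np

def load_cube(path):
    """Minimal .cube parser: assumes LUT_3D_SIZE is present and a 0-1 input domain."""
    size, table = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] in "+-.":
                table.append([float(v) for v in line.split()[:3]])
    return np.asarray(table).reshape(size, size, size, 3)  # red index varies fastest

def apply_lut(rgb, lut):
    """Crude nearest-neighbour lookup; a real implementation would interpolate."""
    size = lut.shape[0]
    r, g, b = np.clip(np.round(np.asarray(rgb) * (size - 1)).astype(int), 0, size - 1)
    return lut[b, g, r]  # note the b, g, r index order implied by the file layout

# lut = load_cube("show_look.cube")   # hypothetical LUT from the grading system
# print(apply_lut([0.18, 0.18, 0.18], lut))
```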
Step 6: editable intentions
You may want to go for something that you can change along the line and use as a starting point. That was the purpose of the ASC-CDL (I made one of the first implementations, as a Lustre plugin, back in the day), but it only worked if you had complete control over the working colorspaces in the different applications. It worked perfectly well, but there were very few implementations, because it took too many color scientists to change the light bulb: you can't expect to have one in every facility working on a project. Ultimately this will find a more integrated future with its brainchild, the ACES Metadata File.
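For reference, the CDL math itself is tiny, which is part of why it travels so well once the working colorspaces are under control. A sketch, using one common interpretation of the clamping:

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def asc_cdl(rgb, slope, offset, power, saturation):
    """ASC-CDL: per-channel slope/offset/power, then a global saturation."""
    rgb = np.asarray(rgb, dtype=float)
    sop = np.clip(rgb * slope + offset, 0.0, 1.0) ** power  # clamp before power, one common choice
    luma = np.dot(sop, REC709_LUMA)
    return np.clip(luma + saturation * (sop - luma), 0.0, 1.0)

# A mild warm grade on an 18% gray pixel (values are just illustrative):
print(asc_cdl([0.18, 0.18, 0.18],
              slope=np.array([1.10, 1.00, 0.95]),
              offset=np.array([0.02, 0.00, -0.01]),
              power=np.array([1.00, 1.00, 1.05]),
              saturation=0.9))
```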
You may even want more than a pixel-based color correction and have a full grading pipeline with zones and all the bells and whistles. If every step is managed using the same proprietary system, such as Baselight's BLG file, it's rather easy, as you have one rendering engine and one vendor controlling the implementation. Going for a standard across multiple applications is a big effort, and as a tool developer you have to get sufficient traction from the industry to justify the cost. But it would be really cool if we could do that with some kind of shaders.
Step X: different viewing environments
So you've finished your pictures in your dimly lit room with its nice 6500K LED bar bouncing off the neutral gray back wall, and you proudly send the result to your client, who just hates it. It turns out he checked the result on his phone, while having a coffee outside on a nice restaurant terrace, next to a blue wall (but I suspect it was actually purple). I've already talked about viewing environments; there's nothing you can do, it's all about psychology, or the lack thereof. And it's so much more fun once we do that in HDR!
Conclusion
So you get it: we need standards. That's why we have Rec 709, BT.1886, SMPTE ST 2084, 2086 and 2094, Rec 2020 and DCI-P3; that's why we have tools like ICC, ASC-CDL and ACES. ICC is at v4, ACES is at v1.2; it takes time, and it's never finished.
We used to have words to describe what we want and a human to make the interpretation; now we can dial in numbers so a machine can do it. But we also have to embrace the fact that the discussion, the alternative proposition made by the other guy, is worth considering, as that point of view may bring something to the story. We also need to stay humble about expectations: as much as the technology can do, the dream of having everything automatically and consistently displayed along a content production line is still far from us. We'll have a lot of fun building it!
Cedric
Feel free to comment on our Discord server
Originally published on Rockflowers, August 2020