Improved Color Blending

On March 30, 2012, in All Posts, Games, Graphics, by stu

Linear RGB color interpolation is not as great as its popularity would make it out to be. In this post I introduce blending in the CIE-LCh color-space, which is dramatically better for certain applications.

NOTE: you may want to read my article on Twisted Hue Lighting before reading this post.

Background and Issues

The standard way to blend between two colors (C1 and C2) is just to linearly interpolate each red, green, and blue component according to the following formulas:

R' = R1 + f * (R2 - R1)
G' = G1 + f * (G2 - G1)
B' = B1 + f * (B2 - B1)

Where f is a fraction in the range [0,1]. When f is 0, our result is 100% C1, and when f is 1, our result is 100% C2. Simple enough. If we sample 7 points along this range, it produces a gradient, such as this one:

If you visualize RGB color-space as a 3D cube (one axis for each component), then this blend is just taking uniform color samples along a line that connects the two color points within that space (hence the term “linear” interpolation).
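As a concrete sketch of those three formulas, here is a minimal helper (Python for illustration; the original article gives no code here, so names are my own):

```python
def lerp_rgb(c1, c2, f):
    """Blend two RGB colors by linearly interpolating each component.

    c1 and c2 are (r, g, b) tuples with channels in [0, 255];
    f is the blend fraction in [0, 1] (f=0 gives c1, f=1 gives c2).
    """
    return tuple(round(a + f * (b - a)) for a, b in zip(c1, c2))

# Sample 7 points along the blend, as in the gradients shown here.
gradient = [lerp_rgb((0, 0, 255), (255, 255, 0), i / 6) for i in range(7)]
```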

This technique is used a lot in games, for example if you have a changing dynamic light source – like the light from a setting sun. Or perhaps you’ve got a particle effect that blends from one color to another. The code for this is fairly cheap to run. However, depending on your art style, a linear blend in RGB color-space may not always be the best choice. Suppose we do the same blend as above, but with blue as the starting color:

Observant readers will notice something a bit strange about this gradient. The middle color, where f=0.5, is gray. It has no color at all. We can measure the saturation of these colors (by converting them to the HSB color-space), with the following result:

[1.0] [0.8] [0.5] [0.0] [0.5] [0.8] [1.0]

The brightness also changes in a similar way – it is not linear across this gradient.

In some applications, you may want to maintain color saturation, especially if you’re using these colors as a diffuse material color. It changes the characteristics of a scene dramatically when the saturation (color) goes away and the brightness is cut in half, such as when the middle color became gray in the blue-to-yellow example above. In the real world, a dramatic sunset is created by the particles in the air scattering the sunlight into wavelengths that create colors of yellow, orange, red, purple, and blue. The colors are blended in the same way a rainbow sends various wavelengths of light into your eye.

Polar Color-Space Interpolation

If your goal is to maintain color saturation and brightness, you can perform the interpolation in a polar-coordinate color space. Instead of the cube-shaped space of RGB, polar color spaces are cylindrical or conical in shape. Examples of polar coordinate spaces include HSB (also called HSV), HSL and CIE-LCh. *

* Technically HSB is a hexagonal cone shape, but we can consider that it approximates a circular cone.

To blend in HSB space, for example, you need to first convert each RGB color to an HSB color. For code to convert colors, follow the link at the end of this article. The code isn’t free, so don’t use it on 10,000 particles, but later I’ll show ways to optimize this.  Once you have two HSB colors, their saturation and brightness can be interpolated linearly, using the same formulas that I introduced at the beginning of this article. The hue component is an angle around a circle, so you need to handle the discontinuity from 360 to 0 degrees and interpolate the shortest route around the circumference. Here is the algorithm as pseudo-code:

d = h2 - h1
if (h1 > h2) then
    swap(h1, h2); d = -d; f = 1 - f
if (d > 180) then            // shortest path crosses the 0/360 seam
    h1 = h1 + 360
    h = ( h1 + f * (h2 - h1) ) mod 360
else
    h = h1 + f * d

Here I’m assuming that the hue ranges from [0, 360), but some implementations use [0, 1), so adjust accordingly. Using the blue-to-yellow example, we now get:

If the two colors have the same saturation and brightness (they do in the above example), this interpolation will just rotate around the circumference, taking the shortest path (could be clockwise or counter-clockwise). Notice the color saturation is maintained through the entire gradient.
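Putting the pieces together, here is a runnable sketch of the whole HSB blend, built on Python’s standard colorsys module (whose hue component lives in [0,1) rather than degrees, hence the scaling):

```python
import colorsys

def lerp_hue(h1, h2, f):
    """Interpolate two hue angles (degrees) along the shortest arc."""
    d = h2 - h1
    if h1 > h2:
        h1, h2 = h2, h1
        d = -d
        f = 1 - f
    if d > 180:  # shorter to go the other way, across the 0/360 seam
        h1 += 360
        return (h1 + f * (h2 - h1)) % 360
    return h1 + f * d

def lerp_hsb(c1, c2, f):
    """Blend two RGB colors (channels in [0,1]) by interpolating in HSB."""
    h1, s1, v1 = colorsys.rgb_to_hsv(*c1)
    h2, s2, v2 = colorsys.rgb_to_hsv(*c2)
    h = lerp_hue(h1 * 360, h2 * 360, f) / 360  # colorsys hue is in [0,1)
    s = s1 + f * (s2 - s1)
    v = v1 + f * (v2 - v1)
    return colorsys.hsv_to_rgb(h, s, v)
```

Blending blue to yellow with this function keeps saturation and brightness at 1.0 across the whole gradient, with only the hue rotating.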

One issue with the HSB color-space is that it doesn’t do a great job of representing brightness as our eye sees it. The disadvantages of HSV and HSL are explained very well in this Wikipedia article. In the above gradient, the blue on the left is perceptually much darker than the yellow on the right, even though in HSB, the brightness component is 1.0 over the entire gradient. The brightness is also inconsistent across the gradient, and the greens are too close to one another in color.

Perception Rules

To solve this, we can do the interpolation in a different polar color space. The CIE-LCh color-space is similar in geometry and function to HSB, but takes into account human perception of color differences. It is a polar-coordinate version of the CIE-Lab color-space. I’ve put together a simple page so you can see the difference in the two color-spaces side-by-side.

Here is a blend of the same blue and yellow used above, but interpolated in CIE-LCh color-space:

Great! This time perceived brightness appears to change uniformly across the gradient.

Converting from RGB to CIE-LCh is not cheap: it requires converting RGB to XYZ, then XYZ to CIE-Lab, and finally CIE-Lab to CIE-LCh. After you compute the interpolated colors, you need RGB values to render the colors onto the screen. This means you have to reverse this conversion for each color along the gradient.
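For reference, the forward chain can be sketched as follows. This assumes sRGB input and a D65 white point, using the commonly published matrix and constants; treat it as an illustration rather than production code:

```python
import math

# D65 reference white, as used by sRGB.
XN, YN, ZN = 95.047, 100.0, 108.883

def srgb_to_xyz(r, g, b):
    """sRGB channels in [0,1] -> CIE XYZ: undo gamma, then a linear transform."""
    def inv_gamma(u):
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    r, g, b = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    x = 100 * (0.4124 * r + 0.3576 * g + 0.1805 * b)
    y = 100 * (0.2126 * r + 0.7152 * g + 0.0722 * b)
    z = 100 * (0.0193 * r + 0.1192 * g + 0.9505 * b)
    return x, y, z

def xyz_to_lab(x, y, z):
    """CIE XYZ -> CIE-Lab, relative to the D65 white above."""
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / XN), f(y / YN), f(z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_lch(L, a, b):
    """CIE-Lab -> CIE-LCh: rectangular a/b to polar chroma/hue (degrees)."""
    return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360
```

Each step must be inverted on the way back (LCh to Lab to XYZ to RGB) for every color you want to draw.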

Approximation Trick

If you are generating a gradient with a lot of steps, you don’t have to do this conversion at every step. You can approximate the CIE-LCh interpolation by converting only a subset of steps back to RGB and then interpolate in RGB-space using the old school technique. If you just do this with the halfway point (f=0.5), for example, your cost is only 2 conversions to CIE-LCh space and 1 conversion back to RGB space. The rough approximation looks pretty close to our CIE-LCh result above:
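The trick can be sketched generically like this (the `mid` argument is hypothetical shorthand for the f=0.5 color, already blended in CIE-LCh and converted back to RGB):

```python
def lerp_rgb(c1, c2, f):
    """Plain per-channel linear blend, channels in [0, 1]."""
    return tuple(a + f * (b - a) for a, b in zip(c1, c2))

def approx_gradient(c1, c2, mid, steps):
    """Approximate a CIE-LCh gradient with two cheap RGB segments.

    c1, c2: endpoint RGB colors; mid: the halfway color, precomputed
    via CIE-LCh (so only 2 conversions out and 1 back are needed).
    """
    out = []
    for i in range(steps):
        f = i / (steps - 1)
        if f <= 0.5:
            out.append(lerp_rgb(c1, mid, f * 2))
        else:
            out.append(lerp_rgb(mid, c2, (f - 0.5) * 2))
    return out
```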

What About CIE-Lab Space?

Even though CIE-Lab is a perceptually-based color-space, linear interpolations within it do not maintain saturation, although brightness does blend uniformly. Here’s an example, showing the same gray problem:

Compared to the interpolation in CIE-LCh where saturation and brightness are uniform:

For an interesting comparison, here’s the same interpolation in HSB space:

And in RGB space:

Final Comparison

Here are all the gradients of blue-to-yellow for comparison purposes:

Row 1: RGB Interpolation
Row 2: HSB Interpolation
Row 3: CIE-LCh Interpolation

Here is a cyan-to-red interpolation that also exhibits the gray problem in RGB space:

It is a dramatic example of the improved quality that you can get by interpolating in the CIE-LCh color-space.

In a future post, I’ll talk about how to do a weighted-blend of an arbitrary number of colors in polar coordinate systems.

Open-Source Code Examples

Example code to convert RGB color to HSB and back (C/C++): Stack Overflow

ColorJizz color conversion library (AS3, PHP, Javascript, C#): Google Code



Twisted-Hue Lighting

On March 15, 2012, in All Posts, Games, Graphics, by stu

3D graphics gurus and hardcore gamers continually lust after the latest in 3D rendering hardware so that they may get closer to the holy grail of ray-tracing, real-time volumetric lighting, and other rendering effects. But these science-based effects, while accurate, are geared mainly toward the reproduction of reality. For the game I’m making now, I’m much more interested in creating an experience in a world that is visually driven by aesthetics, not reality. A world that is rich with character – more like a painting than a photograph or a blockbuster movie.

While reality is often the goal in 3D computer graphics, an artistic interpretation is usually more compelling.

To present this contrast, let me reintroduce the most basic lighting model, the Phong reflection model. The formula for the Phong model computes the amount of light hitting a point based on the surface normal at that point, the direction of a light source, and the direction of the viewer. The equation can be broken down into 3 components: an ambient lighting term (bouncing light from all directions), a diffuse term (direct light from a source), and a specular term (reflected light).

Let us now imagine a simple 3D object. We’ll choose a nice green for the diffuse color of its surface material. (“Nice greens” usually have red in them, by the way). The gradient below represents, from left-to-right, the diffuse term in the Phong model; from the brightest point (surface color facing the light) to the opposite side of the object (the parts in shadow):

Diffuse lighting gradient is aesthetically boring.

While the green color I’ve chosen is quite splendid, the gradient itself is rather boring. This is a well-known condition – a topic for Art 101. Artists are trained to avoid boring lighting gradients and embellish these with interesting color. Often these colors use a completely different hue. In the example of a skin tone on the right, an artist will highlight the basic yellow with warm, red hues. No outdoor watercolor would be complete without shadows of blue and purple, instead of just darker versions of a subject’s actual color.

Computer scientists thought they could solve this problem by adding the ambient lighting term. In theory, you could add in a dark blue color to give the effect of a vivid shadow. The problem is, this also affects the color of the lit part of the object:

The same gradient, with ambient blue light added.

What this means is that a lighting artist typically has to compensate, adding fill lights to the game world until the scene looks the way they want it to. These extra lights often must be “baked” into the game world and are static (they can’t move or change) because there are too many to efficiently calculate in a pixel shader.

Graphics programmers usually think of color in terms of its RGB (red, green, and blue) components. But artists are trained to think in terms of HSB, or hue, saturation, and brightness. To get a visual sense of the HSB color space, look at any color wheel. The code to convert between these systems does consume a few CPU cycles, but it can be very useful for a programmer to start using the HSB (also known as HSV) color space for certain applications. In this space, you can take any color and simply tweak a single value up or down to alter the saturation, hue, or brightness of that color. When you apply some artistic principles to this idea, you can take any color and “twist” it to get another color, one that complements it in a really attractive way, creating a color harmony. This is called color scheming, and I won’t go into these ideas here, but you can read more about them elsewhere.

I began to experiment with these ideas in the context of lighting. Colors that are next to each other on the color wheel look good together (analogous colors). I used this principle to generate a shadow color for any particular surface color. I ended up with the following technique:

  1. convert the surface color to HSB (looks best when color has saturation less than 50%)
  2. darken it to a value between 20-35%.
  3. bump the saturation up to 100%.
  4. “twist” the hue by 30-50 degrees in either direction.
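The four steps above can be sketched with Python’s colorsys module; the default twist of 40 degrees and shadow brightness of 30% are just illustrative picks from the ranges given:

```python
import colorsys

def twisted_shadow(r, g, b, twist_deg=40, shadow_v=0.30):
    """Derive a shadow color from a surface color, RGB channels in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # 1. convert to HSB
    v = shadow_v                            # 2. darken to ~20-35%
    s = 1.0                                 # 3. saturation up to 100%
    h = (h + twist_deg / 360.0) % 1.0       # 4. twist the hue
    return colorsys.hsv_to_rgb(h, s, v)
```

The lighting gradient then interpolates from the surface color down to this shadow color, instead of toward black.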

Using this twisted hue technique, our green lighting gradient can now look like this:



…which preserves the green color and produces a much more compelling lighting model than the Phong model gradients that I showed earlier. Here are some other twisted hue examples using other surface colors (the last two use the same surface color):


Finally, here are some simple examples to show the difference. These terrain renderings are captured from my game engine running on the iPhone. The top renderings use a standard N*L lighting model to modulate the texture color (a desaturated yellow color, similar to the above gradients). The bottom renderings use N*L to interpolate along the twisted hue lighting gradient.


For more information on HSV and HSL color spaces, see Wikipedia.

Last week I went to the doctor for my annual physical and I admit that I do not look forward to the blood draw. Mainly I don’t like the “not eating breakfast” part, but the “stab your arm with a needle” part isn’t so great, either. This time I got my blood test results online. FINALLY! Online patient records are about 10 years too late if you ask me.  I’m happy to report that my cholesterol results were great. My bad cholesterol is down from last year and my good cholesterol is up! Must be all those salty nuts I ate over the holidays.

Just like cholesterol, mistakes can be good and bad. What you want is less of the bad ones and more of the good ones. I would state that a good mistake is anything that you can recover from (either physically, mentally, or financially). All too often we get into behavioral routines that create barriers to creativity.

  • Good mistakes can sometimes show us surprising things we would not have thought of otherwise.
  • Good mistakes teach us how to avoid bad mistakes.
  • Good mistakes can still be painful, but if we can learn from them, we become stronger individuals.

In 1946, Percy Spencer, a scientist at Raytheon, was experimenting with a new kind of vacuum tube, a magnetron, when, to his surprise, a candy bar melted in his pocket. Yikes! Luckily he was over 50 at the time and probably already had his 3 kids by then. Anyway, the next day he placed an egg next to the magnetron and it started vibrating. He went in for a closer look and it exploded and got all over his face. The microwave oven was born.

The dreaded cockle bush could make you into a millionaire.

Two years later, a Swiss engineer named George de Mestral went for a hike and mistakenly walked through a grove of cockle-burrs. His socks and pants were covered with them. Upon his return home, cursing his predicament, he began the tedious process of removing them. Upon closer examination, he noticed thousands of tiny hooks on the burr that had clung to his fabric. 8 years later, after many experiments to refine his invention, he released the incredibly useful Velcro to the world.

There are many other examples of mistaken discoveries like these: Post-It notes, the Slinky, and gunpowder to name a few.

My mistakes generally do not have as profound results as these. Recently I was prototyping a stylized cloud effect for my game project. I went to export a particle texture and I apparently exported it at the wrong size. Huge, in fact! But when I ran the effect, it was actually kind of cool. It showed me a better way to build clouds.

It got me thinking that one of the big differences between an artist and an engineer is that an artist is free to make mistakes and is OK with it. They embrace play. Engineers strive to correct mistakes and avoid them in the future. This needs to change. Engineers need to play too. They need to think like artists to be creative.

To end this… a collection of interesting quotes on mistakes:

“The things which hurt, instruct.” – Benjamin Franklin

“The fellow who never makes a mistake takes his orders from one who does.” – Herbert V. Prochnow

“If silly things were not done, intelligent things would never happen.” – Tom Peters

“If you are very, very careful, nothing good or bad will ever happen to you.” – Dr. Robert Maurer

That last one sounds a bit “Finding Nemo-ish.”


Cat got your router?

On March 7, 2012, in All Posts, Commentary, by stu

Founding member of the "Cat/Router Love Association"

A couple years ago I got an Apple Airport Extreme WiFi Base Station so I could have a networked backup drive and a more reliable 802.11n router (that’s the “latest cool shit” tech, if you’ve been under a rock the past few years). Over the last week or so, I’ve noticed that my WiFi speeds have been terrible. Like “solar-flare season” terrible.

About the same time this started, I had upgraded my router firmware. If you Google pretty much any firmware update for any gadget you own, you’ll find a ton of users saying that it broke something. I’m guessing 99% of those complaints are usually coincidence. Let’s face it, the damn gadget was probably broken or slow to begin with and now you just have something concrete to blame it on since you can no longer return it to the store!

So I found myself falling into this trance, where I too was *certain* it was the damn firmware! The solution was to revert back to the previous firmware version. However, the Airport Utility software I have doesn’t let me do that. I have to get version 5.6, and I only have version 5.5.3. But the folks at Apple are sneaky bastards. This is how they get you. Version 5.6 and later only supports Mac OS X Lion (10.7). I have Mac OS X Snow Leopard (10.6). So no going backwards on the firmware until I plunk down some ca$h and reinstall all my programs. Thanks Apple.

Now here’s the good part. Today I noticed something. My cat Gabby, who is very old and like all old cats, is suffering from slow kidney failure. She sleeps and drinks a lot. Water…drinks a lot of water. Anyway, she also gets cold and likes to sleep on warm things, like floor heat registers, and on me in bed, and on laps and on routers. That’s right, I have a hot router.

Think this is unusual behavior?  Then head on over to Google Images and search for cat sleeping on router and you’ll find 293,000 results. This scene could very well be the most common cat photo on the internet. I’m breaking this news first. Heck, that doesn’t even include images of “kitty sleeping on router” or “squiggles sleeping on router.” My God. Shocking.

Another tangent, so you probably forgot that I said that I noticed something today. I was trying to download a video on YouTube of two mice battling-it-out with light sabers, but it failed completely. I picked up a hammer (I keep one next to my desk) and ran downstairs to where the router was, ready to bash it to pieces…er, I mean tap it a little. Instead, I find cute Gabby all snuggwee-wovey on the router again. Hmm…coincidence? I think not!

Sure enough, I raised up the router a couple of inches so she couldn’t sleep on it and ran back upstairs to find the most blazing fast speeds I’ve ever experienced in my upstairs office. Now there’s bound to be some crazy cat ladies out there on the internet that are going to accuse me of cruelty. But not to worry, I left the USB hard drive sitting on the table where Gabby is free to sleep on that all she wants. That’s right, I have a hot hard drive too.

Keep your cats away from your routers, people. Fur messes with radio waves.


First post!

One of the cool problems I’m working on now is object distribution/spawning in my 3D game world. For some background, the game world I’m building is, in theory, endless in each of the cardinal directions. The game builds terrain patches on-the-fly, procedurally, as the player moves around the world. More on that in a later post. Placed onto the terrain are objects, of a mostly decorative sort, whose purpose is to make the world look friendly and, well, occupied.

Object Coordinates in range 0.0 to 1.0 from Base 2 (X) and Base 3 (Y) Halton Sequences

So how does one go about determining where to place these objects once you’ve rented a spot to put them from your game engine?

The first thing I tried is a pseudo-random distribution using coordinates generated from two Halton sequences. Sounds scary, but really these are just a progression of “random-like” values in the range of 0 to 1 that cycle, depending on what radix (or numerical base) you choose. I am using base 2 for X and base 3 for Y. The great thing about this algorithm is that it produces a nice uniform distribution in the range of [0,1] horizontally and vertically (see image on right). This fits nicely next to another patch of terrain, like a puzzle piece. The algorithm avoids overlapping objects just by the elegant nature of the numerical sequences.
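A sketch of the generator (the usual radical-inverse formulation; index 0 is skipped because it always yields 0):

```python
def halton(index, base):
    """Return the index-th Halton value in [0, 1) for the given base:
    the digits of index, written in that base, mirrored around the
    radix point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# 2D quasi-random points: base 2 drives X, base 3 drives Y.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```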

I have to stop and give credit to Andrew Willmott and the team at Maxis, who presented the Halton Sequence algorithm and many other great ideas from Spore at SIGGRAPH in 2007. Their papers and videos can be found here. Spore is one of those rare games that really pushed the envelope of what’s possible with procedurally-generated content and art-minded engineering.

This worked great, but my objects are fairly large, not like the grass and shrubs that Andrew was placing on the terrain in Spore. On my landscape, this tended to produce a “noise” of objects, and didn’t allow enough space for movement in between them.

A Clustering of Rocks

I decided that I needed to try clustering the objects. Then I could maintain the same count of objects, but create a look that was a bit more artistic and intentional. As an example of what I was going for, check out some of the sketches on this page. I spent about a day thinking about how to do this, then after a good night’s sleep, I realized that this could be solved with a simple circle-packing algorithm. Each circle represents the “personal space” of an object in a cluster from a top-down perspective.

As any engineer who wants to prototype something visual will do, I pulled up my copy of Processing and started mocking something up. Here’s my first attempt below. Wait for the circles to appear below in the Java Applet box (give it permission to run first!), then click on the box and press SPACE to generate a new cluster of objects.

For each run, this applet iterates a hundred times before the circles come to rest in an optimal packing. It works very much like spring/particle physics systems, using relaxation – moving objects around repeatedly, until they finally reach an equilibrium. Each cycle pushes intersecting objects away from one another and then moves the objects closer to the center of the cluster. I want something much faster, and it only needs to work for a small handful of objects.
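In rough Python, the relaxation loop described above looks something like this (a sketch of the idea, not the applet’s actual source; the starting positions, pull strength, and iteration count are illustrative):

```python
import math
import random

def pack_cluster(radii, iterations=10, pull=0.05, seed=1):
    """Relax circles into a cluster: each cycle pushes intersecting
    circles apart, then draws every circle toward the cluster center."""
    rng = random.Random(seed)
    pts = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in radii]
    for _ in range(iterations):
        # Separate each overlapping pair along the line between centers.
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx, dy = pts[j][0] - pts[i][0], pts[j][1] - pts[i][1]
                dist = math.hypot(dx, dy) or 1e-6
                overlap = radii[i] + radii[j] - dist
                if overlap > 0:
                    nx, ny = dx / dist, dy / dist
                    pts[i][0] -= nx * overlap / 2
                    pts[i][1] -= ny * overlap / 2
                    pts[j][0] += nx * overlap / 2
                    pts[j][1] += ny * overlap / 2
        # Then move everything a little closer to the centroid.
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        for p in pts:
            p[0] += (cx - p[0]) * pull
            p[1] += (cy - p[1]) * pull
    return pts
```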

Here is my final prototype, which only spawns between 3-7 objects and goes through a mere 10 iterations before stopping:

The circles are sorted in the code by size such that the relaxation algorithm tends to push the smaller ones to the outside, which is what I want. A cluster like this would then be created for each of the first handful of Halton Sequence coordinates, which will space them at a good distance apart from one another. I’ll post images of the results at a later date.