Designing accessible color spectrums

As a web developer who likes to produce infographics, I’ve often run into the problem of choosing a good color palette for charts and, in the harder case, a smooth color spectrum. Colors should be aesthetically pleasing and should convey differences in accordance with our perceptual abilities.

UPDATE 1: Jan 9, 2011: I made a number of mistakes the first time around, including having the diagrams flipped.

There are a number of types of color blindness (see Wikipedia), but the most common involve the absence or dysfunction of the “red”, “green”, or “blue” cones in the retina. Problems with the “red” and “green” cones affect up to 10% of men; problems with the “blue” cones, and color blindness in women generally, are much more rare. When a cone is absent, the individual can’t make distinctions among colors that vary only in the quantity of that color, if you think of colors as a mix of red, green, and blue. (Actually the cones don’t respond to prototypical red, green, and blue; instead, each has a distribution of response over a range of wavelengths not necessarily particularly close to the prototypical one.)

Until today I didn’t really understand the mechanics of color blindness and so it was difficult to understand how to choose good color spectra. Worse, the only simple guide I could find through Googling gave some examples of good color palettes to use without explanation and without relation to the various types of color blindness.

Here’s what I’ve learned this afternoon.

The CIE 1931 color space is a mathematical model for our perception of color based on the activity of the three types of cones. Like “normal” sighted people, who have three types of cones, the CIE 1931 model has three dimensions of color. And it has two powerful implications: first, it gives us a coordinate system that covers the gamut of colors that can be perceived. Second, it gives a mathematical model that can be used to understand what happens for color blind people. In particular, in the flattened two-dimensional CIE 1931 color space, color blindness is represented by radial lines emanating from a red, green, or blue “copunctal point”. Color blind individuals (of each type) cannot distinguish two colors if they fall on the same radial line. These are called confusion lines. (This is also an interesting way of understanding the reduction of one dimension of perception.)

CIE 1931 Color Space and Confusion Lines for "Red" Color Blindness

I found this fascinating.

Now let’s make this practical. For a web designer concerned about accessibility, avoid following radial lines! More on this in a moment.
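To make the radial-line rule concrete, here’s a minimal sketch (my own, not from the original post) that tests whether two chromaticities fall on roughly the same confusion line by comparing their angles from the copunctal point. The copunctal coordinates are the same ones used in the script at the end of this post; the angle tolerance is an arbitrary choice.

```python
from math import atan2

# Copunctal points in CIE 1931 xy chromaticity coordinates,
# one per type of dichromacy (same values as in the script below).
COPUNCTAL = {
    "protan": (0.7635, 0.2365),
    "deutan": (1.4000, -0.4000),
    "tritan": (0.1748, 0.0000),
}

def confusion_angle(xy, blindness="protan"):
    """Angle of the confusion line through `xy`, measured at the
    copunctal point. Two chromaticities with (nearly) the same angle
    lie on (nearly) the same confusion line."""
    cx, cy = COPUNCTAL[blindness]
    return atan2(xy[1] - cy, xy[0] - cx)

def confusable(xy1, xy2, blindness="protan", tol=0.02):
    """True if the two chromaticities are hard to distinguish for
    this type of dichromat (angles within `tol` radians)."""
    return abs(confusion_angle(xy1, blindness)
               - confusion_angle(xy2, blindness)) < tol
```

By construction, any chromaticity and a point partway along the line toward the copunctal point come out confusable, while a step across the confusion lines does not.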

Backing up from color blindness, it’s important that the colors on a spectrum be spaced to correspond with our ability to distinguish nearby colors. One drawback of the CIE 1931 model is that, for instance, green gets an unfairly large region compared to other colors — that whole area up top just looks like the same green to me. A newer color space called CIE 1976 (L*, a*, b*) (a.k.a. CIE LAB) is a transformation of the older CIE 1931 space such that equal distances in the color space represent equal amounts of perceptual difference. A color spectrum for a chart should be made by drawing a line or curve through this type of color space. (In the images below, the color space is larger than the colored region, but the black areas cannot be represented on computer screens because they do not fall in the RGB color space.)
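The “equal distances mean equal perceptual differences” property is exactly what the CIE76 ΔE* metric captures: plain Euclidean distance in (L*, a*, b*). A tiny sketch:

```python
from math import sqrt

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIE LAB.
    Because LAB is (roughly) perceptually uniform, equal distances
    correspond to roughly equal perceived differences."""
    return sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))
```

Evenly spaced stops along a straight line through LAB therefore have equal ΔE* between neighbors, which is exactly what we want from a chart spectrum.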

Now we have to put these two together. The CIE 1931 color space gives a model for how to choose accessible colors: avoid the confusion lines. The best way to avoid confusion lines is to travel perpendicular to them. But we want to go perpendicular in CIE LAB space, so that each step is also maximally perceptually distinct. (I’m assuming that perceptual distinctness between two points in CIE LAB space is unaffected by color blindness. That’s probably wrong, but good enough.)
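Since this paragraph is the heart of the method, here’s a self-contained sketch of it (standard D65 conversion formulas; the copunctal point is the protan one used in the script below): treat the confusion-line angle as a scalar field over the (a*, b*) plane and take its numerical gradient, which points perpendicular to the confusion lines in LAB space.

```python
from math import atan2

def lab_to_xy(L, a, b, white=(0.95047, 1.0, 1.08883)):
    """CIE LAB -> XYZ (D65) -> xy chromaticity, standard formulas."""
    def finv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16.0 / 116.0) / 7.787
    fy = (L + 16.0) / 116.0
    X = white[0] * finv(fy + a / 500.0)
    Y = white[1] * finv(fy)
    Z = white[2] * finv(fy - b / 200.0)
    return X / (X + Y + Z), Y / (X + Y + Z)

PROTAN = (0.7635, 0.2365)  # protan copunctal point in CIE 1931 xy

def spectrum_direction(L, a, b, eps=0.1):
    """Unit vector in the (a*, b*) plane perpendicular to the local
    protan confusion line: the numerical gradient of the confusion
    angle, i.e. the direction in which the angle changes fastest."""
    def angle(a_, b_):
        x, y = lab_to_xy(L, a_, b_)
        return atan2(y - PROTAN[1], x - PROTAN[0])
    da = (angle(a + eps, b) - angle(a - eps, b)) / (2 * eps)
    db = (angle(a, b + eps) - angle(a, b - eps)) / (2 * eps)
    n = (da * da + db * db) ** 0.5
    return da / n, db / n
```

This is the same computation the matplotlib script performs with numpy.gradient over a grid, just done at a single point with central differences.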

Let’s start with protanopia, the lack of “red” cones, as an example. The image below plots, in dark dotted lines, the confusion lines for protanopia on the CIE LAB color space. Note that because CIE LAB space is a distortion of the CIE 1931 space, the confusion lines no longer appear to radiate from a point. (This color space has a third dimension for lightness, L in 0–100, not shown. Here I’m choosing L=50.) For a “normal”-sighted individual, any path through the color space will be perceptually useful for a chart. For a protanope, only paths that go perpendicular to the confusion lines will have maximal perceptible differences; if you follow a confusion line, the protanope will not be able to tell the difference. As you can see, going from red to green is not a good idea since it follows a confusion line. The perpendiculars are indicated by white arrows. Good gradients follow the perpendiculars.

Protan Spectrum Lines in CIE LAB L=50

Here’s the full set of images for protanopia (red, left), deuteranopia (green, middle), and tritanopia (blue, right), for different values of lightness:

As you can see, “red” and “green” cone color blindness are similar. “Blue” cone color blindness is totally different; in fact, it’s practically a 90-degree rotation of the other two, making it impossible to follow a single line that is maximally perceptible to everyone.

Since the first two are similar, and tritanopia and tritanomaly are considerably rarer than the other types, we can put the “blue” cases aside (for now!) and design for the other two; then we might be able to choose a single color spectrum that at least works reasonably well in those cases. A good color spectrum to use is a vertical line that stays within the RGB boundary, either orange to blue or red to purple. That said, if we vary from the perpendiculars a little bit, we might be able to satisfy everyone a little: orange to turquoise and green to pink go diagonally across the color space and so might cover everyone.
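As a sketch of what “a vertical line through the space” means in practice, here is a pure-Python stand-in for grapefruit’s Color.NewFromLab (standard LAB→XYZ→sRGB formulas) that samples gradient stops at constant a* and varying b*. The particular L*, a*, b* endpoints are my own guesses at an orange-to-blue ramp, not values from the post.

```python
def lab_to_srgb(L, a, b):
    """CIE LAB (D65) -> sRGB tuple in 0..1, or None if out of gamut.
    Standard conversion formulas."""
    # LAB -> XYZ
    def finv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16.0 / 116.0) / 7.787
    fy = (L + 16.0) / 116.0
    X = 0.95047 * finv(fy + a / 500.0)
    Y = 1.00000 * finv(fy)
    Z = 1.08883 * finv(fy - b / 200.0)
    # XYZ -> linear sRGB
    rgb_lin = (
         3.2406 * X - 1.5372 * Y - 0.4986 * Z,
        -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
         0.0557 * X - 0.2040 * Y + 1.0570 * Z,
    )
    if any(c < -0.005 or c > 1.005 for c in rgb_lin):
        return None  # not representable on an RGB screen
    # gamma-encode
    def g(c):
        c = min(max(c, 0.0), 1.0)
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return tuple(g(c) for c in rgb_lin)

def vertical_spectrum(L=60, a=25, b_hi=55, b_lo=-55, steps=7):
    """Gradient stops along a vertical line in the a*b* plane
    (constant a*, varying b*): roughly orange down to blue, the
    direction the spectrum arrows suggest for protan/deutan viewers.
    Out-of-gamut stops are skipped."""
    stops = []
    for i in range(steps):
        b = b_hi + (b_lo - b_hi) * i / (steps - 1)
        rgb = lab_to_srgb(L, a, b)
        if rgb is not None:
            stops.append(rgb)
    return stops
```

Keeping L* fixed means every stop has the same lightness, so the ramp reads as a pure hue progression; varying L* along the line as well (as suggested below) would make it even more robust.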

That said, this is all theoretical. I’m not color blind so I don’t have any intuitions about whether this is right. Also, this is my first time getting into the math of colors so… maybe I got it wrong somewhere. In fact, in my first version of this I had numbers backwards and perpendicular lines that weren’t. Hopefully this is closer to the truth now. (And I appreciate the great explanations given by Daniel Flück at his blog.)

Finally, apparently everyone can see lightness, so the most accessible spectrum is just varying the lightness (and the color doesn’t matter).

These images were created with a Python script and the grapefruit, numpy, and matplotlib libraries. Here is the code:

# Usage: python L protan|deutan|tritan
# e.g. python 50 protan
# To generate all of the images at a bash shell:
#    for L in {25,50,75}; do for b in {protan,deutan,tritan}; do echo $L $b; python $L $b; done; done


import sys
from math import sqrt, atan2
from grapefruit import Color
import matplotlib.pyplot as plt
import numpy

w, h = (480, 480)
L = float(sys.argv[1])
bt = sys.argv[2] # blindness type

# According to
# These are points for each type of color blindness around which the dimensionality
# of the color space is reduced, in CIE 1931 color space.
copunctal_points = {
	"protan": (0.7635, 0.2365),
	"deutan": (1.4000, -0.4000),
	"tritan": (0.1748, 0.0000)
}

# Draw the color space.
colorspace = [[(0.0,0.0,0.0) for x in xrange(0, w)] for y in xrange(0, h)]
for x in xrange(0, w):
	for y in xrange(0, h):
		# Compute the CIE L*, a*, b* coordinates (easy, since our x,y coordinates
		# are just a translation and scaling of the LAB coordinates).
		a = 2.0*x/(w-1) - 1.0
		b = 1.0 - 2.0*y/(h-1)

		# Convert this into sRGB so we can plot the color, and plot it.
		clr = Color.NewFromLab(L, a, b)
		r, g, b = clr.rgb
		if r < 0 or g < 0 or b < 0 or r > 1 or g > 1 or b > 1:
			continue # out of the sRGB gamut; leave this pixel black

		colorspace[y][x] = (r,g,b)

# Draw the confusion line or spectrum line gradient.
csegs = 15
contourpoints = {
	"x": [[0 for x in xrange(0, csegs)] for y in xrange(0, csegs)],
	"y": [[0 for x in xrange(0, csegs)] for y in xrange(0, csegs)],
	"spectrum": [[0 for x in xrange(0, csegs)] for y in xrange(0, csegs)],
	"confusion": [[0 for x in xrange(0, csegs)] for y in xrange(0, csegs)]
}
for xi in xrange(0, csegs):
	for yi in xrange(0, csegs):
		# Compute pixel coordinate from grid coordinate.
		x = xi/float(csegs-1) * (w-1)
		y = yi/float(csegs-1) * (h-1)

		# Compute the CIE L*, a*, b* coordinates (easy).
		a = 2.0*x/(w-1) - 1.0
		b = 2.0*y/(h-1) - 1.0

		# Compute the corresponding CIE 1931 X, Y, Z coordinates.
		X, Y, Z = Color.LabToXyz(L, a, b)

		# Convert CIE 1931 X, Y, Z to CIE 1931 x, y (but we'll keep capital
		# letters for the variable names). The copunctal point is in CIE 1931
		# x, y coordinates.
		X, Y = (X / (X + Y + Z), Y / (X + Y + Z))

		contourpoints["x"][yi][xi] = x
		contourpoints["y"][yi][xi] = y

		# To compute the confusion lines, we plot a contour diagram where
		# the value at each point is the point's angle relative to the copunctal
		# point. Two points on the same confusion line will have the same angle,
		# and contour plots connect points of the same value.
		dY, dX = Y - copunctal_points[bt][1], X - copunctal_points[bt][0]
		contourpoints["confusion"][yi][xi] = atan2(dY, dX) # yields confusion lines

# To compute the spectrum lines, we want lines perpendicular to the
# confusion lines. In my first attempt at this, I computed perpendiculars
# in the CIE 1931 space by choosing the contour plot value at a point to
# be the *distance* from the point to the copunctal point. This plotted
# the concentric circles around the copunctal point, transformed to
# CIE LAB space.
# However this is wrong, because the perpendiculars should be computed
# in CIE LAB space, which is the perceptual space. To compute the
# perpendiculars, we compute the gradient of the matrix that underlies
# the confusion lines. Then the gradient is plotted with the quiver plot type.
contourpoints["spectrum"] = numpy.gradient(numpy.array(contourpoints["confusion"], dtype=float))
for xi in xrange(0, csegs): # normalize!
	for yi in xrange(0, csegs):
		d = sqrt(contourpoints["spectrum"][0][yi][xi]**2 + contourpoints["spectrum"][1][yi][xi]**2)
		contourpoints["spectrum"][0][yi][xi] /= d
		contourpoints["spectrum"][1][yi][xi] /= d

# Draw it.
plt.imshow(colorspace, extent=(0, w, 0, h))
plt.quiver(contourpoints["x"], contourpoints["y"], contourpoints["spectrum"][1], contourpoints["spectrum"][0], color="white", alpha=.5)
plt.contour(contourpoints["x"], contourpoints["y"], contourpoints["confusion"], w/15, colors="black", linestyles="dotted", alpha=.25)
plt.text(0, 0, "~".join(sys.argv[2:]) + "; L=" + sys.argv[1], color="white")
plt.savefig("colorspace_" + "_".join(sys.argv[1:]) + ".png", bbox_inches="tight", pad_inches=0)

Screen resolutions of today’s web users

I was curious today what screen resolutions people are using these days. Google Analytics reports the screen resolutions of your visitors but doesn’t present them in a useful way. It lists each unique screen resolution (e.g. 1152×864) and how many visitors came with that resolution. But what you want to know is: how many people have a horizontal resolution of 1152 or more? That calls for a cumulative histogram.
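A cumulative histogram like that is only a few lines of Python. Here is a sketch over a made-up Analytics-style export (resolution string mapped to visit count; the numbers are invented for illustration, not my actual traffic):

```python
from collections import Counter

# Hypothetical export: "WxH" resolution -> number of visitors.
visits = {"800x600": 50, "1024x768": 300, "1152x864": 50,
          "1280x800": 250, "1680x1050": 150, "1920x1080": 200}

def cumulative_share(visits, axis=0):
    """For each observed size along `axis` (0 = width, 1 = height),
    the fraction of visitors whose resolution is at least that size."""
    counts = Counter()
    for res, n in visits.items():
        counts[int(res.split("x")[axis])] += n
    total = sum(counts.values())
    share, running = {}, total
    for value in sorted(counts):
        share[value] = running / total
        running -= counts[value]
    return share
```

With this sample data, cumulative_share(visits)[1024] is 0.95: 95% of the invented visitors have at least 1024 horizontal pixels, which is the kind of number the charts below report.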

Here are histograms for horizontal and vertical resolutions based on visitors to my site over the last month. The horizontal resolutions show that around 95% of visitors support at least 1024 pixels, but it drops off to only around 70% of visitors supporting a greater horizontal resolution. The 70% hangs out till about 1280 pixels (meaning, should we be designing for 1280 pixels now and make things harder for just the remaining 30%?). Then it drops again to a mere 35% for anything greater than 1280. And as for the standard wide-screen resolution of 1680, it’s just around 15%.

For reference, the iPad’s resolution (in its most popular orientation) is 768×1024.

With 1024 pixels horizontally still the resolution most widely supported, it’s not surprising that 768 pixels vertically is the point of a big drop-off too, from around 95% down to less than 50% supporting anything greater. While 70% of visitors support 1280 pixels horizontally, only around 30% support its 4:3-corresponding vertical resolution of 1024 (probably because more people are using widescreens).