The geometry that describes the relationships among colors, and the neural mechanisms that support color vision, are unsettled. Here, we use multivariate analyses of measurements of brain activity obtained with magnetoencephalography to reverse-engineer a geometry of the neural representation of color space. The analyses depend on determining similarity relationships among the spatial patterns of neural responses to different colors and assessing how these relationships change in time. We evaluate the approach by relating the results to universal patterns in color naming. The decoding results account for two prominent patterns of color naming: the greater precision in naming warm colors compared with cool colors, evident as an interaction of hue and lightness, and the preeminence of reddish hues among colors. Additional experiments showed that classifiers trained on responses to color words could decode color from data obtained using colored stimuli, but only at relatively long delays after stimulus onset. These results provide evidence that perceptual representations can give rise to semantic representations, but not the reverse. Taken together, the results uncover a dynamic geometry that provides neural correlates for color appearance and generates new hypotheses about the structure of color space.
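
To make the analysis style concrete, the sketch below illustrates the general class of method the abstract describes: cross-validated pairwise decoding of sensor patterns at each time point, with decoding accuracy serving as a time-resolved dissimilarity measure among colors. This is a minimal sketch under stated assumptions, not the authors' actual pipeline; the array shapes, the scikit-learn tools, and the linear-discriminant classifier are illustrative choices.

```python
# Minimal sketch: time-resolved pairwise decoding of MEG sensor patterns.
# Assumptions (not from the paper): data arrive as a numpy array
# X of shape (n_trials, n_sensors, n_times) with color labels y (n_trials,),
# and a linear discriminant classifier with k-fold cross-validation is used.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def pairwise_decoding_rdm(X, y, n_folds=5):
    """Return a time-resolved representational dissimilarity matrix (RDM).

    Output shape: (n_times, n_conditions, n_conditions), where each entry is
    cross-validated pairwise decoding accuracy (chance = 0.5); higher accuracy
    is taken as greater dissimilarity between the two colors' neural patterns.
    """
    conditions = np.unique(y)
    n_cond = len(conditions)
    n_times = X.shape[2]
    rdm = np.zeros((n_times, n_cond, n_cond))

    for t in range(n_times):
        Xt = X[:, :, t]  # spatial (sensor) pattern at one time point
        for i, j in combinations(range(n_cond), 2):
            mask = np.isin(y, [conditions[i], conditions[j]])
            acc = cross_val_score(
                LinearDiscriminantAnalysis(), Xt[mask], y[mask],
                cv=n_folds, scoring="accuracy").mean()
            rdm[t, i, j] = rdm[t, j, i] = acc

    return rdm
```

Tracking how this matrix evolves over time is one way to operationalize "assessing how these relationships change in time"; the cross-decoding result (training on color words, testing on colored stimuli) corresponds to fitting the classifier on one stimulus set and scoring it on the other rather than cross-validating within a set.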