Now here’s a puzzle for you: why is it that we can’t see light? I mean, we can see objects that are illuminated by light, but we can’t see light itself. As a newly-minted researcher into colour perception, this question drives me nuts…

Take a torch and switch it on. Can you see the beam? Nope. Well, you might be able to see it if it’s a misty night or you’re in a dusty room or if there’s plenty of smoke about, but if the air is clear of reflective particles you won’t see nothin’. Just as you can’t see light from the sun during the day, or from the distant stars at night when you dance by the light of the moon…

Of course you can see a light when you look directly at it. You can see the sun itself and you can see a lightbulb. And if that light is shone through a material which is not completely opaque, like stained glass or coloured gel, you can see its glow. But what, exactly, are you seeing when you look at that light source?

We seem to have stumbled on some kind of paradox. But on reflection (pardon), perhaps the more important question is not so much why can’t we see light? Rather, acknowledging that we can’t see radiation of any kind, we should ask: why can we see a light (source)?

The light-sensitive components of our eyes – the so-called rods and cones that respond favourably to a tiny range of radiation within the much broader electromagnetic spectrum – don’t see the ‘visible’ radiation at all, despite what it says on the box. Rather, they respond to light reflected off physical objects. And specifically, only the part of the spectrum that hasn’t been absorbed by the surface we’re looking at.

Orange and Un-oranges

It is generally accepted that we see an orange as orange because the un-orange part of the spectrum is absorbed by the surface of the fruit. The remaining radiation is reflected away from the surface and that, in turn, enters our eye. Which is where the real magic begins…

The light radiation, travelling as photons or particles-that-behave-like-waves, first arrives at the cornea – the outer part of the eye covered by contact lenses if you happen to wear them – then moves through the pupil (the aperture or hole in the iris which varies in size according to the volume of ambient light); is cleverly focused by the lens; moves on through the ridiculously-named jelly-like wasteland of the vitreous humour; then finally arrives at the retina where the heavy lifting is performed.

The retina contains a stack of cells, each responding in its own special way: the thin strips we know as rods are by far the most plentiful – humans have a massive matrix of about 120 million per eye – and are mostly sensitive to light volume (that’s luminance or brightness in the graphic-arts world); then there’s the 6-million-odd collection of red-, green- and (relatively few) blue-sensitive cells which kinda look conical, although I, for one, would never have described them as ‘cones’ if it was my job.

If you want to get picky, they’re not specifically red-green-blue sensitive at all; rather, each tends to respond to the longer (reddish), middling (greenish) or shorter (blueish) wavelengths of the spectrum, with plenty of overlap. It’s the data collected from the relative stimulation of these cells that allows our brains to discern colour.

Transmutation

The rods and cones do a remarkable thing: they convert the light particles into electrical signals by a process called phototransduction, before passing the signal on through the visual nervous system via synapses and connections in the same way that other brain functions are transmitted. There are some double-ended (bipolar) cells first, then an array of ganglion transmission cells. There are far fewer of these than rods and cones (around 100 rods can converge on a single ganglion cell), and a light-sensitive subset of them wasn’t even discovered until the 1990s; these are now thought to be important for non-image-forming business like our psychological responses to colour, circadian rhythms etc.

Anyway, after all that is said and done, the relative electrical signals which were sympathetic to the original light source pass on to the brain by way of the venerable optic nerve. After that, it’s a process of neurology which leads to the business of perception and recognition. So when certain rods and cones are stimulated in a particular way, the brain determines it must be orange we’re seeing. Simples.

But is it orange?

We call it orange without really knowing if there’s a universal experience of orange, or whether my orange is the same as your orange, or without prejudice as to whether orange is good or bad or indifferent, or whether it makes you feel happy or sad, randy or glad etc. Strangely, we literally call it orange because of the orange fruit, not the other way around…

Anyway, we can chalk this up alongside other uncertainties of human experience like, ‘does a tree make a sound when it falls (without audience) in the forest?’ and ‘does the refrigerator light go out when we close the door’ or even ‘what happened to Schrödinger’s cat?’ But surely there’s a way we could determine whether the experience of orange is in some way universal or at least common?

Well, not exactly, but there is an area of the brain we call the colour centre (visual area V4) which is critical in the perception and processing of colour signals received by the eye, which ultimately results in colour vision. With this in mind, we can map the stimulation of the brain in response to the presence of a colour and that, at least, is more or less consistent.

But what about the philosophical question that people always want to talk about when the conversation becomes colourful: are we all seeing or experiencing the same thing when we talk of, or see, a given colour? Is my orange the same as your orange?

Well… That’s a bit meaningless to be honest. I mean, are we all hearing the same note on the piano? Feeling the same softness of a kitten’s fur? Experiencing the same, um, orgasm? I have no idea but let’s get back to the problem of not seeing light…

RGB vs. RYB

If you read anything about colour within the graphic arts (Photoshop etc), you’re bound to learn about red-green-blue or RGB colour as opposed to the primary (irreducible) colours we learned about in school which were (well, still are) red, yellow and blue. If you’re a fine artist (I love that expression) – a painter or a sculptor working with physical paint or glaze – then it’s this second world of passive, subtractive, colours that appears to make sense.

The more paint you add to the page, the darker it gets, right? And you will recall that if you combine yellow and blue paint you get a kind of green; red and blue you get purple. And that goes for anything that isn’t transmitting light like our friend the orange, the clothes we’re wearing, or the paint on the wall…

But it’s the opposite effect that happens when you look at your screen or shine coloured lights onto a stage or sit in a cinema. In that case, the more coloured light you throw at the subject the brighter it gets. We call it an additive colour space because it adds up to being white light when you combine it all together.

Isaac Newton is usually credited with figuring out that you can reverse engineer white light into its component parts by encouraging the light to spread its relative wings out from slowest to fastest through a prism. He could see the same colours that you can see in a rainbow and decided there were seven colour bands with a whole lot of others in between.

It’s from Newton’s observations that we get our famous ROY-G-BIV rainbow colours, although that’s a bit of a stretch really. There are more like 10 million colours that we can discern as humans – far fewer of them are in the rainbow, to be fair (the spectral colours), but there are a lot more than 7. But Newton really wanted there to be 7 because there are 7 notes in the musical octave and it would be so poetic if they were a perfect match! Well they’re not, but there are indeed some fascinating parallels in the perception of light/colour and sound/music which we’ll look at in another article…

Anyway, 7 or not, it is indeed possible to reduce the additive colours down to just 3 primary building blocks which when combined evenly produce white. That’s how we get colour on our computer screens and televisions and devices: tiny little charged liquid crystals (or light-emitting diodes these days) are arranged in filtered clusters of red, green and blue which from even a short distance give the illusion of the colour spectrum very effectively.

Interestingly, if you take just two of the additive primaries – red and blue – you get a blueish red or pink known to printers as magenta – the very same red as the primary colour of the subtractive space. If you combine red and green alone you get yellow; combine blue and green and you get a greenish blue known to printers as cyan and to the rest of us, as turquoise or aqua blue – the same primary colour blue as the subtractive space. A case, you might say, of moving through Alice’s looking glass and seeing everything in reverse…
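That pairwise mixing is easy to play with in code. Here’s a toy sketch (the function name `mix_lights` is my own invention): additive mixing is simply summing the light energy in each channel, clamped to the display’s maximum of 255.

```python
# Toy sketch of additive colour mixing: combining lights sums the
# energy in each of the three channels, clamped to the display max.
def mix_lights(*colours):
    return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_lights(RED, BLUE))         # (255, 0, 255) – magenta
print(mix_lights(RED, GREEN))        # (255, 255, 0) – yellow
print(mix_lights(GREEN, BLUE))       # (0, 255, 255) – cyan
print(mix_lights(RED, GREEN, BLUE))  # (255, 255, 255) – white
```

Each pair of additive primaries lands on one of the printers’ subtractive primaries, and all three together make white – the looking-glass symmetry in four lines of arithmetic.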

Why CMY(K)?

So what about CMY? Well, when offset lithography began to gain traction at the beginning of the 20th century as the most practical form of commercial printing, engineers figured out that if you put coloured dots on a page or other substrate (material) very close together then they tend to blend together to form new colours.

If you took that turquoise-cyan as the blue primary, that pink-magenta as the red primary and, well, yellow, you could pretty much get the entire spectrum of colour required to reproduce anything (well, nearly anything) we can see in the real world. You need to alter the density and frequency of the dots using a process called half-toning but that’s more or less the way it works.
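Half-toning itself is a lovely little algorithm. Here’s a deliberately minimal sketch of one classic flavour, ordered dithering (the names `BAYER_2x2` and `halftone` are mine, and real presses use far finer screens): a grey level is approximated by switching more or fewer dots on within a repeating threshold cell.

```python
# Minimal ordered-dither sketch: a grey level is approximated by
# varying how many dots in each repeating 2x2 cell are switched on.
BAYER_2x2 = [[0, 2],
             [3, 1]]  # threshold ranks within the cell

def halftone(grey_rows):
    """Map grey values (0.0 = paper, 1.0 = full ink) to '#' or '.'."""
    out = []
    for y, row in enumerate(grey_rows):
        line = ""
        for x, g in enumerate(row):
            threshold = (BAYER_2x2[y % 2][x % 2] + 0.5) / 4
            line += "#" if g > threshold else "."
        out.append(line)
    return out

# A left-to-right gradient: the dots get denser as the grey deepens.
gradient = [[x / 7 for x in range(8)] for _ in range(4)]
for line in halftone(gradient):
    print(line)
```

Run it and you’ll see the ink coverage thicken towards the right – the same trick, at vastly higher resolution and with angled screens per ink, that makes a four-colour press convincing.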

Along the way they figured out that you really need to add in black (that’s the K or key colour) to get darker colours or they’d end up a bit wishy-washy. And voilà: CMYK – the basis of process colours as used in commercial lithography and likewise the components of toner-based laser printers like you may well have in your office if you have a real job (unlike me).

Converting from RGB to CMYK is something we do in the graphic arts under the banner of colour management. Now, for future reference, this is absolutely not merely a matter of opening your RGB photographs in Photoshop and changing the colour mode to CMYK, because in so doing you ignore the kind of printer, substrate and ink that will eventually reproduce the colour. But let’s hold off on that for now…
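For the curious, here’s the naive, textbook version of that conversion – precisely the kind of profile-blind arithmetic that proper colour management (with its printer, substrate and ink profiles) improves upon. The function name `rgb_to_cmyk` is mine:

```python
# Naive, device-blind RGB -> CMYK conversion. Real colour management
# uses ICC profiles for the actual printer, paper and ink instead.
def rgb_to_cmyk(r, g, b):
    """r, g, b in 0-255; returns (c, m, y, k), each in 0.0-1.0."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0       # pure black: all key, no colour
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    k = min(c, m, y)                    # pull shared darkness into the key plate
    return tuple((v - k) / (1 - k) for v in (c, m, y)) + (k,)

print(rgb_to_cmyk(255, 0, 0))   # pure red -> (0.0, 1.0, 1.0, 0.0)
```

Note how the black (key) component is extracted as the darkness common to all three inks – the formula’s nod to the “wishy-washy” problem mentioned above.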

Is RGB colour visible at all?

So can we even see RGB colours? Given that RGB colour is transmitted as light radiation how can we see it? Well we only really see the net result of its action in falling on an RYB surface which in turn appears illuminated. So that would be a no in terms of the everyday, passive world of things. But that doesn’t account for what you’re reading on your screen right now. That’s RGB light-colour you’re seeing…

OK, here’s what we know…

  • We cannot see radiation of any kind.
  • Radiation within a certain, tiny spectrum of roughly 400 to 700 nanometres in wavelength causes objects to be seen by us humans.
  • We can’t see the reflected light from those objects but we can see the objects.
  • If the light shines through something translucent it appears to glow.
  • We can see the glow of a light source but not see the light leaving it: once it leaves it disappears!
  • RGB light falling on a coloured RYB surface allows that surface to be seen and the resulting colour we see is a consequence of the bias of the incident light (the colour of the light falling on the surface) and the colour of the surface.
  • If that surface is highly reflective, light can leave that surface and excite another surface with its RGB light but we still can’t see the light – only the surface of the objects which the reflected light falls on and that surface is RYB in nature.
  • We can’t see RGB light.

And the answer?

I suspect there may be some kind of explanation in terms of energy which disperses on leaving its source. Visible light is technically non-ionising, or relatively weak, electromagnetic radiation. That is, it only carries enough energy to excite electrons and other sub-atomic particles, not enough for them to break their bonds, leave their homes and families, and wreak havoc with DNA and anything else in their path… That’s the stuff of ionising radiation such as X-rays and gamma rays which, by the way, we can’t see either, but which do allow us to see further into matter than visible light can penetrate under its own steam. That’s why we x-ray our limbs to see beneath the skin to the bone within, and why the operators of such machines protect their private parts with dense metals that can withstand their penetrating gaze…

But while therein might lie some kind of descriptive, technical explanation, it does little to explain such an odd paradox: no matter how hard we try, we can’t see light. And if we accept that in good faith and humility, why can we see a light at all? 

Feel free to comment, if you can shed some, um, light…