Consider this...

"Please God, protect me from your followers." - bumper sticker seen in southern Sweden

An earlier article, Reality and perception (2) — perception = the five senses, discussed how our body’s sensors (a.k.a. the five senses) only detect a limited range of stimuli.

Our ears only “hear” pitches from about 20 to 20,000 Hertz; our eyes only “see” light in the band between the ultraviolet and the infrared; and so on. The article stopped there, as its intent was only to indicate why we do not perceive the physical world (space, fields and particles) as it really “is” (a result of natural selection).
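If you like, you can picture each sense as a simple band-pass filter. The toy sketch below uses the rough cutoff figures from the paragraph above (20 to 20,000 Hertz for hearing, roughly 380 to 750 nanometers for visible light); the function names are invented for this illustration.

```python
# Toy model: each sense only registers stimuli inside a narrow band.
# Cutoffs are the approximate figures from the text; function names
# are made up for this sketch.

def is_audible(frequency_hz):
    """Human hearing spans roughly 20 Hz to 20,000 Hz."""
    return 20 <= frequency_hz <= 20_000

def is_visible(wavelength_nm):
    """Visible light sits roughly between the ultraviolet (below
    ~380 nm) and the infrared (above ~750 nm)."""
    return 380 <= wavelength_nm <= 750

print(is_audible(440))      # concert A: audible
print(is_audible(40_000))   # bat echolocation: not for us
print(is_visible(400))      # violet light: visible
print(is_visible(1_000))    # infrared: invisible to the eye
```

Everything outside those bands simply does not exist for our sensors, however real it may be physically.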

The article also mentioned that we “perceive 400nm light as violet in our minds/brains”, but opted out of discussing that further, pointing out that it “is another subject…”. A pretty cowardly way of avoiding the subject, right? So now let’s go into it a bit further, but not too far…

It should come as a surprise to no one that our body’s sensors react to certain external stimuli and then generate electrical signals which are sent over nerves to the brain — and that is where the work gets done: analyzing the sense data and converting it into understandable concepts like color, shape, movement, sound and all the rest. Our brains are an essential part of our perception.

Since vision is probably the neurologically best-understood sense, let us start there — by considering a modern digital camera. Advance warning: The following analogy only goes so far.

Light passes through the camera’s objective (a series of lenses), which focuses it onto the detector (e.g., a CCD) at the back of the camera. Each tiny component (pixel) of the detector receives light of varying strength, indicating the intensities of the three colors (red, green and blue) composing the light at that point. The light falling on the detector causes each pixel to generate a minute electrical current, which is then read by the camera’s computer. Depending on how you have configured your camera, the computer may simply store the untreated (RAW) data pixel by pixel onto its memory card, or it may convert the data into the popular JPEG format. The latter conversion throws out some of the information detected by the sensor, but keeps enough to make an acceptable copy of the subject viewed, i.e., of the light reflected off the subject.
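The RAW-versus-JPEG idea can be sketched in miniature. The snippet below is a toy illustration only — real JPEG compression involves color-space conversion and a discrete cosine transform, none of which appears here — but it mimics the one point that matters: lossy storage discards some of the detected information while keeping an acceptable approximation.

```python
# Toy sketch of RAW vs. lossy storage. A "pixel" is a (red, green,
# blue) triple of intensities 0-255, as read off the detector.
# RAW keeps the values untouched; the toy "lossy" step quantizes
# them, discarding fine intensity detail (real JPEG is far more
# sophisticated, but the principle is the same).

def store_raw(pixel):
    return pixel  # untouched sensor data

def store_lossy(pixel, step=16):
    # Round each channel to the nearest multiple of `step`.
    return tuple(min(255, round(v / step) * step) for v in pixel)

pixel = (123, 200, 37)
print(store_raw(pixel))    # exactly what the detector reported
print(store_lossy(pixel))  # coarser, but still roughly the same color
```

The eye-to-brain pipeline, as we will see, does something loosely analogous: it never hands the brain the raw light, only a processed signal.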

Still awake? Good.

In a human eye, light is similarly focused by the lens onto the retina, at the back of the eye, where two different kinds of detectors, rods and cones, convert the intensities of the received light into signals which are sent over nerves to the brain. (For now, we will not delve any further into signal transmission by nerves.) As even John learned in high-school biology (in Florida, yet!), the rods are more sensitive to dim light, but not to color; the cones can distinguish colors (again, red, green or blue, depending on which cone) but are insensitive to dim light. Hence the French saying, “La nuit, tous les chats sont gris.” (At night, all cats are grey. Even Figaro, our black cat.) So the detectors act similarly to the digital camera’s CCD, generating pixel-by-pixel currents in the nerves. The currents are conveyed by the nerves to the brain — and this is where the camera analogy must be dropped. The analogy, not the cam… Damn!
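The rod/cone division of labor can be caricatured in a few lines. This is a toy model with a made-up brightness threshold, not real photoreceptor physiology, but it captures the logic behind the French saying: below a certain light level only the rods respond, so color information is simply unavailable.

```python
# Toy model of rods vs. cones. The threshold is invented.
# Rods: sensitive in dim light, but color-blind (report grey only).
# Cones: report color, but need enough light to respond at all.

CONE_THRESHOLD = 0.1  # made-up units: below this, cones stay silent

def perceived_color(true_color, brightness):
    if brightness < CONE_THRESHOLD:
        return "grey"    # only rods fire: no color information
    return true_color    # cones active: color is reported

print(perceived_color("black", 0.5))    # daylight: Figaro looks black
print(perceived_color("black", 0.01))   # night: grey
print(perceived_color("ginger", 0.01))  # night: also grey
```

Note that the “greyness” is not a property of the cat, or of the light; it is an artifact of which detectors happen to be firing — which is the article’s larger point.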

By the way, the retina’s sensors are buried somewhat deep, behind a network of blood vessels and other tissue which the light must cross before being detected — certainly not the most efficient way to capture it. Would an intelligent designer have made them that way? Evolution, however, working by accident, certainly could have.

Stay tuned for the next and last part