Taking Photographs That Match Your Mind

Nina scoping her shot with her iPhone (photo by Merridy Cox)

You see something breathtaking and say to yourself: I have to take a picture of that! You snap it with your camera or phone, happy that you’ve captured the moment. When you return home and review your shots on the computer to share, you get to that breathtaking scene and your first thought is: why on Earth did I take a picture of that?!

The shot is nothing like what you remembered. That dull and lifeless scene is the farthest thing from breathtaking. What happened?

Nina checks her photo on her iPhone (photo by Merridy Cox)

When Your Mind and Your Camera Don’t Agree

We see with our eyes, but we feel and process meaning with our brain, and it is the brain that determines what we finally see. What we perceive is our brain’s interpretation of the scene, adjusted for meaning.

The camera doesn’t interpret. It is a tool that works on principles of light, focus, depth of field, breadth of field, and resolution and detail. What a DSLR camera set on automatic, a compact camera, and a smartphone have in common is that they are all set to capture the best shot, given the right conditions of light, contrast and motion. If you shoot with a camera set on automatic, it is acting as your brain, but without the interpretation of meaning. You’ve given away that power. Like a benevolent dictator, the camera or phone is boss of your shots, doing what it was designed to do to get the best shot in those particular conditions. The trouble is that the camera doesn’t see with your brain. Its idea of the ‘best shot’ is based on a set of criteria created by a manufacturer. It works great only in certain conditions, those best anticipated by the manufacturer (e.g. optimum light and distance). But make no mistake: you will not get what your brain sees. You might think so, but you won’t.

A short while ago, when I was visiting a good friend in British Columbia, we got to talking about photography, and I mentioned how I had returned from using a tablet and phone (for convenience) to my Canon DSLR camera (for quality). I’d ditched the camera in favour of the light, convenient iPhone, which I found easy, particularly when travelling; but I soon became frustrated and disappointed at not achieving what my brain saw. Returning to the DSLR camera allowed me to significantly improve my shots. My friend’s daughter, an avid picture taker with her mobile phone, challenged me: “Are you sure your camera takes better pictures?” I wanted to laugh, but then I realized that she was serious, her confidence born of her own pictures, which I’d seen and must acknowledge are very good in composition and sharpness. Closer inspection reveals that these were all achieved within a boundary of conditions: the lighting was optimal, the distance was good, and the composition was simple enough to accommodate the camera’s limitations, so what her brain saw, the camera reflected, at least fairly well.

Nina (decades ago) with her Minolta SLR and long lens (photo by H. Klassen)

But it is impossible for a smartphone or any automatic camera to achieve certain effects that only my DSLR camera set on manual or semi-manual can provide (e.g. setting my depth of field, adjusting for just the right bokeh, playing with exposure, achieving natural light and a high-resolution image in a low-light situation, getting very close or zooming far away with a dedicated lens). In addition, DSLR cameras outperform smartphone cameras because their sensors are much larger, let in more light, and produce more dynamic range in low-light scenarios. This allows them to capture greater detail than smartphone or compact cameras. Ultimately, as Smartframe acknowledges, “the gap between what’s possible on the smartphones and dedicated cameras remains significant.” The argument is similar for a regular camera set on automatic vs. one set on manual or semi-manual.
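To put the sensor-size difference in rough numbers, here is a minimal Python sketch comparing a full-frame DSLR sensor to a common smartphone sensor. The smartphone dimensions are typical assumed values for a 1/2.55-inch sensor, not the spec of any particular phone:

```python
import math

def sensor_area_mm2(width_mm, height_mm):
    """Light-gathering area of a sensor, in square millimetres."""
    return width_mm * height_mm

# Full-frame DSLR sensor: 36 x 24 mm (the standard full-frame format)
full_frame = sensor_area_mm2(36.0, 24.0)

# Typical 1/2.55" smartphone sensor: roughly 6.17 x 4.55 mm (assumed)
phone = sensor_area_mm2(6.17, 4.55)

ratio = full_frame / phone
stops = math.log2(ratio)  # each stop represents a doubling of light gathered

print(f"Area ratio: {ratio:.1f}x")
print(f"Advantage:  {stops:.1f} stops")
```

Under these assumptions the full-frame sensor has roughly a 31-times larger light-gathering area, or close to five stops of advantage, which is where the low-light and dynamic-range edge comes from.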

I’ve been there. Automatic settings on a camera and smartphone (which is basically like a camera on automatic) can only do so much to match what your brain sees. And they can be mighty annoying—particularly when the camera’s brain prefers to focus on the wrong thing.

Above: automatic setting went for background focus; below, setting corrected for foreground focus (photos of Earthstars in a cedar forest by Nina Munteanu)

If you truly want to get what your brain sees, you have to take over the brainpower of the camera. That means either tricking the automatic setting or going off automatic to manual or semi-manual on a camera (most smartphones don’t come with true manual settings; they may have some correcting software, but that isn’t the same thing). Over the past decade the market for phone cameras and compact cameras has been changing: Nikon’s Coolpix S800c combines an Android OS with a long zoom lens and a touchscreen-based interface, Panasonic’s Lumix CM1 blends a traditional smartphone with a 1-inch sensor, and Samsung’s Galaxy Camera 2 integrates an Android OS with 3G capabilities and a 21x optical zoom. All of them remain limited in matching what your brain sees to what your camera takes.

Getting Your Camera To Agree with Your Brain

Successfully getting your camera (or smartphone) to match your brain-sight starts with recognizing the various aspects of a captured image. These include:

  • focus (sharp or soft): what is in focus and what isn’t
  • depth of field: how deep the focused region is
  • lighting: colour saturation and contrast
  • resolution: sharpness and detail
  • motion (or the lack of it)
  • composition: where everything sits in the frame
  • bokeh: the look of the out-of-focus areas

All of these, once recognized, can be manipulated on your camera. On a smartphone or auto-camera, most of these factors must be addressed as best as you can by shifting your position or aim, changing the time of day or lighting when you take your picture, or changing your subject and surroundings. In other words, by manipulating what your brain sees.
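Depth of field, in particular, behaves predictably: it follows standard optics formulas, so you can estimate in advance how changing the aperture will change the focused region. Here is a minimal Python sketch using the common hyperfocal-distance approximation; the 50 mm lens, 3 m subject distance, and 0.03 mm circle of confusion are illustrative values:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness (all distances in mm).
    coc_mm is the circle of confusion (~0.03 mm for a full-frame sensor)."""
    # Hyperfocal distance: focus here and everything from H/2 to
    # infinity is acceptably sharp.
    H = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    if subject_mm >= H:
        far = float("inf")
    else:
        far = subject_mm * (H - focal_mm) / (H - subject_mm)
    return near, far

# A 50 mm lens focused at 3 m: stopping down from f/2.8 to f/16
for f_number in (2.8, 16):
    near, far = depth_of_field(50, f_number, 3000)
    print(f"f/{f_number}: sharp from {near/1000:.2f} m to {far/1000:.2f} m")
```

Under these assumptions, f/2.8 yields a zone of sharpness only about 0.6 m deep, while f/16 stretches it to roughly 5 m, which is exactly the trade-off you exploit when you want an isolated subject versus a fully sharp scene.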

I won’t lie; it’s not easy to manipulate what the camera takes to match what your brain sees. It takes dedication and time. But it starts with recognizing what needs manipulating: training your eyes and brain to really see what you’re taking a photo of and understanding what your camera has to do to achieve it.

Nina photographing a tributary of the Otonabee River, ON, with her Canon DSLR (photo by Matthew P. Barker, Peterborough Examiner)

How Our Eyes and Brains See

It helps to understand how our eyes see and how our brains process what we see, particularly how that differs from what a camera does. This includes angle of view; resolution and detail; and sensitivity and dynamic range.

Angle of View: Our angle of view isn’t straightforward like that of a camera lens with a set focal length (e.g. wide angle vs. telephoto). Cambridge in Colour tells us that “even though our eyes capture a distorted wide angle image, we reconstruct this to form a 3D mental image that is seemingly distortion-free.” Our central angle of view, around 40–60°, is what most impacts our perception. “Subjectively, this would correspond with the angle over which you could recall objects without moving your eyes,” says Cambridge in Colour.

Rendition of what eye / brain focuses on (image from Cambridge in Colour)

Resolution and Detail: Cambridge in Colour tells us that 20/20 vision is mostly restricted to our central vision; we never actually resolve that much detail in a single glance. Away from the centre our visual acuity decreases, and at the periphery we only detect large-scale contrast and minimal colour. A single glance, therefore, mostly resolves detail at the centre. Because our brain remembers memorable textures, colour and contrast (not pixel by pixel), our eyes focus on several regions of interest in rapid succession, and this paints our perception. “The end result is a mental image whose detail has been prioritized based on interest.” It is our interest that dictates what we see and ultimately informs our memory of that image.

How our eye / brain integrates depth of field and exposure for background and foreground (image by Cambridge in Colour)

Sensitivity & Dynamic Range: According to Cambridge in Colour, our eyes have a dynamic range equivalent to more than 24 f-stops. This is because our brains combine background and foreground into a single mental image.

Matching the Camera to Our Brain

The next step is to learn how to manipulate the camera to achieve these effects. This means learning how to use the f-stop, how to manipulate the shutter speed, how to change the ISO setting, and what all of these, in turn, produce in terms of focus, depth of field, lighting, exposure, saturation, resolution, bokeh and more. Taking a course in photography is a good way to start. Experiment with settings. Learn about the equipment: lenses, filters, tripods. Go on a camera shoot with a photographer who knows these things. It promises to be ultimately rewarding and fulfilling.
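The trade-off among these settings can be made concrete with the standard exposure-value formula, EV = log2(N²/t), adjusted for ISO: combinations with the same EV produce equally bright exposures, which is why stopping down the aperture forces a slower shutter speed. A minimal Python sketch, with three illustrative example settings:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """EV referenced to ISO 100; equal EVs mean equally bright exposures."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

# Three roughly equivalent ways to expose the same scene brightness:
settings = [
    (2.8, 1/500, 100),   # wide aperture, fast shutter: shallow depth of field
    (8.0, 1/60, 100),    # mid aperture, slower shutter
    (18.0, 1/12, 100),   # stopped down, slow shutter: deep field, needs support
]
for f_number, t, iso in settings:
    ev = exposure_value(f_number, t, iso)
    print(f"f/{f_number}, 1/{round(1/t)} s, ISO {iso}: EV {ev:.2f}")
```

All three land within a small fraction of a stop of one another, so the exposure stays constant while depth of field and motion rendering change, which is the whole creative point of leaving automatic.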

I wanted the entire foreground group of Shaggy Mane mushrooms to be in focus and the background less focused but recognizable; I therefore set my f-stop at 18, which gave me a slower shutter speed (so I had to stabilize my camera) while providing sufficient depth of field (photo by Nina Munteanu)
I used a higher speed and smaller f-stop for these cardamom pods and seeds to create a shallower depth of field that focuses attention on a particular aspect of interest and keeps the image from looking flat (photo by Nina Munteanu)
A medium f-stop allowed me to handhold my camera and capture a crisp shot of the person and sled but a motion-blurred shot of the dog, achieving a sense of motion in the shot (photo by Nina Munteanu)
I oriented my camera for a portrait (vs landscape) shot to showcase the height and gigantic size of these red cedars in Lighthouse Park, Vancouver, and ensured a person was in the shot for perspective (photo by Nina Munteanu)
I used a low f-stop (which at this shooting distance does not appreciably reduce depth of field) to achieve a high shutter speed in capturing the three divers off the cliff (photo of ocean cliff in BC by Nina Munteanu)
I used a high f-stop and stabilized camera to achieve a softer look to the moving water and also get higher depth of field to see both stationary foreground and background (photo by Nina Munteanu)

I’ve been on my journey for over a decade and I’m still learning. From my son, from others, from my own experiences. That’s the fun part, after all. It’s an adventure of discovery…

My Canon camera on its tripod (photo taken with tablet by Nina Munteanu)

NINA MUNTEANU is a Canadian ecologist / limnologist and novelist. She is co-editor of Europa SF and currently teaches writing courses at George Brown College and the University of Toronto. Visit www.ninamunteanu.ca for the latest on her books. Nina’s bilingual “La natura dell’acqua / The Way of Water” was published by Mincione Edizioni in Rome. Her non-fiction book “Water Is…” by Pixl Press (Vancouver) was selected by Margaret Atwood in the New York Times ‘Year in Reading’ and was chosen as the 2017 Summer Read by Water Canada. Her novel “A Diary in the Age of Water” was released by Inanna Publications (Toronto) in June 2020.