Humans and cameras see the world differently
Often a scene that looked attractive to our eyes turns out completely unpresentable in the photo – with a washed-out, glaring sky, with black pits where the shadows were, with surreal color casts. What is the reason? Why can't the camera simply capture the scene as it is? It actually tries, to the best of its modest abilities. The problem is that we ourselves never see the world as it really is. Our eyes and brain do tremendous work so that we can admire the reality around us. The camera cannot do this, so you have to think for it, performing non-obvious and not always intuitive manipulations to get images that look natural.
Central and peripheral vision
The portion of the field of view that resolves fine detail is very small – about three degrees. You can verify this by fixing your eyes on one letter in this text and trying to make out the surrounding letters without moving your eyes. As you move away from the center, you rapidly lose the ability to distinguish small details. Peripheral vision is very sensitive to movement, but not to detail. To build a detailed image, the eye constantly scans the scene, sending the brain information about individual fragments moment by moment; after each fragment is processed separately, the whole picture is assembled. The camera records the entire scene as is, without caring that different fragments of the scene carry different meaning, need different color and brightness correction, and, in essence, should be photographed in completely different ways. Hence all the problems.
When the eye shifts between light and dark parts of the scene, the pupil changes its diameter, narrowing on bright objects and dilating in the shadows, thereby regulating the amount of light reaching the retina. In addition, retinal receptors can vary their sensitivity depending on the intensity of the light. As a result, we can distinguish detail in both highlights and shadows, adapting to conditions of high contrast. The camera exposes the entire scene with one pre-set aperture, shutter speed, and ISO, and therefore cannot accommodate the range of illumination in a high-contrast scene. The solution: avoid scenes whose contrast exceeds the dynamic range of your camera. If the contrast is high, try softening it with a reflector or fill flash to lighten the shadows slightly. If you cannot influence the lighting and are forced to sacrifice either the light or the dark parts of the scene, sacrifice the shadows. We are better adapted to perceiving detail in highlights, so black shadows look far less unnatural than flat, washed-out highlights. Finally, if you are not lazy (and I usually am), nothing forbids using the HDR (High Dynamic Range) technique: make several exposures of the same scene, expose separately for the dark and light areas, and then combine them into one image in a graphics editor.
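As a rough illustration of "does the contrast fit the dynamic range", scene contrast is usually measured in stops (EV), where each stop is a doubling of luminance. The function below is a hypothetical sketch (its name and the example luminance values are my own, not part of any camera's specification):

```python
import math

def scene_contrast_stops(brightest: float, darkest: float) -> float:
    """Scene contrast in photographic stops (EV). Each stop is a
    doubling of luminance, so contrast = log2(brightest / darkest)."""
    return math.log2(brightest / darkest)

# A sunlit sky can easily be on the order of 1000x brighter than
# deep shade in the same frame:
contrast = scene_contrast_stops(1000.0, 1.0)  # roughly 10 stops
# If this exceeds the camera's dynamic range, detail must be
# sacrificed at one end -- preferably in the shadows.
```

If you meter the brightest and darkest areas with a spot meter, the same arithmetic tells you in advance whether a single exposure can hold the whole scene.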
This doesn't look like a sunset: the sky is washed out by overexposure. Let's underexpose the next shot slightly.
Better. The sky now looks the way it did in real life, but the meadow in the foreground has drowned in darkness. What a mess.
Combining the two previous pictures, we get a realistic image.
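The blending step can be sketched in a few lines of Python. This is a toy grayscale example under my own assumptions (pixel values normalized to [0, 1], a simple luminance mask); real HDR software uses far more sophisticated weighting and tone mapping:

```python
import numpy as np

def blend_exposures(under: np.ndarray, over: np.ndarray) -> np.ndarray:
    """Blend an underexposed frame (good highlights, e.g. the sky)
    with an overexposed frame (good shadows, e.g. the meadow).
    Both are float arrays with values in [0, 1]."""
    # Use the underexposed frame's brightness as a soft mask:
    # bright regions come from `under`, dark regions from `over`.
    mask = np.clip(under * 2.0, 0.0, 1.0)
    return mask * under + (1.0 - mask) * over

# Two toy pixels: a bright sky pixel and a deep-shadow pixel.
under = np.array([0.9, 0.05])   # sky kept, shadows crushed
over  = np.array([1.0, 0.40])   # sky clipped, shadows open
result = blend_exposures(under, over)
```

For the bright pixel the mask saturates at 1.0 and the underexposed value wins; for the dark pixel the mask is small and the result comes mostly from the overexposed frame.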
The next interesting feature of human vision is its selectivity. We see what interests us and ignore details that are insignificant to us. Spotting a worthy subject, say a flowering spring tree, the photographer points the camera at it and presses the shutter. Later, reviewing the picture at home, he is disappointed to discover that behind the tree loom dull, decidedly non-blooming buildings, a dustbin shelters under its branches, and high-voltage wires cross the cloudless blue sky. I am exaggerating, but you get the essence of the problem. What to do? Watch carefully for clutter in the frame and try to eliminate all unwanted objects. Pay special attention to the corners of the frame – something extra often lurks there. The more attentive you are at the moment of shooting, the less time you will spend editing the picture afterwards.
Humans have binocular vision. Having two eyes allows us to estimate distances to objects in the three-dimensional world. A photograph is a flat interpretation of an originally three-dimensional scene. A camera (unless, of course, it is designed for stereo shooting) produces a flat, two-dimensional picture, and not every three-dimensional scene preserves its sense of volume and depth when projected onto a plane. You can check this even before shooting by closing one eye and looking at the scene the way your camera will.