DigitalPhotography.com



Technology

Digital cameras contain a wealth of technical features and devices - with new technology constantly appearing and making its way from professional equipment through enthusiasts' cameras and then into consumer devices. On this page you can find out how some of this technology works.

Sensors

A digital camera's sensor takes the place of the film in a film camera. The camera's lens projects an image of a scene onto the sensor, where it is converted into an electronic representation that can be stored in the camera's memory. In order to perform this conversion, the front-facing surface of the sensor is divided up into an array of tiny square pixels, each just a few micrometers across.

Every pixel acts like a kind of reservoir for storing electrical charge, which is generated by light falling on the pixel's surface. Before the start of an exposure all the reservoirs are emptied, so that they can fill up while the shutter is open, allowing light to fall on the sensor.
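
As a rough illustration of this reservoir behaviour, the following sketch (in Python) models each pixel as a charge well that fills in proportion to the light reaching it and clips at a maximum 'full well' capacity - the clipping is what produces blown-out highlights. All names and values here are invented for illustration, not taken from any real sensor.

    import numpy as np

    FULL_WELL = 50_000  # illustrative maximum charge (electrons) a pixel can hold

    def expose(photon_flux, exposure_time, quantum_efficiency=0.5):
        """Fill each pixel's charge reservoir for the duration of an exposure.

        photon_flux is a 2-D array of photons per second reaching each pixel.
        Returns the accumulated charge, clipped at the full-well capacity.
        """
        charge = photon_flux * exposure_time * quantum_efficiency
        return np.clip(charge, 0, FULL_WELL)

    # A toy scene: a very bright patch on a dim background.
    scene = np.full((4, 4), 1_000.0)     # photons per second per pixel
    scene[1:3, 1:3] = 2_000_000.0        # bright region that will overflow
    print(expose(scene, 0.1))            # the bright patch clips at FULL_WELL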

Once the shutter has closed, the charge that has built up in each pixel has to be measured and converted into a number for storing in an image file. The way in which this is achieved depends on the sensor technology - whether it is a CCD or CMOS device.

In a CMOS sensor, each pixel site has associated electronics that amplify and convert the pixel's charge to a signal which can be routed to a digitising circuit, frequently built into the same chip. CMOS sensors are produced using the same fabrication methods as most other electronic chips, so they are relatively cheap to manufacture and are consequently used in most low- to mid-range cameras. They have a disadvantage in that the extra electronics at each pixel take up space that is consequently unavailable for gathering light. Also, random variations in the characteristics of the amplifying circuitry at each pixel can add a certain level of noise to the image. However, advances in manufacturing mean that CMOS technology has now become more widely used in top-level camera sensors.
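
The noise contribution of the per-pixel amplifiers can be pictured by giving every pixel a slightly different gain. The sketch below is a toy model with made-up numbers, not a description of real readout electronics:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Every pixel holds exactly the same charge...
    charge = np.full((4, 4), 10_000.0)

    # ...but each pixel's amplifier has a slightly different gain,
    # modelling random variation in component characteristics.
    gain = rng.normal(loc=1.0, scale=0.02, size=charge.shape)

    digitised = np.round(charge * gain)  # amplify, then convert to a number
    print(digitised)                     # not uniform: a fixed pattern of noise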

In a CCD sensor, the pixel information is read out by shifting all the data vertically one row at a time into a separate storage row - this row is then shifted horizontally one pixel at a time into the circuitry that converts each pixel's charge into a digital value. Having read one complete row, the data is shifted vertically again, allowing the next row to be read out, and the process repeats until the whole sensor array has been digitised. CCD sensors have less circuitry at each pixel site and so can dedicate more surface area to collecting light than a CMOS sensor can, and the technology is also less prone to generating noise. However, fabrication is more costly than CMOS manufacture. CCDs are more commonly found in high-end cameras - they are also used in high-sensitivity video and CCTV devices due to their low noise performance at low light levels.
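
The row-by-row, pixel-by-pixel shifting can be mimicked in a few lines. The following is a conceptual model of the readout sequence only, not of any real CCD's electrical interface:

    import numpy as np

    def ccd_readout(sensor):
        """Conceptual model of CCD readout.

        One row is shifted vertically into a storage row, the storage row
        is shifted horizontally one pixel at a time into the digitiser,
        and the process repeats until every row has been read.
        """
        array = sensor.copy()
        digitised = []
        for _ in range(array.shape[0]):
            storage_row = array[0].copy()        # shift one row out vertically
            array = np.roll(array, -1, axis=0)   # remaining rows all move up one
            row_values = []
            for _ in range(storage_row.shape[0]):
                row_values.append(int(storage_row[0]))  # digitise one pixel
                storage_row = np.roll(storage_row, -1)  # shift the row along
            digitised.append(row_values)
        return np.array(digitised)

    sensor = np.arange(16).reshape(4, 4)
    assert (ccd_readout(sensor) == sensor).all()  # readout recovers the image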

The sensor pixels as described have no way to determine the colour of the light falling on them, so can only produce a monochrome image. In order to generate colour information, a grid of coloured filters (normally red, green and blue, in an arrangement known as a Bayer filter) is positioned on top of the sensor so that adjacent pixels are always filtered with a different colour. In this manner it is possible to reconstruct a lower-resolution image in each of the filtered colours, which can be reassembled to create a full-colour image. Some information is lost in the filtering and reconstruction, since each colour component is only recorded at a fraction of the total number of pixel sites, but for most scenes the camera software can make a pretty good attempt at reconstructing the missing data. Usually the camera will perform some sharpening to regenerate edges that may have been softened in the reconstruction, and this can occasionally lead to artefacts in the image.
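
A minimal sketch of the filtering and reconstruction, assuming an RGGB Bayer-style layout and the crudest possible interpolation (averaging whatever samples of a colour fall in each pixel's 3x3 neighbourhood). Real cameras use far more sophisticated reconstruction; this just shows the principle:

    import numpy as np

    def mosaic(rgb):
        """Sample a full-colour image through an RGGB colour filter grid."""
        h, w, _ = rgb.shape
        out = np.zeros((h, w))
        out[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even row, even column
        out[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
        out[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green
        out[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd row, odd column
        return out

    def reconstruct_channel(mosaic_img, mask):
        """Fill one colour's missing pixels by averaging nearby samples.

        mask marks the pixels where this colour was actually recorded.
        (Edges wrap around here - fine for an illustration only.)
        """
        samples = mosaic_img * mask
        total = np.zeros_like(samples)
        count = np.zeros_like(samples)
        for dy in (-1, 0, 1):                 # 3x3 neighbourhood average
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(samples, dy, 0), dx, 1)
                count += np.roll(np.roll(mask.astype(float), dy, 0), dx, 1)
        return total / np.maximum(count, 1)

    rgb = np.random.rand(8, 8, 3)             # a stand-in full-colour scene
    red_mask = np.zeros((8, 8), dtype=bool)
    red_mask[0::2, 0::2] = True               # where red was recorded
    red_full = reconstruct_channel(mosaic(rgb), red_mask)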

Autofocus

All digital cameras offer an autofocus feature, where the on-board firmware will somehow decide (or be told) where the photograph's subject lies in the image, and then adjust the focus so that point is the sharpest. Two main technologies are currently used for autofocusing: contrast detection and phase detection - both rely on analysis of the light coming through the camera lens, but the techniques have different strengths and weaknesses.

Contrast detection

Contrast detection is perhaps the more straightforward method to understand. In an out-of-focus image light that should have been concentrated in a single pixel is instead spread out over several pixels. A consequence of this is that the average difference in brightness between neighbouring pixels - the image contrast - increases as focusing approaches the optimum point. Electronics in the camera measure this contrast as the lens is swept through a range of focal positions - after the sweep the lens is returned to the location that produced maximum contrast.
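
In outline, the algorithm is a one-dimensional search for the lens position that maximises a contrast metric. The sketch below uses the mean absolute difference between horizontal neighbours as the metric; render_at_focus is a hypothetical stand-in for reading the sensor with the lens at a given position:

    import numpy as np

    def contrast(image):
        """Average brightness difference between horizontally adjacent pixels."""
        return np.abs(np.diff(image, axis=1)).mean()

    def contrast_detect_autofocus(render_at_focus, positions):
        """Sweep the lens through the positions, return the one with most contrast."""
        return max(positions, key=lambda p: contrast(render_at_focus(p)))

    # Toy demo: a striped scene, blurred more the further the lens is from
    # the (made-up) true focus position of 5.
    scene = np.tile([0.0, 1.0], 50).reshape(1, 100)

    def render_at_focus(position, true_focus=5):
        img = scene.copy()
        for _ in range(abs(position - true_focus)):   # crude defocus blur
            img = (img + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 3
        return img

    print(contrast_detect_autofocus(render_at_focus, range(11)))  # -> 5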

Phase detection

Phase detection is slightly harder to visualise. It relies on the camera having a lens with a fairly wide aperture, typically f/5.6 or wider. Two small image sensors are positioned so that they receive the light projected through two widely separated zones on the main lens. When the scene is in focus, both of these sensors receive the same image. However, if the lens focuses the scene behind the sensor plane, the leftmost sensor's image is shifted somewhat to the left, and the rightmost sensor's image is similarly shifted to the right. If the lens focuses the scene in front of the sensor plane, the shifts are reversed. Software in the camera compares the two sensor images to find how far they are shifted from each other and in which direction - this information immediately indicates which way the focus needs to be adjusted and by how much, so the lens can be rapidly set to the correct position for optimum focus.
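
The core of the comparison is finding the relative displacement that best aligns the two sub-images: the sign of that displacement gives the direction to drive the lens, its magnitude how far. A minimal one-dimensional sketch, with made-up signals:

    import numpy as np

    def detect_shift(left, right, max_shift=10):
        """Return the signed shift that best aligns the two sensor signals.

        Zero means in focus; the sign tells the camera which way to drive
        the lens, the magnitude how far - so focus can be corrected in a
        single step rather than by sweeping.
        """
        scores = [float(np.dot(left, np.roll(right, s)))
                  for s in range(-max_shift, max_shift + 1)]
        return int(np.argmax(scores)) - max_shift

    # Toy demo: the same scene feature, displaced oppositely on each sensor.
    signal = np.zeros(64)
    signal[30:34] = 1.0                # a bright feature in the scene
    left = np.roll(signal, -3)         # out of focus: the images shift apart
    right = np.roll(signal, +3)
    print(detect_shift(left, right))   # -> -6: 6 pixels apart, sign = direction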

Both systems require a lens with a fairly wide aperture, but not too wide - if the depth of field is too narrow then it may not be possible to position the lens accurately enough, but if the depth of field is large then there may be insufficient variation in sharpness for the electronics to discover the optimum focus position.

Phase detection is more commonly used in DSLRs which tend to have larger aperture lenses (needed to get enough separation of the two sub-images), as well as having space in the camera body for the optical equipment that separates out the views for the separate autofocus sensors. It has the advantage that accurate focusing can be very quick, since one or two measurements will indicate exactly how far the lens should be adjusted to achieve focus. A disadvantage of phase detection is that it can be 'fooled' by repeating patterns such as vertical or horizontal stripes, although special cross-shaped sensors can help in this area.

Contrast detection is a less expensive option to implement, since it can use the main imaging sensor as the focusing detector. However, to home in on the optimum focal position the lens needs to be swept through its range of positions, with measurements being made continuously - this can take some time.

Image Stabilisation

Blurred images caused by camera shake are often a problem with handheld shots taken in dim light or at long focal lengths (zoom). Dim light causes problems because it requires a longer exposure duration (or boosting of ISO with resulting noise issues), and any camera movement during the exposure will blur the image. Long focal lengths are a problem because small movements of the camera are magnified by the telephoto lens, meaning that the image is again easily blurred.

These problems can be addressed to an extent with various technological fixes. Most require the camera to include some kind of motion sensor, such as a gyroscope or accelerometer, to detect the degree of camera shake at any particular instant.

Conceptually (and mechanically) the simplest method requires the camera to analyse its own shaking, and to delay the actual opening of the shutter until the camera's movement is at a minimum. This method has the advantage of being relatively inexpensive to implement, but has the drawback that it is not possible to determine in advance whether the camera will shake during the exposure, only that it is relatively still at the start of it. Once the shutter has opened, any subsequent shaking will blur the image.
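
As a sketch, the firmware loop for this method might look like the following. read_motion, the threshold and the timeout are all hypothetical; the timeout exists so that a shot is never lost entirely while waiting for a lull:

    import time

    def wait_for_lull(read_motion, threshold=0.05, timeout=1.0, interval=0.002):
        """Delay opening the shutter until camera movement is at a minimum.

        read_motion() returns the current shake magnitude from the motion
        sensor (hypothetical interface). Returns True if a quiet moment was
        found before the timeout, False if we gave up and fired anyway.

        Note the method's inherent weakness: it only knows the camera is
        still when the exposure starts, not that it will stay still.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if read_motion() < threshold:
                return True           # quiet enough - open the shutter now
            time.sleep(interval)      # otherwise sample the sensor again soon
        return False                  # timed out: better to fire than to miss

    # Example with a fake, randomly jittering motion sensor.
    import random
    print(wait_for_lull(lambda: abs(random.gauss(0.0, 0.05))))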

Another technique that is easy to understand is for the camera to take a sequence of images when the shutter is pressed, to analyse them for sharpness, and to keep only the best one. This requires no motion sensor, but will not work if flash is required, since there will not be time to recharge the flash between frames - and for short-lived events the delay before the chosen image is taken may mean that the intended shot was missed.
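
The selection step only needs a sharpness score for each frame. The sketch below uses the variance of neighbouring-pixel differences as a stand-in metric (a blurred frame has gentler transitions, so a lower score); real firmware may score frames quite differently:

    import numpy as np

    def sharpness(image):
        """Score a frame: variance of differences between horizontal neighbours."""
        return np.diff(image, axis=1).var()

    def pick_sharpest(burst):
        """Keep only the sharpest frame from the burst."""
        return max(burst, key=sharpness)

    # Toy demo: a crisp frame beats a smoothed copy of itself.
    crisp = np.random.rand(32, 32)
    blurred = (crisp + np.roll(crisp, 1, axis=1) + np.roll(crisp, -1, axis=1)) / 3
    assert pick_sharpest([blurred, crisp]) is crisp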

Some camera systems incorporate special moveable elements within the lens system. In the same way that a spoon in a glass of water appears to be broken when viewed from the side, a tilted sheet of glass can appear to shift the scene behind it. If such a sheet is placed in the path of the light coming through the lens, and arranged to be tilted under electronic control, then it is possible for the camera to compensate for any image movement that would be caused by camera shake, by tilting the glass in the opposite direction.

For this to work, the camera needs to know the focal length of the lens, so as to be able to calculate the image shift caused by a given amount of shake. It then needs to know how much compensatory shift is introduced by a given rotation of the special element. It also has to be able to move the element very rapidly and accurately, and to measure the camera's shaking at a high rate. Since the moveable element is contained within the lens body, DSLRs incorporating this technique require the element and associated drive motors to be incorporated into every interchangeable lens, which can add to the expense of a new lens.
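
The geometry behind these calculations is simple: a shake of angle θ rotates the camera, which moves the image on the sensor by roughly f·tan θ for a lens of focal length f. The sketch below shows the two steps; the calibration constant for the moveable element is invented for illustration:

    import math

    def image_shift_mm(focal_length_mm, shake_angle_rad):
        """Image displacement on the sensor caused by an angular shake.

        A rotation of theta radians shifts the image by approximately
        focal_length * tan(theta) - note how longer lenses magnify shake.
        """
        return focal_length_mm * math.tan(shake_angle_rad)

    def element_tilt_rad(shift_mm, shift_per_radian_mm):
        """Tilt of the compensating element needed to cancel a given shift.

        shift_per_radian_mm is a (hypothetical) calibration constant of the
        optical element: how far it moves the image per radian of tilt.
        The minus sign tilts it in the opposite direction to the shake.
        """
        return -shift_mm / shift_per_radian_mm

    # Example: 0.1 degree of shake on a 200 mm telephoto lens.
    shift = image_shift_mm(200.0, math.radians(0.1))     # about 0.35 mm
    print(element_tilt_rad(shift, shift_per_radian_mm=50.0))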

A technique which does not require additional complication in the lenses is to move the camera's sensor itself, in such a way as to keep the same area of the sensor under the same (moving) part of the image projected onto it by the lens. Again, the camera needs to know the focal length currently in use, and has to sample the camera shake at a high rate. It also has to be able to move the relatively heavy sensor rapidly and accurately from side to side without moving it towards or away from the lens, since that would throw the focus off. However, this system is contained entirely within the camera body, so lenses need no expensive moveable elements and motors.
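
Sensor-shift stabilisation relies on the same f·tan θ geometry, except that the computed displacement is applied to the sensor itself so that it tracks the moving image. A minimal sketch, again with the interface invented for illustration:

    import math

    def sensor_offset_mm(focal_length_mm, shake_angle_rad):
        """How far to slide the sensor so it stays under the shifted image.

        The sensor moves by the same amount the image moved, in the same
        direction - purely side-to-side, never towards or away from the
        lens, which would upset the focus.
        """
        return focal_length_mm * math.tan(shake_angle_rad)

    print(sensor_offset_mm(200.0, math.radians(0.1)))    # about 0.35 mm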

In all cases, stabilisation should only be enabled when appropriate. If the camera is mounted on a tripod, or even placed on a stable surface, then there is little chance of camera shake and any adjustment made by the active electronics is liable to make the image worse rather than better. In addition, the active circuitry will be a drain on the camera's battery. It is also probably a good idea to disable stabilisation when panning the camera to shoot a moving object - in this case you do not want the camera to be trying to compensate for the movement you are deliberately creating.