Copyright © 2007-2011 by Zack Smith.
All rights reserved.
The purpose of this web page is to explain the concept of sensor element size and to provide some examples from actual digital cameras, so that you can get an idea of which digital cameras are more likely to offer the best image quality.
Introduction

A digital camera sensor is what senses an image's light and converts it into an electronic image. It consists of an array of millions of pixel sensors. Typically each pixel sensor is made up of sensors for component colors, normally red, green, and blue. These sensors are usually adjacent to each other, but in some cameras they are stacked on top of one another.
What really matters when determining digital camera image quality ("IQ") is not how many megapixels the camera has, but rather the size of each of these microscopic pixels. In this article, I measure pixel size in square micrometers (µm²).
The size of a pixel directly impacts how much noise an image will have in low light, and in some cases even in daylight. The bigger the pixel is, the lower the noise because more photons can reach a bigger pixel sensor.
To illustrate, suppose you have two cameras that both have a sensor that is 1/1.8" in size (a designation that expresses the sensor size as a single number, since the numerator is always 1), but different numbers of pixels.
Therefore it initially appears that the camera with fewer megapixels, and therefore larger pixels, will have better image quality with less noise.
This is a good reason to be skeptical about the marketing hype and salespeople who claim or imply that it is better to purchase higher megapixel cameras.
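To make the comparison concrete, here is a minimal Python sketch. The 6 MP and 10 MP pixel counts are hypothetical, and the sensor area comes from the 1/X formula worked out later on this page.

```python
# Two hypothetical cameras sharing a 1/1.8" sensor, with different
# pixel counts. Sensor area uses this page's formula: 137,630,000 / X².
sensor_area_um2 = 137_630_000 / 1.8 ** 2   # ~42.5 million µm²

pixel_area_6mp = sensor_area_um2 / 6_000_000    # ~7.08 µm² per pixel
pixel_area_10mp = sensor_area_um2 / 10_000_000  # ~4.25 µm² per pixel

print(pixel_area_6mp, pixel_area_10mp)
```

Each pixel of the hypothetical 6 MP camera collects roughly 1.7 times the light of the 10 MP camera's, all else being equal.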
Of course, pixel size is not the sole determinant of image quality; there are other factors as well.
Pixels

Now let's talk about the geometry of the sensors themselves. A sensor's overall width and height typically have a 4:3 ratio. Each pixel in a sensor has 4 color-specific sensor elements -- two green elements, one blue, and one red -- placed adjacently (except in the Foveon sensor, where they are stacked). Pixels and color-specific sensors are square.
1/X designation

For cheaper cameras, the size of the sensor is always expressed as 1/X", where X is a number that varies by sensor.
For expensive cameras, manufacturers instead usually give you the width and height in millimeters, or the name of the sensor size, e.g. APS-C.
The 1/X" format is an old way of describing sensor sizes, devised for Vidicon television camera tubes. It means that the diagonal of the 4:3 sensor is 1/X inches, times two thirds. The reasons behind this arcane standard are best left to the history books, and not bothered with here.
Determining pixel dimensions from sensor width & height

If you are told the actual dimensions of the sensor, determining pixel area is simple:
Area of entire sensor (in mm²) = width in mm * height in mm
Area of entire sensor (in µm²) = 1,000,000 * area in mm²
Area of one pixel = area of sensor in µm² / # pixels
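The three steps above can be collected into a small Python helper. The APS-C-like dimensions and pixel count in the example are illustrative, not a specific camera's.

```python
def pixel_area_um2(width_mm, height_mm, total_pixels):
    """Per-pixel area in µm², following the three steps above."""
    sensor_area_mm2 = width_mm * height_mm          # step 1
    sensor_area_um2 = 1_000_000 * sensor_area_mm2   # step 2
    return sensor_area_um2 / total_pixels           # step 3

# Example: a roughly APS-C-sized sensor of 23.6 mm x 15.8 mm with
# 10.3 million actual pixels (illustrative numbers).
print(pixel_area_um2(23.6, 15.8, 10_300_000))  # ~36.2 µm² per pixel
```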
Determining pixel dimensions from 1/X size

While often it is possible to learn the width and height of a sensor from a company's camera manual or specification sheet, sometimes all you can get quickly is the 1/X" value.
To determine 4:3 sensor width and height from 1/X", let's solve this equation:

(1/X") * 0.667 = sqrt( (4a)² + (3a)² )

Or simply...

0.444 / X² = 16a² + 9a²

And from this we get...

0.444 / X² = 25a²

And then this...

a = sqrt( 0.444 / (25X²) )

And like so...

a = 0.667 / 5X

And finally...

Width = 4a = 4 * 0.667 / 5X
Height = 3a = 3 * 0.667 / 5X

...but we need the total area, too:

Area = width * height = 0.21333 / X²

Plus we need to convert to metric!

Width in micrometers = 25,400 * width in inches
Height in micrometers = 25,400 * height in inches
Area in µm² = 645,160,000 * area in square inches

So, the final equation is:

Area of the entire sensor in µm² = 137,630,000 / X²

The area occupied by one pixel is:

Pixel area = area of sensor in µm² / Y
...where Y = total pixels
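The final equation and the pixel division can be combined into a short Python sketch; the 1/1.8" and 10.3 MP figures below are hypothetical.

```python
def sensor_area_um2_from_x(x):
    """Area in µm² of a 4:3 sensor given its 1/X-inch designation."""
    return 137_630_000 / x ** 2

def pixel_area_um2(x, actual_pixels):
    """Per-pixel area in µm²; use the 'actual' (higher) pixel count."""
    return sensor_area_um2_from_x(x) / actual_pixels

# A hypothetical 1/1.8" camera with 10.3 million actual pixels:
print(pixel_area_um2(1.8, 10_300_000))  # ~4.12 µm² per pixel
```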
Camera manufacturers typically provide two numbers for the total pixels in the camera, e.g. 10 MP effective and 10.3 MP actual. You have to use the actual number of pixels in the equation above, i.e. the higher of the two numbers usually specified.
Point and shoot pixel area values
Special case of a 16:9 Panasonic camera

The Panasonic Lumix DMC-LX2 is the same camera as the Leica D-LUX 3. Both manufacturers claim the sensor, which has the "widescreen" 16:9 format, can be thought of as 1/1.65", even though that standard was devised for old-time television cameras. I was unable to find the manual online to get the real dimensions. The Panasonic version of this camera generally does not review well, perhaps because the odd-dimensioned sensor is a new and unusual design that has not been refined yet.
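The derivation above assumes a 4:3 sensor, but the same algebra works for any aspect ratio by replacing the 4:3 diagonal with, say, a 16:9 one. Since the LX2's real dimensions are unknown, the sketch below is purely hypothetical.

```python
import math

def sensor_area_um2(x, aspect_w=4, aspect_h=3):
    """Sensor area in µm² from a 1/X-inch designation, generalized
    to an arbitrary aspect ratio (defaults to the 4:3 case above)."""
    diagonal_in = (2 / 3) / x                        # usable diagonal, inches
    a = diagonal_in / math.hypot(aspect_w, aspect_h)  # grid unit, inches
    area_in2 = (aspect_w * a) * (aspect_h * a)
    return 645_160_000 * area_in2                    # in² -> µm²

# 4:3 sanity check against this page's constant:
print(sensor_area_um2(1.0))        # ~137,630,000 µm²

# Hypothetical 16:9 sensor designated 1/1.65":
print(sensor_area_um2(1.65, 16, 9))  # ~45,000,000 µm²
```

Note that a 16:9 sensor with the same diagonal designation has a smaller area than its 4:3 counterpart (about 45 vs. 50.6 million µm² here), so its pixels are smaller still.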
Bayer digital SLR pixel area values
Foveon digital SLR pixel area values

Foveon sensors have some natural advantages over the Bayer format. Whereas a Bayer pixel is like a mosaic, with its area divided into two green sensors (½ the area) plus a red sensor and a blue sensor (¼ the area each), the Foveon's entire pixel area is available to each color because the color sensors are stacked.
Therefore Foveon color sensors receive more light than Bayer elements do given the same area. Or to put that differently, a Bayer sensor would have to be larger than a Foveon for its pixel sensors to receive the same amount of light.
In addition, because all sensor colors are at one photosite, Foveon sensors naturally produce sharper images than Bayer sensors, even Bayer sensors with twice as many pixels.
There is a wrinkle in the stacking approach, however. The sensor is made of silicon, which is only translucent; in other words, it passes some light but absorbs some as well. Consequently, the deeper into the silicon you go, the fewer photons arrive. Furthermore, different wavelengths penetrate to different depths. Blue penetrates silicon the shallowest, so the blue detector is the top and thinnest layer. Green penetrates deeper but not as deeply as red, so green is the next layer below blue, and red is the deepest and thickest layer.
My experiments in infrared photography using the Sigma DP1 showed that the Foveon sensor is highly sensitive to these long wavelengths despite fewer photons reaching the red layer.
As far as I know, the DP1 and DP2 use the same sensor; however, there are varying reports as to how many pixels each has. So here are two calculations: