Sampling Rate and Resolution

Pixel Size, Resolution and Sampling

The ‘sampling rate’ of an imaging system often causes confusion and is even more likely to spark a heated debate!

So what is ‘sampling rate’? To understand sampling rate, we first need to consider resolution.

Imagine that you have a telescope with a focal length of 1,000mm and you wish to image a deep sky object using a digital camera whose sensor has pixels 5 microns square. The theoretical resolution of the system in arcseconds per pixel can be found using the following calculation:

Resolution (arcseconds per pixel) = (CCD Pixel Size in microns / Telescope Focal Length in mm) * 206.265

In our example, Resolution = (5 / 1000) * 206.265 = 1.03 arcseconds/pixel.
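
If you prefer to let the computer do the arithmetic, the formula is trivial to wrap in a few lines of code. The following is a minimal sketch in Python; the function and variable names are purely illustrative:

    def resolution_arcsec_per_pixel(pixel_size_um, focal_length_mm):
        # Image scale in arcseconds per pixel.
        # pixel_size_um   : sensor pixel size in microns
        # focal_length_mm : telescope focal length in millimetres
        # 206.265 converts the micron/millimetre ratio into arcseconds.
        return (pixel_size_um / focal_length_mm) * 206.265

    # The example from the text: 5 micron pixels on a 1,000mm focal length telescope.
    print(resolution_arcsec_per_pixel(5, 1000))   # approximately 1.03 arcseconds/pixel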

Now, notice the use of the word ‘theoretical’ above. One of the biggest influences on image resolution is not the instrument in use but the atmosphere, and in particular the ‘seeing’ (the steadiness of the atmosphere). Poor seeing results in loss of detail, particularly when imaging the Moon and planets, where close-up views of these bright objects show them shimmering and pulsating quite alarmingly. Deep sky objects are not as badly affected as solar system objects, but the shimmering can reduce the detail captured over the long exposure times that are required. Typical seeing from a back garden location in the UK is between 3 and 4 arcseconds, although really good nights might reduce this figure to 2 to 3 arcseconds.

To capture fine detail in your images, your camera must record or ‘sample’ the light from the object at the appropriate resolution. If your sensor’s pixel size is larger than the detail focused onto it by the telescope then image detail will be lost. Conversely, if the sensor’s pixel size is a lot smaller than the detail you want to capture then you won’t gain anything extra. There is a narrow range of pixel sizes that will fit the bill but it is clear that the pixel size must at least be smaller than the detail you want to record.

Back in the 1920s, an engineer at Bell Laboratories called Harry Nyquist devised a theorem concerning the ‘digital sampling’ of electrical signals. In essence, Nyquist’s theorem suggests that to accurately reproduce an analogue signal in a digital format, the digital sampling rate must be at least twice the highest frequency present in the analogue signal. Applied to audio, for example, this means that if you wanted to record an analogue tone of 261.6Hz (the musical note Middle ‘C’) in a digital format, you would need to sample the signal at a rate of at least 523.2Hz.
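
Put into code, the rule is just a doubling. A trivial Python sketch, with an illustrative function name:

    def nyquist_sampling_rate_hz(highest_frequency_hz):
        # Minimum digital sampling rate: at least twice the highest
        # frequency present in the analogue signal.
        return 2 * highest_frequency_hz

    print(nyquist_sampling_rate_hz(261.6))   # 523.2Hz for Middle 'C'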

Applying Nyquist’s theorem to CCD imaging, to accurately reproduce the image focused by a telescope (the analogue signal) we need to digitally sample the image using our camera at a rate twice that of the signal frequency. This means that the resolution of the camera should be twice that of the telescope. However, a musical note or other sound is a two-dimensional signal comprising frequency and amplitude, whereas an image of a star is three-dimensional, comprising area (width x width) and amplitude (brightness). On top of this, the image of the star has already been modified by the optics, which take the original point source and produce a (good) approximation of the star as a ‘Gaussian curve’ (also known as a ‘bell curve’ because of its shape when plotted). Such a plot shows a bright centre with light intensity falling off towards the edges. Does Nyquist’s theorem still hold true for this type of signal? Many imagers work on this basis, and the Internet is littered with excellent images using this ‘standard’ sampling rate.

It is commonly thought that the best images are produced when star images are 2 pixels wide and, assuming typical 4 arcsecond seeing, this equates to a resolution of 2 arcseconds per pixel. This would mean that each pixel covers 2 arcseconds of sky and each star is covered by four pixels (a 2 x 2 block) to achieve good sampling. However, experimentation in ideal conditions indicates that perhaps a sampling rate of three times the analogue signal has measurable advantages, especially when the effects of image stacking are taken into consideration.
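
Turned around, the same numbers tell you what pixel size suits a given telescope and typical seeing. The sketch below assumes the ‘two pixels per star’ criterion described above, so the target image scale is simply the seeing divided by two (or by three for the finer sampling just mentioned); the function names are illustrative:

    def target_scale_arcsec(seeing_arcsec, samples_across_star=2):
        # 'Star spans two pixels' criterion: target image scale = seeing / 2.
        # Pass 3 for the finer sampling rate mentioned in the text.
        return seeing_arcsec / samples_across_star

    def pixel_size_for_scale_um(scale_arcsec_per_pixel, focal_length_mm):
        # Rearrangement of the earlier resolution formula:
        # pixel size (microns) = scale * focal length (mm) / 206.265
        return scale_arcsec_per_pixel * focal_length_mm / 206.265

    # Typical 4 arcsecond seeing with a 1,000mm focal length telescope.
    scale = target_scale_arcsec(4)                 # 2.0 arcseconds per pixel
    print(pixel_size_for_scale_um(scale, 1000))    # roughly 9.7 micron pixels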

Images captured at a sampling rate higher than optimal (using small pixels for example) are described as ‘oversampled’ and those captured at a sampling rate lower than optimal (using large pixels for example) are described as ‘undersampled’.
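
Expressed in code, the comparison is simply the actual image scale against whatever scale you regard as optimal. A rough sketch, with an arbitrary 10% tolerance and illustrative names:

    def sampling_description(actual_scale_arcsec, optimal_scale_arcsec, tolerance=0.1):
        # A larger arcseconds-per-pixel figure means coarser sampling (bigger pixels),
        # so a scale above the optimum is undersampled and one below it is oversampled.
        if actual_scale_arcsec > optimal_scale_arcsec * (1 + tolerance):
            return 'undersampled'
        if actual_scale_arcsec < optimal_scale_arcsec * (1 - tolerance):
            return 'oversampled'
        return 'well sampled'

    print(sampling_description(1.03, 2.0))   # oversampled (small pixels)
    print(sampling_description(3.50, 2.0))   # undersampled (large pixels)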

This brings us neatly to the real world of amateur astrophotography, where we cannot necessarily match our camera’s sensor exactly to the focal length of our telescope because there isn’t an infinite choice of equipment available to us, and other factors become more relevant. For example, if we require small pixels to match a given telescope, we may be able to find a camera sensor with the correct pixel size. But if we also require a wide field of view, we will need a large sensor which, combined with the small pixels, will result in a higher cost. Smaller pixels are also less sensitive than larger pixels, so this too might be an important consideration. Of course, you can modify the effective focal length of the telescope with either a focal reducer or an amplifier lens like a ‘Barlow’ to get closer to the optimal sampling rate.
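
To see how a reducer or Barlow changes the numbers, the effective focal length is just the native focal length multiplied by the accessory’s factor. A minimal sketch, reusing the earlier formula and assuming a simple multiplicative factor; the names and example values are illustrative:

    def effective_focal_length_mm(focal_length_mm, factor):
        # factor < 1 for a focal reducer (e.g. 0.63x), > 1 for a Barlow (e.g. 2x).
        return focal_length_mm * factor

    def resolution_arcsec_per_pixel(pixel_size_um, focal_length_mm):
        return (pixel_size_um / focal_length_mm) * 206.265

    # 5 micron pixels on a 1,000mm telescope fitted with a 0.63x focal reducer.
    reduced = effective_focal_length_mm(1000, 0.63)   # 630mm
    print(resolution_arcsec_per_pixel(5, reduced))    # roughly 1.64 arcseconds/pixel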

For deep sky imaging, if you aim for a sampling rate of between 1.5 and 3 arcseconds per pixel you can’t go far wrong. However, don’t let the sampling rate alone dictate your choice of camera and telescope – it is far more important that the combination allows you to capture the size and type of object that interests you most. A quick look at the wonderful images on the Internet will show that many successful imagers use non-optimal sampling rates!