astrophotography, redux

This time of year, M57 (the ring nebula) is advantageously located in our night sky, so this is a good opportunity to discuss astrophotography.

The basic problem to be solved is that the objects are of very low intensity and are moving. Here’s a sample image:
[Image: DSC01207]

This is a 20-second exposure using our 800mm f/5.6 lens at ISO 2000. Because we know how fast the stars (apparently) move, 360 degrees in 24 hours, we can calculate how long the shutter can stay open before motion blur appears. For this lens and camera (pixel size = 6 microns, with a Bayer filter present), the maximum exposure time is about 1/4 second: within 1/4 second, the stars move less than a pixel. That’s suboptimal, to say the least. We could try to improve things with a faster lens and higher sensor gain, but our setup is already close to the limit of what is commercially available.
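To make that calculation concrete, here is a quick back-of-the-envelope sketch (the sidereal rate and small-angle approximation are standard; treating a 2×2 Bayer block as one color pixel is the simplification discussed further below):

```python
import math

# 800 mm lens, 6 micron photosites (from the discussion above)
focal_length_m = 0.8
pixel_m = 6e-6

# Apparent stellar motion: 360 degrees per sidereal day (~86164 s)
sidereal_rate = 360.0 * 3600 / 86164.1          # ~15.04 arcsec/s

# Angle subtended by one pixel (small-angle approximation)
pixel_scale = math.degrees(pixel_m / focal_length_m) * 3600   # ~1.5 arcsec

# Time for a star near the celestial equator to cross one pixel
t_max = pixel_scale / sidereal_rate
print(f"pixel scale: {pixel_scale:.2f} arcsec; max exposure: {t_max:.2f} s")
# Treating a 2x2 Bayer block as one color pixel doubles this, ~1/4 second.
```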

The solution is to use a ‘tracking mount’. There are lots of different designs; ours is a ‘German equatorial mount’. The basic procedure is very simple: align the polar axis of the mount to the north celestial pole (near Polaris) and turn on the motors. Once aligned, the two motors correspond to declination (like latitude) and right ascension (like longitude). The mount then essentially ‘unwraps’ the Earth’s rotation, ensuring the telescope remains pointed at the same part of the night sky. This is also a 20-second exposure, but taken with the tracking mount aligned:

[Image: DSC01121]

Much better! The final step is to take many of these images and average them together (‘image stacking’).
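As a sketch of what stacking means in practice (the filenames, frame count, and use of the imageio library are illustrative assumptions, and the frames must already be registered to each other):

```python
import numpy as np
import imageio.v3 as iio

# Hypothetical filenames: 87 frames x 20 s each ~ 29 minutes total
frames = [iio.imread(f"frame_{i:03d}.tif").astype(np.float64)
          for i in range(87)]

# Averaging suppresses random noise by roughly 1/sqrt(N)
stacked = np.mean(frames, axis=0)
iio.imwrite("stacked.tif", stacked.astype(np.uint16))
```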

Naturally, life is not as simple as that. Most images look like this:
[Image: DSC01108]

What’s the deal? There are lots of reasons why this image still has motion blur (vibrations, polar misalignment, gear error, etc.), and it’s illuminating to calculate acceptable limits. First, let’s dispense with the pixel-size issue: the sensor dimensions give a single ‘pixel’ that is 6 microns on a side. However, in order to generate color images, a Bayer filter is placed over the pixel array, so that neighboring pixels are assigned different colors (and detect slightly different parts of the object). A detailed analysis is highly complicated (three independent, non-commensurate samplings of the image plane), but if our image has no features smaller than, say, 12 microns (corresponding to a 2×2 pixel block), software interpolation that generates a color pixel from the neighboring elements will likely give an accurate result, and we can pretend that our sensor has 6-micron color pixels.
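The “pretend each 2×2 Bayer block is one color pixel” simplification can be sketched in a few lines (an RGGB layout and even sensor dimensions are assumed here; real demosaicing software interpolates each channel rather than binning):

```python
import numpy as np

def bin_bayer_rggb(raw):
    """raw: 2-D array of sensor values in an RGGB mosaic (even dimensions)."""
    r  = raw[0::2, 0::2]            # red samples
    g1 = raw[0::2, 1::2]            # green samples, even rows
    g2 = raw[1::2, 0::2]            # green samples, odd rows
    b  = raw[1::2, 1::2]            # blue samples
    g = 0.5 * (g1 + g2)             # average the two green samples
    # Half-resolution color image: each 2x2 block becomes one RGB pixel
    return np.stack([r, g, b], axis=-1)
```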

And in fact, examining our ‘best’ single image, stars are imaged as bright blobs about 3 pixels in radius (and brighter stars appear as even bigger blobs).

Ok, so how much can the sensor move without causing motion blur? The stringent limit is that the sensor (or the image projected onto it) must move less than 0.5 pixel (3 microns) during an exposure. If the lever arm of the lens is 0.5m, the allowed angular displacement is 1.2 arcsec; in terms of vibration, that is a very stringent requirement! Similarly, we can calculate the maximum allowed polar misalignment: if the telescope pointing may drift no more than 0.5 pixel during an exposure, and each pixel subtends about 1 arcsec (for diffraction-limited performance with this lens), the allowed misalignment is about 6 arcmin (http://canburytech.net/DriftAlign/DriftAlign_1.html is a good reference).
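The vibration tolerance is simple arithmetic (the 0.5 m lever arm and half-pixel criterion come from the discussion above):

```python
import math

ARCSEC = math.pi / (180 * 3600)     # one arcsecond, in radians

half_pixel_m = 3e-6                 # 0.5 pixel of 6 microns
lever_arm_m = 0.5

# Small-angle approximation: tilt = displacement / lever arm
tilt = half_pixel_m / lever_arm_m
print(f"allowed tilt: {tilt / ARCSEC:.1f} arcsec")   # ~1.2 arcsec
```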

Speaking of diffraction-limited performance, what is the limit of our system? Ideally, each star would be imaged as nearly a single pixel! Clearly, there is image degradation not just from movement but from *seeing*: clear-air turbulence that appears as blur in long exposures. How much degradation? Our “best” images correspond to using a lens at f/30, an entrance pupil diameter of 27mm (instead of f/5.6, with a 140 mm entrance pupil). The seeing conditions in Cleveland are *awful*!
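We can estimate that effective f-number by solving the Airy-disk relation for the f-number whose diffraction blur matches our ~3-pixel-radius star images (the 500 nm wavelength is an assumed mid-visible value):

```python
# Airy disk radius (first minimum) = 1.22 * wavelength * f-number
blur_radius_m = 3 * 6e-6            # 3 pixels of 6 microns = 18 microns
wavelength_m = 500e-9               # assumed mid-visible wavelength

N = blur_radius_m / (1.22 * wavelength_m)      # effective f-number
print(f"effective f-number: f/{N:.0f}; pupil: {800 / N:.0f} mm")
# -> roughly f/30 and a ~27 mm effective pupil, as quoted above
```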

So why do astrophotography? Our images are not meant to compete with ‘professional’ telescope images; rather, it’s a rewarding way to learn about the night sky and refine our imaging technique. Here’s the result of stacking enough ‘best’ 20-second exposures to produce a single 29-minute effective exposure:

[Image: 29m Composite crop]

Not bad! And we can continue to improve the image, for example by ‘dithering’ the individual frames to allow sub-pixel features to emerge (a sketch of this follows the image below):

[Image: 29m_2x Composite (RGB)]
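Here is a toy sketch of the dithering idea, in the spirit of ‘drizzle’-style reconstruction: upsample each frame, undo its known sub-pixel offset, and average. The function, the 2× factor, and the assumption that each frame’s offset is known are all illustrative; this is not the exact software used here.

```python
import numpy as np
from scipy.ndimage import shift, zoom

def dither_stack(frames, offsets, factor=2):
    """frames: list of 2-D arrays; offsets: per-frame (dy, dx) in pixels."""
    acc = None
    for frame, (dy, dx) in zip(frames, offsets):
        up = zoom(frame, factor, order=1)             # 2x upsample
        up = shift(up, (-dy * factor, -dx * factor))  # undo the dither offset
        acc = up if acc is None else acc + up
    return acc / len(frames)                          # sub-pixel detail emerges
```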

Or we can deconvolve the final image, using a 3-pixel-radius Gaussian blob as the point-spread function:

[Image: deconvolved 29m crop]
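A minimal deconvolution sketch, using the 3-pixel-radius Gaussian PSF mentioned above (Richardson–Lucy via scikit-image is an assumed tool choice, not necessarily what produced this image):

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma=3.0, size=15):
    """Treat the 3-pixel 'radius' as the Gaussian sigma (an approximation)."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()          # normalize so flux is conserved

# image should be a float array scaled to [0, 1]:
# sharpened = richardson_lucy(image, gaussian_psf(), num_iter=30)
```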

The image improvements may not appear that significant, but as always, the rule of post-processing is *subtle* improvement: no artifacts may be introduced.

Posted July 17, 2013 by resnicklab in Physics, pic of the moment, Science
