Archive for the ‘astrophotography’ Tag

Back to school….

Summer’s over, school is back in session.

 

We had a fairly productive summer: a paper was accepted, we have some encouraging results transfecting our cell line with a GFP-tubulin construct, and we have started to commercialize our Tissue Interrogator.  This picture has been featured in Physics Today and seems to be getting people's attention:

[Image: the picture featured in Physics Today]

Stay tuned for developments on these and other projects.

Summer vacation was also productive- I have been waiting 3 years for skies clear enough to make these images:

[Image: star trails]

These are 'star trails': just leave the shutter open for a looooong time, and the stars trace out circular arcs about the celestial pole due to the Earth's rotation.  See Polaris in the lower left? It's not located *exactly* on the axis of rotation.  The bright 'dash' in the lower right is an Air Force jet hitting its afterburner.

Alternatively, by stitching together multiple fields of view, we have the entire Milky Way (warning: the full-size image is *large*, 14k x 4k pixels):

[Image: Milky Way panorama]

Finally, a smaller region of the Milky Way, featuring several Messier objects:

[Image: Milky Way region with Messier objects]

 

Posted August 28, 2014 by resnicklab in Physics, pic of the moment, Science


Astrophotography redux times two

It started as a simple question: “What is the faintest star we can image with our equipment?” Figuring out the answer turned out to be delightfully complicated.

The basics are straightforward: since the apparent magnitude of a star is a (logarithmic) measure of the amount of light striking the camera sensor, all we need to do is figure out how much light is needed to generate a barely-detectable signal.

Sky & Telescope magazine published a few articles in 1989 and 1994 under the title “What’s the faintest star you can see?”, and while much of that discussion is still valid, the question was posed pre-digital imaging, and so the results reflect the natural variability in human vision. It would seem that digital sensors, with their well-defined specifications, would easily provide an unambiguous way to answer that question.

Answering the question seems simple enough- begin by relating the apparent magnitudes of two stars to the ratio of "brightness" of the two stars: m1 – m2 = -2.5 log10(b1/b2). So, if you have two stars (or other celestial objects), for example the noonday sun (m1 = -27) and full moon (m2 = -13), the difference in magnitudes tells us that the noonday sun is about 400,000 times brighter than the full moon. Going further, the full moon is roughly 30,000,000 times as bright as a magnitude 6 star (the typical limit of unaided human vision), which in turn is 2.5 times as bright as a magnitude 7 star, etc. etc. But that doesn't really tell us how much light is hitting the sensor, only an amount of light that is relative to some other amount.
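As a quick sanity check, here is a minimal Python sketch of that relation (the function name brightness_ratio is just for illustration):

    def brightness_ratio(m1, m2):
        """Ratio b1/b2 implied by m1 - m2 = -2.5 log10(b1/b2)."""
        return 10 ** (-(m1 - m2) / 2.5)

    # Noonday sun vs. full moon: roughly 400,000x brighter
    print(brightness_ratio(-26.74, -12.74))   # ~4.0e5

    # Full moon vs. a magnitude 6 star: roughly 30 million times brighter
    print(brightness_ratio(-12.74, 6))        # ~3.1e7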

Unfortunately, astronomers have their own words for optical things. What astronomers call ‘brightness’, physicists call ‘irradiance’. In radiometry (actually, photometry), ‘brightness’ means something completely different- it is a subjective judgment about photometric (eye-response weighted) qualities of luminous objects.

In any case, we have a way to make the 'relative brightness' scale into a real, measurable quantity. What we do is 'standardize' the scale to a convenient reference irradiance- something that stays constant over repeated measurements, is easily replicated, and is commonly available- for example, the irradiance of the noonday sun, 1 kW/m².

So now we have our apparent magnitude scale (say: sun, moon, mag 6, mag 12, mag 16) = (-27, -13, 6, 12, 16) and the corresponding relative brightness scale (sun:moon, moon:mag 6, mag 6:mag 12, mag 12:mag 16), which we make absolute by standardizing to the solar irradiance of 1 kW/m²- the full moon, for example, then comes out to about 3×10⁻³ W/m². In table form:

magnitude   rel. brightness   irradiance [W/m²]
-26.74      1.00E+00          1.00E+03
-12.74      2.51E-06          2.51E-03
  6         8.02E-14          8.02E-11
 12         3.19E-16          3.19E-13
 16         8.02E-18          8.02E-15
 21         8.02E-20          8.02E-17
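For reference, here is a short sketch of how that table might be generated, anchoring the noonday sun (magnitude -26.74) at 1 kW/m²:

    SUN_MAG = -26.74        # apparent magnitude of the noonday sun (the anchor)
    SUN_IRRADIANCE = 1.0e3  # W/m^2, standardized solar irradiance

    def irradiance(magnitude):
        """Irradiance [W/m^2] of an object of given apparent magnitude."""
        rel_brightness = 10 ** (-(magnitude - SUN_MAG) / 2.5)
        return rel_brightness * SUN_IRRADIANCE

    for m in (-26.74, -12.74, 6, 12, 16, 21):
        print(f"{m:7.2f}   {irradiance(m):.2e} W/m^2")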

By specifying the entrance pupil of our telephoto lens (maximum aperture = 140 mm, corresponding to an area of about 1.5×10⁻² m²), for any given star we can calculate how many watts of optical power are incident onto the sensor. But we have to be careful: not all the light emitted by the sun (or any luminous object) is detected by the sensor.

In addition to not being a perfect detector (the 'efficiency' or 'responsivity' of a detector is always < 1), not all the colors of light can be detected by the sensor: for example, the sun emits radio waves, but our camera sensor is not able to detect those. Of the 1 kW/m² of light incident on the earth, how much of that light is in the visible region- or equivalently, within the spectral sensitivity of the sensor?

Stars are (to a good approximation) blackbodies, and the blackbody spectrum depends on temperature- different stars have different temperatures, and so appear differently colored. For 'typical' temperatures ranging from 5700 K (our sun) to 8000 K, the fraction of light in the visible waveband is about 40%. So visible sunlight (V-band light), accessible to the camera sensor, provides an irradiance of about 400 W/m² at the earth's surface.
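Here is one way that fraction might be estimated, numerically integrating the Planck spectrum over a nominal 400-700 nm band (the band limits and temperatures are assumptions for illustration):

    import numpy as np
    from scipy.integrate import quad

    H = 6.626e-34     # Planck constant [J s]
    C = 2.998e8       # speed of light [m/s]
    K = 1.381e-23     # Boltzmann constant [J/K]
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

    def planck(wl, T):
        """Blackbody spectral radiance at wavelength wl [m] and temperature T [K]."""
        return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * K * T))

    def visible_fraction(T, lo=400e-9, hi=700e-9):
        """Fraction of total blackbody emission between wavelengths lo and hi."""
        band, _ = quad(planck, lo, hi, args=(T,))
        total = SIGMA * T**4 / np.pi   # Stefan-Boltzmann: integral over all wavelengths
        return band / total

    print(visible_fraction(5778))   # ~0.37 for a sun-like star
    print(visible_fraction(8000))   # ~0.38, i.e. roughly 40% in both cases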

So much for the sources, now the detector- how much light is needed to generate a detectable signal? There are several pieces to this answer.

First is the average 'quantum efficiency' of the sensor over the visible waveband. Manufacturers of scientific CCDs and CMOS imagers generally provide this information, but the sensor in our camera (Sony Exmor R) is a consumer product, and technical datasheets aren't readily available. Estimating the responsivity of the Exmor R from other CMOS imagers on the market, we put it at about 0.7 (70%). This means that, on average, 1 absorbed photon will produce 0.7 e-. That's pretty good- our cooled EMCCD camera costs 100 times as much and only has a slightly higher quantum efficiency- 0.9 (90%).

So if we know how many photons are hitting the sensor, we know how many electrons are being generated. And since we know (based on other CMOS sensors) that the Exmor has a 'full well capacity' of about 21,000 e- and a dark current level of about 0.2 e-/s, if we know the number of photons incident during an exposure, we can calculate how many electrons accumulate in the well, compare that to the noise level and full-well capacity, and determine whether we can detect light from the star. How many photons are incident onto the sensor?

We can calculate the incident optical power, in watts. If we can convert watts into photons/second, we can connect the magnitude of the star with the number of electrons generated during an exposure. Can we convert watts into photons/second?

Yes, but we have to follow the rules of blackbody radiation- lots of different colors, lots of different photon energies, blah blah blah. Online calculators came in handy for this. Skipping a few steps, our table looks like this:

magnitude   rel. brightness   irradiance [W/m²]   power on sensor [W]   V-band phot/sec (6000 K blackbody)   e-/s
-26.74      1.00E+00          1.00E+03            1.54E+01              6.83E+18                             4.78E+18
-12.74      2.51E-06          2.51E-03            3.87E-05              1.72E+13                             1.20E+13
  6         8.02E-14          8.02E-11            1.23E-12              5.48E+05                             3.83E+05
 12         3.19E-16          3.19E-13            4.91E-15              2.18E+03                             1.53E+03
 16         8.02E-18          8.02E-15            1.23E-16              5.48E+01                             3.83E+01
 18         1.27E-18          1.27E-15            1.96E-17              8.68E+00                             6.08E+00
 21         8.02E-20          8.02E-17            1.23E-18              5.48E-01                             3.83E-01
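Those last two columns can be reproduced, at least roughly, with the numbers already introduced: the ~1.5×10⁻² m² entrance pupil and the ~0.7 responsivity. The 0.16 V-band fraction and 550 nm mean photon energy below are assumptions chosen to stand in for the online blackbody calculators, not values taken from the original calculation:

    APERTURE_AREA = 1.5e-2    # m^2, entrance pupil area of the 140 mm lens
    QE = 0.7                  # estimated responsivity of the Exmor R sensor
    V_BAND_FRACTION = 0.16    # assumed fraction of a ~6000 K blackbody's output in the V band
    E_PHOTON = 6.626e-34 * 2.998e8 / 550e-9   # energy of a ~550 nm photon [J]

    def electrons_per_second(irradiance_w_m2):
        """Photoelectron rate from a source of the given total irradiance [W/m^2]."""
        power = irradiance_w_m2 * APERTURE_AREA        # watts landing on the sensor
        photons = power * V_BAND_FRACTION / E_PHOTON   # V-band photons per second
        return QE * photons                            # electrons per second

    print(electrons_per_second(2.51e-3))    # full moon: ~1.2e13 e-/s
    print(electrons_per_second(8.02e-11))   # magnitude 6 star: ~3.8e5 e-/s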

You may have noticed a slight ‘cheat’- we kept the temperature of the blackbody constant, when in fact different stars are at different temperatures. Fortunately, the narrow waveband of interest means the variability is small enough to get away with 1 or 2 digits of accuracy. As long as we keep that in mind, we can proceed. Now, let’s check our answers:

Moon: we have 1.2E+13 electrons generated every second, which would fill the well (saturated image) after 2 nanoseconds. This agrees poorly with experience- we typically expose for 1/250s. What’s wrong?

We didn't account for the number of pixels over which the image is spread: the full moon covers 2.1×10⁶ pixels, so we actually generate 5.8E+06 electrons per pixel per second- and the time to saturation is now 1/280 second, much better agreement!

Now, for the stars: a mag. 6 star (covering 20 pixels) saturates the pixels after about 1 second, a mag. 12 star after 280 seconds, and a magnitude 16 star requires roughly 10,000 seconds- nearly 3 hours- of observation to reach saturation.
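A minimal sketch of that saturation-time arithmetic, using the ~21,000 e- full-well estimate and the pixel footprints quoted above:

    FULL_WELL = 21000   # e-, estimated full-well capacity of a pixel

    def time_to_saturate(electrons_per_sec, n_pixels):
        """Seconds until a pixel fills, if the source is spread over n_pixels."""
        per_pixel_rate = electrons_per_sec / n_pixels
        return FULL_WELL / per_pixel_rate

    print(time_to_saturate(1.2e13, 2.1e6))   # full moon: ~0.004 s, about 1/280 s
    print(time_to_saturate(3.8e5, 20))       # mag 6 star: ~1 s
    print(time_to_saturate(1.5e3, 20))       # mag 12 star: ~280 s
    print(time_to_saturate(38, 20))          # mag 16 star: ~11,000 s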

But we don't need to saturate the pixel in order to claim detection- we just need to be higher than the noise level. A star of magnitude 18.5 produces (roughly) 0.2 e-/s per pixel, and thus the SNR = 1. In practice, we never get to this magnitude limit due to the infinitely-long integration time required, but image stacking does get us closer: our roughly 1-hour image stacks start to reveal stars and deep-sky objects as faint as magnitude 15, according to the database SIMBAD. Calculations show that observing a mag. 15 object requires 4300 seconds to saturate the detector, in reasonable agreement with our images.
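The same bookkeeping gives the noise-floor estimate: find the magnitude whose per-pixel electron rate drops to the ~0.2 e-/s dark current. (The ~6 e-/s rate for a magnitude 18 star is taken from the table above; the 20-pixel footprint is the same as before.)

    import math

    DARK_CURRENT = 0.2   # e-/s per pixel
    STAR_PIXELS = 20     # pixels over which a star image is spread
    RATE_MAG_18 = 6.1    # e-/s for a magnitude 18 star (from the table above)

    # The per-pixel rate falls by a factor of 10^0.4 per magnitude, so solve
    # (RATE_MAG_18 / STAR_PIXELS) * 10^(-0.4*(m - 18)) = DARK_CURRENT for m.
    limiting_mag = 18 + 2.5 * math.log10((RATE_MAG_18 / STAR_PIXELS) / DARK_CURRENT)
    print(limiting_mag)  # ~18.5, where the signal just matches the dark current (SNR ~ 1)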

This is pretty good- the naked-eye limiting magnitude of our light-polluted night sky is only about 3!

Why did we even think to ask this question? Wide field astrophotography is becoming more common due to the ready supply of full-frame digital cameras and free/nearly free post-processing software. In fact, the (relatively) unskilled amateur (us, for example) can now generate images on par with many professional observatories, even though we live in a relatively light-polluted area.

What we have recently tried out is panoramic stitching of stacked images to generate an image with a relatively large field of view. This time of year, Cygnus is in a good observing position, so we have had direct views of the galactic plane in all its nebulous glory:

[Image: Cygnus panorama]

This image, and those that follow, are best viewed at full resolution (or even larger- 200% still looks great) on a large monitor, to fully appreciate what nighttime at Atacama or Antarctica is probably like.

Some details about how we make these images- we acquire a few hundred images at each field of view, separately stack each field of view with Deep Sky Stacker, and then fuse the images with Hugin. It may be surprising, but each field of view must be slightly 'tuned' to compensate for the differences in viewing direction, rather than simply stacking all of the images at once and choosing "maximum field of view" in DSS.

Here are two subfields within Cygnus- the edge of the North American Nebula, and the Veil Nebula:

[Image: edge of the North American Nebula]

[Image: Veil Nebula]

Posted July 31, 2014 by resnicklab in Physics, pic of the moment, Science


Transit of Venus

The clouds parted shortly before Venus made first contact with the sun, so we were able to have clear viewing of this rare event:

and a single image just as Venus made ‘second contact’:

Posted June 6, 2012 by resnicklab in pic of the moment, Science


Astrophotography in the digital age

I’ll discuss the optics of ‘image stacking’ shortly, but first, some eye candy. Here are some recent images: Saturn with (from lower left) Titan, 74 Vir and 72 Vir-

The globular cluster M53 (which we have shown already) and M3:

The globular cluster M13 in Hercules:

And this, which I am calling the Nikonian Deep Sky Survey:

The goal of this image was to see how far I could push my imaging equipment- let's see how it does. The lens used was a 400mm f/2.8 lens, and the camera setting was 0.8 seconds acquisition time at ISO 3200- how that translates to electronic gain is unclear, but it's pretty high- there's a lot of noise present:

The bright stars in this image are about magnitude 7.5- the viewfinder appears black as these stars are too faint to be seen by eye. I ‘sight’ the image off nearby ρ Vir, a magnitude 4.9 star. This image tells us a few things. First, the stars image with a FWHM of 4 pixels- this should be compared with the diffraction-limited Airy disk, which corresponds to 3.5 pixels. The image is (spatially) pretty good. However, dim objects can’t readily be seen- here’s what I mean.

The image is an 8-bit greyscale image- the brightest stars are at a grey value of 255. The noise in this image has a grey value of about 22, so the 'dynamic range' of this image, in terms of stellar magnitudes, is 2.5 log10(255/22) = 2.66. This means that if the brightest object in the frame is magnitude 7.5, the dimmest detectable object is only about magnitude 10. Given that the limit of a 6″ telescope is magnitude 13, there's *lots* of room for improvement. 'Image stacking' is a way to increase the signal-to-noise ratio by distinguishing between background noise (which scales as the square root of exposure time) and signal (which scales linearly). The way we do this is to acquire a *lot* of images- like 6000. These are split into batches of about 700, and each batch is stacked over the weekend. After a few days, we have reduced (say) 6000 8-bit images to eight 16-bit images. We subtract the background from each of these intermediate images, and then stack those to produce the final image:
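As an aside, here is a toy sketch of the stack-and-subtract step described above (frame alignment, which DeepSkyStacker handles, is omitted, and the file names are hypothetical):

    import numpy as np
    from glob import glob
    from imageio import imread   # any 8-bit image reader would do here

    # Average one batch of (already aligned) 8-bit frames; noise averages down ~1/sqrt(N).
    frames = [imread(f).astype(np.float64) for f in sorted(glob("batch_01/*.png"))]
    stacked = np.mean(frames, axis=0)

    # Estimate and subtract the sky background, then rescale into a 16-bit range.
    background = np.median(stacked)
    result = np.clip(stacked - background, 0, None)
    result_16bit = (result / result.max() * 65535).astype(np.uint16)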

This doesn't look much better. BUT, the bright stars (in this case) have a grey value of 59000 and the background noise has a level of 50, for a dynamic range of 7.7 stellar magnitudes. So, we should be able to find objects as faint as magnitude 14.7- easily beating the 'limit'. But there's still a problem- the display only shows 8 bits of greyscale, so somehow we have to compress the dynamic range. This process is much more 'bespoke' and requires a lot of trial-and-error tweaking, but it generally involves changing 'gamma' (a nonlinear contrast enhancement similar to midtone adjustments) and unsharp masking, combined with denoising, to clean up the compressed image. The concentric circles in the final image are an artifact of all this post-processing, and they fade as the number of images increases (remember, the background grows *more slowly* than the signal). So let's see what we can find- here's the image again with some 'goodies' circled:

"Obvious" galaxies are circled (blue are Messier objects, yellow are New General Catalogue (NGC) objects)- some of these are magnitude 13, so a more careful search will likely find more. There are about 100 galaxies in this section of sky, most of them magnitude 15 or brighter, so there's a chance we can find a *lot* more. Stay tuned!
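As an aside, the 'gamma' part of the dynamic-range compression described above might look roughly like this (the gamma value is illustrative; in practice it is tuned by eye):

    import numpy as np

    def gamma_stretch(img16, gamma=0.4):
        """Compress a 16-bit image to 8 bits with a nonlinear (gamma) stretch.
        A gamma < 1 lifts the midtones, so faint objects survive the compression."""
        normalized = img16.astype(np.float64) / 65535.0
        return (normalized ** gamma * 255).astype(np.uint8)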

Astrophotography in Cleveland

Cleveland is not known for the quality of its night sky. In fact, the combination of city lights and high humidity makes visibility quite poor. Even so, we can use physics to tweeze out signal photons from the background noise. Here are some images (approximately 1/2 the total image size) of the Ring Nebula (M57), the Whirlpool Galaxy, and the Sombrero Galaxy:

These images were assembled by 'stacking' many (many, many, many) frames together. The Whirlpool Galaxy image resulted from stacking together over 3000 images, for example. Stacking increases the signal-to-noise ratio for two reasons. First, since we don't have a tracking mount, our exposure times are limited by the rotation of the Earth- 0.8 seconds is about the longest exposure before stars stop being points and start looking like dashes. Stacking lets us go from a 0.8 second exposure time to minutes or hours (although a 1-hour exposure represents over 4000 images…). Second, stacking lets us separate the signal, which is steady from frame to frame, from the background, whose photon counts fluctuate according to Poisson statistics. Averaging many frames beats those fluctuations down while keeping the signal- the Whirlpool Galaxy image has stars as faint as magnitude 15 in it.
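A quick simulation of why this works: the mean signal stays put while the background fluctuations average down roughly as 1/sqrt(N). (The photon counts below are made up purely for illustration.)

    import numpy as np

    rng = np.random.default_rng(0)
    signal, background = 5.0, 100.0    # photons per pixel per frame (illustrative)

    for n_frames in (1, 100, 4000):
        frames = rng.poisson(signal + background, size=(n_frames, 10000))
        stacked = frames.mean(axis=0)
        snr = signal / stacked.std()    # signal vs. fluctuation of the stacked background
        print(n_frames, round(snr, 1))  # grows roughly as sqrt(n_frames)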

To be sure, implementation of the stacking algorithm is not simple- frames have to be moved and rotated into alignment, and there is a lot of image processing on each frame to determine the location of stars (which move around from frame-to-frame due to atmospheric clear-air turbulence). We use freeware (DeepSkyStacker) for all of our astrophotography.

Ok- enough of that- here are some of the goodies hidden in the images above: NGC5229, a low-surface brightness galaxy

a few other galaxies: NGC 5198 and IC 4263/NGC 5169/NGC 5173

And finally: the Orion nebula

Posted May 8, 2012 by resnicklab in Physics, pic of the moment, Science
