Exercice 1 - Introduction to Data Handling: Basic Concepts and Tools

The goal of this chapter is to understand the basics of image reduction and to become familiar with the tools used to process and display astronomical data.

“Reducing” data is the first mandatory step in any observational astrophysical project. Depending on the complexity of the instrument, it can be very simple (e.g., for simple imaging) or it can require the development of new, dedicated software that is as important as the development of the instrument that took the data itself. In the present work we keep things very simple and propose to process nice-looking images of the nearby spiral galaxy NGC 613, and to characterize them.

Two sets of data are available, taken with the ESO VLT (http://www.eso.org) and with the Hubble Space Telescope (http://www.stsci.edu). Although all the data are already on your disk, you should follow the steps in order, without skipping any! This will let you discover NGC 613 with instruments of increasing spatial resolution.

1.1 - NGC 613: a nearby spiral galaxy

You can have a quick look at the galaxy we will study through the ESO Digitized Sky Survey (DSS) web page (http://archive.eso.org/dss/dss), where a digital map of the whole sky is available. This archive was constructed by scanning old (but very wide field) photographic plates obtained at the Palomar observatory and in Australia. Enter the name of the galaxy in the appropriate field and display it as a GIF image. Also download it to your disk as a FITS image. FITS is the standard format for storing data in astrophysics. All data will be processed in this format.

The standard software to open FITS files is ds9. You can install it from the official website.

1.2 - Displaying images and basic measurements

Open ds9 to see the FITS image. Use the right mouse button to change the cut levels and try to see the very faint extended parts of the galaxy. Note that the signal between objects, in so-called empty sky regions, is not zero. This is because the night sky is not as dark as one might think: it emits a small but measurable amount of light, sufficient to affect the observations. The sky level is often higher than the brightness of the object itself! This is why observatories must be located far away from any source of light pollution.

Let's carry out basic measurements on these data:

  1. Assuming that the Hubble constant is H_0=70\,\rm{km\,s^{-1}\,Mpc^{-1}} and using the measured radial velocity V_r=1479\,\rm{km\,s^{-1}} of NGC 613, compute the distance to the galaxy. The Hubble law for the expansion of the Universe is applicable to NGC 613. Why can't you use it for large distances, like several thousand Mpc (megaparsecs; 1\,\rm{pc} = 3.26\,\rm{light years})?
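As a sanity check, the distance follows directly from the Hubble law; a minimal sketch in Python, using the values quoted above:

```python
# Distance from the Hubble law: d = V_r / H_0 (valid for nearby galaxies,
# where the expansion is linear and peculiar velocities are a small correction).
H0 = 70.0    # Hubble constant, km/s/Mpc
Vr = 1479.0  # radial velocity of NGC 613, km/s

d_Mpc = Vr / H0
print(f"Distance to NGC 613: {d_Mpc:.1f} Mpc")  # ~21.1 Mpc
```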

  2. What is the pixel size (in arcseconds) of this image? Using straightforward trigonometry, compute the diameter of the galaxy in light-years and in parsecs. How does this compare with our own Milky Way?
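A sketch of the small-angle computation, assuming a hypothetical measured angular diameter of 330 arcsec (replace it with the value you actually measure on the image, i.e., the extent in pixels times the pixel scale):

```python
d_Mpc = 1479.0 / 70.0        # distance from the Hubble law (previous question)
theta_arcsec = 330.0         # HYPOTHETICAL angular diameter; use your measurement
theta_rad = theta_arcsec / 206265.0  # 1 rad = 206265 arcsec

# Small-angle approximation: physical size = distance * angle (in radians)
D_pc = d_Mpc * 1e6 * theta_rad
D_ly = D_pc * 3.26
print(f"Diameter: {D_pc:.0f} pc = {D_ly:.0f} light-years")
```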

  3. Try to fit a 2D Gaussian using a least-squares technique to measure the resolution of the image in arcseconds, i.e., the Full Width at Half Maximum (FWHM) of the (unsaturated) stars in the image. How does this translate into parsecs or light-years? What is the largest globular cluster in the Milky Way? What is its size (have a look on the web!)? Would you be able to resolve it if it were in the image you have of NGC 613? Same question for the planetary nebula M 57.
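One possible least-squares approach is scipy.optimize.curve_fit. The sketch below fits a synthetic star stamp (so it runs as-is); on real data, cut a small stamp around an unsaturated star and fit the same model:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    """Circular 2D Gaussian, flattened for curve_fit."""
    x, y = xy
    g = offset + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
    return g.ravel()

# Synthetic 21x21 "star" stamp with noise (replace with a real stamp).
y, x = np.mgrid[0:21, 0:21]
rng = np.random.default_rng(0)
star = gauss2d((x, y), 1000.0, 10.0, 10.0, 1.5, 50.0).reshape(21, 21)
star += rng.normal(0, 2.0, star.shape)

p0 = [star.max() - np.median(star), 10, 10, 2.0, np.median(star)]
popt, _ = curve_fit(gauss2d, (x, y), star.ravel(), p0=p0)
fwhm_pix = 2 * np.sqrt(2 * np.log(2)) * abs(popt[3])  # FWHM = 2.355 * sigma
print(f"FWHM = {fwhm_pix:.2f} pixels")  # multiply by the pixel scale for arcsec
```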

1.3 - Characterizing the VLT images

  1. The list of the observations is on your disk as an ASCII file. Have a look at it. Display the raw images of NGC 613.

  2. The pixel size of the FORS1 instrument (see the ESO web page) is 0.2 arcsecond. What is the size of the galaxy using these new and much deeper observations? “Deeper” means that much fainter light levels are reached.

  3. Measure the resolution (called the “seeing”) in arcseconds in the three images taken in the blue (B), green (V) and red (R) filters. Do this using several isolated stars in order to get a precise mean value and an estimate of the error bar on your measurement. Is there a difference in resolution between the three filters? The three images were taken one after the other, with little change in the atmospheric conditions between exposures. If you measured a different seeing in the images, give a possible explanation for it. Note that the atmosphere acts as a prism on light rays.

  4. Compute the theoretical diffraction limit for the VLT, which has a diameter of 8.2 m. What are the values in arcseconds for the three filters B (4300 \AA), V (5500 \AA), R (6500 \AA)?
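A sketch of the Rayleigh-criterion computation, theta = 1.22 lambda / D, for the three filters:

```python
import math

D = 8.2                                   # VLT primary mirror diameter, m
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206265 arcsec per radian

limits = {}
for name, lam in [("B", 4300e-10), ("V", 5500e-10), ("R", 6500e-10)]:
    limits[name] = 1.22 * lam / D * RAD_TO_ARCSEC  # Rayleigh criterion, arcsec
    print(f"{name}: {limits[name]:.4f} arcsec")
# Roughly 0.013 (B) to 0.020 (R) arcsec: far below the atmospheric seeing.
```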

  5. Compare the behaviours of the measured and theoretical resolutions as a function of wavelength. Is the measured trend compatible with the theoretical expectations? You will find that the answer is no. Try to imagine why.

1.4 - Reducing the VLT images

A raw astronomical image, right after it is acquired at the telescope, can be interpreted as a signal D(x,y), which gives the intensity or flux level recorded as a function of the position (x,y) on the CCD array. It can be written as:

D(x,y) =  \left[I(x,y) + Sky(x,y)\right]F(x,y) + B(x,y)

where I(x,y) is the signal of scientific interest, Sky(x,y) is the sky level, and F(x,y) is the pixel-to-pixel response of the camera, called the “flat-field”. B(x,y) is called the “bias level”. It is an arbitrary positive constant set in the hardware of the camera, aimed at avoiding any negative values in the image due to the readout noise. The goal of data reduction is to extract I from the data D, given a number of calibration frames that we will use in the following:

  1. Display the bias images and the flat fields. What is the mean flux level in the bias images? In the flat fields? Take the mean of 2 biases, 3 biases, and 4 biases. Compare the noise with that of a single bias. By how much is it improved, and how does this factor compare with the number of biases in the mean bias image?

  2. Using Python, plot the standard deviation in the 4 bias frames you just created (i.e., single or combined) as a function of the number of images in the combined bias.
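A minimal simulation of this behaviour, with a hypothetical bias level and read noise, shows the expected 1/sqrt(N) scaling (plotting stds versus n is then a one-liner with matplotlib.pyplot.plot):

```python
import numpy as np

rng = np.random.default_rng(1)
read_noise = 5.0    # ADU per frame (hypothetical value)
bias_level = 200.0  # ADU (hypothetical value)
biases = bias_level + rng.normal(0, read_noise, size=(4, 512, 512))

# Noise of the combined bias vs. number of averaged frames:
# averaging N frames should reduce the noise by a factor sqrt(N).
stds = [biases[:n].mean(axis=0).std() for n in range(1, 5)]
for n, s in zip(range(1, 5), stds):
    print(n, round(s, 2))
```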

  3. Subtract the best mean bias from each flat field.

  4. Normalise the flat fields so that their mean level is 1.

  5. Flat field the images following the equation above.

  6. Subtract the sky from all the images. It can be approximated by a constant value measured in areas free of any object. Measure it in many places in the image. Compare the mean value of the sky level with the sky noise (the standard deviation of the pixels in a part of the image containing sky only). Is the sky level really constant across the image? Compare the sky level at different locations in the frames and compare its variation with the sky noise.
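Steps 3 to 6 can be sketched on synthetic arrays as follows; the frame sizes and levels below are made up, and with real FITS files you would load the pixel data with astropy.io.fits.getdata(filename):

```python
import numpy as np

# Synthetic frames following the model D = (I + Sky) * F + B.
rng = np.random.default_rng(2)
ny, nx = 64, 64
B = 200.0                                        # bias level, ADU
F = 1.0 + 0.05 * rng.standard_normal((ny, nx))   # pixel-to-pixel response
I = np.zeros((ny, nx)); I[30:34, 30:34] = 500.0  # a fake source
sky = 300.0

raw = (I + sky) * F + B           # science frame
flat_raw = 10000.0 * F + B        # flat-field exposure (uniform illumination)
mean_bias = np.full((ny, nx), B)  # combined bias from the previous steps

flat = flat_raw - mean_bias           # 3. bias-subtract the flat
flat /= flat.mean()                   # 4. normalise to a mean level of 1
science = (raw - mean_bias) / flat    # 5. flat-field: recovers I + Sky
sky_est = np.median(science)          # 6. constant sky from object-free pixels
reduced = science - sky_est           # ~ I(x, y)
```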

  7. Bring all the images to a common resolution using a Gaussian convolution, assuming the stars are perfect Gaussians. To do that, convolve each image with the appropriate Gaussian kernel so that all images end up with the same resolution. Of course, the common resolution is the worst of the three.
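Under the perfect-Gaussian assumption, Gaussian widths add in quadrature under convolution, so the matching kernel width follows directly; a sketch using scipy.ndimage (the function name is ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_resolution(image, fwhm_in, fwhm_target):
    """Degrade an image from fwhm_in to fwhm_target (both in pixels).

    For Gaussian PSFs, sigma_out**2 = sigma_in**2 + sigma_kernel**2, so the
    kernel width is sqrt(sigma_target**2 - sigma_in**2). Requires
    fwhm_target >= fwhm_in (you can only degrade, not sharpen).
    """
    sigma_in = fwhm_in / 2.355
    sigma_target = fwhm_target / 2.355
    sigma_kernel = np.sqrt(sigma_target**2 - sigma_in**2)
    return gaussian_filter(image, sigma_kernel)
```

Apply it to the two sharper images, with fwhm_target set to the worst measured seeing (converted to pixels).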

  8. Use ds9 in RGB mode to construct a “true-color” image using the B, V and R frames (go to Frame/New Frame RGB).

    An alternative to ds9 is to use the STIFF software, which directly outputs a TIFF file.

    Where do you think the star forming regions are ? Where do you think dust is present ? Why ? Remember that dust absorbs blue light more than red light and that hot young stars have a higher temperature than the others (think of the black body radiation).

  9. The color image can be complemented by a color map of the galaxy. Compute the ratio between the R and B images and take the logarithm. In that way, you obtain an image of the color of the galaxy. A high R/B ratio traces red regions, while a lower ratio traces blue regions.
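A sketch of the color-map computation on two hypothetical sky-subtracted frames (on real data, use the reduced, resolution-matched R and B images):

```python
import numpy as np

# Synthetic stand-ins for the reduced R and B frames (levels are made up).
rng = np.random.default_rng(3)
R = rng.normal(1000.0, 50.0, (64, 64))
B = rng.normal(800.0, 50.0, (64, 64))

# log10(R/B): larger values indicate redder regions, smaller values bluer ones.
with np.errstate(divide="ignore", invalid="ignore"):
    color = np.log10(R / B)
print(color.shape, round(float(color.mean()), 3))
```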

1.5 - Comparing with (already reduced) HST images

  1. Reducing images obtained with the Hubble Space Telescope is very similar to reducing VLT images. Because space data are more stable in time than ground-based data (e.g., constant weather conditions), the reduction process can easily be automated. The Space Telescope Science Institute has developed a pipeline that does this work for you, and reduced images can be retrieved directly on the web. We have downloaded these data for you. Have a look at them.

  2. Measure the resolution in each image. The pixel size is 0.045 arcsec. Does the resolution match the theoretical one this time? Why? What is the smallest detail you can resolve in NGC 613, in parsecs?

  3. Convolve all the images to the same resolution and compute the true-color image and the color map of the galaxy. Compare with the VLT image.

  4. A supermassive black hole with a mass between 10^9\,\rm{M_{\odot}} and 10^{12}\,\rm{M_{\odot}} resides in the center of this galaxy. Calculate the radius of the black hole, assuming that it is defined as the distance at which a photon can no longer escape the black hole. This radius is called the Schwarzschild radius r_s. What is the Schwarzschild radius of the black hole in parsecs? Would you be able to see it with the HST (assuming it could still emit light!)? What would be the diameter of the telescope needed to see it? Can you think of any (realistic) technical solution to reach such high angular resolutions?
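A sketch of the Schwarzschild-radius arithmetic, r_s = 2GM/c^2, for the lower mass bound quoted above:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg
pc = 3.086e16     # 1 parsec in metres

M = 1e9 * M_sun           # lower bound of the quoted mass range
r_s = 2 * G * M / c**2    # Schwarzschild radius, m
print(f"r_s = {r_s:.2e} m = {r_s / pc:.1e} pc")  # ~3e12 m, i.e. ~1e-4 pc
```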