REDUCING CCD IMAGES

 1. INTRODUCTION
 2. BIAS
 3. NON-LINEARITY
 4. DARK CURRENT
 5. FLAT FIELDS
 6. COSMIC RAYS
 7. BAD PIXELS
 8. PLATE SCALE
 9. SKY SUBTRACTION
10. PHOTOMETRIC CALIBRATION
11. NOISE AND OTHER FEATURES

1. INTRODUCTION

A CCD chip is not an ideal optical receiver/detector, but it is one of the best we have. This document describes the limitations of a CCD and how we use calibration measurements to correct, or at least interpret, these shortcomings.

An ideal detector would:

1. Be infinitely fine-grained, so that the "pixels" do not degrade the resolution beyond the telescope and atmospheric limitations.

2. Have 100% quantum efficiency; every photon falling on the chip results in an event (an electron) that can be counted.

3. Be noise free; contribute no uncertainties beyond those intrinsic to the quantum nature of the light falling on it.

4. Be totally linear; the output is exactly proportional to the input so that the result of two superimposed sources, e.g. a star and the sky emission, is simply the sum of the results of the two sources independently.

5. Contribute no signals of its own.

Now some reality:

1. A CCD consists of an array of semiconductor elements called "wells" or pixels. When a photon strikes a pixel, then, with a certain probability, an electron is freed from the semiconductor and stored in the well. When the exposure is finished, the stored electron charges are transferred from well to well (that is why these things are called Charge-Coupled Devices), and finally past an amplifier and an analog-to-digital converter (ADC) that converts the charges into numbers or analog-to-digital units ("ADUs"), which are stored on a computer hard disk or tape. The files stored on this CD-ROM are formatted versions of these lists of ADUs.

Not all the pixels which are read out from the CCD chip represent real data. The TEK chips used in this survey have 1024 pixels in each row and column, but the images consist of 1124 rows of 1124 pixels. The extra 100 "overscan" pixels in each row and column are pixels that have been read out a second time. There are 50 such overscan pixels at the beginning and end of each row, and 100 overscan rows after the 1024 legitimate rows. Since the real charge in each well is removed during readout, these twice-read pixels should have no charge and represent the response of the amplifier and ADC to zero input (see Section 2).

Occasionally a cosmic ray (cf. Section 6) strikes a pixel between the first and second readings, and leaves some signal in the overscan regions.

2. BIAS

The amplifier that increases the signal before the ADC has, for technical reasons, a built-in offset or bias, so that even when no electrons come in, a positive signal is output. This bias varies a little with position on the chip and slowly with time (very slowly if the temperature of the chip is kept constant).

Usually the first step in reducing CCD images is to remove the bias from all digital images. There are two methods for doing this:

A. BIAS FRAMES
Before or after our observing night we can take a bias frame: a very short exposure with the shutter closed so that no photons fall on the chip. In theory (and generally in practice) these frames contain only bias numbers and some readout noise and hum (see Section 11 below). By subtracting one frame, or the average of several frames, from each subsequent image, the bias is removed. In order to keep the readout noise contributed by the bias frames below that of the true images, we generally take several bias frames and average them (which reduces the noise) before subtraction.

The practical method is then very simple: read in a number of bias frames into your reduction system; average them, and subtract this averaged frame from all subsequent frames (including calibration frames like flat fields and standard stars).
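A minimal sketch of this step in Python/numpy, assuming the bias frames have already been read into 2-D arrays (the function names here are illustrative, not part of any particular reduction package):

    import numpy as np

    def make_master_bias(bias_frames):
        # Average a stack of bias frames into a master bias.
        # Averaging N frames reduces the readout noise in the
        # master bias by a factor of sqrt(N).
        return np.mean(np.stack(bias_frames), axis=0)

    def debias(image, master_bias):
        # Subtract the master bias from a raw frame.
        return image - master_bias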

B. OVERSCAN PIXELS
Because the charge in the overscan pixels has been removed prior to readout, they can be used to estimate and remove the bias levels. For each row then, one usually averages the overscan pixels at the beginning and end of the row and subtracts this value from all pixels in the row. One can then discard the overscan pixels from any further analysis.
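A sketch of the overscan method under the layout described in Section 1 (50 overscan pixels at each end of every row); the median is used so that a cosmic-ray hit in the overscan does not skew the estimate:

    import numpy as np

    def debias_overscan(image, n_over=50):
        # Estimate the bias of each row from its overscan pixels,
        # subtract it, and trim the overscan columns away.  The
        # median, rather than the mean, guards against cosmic-ray
        # hits in the overscan region.
        overscan = np.hstack([image[:, :n_over], image[:, -n_over:]])
        bias_per_row = np.median(overscan, axis=1)
        corrected = image - bias_per_row[:, np.newaxis]
        return corrected[:, n_over:-n_over]  # overscan rows at the end
                                             # can be dropped similarly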

Which method to use? This depends somewhat on the quality of the chip. If the chip is very stable in time the bias frame method is straightforward and works fine. The overscan method is logistically more convenient because one does not have to store and process tens of bias frames before reducing the real images. As noted in Section 1, however, cosmic rays occasionally contaminate the overscan pixels, so one has to exercise a certain caution in extracting the row averages.

In any case, once you have chosen a bias removal method, you apply it to all images used in subsequent processing steps: flat fields, standard stars, and scientific exposures.

3. NON-LINEARITY

If too many photons fall on one pixel well, the signal will no longer be proportional to the number of photons. This has two causes:

A. As electrons accumulate in the well, it develops a negative charge. The electric field from this charge repels further electrons.

B. Even if all the charge has been collected in the well, the output amplifier and analog-to-digital converter cannot accurately count charges above a certain limit, called the saturation limit.

The ADU level at which a chip becomes non-linear varies from chip to chip; for the chip used in this project the values are given in the file DESCRCCD.HTM or DESCRIPT.CCD. One can calibrate the non-linearity for values above this limit and try to correct the effect, but the effect usually becomes very serious very quickly above this limit, so it is better to limit exposure times so that the signal stays below it.

4. DARK CURRENT

Thermal vibrations in the semiconductor pixels occasionally free an electron which is counted as if it were a sky photon. These electrons add to the signal at each pixel a value proportional to the integration time. This additional signal is called the "dark current" because it is there whether the shutter is open or not. It varies slowly with time (if the chip is kept at constant temperature) and somewhat from pixel to pixel.

One can measure the dark current by taking long exposures with the shutter closed, removing the bias, correcting for cosmic rays (see Section 6), and dividing by the exposure time. For a given target exposure one multiplies this dark image by the exposure time and subtracts it from the target image.
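Although, as noted below, no dark correction is needed for the images on this disk, the procedure is simple to sketch (again assuming debiased frames as numpy arrays):

    import numpy as np

    def make_master_dark(dark_frames, exposure_times):
        # Combine debiased dark exposures into a dark-current rate
        # image (ADU per second per pixel); the median rejects
        # cosmic-ray hits.
        rates = [d / t for d, t in zip(dark_frames, exposure_times)]
        return np.median(np.stack(rates), axis=0)

    def subtract_dark(image, dark_rate, exposure_time):
        # Remove the dark current from a debiased target frame.
        return image - dark_rate * exposure_time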

For modern, cooled, CCDs the dark current is negligible for normal exposure times, so this procedure is unnecessary. This disk contains no dark frames.

5. FLAT FIELDS

Each of the million or so pixels in the detector has its own sensitivity, which can be expressed as a quantum efficiency (fraction of photons converted to electrons) or as a photometric sensitivity (number of ADUs produced per erg/cm^2/Angstrom falling on the pixel). The sensitivity at each pixel includes electronic effects of the chip itself, but also absorption in the telescope optics, the filters, and the atmosphere above the telescope. These effects are all removed at the same time.

The calibration of these sensitivities occurs in two steps. First the pixel-to-pixel variations with respect to the average over the chip are removed ("Flat Fielding"). Then the average sensitivity is determined in absolute units by observing stars whose absolute intensities are known: Standard Stars (Section 10).

For the first step we take images of (hopefully) completely uniform surfaces called flat fields. Any variation in the detected signals is then due to sensitivity variations. Because the sensitivities are wavelength dependent, a separate flat field must be taken for each filter used. An extreme case of wavelength dependence occurs when very narrow band filters are used, or when the light falling on the optics contains a strong component at a single wavelength. Multiple reflections within the CCD chip or the filters in front of it can cause wavelike patterns across the image called "fringes". The exact pattern of fringes depends strongly on the exact wavelength falling on the chip. Consequently correcting for fringing requires a flat field whose wavelength content corresponds closely to that of the image being corrected.

In addition to small pixel-to-pixel variations, flat fields usually show several recognizable features:

a. Bad pixels (see Section 7) which are consistently hot or cold on all flat fields.

b. Small sharp dark features with the same percentage absorption on all flat fields. These come from dust particles on the CCD chip.

c. Vague ring or donut shaped features. These come from dust on the filters, which are out of focus as seen from the chip. They are the same on all exposures with the same filter, but obviously differ from filter to filter, and can differ from time to time.

d. Darker regions around the edges of the chip. This "vignetting" is caused by various out of focus obstructions in the light path from the telescope opening to the chip, e.g. by the filter holder.

Flat fields are usually taken of two types of objects: the dome and the sky ("dome" flats and "sky" flats).

A. DOME FLATS are images taken of the inside of the telescope dome, usually illuminated by an incandescent lamp that emits broad-band light free of emission lines. Because domes are usually smooth, diffuse reflectors, and because the dome is completely out of focus for the telescope optics, the dome image is effectively featureless. Dome flats are convenient to take because, like bias frames, they can be taken in unlimited numbers during the day, rather than at night or twilight when time is critical. There are two disadvantages to dome flats, however:

1. The light from the curved dome does not fall on the telescope from quite the same angles as the light from the sky. This has no effect on the small scale, pixel to pixel, sensitivity corrections but may lead to small errors on the large scale illumination pattern. This difference in illumination is particularly visible in the vignetting and dust patterns.

2. The wavelength content, i.e. color, of the light from the lamp is not identical with that of the sky. This can lead to inadequate correction for fringing. For narrow band filters this is usually not a problem; the wavelength of the light falling on the chip is determined by the filter, so is the same whether the source is lamp or sky. For a broad band filter this is not necessarily true; the light from the sky contains narrow emission lines from excited atoms in the upper atmosphere. For these filters it may be necessary to use sky flats, where the color content better approximates that of the scientific exposures.

B. SKY FLATS are taken at twilight when the sky is relatively bright. They must be taken during a period when the sky is much brighter than the stars that happen to be in any field, but not so bright that the chip is overexposed ("saturated", see Section 3). The optimum period for exposure depends on the filter. A narrow filter, or a filter for which the chip is insensitive, or where the Sun emits little light (e.g. U-band), can be taken nearer sunrise (for morning flats) than a broadband filter at the peak of chip sensitivity. For these reasons taking sky flats is sometimes a panic-ridden activity and they are not always optimally exposed: long enough that photon noise is negligible, but not saturated. Because the sky flats are taken near sunrise, the interior of the dome is illuminated, and light from the dome reaches the chip by internal reflections in the telescope. Thus sky flats show some of the vignetting and dust effects seen in dome flats.

The process of flat fielding a specific scientific exposure occurs as follows:

1. Select one or more flat exposures, either dome or sky, taken with the same filter as the scientific exposure and relatively near in time.

2. Inspect the flat exposures carefully to determine whether they were adequately exposed and choose the best ones.

3. Remove the bias from these images.

4. If necessary, remove cosmic rays or other defects from the flats (see Sections 6 and 7), and average the flats if more than one is available.

5. Generally, one divides all the values in a flat field by the average of the whole image (the median is more reliable than the mean) to yield values that are all near unity. This process is not necessary, and the exact normalization is not important, because the true normalization of the sensitivity occurs as a separate step using standard stars. It is convenient to normalize to values near unity, so that images after flat fielding have similar numerical intensities to those before flat fielding.

6. If multiple flat field exposures are available they can be compared by dividing one by the other. The resultant image would ideally be unity everywhere. Variations from unity give you an idea of how reliable the flat field is.

7. Once you have a reliable flat field, you divide all the (debiased) exposures taken with the same filters by this flat before further processing.
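Steps 5 and 7 reduce to a few lines in Python/numpy; a minimal sketch, assuming the debiased flat exposures are 2-D arrays:

    import numpy as np

    def make_master_flat(flat_frames):
        # Average the debiased flat exposures, then normalize so the
        # typical value is unity; the median is more robust than the
        # mean against bad pixels and cosmic rays.
        flat = np.mean(np.stack(flat_frames), axis=0)
        return flat / np.median(flat)

    def flatfield(image, master_flat):
        # Divide a debiased science frame by the normalized flat.
        return image / master_flat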

6. COSMIC RAYS

When a cosmic ray particle hits a CCD pixel it knocks out a number of electrons which are indistinguishable from those released by photons. These electrons are usually, though not always, confined to one pixel, and then form a point error; at that pixel we cannot, without additional information, find out what the true sky brightness value should have been. We correct for this by taking multiple exposures of the sky+object and hope that cosmic rays do not strike the same pixels in each frame. For a CCD with 10^6 or so pixels it is normal that a cosmic ray strikes a pixel every few seconds. An exposure of a few minutes may contain 100 or so cosmic ray hits, or one per 10,000 pixels.

Ideally we would take many short exposures of each object, so that it would be easy to spot cosmic rays by looking for extreme values in the series of intensities at each pixel. Unfortunately, the overhead of taking lots of short exposures is prohibitive: it increases the data one must handle (about 2 megabytes per exposure), it wastes time reading out the chip to the hard disk (about 3 minutes per readout), and lastly, it adds noise to the final image (see Section 11B on readout noise). Pragmatically then, we take a small number of exposures, usually two or three, per object, and then compare the pixels. For most of the images on this disk, only two exposures were taken.

There are several, more or less sophisticated, ways to search for cosmic rays among the images; I only discuss a few of them here.

1. If there are 3 or more images, one of the simplest methods is to compute the value of each output pixel by taking the median of the pixel values at the same location in each of the input images. The median, unlike the mean, is relatively insensitive to extreme, nontypical values. Before doing this one must check that the exposure times for the images are the same. If not, they must be scaled by dividing each by a constant proportional to the exposure time. Also, you should check that the image positions of stars correspond exactly, i.e. the telescope did not shift between exposures. If not, you must shift the images ("register" them) until the stars coincide (as well as possible).
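A minimal sketch of this median method, assuming the images are already registered and loaded as numpy arrays:

    import numpy as np

    def median_combine(images, exposure_times):
        # Scale each registered exposure to a common exposure time,
        # then take the per-pixel median; a cosmic-ray hit in one
        # frame is simply outvoted by the other frames.
        t_ref = exposure_times[0]
        scaled = [img * (t_ref / t)
                  for img, t in zip(images, exposure_times)]
        return np.median(np.stack(scaled), axis=0)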

The disadvantages of the median are that you need at least 3 images and that the statistical properties of the median, in particular the reduction of noise by combining images, are less optimal than the properties of the mean. Therefore we consider other methods.

2. With two images, we could just take the pixel-by-pixel minimum of the two images, since a cosmic ray always causes a too-high pixel value. This, however, biases the resultant values everywhere to lower than the true values, and has poor noise reduction performance.

3. One can, for each pixel, look at the distribution of the measured intensities for that pixel in the various images. One can estimate, by various ways, the measurement uncertainty (noise) at that position. Given that estimate, if one or more of the intensities is inconsistent with the others it is thrown out, and the mean of the remaining values is the result. Again one must specially treat the case of different exposure times or misregistration.

A. If there are three or more values at each pixel, you can determine the mean and root mean square (rms) of the values by standard statistical means. If any of the values differs from the mean by more than a given factor (about 6 for an image of 10^6 pixels) times the rms, you reject that value, redetermine the mean and rms, and try again. If all the values agree within acceptable limits, you accept the mean as the output value. For historical reasons this method is called "sigma clipping".
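A sketch of sigma clipping for the stack of values at one pixel (the 6-sigma threshold follows the rule of thumb above):

    import numpy as np

    def sigma_clip_mean(values, kappa=6.0):
        # Iteratively reject values more than kappa times the rms
        # from the mean, then return the mean of the survivors.
        v = np.asarray(values, dtype=float)
        while len(v) > 2:
            mean, rms = v.mean(), v.std()
            keep = np.abs(v - mean) <= kappa * rms
            if keep.all():
                break
            v = v[keep]
        return v.mean()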

B. A more sophisticated, and probably more reliable, method that can be applied even if there are only two images, is to calculate the mean and rms at each pixel, store the pixel value and rms, and when you are done with the entire image, plot the rms as a function of pixel value. This plot will show a great deal of fluctuation, including discrepant values from cosmic rays, but if you smooth it and throw away discrepant points (for example by computing the running median of every 1000 points or so), you finish with a smooth curve that you can use to predict the rms noise for each measured pixel value. If only photon noise (see Section 11A) were important, the rms would vary as the square root of the pixel value. In practice, however, the true relation will show a constant rms at low intensities (readout noise), then a portion where the noise varies as the square root of the intensity (photon noise), and lastly a portion where the uncertainties vary linearly with intensity (calibration errors, registration errors).

After you know this relationship, you go back to the individual pixels and predict the noise from the mean value (or perhaps from the minimum of the measured values), reject points outside of 6 or so times the noise, and average the rest. If you only have two values there is no automatic way to determine which one is wrong. Therefore, for lack of other choices, and if their difference is excessive, you eliminate the higher value. Although this does bias the result statistically, only a few pixels are affected.

SINGLE IMAGES: Sometimes, because of a shortage of time or because of equipment failures, only one image is available with a given filter. Then, as described below under BAD PIXELS, the true intensity at a pixel hit by a cosmic ray cannot be determined. At best we can fix the image "cosmetically" by replacing the cosmic ray hit with a typical value. A common technique is to construct an image where the value at each pixel is replaced by the average (preferably the median) of the 9 pixels in the surrounding 3x3 box of pixels. If the value of the original pixel exceeds that of the average by more than is reasonable, given the noise and the nature of the image, consider it to be a cosmic ray, and replace its value by the average.
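A sketch of this cosmetic repair, using the 3x3 median filter from scipy (the threshold must be chosen from the noise properties discussed in Section 11):

    import numpy as np
    from scipy.ndimage import median_filter

    def clean_single_image(image, threshold):
        # Replace pixels that exceed the median of their 3x3
        # neighbourhood by more than 'threshold' with that median.
        local_median = median_filter(image, size=3)
        hits = (image - local_median) > threshold
        cleaned = image.copy()
        cleaned[hits] = local_median[hits]
        return cleaned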

7. BAD PIXELS

On each CCD there are a few pixels which consistently give very low values ("cold pixels") or very high values ("hot pixels"). There are usually several tens of such pixels per chip. You can find them by looking at flat fields for pixel values that differ greatly from the average and appear as such on every flat field exposure (otherwise they may be cosmic rays). As with cosmic rays, the true intensity belonging to these pixels is lost and cannot be directly recovered. One recovery technique is to shift the telescope pointing between exposures so that each bit of sky falls on different pixels on different exposures; in most cases at most one of these pixels will be hot or cold. In your computer you can then re-shift the images back until the star positions correspond, mark the bad pixels that you found on the flat fields, and average the remaining good pixels into the final image.

For the exposures on this disk, for reasons of operational convenience, the position was not shifted between exposures, and the above procedure cannot be used. The true sky intensity at the bad pixel positions cannot be recovered, but they can be replaced cosmetically. That is, you can replace the hot or cold value by a value that is typical for the surrounding area. This is done, for example, by taking the average of the surrounding points and replacing the bad value by this average. If the image is being used for accurate photometry, you should still keep track of which pixels have been treated in this way. If you, at a later time, compute the average or total emission for an area including bad pixels, they should not be included in the summation.
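A sketch of this cosmetic replacement, given a boolean bad-pixel mask built from the flat fields; the mask is returned alongside the image so that later photometry can exclude the repaired pixels:

    import numpy as np
    from scipy.ndimage import median_filter

    def repair_bad_pixels(image, bad_mask):
        # Replace the flagged pixels by the median of their 3x3
        # neighbourhood; one bad value among nine barely moves the
        # median.  Keep 'bad_mask' for later photometry.
        local_median = median_filter(image, size=3)
        repaired = image.copy()
        repaired[bad_mask] = local_median[bad_mask]
        return repaired, bad_mask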

8. PLATE SCALE

We need to know the image plate scale (arcsec/pixel) if we want to convert counts per pixel into surface brightness, or if we want to measure accurate position differences on the images. Measuring absolute positions is difficult unless we know the absolute position of at least two objects in the field. The pointing positions registered in the image header files are not very accurate: probably only within 10 arcsec of correct.

The plate scale itself depends on the pixel size and the effective focal length of the telescope. These have been measured, and in the file DESCRIPT.CCD you can find the plate scale for the images on this disk, as reported in the JKT documentation. On the other hand, it is probably more accurate and reliable to measure it yourself. You can do this by finding an image where several stars with known positions are recorded (e.g. the images of M 42 = Orion, which contain the Trapezium stars), measuring their positions on the CCD frames, and calculating the plate scales from the position offsets. A reasonable source of positions is the Yale Bright Star Catalogue, by D. Hoffleit and C. Jaschek, Yale University Observatory, New Haven, 1982. Note that an approximate position of M42 is (1950) RA = 5h32m45s, Dec=-05d25m14s.
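A minimal sketch of the plate scale measurement from two stars with known coordinates (positions in degrees, pixel coordinates measured on the frame); the small-angle approximation is adequate for a field of a few arcminutes:

    import numpy as np

    def plate_scale(ra1, dec1, x1, y1, ra2, dec2, x2, y2):
        # Angular separation of the two stars in arcsec, with the
        # cos(Dec) foreshortening of Right Ascension offsets...
        dra = (ra2 - ra1) * np.cos(np.radians(0.5 * (dec1 + dec2)))
        ddec = dec2 - dec1
        sep_arcsec = 3600.0 * np.hypot(dra, ddec)
        # ...divided by their separation in pixels.
        sep_pixels = np.hypot(x2 - x1, y2 - y1)
        return sep_arcsec / sep_pixels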

9. SKY SUBTRACTION

The sky is not absolutely dark at any time. It is a diffuse source of radiation with contributions from scattered sunlight, scattered moonlight, and emission from excited atoms high in the atmosphere. If we want to measure the radiation from an astronomical object, we have to correct for the sky contribution. The sky contribution changes with time -- slowly at most times, but rapidly as the sun comes up or goes down -- but only varies slowly or not at all with position in the observed field.

If we want to measure the radiation from a small object, i.e. a star, we can correct for the sky in a number of ways. The most popular are:

A. Aperture photometry
Here we take an annulus around the star, far enough away that little or no radiation from the star is scattered into the annulus. Within the annulus we eliminate any pixels containing background stars, then measure the average intensity (per pixel) of all remaining pixels. We then draw a smaller circle around the star, total all the counts within the circle, and subtract the just-measured average intensity times the number of pixels from this total. The resulting total, when scaled by the exposure time and the flux scale from the standard stars, gives the flux from the star.
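A minimal sketch of aperture photometry; the median over the annulus takes the place of explicitly rejecting background stars:

    import numpy as np

    def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
        # Sky-subtracted counts within a circular aperture of radius
        # r_ap, with the sky level per pixel taken as the median in
        # the annulus r_in < r < r_out (the median suppresses faint
        # background stars in the annulus).
        y, x = np.indices(image.shape)
        r = np.hypot(x - x0, y - y0)
        sky_per_pixel = np.median(image[(r > r_in) & (r < r_out)])
        aperture = r < r_ap
        return image[aperture].sum() - sky_per_pixel * aperture.sum()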

B. Point Spread Function fitting
Here we average a number of bright, but unsaturated, stellar images across the frame to determine exactly what the intensity distribution over the pixels from an unresolved star should be. This distribution is called the Point Spread Function (PSF). If we then want to determine the intensity of a particular star, we do a (least squares) fit of the PSF to the image of the star plus sky. That is, we say that the pixel values around the star image must approximate a model formed by the sum of a constant (sky) plus a second constant (intensity) times the PSF. We determine these two constants by insisting that the mean square difference between the model and the actual measured pixel values be as small as possible. Multiplying the best value for the second constant times the PSF and summing gives the best guess for the total number of counts coming from the star itself.
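Because the model is linear in the two constants, the fit is a two-parameter linear least squares problem; a minimal sketch, with 'pixels' the cut-out around the star and 'psf' the PSF sampled on the same grid:

    import numpy as np

    def fit_psf(pixels, psf):
        # Least-squares fit of the model  sky + amplitude * PSF  to
        # the pixel values around a star.  The design matrix has one
        # column of ones (sky) and one column holding the PSF.
        A = np.column_stack([np.ones(psf.size), psf.ravel()])
        (sky, amplitude), *_ = np.linalg.lstsq(A, pixels.ravel(),
                                               rcond=None)
        # amplitude * psf.sum() is the total counts from the star.
        return sky, amplitude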

For very extended objects, where we are interested in the surface brightness rather than the flux of a small object, the best we can do is to determine the sky brightness at a few points in the field where there is no radiation from celestial objects, fit a simple model for the variation of the brightness over the field (for example, assume that there is no variation), and subtract this average brightness from all pixels on the image. A good way to determine the brightness in a patch is to take the median of all the values there; this is less sensitive to the contribution of faint stars in the patch than taking the mean value.

If the target objects are extremely extended, so that they cover the whole chip (which happens for some of the larger galaxies in this survey) there is no accurate way to determine the sky emission. One can perhaps (1) subtract values measured around the edge of the chip, knowing that this is probably too large a value, or (2) model the dropoff of emission toward the outside and extrapolate to large distances, knowing that this is probably inaccurate, or (3) find an exposure of a small object with the same filter, in the same general part of the sky and near in time to the large target, and use the value of sky emission there to correct the large image.

10. PHOTOMETRIC CALIBRATION

The flat fielding (Section 5) yields an image in which the sensitivity of every pixel is the same, but as yet unknown. For quantitative photometry, or for useful color comparisons, we want to know the sensitivity in absolute units, for example erg/s/cm^2/arcsec^2/Angstrom for a diffuse continuum source; erg/s/cm^2/Angstrom for an unresolved continuum source, and erg/s/cm^2/arcsec^2 for an extended line emitting region.

The basic procedure in all these cases is the same, but the details differ. We select from various catalogues a star near our desired target whose magnitude is known in the desired band. Then we take a short exposure of the star (enough counts for good statistics, but not saturating) through the same atmosphere, telescope, filters, and detectors as the target exposure. In the image, after bias subtraction, flat fielding, and sky subtraction, we add up all the counts in a region around the star large enough to contain all the light from the star.

From the star catalogue we know the stellar magnitude in the appropriate band. From standard tables we can look up the conversion from magnitude to erg/s/cm^2/Angstrom, and from the exposure time we can then calculate the erg/cm^2/Angstrom falling on the atmosphere above the telescope. Since we have just calculated the number of counts detected from the star, we know the conversion from erg/cm^2/Angstrom to (flat fielded) counts. Note that it is essential that we have treated the star and the scientific exposures identically to this point.

For an unresolved target, the rest of the procedure is simple. We find the target on the reduced image, total the counts coming from the target, and correct, if necessary, for sky emission in the same area. From what we just did, these counts can be converted to erg/cm^2/Angstrom, and by dividing by the exposure time we get the required erg/cm^2/s/Angstrom.

For resolved objects we obtain the surface brightness, in erg/cm^2/s/Angstrom/arcsec^2, by dividing the counts per pixel by the conversion factor obtained from the standard star (counts per erg/cm^2/Angstrom), by the exposure time (seconds), and by the plate scale squared ((arcsec/pixel)^2).
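The conversions in the last two paragraphs condense to one line each; a sketch with illustrative names ('cal' is the counts per erg/cm^2/Angstrom measured from the standard star, 'scale' the plate scale in arcsec/pixel):

    def source_flux(counts, cal, exposure_time):
        # Unresolved target: erg/cm^2/s/Angstrom.
        return counts / (cal * exposure_time)

    def surface_brightness(counts_per_pixel, cal, exposure_time, scale):
        # Resolved target: erg/cm^2/s/Angstrom/arcsec^2.
        return counts_per_pixel / (cal * exposure_time * scale**2)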

Measuring emission line fluxes is more difficult. First, one must remove the continuum emission contained within the line filter bandpass. One selects a broadband filter containing, within its bandpass, the narrow line wavelength; for the H-alpha line filter, this is usually the broad R-band filter. One photometrically calibrates both the narrow and broad filter images as just described (for either resolved or unresolved objects, as the case may be). One then subtracts the broad image from the narrow one. This should make most of the objects in the resulting image disappear. If they don't, you have probably made a small calibration error, and you should multiply the broadband image by a factor slightly different from unity until they do disappear. What is left over is line emission.

To get the strength in absolute units (erg/s/cm^2 or erg/s/cm^2/arcsec^2), to first order you multiply the previously calculated "continuum" emission strength (erg/s/cm^2/Angstrom, for instance) by the width of the narrow band filter in Angstrom, which you determine by finding the filter transmission curve in the FILTERS directory, normalizing the peak to unity, and integrating it with respect to wavelength (this yields a width in Angstroms). To second order you have to correct for the fact that there was also line emission in the broadband filter (so when you subtracted it, you subtracted too much at the position of the line source). To correct for this you divide the line flux you just found by (1 - W_N/W_B), where W_N is the width of the narrow band filter and W_B is the width of the broadband filter.
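A sketch of this arithmetic, with both inputs already photometrically calibrated as described above (the filter widths come from the transmission curves in the FILTERS directory):

    def line_flux(narrow, broad, w_narrow, w_broad):
        # narrow, broad: calibrated strengths in erg/s/cm^2/Angstrom
        # w_narrow, w_broad: filter widths W_N and W_B in Angstrom
        flux = (narrow - broad) * w_narrow        # first-order line flux
        return flux / (1.0 - w_narrow / w_broad)  # second-order correction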

AIR MASS CORRECTIONS
Ideally the standard stars should be measured near in time to the target sources (because the atmosphere changes with time) and at the same elevation, so that the path to the star traverses the same length through the atmosphere as that to the target. This path length is proportional to the cosecant of the elevation, which is known as the "air mass" and can be recovered from the FITS headers of the images.

For precise photometry, one should measure standard stars at various air mass values and interpolate the photometric corrections to the air mass of the target object. For this project that would have consumed a prohibitive amount of time measuring standards. In addition, the weather at La Palma during this project was sufficiently variable that high photometric accuracy could not be achieved in any case. So, in reducing these images, one should search through the observer's logs to find those standard measurements near in time to the target observations and matching the target air mass as well as possible.

11. NOISE AND OTHER FEATURES

Various features limit the accuracy to which intensities can be measured with a telescope/detector. Here we mention the most important.

A. QUANTUM NOISE

The light falling on the CCD consists (or so we think) of individual packets called photons. Each photon excites at most one electron into a well. Thus our output signal can only take certain values, proportional to the number of photons received. The photons arrive at random times, and in any time interval we may count a few more or a few less than the true average we would expect in that interval. This counting of random events is called "Poisson statistics", and it can be shown that if the true average number of events we expect in some time interval is "u", which need not be an integer, then the probability that we count exactly "n", which must be an integer, is u^n exp(-u)/n!. Here u^n means u to the power n and n! is n factorial. For large values of u and n this probability distribution can be approximated by a Gaussian with standard deviation sqrt(u). Thus if we measure exactly n electrons, and n is large, then the true average number of electrons we should have measured in the same time generally lies between n-sqrt(n) and n+sqrt(n) (at least there is a 68% probability of this). Thus the relative error -- the uncertainty divided by the value itself -- is approximately 1/sqrt(n), where again n is the number of detected electrons.
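A quick numerical check of the sqrt(u) rule, simulating many exposures with a true mean of 10,000 electrons:

    import numpy as np

    rng = np.random.default_rng(0)
    u = 10000.0                       # true mean electron count
    n = rng.poisson(u, size=100000)   # many simulated exposures
    print(n.std())                    # close to sqrt(u) = 100
    print(n.std() / n.mean())         # relative error, ~ 1/sqrt(u) = 1%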

The only way to decrease this uncertainty is to collect more electrons, which you can do by using a larger telescope, a more efficient detector, a broader filter, or by integrating longer. If these things are fixed the only alternative is adding pixels together; if the source is extended we can combine the counts from all the pixels it covers. This gives us a more accurate flux determination, at the cost of a loss of spatial resolution.

Note that the sqrt(n) uncertainty depends on the TOTAL number of electrons counted, including sky electrons. So if the sky is much brighter than the object you are looking at, the flux error is determined by the number of sky electrons rather than object electrons.

B. READOUT NOISE

When the chip amplifier magnifies the electron signal into a countable voltage, the thermal oscillations in the amplifier electronics add some noise to the output signal. This is called readout noise. It is essentially constant with time and position on the chip, but it gets worse if you try to read out the chip faster. The value is given in the chip description file DESCRIPT.CCD. For very low surface brightness objects, for a dark sky, and for short integration times (only a few counted electrons), the readout noise dominates the uncertainties.

C. HUM

The CCD readout amplifier must amplify very weak charge signals (a few electrons) up to a level where they can be easily measured. If the chip is not perfectly shielded from electromagnetic radiation, the electronics may act as an antenna, and the amplifier may strengthen the weak interference signals up to a level where they are visible on the images. The principal form of interference is the 50 Hz power lines that supply energy to the lights and motors in the telescope area. This is called "hum" because you would hear it as a hum if you hooked up an audio amplifier and loudspeaker to your equipment. It is visible on some images as a wavy pattern of lines over the whole image. It is most visible on images where the sky signal is weak, i.e. those taken with narrow band filters or the U-filter. It is also sometimes visible on bias frames.

D. BRIGHT STARS

The light from bright stars can concentrate a large charge in a small area of the chip. Besides saturating the chip in that area, the charge sometimes leaks out into adjacent pixels in the same column. This is called "bleeding". Also, the excess charge can disturb the readout amplifier for a short period of time, creating an artifact visible as a bright line for a few rows after the one containing the star.