DESCRIPTION AND GENERAL PROCEDURE FOR THE LINEARIZATION OF THE SPEX ARRAYS

I. Data

In order to characterize the response of our arrays, a series of flat field exposures was taken with integration times varied such that the entire well, up to saturation, was well sampled (Figure 1). For every integration time, multiple samples were taken (discarding the first sample) for statistical purposes; a mean and a variance of the mean were calculated and used for the data shown in Figure 1. Between each of the different integration times, a one second exposure was taken to monitor the flux throughout the data set.

II. Method

The correction for the nonlinearity of an array is applied on a pixel by pixel basis. This section describes the method used for calculating the correction for each pixel. A count rate is first calculated by dividing the observed counts by the integration time for each datum. Figure 2 shows a plot of this count rate as a function of the well depth in counts. A perfectly linear array would produce a flat response. A polynomial is then fit to these data, giving the count rate as a function of counts:

    count rate[counts] = counts/itime = a0 + a1*counts + ... + an*counts^n .   (1)

The correction to the flat response is then

    counts' = (counts/(count rate[counts]))*a0 = a0*itime
            = counts/(1 + (a1/a0)*counts + ... + (an/a0)*counts^n) ,   (2)

since we then have

    counts'/itime = a0 ,   (3)

where we have arbitrarily decided to set the level to correct to as a0, the intercept of the count rate curve (the count rate at zero counts).

To make this more transparent, let us speak in terms of photons. Since the number of photons detected by each pixel is assumed to be proportional to the integration time for our response characterization data set, a fit to the count rate as a function of well depth in counts is, up to a constant of proportionality, a relation between the observed counts-photons ratio and the well depth in counts. Therefore Equation 1 can be rewritten as

    count rate[counts] = counts/itime = counts/(constant*photons)
                       = counts-photons ratio[counts]/constant .   (4)

This relation, aside from the constant, is a characteristic of the pixel, independent of how the data are taken (the constant will vary with the brightness of the flat field). Thus when some number of counts is observed at any time, we simply divide out this relation to linearize the data, giving

    counts'' = counts/(count rate[counts]) = counts/(counts/(constant*photons))
             = constant*photons ,   (5)

which is close to what is desired - the corrected counts are proportional to the photons detected by the pixel. Additionally, it is important that the constant of proportionality between the corrected counts and the photons be independent of the way in which the response characterization data are taken; we must therefore multiply by a quantity that cancels the constant in Equation 5. Now the a0 that was arbitrarily chosen in Equation 2 is simply the count rate at zero counts or, from Equation 4,

    count rate[0] = counts-photons ratio[0]/constant .   (6)

Multiplying the correction in Equation 5 by this factor gives

    counts' = (counts-photons ratio[0])*photons ,   (7)

which is reasonable since the counts-photons ratio is a characteristic of the pixel. (It is still arbitrary which well depth's counts-photons ratio is chosen as the constant of proportionality in Equation 7; for our correction zero counts was used.)
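To make Equation 2 concrete, here is a minimal sketch of the single-pixel correction in Python (the actual routines are written in IDL, and the coefficient values below are purely illustrative, not measured):

    import numpy as np

    def linearize_counts(counts, coeffs):
        """Apply the correction of Equation 2 to one pixel's observed counts.

        coeffs = [a0, a1, ..., an] are the polynomial coefficients from the
        count rate vs. well depth fit of Equation 1.
        """
        coeffs = np.asarray(coeffs, dtype=float)
        powers = np.arange(coeffs.size)
        # denominator 1 + (a1/a0)*counts + ... + (an/a0)*counts^n
        denom = np.sum((coeffs / coeffs[0]) * counts ** powers)
        return counts / denom

    # Illustrative coefficients for a mildly nonlinear pixel (not real data)
    coeffs = [100.0, -2.0e-4, -1.0e-9]    # a0 in counts/s
    itime = 10.0                          # seconds
    observed = 998.0                      # counts consistent with Equation 1
    corrected = linearize_counts(observed, coeffs)
    print(corrected / itime)              # roughly a0 = 100, per Equation 3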
III. Procedure

Our linearization routines comprise a two-part process. The first part is the reduction of the characterization data into a few important parameters for each pixel (polynomial fit coefficients, saturation roll-off point, and a bad pixel indicator). This is accomplished by the LINCALC procedure. The second part is to use the reduced data to correct an image, which is realized by the LINCORRECT function.

III.A. LINCALC

The LINCALC routine reads in the characterization FITS files and outputs up to three FITS files that contain the fit coefficient images, a saturation roll-off point image, and an optional bad pixel mask. The reduction process is split into two tasks, with an optional third intermediate task that corrects for an observed flux drift throughout the data set. First, the data is read in and sorted with the LINREAD routine. If requested, the integration times are then corrected for the observed flux drift with LINFLUXCORRECT. Finally, the information is passed to the LINLINEFIT procedure, which fits the polynomial, finds the saturation roll-off, and can also determine which pixels are bad. In the end, LINCALC takes the output variables of LINLINEFIT and writes them to FITS files.

III.A.1. LINREAD

This procedure handles the task of reading in the characterization data set as well as sorting the data into a few important variables. To conserve the amount of memory that the output variables take up, LINREAD supplies the option to restrict the reading to a particular rectangular box on the array (through the COLRANGE and ROWRANGE keywords). Also as a memory conservation tactic, the sorting is done first via the headers of the FITS files; the images are then read in as needed. This section gives a summary of the general procedure.

In order to sort the data before reading in the images, the headers of each FITS file are read in and the COMMENT fields as well as the ITIME fields are stored. The COMMENT fields are used to differentiate between the images taken for the purpose of later correcting for a flux drift and the actual data images. From the ITIME fields of the actual data image headers, a list of the unique integration times is made and stored in an output variable (called itimes). If the user has requested that the information pertaining to making a correction for a flux drift be saved (let us call this the fluxcheck information), the TIME_OBS and DATE_OBS fields are also stored. These universal time fields are then converted to seconds with respect to the first image taken using the TIMEDIFF function, and the time (in seconds) of the middle of each exposure is saved. These times are then sorted into those corresponding to the flux monitoring images and those corresponding to the actual data files. The images that were taken for flux monitoring are then read in and stored (only for the pixels within the box specified by the COLRANGE and ROWRANGE keywords), along with the corresponding times mentioned above, in an output variable (called fluxcheck).

Using the list of unique integration times, LINREAD runs through a loop where each iteration corresponds to a different integration time. For a single iteration, the images with integration times equal to the one for that iteration are found. These images, excluding the first one taken, are then read in and truncated to the specified box. A mean image as well as a variance image are calculated and stored in two output variables (data and var respectively).
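The averaging done in each iteration of this loop amounts to the following sketch (Python, assuming the frames for one integration time have already been read into a 3-D stack; the actual LINREAD is an IDL procedure that also performs the FITS sorting described above):

    import numpy as np

    def average_itime_set(frames):
        """Combine the frames taken at a single integration time.

        frames : array of shape (nframes, nrows, ncols), with the first
                 frame of the set already discarded as described above.
        Returns the mean image and the variance of the mean.
        """
        frames = np.asarray(frames, dtype=float)
        mean_img = frames.mean(axis=0)
        # variance of the mean = sample variance / number of frames
        var_of_mean = frames.var(axis=0, ddof=1) / frames.shape[0]
        return mean_img, var_of_mean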
Also, if the user has requested the fluxcheck information, the universal times in seconds corresponding to the middle of those exposures are averaged and stored in an output variable (called rtimes). Once the loop has finished, the output variables contain all the characterization data sorted in a fashion that makes the information easily accessible.

If the user already has the parameters needed to correct for the nonlinearity (from a run of the LINCALC routine), LINREAD supplies the option to correct the images as they are read in (before averaging) using the LINCORRECT routine. If this option is set, the user can force an output (called saturated) of the saturated pixels in the output data set (where the mean images described above are greater than the saturation image). Also, the variance described above is by default the variance of the mean; LINREAD supplies the option to calculate the variance of the distribution instead.

III.A.2. LINFLUXCORRECT

This function is used to correct the integration times for a flux drift detected by the images taken to monitor the flux throughout the data set. The correction is based on the following principles. The images taken to monitor the flux give information from which we can interpolate the flux at any time (in particular the time at which a given integration time set was taken - one of the values in the rtimes variable described above). Now we would like the integration times to be proportional to the photons received. From the relation

    photons = flux*(integration time) ,   (8)

we see that this is the case if the flux is constant. However, if we find from our flux monitoring data that the flux is in fact not constant, we would like to fold this variation into the integration times as well. Thus we have

    photons = (some constant flux)*{(integration time)*[flux/(some constant flux)]} ,   (9)

which gives the photons proportional to the quantity in the braces. So if we multiply our original integration times by the ratio of the interpolated flux to some constant flux, we obtain our desired goal of having a quantity that is proportional to the photons received.

When actually implementing the correction, it is done on a pixel by pixel basis since the flat field is not really flat (especially for a spectrographic instrument). Since the flux monitoring images that we took are only single second exposures, they are quite noisy, so they are first smoothed (with a 5 x 5 pixel smoothing window) by the LINFLUXCORRECT routine (which also supplies the option to skip this smoothing). Then, for each pixel, the counts vs. time data (shown in Figure 3) is smoothed (a 19 data point window was used for the spectrographic array on SpeX) to get rid of extra noise, since we are only concerned with a general drift (this option can also be turned off). Next, a spline function is used on this data to interpolate to the flux that we should see at the times (given by the rtimes output variable of LINREAD) at which the characterization data are taken (Figure 3). A constant flux to correct to is determined by taking the median of the data shown in Figure 3. To avoid singular corrected integration times, when this constant flux was found to be zero, a small arbitrary value was used instead. The corrected integration times are then given by the quantity in the braces in Equation 9. The new integration times are returned to the user after the routine has completed this procedure for every pixel.
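For a single pixel, the correction of Equation 9 amounts to the following sketch (Python, with a scipy spline standing in for the interpolation; the smoothing window follows the value quoted above, and all names are illustrative rather than the actual IDL code):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def correct_itimes(itimes, rtimes, flux_times, flux_counts,
                       smooth_window=19):
        """Scale the integration times by the interpolated flux drift,
        i.e. the quantity in the braces in Equation 9 (one pixel)."""
        # smooth the noisy one-second flux-monitoring counts
        kernel = np.ones(smooth_window) / smooth_window
        flux_smooth = np.convolve(flux_counts, kernel, mode="same")
        # spline-interpolate the flux to the times of the data sets
        flux_at_data = CubicSpline(flux_times, flux_smooth)(rtimes)
        # constant flux to correct to: the median of the monitoring data
        flux_const = np.median(flux_smooth)
        if flux_const == 0:
            flux_const = 1e-6   # small arbitrary value to avoid dividing by zero
        return np.asarray(itimes) * flux_at_data / flux_const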
III.A.3. LINLINEFIT

The LINLINEFIT procedure does the bulk (if not all) of the reduction of the characterization data. Taking the output variables of the LINREAD and possibly the LINFLUXCORRECT routines as inputs, it calculates the parameters needed to make a correction for the nonlinearity of an array. The method used for this correction is described above in Section II. As mentioned, the reduction is done on a pixel by pixel basis. Along with the fit coefficients described, it is also important to define a level (well depth) above which no correction is attempted, since once the pixel is truly saturated any precise information about the number of photons detected is lost and a correction is impossible. Let us reserve the word "saturation" for this level that is to be defined (we will use "true saturation" when speaking of the point where the counts are independent of the photons detected). The following summarizes the general procedure of LINLINEFIT (for a single pixel).

The data that is input into LINLINEFIT is shown in Figure 1. As described, we first convert the data into a count rate by dividing the counts by the corresponding integration time. This is plotted against counts in Figure 2, showing the data to which we intend to fit a polynomial. Since the errors are calculated statistically and the fitting routine we are using (POLY_FIT1D) does not allow errors of zero, any vanishing errors were replaced with the average of the surrounding errors (in count space). Now it is important that a reasonable range within which to fit the polynomial is found. To do this, LINLINEFIT starts off with a fit range from the minimum counts observed for this pixel to 90% of the maximum counts observed - essentially taking the samples from the bottom of the well to 90% of the maximum well depth. A first attempt at the fit is made within this range (we used a third order polynomial). Near saturation we expect the count rate versus well depth curve to have negative curvature, so to prevent noisy data from corrupting the fit, if the curvature at the upper fit boundary is found to be positive, the fit is repeated with a slightly larger range (always beginning at the minimum counts - we incremented the upper boundary by 50 counts) until this curvature is below zero.
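A sketch of this fit-and-expand loop for one pixel is given below (Python, with numpy's polynomial fit standing in for the IDL POLY_FIT1D routine; the error weighting is omitted for brevity and the thresholds follow the values quoted above):

    import numpy as np
    from numpy.polynomial import polynomial as P

    def fit_count_rate(counts, rate, order=3, step=50.0):
        """Fit count rate vs. well depth for one pixel, expanding the upper
        fit boundary until the fitted curve has negative curvature there."""
        counts = np.asarray(counts, dtype=float)
        rate = np.asarray(rate, dtype=float)
        upper = 0.9 * counts.max()        # start at 90% of the maximum counts
        while True:
            in_range = counts <= upper    # from the minimum counts up to upper
            # coefficients a0..an in increasing order, as in Equation 1
            coeffs = P.polyfit(counts[in_range], rate[in_range], order)
            # curvature (second derivative) of the fit at the upper boundary
            curvature = P.polyval(upper, P.polyder(coeffs, 2))
            if curvature < 0 or upper >= counts.max():
                return coeffs, upper
            upper += step                 # expand the range and refit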
An added benefit of performing a linearity correction in this manner is that it lends itself to a nice way of identifying bad pixels. This is an option in LINLINEFIT: if set, a mask of the bad pixels is returned to the user by way of an output variable. There are three different criteria for determining whether a pixel is bad, described here in order of strength. The first is that a pixel is deemed bad if there are too few data points (zero or one) within the initial fit range (below 90% of the maximum observed counts) to do a fit. In this case the pixel is obviously dead, which is why this is the strongest criterion. Whether or not the option to find the bad pixels is set, the coefficients and saturation are set to an undefined value (NaN in IDL) and the rest of the reduction for the pixel is skipped; if the option is set, the pixel is also marked as bad. The second criterion uses the reduced chi-squared value of the fit, since it gives a measure of how well the fit represents the data. Thus, a pixel is deemed bad if the reduced chi-squared is found to be above a certain threshold (we used a threshold of 100 - a good fit should have a reduced chi-squared value of one). Only if the option to find the bad pixels is set are the coefficients and saturation values set to "undefined", the pixel marked as bad, and the rest of the reduction skipped. The last criterion, by far the weakest, is that the curvature of the count rate vs. well depth curve at the final upper fit boundary must be below zero. Thus, if the fit range has been expanded to include all of the data and this curvature is still positive, the pixel is marked as bad. However, since this mostly occurs because the characterization data does not include the true saturation point for a given pixel, the parameters are kept so they may still be used: the coefficients are saved, the saturation is defined as the maximum observed counts, and the routine moves on to the next pixel (these steps are only carried out if the option to find the bad pixels is set).

Finally, the saturation point is determined by the point where the data deviates from the fit by more than some threshold percentage of the fit level (we used one percent). This deviation point is only searched for outside of the fit range (greater than the upper fit boundary). If the number of data points outside of the fit range is less than two, then saturation is set to the maximum observed counts. Otherwise, LINLINEFIT first calculates residuals and performs a weighted (by the errors) smoothing over three data points of these residuals (the endpoints are averaged with their neighbors). The deviation point is then defined as the first (lowest counts) instance where the smoothed residual exceeds the threshold percentage of the fit level by two sigma. If this deviation point exists, then saturation is defined as the point some number of data points (we used two) below the deviation point; if not, it is defined as the maximum observed counts. Figure 4 shows the important elements of the reduction for a single pixel.

After all of this is finished, LINLINEFIT moves to the next pixel, repeating the entire process until all the pixels are reduced. At the end, the coefficients, saturation, and optional bad pixel images are returned to the user via output variables.

III.B. LINCORRECT

Once the coefficients and the saturation points have been calculated by the LINCALC routine, they may be used to correct an image for the nonlinearity of the array. This correction, explained in Section II, is done by the LINCORRECT function, which takes as inputs the image to correct, a variable that contains the coefficients for each pixel, another variable containing the saturation points, and an optional variable that is to contain information as to which pixels were above the saturation point. Given these inputs, LINCORRECT first finds the saturated pixels and saves their original values so that they can be restored after the correction is made. If the user has supplied the optional saturated pixel output variable, then this variable is marked where the saturated pixels were found, generating a saturated pixel mask. The correction is then carried out by first dividing the coefficients by the constant coefficient, then dividing the image by the denominator polynomial of Equation 2. Since this is a simple manipulation of matrices, IDL can handle this task for the whole image at once very efficiently. After the image is corrected, the saturated pixels are replaced with their original values and the new image is returned to the user, which completes our linearity correction.
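The whole-image correction can be sketched as follows (Python/numpy rather than the actual IDL function; the array shapes assumed for the coefficient and saturation variables are indicated in the comments):

    import numpy as np

    def lincorrect(image, coeffs, saturation):
        """Correct an image for nonlinearity (Equation 2), leaving pixels
        above their saturation point untouched.

        image      : (nrows, ncols) observed counts
        coeffs     : (norder+1, nrows, ncols) fit coefficients a0..an per pixel
        saturation : (nrows, ncols) saturation points in counts
        Returns the corrected image and a saturated-pixel mask.
        """
        saturated = image > saturation
        # denominator 1 + (a1/a0)*c + ... + (an/a0)*c^n, built per pixel
        ratios = coeffs / coeffs[0]
        denom = np.zeros(image.shape, dtype=float)
        for n, rn in enumerate(ratios):
            denom += rn * image ** n
        corrected = image / denom
        # restore the original values of the saturated pixels
        corrected[saturated] = image[saturated]
        return corrected, saturated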
IV. Support Routines

Along with the routines involved in performing the linearity correction, a couple of support routines that calculate important information about the array were also developed. The PCTDEVFROMLIN routine calculates the fractional deviation of the observed counts from the corrected counts as a function of well depth, while the GAINCALC routine does the linear fit to the variance vs. well depth data that is needed to calculate the gain and readnoise of the detector. A plotting routine called LINPLOT was also developed to assist visualization of the various forms of data.

IV.1. PCTDEVFROMLIN

This function uses the coefficients obtained from the LINLINEFIT routine to calculate the average (over all pixels within a specified box) fractional deviation of the observed counts from the corrected counts (or equivalently the detector's average deviation from linearity, assuming that the correction is perfectly linear - let us call this the "average percent deviation") as a function of well depth. It takes as inputs a variable (called counts) that either contains the sampling of the well depth at which PCTDEVFROMLIN is to calculate the percent deviation, or that is to contain (upon completing execution) the evenly spaced sampling that is used, and the filenames of the files containing the coefficients, saturation points, and (if desired) the bad pixel mask calculated by LINLINEFIT. The function then returns the average percent deviations at the sampling of the well depth that is used.

The following is a description of the method used to calculate the average percent deviation. Starting off for a single pixel, let us define P[c, (ai/a0)] as the polynomial

    P[c, (ai/a0)] = 1 + (a1/a0)*c + ... + (an/a0)*c^n ,   (10)

where c is the observed counts for the pixel and the ai's are the coefficients calculated by LINLINEFIT. Thus the correction is given by

    c' = c/P[c, (ai/a0)] .   (11)

Now the percent deviation for a single pixel is given by

    percent deviation = 1 - c/c' ,   (12)

or, substituting from Equations 11 and 10,

    percent deviation = 1 - P[c, (ai/a0)] = -(a1/a0)*c - ... - (an/a0)*c^n .   (13)

When averaging over all pixels, where p is the pixel index and N is the number of pixels, we obtain

    average percent deviation = (sum over p)[-(a1p/a0p)*c - ... - (anp/a0p)*c^n]/N .   (14)

If we move the sum over p inside the sum over the coefficients and use Equation 10, we finally get

    average percent deviation = -((sum over p)[a1p/a0p]/N)*c - ... - ((sum over p)[anp/a0p]/N)*c^n
                              = 1 - P[c, (sum over p)[aip/a0p]/N] .   (15)

Thus, we see that we can calculate the average percent deviation by first performing an average over the coefficients and then calculating the percent deviation as we would for a single pixel. This greatly facilitates the calculation.

Now, before entering into the main calculation, if requested, PCTDEVFROMLIN searches for bad pixels that could corrupt the data, resulting in an inaccurate calculation of the average percent deviation. In addition to the bad pixel mask (if given), PCTDEVFROMLIN can reject pixels with a saturation or constant coefficient value below some value (defined in the code as 1000 counts for saturation and 200 counts for the constant coefficient). Once these pixels are rejected, the routine proceeds to take the averages in Equation 15.
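The coefficient-averaged calculation of Equation 15 can be sketched as follows (Python, assuming the per-pixel coefficients of the good pixels have been gathered into a 2-D array; the function name is illustrative, not the actual IDL routine):

    import numpy as np
    from numpy.polynomial import polynomial as P

    def average_percent_deviation(counts, coeffs):
        """Average fractional deviation from linearity (Equation 15).

        counts : 1-D sampling of the well depth in counts
        coeffs : (norder+1, npixels) coefficients a0..an of the good pixels
        """
        # average the normalized coefficients ai/a0 over all pixels first
        mean_ratios = (coeffs / coeffs[0]).mean(axis=1)
        # percent deviation = 1 - P[c, <ai/a0>]  (Equations 13 and 15)
        return 1.0 - P.polyval(counts, mean_ratios)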
An average saturation value is then calculated by averaging the saturation values. If the counts variable is defined, any elements above this average are excluded and the average saturation point is appended (the routine also supplies the option to leave the counts variable unaltered by use of the NOALTCOUNTS keyword). If it is not defined, a set of values (with some large number of elements determined by a constant in the code - we used 1000) evenly spaced from zero to the average saturation value is stored in this variable. Finally, the average percent deviation is calculated and returned to the user.

IV.2. GAINCALC

The GAINCALC function uses the variance and counts information from the characterization data set to calculate the values needed to figure out the gain and readnoise of the detector. These values are the coefficients of a first order fit to the variance vs. counts data. Given these coefficients, the gain (in electrons/ADU) and readnoise (in ADU) are

    gain = 1/a1   and   (16)

    readnoise = sqrt(a0) ,   (17)

where a0 is the constant coefficient and a1 is the first order coefficient. The function takes as inputs the data and var variables that are outputs of the LINREAD routine (with the NOVAROFMEAN keyword set), and outputs the coefficients described above. Two optional outputs (the countbins and meanvar variables) return to the user the data to which the fit was made.

Because the data for a single pixel is often very noisy, GAINCALC can take an average over a box on the array (specified by the COLUMNS and ROWS keywords). In the same manner as in the PCTDEVFROMLIN routine, pixels that may corrupt the calculation can be rejected. Now, in taking the average, we cannot simply average the values of all the pixels in the box, since different pixels give different numbers of counts for equal integration times. Instead, we would like to average data points that have similar count values. Thus, we bin all the data that was not rejected along the count axis and average the variances within each bin. The count bins are determined by two constants in the code that give the maximum counts and the number of bins (we set these to 10000 and 100 respectively). When actually carrying out the fit, the middle of each bin is used as the count value for that bin. If the saturation file is given, in addition to being used to reject certain pixels, any count values that lie above a level some fixed number of counts (we used 2000) below the corresponding saturation point are also rejected. Once the data is all collected into bins and averaged, a plot appears and the user may be asked to give a maximum count value at which to attempt the fit (input by clicking on the plot). Using the POLY_FIT1D routine, the fit is then performed in the range from zero counts to this user-supplied maximum. If desired (by setting the AUTO keyword), the user can bypass this interactive step, performing the fit with the ROBUSTPOLY1D routine, which automatically rejects bad data points. After the fit is completed, it is plotted over the data and the coefficients are returned.
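The binning and first-order fit that GAINCALC performs can be sketched as follows (Python; the bin limits follow the constants quoted above, the interactive selection of the upper fit limit is omitted, and the function name is illustrative):

    import numpy as np
    from numpy.polynomial import polynomial as P

    def gain_readnoise(counts, variances, max_counts=10000.0, nbins=100):
        """Bin variance vs. counts, fit a line, and return the gain and
        readnoise of Equations 16 and 17.

        counts, variances : 1-D arrays of the (non-rejected) mean counts
                            and variances gathered over the averaging box.
        """
        edges = np.linspace(0.0, max_counts, nbins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])     # middle of each bin
        which = np.digitize(counts, edges) - 1
        mean_var = np.array([variances[which == b].mean()
                             if np.any(which == b) else np.nan
                             for b in range(nbins)])
        good = np.isfinite(mean_var)
        # first order fit: variance = a0 + a1*counts
        a0, a1 = P.polyfit(centers[good], mean_var[good], 1)
        gain = 1.0 / a1            # electrons per ADU
        readnoise = np.sqrt(a0)    # ADU
        return gain, readnoise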
IV.3. LINPLOT

This procedure does not perform any calculations; it is simply a plotting routine designed to quickly produce various plots involving the characterization data and analysis for multiple pixels (the box of pixels to be plotted is specified by the NCOLS, NROWS, STARTCOL, and STARTROW keywords). It is also easily expandable to include different types of plots. There are currently five different plots available (set with the PLOTTYPE keyword):

IV.3.A. PLOTTYPE = 0

This is the standard counts vs. integration time plot (Figure 1). For this type of plot, the integration times, corresponding average counts, and corresponding variances (the itimes, data, and var outputs of the LINREAD routine respectively) must be input by the user.

IV.3.B. PLOTTYPE = 1

This is the count rate vs. well depth plot shown in Figure 2. The same inputs as in the PLOTTYPE = 0 case are required.

IV.3.C. PLOTTYPE = 2

This plot adds the fit as well as the saturation point to the PLOTTYPE = 1 plot. In addition to the same inputs as above, the coefficients and saturation points (from the LINLINEFIT routine) must be supplied.

IV.3.D. PLOTTYPE = 3

This is similar to the PLOTTYPE = 0 plot except that it is used to compare two different data sets (for example, two runs of LINREAD where the option to correct for the nonlinearity has been set on one of them). The two data sets must be input separated by two dummy variables that take the place of the coefficients and saturation inputs.

IV.3.E. PLOTTYPE = 4

This is a plot of the fractional deviation of the observed counts from the corrected counts as a function of the well depth in counts. The same inputs needed for the PLOTTYPE = 3 case are required.

V. Unresolved Issues

V.1. Flux Drift Correction

As mentioned above, in the process of calculating the parameters needed for the characterization of the nonlinearity of an array, there is an option (implemented by the LINFLUXCORRECT routine) to correct for a flux drift observed in the data. We have still not decided whether or not to make this correction. After reducing the characterization data both ways, there is a huge discrepancy, as shown by the fractional deviation from linearity - an output of the PCTDEVFROMLIN routine (Figure 5, which shows data for the guider array). The reason there is a question about whether or not to carry out this correction is that we know for a fact that the observed drift is not due to a drift in the flux from the source (if it were, then we should obviously make the correction). The reasoning behind this conclusion is described in the next section. For now let us just mention that the data has hinted that the drift we are seeing may be caused by residual charge that is left on the array. If this is entirely the case, then since residual charge has the same effect as dark current and flux (i.e. it results in a signal proportional to the integration time), we can safely apply the flux drift correction.

V.2. Faulty Characterization Data

V.2.A. Realization of the problem

In the previous section, we mentioned that our observed flux drift is not due to a flux drift at all and may have something to do with a residual charge left on the array. We started to form this hypothesis after a close inspection of the flux calibration data (Figure 6, which plots the average over a large box on the array). As a reminder, the flux calibration data are one second exposures that are taken before each set of exposures for a given integration time. The particular data shown in Figure 6 were taken with increasing integration times (i.e. after the one second integrations come the two second ones, then the three second ones, and so on). Note from the figure the odd behavior that the signal is not monotonically increasing but decreases in sets of two or three before jumping up again.
It was decided that the flux source (the calibration mirror) could not have been causing this behavior, and we hypothesized that the drift had something to do with the way we took the data (also, the magnitude of the increase could not be explained by a reasonable temperature drift). As a check, we took the data using decreasing integration times instead. The flux calibration data from this set are shown in Figure 7. Inspection of this figure shows right away that the manner in which the data is taken has a huge effect on the data itself. One explanation for this drift is that for the longer exposure times, there is a large amount of residual charge left on the array. Since some of the residual charge gets taken away with each read, this effect is diminished for the shorter integration times. Although this helps to explain the general drift in Figures 6 and 7, the oscillations in groups of two or three are still a mystery. If the problem is caused by residual charge, then as reasoned above, we should be able to simply apply the correction to the drift. If this actually works, then our linearity correction should be independent of the way that the characterization data was taken (i.e. the correction should be identical whether we use the data taken with increasing integration times or the data taken with decreasing integration times). However, this is not the case, as shown by Figure 8, which plots the fractional deviation from linearity calculated in all of the various ways.

V.2.B. Search for better ways to take the data

After discovering that the problems imposed by the observed "flux drift" were not going to be easily solved by simply applying a correction, we decided to try to get rid of this systematic error entirely.

V.2.B.1. Checking the residuals

Our first step was to check the residuals and what effect flushing the array has on beating them down. To do this, we exposed the array and took an image, then, after blocking all light from the array, took a set of a few images to watch the residuals. This was also done with some flushing in between the images. For a complete description of the data taking, check the OBJECT field of the FITS header of one of the image files for the macro that was used and look in the lin_char/macros/Guider/ directory for that macro. From these data we concluded that there was a large residual and that flushing does not really help much (see the figures in the lin_char/25apr/ directory with the .sc.ps extension).

V.2.B.2. Trying to beat down the residuals by zeroing the bias

Next we tried various schemes to beat down the residuals by zeroing the bias. The various schemes can again be found in the lin_char/macros/Guider/ directory. In the end we never found a perfect way to take the data, but we made significant progress toward taking the data with as little residual charge as possible.

V.3. Inherent Problem of Undetermined Well Depth

Aside from the problem of how well we took the characterization data, there is a more inherent problem associated with the manner in which observers take their data. Before discussing this problem, let us review the different schemes in which the array is read out.

V.3.A. Reading out the array

The array is read out a single pixel at a time, starting from one corner of the array and ending in the opposite corner - taking a short but non-zero amount of time.
Since the array is still receiving photons during this short readout interval, there is an inherent non-uniformity across the array for each single read (i.e. the last pixel read will have a greater number of counts than the first for a spatially uniform illumination). We can depict this readout in the manner shown in Figure 9, where we once again see the familiar operating curve giving the observed output signal as a function of input photons (we have decided to label the abscissa by collected photons rather than time because this is more representative of what is really going on, and the curve then applies to all flux levels). The rectangle in the middle of the curve represents the readout, where the horizontal dimension depicts the number of photons received during the readout (or equivalently, for a constant photon source, the time the readout takes), while the vertical dimension depicts the difference in the output signal between the first pixel and the last pixel (again, this is only the case for a spatially uniform illumination).

In addition to reading out the array by these "single-samples", one can also read out the array by a method called "correlated double-sampling". This method helps to eliminate some of the non-uniformity from a single read (as well as having other benefits) by taking two consecutive reads and differencing them. The first read is called the "pedestal" and the second is called the "signal". Figure 10 shows this type of readout. As is evident from the figure, the inherent nonlinearity of the detector causes this differencing to imperfectly remove the non-uniformity. However, since the time the array takes to perform each single read is extremely short (it is greatly exaggerated in this and the previous figure) and the nonlinearity is small, this imperfection is second order and thus negligible.

V.3.B. Problems with correcting correlated double-sample images for nonlinearity

Recall from our discussion above on correcting for the nonlinearity of an array that the correction depends simply on the counts that a given pixel records. For a single-sample image, the counts recorded by each pixel give a representative value of the well depth and lead to accurate corrections. However, for correlated double-sample images (which are most often used) we have the problem of losing the information about where we are in the well, because we do not know where the pedestal was taken (only the difference between signal and pedestal is recorded). Figure 11 illustrates this point by comparing correlated double-sample images taken at two different flux levels. The integration times for these two images are such that both give nearly the same number of counts, although it is clear that they operate on two different parts of the well. Since our method for nonlinearity correction has no way of knowing what the real well depth is, the correction for both of these images would be almost identical.

V.3.C. A possible remedy to the problems of correlated double-samples

To remedy this problem of an undeterminable well depth for correlated double-samples, we simply need to recover the pedestal. Thus, any correlated double-sample image (or set of identical images) must be preceded by a very accurate single-sample image with an integration time equal to the time it takes to take the pedestal (see below for schemes to average images electronically to beat down the readnoise).
This pedestal must then be added to every image to recover the actual depth in the well. However, in order to apply a correction for the nonlinearity, this will also have to be done to the characterization data so that we work with consistent well depths.

V.3.D. Methods for electronically averaging images

There are two ways in which to electronically add images together: co-adds and Non-Destructive Reads (NDRs). With co-adds the array is reset before each new image (which can be either single- or double-sampled) is taken. Each consecutive image is electronically added to a stack, which produces desirable effects on the noise. Since the array is reset before every image, in terms of linearity this is just like averaging the images after saving the files and has no consequence for making a correction (except for the possibility that switching the electronics to co-add mode could change the linearity curve of the detector). On the other hand, NDRs are taken without resetting the array. Images are simply read out and summed consecutively while the array is still receiving photons and moving up the operating curve. Figure 12 illustrates this in the case of correlated double-samples. From the figure it is apparent that NDRs degrade the information one has on the well depth by averaging images taken at different parts of the well. Thus, if highly linear data are desired, it is recommended that NDRs not be used, because of the loss of information on the precise well depth.