# Algorithms

Manuscripts or documents on new methodologies for astronomical data analysis

- The Evolution of Early-Type Galaxies in Distant Clusters II: Internal Kinematics of 55 Galaxies in the z=0.33 Cluster CL1358+62 (Kelson et al. 2000)
- A new algorithm for measuring velocity dispersions is presented and used to measure the internal kinematics of the galaxies. A second algorithm derives rectification transformations from Fourier-based cross-correlations of subsections of 2D spectroscopic data, and a third automates the wavelength calibration of astronomical spectra using cross-correlations and ratios of line spacings.
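The Fourier-based cross-correlation underlying the rectification and wavelength-calibration steps reduces to recovering the shift between two 1-D profiles. A minimal sketch of that core operation (not the paper's full algorithm, which also fits the peak to sub-pixel precision):

```python
import numpy as np

def fourier_shift(a, b):
    """Estimate the integer-pixel shift of profile `a` relative to `b`
    using the FFT-based circular cross-correlation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    # circular cross-correlation via the convolution theorem
    xcorr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=len(a))
    shift = int(np.argmax(xcorr))
    # map the circular lag back into the range [-n/2, n/2)
    return shift if shift < len(a) // 2 else shift - len(a)
```

Applying this to many subsections of a 2D frame, against a reference row or a reference arc spectrum, yields the shifts from which a rectification transformation or wavelength solution can be built.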
- Optimal Techniques in Two-dimensional Spectroscopy: Background Subtraction for the 21st Century (Kelson 2003)
- In two-dimensional spectrographs, the optical distortions in the spatial and dispersion directions produce variations in the sub-pixel sampling of the background spectrum. Using knowledge of the camera distortions and the curvature of the spectral features, one can recover information about the background spectrum on wavelength scales much smaller than a pixel. As a result, one can propagate this better-sampled background spectrum through inverses of the distortion and rectification transformations, and accurately model the background spectrum in two-dimensional spectra for which the distortions have not been removed (i.e., the data have not been rebinned/rectified). The procedure, as outlined in this paper, is extremely insensitive to cosmic rays, hot pixels, etc. Because of this insensitivity to discrepant pixels, sky modeling and subtraction need not be deferred to the later steps of a reduction pipeline: sky subtraction can be performed as one of the earliest tasks, perhaps just after dividing by a flat-field. And because the background can be subtracted without first having to "clean" cosmic rays, such bad pixel values can be trivially identified after removal of the two-dimensional sky background.
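The essence of the method, fitting a smooth function of each pixel's true wavelength directly to the unrectified frame, with the distorted sampling supplying the sub-pixel coverage, can be sketched as follows. The knot count, clipping threshold, and use of scipy's `LSQUnivariateSpline` are illustrative choices, not the paper's exact implementation:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_sky(data, wave, n_knots=40, n_iter=3, clip=5.0):
    """Fit a B-spline sky model in wavelength to an unrectified 2-D frame.
    `wave` gives the wavelength of every pixel (from the distortion and
    wavelength solutions); spectral curvature samples the sky at many
    sub-pixel phases, so the spline can recover structure finer than a
    pixel.  Iterative sigma-clipping makes the fit insensitive to cosmic
    rays and hot pixels."""
    w, d = wave.ravel(), data.ravel()
    order = np.argsort(w)
    w, d = w[order], d[order]
    good = np.ones(w.size, dtype=bool)
    for _ in range(n_iter):
        # interior knots at quantiles of the surviving pixels
        knots = np.quantile(w[good], np.linspace(0, 1, n_knots + 2)[1:-1])
        spline = LSQUnivariateSpline(w[good], d[good], knots)
        resid = d - spline(w)
        sigma = 1.4826 * np.median(np.abs(resid[good]))  # robust scatter
        good = np.abs(resid) < clip * sigma              # reject discrepant pixels
    # evaluate the model at every pixel of the original (distorted) frame
    return spline(wave.ravel()).reshape(wave.shape)
```

Subtracting the returned model from `data` removes the sky without ever rebinning the frame, and the rejected pixels immediately flag cosmic rays and hot pixels.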
- Optimal Measurements of Redshifts using the Weighted Cross-Correlation (Kelson et al. 2003)
- A large component of astronomy involves the measurement of redshifts using absorption line spectroscopy. Typically such data have non-uniform sources of noise and other systematic defects not easily dealt with when one employs Fourier-based techniques, because such methods weight the data uniformly. Here we develop a method for the measurement of redshifts using the cross-correlation in the real domain, in which one is free to employ non-uniform weighting. The implementation we describe in this paper allows for the arbitrary exclusion of bad data, and weights each remaining pixel by the inverse of the variance. This prescription for weighting the pixels has the advantage that the units of the cross-correlation are exactly half those of $\chi^2$. Thus, the topology of the peak of the weighted cross-correlation is directly related to the confidence limits on the measured redshifts. The validity of the redshifts and formal errors derived with this method is tested using simulations of galaxy spectra with a broad range of signal-to-noise ratios. These simulations also include tests of the effects of template mismatch. Overall, template mismatch is only significant when the data have high signal-to-noise ratios, and in such cases the systematic error due to mismatch is minimized when one chooses the template that minimizes the error in the redshift. While the weighted cross-correlation is here discussed in the context of extragalactic redshift surveys, this method is also useful for measuring the radial velocity of stars and other astronomical objects.
- Optimal Techniques in Spectroscopy: The Extraction of Multiple Spectra from Single and Multiple Exposures using Moments and B-Splines (Kelson 2005)
- Spectrograms are distorted in both the spatial and dispersion directions. As a result the data are not generally sampled on a uniform rectilinear grid of physically useful coordinates. Observers have often been required to resample their data onto rectilinear coordinate systems in order to continue with standard methods of analysis. The rebinning process involves replacing the data with an interpolating function, followed by sampling of that function at desired regular intervals of physically useful coordinates. These interpolating functions usually do not make full use of the information available in the data, and tend to degrade the resolution of spectra. With modern computing resources and knowledge of the distortions, one can construct interpolating functions that optimally reproduce the data. The information content of the data is preserved, without, for example, degradation of the resolution in the spectra. This working document discusses how to construct such interpolating functions for use in rebinning and extracting spectra. While the discussion is focused on the specific application for echellograms obtained with the MIKE spectrograph at Magellan, the method has been successfully applied to data obtained with other instruments, including MaGE, FIRE, LDSS3, IMACS, DEIMOS, FORS, and FORS2.
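The resolution argument can be illustrated with a toy comparison: resampling a well-sampled narrow line at a sub-pixel offset with linear interpolation versus a cubic B-spline (using scipy's generic `make_interp_spline` as a stand-in for the purpose-built interpolants the paper constructs):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.arange(60, dtype=float)
line = np.exp(-0.5 * ((x - 30.0) / 1.5) ** 2)   # narrow spectral line

xs = x[3:-3] + 0.37                   # resample at a sub-pixel offset
linear = np.interp(xs, x, line)       # simple linear rebinning
cubic = make_interp_spline(x, line, k=3)(xs)

# true profile value at the new sample nearest the peak
true_peak = np.exp(-0.5 * (0.37 / 1.5) ** 2)
```

Linear interpolation visibly clips the peak of the line, the degradation of resolution described above, while the cubic B-spline tracks the true profile at the sub-percent level.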
- ImageMatch (Kelson & Burns)
- Non-parametric PSF matching for image subtraction, incorporating image rectification and point-spread-function matching. ImageMatch is primarily used to subtract multiple-epoch imaging data in order to isolate the flux from transient sources: it solves for a non-parametric NxN kernel that artificially blurs one image to match the seeing of another, then subtracts one from the other. While ImageMatch is not as fast as other algorithms, it can handle cases where a simple Gaussian kernel is insufficient for the task: multiple telescopes/instruments, few point sources in the field, etc.
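The core of such non-parametric PSF matching can be sketched as a linear least-squares solve for the kernel pixels. Here the circular boundary handling and the spatially constant kernel are simplifications relative to ImageMatch itself:

```python
import numpy as np

def solve_kernel(ref, target, k=5):
    """Solve for the k x k convolution kernel K minimizing
    ||ref (*) K - target||^2.  Each kernel pixel is a free parameter,
    so the fit is ordinary linear least squares; no functional form
    (e.g. a Gaussian) is assumed for the kernel."""
    h = k // 2
    cols = []
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            # shifted copy of ref; trim a border of h pixels so every
            # row of the design matrix uses in-bounds data
            shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
            cols.append(shifted[h:-h, h:-h].ravel())
    A = np.stack(cols, axis=1)
    b = target[h:-h, h:-h].ravel()
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef.reshape(k, k)
```

Convolving the sharper image with the solved kernel and subtracting then isolates the transient flux; ImageMatch additionally handles the rectification step and the harder cases noted above.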