CSP Data Reduction Recipe
This is a step-by-step recipe for reducing CSP PANIC data. It should also be useful for getting a feel for the procedure in general.
- Start IRAF (make sure you have a display going). Load the PANIC tools:
cl> p_reduce
- Put a single night's data into a directory. Find the first and last sequence number of the data. Let's say 1 and 180. Check that the headers are good (they almost always have problems with extra characters in the names).
cl> p_chk_headers 1 180 > headers
- Check the file headers. The output columns are: 1-filename, 2-title, 3-object name, 4-obstype, 5-airmass, 6-night. In particular, make sure that obstype is one of: dark, tflat, dflat, astro, standard. The object name only matters for standards, and it must be of the form sj#### (e.g., sj9115) so that the pipeline finds the right standard magnitude (see p_4go). NOTE: spaces in any of the columns will cause the pipeline to barf, even if they are quoted.
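If you'd rather not eyeball the listing, something like the following sketch can flag the common problems. It assumes the p_chk_headers output is one whitespace-separated row per file with the six columns described above; that format is an assumption, so adjust the parsing to match what you actually get.

```python
import re

# Valid obstype values per the pipeline (assumption: taken from the list above).
VALID_OBSTYPES = {"dark", "tflat", "dflat", "astro", "standard"}

def check_headers(path="headers"):
    """Return a list of (line_number, problem) tuples for suspect rows."""
    problems = []
    for lineno, line in enumerate(open(path), start=1):
        cols = line.split()
        if not cols:
            continue
        if len(cols) != 6:
            # More than six fields usually means a stray space inside a column,
            # which will make the pipeline barf.
            problems.append((lineno, "wrong column count (stray space?)"))
            continue
        fname, title, obj, obstype, airmass, night = cols
        if obstype not in VALID_OBSTYPES:
            problems.append((lineno, "bad obstype: " + obstype))
        if obstype == "standard" and not re.match(r"sj\d+$", obj):
            problems.append((lineno, "standard name not of form sj####: " + obj))
    return problems
```

Anything this reports should be fixed (or confirmed in the logs) before running p_fix_headers.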
- Update the headers:
cl> p_fix_headers headers
- Make the tflat_pairs and dflat_pairs files. Best to use the logs (especially for tflats, since there may have been several trial exposures to get the counts right). NEW: There is now a python script that can automate most of this for routine observing. You can find it here.
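To illustrate the bookkeeping (this is NOT the linked script, just a sketch): group consecutive flat frames taken in the same filter and take the first and last sequence number of each run. Both the input tuples and the two-numbers-per-pair output format are assumptions here; trial exposures noted in the logs would still need to be trimmed by hand.

```python
def flat_pairs(frames, want="tflat"):
    """frames: iterable of (sequence_number, obstype, filter) tuples,
    in observing order. Returns (first_seq, last_seq) for each
    consecutive run of `want` frames in a single filter."""
    pairs = []
    run = []  # current consecutive run: list of (seq, filter)
    for seq, obstype, filt in frames:
        if obstype == want and (not run or (seq == run[-1][0] + 1 and filt == run[-1][1])):
            run.append((seq, filt))
        else:
            if run:
                pairs.append((run[0][0], run[-1][0]))
            # Start a new run only if this frame is itself a flat.
            run = [(seq, filt)] if obstype == want else []
    if run:
        pairs.append((run[0][0], run[-1][0]))
    return pairs
```

Writing each returned pair as "n1 n2" on its own line would then give a pairs file of the same shape as unc_pairs below.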
- Make the uncrowded pairs file. Use the logs to figure this out, but the following lines can get you started:
cl> hselect irp*_001.fits FILE 'OBSTYPE=="astro"&&DITHER==1' > astro1
cl> hselect irp*_001.fits FILE 'OBSTYPE=="astro"&&DITHER==NDITHERS' > astro2
cl> ! paste astro1 astro2 > unc_pairs
- Make the standard pairs file. You can do something similar to the above. However, you'll need to add two more columns corresponding to the n1,n2 values for a pass close in time to the standard with the same filter (used for making a sky frame). Use the logs.
cl> hselect irp*_001.fits FILE 'OBSTYPE=="standard"&&DITHER==1' > stand1
cl> hselect irp*_001.fits FILE 'OBSTYPE=="standard"&&DITHER==NDITHERS' > stand2
cl> ! paste stand1 stand2 > std_pairs
- Check to see if you have all the darks you need. Here's a quick way to do it:
cl> hselect irp*_001.fits EXPTIME 'OBSTYPE!="dark"' > exptimes
cl> ! sort -nu exptimes > astro_times
cl> hselect irp*_001.fits EXPTIME 'OBSTYPE=="dark"' > dark_exps
cl> ! grep -v -f dark_exps exptimes
This will output a list of exposure times for which there is no dark. You'll need to get a backup dark as detailed in the next item.
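The grep trick above can be fooled if an exposure time appears as a substring of another (e.g., "5.0" matching "15.0"). A set difference avoids that; this sketch assumes exptimes and dark_exps are the one-value-per-line listings produced by the hselect commands above.

```python
def missing_darks(exptimes_path="exptimes", darks_path="dark_exps"):
    """Return the sorted exposure times used by non-dark frames
    that have no matching dark frame."""
    wanted = {line.strip() for line in open(exptimes_path) if line.strip()}
    have = {line.strip() for line in open(darks_path) if line.strip()}
    # Exact set difference, so "5.0" never matches inside "15.0".
    return sorted(wanted - have, key=float)
```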
- Check to see if you have all the calibration files you need. This includes darks and flats. If any are missing, make a file called defaults. For example, if there are no 5-sec dark exposures taken and no J twilight flat, but some were taken the night before, you might have:
cl> ! cat defaults
dk_005_22Jan2006.fits /data/2006/22Jan/
nptflatJc_22Jan2006.fits /data/2006/22Jan/
- Pick a fiducial star for each of the uncrowded pairs and standard pairs. It is essential to choose the standard star as the fiducial star (it doesn't matter for the uncrowded pairs). Make sure you have the finder charts handy.
cl> p_mfid u
cl> p_mfid s
- Run the pipeline. The latest version of p_1go has several options besides n1,n2 and the processing flag. You can also choose which steps in the pipeline should be run. Epar p_1go, then execute it. Or do everything (the default):
cl> p_1go 1 180 nus
- Once the pipeline is done, check the science images (stack_test_###_###.fits). Also check the standards (stk_std_####_####.fits). If everything looks good, proceed to the next step. Otherwise, fix what needs fixing and re-run the pipeline.
- The pipeline produced a file ngt_[date] where [date] is the night. Edit it and look for night constants that are unusual. You can check the standard images that produced these odd values, or just throw them out. Save the updated ngt_[date] file as ngt_[date]_keep. Now run the Y-band calibration script on it and the averaging script:
cl> p_yband_cal_1 [date]
cl> p_zavg [date]
- Create a list of files for which you wish to have ZMAG updated, and run p_4go to do so:
cl> ls stack_test*.fits > snlist
cl> p_4go snlist
- Go forth and be merry.