Script Writing
Saturday, September 15, 2012
IRAF Reduction of Hamilton Spectra
***Right now, I am stuck on identifying spectral lines to transform pixel number to wavelength. The arclamp spectra seem to be unstable over short timescales, and I have few reliable reference spectra to compare to. If anyone has any experience with this, feel free to contact me!***
Below is a somewhat detailed list of steps for reducing Hamilton echelle spectra. It makes use of IRAF commands, as well as a Python script I wrote to remove the scattered light.
Step 1: Run imred.ccdred.combine on all calibration files (separated by filter): biases, flats, and arclamps. Use "average" for arclamps and "median" for everything else, and turn off "mclip". Rename the output files *_comb.fits.
Step 2: Subtract the combined bias frame from all calibration and data files using imarith. Rename the output files *_sub.fits.
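For anyone following along outside of IRAF, the logic of Steps 1-2 can be sketched in plain numpy. This is only an illustration of what combine and the bias subtraction are doing; the toy arrays below stand in for real FITS frames (which you would load with something like astropy.io.fits):

```python
import numpy as np

def combine_frames(frames, method="median"):
    """Stack a list of 2D calibration frames into one master frame.

    Mirroring the combine settings above: "average" for arclamps,
    "median" for biases and flats.
    """
    stack = np.stack(frames)
    if method == "median":
        return np.median(stack, axis=0)
    return np.mean(stack, axis=0)

# Toy 2x2 "frames" standing in for real bias exposures.
biases = [np.full((2, 2), v, dtype=float) for v in (98.0, 100.0, 102.0)]
master_bias = combine_frames(biases, method="median")

# Step 2: bias-subtract a (toy) flat, as imarith would.
flat = np.full((2, 2), 500.0)
flat_sub = flat - master_bias
```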
Step 3: Trim everything (calibration AND data files) using ccdproc with every operation except "trim" turned off. Find the peak flux at the center of the image and trim out to where the counts drop by 10%. (For the N2 files, the trim section is [850:3550,1:2293].)
Step 4: Identify apertures using imred.echelle.apfind. Find the column number corresponding to (roughly) the center of the image (for N2, it's ~1300). Use nsum=10, nfind=80-90, minsep~15, maxsep~45. Examine the image to make sure that the apertures you want are actually being found; if the image looks too smooth, the program may be working along the wrong axis, in which case epar imred.echelle and change the axis to "1". Hit q to quit the window and save the file (this will create a "database" folder).
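The idea behind apfind's aperture search can be sketched as finding local maxima along one cross-dispersion column, separated by at least minsep pixels. This is a toy sketch, not apfind's actual algorithm, and the Gaussian "orders" below are synthetic:

```python
import numpy as np

def find_apertures(column, threshold=0.0, minsep=15):
    """Locate order centers along one detector column by finding
    local maxima that stand above a threshold and are separated
    by at least minsep pixels."""
    peaks = []
    for i in range(1, len(column) - 1):
        is_peak = column[i] > column[i - 1] and column[i] >= column[i + 1]
        if is_peak and column[i] > threshold:
            if not peaks or i - peaks[-1] >= minsep:
                peaks.append(i)
    return peaks

# Toy profile: Gaussian "orders" every 30 pixels.
x = np.arange(300)
profile = np.zeros_like(x, dtype=float)
for c in range(15, 300, 30):
    profile += np.exp(-0.5 * ((x - c) / 3.0) ** 2)

centers = find_apertures(profile, threshold=0.5, minsep=15)
```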
Step 5: epar aptrace: use legendre, order 4. Keep hitting q, then Enter to approve each fit. If a fit has an outlier, click on it and hit "d" to delete it, then "f" to refit the points, and q/Enter to approve.
Step 6: Use apedit (on wide_trim, with ref=flat_trim) to make sure the aperture profiles look good (q/Enter through them).
Step 7: (optional) Write a Python program to find the interorder minima along a column, fit a polynomial to that list, and then create a 2D image in which each column is filled with the values of the polynomial. Subtract that image from the wide flat, which creates a new wide flat to work with.
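My actual scattered-light script isn't reproduced here, but the idea in Step 7 can be sketched as follows. The synthetic "image" and the list of interorder minima rows are purely illustrative:

```python
import numpy as np

def scattered_light_model(image, minima_rows, poly_order=3):
    """Fit a low-order polynomial to the interorder minima of each
    column and evaluate it over the full column, building a smooth
    2D model of the scattered-light background."""
    ny, nx = image.shape
    rows = np.arange(ny)
    model = np.empty_like(image, dtype=float)
    for col in range(nx):
        coeffs = np.polyfit(minima_rows, image[minima_rows, col], poly_order)
        model[:, col] = np.polyval(coeffs, rows)
    return model

# Toy example: a background that rises linearly up the column.
ny, nx = 50, 10
rows = np.arange(ny)
background = 5.0 + 0.1 * rows
image = np.tile(background[:, None], (1, nx))

minima_rows = np.arange(0, ny, 7)          # pretend interorder minima
model = scattered_light_model(image, minima_rows, poly_order=1)
cleaned = image - model                    # new wide flat, background removed
```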
Step 8: Run apflatten on wide_trim to flatten the apertures. Keep ONLY the flatten and fit values set to yes; use fit2d, no clean, saturation=65000, readnoise=4, gain=0.9, and fit a legendre polynomial of order ~20. Hit q/Enter, or change the order where necessary. There is an issue with this program: speckles inherent in the detector (areas of lower detector sensitivity) create bad edge effects in the apertures and, in some cases, wipe out an order altogether. The best way we have found to deal with this is to use fit2d instead of fit1d in apflatten, and to widen the apertures that are being wiped out. To widen a single aperture, use apedit with the commands listed below.
Step 9: Flatten the data and arclamps using ccdproc, with everything but flattening turned off (d#_trim.fits --> d#_flat.fits). Use the latest version of wide_flat.
Step 10: Do the aperture tracing on the science targets. apedit your data file, using flat_trim (the science flat) as a reference. Hit "a", then "c" to center all apertures. Start with the highest order and hit ".", then "b" to view the background plot. If the background level doesn't cut out too much of the flux in the aperture, hit q, then "-", "b" to see the next aperture. If the background level is bad, hit "r" to redraw, "z" to delete the bad sampling section, then "s" twice to reset the sampling section; then hit "f" to refit and move on to the next aperture.
Step 11: aptrace the flattened water star
Step 12: For arclamp spectrum extraction, use apsum (with ONLY extract turned on). Use the arc_trim file, with the flattened water star as the reference. Output = arc_ext.fits.
Step 13: For extraction of the data spectra, use apall on the data files, with the flattened water star as a reference. Set background=average and skybox=6 (for Hamilton data). On weights: better-exposed files (200 counts above baseline) don't need weighting. When using weighting, set fit2d, manually change the hidden parameter apall1.polysep=[0.1,0.95], and set clean=yes (clean=no when not weighting). If weighting, set the saturation level a few hundred counts above the peak; if not weighting, it can be left as INDEF. q/Enter through all the apertures.
Step 14: Identify arclamp lines using ecident. Run this on the extracted arclamp (arc_ext.fits). Identify lines by comparing to previous ThAr lamp spectra (from Carl, or online on the Lick website). Mark each line with "m" and type in the correct wavelength. Do this for a couple of lines in at least 10 apertures, using "j" and "k" to move between apertures. Then hit "f" to fit. Re-open the file, and the program will have automatically identified more lines.
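The pixel-to-wavelength step I'm stuck on ultimately reduces, within a single aperture, to fitting a smooth dispersion function through (pixel, wavelength) pairs of identified lines; ecident actually fits all orders simultaneously, but the one-order version is easy to sketch. The line list below is entirely made up for illustration:

```python
import numpy as np

# Hypothetical identified lines in one aperture: pixel centers and
# the ThAr wavelengths typed in at the "m" prompt.
pixels = np.array([210.0, 540.0, 905.0, 1310.0, 1760.0])
waves = np.array([5001.2, 5004.8, 5008.9, 5013.6, 5019.1])  # Angstroms

# Low-order polynomial dispersion solution for this order.
coeffs = np.polyfit(pixels, waves, 2)
dispersion = np.poly1d(coeffs)

# Residuals of the fit flag lines that were likely misidentified.
residuals = waves - dispersion(pixels)
```

Large residuals on a single line are the usual sign that the wrong ThAr wavelength was typed in for it.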
Useful commands:
epar a task, hit :go to run it without quitting.
ls files (or *.fits) > listname
When calling a list in the parameters, use @listname
mkiraf in whatever directory you want to be able to start iraf from - this makes a login.cl file in that directory.
in apedit:
use "y" to change the width of an aperture to whatever the width of the line at the point of the horizontal cursor is
use "l" and "u" to set the lower and upper limits of the width
use "z" to resize
c to find the center
a to select all the apertures (hit a again to deselect)
. to select cursor aperture
to widen all apertures at once, set llimit=-8, ulimit=8, r_grow=10, then hit "a", "z", "a"
"r" to redraw the plot
To zoom in on a section, hit "w", then define the corners of the box by hitting "e" twice. To zoom out again, hit "w", then "a".
SED Fitting Routine
Starting with 12 data points: Hipparcos B and V mags, 2MASS J, H, and K mags, and WISE Bands 1-4, I want to fit a stellar photosphere from the Hauschildt et al. 1999 models. The purpose of the SED fitting (in my case) is to determine if there is an excess (flux above the photosphere) at 22 micron (WISE Band 4). This can be indicative of a dusty debris disk around a star.
Downloading the models was step one - I am storing a folder on my computer which contains a set of 94 2-column text files. These text files are just the x and y coordinates for an SED. Each one corresponds to a stellar photosphere of a different temperature and logg. A simple chi-squared fit to the data points should be sufficient to predict the photosphere.
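The chi-squared selection over the model grid could look something like the sketch below. The grid here is mocked up with two fake "temperature" entries; the real script would read the 94 Hauschildt files and evaluate each one at the data bandpasses:

```python
import numpy as np

def best_model(obs_flux, obs_err, models):
    """Pick the model grid entry minimizing chi-squared, with a free
    multiplicative scale (the star's solid angle) solved analytically
    for each model."""
    best = None
    for name, model_flux in models.items():
        w = 1.0 / obs_err**2
        # Scale a minimizing sum(w * (obs - a*model)^2), from d(chi2)/da = 0.
        a = np.sum(w * obs_flux * model_flux) / np.sum(w * model_flux**2)
        chi2 = np.sum(w * (obs_flux - a * model_flux) ** 2)
        if best is None or chi2 < best[1]:
            best = (name, chi2, a)
    return best

# Toy observations at four bandpasses, and a two-entry "grid".
obs = np.array([10.0, 8.0, 5.0, 3.0])
err = np.full(4, 0.5)
models = {
    "T4000": np.array([2.0, 1.6, 1.0, 0.6]),  # matches obs up to a scale of 5
    "T6000": np.array([4.0, 1.0, 1.0, 1.0]),
}
name, chi2, scale = best_model(obs, err, models)
```

Solving for the scale analytically keeps the grid search one-dimensional over temperature/logg rather than adding the normalization as a fit parameter.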
The model SEDs are not sufficient on their own, however. Each data point corresponds to a bandpass with a certain filter response function. This function must be convolved with the model SED over the bandwidth at each data point to properly predict the measurable photospheric flux at that wavelength.
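That synthetic-photometry step can be sketched as a response-weighted integral of the model SED over each bandpass. The top-hat filter below is made up for illustration; real response curves come from the filter documentation (e.g. the 2MASS and WISE explanatory supplements):

```python
import numpy as np

def _trapz(y, x):
    # Plain trapezoidal integration (kept explicit for portability).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def synthetic_flux(wave_model, flux_model, wave_filt, resp_filt):
    """Response-weighted mean flux of a model SED through a filter:
    integrate flux * response over wavelength, normalized by the
    integral of the response itself."""
    resp = np.interp(wave_model, wave_filt, resp_filt, left=0.0, right=0.0)
    return _trapz(flux_model * resp, wave_model) / _trapz(resp, wave_model)

# Toy model SED: flux rising linearly with wavelength (microns).
wave = np.linspace(1.0, 3.0, 201)
flux = 2.0 * wave

# Made-up top-hat filter spanning 1.5 to 2.5 microns.
wf = np.array([1.5, 2.5])
rf = np.array([1.0, 1.0])

f = synthetic_flux(wave, flux, wf, rf)  # ~ band-averaged model flux
```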
This is proving to be more difficult. I am attempting to keep the relevant files contained in a single folder so that when the script is finished and working, I can send it to anyone so that they can run it on their computer easily.