
IRAF replacement aspects

Generally speaking, this concerns replacing spectral data reduction tools that exist in IRAF, in particular within the onedspec and twodspec packages.

The work described here presumes use of general WCS machinery developed for the JWST calibration pipelines to describe the dispersion/spatial relationships of the raw data to spectral and spatial coordinates.

The following attempts to categorize the kinds of tools needed, listing the corresponding IRAF tasks (omitting format-related and informational utilities):

1) wcs determination, manipulation

1ds) autoidentify, identify, reidentify, scoords, sflip, specshift

2ds) fceval, fitcoords, identify, reidentify, sflip, specshift, apfind, aptrace

2) spectral feature fitting

1ds) continuum, fitprofs, sfit

2ds) background, apfit

3) instrumental or environmental corrections

1ds) calibrate, deredden, disptrans, dopcor, sbands, sensfunc, setairmass, skytweak, telluric

2ds) calibrate, deredden, dopcor, illumination, response, sensfunc, setairmass, apflatten, apnoise, apnormalize, apscatter

4) visualization/analysis (often folds in fitting and other capabilities)

1ds) bplot, specplot, splot

2ds) bplot, specplot, splot

5) resampling and spectrum combining

1ds) dispcor, odcombine, scombine, sinterp

2ds) lscombine, transform

6) extraction

2ds) aprecenter, apsum, apall

Lurking in the background is the fact that some of the IRAF tasks involve interactive graphics capabilities. We will need to determine how we want to solve that problem. If the interactions are kept simple, then matplotlib's interactive capabilities may suffice, with the benefit that they would be quite portable; on the other hand, they are likely to feel clunkier to the user. A more GUI-based solution means settling on a GUI toolkit (probably Qt) to handle the interaction issues. As part of the early work, we will need to settle on an approach (unless we do both, of course :-). My impression is that the key functionality to implement is:

1) wcs determination (at some level for some spectral modes anyway)

2) spectral extraction for 2d data

3) visualization/interaction

Instrumental and environmental corrections, spectral fitting, and spectral combination can probably wait a bit. We do need some spectral fitting capability for wcs determination, though only a subset of it.

Relationship to tools needed for JWST

We must also keep in mind what needs to be done for JWST that isn't necessarily part of replacing existing IRAF functionality, so it is good to list the kinds of things needed there. To date, the biggest concern on the part of the mission office and INS has been tools to analyze and visualize IFU data. The following list folds in a number of tools that Tracy Beck has outlined as desirable for JWST.

I've included the original text along with comments on how I've interpreted it (perhaps mistakenly), raising issues that haven't been explicitly raised before.

So before starting with the list, I'll immediately raise a couple of such issues. While the list is mostly oriented towards visualization, I would like to separate what is intrinsically visualization from what isn't, since we will be trying to keep these concerns separate. Computational aspects are best kept isolated since they can be reused in many ways by other tools, and used for scripting.

The second issue is that the list doesn't really address what form of data it operates on. I think the implicit assumption is that the data cube holds resampled data, but for many computational aspects it doesn't need to be. We could even envision displaying resampled data while performing computations on unresampled data. For example, the user selects a region on the visualization to be summed (e.g., the first item in the list), but the summing is actually done on the unresampled data (not that this is the most compelling case for doing so; it isn't). Operations on unresampled data are generally more complex, but it is often considered more desirable to work that way. The general presumption is that early versions of these tools will work on resampled data, and that we will fold in the ability to work on unresampled data at a later stage. For example, fitting line profiles may end up being done on unresampled data (a better case than summing), with the WCS model used to transform the fitting results into real wavelengths or velocities after the fit.
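A minimal sketch of that last point, assuming a hypothetical polynomial dispersion model standing in for the real WCS machinery: the line is measured in pixel space (here with a simple intensity-weighted centroid rather than a full Gaussian fit), and only the result is transformed to wavelength.

```python
import numpy as np

# Hypothetical dispersion model: pixel index -> wavelength (Angstroms).
# In practice this would come from the JWST WCS machinery.
def dispersion_model(pix):
    return 5000.0 + 2.5 * pix + 1e-4 * pix ** 2

# Toy unresampled 1-D line profile in pixel space.
pix = np.arange(100)
flux = 10.0 * np.exp(-0.5 * ((pix - 42.3) / 3.0) ** 2)

# Measure the line centroid in pixel space (intensity-weighted moment,
# standing in for a proper profile fit) ...
centroid_pix = np.sum(pix * flux) / np.sum(flux)

# ... then use the WCS model to express the result as a wavelength.
centroid_wave = dispersion_model(centroid_pix)
```

The same pattern generalizes: any quantity fit in detector coordinates can be pushed through the WCS model afterwards, rather than resampling the data first.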

Some very basic IFU data analysis tasks include (but are not limited to):

  • Save summed 1-D spectrum from a selected spatial region as a .fits file [two tools really, one to interactively define regions, another to sum given region definitions.]
  • Collapse a wavelength range in the z dimension to create a 2-D image – user input includes range in wavelength to collapse data and method for collapsing (e.g., sum, average, maximum pixel value). [Likewise, perhaps a tool to identify the wavelength range (and perhaps a weighting bandpass?), and another to do the integration in wavelength]
  • Provide basic cube arithmetic options… add/subtract/multiply/divide 2-D or 1-D data from a 3-D cube region [this shouldn't be too difficult given numpy machinery. The main issue is how this is handled in a visual tool, if it is at all]
  • Provide a method for selecting regions based on data characteristics – i.e., extract a 1-D spectrum from all regions where the flux is greater than XX value. [Again, two aspects: defining regions based on thresholds or clustering algorithms such as exist in ndimage, and how much of that is handled interactively, e.g., through adjusting the threshold and seeing the 3-D (2-D, 1-D) regions that result]
  • Create a 3-D or 2-D image that has been masked for data at low flux value – i.e., set regions with flux less than XX to 0.0. (This is important for analysis of velocity profiles – need to mask out low flux data to limit noise). [almost the same thing]
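The basic operations above map almost directly onto numpy array machinery once a resampled cube is in memory. A minimal sketch on a toy cube (the (wavelength, y, x) axis order and all names are assumptions, not a settled interface):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy resampled cube: (wavelength, y, x) axis order is an assumption.
cube = rng.random((50, 20, 20))

# 1) Sum a 1-D spectrum over a selected spatial region (boolean mask).
region = np.zeros((20, 20), dtype=bool)
region[5:10, 5:10] = True
spectrum = cube[:, region].sum(axis=1)   # shape (50,)

# 2) Collapse a wavelength range into a 2-D image (sum here; mean or
#    max are one-word changes).
image = cube[10:30].sum(axis=0)          # shape (20, 20)

# 3) Cube arithmetic via broadcasting: subtract a 2-D image from every
#    plane, or divide every spatial pixel by a 1-D spectrum.
continuum_sub = cube - image / 20.0
normed = cube / spectrum[:, None, None]

# 4) Threshold masking: zero out low-flux voxels to limit noise.
masked = np.where(cube > 0.8, cube, 0.0)
```

The interesting design questions are therefore mostly about region definition and GUI hooks, not the arithmetic itself.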

A few fancier but very common IFU data analysis methods include (but are not limited to):

  • Fit continuum flux level in the wavelength dimension and subtract it off of an emission line (users define continuum region and emission line region). Provide the option to create a collapsed 2-D data product, in addition to the continuum-subtracted 3-D product. Make this possible/easy to do for multiple emission lines. [A combination of a user graphical tool and a fitting tool. But is this fitting the continuum in 3-D, 2-D, or just 1-D? I'm not sure I understand what the 2-D product is, or which dimension it is summed over. It seems to me it would be nice if an algorithm could guess where emission lines are, provide a starting guess at the continuum regions, and let the user adjust. But strictly speaking, for a cube, this is a 3-D mask, with attendant visualization problems. Presumably she wants a 1-D mask for the whole cube. Perhaps the minimal masked region (in wavelength) is what should be applied to all spatial points (presumably the strongest emission signals)? ANDing in the spatial dimension?]
  • Provide the means to take a few inputs – such as emission line images or emission line datacubes – to derive physical parameters (i.e., n_e from [Fe II] line ratio maps or A_v from H2 line maps). Allow for user inputs to define the mathematical relations between spectra or images (i.e., user can define - final product = (img/cube 1 ) * (img/cube 2) * relation). Have several canned relations available automatically (such as gas parameters for [Fe II] and H2 line regions, or common physical parameters derived from optical emission lines in red-shifted galaxies). [Doesn't sound like a good GUI tool initially. One should be able to do this with numpy expressions in a much more straightforward way.] [Comments from others in the branch echo the fact that this is hard to do within a gui, and that we are moving into areas that depend quite a bit on individual needs; so the impression is that we should provide tools to make calculations of such things easy through the use of appropriate building blocks and scripts]
  • Fit a Gaussian to and derive centroids and higher order moments of an emission line profile to derive gas kinematics of emission – input is a 3-D datacube sub-region, output is a set of 2-D images that describe the Gaussian fits, and velocity centroid, dispersion and higher order moments of the profile. [GUI only to set the initial central wavelengths?]
  • Fit TWO Gaussian profiles to an emission line profile to derive gas kinematics of emission – input is a 3-D datacube sub-region, output is two sets of 2-D images that describe the velocity centroid, dispersion and higher order moments of the profile. [Likewise]
  • Fit a user input 1-D spectrum (e.g., instrumental line profile model) to an emission line profile to derive gas kinematics of emission – input is a 3-D datacube sub-region and 1-D spectral template, output is two sets of 2-D images that describe the velocity centroid, dispersion and higher order moments of the profile. [ditto]
  • Correlate the IFU datacube to a template spectrum to generate stellar kinematic diagnostics from absorption line spectra in a continuum source. Inputs are the 3-D datacube region (or sub-cube region) plus a template 1-D spectrum (uploadable in the analysis window), outputs are 2-D image maps of the velocity and dispersion. [very similar to the preceding]
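As a sketch of the kinematic-map items above, the following computes intensity-weighted moments along the spectral axis of a toy cube as a cheap stand-in for full per-spaxel Gaussian fits (grid, rest wavelength, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
nw, ny, nx = 40, 8, 8
wave = np.linspace(6550.0, 6600.0, nw)   # hypothetical wavelength grid

# Toy cube: one Gaussian emission line per spaxel, centroid varying
# spatially to mimic gas kinematics.
centers = 6575.0 + rng.normal(0.0, 2.0, (ny, nx))
cube = np.exp(-0.5 * ((wave[:, None, None] - centers) / 3.0) ** 2)

# Intensity-weighted moments along the spectral axis: moment 1 gives a
# centroid map, moment 2 a dispersion map (a real tool would fit one or
# two Gaussians, or a template profile, per spaxel instead).
total = cube.sum(axis=0)
centroid_map = (wave[:, None, None] * cube).sum(axis=0) / total
disp_map = np.sqrt(
    ((wave[:, None, None] - centroid_map) ** 2 * cube).sum(axis=0) / total
)

# Convert centroids to velocities relative to a rest wavelength (km/s).
rest = 6575.0
velocity_map = 299792.458 * (centroid_map - rest) / rest
```

The outputs are exactly the 2-D images called for above; swapping the moment step for a fit against a user-supplied template covers the correlation item as well.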

[The following probably is doable mostly with matplotlib if interactive aspects aren't super important.]

In addition to analyzing 3-D .fits files, it is very important for users to be able to easily make publication-quality figures for their projects. Creating some plotting tools that can be called and accessed through a GUI interface can speed the process of presenting complex IFU datasets. Some views of the DS9 2-D and 3-D displays can already be captured and saved as publication-level figures.

Some expanded plotting and presentation options include (but are not limited to):

  • Create a publication quality figure using two 2-D images – using a continuum emission image, overplot contours from the other image. User inputs include display levels for image and contour levels and a color table. (This is already largely doable in the current version of DS9 – might be useful to make a wrapper script that can create this with fewer clicks and no need for external saves).
  • Create a velocity channel map using a 3-D input cube
  • Create a 3-D plot view with parameters from DS9 inputs
  • Create a figure presenting multiple 2-D images (not linked in velocity) using input from DS9
  • Create a plot with multiple 1-D spectra from DS9 input
  • Create recipes or canned wrapper macros that permit creation of some of these figures with fewer clicks by the user, and no external saves.

Some of the above plotting and data presentation methods are possible in the current DS9, but the interfaces could be updated and combined to require fewer clicks or interactions to get the final product.

Initial iteration:

1) ability to fit lines in lamp spectra using standard line profiles, and possibly a standard combination of line profiles for blended lines.

2) basic 1-d WCS fitting tools, using interactive display capabilities

a) trace fitting for 2-d spectra of point sources

b) fitting 2-d wcs models based on line identification/fits, and trace fits

3) basic extraction for point source spectra

4) interactive display of 1-d spectra and 2-d spectral images, with marking/recording of spectral features as part of the display capability.

5) determine best 3-d visualization framework. Candidates include:

a) DS9 with modifications (some work already done there; some concerns about performance and expandability)

b) use existing VTK/Python tools (performance optimized; installation issues)

c) some other existing 3-d visualization tool

Less detailed roadmap for future iterations


Level 5 Proposed Requirements

Spectral Data Object Capabilities

  • Spectral Data Objects (of any dimensionality) shall have the capability of containing the necessary WCS information to map detector coordinates to sky/spectral (or velocity) coordinates.
  • Spectral Data Objects (of any dimensionality) shall have the capability of containing the units used for the WCS output coordinates.
  • Spectral Data Objects (of any dimensionality) shall have the capability of containing the necessary physical units used for the intensity information.
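A minimal sketch of a 1-D data object satisfying the three requirements above; the class name, the plain-string units, and the callable WCS are all purely illustrative stand-ins for the real WCS machinery:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class SpectralData:
    """Illustrative 1-D Spectral Data Object.

    `wcs` maps detector pixel coordinates to spectral coordinates; the
    two unit fields carry the output and intensity units the
    requirements call for (strings here; real units objects in practice).
    """
    flux: np.ndarray
    wcs: Callable[[np.ndarray], np.ndarray]   # pixel -> spectral coord
    spectral_unit: str                        # e.g. "Angstrom" or "km/s"
    flux_unit: str                            # e.g. "erg / (s cm2 Angstrom)"

    @property
    def spectral_axis(self):
        # Evaluate the WCS on the pixel grid of the flux array.
        return self.wcs(np.arange(self.flux.size))

# Usage with a toy linear dispersion.
spec = SpectralData(
    flux=np.ones(5),
    wcs=lambda pix: 5000.0 + 2.0 * pix,
    spectral_unit="Angstrom",
    flux_unit="erg / (s cm2 Angstrom)",
)
```

Higher-dimensional objects would differ only in the shape of `flux` and the signature of `wcs`.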

Spectral Model Capabilities

  • Spectral Models shall use the Models framework and support standard analytic models found in that framework.
  • 1D Spectral Models shall have the capability of being combined with other Spectral Models to produce a new spectral model through addition and subtraction.
  • 1D Spectral Models shall have the capability of being combined with transmission functions to produce a new spectral model through multiplication and division.
  • Spectral Models shall have the capability of containing the necessary physical units that represent the intensity information.
  • Spectral Models shall provide a method to redshift the model by a specified value.
  • There shall be a mechanism to integrate Spectral Models over the spectral dimension.
  • There shall be a mechanism to bin Spectral Models onto an arbitrary spectral grid.
  • There shall be a mechanism to sample Spectral Models at multiple spectral coordinates and return an array of values corresponding to the supplied spectral coordinates.
  • Spectral Models shall not lose any details in the spectral dimension when combined with other Spectral Models.
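One way to satisfy the combination requirements above is lazy evaluation: a combined model keeps references to its operands and evaluates them on demand, so no spectral detail is lost in the combination. A minimal sketch (the real implementation would build on the Models framework referenced above; all names are illustrative):

```python
import numpy as np

class SpectralModel:
    """Illustrative combinable 1-D spectral model (lazy evaluation)."""

    def __init__(self, func):
        self.func = func

    def __call__(self, wave):
        return self.func(np.asarray(wave, dtype=float))

    # Combination with other spectral models (addition/subtraction).
    def __add__(self, other):
        return SpectralModel(lambda w: self(w) + other(w))

    def __sub__(self, other):
        return SpectralModel(lambda w: self(w) - other(w))

    # Combination with transmission functions (multiplication/division).
    def __mul__(self, other):
        return SpectralModel(lambda w: self(w) * other(w))

    def __truediv__(self, other):
        return SpectralModel(lambda w: self(w) / other(w))

    def redshifted(self, z):
        """Return the model shifted by redshift z."""
        return SpectralModel(lambda w: self.func(w / (1.0 + z)))

# Example: continuum plus emission line, attenuated by a transmission curve.
continuum = SpectralModel(lambda w: 1.0 + 0.0 * w)
line = SpectralModel(lambda w: np.exp(-0.5 * ((w - 6563.0) / 2.0) ** 2))
transmission = SpectralModel(lambda w: np.clip(1.0 - 1e-5 * (w - 6000.0), 0, 1))
model = (continuum + line) * transmission

# Sample at arbitrary spectral coordinates (another requirement above).
wave = np.linspace(6500.0, 6600.0, 1001)
sampled = model(wave)
```

Integration and binning onto a grid can then be implemented as operations on these objects, evaluating the underlying analytic forms only at the final step.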

Spectral Cube Building and Combining Capabilities

  • A 3-D Drizzle algorithm shall be implemented [provisional, if JWST data requires it]
  • A Drizzle application shall be implemented that resamples one or more Spectral Data Objects onto the same cube using the associated WCS model for each Spectral Data Object

1 Dimensional Spectral Visualization and Interaction Capabilities

  • There shall be a mechanism to generate plots of Spectral Data Objects and Models with the correct units for all quantities if these objects contain the necessary information.
  • A visualization tool for 1-D spectra shall support fitting standard spectral models to the displayed spectral data object including, but not limited to, nth-degree polynomials, power laws, Gaussians, and blackbody spectra, as well as fitting user-supplied template spectra by scaling and redshift parameters.
  • A visualization tool for 1-D spectra shall support interactive zooming in both x and y.
  • A visualization tool for 1-D spectra shall support smoothing of the data using a variety of filters including, but not limited to, boxcar and Gaussian.
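The smoothing requirement can be sketched with plain numpy convolution; the function name and kernel parameterization are assumptions, not a proposed interface:

```python
import numpy as np

def smooth(flux, kernel="boxcar", width=5):
    """Smooth a 1-D spectrum with a boxcar or Gaussian kernel (sketch)."""
    if kernel == "boxcar":
        k = np.ones(width)
    elif kernel == "gaussian":
        # width acts as sigma in pixels; truncate the kernel at 3 sigma.
        x = np.arange(-3 * width, 3 * width + 1)
        k = np.exp(-0.5 * (x / width) ** 2)
    else:
        raise ValueError(f"unknown kernel: {kernel}")
    k /= k.sum()                       # preserve total flux
    return np.convolve(flux, k, mode="same")

# A delta spike spreads into the kernel shape.
flux = np.zeros(101)
flux[50] = 1.0
box = smooth(flux, "boxcar", 5)        # 5 pixels at height 0.2
gauss = smooth(flux, "gaussian", 3)
```

The same kernels extend to cube smoothing (the Cube Visualization requirement below) by convolving along one axis at a time.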

Cube Visualization

  • A Cube Visualization tool shall support displaying, as an image, any selected plane of the cube, perpendicular to any of the 3 possible coordinate axes, and updating the displayed image in real time as the user selects the position along the selected axis at which to extract a plane.
  • A Cube Visualization tool shall support displaying a 1-dimensional plot of the spectrum at a selected spatial point, or of the integrated flux within a specified region.
  • A Cube Visualization tool shall support interactively restricting the range of the displayed axes for images and plots.
  • A Cube Visualization tool shall support defining a region from the following classes of regions: circular/elliptical, square/rectangular (both aligned with the spatial dimensions), and polygons with an arbitrary number of sides, either drawn by the user or read in from a regions definition file.
  • A Cube Visualization tool shall allow smoothing in one or more dimensions using a variety of filters including, but not limited to, boxcar and Gaussian, and subsequent subsampling of the result.
  • A Cube Visualization tool shall allow saving selected subsets of the loaded cube to a file.
  • A Cube Visualization tool shall support arbitrarily oriented views of the cube using iso surface or transparent volume rendering.
Last modified on 03/06/14 15:09:55