Photo's Fundamental Datatypes

ATLAS_IMAGE
The portion of a frame containing the pixels that have been determined to belong to an object. (Pixels whose S/N is too low for them to be found individually will not appear in the ATLAS_IMAGE; their individual values are in any case not very useful, and their smoothed properties may be recovered from the 4x4-binned image produced by the frames pipeline.)
MASK
An 8-bit mask, with one byte corresponding to each pixel in a REGION. Photo doesn't use MASKs, preferring SPANMASKs for the reasons given below.
OBJC
An object in all five filters, containing the OBJECT1s from each band (see the sketch following this list).
OBJC_IO
An OBJC reorganised for writing to a FITS binary file.
OBJECT
A collection of pixels above threshold, with associated PEAK information.
OBJECT1
An object in one filter, complete with information about its parent region, the pixels in which it was detected, its PEAKs, and all its measured properties.
OBJMASK
An `object' describing which pixels in an image have a certain property. OBJMASKs are used to describe objects (i.e. the pixels in which an object was detected), as well as things like cosmic rays (the pixels from which a cosmic ray was removed) and interpolated pixels (all pixels which have been interpolated over). A comparison with dervish's MASKs is given below.
PEAK
A single peak within an object. The list of peaks within an object is crucial to the operation of the deblender, as well as interesting in its own right. Photo takes considerable pains to keep this list correct at all times.
REGION
Dervish's datatype supporting a rectangular collection of pixels.
SPANMASK
A collection of OBJMASKs, stored as chains, one chain per bitplane; each chain is the equivalent of a bitplane in one of dervish's MASKs.
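
To make the nesting concrete, here is a minimal C sketch of how the per-band and multi-band object types fit together; the field names (rowc, colc, peaks, color, NCOLOR) are illustrative assumptions, not Photo's actual declarations.

    #define NCOLOR 5                  /* the five SDSS filters */

    typedef struct peak {             /* a single peak within an object */
        float rowc, colc;             /* hypothetical: centre of the peak */
        struct peak *next;            /* next peak in the object's list */
    } PEAK;

    typedef struct {                  /* the object as seen in ONE filter */
        PEAK *peaks;                  /* peaks detected in this band */
        /* ... detection mask, parent region, measured properties ... */
    } OBJECT1;

    typedef struct {                  /* the same object in all five filters */
        OBJECT1 *color[NCOLOR];       /* one OBJECT1 per band (may be NULL) */
    } OBJC;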

Note on Mask Structures

The standard dervish mask is a bitmask, with 8 bits for each pixel in the image. Such masks have various disadvantages: they take up a lot of memory (a full mask is 3Mby) --- not so much a concern per se, but a cache disaster; you cannot find all the saturated pixels (say) without running an object finder on the mask; and you have to check each pixel as you process it to see if it's OK. All of these concerns are addressed by representing each bitplane as a set of OBJMASKs, structures consisting of a {row, col1, col2} triple for each line-segment where the bitmask would have been set (they also contain a bounding box and other book-keeping information). These sets, currently CHAINs, are then assembled into SPANMASKs, which are arrays of the chains of OBJMASKs, one for each bitplane in the old shiva masks.
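
The same idea in a short C sketch --- a hedged approximation of the structures just described, in which a plain counted array stands in for a dervish CHAIN and the field names are assumptions rather than the real Photo declarations:

    typedef struct {                   /* one run of set pixels on a row */
        short row;                     /* the row */
        short col1, col2;              /* first and last set column, inclusive */
    } SPAN;

    typedef struct {                   /* one set of spans, e.g. one object */
        int nspan;                     /* number of spans */
        SPAN *spans;                   /* the {row, col1, col2} triples */
        int rmin, rmax, cmin, cmax;    /* bounding box (book-keeping) */
    } OBJMASK;

    #define NPLANE 8                   /* one plane per bit of the old 8-bit MASK */

    typedef struct {                   /* the whole mask: one set of OBJMASKs per
                                        * bitplane (SATURATED, INTERP, CR,
                                        * NOTCHECKED, ...) */
        int nobjmask[NPLANE];          /* number of OBJMASKs in each plane */
        OBJMASK **planes[NPLANE];      /* stand-in for a CHAIN per plane */
    } SPANMASK;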

Note that because our bitplanes are sparse, these data structures use much less memory; you can ask for a list of all the cosmic rays (it's a (possibly empty) chain of OBJMASKs); and you can ask for the pixels in a given object that aren't in the NOTCHECKED chain, or for the pixels in that object that are saturated, and then simply loop over the spans in question.
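
For instance, summing an object's pixel values (using the OBJMASK sketch above) is just a loop over its spans; the rows argument, a 2-D array of 16-bit pixels, is an assumed stand-in for however a REGION exposes its pixel data.

    /* Sum the pixel values under an OBJMASK; a sketch, not Photo code.
     * rows[r][c] is assumed to be the pixel at row r, column c of the REGION. */
    static double
    sum_objmask_pixels(const OBJMASK *om, unsigned short **rows)
    {
        double sum = 0.0;
        for (int i = 0; i < om->nspan; i++) {
            const SPAN *sp = &om->spans[i];
            for (int c = sp->col1; c <= sp->col2; c++) {
                sum += rows[sp->row][c];
            }
        }
        return sum;                    /* no per-pixel "is this pixel OK?" test needed */
    }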

The downside is that a SPANMASK doesn't map onto a FITS binary table very well. A chain of OBJMASKs isn't too bad --- one row for each OBJMASK, with the {r, c1, c2} triples in the heap as properly-byte-swapped shorts --- but putting a whole SPANMASK in requires that all of the OBJMASK data appear in the heap, and then byte order becomes a nightmare, as you have to know whether the heap data come in 1-, 2-, or 4-byte units.
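
To make the byte-order problem concrete, here is a sketch (again using the OBJMASK sketch above) of packing one OBJMASK's {r, c1, c2} triples into a heap buffer as big-endian, i.e. FITS-order, 16-bit integers; the function names and buffer handling are hypothetical, and real I/O code would also have to fill in the table row that points at this heap block.

    #include <stddef.h>
    #include <stdint.h>

    /* Write one 16-bit value most-significant-byte first (FITS byte order). */
    static void
    put_be16(unsigned char *buf, uint16_t val)
    {
        buf[0] = (unsigned char)(val >> 8);
        buf[1] = (unsigned char)(val & 0xff);
    }

    /* Pack an OBJMASK's spans into a heap buffer; returns bytes written.
     * A sketch only: no bounds checking, and the caller must record the
     * heap offset and length in the corresponding binary-table row. */
    static size_t
    pack_objmask_heap(const OBJMASK *om, unsigned char *heap)
    {
        size_t n = 0;
        for (int i = 0; i < om->nspan; i++) {
            put_be16(heap + n, (uint16_t)om->spans[i].row);  n += 2;
            put_be16(heap + n, (uint16_t)om->spans[i].col1); n += 2;
            put_be16(heap + n, (uint16_t)om->spans[i].col2); n += 2;
        }
        return n;
    }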