Pipeline modules#

This page lists all available pipeline modules with a short description of what they are used for. Reading modules import data into the database, writing modules export data from the database, and processing modules run a specific data reduction or analysis task. More details on the design of the pipeline can be found in the Architecture section.

Note

All PynPoint classes whose name ends with Module (e.g. FitsReadingModule) are pipeline modules that can be added to an instance of Pypeline (see the Pypeline section).
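
For example, a minimal sketch of how a reading and a writing module are attached to a Pypeline (the folder names, tags, and file name below are placeholders; see the API documentation of the individual modules for their full argument lists):

    from pynpoint import Pypeline, FitsReadingModule, FitsWritingModule

    # Create a pipeline with a working, input, and output folder (placeholder paths)
    pipeline = Pypeline(working_place_in='working_folder',
                        input_place_in='input_folder',
                        output_place_in='output_folder')

    # Reading module: imports the FITS files from the input folder into the database
    pipeline.add_module(FitsReadingModule(name_in='read',
                                          image_tag='science'))

    # Writing module: exports the dataset from the database to a FITS file
    pipeline.add_module(FitsWritingModule(name_in='write',
                                          data_tag='science',
                                          file_name='science.fits'))

    # Run all attached modules in the order in which they were added
    pipeline.run()

Processing modules are added in the same way, typically in between the reading and writing steps.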

Important

Pipeline modules with multiprocessing functionality are indicated with “CPU” in parentheses. The number of parallel processes can be set with the CPU parameter in the configuration file and the number of images that are simultaneously loaded into memory with the MEMORY parameter. Pipeline modules that apply a function in parallel to subsets of images use a number of images per subset equal to MEMORY divided by CPU.
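
For example, with MEMORY set to 1000 and CPU set to 4, such a module processes the images in subsets of 250. A sketch of the relevant part of the configuration file (assuming the default file name PynPoint_config.ini and the [settings] section described in the configuration documentation; the values are placeholders):

    [settings]

    MEMORY = 1000
    CPU = 4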

Important

Pipeline modules that are compatible with both regular imaging and integral field spectroscopy datasets (i.e. 3D and 4D data, respectively) are indicated with “IFS” in parentheses. All other modules are compatible with regular imaging data only.

Reading modules#

Writing modules#

Processing modules#

Background subtraction#

Bad pixel cleaning#

Basic processing#

Centering#

Dark and flat correction#

Denoising#

Detection limits#

Extract star#

Filters#

Flux and position#

Frame selection#

Image resizing#

PCA background subtraction#

PSF preparation#

PSF subtraction#

Stacking#