pynpoint.readwrite package

Submodules

pynpoint.readwrite.fitsreading module

Module for reading FITS files.

class pynpoint.readwrite.fitsreading.FitsReadingModule(name_in: str, input_dir: str = None, image_tag: str = 'im_arr', overwrite: bool = True, check: bool = True, filenames: Union[str, List[str]] = None)[source]

Bases: pynpoint.core.processing.ReadingModule

Reads FITS files from the given input_dir or the default directory of the Pypeline. The FITS files need to contain either single images (2D) or cubes of images (3D). Individual images should have the same shape and type. The header of each FITS file is scanned for the required static attributes (which should be identical for all FITS files) and the non-static attributes. Static entries are saved as HDF5 attributes while non-static attributes are saved as separate data sets in a subfolder of the database named header_ + image_tag. A warning is printed if the static attributes or the shape of the images change between the FITS files in the input directory. By default, FitsReadingModule overwrites existing data with the same tags in the central database.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • input_dir (str, None) – Input directory where the FITS files are located. If not specified the Pypeline default directory is used.
  • image_tag (str) – Tag of the read data in the HDF5 database. Non static header information is stored with the tag: header_ + image_tag / header_entry_name.
  • overwrite (bool) – Overwrite existing data and header in the central database.
  • check (bool) – Print a warning if certain attributes from the configuration file are not present in the FITS header. If set to False, attributes are still written to the dataset but there will be no warning if a keyword is not found in the FITS header.
  • filenames (str, list(str, ), None) – If a string, then a path of a text file should be provided. This text file should contain a list of FITS files. If a list, then the paths of the FITS files should be provided directly. If set to None, the FITS files in the input_dir are read. All paths should be provided either relative to the Python working folder (i.e., the folder where Python is executed) or as absolute paths.
Returns:

None

Return type:

NoneType
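
A minimal usage sketch: the directory paths and the 'science' tag are illustrative placeholders, the module classes are assumed to be importable from the top-level pynpoint namespace (otherwise use the submodule paths listed on this page), and the Pypeline constructor is assumed to follow the standard working/input/output folder setup:

    from pynpoint import Pypeline, FitsReadingModule

    # Hypothetical working, input, and output folders
    pipeline = Pypeline(working_place_in='/path/to/working',
                        input_place_in='/path/to/input',
                        output_place_in='/path/to/output')

    # Read all FITS files from the default input folder and store the images
    # in the database under the 'science' tag, overwriting any existing entry
    module = FitsReadingModule(name_in='read_science',
                               input_dir=None,
                               image_tag='science',
                               overwrite=True,
                               check=True,
                               filenames=None)

    pipeline.add_module(module)
    pipeline.run_module('read_science')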

read_single_file(fits_file: str, overwrite_tags: list) → Tuple[astropy.io.fits.header.Header, tuple][source]

Function which reads a single FITS file and appends it to the database. The function receives a list of overwrite_tags. If a new key (header entry or image data) is found that is not in this list, the old entry is overwritten when self.m_overwrite is active. After replacing the old entry, the key is added to overwrite_tags. This procedure guarantees that all previous database information which does not belong to the new data set read by FitsReadingModule is replaced, while the rest is kept.

Parameters:
  • fits_file (str) – Absolute path and filename of the FITS file.
  • overwrite_tags (list(str, )) – Database tags that have already been overwritten and will not be overwritten again.
Returns:

  • astropy.io.fits.header.Header – FITS header.
  • tuple(int, ) – Image shape.

run() → None[source]

Run method of the module. Looks for all FITS files in the input directory and imports the images into the database. Note that previous database information is overwritten if overwrite=True. The filenames are stored as attributes.

Returns: None
Return type: NoneType

pynpoint.readwrite.fitswriting module

Module for writing data as FITS file.

class pynpoint.readwrite.fitswriting.FitsWritingModule(name_in: str, data_tag: str, file_name: str, output_dir: str = None, data_range: Tuple[int, int] = None, overwrite: bool = True)[source]

Bases: pynpoint.core.processing.WritingModule

Module for writing a data set from the central HDF5 database to a FITS file. The data and all attached attributes will be saved. Besides typical image stacks, it is for example possible to export non-static header information. The data set is selected from the database by specifying its tag / key. FitsWritingModule is a writing module and supports both the Pypeline default output directory and a user-specified location. See pynpoint.core.processing.WritingModule for more information. Note that by default this module overwrites an existing FITS file with the same filename.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • data_tag (str) – Tag of the database entry the module has to export as FITS file.
  • file_name (str) – Name of the FITS output file. Requires the FITS extension.
  • output_dir (str, None) – Output directory where the FITS file will be stored. If no folder is specified the Pypeline default is chosen.
  • data_range (tuple, None) – A two-element tuple which specifies the first and last frame of the export. This can be used to save a subset of a large dataset. The whole dataset is exported if set to None.
  • overwrite (bool) – Overwrite existing FITS file with identical filename.
Returns:

None

Return type:

NoneType
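
A short sketch, assuming the pipeline instance and the 'science' tag from the FitsReadingModule example; the file name and data_range values are illustrative:

    from pynpoint import FitsWritingModule

    # Export the first 100 frames of the 'science' dataset to science.fits
    # in the default output directory
    module = FitsWritingModule(name_in='write_science',
                               data_tag='science',
                               file_name='science.fits',
                               output_dir=None,
                               data_range=(0, 100),
                               overwrite=True)

    pipeline.add_module(module)
    pipeline.run_module('write_science')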

run() → None[source]

Run method of the module. Creates a FITS file and saves the data as well as the corresponding attributes.

Returns: None
Return type: NoneType

pynpoint.readwrite.hdf5reading module

Module for reading HDF5 files that were created with the Hdf5WritingModule.

class pynpoint.readwrite.hdf5reading.Hdf5ReadingModule(name_in: str, input_filename: str = None, input_dir: str = None, tag_dictionary: dict = None)[source]

Bases: pynpoint.core.processing.ReadingModule

Reads an HDF5 file from the given input_dir or the default directory of the Pypeline. A tag dictionary has to be set in order to choose the datasets which will be imported into the database. The static and non-static attributes are also read from the HDF5 file and stored in the database with the corresponding data set. This module should only be used for reading HDF5 files that were created with the Hdf5WritingModule. Reading other types of HDF5 files may lead to inconsistencies in the central database.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • input_filename (str, None) – The file name of the HDF5 input file. All files inside the input location will be imported if no filename is provided.
  • input_dir (str, None) – The directory of the input HDF5 file. If no location is given, the default input location of the Pypeline is used.
  • tag_dictionary (dict, None) – Dictionary of all data sets that will be imported. The dictionary format is {tag_name_in_input_file:tag_name_in_database, }. All data sets in the input HDF5 file that match one of the tag_name_in_input_file will be imported. The tag name inside the internal Pypeline database will be changed to tag_name_in_database.
Returns:

None

Return type:

NoneType
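
A sketch of the tag_dictionary usage, assuming the pipeline instance from the FitsReadingModule example; the file name and tag names are illustrative:

    from pynpoint import Hdf5ReadingModule

    # Import the 'science' dataset from previous_run.hdf5 and store it in the
    # current database under the tag 'science_imported'
    module = Hdf5ReadingModule(name_in='read_hdf5',
                               input_filename='previous_run.hdf5',
                               input_dir=None,
                               tag_dictionary={'science': 'science_imported'})

    pipeline.add_module(module)
    pipeline.run_module('read_hdf5')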

read_single_hdf5(file_in: str) → None[source]

Function which reads a single HDF5 file.

Parameters: file_in (str) – Path and name of the HDF5 file.
Returns: None
Return type: NoneType

run() → None[source]

Run method of the module. Looks for all HDF5 files in the input directory and reads the datasets that are provided in the tag dictionary.

Returns: None
Return type: NoneType

pynpoint.readwrite.hdf5writing module

Module for writing a list of tags from the database to a separate HDF5 file.

class pynpoint.readwrite.hdf5writing.Hdf5WritingModule(name_in: str, file_name: str, output_dir: str = None, tag_dictionary: dict = None, keep_attributes: bool = True, overwrite: bool = False)[source]

Bases: pynpoint.core.processing.WritingModule

Module which exports a part of the PynPoint internal database to a separate HDF5 file. The datasets of the database can be chosen using the tag_dictionary. The module will also export the static and non-static attributes.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • file_name (str) – Name of the file which will be created by the module.
  • output_dir (str, None) – Location where the HDF5 file will be stored. The Pypeline default output location is used when no location is given.
  • tag_dictionary (dict, None) – Dictionary containing all tags / keys of the datasets which will be exported from the PynPoint internal database. The datasets will be exported as {input_tag: output_tag, }.
  • keep_attributes (bool) – If True, all static and non-static attributes will be exported.
  • overwrite (bool) – Overwrite an existing HDF5 file.
Returns:

None

Return type:

NoneType
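
A sketch, assuming the pipeline instance and the 'science' tag from the earlier examples; the file name and output tag are illustrative:

    from pynpoint import Hdf5WritingModule

    # Export the 'science' dataset (including its attributes) to export.hdf5,
    # where it will be stored under the tag 'science_export'
    module = Hdf5WritingModule(name_in='write_hdf5',
                               file_name='export.hdf5',
                               output_dir=None,
                               tag_dictionary={'science': 'science_export'},
                               keep_attributes=True,
                               overwrite=False)

    pipeline.add_module(module)
    pipeline.run_module('write_hdf5')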

run() → None[source]

Run method of the module. Exports all datasets defined in the tag_dictionary to an external HDF5 file.

Returns: None
Return type: NoneType

pynpoint.readwrite.nearreading module

Module for reading FITS files obtained with VLT/VISIR for the NEAR experiment.

class pynpoint.readwrite.nearreading.NearReadingModule(name_in: str, input_dir: str = None, chopa_out_tag: str = 'chopa', chopb_out_tag: str = 'chopb', subtract: bool = False, crop: Union[Tuple[int, int, float], Tuple[None, None, float]] = None, combine: str = None)[source]

Bases: pynpoint.core.processing.ReadingModule

Pipeline module for reading VLT/VISIR data of the NEAR experiment. The FITS files and required header information are read from the input directory and stored in two datasets, corresponding to chop A and chop B. The primary HDU of the FITS files should contain the main header information, while each of the subsequent HDUs contains a single image (alternating between chop A and chop B) and some additional header information for that image. The last HDU is ignored as it contains the average of all images.

Parameters:
  • name_in (str) – Unique name of the instance.
  • input_dir (str, None) – Input directory where the FITS files are located. The default input folder of the Pypeline is used if set to None.
  • chopa_out_tag (str) – Database entry where the chop A images will be stored. Should be different from chopb_out_tag.
  • chopb_out_tag (str) – Database entry where the chop B images will be stored. Should be different from chopa_out_tag.
  • subtract (bool) – If True, the other chop position is subtracted before saving out the chop A and chop B images.
  • crop (tuple(int, int, float), None) – The pixel position (x, y) around which the chop A and chop B images are cropped and the new image size (arcsec), together provided as (pos_x, pos_y, size). The same size will be used for both image dimensions. It is recommended to crop the images around the approximate coronagraph position. No cropping is applied if set to None.
  • combine (str, None) – Method (‘mean’ or ‘median’) for combining (separately) the chop A and chop B frames from each cube into a single frame. All frames are stored if set to None.
Returns:

None

Return type:

NoneType
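
A sketch, assuming the pipeline instance from the FitsReadingModule example; the crop position and size are illustrative values, not a recommended setting:

    from pynpoint import NearReadingModule

    # Read the NEAR data, subtract the opposite chop position, crop to a
    # 5 arcsec field around a hypothetical coronagraph position (432, 287),
    # and median-combine the frames of each cube
    module = NearReadingModule(name_in='read_near',
                               input_dir=None,
                               chopa_out_tag='chopa',
                               chopb_out_tag='chopb',
                               subtract=True,
                               crop=(432, 287, 5.),
                               combine='median')

    pipeline.add_module(module)
    pipeline.run_module('read_near')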

check_header(header: astropy.io.fits.header.Header) → None[source]

Method to check the header information and issue a warning if a value is not as expected.

Parameters: header (astropy.io.fits.header.Header) – Header information from the FITS file that is read.
Returns: None
Return type: NoneType

read_header(filename: str) → Tuple[astropy.io.fits.header.Header, Tuple[int, int, int]][source]

Function that opens a FITS file and separates the chop A and chop B images. The primary HDU contains only a general header. The subsequent HDUs contain a single image with a small extra header. The last HDU is the average of all images, which will be ignored.

Parameters: filename (str) – Absolute path and filename of the FITS file.
Returns:
  • astropy.io.fits.header.Header – Primary header, which is valid for all images.
  • tuple(int, int, int) – Shape of a stack of images for chop A or B.

read_images(filename: str, im_shape: Tuple[int, int, int]) → Tuple[numpy.ndarray, numpy.ndarray][source]

Function that opens a FITS file and separates the chop A and chop B images. The primary HDU contains only a general header. The subsequent HDUs contain a single image with a small extra header. The last HDU is the average of all images, which will be ignored.

Parameters:
  • filename (str) – Absolute path and filename of the FITS file.
  • im_shape (tuple(int, int, int)) – Shape of a stack of images for chop A or B.
Returns:

  • numpy.array – Array containing the images of chop A.
  • numpy.array – Array containing the images of chop B.

run() → None[source]

Run the module. The FITS files are collected from the input directory and uncompressed if needed. The images are then sorted by the two chop positions (chop A and chop B). The required FITS header keywords (which should be set in the configuration file) are also imported and stored as attributes to the two output datasets in the HDF5 database.

Returns: None
Return type: NoneType

uncompress() → None[source]

Method to check if the input directory contains compressed files ending with .fits.Z. If this is the case, the files will be uncompressed using multithreading. The number of threads can be set with the CPU parameter in the configuration file.

Returns: None
Return type: NoneType

pynpoint.readwrite.textreading module

Modules for reading data from a text file.

class pynpoint.readwrite.textreading.AttributeReadingModule(name_in: str, data_tag: str, file_name: str, attribute: str, input_dir: str = None, overwrite: bool = False)[source]

Bases: pynpoint.core.processing.ReadingModule

Module for reading a list of values from a text file and appending them as a non-static attribute to a dataset.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • data_tag (str) – Tag of the database entry to which the attribute is written.
  • file_name (str) – Name of the input file with a list of values.
  • attribute (str) – Name of the attribute as it will be written in the database.
  • input_dir (str, None) – Input directory where the text file is located. If not specified the Pypeline default directory is used.
  • overwrite (bool) – Overwrite the attribute if it already exists.
Returns:

None

Return type:

NoneType
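
A sketch, assuming the pipeline instance and the 'science' tag from the earlier examples; the file name and the 'EXTRA' attribute name are illustrative:

    from pynpoint import AttributeReadingModule

    # Read a list of values from values.dat and attach them as the
    # non-static attribute 'EXTRA' of the 'science' dataset
    module = AttributeReadingModule(name_in='read_attribute',
                                    data_tag='science',
                                    file_name='values.dat',
                                    attribute='EXTRA',
                                    input_dir=None,
                                    overwrite=False)

    pipeline.add_module(module)
    pipeline.run_module('read_attribute')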

run() → None[source]

Run method of the module. Reads a list of values from a text file and writes them as non-static attribute to a dataset.

Returns: None
Return type: NoneType

class pynpoint.readwrite.textreading.ParangReadingModule(name_in: str, data_tag: str, file_name: str, input_dir: str = None, overwrite: bool = False)[source]

Bases: pynpoint.core.processing.ReadingModule

Module for reading a list of parallactic angles from a text file.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • data_tag (str) – Tag of the database entry to which the PARANG attribute is written.
  • file_name (str) – Name of the input file with a list of parallactic angles (deg). Should be equal in size to the number of images in data_tag.
  • input_dir (str, None) – Input directory where the text file is located. If not specified the Pypeline default directory is used.
  • overwrite (bool) – Overwrite if the PARANG attribute already exists.
Returns:

None

Return type:

NoneType
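
A sketch, assuming the pipeline instance and the 'science' tag from the earlier examples; parang.dat is an illustrative file with one angle (deg) per image:

    from pynpoint import ParangReadingModule

    # Attach the parallactic angles as the non-static PARANG attribute of
    # the 'science' dataset
    module = ParangReadingModule(name_in='read_parang',
                                 data_tag='science',
                                 file_name='parang.dat',
                                 input_dir=None,
                                 overwrite=False)

    pipeline.add_module(module)
    pipeline.run_module('read_parang')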

run() → None[source]

Run method of the module. Reads the parallactic angles from a text file and writes the values as non-static attribute (PARANG) to the database tag.

Returns: None
Return type: NoneType

pynpoint.readwrite.textwriting module

Modules for writing data as text file.

class pynpoint.readwrite.textwriting.AttributeWritingModule(name_in: str, data_tag: str, attribute: str, file_name: str = 'attributes.dat', output_dir: str = None, header: str = None)[source]

Bases: pynpoint.core.processing.WritingModule

Module for writing a 1D or 2D array of non-static attributes to a text file.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • data_tag (str) – Tag of the database entry from which the attribute is read.
  • attribute (str) – Name of the non-static attribute as given in the central database (e.g., ‘INDEX’ or ‘STAR_POSITION’).
  • file_name (str) – Name of the output file.
  • output_dir (str, None) – Output directory where the text file will be stored. If no path is specified then the Pypeline default output location is used.
  • header (str, None) – Header that is written at the top of the text file.
Returns:

None

Return type:

NoneType
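
A sketch, assuming the pipeline instance and the 'science' tag from the earlier examples; the file name and header text are illustrative:

    from pynpoint import AttributeWritingModule

    # Write the non-static INDEX attribute of the 'science' dataset to index.dat
    module = AttributeWritingModule(name_in='write_attribute',
                                    data_tag='science',
                                    attribute='INDEX',
                                    file_name='index.dat',
                                    output_dir=None,
                                    header='Frame index')

    pipeline.add_module(module)
    pipeline.run_module('write_attribute')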

run() → None[source]

Run method of the module. Writes the non-static attributes (1D or 2D) to a text file.

Returns: None
Return type: NoneType

class pynpoint.readwrite.textwriting.ParangWritingModule(name_in: str, data_tag: str, file_name: str = 'parang.dat', output_dir: str = None, header: str = None)[source]

Bases: pynpoint.core.processing.WritingModule

Module for writing a list of parallactic angles to a text file.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • data_tag (str) – Tag of the database entry from which the PARANG attribute is read.
  • file_name (str) – Name of the output file.
  • output_dir (str, None) – Output directory where the text file will be stored. If no path is specified then the Pypeline default output location is used.
  • header (str, None) – Header that is written at the top of the text file.
Returns:

None

Return type:

NoneType
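
A sketch, assuming the pipeline instance and the 'science' tag from the earlier examples; the file name and header text are illustrative:

    from pynpoint import ParangWritingModule

    # Write the PARANG attribute of the 'science' dataset to parang_out.dat
    module = ParangWritingModule(name_in='write_parang',
                                 data_tag='science',
                                 file_name='parang_out.dat',
                                 output_dir=None,
                                 header='Parallactic angle (deg)')

    pipeline.add_module(module)
    pipeline.run_module('write_parang')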

run() → None[source]

Run method of the module. Writes the parallactic angles from the PARANG attribute of the specified database tag to a text file.

Returns: None
Return type: NoneType

class pynpoint.readwrite.textwriting.TextWritingModule(name_in: str, data_tag: str, file_name: str, output_dir: str = None, header: str = None)[source]

Bases: pynpoint.core.processing.WritingModule

Module for writing a 1D or 2D data set from the central HDF5 database to a text file. TextWritingModule is a pynpoint.core.processing.WritingModule and supports the use of the Pypeline default output directory as well as a specified location.

Parameters:
  • name_in (str) – Unique name of the module instance.
  • data_tag (str) – Tag of the database entry from which data is exported.
  • file_name (str) – Name of the output file.
  • output_dir (str, None) – Output directory where the text file will be stored. If no path is specified then the Pypeline default output location is used.
  • header (str, None) – Header that is written at the top of the text file.
Returns:

None

Return type:

NoneType
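
A sketch, assuming the pipeline instance from the earlier examples; the 'photometry' tag, file name, and header text are illustrative:

    from pynpoint import TextWritingModule

    # Export the 1D or 2D dataset stored under the 'photometry' tag to a
    # plain text file in the default output directory
    module = TextWritingModule(name_in='write_text',
                               data_tag='photometry',
                               file_name='photometry.dat',
                               output_dir=None,
                               header='Aperture photometry')

    pipeline.add_module(module)
    pipeline.run_module('write_text')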

run() → None[source]

Run method of the module. Saves the specified data from the database to a text file.

Returns: None
Return type: NoneType

Module contents