pynpoint.core package

Submodules

pynpoint.core.attributes module

Module to obtain information about the implemented attributes.

pynpoint.core.attributes.get_attributes()[source]

Function to get a dictionary with all attributes.

Returns:Attribute information.
Return type:dict

pynpoint.core.dataio module

Modules for accessing data and attributes in the central database.

class pynpoint.core.dataio.ConfigPort(tag, data_storage_in=None)[source]

Bases: pynpoint.core.dataio.Port

ConfigPort can be used to read the ‘config’ tag from a (HDF5) database. This tag contains the central settings used by PynPoint, as well as the relevant FITS header keywords. You can use a ConfigPort instance to access a single attribute of the dataset using get_attribute().

Constructor of the ConfigPort class which creates a config port instance that can read the settings stored in the central database under the tag config. An instance of ConfigPort is created in the constructor of PypelineModule such that the attributes in the ConfigPort can be accessed from within all types of modules. For example:

memory = self._m_config_port.get_attribute('MEMORY')
Parameters:
  • tag (str) – The tag name of the port. The port can be used to get data from the dataset with the key config.
  • data_storage_in (pynpoint.core.dataio.DataStorage) – The input DataStorage. It is possible to give the constructor of a ConfigPort a DataStorage instance which will link the port to that DataStorage. Usually the DataStorage is set later by calling set_database_connection().
Returns:

None

Return type:

NoneType

get_attribute(name)[source]

Returns a static attribute which is connected to the dataset of the ConfigPort.

Parameters:name (str) – The name of the attribute.
Returns:The attribute value. Returns None if the attribute does not exist.
Return type:str, float, or int
class pynpoint.core.dataio.DataStorage(location_in)[source]

Bases: object

Instances of DataStorage manage the opening and closing of the Pypeline HDF5 databases. They have an internal h5py data bank (self.m_data_bank) which gives direct access to the data if the storage is open (self.m_open == True).

Constructor of a DataStorage instance. It needs the location of the HDF5 file (Pypeline database) as input. If the file already exists it is opened and extended; if not, a new file is created.

Parameters:location_in (str) – Location (directory + filename) of the HDF5 database.
Returns:None
Return type:NoneType
close_connection()[source]

Closes the connection to the HDF5 file. All entries of the data bank will be stored on the hard drive and the memory is cleaned.

Returns:None
Return type:NoneType
open_connection()[source]

Opens the connection to the HDF5 file by opening an old file or creating a new one.

Returns:None
Return type:NoneType
class pynpoint.core.dataio.InputPort(tag, data_storage_in=None)[source]

Bases: pynpoint.core.dataio.Port

InputPorts can be used to read datasets with a specific tag from the HDF5 database. This type of port can be used to access:

  • A complete dataset using the get_all() method.
  • A single attribute of the dataset using get_attribute().
  • All attributes of the dataset using get_all_static_attributes() and get_all_non_static_attributes().
  • A part of a dataset using slicing. For example:
in_port = InputPort('tag')
data = in_port[0, :, :] # returns the first 2D image of a 3D image stack.

(More information about how 1D, 2D, and 3D data are organized can be found in the documentation of OutputPort, in particular append() and set_all().)

InputPorts can load two types of attributes which give additional information about a dataset the port is linked to:

  • Static attributes: contain global information about a dataset which does not change across the dataset (e.g. the instrument name or pixel scale).
  • Non-static attributes: contain information which changes for different parts of the dataset (e.g. the parallactic angles or dithering positions).

Constructor of InputPort. An input port can read data from the central database under the key tag. Instances of InputPort should not be created manually inside a PypelineModule but should be created with the add_input_port() function.

Parameters:
  • tag (str) – The tag of the port. The port can be used in order to get data from the dataset with the key tag.
  • data_storage_in (pynpoint.core.dataio.DataStorage) – It is possible to give the constructor of an InputPort a DataStorage instance which will link the port to that DataStorage. Usually the DataStorage is set later by calling set_database_connection().
Returns:

None

Return type:

NoneType

get_all()[source]

Returns the whole dataset stored in the data bank under the tag of the Port. Be careful when using this function to load large datasets. The data type is inferred from the data with numpy.asarray. A 32 bit array will be returned in case the input data is a combination of float32 and float64 arrays.

Returns:The full dataset. Returns None if the data does not exist.
Return type:numpy.ndarray
get_all_non_static_attributes()[source]

Returns a list of all non-static attribute keys. More information about static and non-static attributes can be found in the class documentation of InputPort.

Returns:List of all existing non-static attribute keys.
Return type:list(str, )
get_all_static_attributes()[source]

Get all static attributes of the dataset which are linked to the Port tag.

Returns:Dictionary of all attributes, as {attr_name:attr_value}.
Return type:dict
get_attribute(name)[source]

Returns an attribute which is connected to the dataset of the port. The function can return static and non-static attributes (static attributes have priority). More information about static and non-static attributes can be found in the class documentation of InputPort.

Parameters:name (str) – The name of the attribute.
Returns:The attribute value. Returns None if the attribute does not exist.
Return type:str, float, int, or numpy.ndarray
get_ndim()[source]

Returns the number of dimensions of the dataset the port is linked to.

Returns:Number of dimensions of the dataset. Returns None if the dataset does not exist.
Return type:int
get_shape()[source]

Returns the shape of the dataset the port is linked to. This can be useful if you need the shape without loading the whole data.

Returns:Shape of the dataset. Returns None if the dataset does not exist.
Return type:tuple(int, )
class pynpoint.core.dataio.OutputPort(tag, data_storage_in=None, activate_init=True)[source]

Bases: pynpoint.core.dataio.Port

Output ports can be used to save results under a given tag to the HDF5 DataStorage. An instance of OutputPort with self.tag = tag can store data under the key tag by using one of the following methods:

  • set_all(…) - replaces and sets the whole dataset
  • append(…) - appends data to the existing dataset. For more information see the function documentation (append()).
  • slicing - sets a part of the actual dataset. Example:
out_port = OutputPort('Some_tag')
data = np.ones((200, 200)) # 2D image filled with ones
out_port[0, :, :] = data # sets the first 2D image of a 3D image stack
  • add_attribute(…) - modifies or creates an attribute of the dataset
  • del_attribute(…) - deletes an attribute
  • del_all_attributes(…) - deletes all attributes
  • append_attribute_data(…) - appends information to non-static attributes. See add_attribute() (add_attribute()) for more information about static and non-static attributes.
  • check_static_attribute(…) - checks if a static attribute exists and if it is equal to a given value
  • other functions listed below

For more information about how data is organized inside the central database, have a look at the documentation of the functions set_all() and append().

Furthermore, it is possible to deactivate an OutputPort to stop it from saving data.

Constructor of the OutputPort class which creates an output port instance that can write data to the central database under the tag tag. If you write a PypelineModule you should not create instances manually! Use the add_output_port() function instead.

Parameters:
  • tag (str) – The tag of the port. The port can be used in order to write data to the dataset with the key = tag.
  • data_storage_in (pynpoint.core.dataio.DataStorage) – It is possible to give the constructor of an OutputPort a DataStorage instance which will link the port to that DataStorage. Usually the DataStorage is set later by calling set_database_connection().
Returns:

None

Return type:

NoneType

activate()[source]

Activates the port. A deactivated port will not save data.

Returns:None
Return type:NoneType
add_attribute(name, value, static=True)[source]

Adds an attribute to the dataset of the Port with the attribute name = name and the value = value. If the attribute already exists it will be overwritten. Two different types of attributes are supported:

  1. static attributes: contain a single value or name (e.g. the name of the instrument that was used).
  2. non-static attributes: contain a dataset which is connected to the actual dataset (e.g. the instrument temperature). It is possible to append additional information to non-static attributes later (append_attribute_data()). This is not supported by static attributes.

Static and non-static attributes are stored differently in the HDF5 file. Static attributes are stored as direct attributes of the dataset while non-static attributes are stored in a separate group with the name header_ + the name of the dataset.

Parameters:
  • name (str) – Name of the attribute.
  • value (str, float, or int) – Value of the attribute.
  • static (bool) – Indicate if the attribute is static (True) or non-static (False).
Returns:

None

Return type:

NoneType
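
The storage scheme described above can be sketched with plain Python dictionaries standing in for the HDF5 file (this is an illustrative model, not PynPoint code; the tag im_arr and the attribute names are made up):

```python
# Model of the database: a dataset with direct (static) attributes.
database = {'im_arr': {'attrs': {}}}

def add_attribute(tag, name, value, static=True):
    # Static values become direct attributes of the dataset;
    # non-static values go into a separate 'header_<tag>' group.
    if static:
        database[tag]['attrs'][name] = value
    else:
        database.setdefault('header_' + tag, {})[name] = list(value)

def append_attribute_data(tag, name, value):
    # Appending is only possible for non-static attributes.
    database['header_' + tag][name].append(value)

add_attribute('im_arr', 'INSTRUMENT', 'NACO')                 # static
add_attribute('im_arr', 'PARANG', [0.0, 0.5], static=False)   # non-static
append_attribute_data('im_arr', 'PARANG', 1.0)

print(database['header_im_arr']['PARANG'])  # [0.0, 0.5, 1.0]
```

This illustrates why appending works for non-static attributes only: they are stored as growable datasets rather than single values.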

add_history(module, history)[source]

Adds an attribute with history information about the pipeline module.

Parameters:
  • module (str) – Name of the pipeline module which was executed.
  • history (str) – History information.
Returns:

None

Return type:

NoneType

append(data, data_dim=None, force=False)[source]

Appends data to an existing dataset with the tag of the Port along the first dimension. If no data exists with the tag of the Port, a new dataset is created. For more information about how the dimensions are organized, see the documentation of set_all(). Note that it is not possible to append data with a different shape or data type to the existing dataset.

Example: an internal dataset is 3D (storing a stack of 2D images) with shape (233, 300, 300), which means it contains 233 images with a resolution of 300 x 300 pixels. It is therefore only possible to extend the dataset along the first dimension, by appending a new image of size (300, 300) or a stack of images of size (:, 300, 300). Anything else will raise an exception.

It is possible to force the function to overwrite the existing dataset if the shape or type of the input data does not match the existing data. Warning: this can delete the existing data.

Parameters:
  • data (numpy.ndarray) – The data which will be appended.
  • data_dim (int) – Number of data dimensions used if a new data set is created. The dimension of the data is used if set to None.
  • force (bool) – The existing data will be overwritten if shape or type does not match if set to True.
Returns:

None

Return type:

NoneType
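
The append rule from the example above can be sketched with numpy (an illustrative sketch; PynPoint performs the equivalent operation inside the HDF5 file):

```python
import numpy as np

# An existing 3D stack can only grow along the first dimension.
stack = np.zeros((233, 300, 300))   # 233 images of 300 x 300 pixels

single = np.ones((300, 300))        # one new image of matching size
stack = np.concatenate((stack, single[np.newaxis]), axis=0)

batch = np.ones((5, 300, 300))      # a batch of new images
stack = np.concatenate((stack, batch), axis=0)

print(stack.shape)  # (239, 300, 300)
```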

append_attribute_data(name, value)[source]

Function which appends a single data value to a non-static attribute.

Parameters:
  • name (str) – Name of the attribute.
  • value (str, float, or int) – Value which will be appended to the attribute dataset.
Returns:

None

Return type:

NoneType

check_non_static_attribute(name, comparison_value)[source]

Checks if a non-static attribute exists and if it is equal to a comparison value.

Parameters:
  • name (str) – Name of the non-static attribute.
  • comparison_value (numpy.ndarray) – Comparison values.
Returns:

Status: 1 if the non-static attribute does not exist, 0 if the non-static attribute exists and is equal, and -1 if the non-static attribute exists but is not equal.

Return type:

int

check_static_attribute(name, comparison_value)[source]

Checks if a static attribute exists and if it is equal to a comparison value.

Parameters:
  • name (str) – Name of the static attribute.
  • comparison_value (str, float, or int) – Comparison value.
Returns:

Status: 1 if the static attribute does not exist, 0 if the static attribute exists and is equal, and -1 if the static attribute exists but is not equal.

Return type:

int
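
The status codes returned by the two check functions above can be summarized by this small helper (an illustrative sketch, not part of PynPoint; the attribute names are made up):

```python
def check_attribute(attributes, name, comparison_value):
    # 1: attribute does not exist
    # 0: attribute exists and is equal to the comparison value
    # -1: attribute exists but is not equal
    if name not in attributes:
        return 1
    if attributes[name] == comparison_value:
        return 0
    return -1

attrs = {'PIXSCALE': 0.027}
print(check_attribute(attrs, 'PIXSCALE', 0.027))     # 0
print(check_attribute(attrs, 'PIXSCALE', 0.01))      # -1
print(check_attribute(attrs, 'INSTRUMENT', 'NACO'))  # 1
```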

copy_attributes(input_port)[source]

Copies all static and non-static attributes from a given InputPort. Attributes which already exist are overwritten. Non-static attributes are linked, not copied. Nothing is changed if the tag of the InputPort is equal to the tag of the OutputPort (self.tag). Use this function in all modules to keep the header information.

Parameters:input_port (pynpoint.core.dataio.InputPort) – The InputPort with the header information.
Returns:None
Return type:NoneType
deactivate()[source]

Deactivates the port. A deactivated port will not save data.

Returns:None
Return type:NoneType
del_all_attributes()[source]

Deletes all static and non-static attributes of the dataset.

Returns:None
Return type:NoneType
del_all_data()[source]

Delete all data belonging to the database tag.

del_attribute(name)[source]

Deletes the attribute of the dataset with the given name. Finds and removes static and non-static attributes.

Parameters:name (str) – Name of the attribute.
Returns:None
Return type:NoneType
flush()[source]

Forces the DataStorage to save all data from the memory to the hard drive without closing the OutputPort.

Returns:None
Return type:NoneType
set_all(data, data_dim=None, keep_attributes=False)[source]

Set the data in the database by replacing all old values with the values of the input data. If no old values exist, the data is simply stored. Since it is not possible to change the number of dimensions of a dataset later in the processing history, one can choose a dimension different from that of the input data. The following cases are implemented:

  • (dimension of the input data, desired data_dim)
  • (1, 1) 1D input or a single value will be stored as a list in HDF5
  • (1, 2) 1D input, but 2D array stored inside (i.e. a list of lists with a fixed size).
  • (2, 2) 2D input (single image) and 2D array stored inside (i.e. a list of lists with a fixed size).
  • (2, 3) 2D input (single image) but 3D array stored inside (i.e. a stack of images with a fixed size).
  • (3, 3) 3D input and 3D array stored inside (i.e. a stack of images with a fixed size).

For 2D and 3D data the first dimension always represents the list / stack (variable size) while the second (or third) dimension has a fixed size. After creation it is possible to extend a data set using append() along the first dimension.

Example 1:

Input 2D array with size (200, 200). Desired dimension 3D. The result is a 3D dataset with the dimension (1, 200, 200). It is possible to append other images with the size (200, 200) or other stacks of images with the size (:, 200, 200).

Example 2:

Input 2D array with size (200, 200). Desired dimension 2D. The result is a 2D dataset with dimension (200, 200). It is possible to append other lists of length 200 or stacks of lines with size (:, 200). However, it is not possible to append 2D images along a third dimension.

Parameters:
  • data (numpy.ndarray) – The data to be saved.
  • data_dim (int) – Number of data dimensions. The dimension of the input data is used if set to None.
  • keep_attributes (bool) – All attributes of the old dataset will remain the same if set to True.
Returns:

None

Return type:

NoneType
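
The dimension handling from the two examples above can be sketched with numpy (an illustrative sketch of the shape bookkeeping, not the actual PynPoint implementation):

```python
import numpy as np

image = np.ones((200, 200))           # a single 2D input image

# Case (2, 3): store a 2D image inside a 3D stack of length one,
# so further images can later be appended along the first axis.
stored_3d = image[np.newaxis, :, :]

# Case (2, 2): store the 2D image as-is; only 1D lines of length 200
# can later be appended along the first axis.
stored_2d = image

print(stored_3d.shape)  # (1, 200, 200)
print(stored_2d.shape)  # (200, 200)
```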

class pynpoint.core.dataio.Port(tag, data_storage_in=None)[source]

Bases: object

Abstract interface and implementation of common functionality of the InputPort, OutputPort, and ConfigPort. Each Port has an internal tag which is its key to a dataset in the DataStorage. If, for example, data is stored under the entry im_arr in the central data storage, only a port with the tag im_arr (self._m_tag = im_arr) can access and change that data. A port is connected to exactly one DataStorage instance and knows whether that connection is active (self._m_data_base_active).

Abstract constructor of a Port. As input, the tag / key is expected which is needed to build the connection to the database entry with the same tag / key. It is possible to give the Port a DataStorage instance. If this storage is not given, the Pypeline module has to set it, or the connection needs to be added manually using set_database_connection().

Parameters:
  • tag (str) – The tag / key of the port.
  • data_storage_in (pynpoint.core.dataio.DataStorage) – The input DataStorage. Usually the DataStorage is set later by calling set_database_connection().
Returns:

None

Return type:

NoneType

close_port()[source]

Closes the connection to the DataStorage and forces it to save the data to the hard drive. All data that was accessed using the port is cleaned from the memory.

Returns:None
Return type:NoneType
open_port()[source]

Opens the connection to the DataStorage and activates its data bank.

Returns:None
Return type:NoneType
set_database_connection(data_base_in)[source]

Sets the internal DataStorage instance.

Parameters:data_base_in (pynpoint.core.dataio.DataStorage) – The input DataStorage.
Returns:None
Return type:NoneType
tag

Getter for the internal tag (no setter).

Returns:Database tag name.
Return type:str

pynpoint.core.processing module

Interfaces for pipeline modules.

class pynpoint.core.processing.ProcessingModule(name_in)[source]

Bases: pynpoint.core.processing.PypelineModule

The abstract class ProcessingModule is an interface for all processing steps in the pipeline which read, process, and store data. Hence processing modules have read and write access to the central database through a dictionary of output ports (self._m_output_ports) and a dictionary of input ports (self._m_input_ports).

Abstract constructor of a ProcessingModule which needs the unique name identifier as input (more information: pynpoint.core.processing.PypelineModule). This function should be called in the __init__() function of all classes inheriting from ProcessingModule.

Parameters:name_in (str) – The name of the ProcessingModule.
add_input_port(tag)[source]

Function which creates an InputPort for a ProcessingModule and appends it to the internal InputPort dictionary. This function should be used by classes inheriting from ProcessingModule to make sure that only input ports with unique tags are added. The new port can be used as:

port = self._m_input_ports[tag]

or by using the returned Port.

Parameters:tag (str) – Tag of the new input port.
Returns:The new InputPort for the ProcessingModule.
Return type:pynpoint.core.dataio.InputPort
add_output_port(tag, activation=True)[source]

Function which creates an OutputPort for a ProcessingModule and appends it to the internal OutputPort dictionary. This function should be used by classes inheriting from ProcessingModule to make sure that only output ports with unique tags are added. The new port can be used as:

port = self._m_output_ports[tag]

or by using the returned Port.

Parameters:
  • tag (str) – Tag of the new output port.
  • activation (bool) – Activation status of the Port after creation. Deactivated ports will not save their results until they are activated.
Returns:

The new OutputPort for the ProcessingModule.

Return type:

pynpoint.core.dataio.OutputPort

apply_function_in_time(func, image_in_port, image_out_port, func_args=None)[source]

Applies a function to all pixel lines in time.

Parameters:
  • func (function) – The input function.
  • image_in_port (pynpoint.core.dataio.InputPort) – Input port which is linked to the input data.
  • image_out_port (pynpoint.core.dataio.OutputPort) – Output port which is linked to the results.
  • func_args (tuple, None) – Additional arguments which are required by the input function. Not used if set to None.
Returns:

None

Return type:

NoneType

apply_function_to_images(func, image_in_port, image_out_port, message, func_args=None)[source]

Function which applies a function to all images of an input port. Stacks of images are processed in parallel if the CPU and MEMORY attributes are set in the central configuration. The number of images per process is equal to the value of MEMORY divided by the value of CPU. Note that the function func is not allowed to change the shape of the images if the input and output port have the same tag and MEMORY is not set to None.

Parameters:
  • func (function) –

    The function which is applied to all images. Its definition should be similar to:

    def function(image_in,
                 parameter1,
                 parameter2,
                 parameter3):
        ...
    
  • image_in_port (pynpoint.core.dataio.InputPort) – Input port which is linked to the input data.
  • image_out_port (pynpoint.core.dataio.OutputPort) – Output port which is linked to the results.
  • message (str) – Progress message.
  • func_args (tuple) – Additional arguments that are required by the input function.
Returns:

None

Return type:

NoneType
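
A hypothetical function that could be passed as func might look as follows (the function name, its arguments, and the port names in the commented call are made up for illustration; apply_function_to_images itself is the method documented above):

```python
import numpy as np

def subtract_background(image_in, offset):
    # Processes a single image: subtract the median and a constant offset.
    return image_in - np.median(image_in) - offset

# Inside a ProcessingModule this could be applied to all images, e.g.:
# self.apply_function_to_images(subtract_background,
#                               self.m_image_in_port,
#                               self.m_image_out_port,
#                               'Subtracting background',
#                               func_args=(0.0, ))

image = np.arange(9.).reshape(3, 3)
result = subtract_background(image, 0.0)
print(result[1, 1])  # 0.0 (the median of the image is subtracted)
```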

connect_database(data_base_in)[source]

Function used by a ProcessingModule to connect all ports in the internal input and output port dictionaries to the database. The function is called by Pypeline and connects the DataStorage object to all module ports.

Parameters:data_base_in (pynpoint.core.dataio.DataStorage) – The central database.
Returns:None
Return type:NoneType
get_all_input_tags()[source]

Returns a list of all input tags to the ProcessingModule.

Returns:List of input tags.
Return type:list(str, )
get_all_output_tags()[source]

Returns a list of all output tags to the ProcessingModule.

Returns:List of output tags.
Return type:list(str, )
run()[source]

Abstract interface for the run method of a ProcessingModule which contains the actual algorithm of the module.

class pynpoint.core.processing.PypelineModule(name_in)[source]

Bases: object

Abstract interface for the PypelineModule:

Each PypelineModule has a name as a unique identifier in the Pypeline and requires the connect_database and run methods.

Abstract constructor of a PypelineModule. Needs a name as identifier.

Parameters:name_in (str) – The name of the PypelineModule.
Returns:None
Return type:NoneType
connect_database(data_base_in)[source]

Abstract interface for the function connect_database which is needed to connect the Ports of a PypelineModule with the DataStorage.

Parameters:data_base_in (pynpoint.core.dataio.DataStorage) – The central database.
name

Returns the name of the PypelineModule. This property makes sure that the internal module name cannot be changed.

Returns:The name of the PypelineModule.
Return type:str
run()[source]

Abstract interface for the run method of a PypelineModule which contains the actual algorithm of the module.

class pynpoint.core.processing.ReadingModule(name_in, input_dir=None)[source]

Bases: pynpoint.core.processing.PypelineModule

The abstract class ReadingModule is an interface for processing steps in the Pypeline which read data from the hard drive and therefore only have write access to the central data storage. One can specify a directory on the hard drive where the input data for the module is located. If no input directory is given, the default Pypeline input directory is used. Reading modules have a dictionary of output ports (self._m_output_ports) but no input ports.

Abstract constructor of ReadingModule which needs the unique name identifier as input (more information: pynpoint.core.processing.PypelineModule). An input directory can be specified for the location of the data; otherwise the Pypeline default directory is used. This function should be called in the __init__() function of all classes inheriting from ReadingModule.

Parameters:
  • name_in (str) – The name of the ReadingModule.
  • input_dir (str) – Directory where the input files are located.
Returns:

None

Return type:

NoneType

add_output_port(tag, activation=True)[source]

Function which creates an OutputPort for a ReadingModule and appends it to the internal OutputPort dictionary. This function should be used by classes inheriting from ReadingModule to make sure that only output ports with unique tags are added. The new port can be used as:

port = self._m_output_ports[tag]

or by using the returned Port.

Parameters:
  • tag (str) – Tag of the new output port.
  • activation (bool) – Activation status of the Port after creation. Deactivated ports will not save their results until they are activated.
Returns:

The new OutputPort for the ReadingModule.

Return type:

pynpoint.core.dataio.OutputPort

connect_database(data_base_in)[source]

Function used by a ReadingModule to connect all ports in the internal input and output port dictionaries to the database. The function is called by Pypeline and connects the DataStorage object to all module ports.

Parameters:data_base_in (pynpoint.core.dataio.DataStorage) – The central database.
Returns:None
Return type:NoneType
get_all_output_tags()[source]

Returns a list of all output tags to the ReadingModule.

Returns:List of output tags.
Return type:list(str, )
run()[source]

Abstract interface for the run method of a ReadingModule which contains the actual algorithm of the module.

class pynpoint.core.processing.WritingModule(name_in, output_dir=None)[source]

Bases: pynpoint.core.processing.PypelineModule

The abstract class WritingModule is an interface for processing steps in the pipeline which do not change the content of the internal DataStorage. They only have read access to the central database. WritingModules can be used to export data from the HDF5 database. WritingModules know the directory on the hard drive where the output of the module can be saved. If no output directory is given, the default Pypeline output directory is used. WritingModules have a dictionary of input ports (self._m_input_ports) but no output ports.

Abstract constructor of a WritingModule which needs the unique name identifier as input (more information: pynpoint.core.processing.PypelineModule). In addition, one can specify an output directory where the module will save its results. If no output directory is given, the Pypeline default directory is used. This function should be called in the __init__() function of all classes inheriting from WritingModule.

Parameters:
  • name_in (str) – The name of the WritingModule.
  • output_dir (str) – Directory where the results will be saved.
Returns:

None

Return type:

NoneType

add_input_port(tag)[source]

Function which creates an InputPort for a WritingModule and appends it to the internal InputPort dictionary. This function should be used by classes inheriting from WritingModule to make sure that only input ports with unique tags are added. The new port can be used as:

port = self._m_input_ports[tag]

or by using the returned Port.

Parameters:tag (str) – Tag of the new input port.
Returns:The new InputPort for the WritingModule.
Return type:pynpoint.core.dataio.InputPort
connect_database(data_base_in)[source]

Function used by a WritingModule to connect all ports in the internal input and output port dictionaries to the database. The function is called by Pypeline and connects the DataStorage object to all module ports.

Parameters:data_base_in (pynpoint.core.dataio.DataStorage) – The central database.
Returns:None
Return type:NoneType
get_all_input_tags()[source]

Returns a list of all input tags to the WritingModule.

Returns:List of input tags.
Return type:list(str, )
run()[source]

Abstract interface for the run method of a WritingModule which contains the actual algorithm of the module.

pynpoint.core.pypeline module

Module which capsules the methods of the Pypeline.

class pynpoint.core.pypeline.Pypeline(working_place_in=None, input_place_in=None, output_place_in=None)[source]

Bases: object

A Pypeline instance can be used to manage various processing steps. It contains an ordered dictionary of Pypeline modules and their names. A Pypeline has a central DataStorage on the hard drive which can be accessed by the various modules. The order of the modules depends on the order in which they have been added to the Pypeline. It is possible to run all modules attached to the Pypeline at once or to run a single module by name.
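
The ordered execution described above can be sketched with a simplified model (not PynPoint code; MiniPypeline and its methods are made up to illustrate the design):

```python
from collections import OrderedDict

class MiniPypeline:
    # Simplified model: modules are kept in an ordered dictionary and
    # executed in the order in which they were added.
    def __init__(self):
        self._modules = OrderedDict()

    def add_module(self, name, run_func):
        self._modules[name] = run_func

    def run(self):
        executed = []
        for name, run_func in self._modules.items():
            run_func()
            executed.append(name)
        return executed

pipeline = MiniPypeline()
pipeline.add_module('read', lambda: None)
pipeline.add_module('process', lambda: None)
print(pipeline.run())  # ['read', 'process']
```

The real Pypeline additionally connects each added module to the central DataStorage, as described for add_module() below.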

Constructor of Pypeline.

Parameters:
  • working_place_in (str) – Working location of the Pypeline which needs to be a folder on the hard drive. The given folder will be used to save the central PynPoint database (an HDF5 file) in which all the intermediate processing steps are saved. Note that the HDF5 file can become very large depending on the size and number of input images.
  • input_place_in (str) – Default input directory of the Pypeline. All ReadingModules added to the Pypeline use this directory to look for input data. It is possible to specify a different location for the ReadingModules using their constructors.
  • output_place_in (str) – Default result directory used to save the output of all WritingModules added to the Pypeline. It is possible to specify different locations for the WritingModules by using their constructors.
Returns:

None

Return type:

NoneType

add_module(module)[source]

Adds a Pypeline module to the internal Pypeline dictionary. The module is appended at the end of this ordered dictionary. If the input module is a reading or writing module without a specified input or output location then the Pypeline default location is used. Moreover, the given module is connected to the Pypeline internal data storage.

Parameters:module (ReadingModule, WritingModule, or ProcessingModule) – Input pipeline module.
Returns:None
Return type:NoneType
delete_data(tag)[source]

Function for deleting a dataset and related attributes from the central database. Note that the disk space does not seem to be freed up when using this function.

Parameters:tag (str) – Database tag.
Returns:None
Return type:NoneType
get_attribute(data_tag, attr_name, static=True)[source]

Function for accessing attributes in the central database.

Parameters:
  • data_tag (str) – Database tag.
  • attr_name (str) – Name of the attribute.
  • static (bool) – Static or non-static attribute.
Returns:

The attribute values.

Return type:

numpy.ndarray

get_data(tag, data_range=None)[source]

Function for accessing data in the central database.

Parameters:
  • tag (str) – Database tag.
  • data_range (tuple(int, int), None) – Slicing range which can be used to select a subset of images from a 3D dataset. All data are selected if set to None.
Returns:

The selected dataset from the database.

Return type:

numpy.ndarray
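
The data_range selection can be sketched with numpy (an illustrative sketch of the slicing behavior; the actual data comes from the HDF5 database):

```python
import numpy as np

dataset = np.zeros((100, 50, 50))   # 100 images of 50 x 50 pixels

# A (start, stop) tuple selects a subset of images along the first axis.
data_range = (10, 20)
subset = dataset[data_range[0]:data_range[1]]

print(subset.shape)  # (10, 50, 50)
```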

get_module_names()[source]

Function which returns a list of all module names.

Returns:Ordered list of all Pypeline modules.
Return type:list(str, )
get_shape(tag)[source]

Function for getting the shape of a database entry.

Parameters:tag (str) – Database tag.
Returns:Dataset shape.
Return type:tuple(int, )
get_tags()[source]

Function for listing the database tags, ignoring header and config tags.

Returns:Database tags.
Return type:numpy.ndarray
remove_module(name)[source]

Removes a Pypeline module from the internal dictionary.

Parameters:name (str) – Name of the module that has to be removed.
Returns:Confirmation of removal.
Return type:bool
run()[source]

Walks through all saved processing steps and calls their run methods. The order in which the steps are called depends on the order they have been added to the Pypeline.

Returns:None
Return type:NoneType
run_module(name)[source]

Runs a single processing module.

Parameters:name (str) – Name of the pipeline module.
Returns:None
Return type:NoneType
set_attribute(data_tag, attr_name, attr_value, static=True)[source]

Function for writing attributes to the central database. Existing values will be overwritten.

Parameters:
  • data_tag (str) – Database tag.
  • attr_name (str) – Name of the attribute.
  • attr_value (float, int, str, tuple, or numpy.ndarray) – Attribute value.
  • static (bool) – Static or non-static attribute.
Returns:

None

Return type:

NoneType

validate_pipeline()[source]

Function which checks if all input ports of the Pypeline are pointing to previous output ports.

Returns:
  • bool – Confirmation of pipeline validation.
  • str – Module name that is not valid.
validate_pipeline_module(name)[source]

Checks if the required data exists for the module with the given name.

Parameters:name (str) – Name of the module that is checked.
Returns:
  • bool – Confirmation of pipeline module validation.
  • str – Module name that is not valid.

Module contents