API Documentation

Processing

Functions supporting the most common operations of the Pix4Dengine Python SDK.

pix4dengine.create_project(proj_name, image_dirs, *, work_dir='.', on_start=<function _noop>, on_success=<function _noop>, external_geoloc=None, recursive_input_search=False, camera_config=None)

Create a new project.

Parameters:
  • proj_name (str) – project name; it must contain only alphanumeric, hyphen, or underscore characters, and must not start with a hyphen.
  • image_dirs (Union[str, Sequence[str]]) – path or list of paths to the directories containing the images.
  • work_dir (str) – the work directory, defining where the project is created.
  • on_start (Callable) – optional callback to execute before work is started.
  • on_success (Callable) – optional callback to execute after work is finished without errors.
  • external_geoloc (Union[ExternalGeolocation, Geotag, Sequence[Geotag], None]) – either an ExternalGeolocation instance, a Geotag instance, or a sequence of Geotag instances used to geolocate the project images.
  • recursive_input_search (bool) – if set to True, will search recursively for images in image_dirs.
  • camera_config (Union[CameraConfig, None]) – optional camera configuration.
Return type:

Project

Returns:

an instance of Project if the project is successfully created.

Raises:
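
Example

A minimal usage sketch; the project name and paths are illustrative:

import pix4dengine

project = pix4dengine.create_project(
    "example_project",
    "/data/images",
    work_dir="/data/projects",
)
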
pix4dengine.open_project(proj_name, work_dir='.')

Open an existing project.

Parameters:
  • proj_name (str) – project name; it must contain only alphanumeric, hyphen, or underscore characters, and must not start with a hyphen.
  • work_dir (str) – directory where the project is located.
Return type:

Project

Returns:

an instance of Project if the project is successfully opened.

Raises:
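
Example

A sketch reopening the project created above; the paths are illustrative:

import pix4dengine

project = pix4dengine.open_project("example_project", work_dir="/data/projects")
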
pix4dengine.login_seat(email, password, license_key=None)

Activate a license seat through the Pix4D licensing server.

Acquires a license seat using the provided login credentials. If a different user is already logged in, their license seat is released first. If a license_key is provided, a seat is requested from that specific license. Otherwise, the first Enterprise license found for the user is used.

Parameters:
  • email (str) – login email
  • password (str) – login password
  • license_key (Optional[str]) – optional key identifying the license to acquire a seat from
Raises:

LoginError on failure

Return type:

None
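
Example

A sketch of a licensed processing session; the credentials are placeholders:

import pix4dengine

pix4dengine.login_seat("user@example.com", "password")
try:
    ...  # create and process projects here
finally:
    pix4dengine.logout_seat()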

pix4dengine.logout_seat()

Deactivate the current license seat.

Raises:LogoutError on failure.
Return type:None
pix4dengine.get_auth_token(email, password)

Acquire the authorization token to access the Pix4D licensing system.

Note: The token is valid for roughly two days. An expired token may be used to log in once, in which case it is used internally to obtain a new token. Once renewed, the token returned by this function becomes invalid and cannot be used for further logins. Since the renewal of a given token is a one-time event, avoid reusing the same token across sessions. Once a session has been logged in, authentication renewal happens internally as needed, so a session can last much longer than the two-day validity of the initial token.

Return type:str
pix4dengine.login_with_token(token, license_key=None)

Acquire the authorization to use Pix4Dengine from the Pix4D licensing system.

Return type:None
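
Example

A sketch of token-based activation; the credentials are placeholders, and the token can be reused within the limits described in the note above:

import pix4dengine

token = pix4dengine.get_auth_token("user@example.com", "password")
pix4dengine.login_with_token(token)
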
pix4dengine.number_of_cuda_gpus()

Return the number of CUDA GPUs supported by Pix4Dengine.

Return type:int

Interface to a project

Module containing the Project class, used to define and configure a project.

class pix4dengine.project.Project(proj_name, work_dir='.', version='UNKNOWN')

Class representing a project.

Note: for the most common use cases, use the functions provided in the top-level pix4dengine module, such as create_project() and open_project().
add_3d_gcp_with_marks(gcp, marks)

Add a 3D GCP to the current project.

Parameters:
  • gcp (GCP3D) – a GCP defined by a GCP3D object.
  • marks (Sequence[Mark]) – marks associated to this 3D GCP, as a sequence of Mark objects.
Raises:
  • ValueError – the id or the label of the 3D GCP is already in use.
  • Exception – less than two marks were provided.
Return type:

None
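
Example

A sketch adding one GCP with two image marks, following the GCP3D and Mark examples later in this document; coordinates and file names are illustrative, and project is a Project instance obtained from create_project() or open_project():

from pix4dengine.utils.gcp import GCP3D, Mark

gcp = GCP3D(label="gcp1", id=1, lat=46.5191, lon=6.5668, alt=500.0)
marks = [
    Mark(photo="IMG_0001.JPG", x=1504, y=980),
    Mark(photo="IMG_0002.JPG", x=1622, y=1015),
]
project.add_3d_gcp_with_marks(gcp, marks)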

add_3d_gcps(gcps)

Add 3D GCPs to the current project.

Parameters:gcps (Sequence[GCP3D]) – a sequence of GCPs, each defined by a GCP3D object.
Raises:ValueError – the id or the label of the 3D GCP is already in use.
Return type:None
coord_sys

An instance of CoordSys.

get_indices()

Return a list of indices currently defined in the project.

Return type:List[Index]
Returns:a list of Index instances
get_list_of_3d_gcps()

Get the list of 3D GCPs in the project.

Return type:List[GCP3D]
get_list_of_images(remove_path=False)

Get the list of images in the project.

Return type:List[str]
get_marks(gcp_id)

Return the list of marks associated with a GCP.

Parameters:gcp_id (int) – the id number of a GCP
Return type:List[Mark]
Returns:a list of Mark
Raises:ValueError if gcp_id does not refer to an existing GCP.
get_option_value(option)

Get the value of an algorithmic or export option.

Parameters:option (Enum) – Either a pix4dengine.options.AlgoOption or a pix4dengine.options.ExportOption object.
Return type:Any
Returns:The value of the requested option.

Example

project.get_option_value(ExportOption.Densification.PCL_LAS) returns True if the point cloud is set to be exported in LAS format, False otherwise.

Note

The value is returned in the expected type, e.g., boolean options are returned as bool type.

Raises:KeyError – the option value could not be determined for the project.
get_processing_area()

Get the processing area of the project.

Returns:A tuple containing the list of pix4dengine.utils.project.PointXY and the pix4dengine.utils.project.MinMaxRange if a processing area is defined, or None otherwise.
Return type:Optional[tuple[Sequence[PointXY], MinMaxRange]]
logfile_path

Full path of processing log file.

File system path of the logfile produced by the standard C++ processing pipeline. It is largely equivalent to the Pix4Dmapper log file and can be very useful for debugging.

Note

It is separate from the Engine SDK Python logfile, which deals with higher-level, SDK-related and user-defined activity.

Return type:str
p4d_path

Full path to the p4d file.

Return type:str
proj_name

Project name.

Return type:str
proj_path

Project full path.

Return type:str
remove_3d_gcp(gcp_id)

Remove a 3D GCP and its associated marks.

Raises:ValueError if the GCP with the given id does not exist.
Return type:None
set_processing_area(points, height_interval=MinMaxRange(min=-9999.9, max=9999.9))

Add a sequence of points defining the processing area.

Any previous definition of the processing area is overwritten. points and height_interval are defined using x, y, z map coordinates.

Parameters:
  • points (Sequence[PointXY]) – a sequence of PointXY objects, defining the extent of the processing area horizontally.
  • height_interval (MinMaxRange) – a MinMaxRange object, defining the vertical extent of the processing area.
Raises:

ValueError – if less than 3 points are passed, or if the height_interval is badly defined.

Note

The points defining the horizontal processing area can be given in clockwise or counterclockwise order. However, the points must be given in either of the two orderings consistently. Consider, e.g., a processing area to be defined using four points (A, B, C, D) given in clockwise order. Equivalent definitions would be (D, C, B, A), (B, C, D, A), (C, B, A, D),… However, a sequence (A, C, B, D) is not equivalent, and would not be used correctly during processing. In short, connected points should be given one after the other.

Return type:None
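
Example

A sketch defining a rectangular processing area; coordinates are illustrative map coordinates, and the x/y keyword names of PointXY are assumed here:

from pix4dengine.utils.project import MinMaxRange, PointXY

corners = [  # given in a consistent (here counterclockwise) order
    PointXY(x=0.0, y=0.0),
    PointXY(x=100.0, y=0.0),
    PointXY(x=100.0, y=100.0),
    PointXY(x=0.0, y=100.0),
]
project.set_processing_area(corners, height_interval=MinMaxRange(min=400.0, max=600.0))
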
version

Pix4Dengine or Pix4Dmapper version used to create this project.

Return type:str
work_dir

Work directory.

Return type:str

Pipeline Interface

Pipeline submodule.

class pix4dengine.pipeline.EngineTask(step, project, *, on_start=<function _noop>, on_success=<function _noop>, on_error=<function _noop>, config=None, validator=None)

Task to be executed using Pix4Dengine.

Parameters:
  • step – the processing step to execute, passed as a ProcessingStep.
  • project – the project to be processed, identified by an instance of Project
  • on_start – optional callback to execute before work is started.
  • on_success – optional callback to execute after work is finished without errors.
  • on_error – optional callback to call in the event of an error.
  • config – dictionary containing configuration options for the task, in the form {option: value}, e.g., {AlgoOption.CameraCalibration.MATCH_GEOMETRICALLY_VERIFIED: False}.
  • validator – a callable that can be used for validating the project report, based on user-defined quality requirements. validator will be called after each processing step, passing it the quality Report as argument. If the return value of validator is False, a FailedValidation is raised, otherwise processing continues normally. If validator has a message attribute, this is used as the error message in the FailedValidation that is raised. By default, no check is performed and no exception is raised.
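
Example

A sketch running a single calibration step with one option overridden; project is a Project instance, and the option value is illustrative:

from pix4dengine.constants.processing import ProcessingStep
from pix4dengine.options import AlgoOption
from pix4dengine.pipeline import EngineTask

task = EngineTask(
    ProcessingStep.CALIB,
    project,
    config={AlgoOption.CameraCalibration.MATCH_GEOMETRICALLY_VERIFIED: True},
)
task.run()
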
get_config()

Return the configuration of this task.

Returns:
A dictionary containing configuration options for the task, in the form
{option: value}.
run(**kwargs)

Run the engine task.

Runs the on_start callbacks, the task’s work, followed by the on_success callbacks.

set_config(config)

Set the configuration of this task.

Parameters:config – dictionary containing configuration options for the task, in the form {option: value}, e.g., {AlgoOption.CameraCalibration.MATCH_GEOMETRICALLY_VERIFIED: False}.
update_config(config)

Add new options to or replace existing options in the configuration of this task.

Parameters:config – dictionary containing configuration options for the task, in the form {option: value}.
class pix4dengine.pipeline.Pipeline(project, *, algos=('CALIB', 'DEF_PROC_AREA', 'DENSE', 'ORTHO'), validators=None, max_cpus=-1, enable_cuda=True)

Standard Pix4D photogrammetry pipeline.

By default, this pipeline performs the camera calibration, adds a default processing area including all cameras, then runs the point cloud densification and orthomosaic steps. The tasks to be run can, however, be modified or removed by the user, and new ones can be defined and added. See set_default_proc_area() for details on the default processing area definition.

Initialise a pipeline.

Parameters:
  • project – the Project instance on which to operate.
  • algos – a sequence of tasks to be executed, identified by the strings listed in STD_ALGOS. Note that, for running a single task, one must still provide a sequence of strings as, e.g., algos=("CALIB", ). If not set, the default photogrammetry pipeline is executed.
  • validators – an optional dictionary whose keys are task names (see algos) and whose values are the callables described below. The pipeline executes the validator (callable value) after the algorithm specified by the key completes. The callable can be used for validating the project report, based on user-defined quality requirements. validators[algo] will be called with the quality report Report as argument. If the return value of validators[algo] is False, a FailedValidation is raised, otherwise processing continues normally. If validators[algo] has a message attribute, this is used as the error message in the FailedValidation that is raised. By default, no check is performed and no exception is raised.
  • max_cpus – limit to max_cpus the number of CPUs used.
  • enable_cuda – if False, disable the usage of CUDA during calibration.

Note

The pipeline is initialised by default with the configuration defined in pix4dengine.pipeline.templates.Default.
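
Example

A sketch running the default pipeline with a limited CPU count; project is a Project instance:

from pix4dengine.pipeline import Pipeline

pipeline = Pipeline(project, max_cpus=4)
pipeline.run()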

STD_ALGOS = ('CALIB', 'DEF_PROC_AREA', 'DENSE', 'ORTHO')
run()

Run the pipeline.

pix4dengine.pipeline.apply_template(pipeline, *templates)

Apply configuration templates to a pipeline.

Apply one or more configuration templates to a pipeline. Pre-defined templates are available in pix4dengine.pipeline.templates. If more than one template is passed, they are applied in order. If the same option is configured by more than one template, the value from the last template applied takes effect.

Parameters:
  • pipeline (Pipeline) – the pipeline to which the template configuration is applied.
  • templates – one or more configuration templates. Pre-defined templates are available from pix4dengine.pipeline.templates.

Note

If an algorithm referenced by a template is not found in the pipeline, no error is raised but a warning is logged.
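
Example

A sketch applying two of the pre-defined templates in order; project is a Project instance:

from pix4dengine.pipeline import Pipeline, apply_template
from pix4dengine.pipeline import templates

pipeline = Pipeline(project)
apply_template(pipeline, templates.Maps3D, templates.MeshLowRes)
pipeline.run()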

pix4dengine.pipeline.pipeline_from_template(project, *templates, **kwargs)

Create a pipeline from one or more templates.

Parameters:
  • project – the Project instance on which to operate.
  • templates – one or more configuration templates that define the sequence of algorithms to run and their configuration. Pre-defined templates are available from pix4dengine.pipeline.templates.
  • kwargs – arguments needed to instantiate pix4dengine.pipeline.Pipeline, except algos, which is defined by the templates.
Raises:

ValueError if any of the templates configure an unknown task.
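
Example

A sketch building and running a pipeline directly from a pre-defined template; project is a Project instance:

from pix4dengine.pipeline import pipeline_from_template
from pix4dengine.pipeline import templates

pipeline = pipeline_from_template(project, templates.RapidMaps3D, max_cpus=4)
pipeline.run()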

Pipeline configuration templates

Pipeline configuration templates, tools and template definitions.

The Default template is applied on every new pipeline. The options in other templates are usually defined on top of a “base” TemplateOptions, i.e., they set all the options defined by the base, with the modifications shown in the new template. Some templates only configure a single algo in a Pipeline, or a part of it. To apply the templates, use pix4dengine.pipeline.apply_template().

class pix4dengine.pipeline.templates.TemplateOptions(base=None, options=None)

Store options in a dictionary with respect to a base.

The purpose of this class is to define the options of a pipeline template, optionally updating (a copy of) another TemplateOptions object (the base). This allows the options to be defined with less clutter, as only the options that differ from the base need to be passed.

Parameters:
  • base – the optional TemplateOptions to be updated.
  • options – a dictionary of options to be applied to a pipeline algo.
class pix4dengine.pipeline.templates.Default

Default options for a pipeline.

Note

This is the default configuration set when instantiating a pix4dengine.pipeline.Pipeline object. Other templates are defined as a difference with respect to this.

CALIB = < {CameraCalibration.KEYPT_SEL_METHOD: Automatic, CameraCalibration.IMAGE_SCALE: 1, CameraCalibration.MATCH_TIME_NB_NEIGHBOURS: 2, CameraCalibration.MATCH_USE_TRIANGULATION: True, CameraCalibration.MATCH_RELATIVE_DISTANCE_IMAGES: 0.0, CameraCalibration.MATCH_IMAGE_SIMILARITY_MAX_PAIRS: 1, CameraCalibration.MATCH_MTP_MAX_IMAGE_PAIR: 5, CameraCalibration.MATCH_TIME_MULTI_CAMERA: False, CameraCalibration.KEYPT_NUMBER: 10000, CameraCalibration.MATCH_GEOMETRICALLY_VERIFIED: False, CameraCalibration.CALIBRATION_METHOD: Standard, CameraCalibration.CALIBRATION_INT_PARAM_OPT: All, CameraCalibration.CALIBRATION_EXT_PARAM_OPT: All, CameraCalibration.REMATCH_STRATEGY: Auto, CameraCalibration.REMATCH: True, CameraCalibration.ORTHOMOSAIC_IN_REPORT: True, CameraCalibration.UNDISTORTED_IMAGES: False} >
DENSE = < {Densification.PCL_IMAGE_MULTISCALE: True, Densification.PCL_IMAGE_SCALE: 1/2, Densification.PCL_DENSITY: Optimal, Densification.PCL_MIN_NO_MATCHES: 3, Densification.PCL_XYZ_DELIMITER: Space, Densification.PCL_MERGE_TILES: False, Densification.PCL_USE_PROCESSING_AREA: True, Densification.PCL_USE_ANNOTATIONS: True, Densification.PCL_AUTO_LIMIT_CAMERA_DEPTH: False, Densification.PCL_CLASSIFY: False, Densification.PCL_WINDOWS_SIZE: 7, Mesh.MAX_OCTREE_DEPTH: 12, Mesh.TEXTURE_SIZE: 8192, Mesh.DECIMATION_CRITERIA: Quantitative, Mesh.MAX_TRIANGLES: 1000000, Mesh.DECIMATION_STRATEGY: Sensitive, Mesh.TEXTURE_COLOR_BALANCING: False, Mesh.TILED_OBJ: False, Mesh.SAMPLE_DENSITY_DIVIDER: 1, Densification.PCL_XYZ: False, Densification.PCL_LAZ: False, Densification.PCL_PLY: False, Densification.PCL_LAS: True, Mesh.DXF: False, Mesh.OBJ: True, Mesh.FBX: True, Mesh.PLY: False} >
ORTHO = < {Index.RELATIVE_RESOLUTION: 1, Index.POINT_SHP_GRID_SIZE: 200, Index.DOWNSAMPLING_METHOD: Gauss, Index.POLYGON_SHP_GRID_SIZE: 400, Ortho.DSM_GRID_SPACING: 100, Ortho.MOSAIC_NO_TRANSPARENCY: False, Ortho.MOSAIC_RELATIVE_RESOLUTION: 1, Ortho.DSM_NOISE_FILTER: True, Ortho.DSM_FILTER_SMOOTHING: True, Ortho.DSM_FILTER_SMOOTHING_TYPE: Sharp, Ortho.DSM_XYZ_DELIMITER: Space, Ortho.DTM_RELATIVE_RESOLUTION: 5, Ortho.CONTOUR_BASE: 0.0, Ortho.CONTOUR_ELEVATION_INTERVAL: 10.0, Ortho.CONTOUR_RESOLUTION: 100.0, Ortho.CONTOUR_MIN_LINE_SIZE: 20, Index.REFLECTANCE: False, Index.POINT_SHP: False, Index.REFLECTANCE_MERGED: False, Index.POLYGON_SHP: False, Index.INDEX_TIFF: True, Index.INDEX_TIFF_MERGED: True, Ortho.CONTOUR_SHP: False, Ortho.CONTOUR_PDF: False, Ortho.CONTOUR_DXF: False, Ortho.DTM_TIFF_MERGED: True, Ortho.MOSAIC_TIFF: True, Ortho.MOSAIC_TIFF_MERGED: True, Ortho.MOSAIC_KML: False, Ortho.DSM_GRID_LAZ: False, Ortho.DSM_GRID_LAS: False, Ortho.DSM_TIFF_MERGED: True, Ortho.DSM_XYZ: False, Ortho.DTM_TIFF: False, Ortho.DSM_TIFF: True} >
class pix4dengine.pipeline.templates.Maps3D

High-quality 3D map from aerial images.

Suited for both nadir and oblique flights using a grid flight plan with high overlap. The configuration aims at reliable results rather than processing speed.

Can produce outputs for point cloud, 3D mesh, DSM and orthomosaic.

CALIB = < base=Default, changed={} >
DENSE = < base=Default, changed={} >
ORTHO = < base=Default, changed={} >
class pix4dengine.pipeline.templates.Model3D

High-quality 3D model.

Suited for oblique flights or terrestrial images, with high overlap. The configuration aims at reliable results rather than processing speed.

Can produce outputs for point cloud and 3D textured mesh.

CALIB = < base=Default, changed={CameraCalibration.MATCH_TIME_NB_NEIGHBOURS: 4, CameraCalibration.MATCH_USE_TRIANGULATION: False, CameraCalibration.MATCH_RELATIVE_DISTANCE_IMAGES: 5.0, CameraCalibration.MATCH_IMAGE_SIMILARITY_MAX_PAIRS: 4, CameraCalibration.MATCH_MTP_MAX_IMAGE_PAIR: 50, CameraCalibration.ORTHOMOSAIC_IN_REPORT: False} >
DENSE = < base=Default, changed={Densification.PCL_AUTO_LIMIT_CAMERA_DEPTH: True, Densification.PCL_WINDOWS_SIZE: 9} >
class pix4dengine.pipeline.templates.AgriMultispectral

High-quality multispectral map.

Suited for nadir flights using a multispectral camera (Sequoia, Micasense RedEdge,…). The configuration aims at reliable results rather than processing speed.

Can produce outputs for reflectance, index, and application maps.

CALIB = < base=Default, changed={CameraCalibration.KEYPT_SEL_METHOD: CustomNumberOfKeypoints, CameraCalibration.MATCH_GEOMETRICALLY_VERIFIED: True, CameraCalibration.CALIBRATION_METHOD: Alternative, CameraCalibration.REMATCH_STRATEGY: Custom} >
DENSE = < base=Default, changed={Densification.PCL_DENSITY: Low, Densification.PCL_LAS: False, Mesh.OBJ: False, Mesh.FBX: False} >
ORTHO = < base=Default, changed={Index.REFLECTANCE: True, Index.POLYGON_SHP: True, Ortho.MOSAIC_TIFF: False, Ortho.MOSAIC_TIFF_MERGED: False, Ortho.DSM_TIFF_MERGED: False, Ortho.DSM_TIFF: False, Index.INDICES: [Index(name='ndvi', formula='(nir - red) / (nir + red)', enabled=True)]} >
class pix4dengine.pipeline.templates.AgriRGB

High-quality orthomosaic for precision agriculture.

Suited for nadir flights over flat terrain with an RGB camera, typically one designed for agriculture (e.g., Sequoia RGB).

Can produce outputs for orthomosaic.

CALIB = < base=Default, changed={CameraCalibration.MATCH_GEOMETRICALLY_VERIFIED: True, CameraCalibration.CALIBRATION_METHOD: Alternative} >
DENSE = < base=Default, changed={Densification.PCL_IMAGE_SCALE: 1/4, Densification.PCL_DENSITY: Low, Densification.PCL_LAS: False, Mesh.OBJ: False, Mesh.FBX: False} >
ORTHO = < base=Default, changed={Ortho.DSM_TIFF_MERGED: False, Ortho.DSM_TIFF: False} >
class pix4dengine.pipeline.templates.AgriModifiedCamera

High-quality map for precision agriculture.

Suited for nadir flights with a modified RGB camera.

Can produce outputs for reflectance, index, and application maps.

CALIB = < base=AgriRGB, changed={} >
DENSE = < base=AgriRGB, changed={} >
ORTHO = < base=AgriRGB, changed={Index.REFLECTANCE: True, Index.POLYGON_SHP: True, Ortho.MOSAIC_TIFF: False, Ortho.MOSAIC_TIFF_MERGED: False, Index.INDICES: [Index(name='ndvi', formula='(nir - red) / (nir + red)', enabled=True)]} >
class pix4dengine.pipeline.templates.RapidMaps3D

Rapid generation of 3D map from aerial images.

Suited for rapid assessment of the acquired dataset.

Can produce outputs for point cloud, 3D mesh, DSM and orthomosaic.

CALIB = < base=Maps3D, changed={CameraCalibration.IMAGE_SCALE: 0.25} >
DENSE = < base=Maps3D, changed={Densification.PCL_IMAGE_SCALE: 1/4, Densification.PCL_DENSITY: Low, Densification.PCL_LAS: False, Mesh.OBJ: False, Mesh.FBX: False} >
ORTHO = < base=Maps3D, changed={Ortho.MOSAIC_RELATIVE_RESOLUTION: 4} >
class pix4dengine.pipeline.templates.RapidModel3D

Rapid 3D model generation.

Suited for rapid assessment of the acquired dataset.

Can produce outputs for point cloud and 3D textured mesh.

CALIB = < base=Model3D, changed={CameraCalibration.IMAGE_SCALE: 0.25} >
DENSE = < base=Model3D, changed={Densification.PCL_IMAGE_SCALE: 1/4, Densification.PCL_DENSITY: Low, Mesh.TEXTURE_SIZE: 2048, Mesh.MAX_TRIANGLES: 100000, Densification.PCL_LAS: False, Mesh.OBJ: False, Mesh.FBX: False} >
class pix4dengine.pipeline.templates.RapidAgriRGB

Rapid generation of orthomosaic.

Suited for rapid assessment of the acquired dataset.

Can produce outputs for orthomosaic.

CALIB = < base=AgriRGB, changed={CameraCalibration.IMAGE_SCALE: 0.25} >
ORTHO = < base=AgriRGB, changed={Ortho.MOSAIC_RELATIVE_RESOLUTION: 4} >
class pix4dengine.pipeline.templates.RapidAgriModifiedCamera

Rapid map for precision agriculture.

Suited for rapid quality assessment of the dataset acquired by a nadir flight with a modified RGB camera.

Can produce outputs for reflectance, index, and application maps.

CALIB = < base=AgriModifiedCamera, changed={CameraCalibration.IMAGE_SCALE: 0.25} >
ORTHO = < base=AgriModifiedCamera, changed={} >
class pix4dengine.pipeline.templates.ThermalCamera

High-quality temperature map.

Suited for nadir flights with a thermal camera (e.g., FLIR).

Can produce outputs for thermal index map.

CALIB = < base=Default, changed={CameraCalibration.IMAGE_SCALE: 2, CameraCalibration.CALIBRATION_METHOD: Alternative} >
DENSE = < base=Default, changed={Densification.PCL_IMAGE_SCALE: 1, Densification.PCL_LAS: False, Mesh.OBJ: False, Mesh.FBX: False} >
ORTHO = < base=Default, changed={Index.REFLECTANCE: True, Ortho.MOSAIC_TIFF: False, Ortho.MOSAIC_TIFF_MERGED: False, Ortho.DSM_TIFF_MERGED: False, Ortho.DSM_TIFF: False} >
class pix4dengine.pipeline.templates.AerialCalibMatch

Optimize the pair matching for aerial grid or corridor flight paths.

Only applies to the “CALIB” algo of a pipeline.

CALIB = < {CameraCalibration.MATCH_TIME_NB_NEIGHBOURS: 2, CameraCalibration.MATCH_USE_TRIANGULATION: True, CameraCalibration.MATCH_RELATIVE_DISTANCE_IMAGES: 0.0, CameraCalibration.MATCH_MTP_MAX_IMAGE_PAIR: 5, CameraCalibration.MATCH_IMAGE_SIMILARITY_MAX_PAIRS: 1} >
class pix4dengine.pipeline.templates.FreeFlightCalibMatch

Optimize the pair matching for free-flight paths or terrestrial images.

Only applies to the “CALIB” algo of a pipeline.

CALIB = < {CameraCalibration.MATCH_TIME_NB_NEIGHBOURS: 4, CameraCalibration.MATCH_USE_TRIANGULATION: False, CameraCalibration.MATCH_RELATIVE_DISTANCE_IMAGES: 5.0, CameraCalibration.MATCH_MTP_MAX_IMAGE_PAIR: 50, CameraCalibration.MATCH_IMAGE_SIMILARITY_MAX_PAIRS: 4} >
class pix4dengine.pipeline.templates.MeshLowRes

Configure the mesh generation for low resolution, fast results.

Only applies to the “DENSE” algo of a pipeline.

DENSE = < {Mesh.MAX_OCTREE_DEPTH: 10, Mesh.TEXTURE_SIZE: 4096, Mesh.MAX_TRIANGLES: 100000} >
class pix4dengine.pipeline.templates.MeshNormalRes

Configure the mesh generation for intermediate resolution and speed.

Only applies to the “DENSE” algo of a pipeline.

DENSE = < {Mesh.MAX_OCTREE_DEPTH: 12, Mesh.TEXTURE_SIZE: 8192, Mesh.MAX_TRIANGLES: 1000000} >
class pix4dengine.pipeline.templates.MeshHighRes

Configure the mesh generation for high resolution, slow results.

Only applies to the “DENSE” algo of a pipeline.

DENSE = < {Mesh.MAX_OCTREE_DEPTH: 14, Mesh.TEXTURE_SIZE: 16384, Mesh.MAX_TRIANGLES: 5000000} >

Input and output

External image georeferencing

Module for data structures needed for image geotagging.

class pix4dengine.geotag.ExternalGeolocation

Data class for a CSV file used to geolocate project images.

Parameters:
  • file_format – a ExternalGeolocationFormat instance describing the format of the CSV file.
  • file_path – path to the CSV file.

Create new instance of ExternalGeolocation(file_format, file_path)

file_format

Alias for field number 0

file_path

Alias for field number 1

class pix4dengine.geotag.ExternalGeolocationFormat

Formats of geolocation data in an external geolocation file.

LAT_LONG = "Latitude, Longitude, Altitude" file format
LONG_LAT = "Longitude, Latitude, Altitude" file format
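
Example

A sketch geolocating project images from a CSV file at project creation; the paths are illustrative:

import pix4dengine
from pix4dengine.geotag import ExternalGeolocation, ExternalGeolocationFormat

geoloc = ExternalGeolocation(
    file_format=ExternalGeolocationFormat.LAT_LONG,
    file_path="/data/geotags.csv",
)
project = pix4dengine.create_project("geo_project", "/data/images", external_geoloc=geoloc)
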
class pix4dengine.geotag.Geotag

Geographical identification data of an image.

The data includes GPS coordinates, GPS accuracies, and camera orientation angles.

Parameters:
  • image – image file name
  • latitude – GPS latitude, [degrees]
  • longitude – GPS longitude, [degrees]
  • altitude – GPS altitude, [m]
  • hor_accuracy – (optional) horizontal GPS accuracy, [m]
  • ver_accuracy – (optional) vertical GPS accuracy, [m]
  • omega – (optional) angle to rotate the (X,Y,Z) geodetic coordinate system around the X axis in order to align it with the image coordinate system, [degrees]
  • phi – (optional) angle to rotate the (X,Y,Z) geodetic coordinate system around the Y axis in order to align it with the image coordinate system, [degrees]
  • kappa – (optional) angle to rotate the (X,Y,Z) geodetic coordinate system around the Z axis in order to align it with the image coordinate system, [degrees]

Create new instance of Geotag(image, latitude, longitude, altitude, hor_accuracy, ver_accuracy, omega, phi, kappa)

altitude

Alias for field number 3

hor_accuracy

Alias for field number 4

image

Alias for field number 0

kappa

Alias for field number 8

latitude

Alias for field number 1

longitude

Alias for field number 2

omega

Alias for field number 6

phi

Alias for field number 7

ver_accuracy

Alias for field number 5
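
Example

A sketch passing per-image geotags at project creation; values are illustrative, and the optional accuracy and orientation fields are assumed to have defaults when omitted:

import pix4dengine
from pix4dengine.geotag import Geotag

tags = [
    Geotag(image="IMG_0001.JPG", latitude=46.5191, longitude=6.5668, altitude=520.0),
    Geotag(image="IMG_0002.JPG", latitude=46.5193, longitude=6.5671, altitude=521.0),
]
project = pix4dengine.create_project("tagged_project", "/data/images", external_geoloc=tags)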

Processing constants

Constants used to define and configure the processing of a project.

class pix4dengine.constants.processing.ExternalGeolocation

[Deprecated] Data class to use a CSV file to geolocate project images.

This class is deprecated in this module and will be removed in a future release. Use the ExternalGeolocation class from the geotag module instead.

Parameters:
  • file_format – a ExternalGeolocationFormat instance describing the format of the CSV file.
  • file_path – path to the CSV file.

Create new instance of ExternalGeolocation(file_format, file_path)

file_format

Alias for field number 0

file_path

Alias for field number 1

class pix4dengine.constants.processing.ExternalGeolocationFormat

[Deprecated] External geolocation formats.

This class is deprecated in this module and will be removed in a future release. Use the ExternalGeolocationFormat class from the geotag module instead.

LAT_LONG = "Latitude, Longitude, Altitude" file format
LONG_LAT = "Longitude, Latitude, Altitude" file format
class pix4dengine.constants.processing.ProcessingStep

Processing steps.

CALIB = 'camera calibration step'
DENSE = 'point cloud densification step'
ORTHO = 'orthomosaic step'

Processing options

Options for configuring the algorithm, and for setting and accessing the output files.

class pix4dengine.options.AlgoOption

Algorithmic configuration options.

For further information, please refer to the support pages.

class CameraCalibration

Algorithmic options for the camera calibration (initial step).

CALIBRATION_EXT_PARAM_OPT = type=str, description= External camera parameters (position and orientation) to optimize. "All" (default) optimizes the rotation and position of the camera as well as the linear rolling shutter in case the camera model follows the linear rolling shutter model. "None" does not optimize the external camera parameters. This only makes sense if "GeolocationAndOrientation" method is used for calibration, and if geolocation and orientation are known and precise. "Orientation" optimizes only the orientation of the cameras. This only makes sense if "GeolocationAndOrientation" method is used for calibration, and if geolocation is known and accurate, but orientation is not. , allowed_values=(None, Orientation, All)
CALIBRATION_INT_PARAM_OPT = type=str, description= Internal camera parameters to optimize. "All" (default) optimizes all the internal camera parameters. It is recommended to use this method when processing images taken with small UAVs, whose cameras are more sensitive to temperature variations and vibrations. "None" does not optimize any of the internal camera parameters. It is recommended for large cameras, if already calibrated, and if these calibration parameters are used for processing. "Leading" optimizes the most important internal camera parameters. This option is used to process some cameras, e.g., with a slow rolling shutter speed. The most important internal camera parameters for perspective lens camera models are the focal length and the first two radial distortion parameters. For fisheye lens cameras, they are the polynomial coefficients. "All Prior" forces the optimal internal parameters to be close to the initial values, useful for difficult to calibrate projects, where however the initial camera parameters are known to be reliable. , allowed_values=(None, Leading, All, AllPrior)
CALIBRATION_METHOD = type=str, description= Method for the optimization of camera parameters. "Standard" is the default, adequate in most cases. The "Alternative" method is optimized for aerial nadir images with accurate geolocation, low texture content and relatively flat terrain (e.g., fields). This method requires less than 5% oblique images (>35 deg) and at least 75% images geolocated in the dataset. "GeolocationAndOrientation" method is optimized for projects with very accurate image geolocation and orientation. This method requires all images to be geolocated and oriented. , allowed_values=(Standard, Alternative, GeolocationAndOrientation)
IMAGE_SCALE = type=str, description= Image size at which the keypoints are extracted, in comparison to the initial size of the images. A smaller image scale produces fast, less-precise results. An image scale of 2 may have a positive impact on the results quality when using low-resolution images (e.g., thermal cameras). , allowed_values=(0.125, 0.25, 0.5, 1, 2)
KEYPT_NUMBER = type=int, min=100, max=1000000, units=Unitless item count, description= Maximum number of keypoints to be extracted per image. This is only used if AlgoOption.CameraCalibration.KEYPT_SEL_METHOD is set to "CustomNumberOfKeypoints".
KEYPT_SEL_METHOD = type=str, description= If set to "CustomNumberOfKeypoints", AlgoOption.CameraCalibration.KEYPT_NUMBER is used. The keypoint selection is otherwise performed automatically. , allowed_values=(Automatic, CustomNumberOfKeypoints)
MATCH_ABSOLUTE_DISTANCE_IMAGES = type=float, min=0.0, max=10000.0, units=Length units of the coordinate system in use [m] or [ft], description= Match images closer to each other than this distance. If set to 0.0, the distance is not used. Note that if AlgoOption.CameraCalibration.MATCH_RELATIVE_DISTANCE_IMAGES is also set, whichever is set last determines the configuration.
MATCH_GEOMETRICALLY_VERIFIED = type=bool, description= If True, geometrically inconsistent matches are discarded. This check adds substantial processing overhead but produces more robust results. Useful when many similar features are present throughout the project: rows of plants in a field, windows on a building wall, etc.
MATCH_IMAGE_SIMILARITY_MAX_PAIRS = type=int, min=0, max=50, units=Unitless item count, description= Match image pairs based on image similarity. The number defines the maximum number of image pairs that can be matched based on similarity. Zero disables matching based on similarity.
MATCH_MTP_MAX_IMAGE_PAIR = type=int, min=0, max=100, units=Unitless item count, description= Match image pairs based on shared manual tie points (MTPs). The number defines the maximum number of image pairs that can be connected by a single MTP. Zero disables matching based on MTPs.
MATCH_RELATIVE_DISTANCE_IMAGES = type=float, min=0.0, max=100.0, units=Relative units, description= Match images closer to each other than the product of the average image distance and of this factor. If set to 0.0, the distance is not used. Note that if AlgoOption.CameraCalibration.MATCH_ABSOLUTE_DISTANCE_IMAGES is also set, whichever is set last determines the configuration.
MATCH_TIME_MULTI_CAMERA = type=bool, description= Match the images from multiple flights using time information. The option is useful for flights where no geolocation is available, but where the same flight plan over the same area is repeated multiple times with different camera models.
MATCH_TIME_NB_NEIGHBOURS = type=int, min=0, max=50, units=Unitless item count, description= Match images according to their time of capture. The number defines how many consecutive images are considered for pair matching. Zero disables matching based on the time of capture.
MATCH_USE_TRIANGULATION = type=bool, description= Match image pairs by triangulating the image geolocation. This option only makes sense for aerial, geolocated images.
ORTHOMOSAIC_IN_REPORT = type=bool, description=Generate a low-resolution orthomosaic to include in the quality report.
REMATCH = type=bool, description= Enable rematching if AlgoOption.CameraCalibration.REMATCH_STRATEGY is set to "Custom".
REMATCH_STRATEGY = type=str, description= Add more matches after the first part of the initial processing. This usually improves the quality of the results. "Automatic" (default) enables rematching only for projects with less than 500 images. "Custom" allows the user to manually control if rematch is performed using the AlgoOption.CameraCalibration.REMATCH toggle. , allowed_values=(Auto, Custom)
class Densification

Algorithmic options for the point cloud densification.

PCL_AUTO_LIMIT_CAMERA_DEPTH = type=bool, description= If True, avoid reconstructing background objects, useful for 3D models of objects.
PCL_CLASSIFY = type=bool, description=Enable the point cloud classification.
PCL_DENSITY = type=str, description= Density of the densified point cloud. "Optimal": compute a 3D point every 4 / PCL_IMAGE_SCALE pixel (default, recommended). For example, if the PCL_IMAGE_SCALE is 1/2, one 3D point is computed every (4 / 0.5) = 8 pixels of the original image. "High": a 3D point is computed every PCL_IMAGE_SCALE pixel. The result is an oversampled point cloud that requires several times more memory and processing time to produce. Usually, this option does not significantly improve the results. "Low": a 3D point is computed for every (16 / PCL_IMAGE_SCALE) pixel. The resulting point cloud is less dense but can be produced faster and using less memory. , allowed_values=(High, Optimal, Low)
PCL_FLAG_OUTLIERS = type=bool, description= Flag outlier points in a densified point cloud. When set to True, a statistical analysis of a point's neighborhood is run and the points that don't meet a given criterion are flagged as outliers. The algorithm computes the average distance from each point to its neighbors and flags a point as an outlier if that average distance is larger than the overall mean of all average distances plus some margin.
PCL_FLAG_OUTLIERS_NEIGHBOR_COUNT = type=int, min=0, description= A tuning parameter for the outlier detection algorithm. The number of neighbors of a point to consider when computing the average distance from that point to its neighbors.
PCL_FLAG_OUTLIERS_SIGMA_COEF = type=float, min=0.0, description= A tuning parameter for the outlier detection algorithm that controls the threshold on the average distance from a point to its neighbors. The threshold is defined as the mean of all average distances plus a certain number of times the standard deviation of all average distances, i.e. t = x + SIGMA_COEF * σ.
PCL_IMAGE_MULTISCALE = type=bool, description= Compute additional 3D points on multiple image scales, starting with the scale chosen in AlgoOption.Densification.PCL_IMAGE_SCALE down to the 1/8 scale. This is useful for computing additional 3D points in vegetation areas while keeping details in areas without vegetation. This option can however produce additional noise in the point cloud, and cause artifacts in the mesh.
PCL_IMAGE_SCALE = type=str, description= Scale of the images at which additional 3D points are computed. "1": use original image size to compute additional 3D points. More points are computed than with other values, especially in areas where features can be easily matched (e.g. cities, rocks, etc.). This option may require several times more memory and processing time than lower image scales. It may only be useful with feature-poor images (e.g., thermal). "1/2": half size images are used to compute additional 3D points (default, recommended). "1/4": quarter size images are used to compute additional 3D points. While this setting generally produces fewer points than higher scales, it generates more points in areas with features that cannot easily be matched, such as vegetation. This value is thus recommended for projects with vegetation. "1/8": eighth size images are used to compute additional 3D points. This value has similar properties to "1/4", but overall produces fewer points. , allowed_values=(1, 1/2, 1/4, 1/8)
PCL_MERGE_TILES = type=bool, description=Merge the point cloud tiled output into one file.
PCL_MIN_NO_MATCHES = type=int, min=2, max=6, units=Unitless item count, description= Minimum number of valid re-projections that a 3D point must have on images to be kept in the point cloud. A value of 2 is useful for projects with small overlap. 3 is the default value. Higher values can reduce noise, but also decrease the number of 3D points computed. Values of 5 or 6 are recommended for oblique images with very high overlap.
PCL_USE_ANNOTATIONS = type=bool, description=If image annotations are present, use them to filter the point cloud and mesh.
PCL_USE_PROCESSING_AREA = type=bool, description=Use the processing area (if defined) for point cloud and mesh generation.
PCL_WINDOWS_SIZE = type=int, units=[pixel], description= Size of the square grid used for matching the densified points in the original images. A value of 7 is suggested for aerial nadir images, while 9 is suggested for oblique and terrestrial images. A value of 9 is useful for more accurate positioning of the densified points in the original images. , allowed_values=(7, 9)
PCL_XYZ_DELIMITER = type=str, description=Delimiter used for exporting the point cloud in xyz text format., allowed_values=(Space, Tab, Comma, Semicolon)
class Index

Algorithmic options for the index generation.

ABSOLUTE_RESOLUTION = type=float, min=0.0, units=Length units of the coordinate system in use [m] or [ft], description= Absolute resolution of the index in cm. Note that if AlgoOption.Index.RELATIVE_RESOLUTION is also set, whichever is set last determines the configuration.
DOWNSAMPLING_METHOD = type=str, description= If a resolution larger than 1 GSD is chosen, define which downsampling method to use. "Gauss": use a Gaussian filter. For all other options, the pixel value is computed applying one of the following functions in the window centered at the pixel. "Average": mean, "Median": median, "75%Quantile": 75% quantile, "Min": minimum value, "Max": maximum value. , allowed_values=(Gauss, Median, 75%Quantile, Average, Min, Max)
INDICES = type=list, description=A list of indices used to produce index maps.
POINT_SHP_GRID_SIZE = type=int, min=1, max=10000, units=[cm/grid], description=Grid size of the point shapefile (if generated).
POLYGON_SHP_GRID_SIZE = type=int, min=1, max=10000, units=[cm/grid], description=Grid size of the polygon shapefile (if generated).
RELATIVE_RESOLUTION = type=int, min=0, units=Relative units, description= Resolution of the index, computed multiplying the project GSD with this factor. Note that if AlgoOption.Index.ABSOLUTE_RESOLUTION is also set, whichever is set last determines the configuration.
class Mesh

Algorithmic options for the mesh generation.

DECIMATION_CRITERIA = type=str, description= In the first phase of mesh creation, too many triangles are created. This parameter controls how the spurious triangles are discarded. "Quantitative": triangles are discarded until their total number reaches the desired count, controlled by AlgoOption.Mesh.MAX_TRIANGLES. "Qualitative": triangles are discarded trying to maintain the original geometry. The decimation strategy is further controlled by AlgoOption.Mesh.DECIMATION_STRATEGY. , allowed_values=(Quantitative, Qualitative)
DECIMATION_STRATEGY = type=str, description= Strategy used for triangle decimation if "Qualitative" is used as AlgoOption.Mesh.DECIMATION_CRITERIA. "Sensitive": triangles are selected maintaining the original geometry of the 3D mesh as a priority. "Aggressive": triangles are selected to obtain a lower number of triangles. , allowed_values=(Sensitive, Aggressive)
LOD_NUM_LEVELS = type=int, min=1, max=7, description=Number of levels in the LOD mesh.
LOD_TEXTURE_QUALITY = type=str, description= Texture quality of the LOD mesh. Values "Low", "Medium" and "High" correspond to LOD node texture sizes of 512x512, 1024x1024, and 4096x4096 respectively. , allowed_values=(Low, Medium, High)
MAX_OCTREE_DEPTH = type=int, min=5, max=20, units=Unitless item count, description= To create the 3D textured mesh, the project is iteratively subdivided into 8 subregions. These are organized in a tree structure, and this parameter indicates how many such iterations should be performed. Higher values generate higher resolution meshes at the cost of longer computing times.
MAX_TRIANGLES = type=int, min=100, max=20000000, units=Unitless item count, description= Maximum number of triangles in the final 3D mesh, if "Quantitative" is used as AlgoOption.Mesh.DECIMATION_CRITERIA. The number also depends on the geometry and the size of the project.
SAMPLE_DENSITY_DIVIDER = type=int, min=1, max=5, units=Unitless item count, description= Values higher than 1 generate more mesh triangles in regions with lower point density. This is useful to avoid holes in the meshes, but can also increase noise in the mesh.
TEXTURE_COLOR_BALANCING = type=bool, description= Use a color balancing algorithm when generating the texture of the mesh. The algorithm ensures uniform mesh colors.
TEXTURE_SIZE = type=int, units=[pixel], description=Pixel size of the mesh texture., allowed_values=(256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072)
TILED_OBJ = type=bool, description=Allow tiling of the output obj mesh file.
class Ortho

Algorithmic options for the orthomosaic generation.

CONTOUR_BASE = type=float, min=0, max=10000, units=Length units of the coordinate system in use [m] or [ft], description=Reference altitude from which contour lines are generated upwards and downwards.
CONTOUR_ELEVATION_INTERVAL = type=float, min=0.001, max=10000.0, units=Length units of the coordinate system in use [m] or [ft], description=Interval at which contour lines are computed.
CONTOUR_MIN_LINE_SIZE = type=int, min=4, max=1000, units=Unitless item count, description= Minimum number of vertices that a contour can have; higher values reduce noise.
CONTOUR_RESOLUTION = type=float, min=0.001, max=10000.0, units=[cm], description=Horizontal distance at which altitude values are sampled for a contour.
DSM_FILTER_SMOOTHING = type=bool, description= Filter small artifacts that can still be present in the DSM, even after noise removal. The filtering approach is controlled by AlgoOption.Ortho.DSM_FILTER_SMOOTHING_TYPE.
DSM_FILTER_SMOOTHING_TYPE = type=str, description= Smoothing type performed if AlgoOption.Ortho.DSM_FILTER_SMOOTHING is True. "Sharp": try to preserve the orientation of surfaces and sharp features such as building corners. Only approximately planar areas are flattened. "Smooth": assume that sharp features are due to noise only, and that they should thus be removed. Most surfaces are heavily simplified. "Medium": intermediate setting, trying to preserve sharp features while flattening roughly planar areas. , allowed_values=(Smooth, Medium, Sharp)
DSM_GRID_SPACING = type=int, min=1, max=1000, units=[cm], description=Distance between two 3D points in the Grid DSM.
DSM_NOISE_FILTER = type=bool, description= Point clouds can be noisy. Filtering the point cloud before computing the DSM corrects the altitude of noisy points using the median altitude of the neighbors.
DSM_XYZ_DELIMITER = type=str, description=Delimiter used for exporting the DSM in xyz text format., allowed_values=(Space, Tab, Comma, Semicolon)
DTM_ABSOLUTE_RESOLUTION = type=float, min=5.0, max=10000.0, units=Length units of the coordinate system in use [m] or [ft], description= Absolute resolution of the DTM in cm. Note that if AlgoOption.Ortho.DTM_RELATIVE_RESOLUTION is also set, whichever is set last determines the configuration.
DTM_RELATIVE_RESOLUTION = type=int, min=1, max=5, units=Relative units, description= Resolution of the DTM, computed multiplying the project GSD with this factor. Note that if AlgoOption.Ortho.DTM_ABSOLUTE_RESOLUTION is also set, whichever is set last determines the configuration.
MOSAIC_ABSOLUTE_RESOLUTION = type=float, min=0.0, units=Length units of the coordinate system in use [m] or [ft], description= Absolute resolution of the orthomosaic in cm. Note that if AlgoOption.Ortho.MOSAIC_RELATIVE_RESOLUTION is also set, whichever is set last determines the configuration.
MOSAIC_NO_TRANSPARENCY = type=bool, description=If True, do not make no-value areas of the orthomosaic transparent.
MOSAIC_RELATIVE_RESOLUTION = type=int, min=1, units=Relative units, description= Resolution of the orthomosaic, computed multiplying the project GSD with this factor. Note that if AlgoOption.Ortho.MOSAIC_ABSOLUTE_RESOLUTION is also set, whichever is set last determines the configuration.
class pix4dengine.options.ExportOption

Options controlling which output files are produced.

For further information, please refer to the support pages.

class CameraCalibration

Camera calibration exports.

UNDISTORTED_IMAGES = type=bool, description= Store an undistorted copy of each original image using the optimized distortion parameters of the selected camera model.
class Densification

Densification exports.

Each export format stores the x, y, z coordinates and color information of each point in the point cloud.

PCL_LAS = type=bool, description=Export the point cloud as a LiDAR las file.
PCL_LAZ = type=bool, description= Export the point cloud as a compressed LiDAR las file.
PCL_PLY = type=bool, description=Export the point cloud as a ply file.
PCL_XYZ = type=bool, description=Export the point cloud as an ASCII xyz file.
class Index

Index exports.

INDEX_TIFF = type=bool, description=Export index maps to GeoTIFF format. Output can be tiled., allowed_values=(True)
INDEX_TIFF_MERGED = type=bool, description=Merge the tiled index GeoTIFFs into a single file., allowed_values=(True)
POINT_SHP = type=bool, description=Export the index map as a grid shapefile.
POLYGON_SHP = type=bool, description=Export the index map as a polygon shapefile.
REFLECTANCE = type=bool, description= Generate and export a reflectance map in GeoTIFF format. Output can be tiled.
REFLECTANCE_MERGED = type=bool, description=Merge the tiled reflectance GeoTIFFs into a single file.
class Mesh

Mesh exports.

Each export format stores the x, y, z coordinates of each vertex in the 3D mesh. The coordinates are not geo-referenced. All formats except dxf also store the texture data.

DXF = type=bool, description=Export mesh as a dxf file (no texture information).
FBX = type=bool, description=Export mesh as an fbx file (embedded texture data).
LOD_OSGB = type=bool, description=Export the LOD mesh in OSGB format.
LOD_SLPK = type=bool, description= Export the LOD mesh in SLPK format. Note: the SLPK LOD mesh is geo-referenced only if the project is; if not geo-referenced, it may not load correctly into 3rd party tools.
OBJ = type=bool, description=Export mesh as an obj file (jpg and mtl texture files).
PLY = type=bool, description=Export mesh as a ply file (jpg texture file).
class Ortho

Orthomosaic exports.

CONTOUR_DXF = type=bool, description= Export contour lines in dxf format. If the DTM is being generated, contour lines represent the DTM. Otherwise, contour lines represent the DSM.
CONTOUR_PDF = type=bool, description= Export contour lines in pdf format. If the DTM is being generated, contour lines represent the DTM. Otherwise, contour lines represent the DSM.
CONTOUR_SHP = type=bool, description= Export contour lines as a shapefile. If the DTM is being generated, contour lines represent the DTM. Otherwise, contour lines represent the DSM.
DSM_GRID_LAS = type=bool, description= Export the grid DSM as a LiDAR las file, with position and color information of each grid point.
DSM_GRID_LAZ = type=bool, description= Export the grid DSM as a compressed LiDAR las file, with position and color information of each grid point.
DSM_TIFF = type=bool, description=Export the DSM as a GeoTIFF file. Output can be tiled.
DSM_TIFF_MERGED = type=bool, description=Merge the tiled DSM GeoTIFFs into a single file.
DSM_XYZ = type=bool, description=Export the DSM as an ASCII xyz file.
DTM_TIFF = type=bool, description=Export the DTM as a GeoTIFF file. Output can be tiled. DSM_TIFF option must also be set.
DTM_TIFF_MERGED = type=bool, description=Merge the tiled DTM GeoTIFFs into a single file. DSM_TIFF_MERGED option must also be set.
MOSAIC_KML = type=bool, description=Export the orthomosaic to Google Earth kml files. Requires a GeoTIFF DSM.
MOSAIC_TIFF = type=bool, description=Export the orthomosaic as a GeoTIFF file. Output can be tiled. Requires a GeoTIFF DSM.
MOSAIC_TIFF_MERGED = type=bool, description=Merge the tiled orthomosaic GeoTIFFs into a single file. Requires a GeoTIFF DSM.
class pix4dengine.options.StandardExport

Options to access output files which are always produced during processing.

For further information, please refer to the support pages.

class CameraCalibration

Calibration parameters.

BINGO = type=bool, description= Image coordinates for GCPs, check points and some of the automatic tie points. For more information please refer to https://support.pix4d.com/hc/en-us/articles/203590305. , allowed_values=(True)
CAMERA_POS = type=bool, description=Calibrated camera positions file., allowed_values=(True)
CAMERA_SSK = type=bool, description=Camera parameters data, for use with ImageStation or other compatible software., allowed_values=(True)
PHOTO_SSK = type=bool, description=Images information, for use with ImageStation or other compatible software., allowed_values=(True)
class Report

Quality reports.

HTML = type=bool, description=Quality report in html format., allowed_values=(True)
XML = type=bool, description=Quality report in xml format., allowed_values=(True)

Finding output files

Interface to the output files of a project.

pix4dengine.exports.get_output(project, export_option, index_name='*')

Get a list of available output files from an export option.

Parameters:
  • project (Project) – a Project object.
  • export_option (Enum) – the target export option, whose output we want to collect.
  • index_name (str) – the name of the index for the output we want to collect. By default, all indices are collected. If export_option does not refer to an index output, setting this argument has no effect.
Return type:

Union[Sequence[str], Mapping[Enum, Sequence[str]]]

Returns:

a list of paths to the associated output files.

Raises:
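
Example

A sketch collecting the LAS point cloud files of a processed project; project is a Project instance:

from pix4dengine.exports import get_output
from pix4dengine.options import ExportOption

las_files = get_output(project, ExportOption.Densification.PCL_LAS)
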
pix4dengine.exports.get_report(project)

Return a Report object from a Project instance.

Return type:Report
class pix4dengine.exports.Report(report_path)

Parser for the Pix4Dengine quality report.

Parameters:report_path (str) – path to the XML report produced by Pix4Dengine
absolute_geolocation_rms()

Return the absolute geolocation RMS.

Return type:GeolocationRMS
Returns:The geolocation RMS in units of the output coordinate system (meter, foot, or U.S. Survey Foot). The RMS is calculated from the initial and computed image positions.
Raises:ReportParsingException – the report does not contain the absolute geolocation RMS.
calibration_quality_status()

Returns the calibration quality status.

Return type:CalibrationQualityStatus
Returns:A CalibrationQualityStatus object.
camera_opt_rel_diff()

Relative difference (%) between initial and optimized internal camera parameters.

Return type:float
gsd_in_cm()

Get the Ground Sampling Distance in cm.

Return type:float
gsd_in_inch()

Get the Ground Sampling Distance in inch.

Return type:float
image_dataset_info()

Return the image dataset information.

Return type:ImageDatasetInfo
Returns:An ImageDatasetInfo object.
keypoints_median_per_image()

Get the median number of keypoints per image.

Return type:int
matches_median_per_image()

Get the median number of matches per image.

Return type:int
number_of_3d_gcps()

Returns the number of 3D GCPs used to georeference the project.

Return type:int
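
Example

A sketch reading a few quality metrics after processing; project is a processed Project instance:

from pix4dengine.exports import get_report

report = get_report(project)
print(report.gsd_in_cm())
print(report.keypoints_median_per_image())
print(report.calibration_quality_status())
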
exception pix4dengine.exports.ReportParsingException

Error while parsing the report from Pix4Dengine.

exception pix4dengine.exports.OutputFilesNotFound

No export file is found where expected.

exception pix4dengine.exports.UnknownExportLocation

The expected path for an export is unknown.

exception pix4dengine.exports.UnsetExportException

The engine was configured not to produce the requested output file.

Multispectral index constants

Radiometry index constants for use in the code.

class pix4dengine.constants.index.Index

Radiometry index.

Create new instance of Index(name, formula, enabled)

enabled

Alias for field number 2

formula

Alias for field number 1

name

Alias for field number 0

class pix4dengine.constants.index.Indices

List of predefined indices.

BLUE = Index(name='blue', formula='blue', enabled=True)
GRAYSCALE = Index(name='grayscale', formula='0.2126 * red + 0.7152 * green + 0.0722 * blue', enabled=True)
GREEN = Index(name='green', formula='green', enabled=True)
NDVI = Index(name='ndvi', formula='(nir - red) / (nir + red)', enabled=True)
NIR = Index(name='nir', formula='nir', enabled=True)
RED = Index(name='red', formula='red', enabled=True)
RED_EDGE = Index(name='red_edge', formula='red_edge', enabled=True)

Quality report

Data structures used in the quality report parser.

class pix4dengine.utils.report.CalibrationQualityStatus

Container for the calibration quality status report.

Create new instance of CalibrationQualityStatus(images, dataset, camera_optimization, matching, georeferencing)

camera_optimization

Alias for field number 2

dataset

Alias for field number 1

georeferencing

Alias for field number 4

images

Alias for field number 0

matching

Alias for field number 3

class pix4dengine.utils.report.GeolocationRMS

Container for geolocation RMS.

Create new instance of GeolocationRMS(x, y, z)

x

Alias for field number 0

y

Alias for field number 1

z

Alias for field number 2

class pix4dengine.utils.report.ImageDatasetInfo

Container for the image dataset information.

Create new instance of ImageDatasetInfo(total, enabled, calibrated, calibrated_enabled, calibrated_percentage, disabled)

calibrated

Alias for field number 2

calibrated_enabled

Alias for field number 3

calibrated_percentage

Alias for field number 4

disabled

Alias for field number 5

enabled

Alias for field number 1

total

Alias for field number 0

class pix4dengine.utils.report.Quality

Quality enumeration for CalibrationQualityStatus.

Short status description for a quality item.

FAILURE = 'failure'
SUCCESS = 'success'
WARNING = 'warning'

Utilities

Task system

Module for tasks and task runners.

class pix4dengine.task.Task(name, work, *, on_start=<function _noop>, on_success=<function _noop>, on_error=<function _noop>)

Class representing a unit of work with associated callbacks.

Initialize a task with a name, work, and optional callbacks.

Parameters:
  • name – the name of the task. Must be unique in the context in which it is being executed.
  • work – a callable object that does the work of the task.
  • on_start – optional callback to execute before work is started.
  • on_success – optional callback to execute after work is finished without errors.
  • on_error – optional callback to call in the event of an error.
name

The name identifying a task.

run(**kwargs)

Run the task.

Runs the on_start callbacks, then the task’s work, and finally the on_success callbacks.

set_callbacks(on_start=None, on_success=None, on_error=None)

Set the callbacks of this task.

class pix4dengine.task.TaskRunner

Class for storing and running a set of tasks with dependencies.

Initialize the TaskRunner as empty.

add_task(task, after=None, before=None)

Add a task to this runner.

Add a task to this runner, optionally specifying a relative ordering with respect to some tasks.

Parameters:
  • task – the task to be added
  • after – (optional) task should run after these tasks. after accepts either a single task name or a sequence. The tasks in after must have already been added to the TaskRunner.
  • before – (optional) task should run before these tasks. before accepts either a single task name or a sequence. The tasks in before must have already been added to the TaskRunner.

Note

The before and after parameters define an interval. In other words, if requesting to run task B after task A and before C, B will run between A and C, but other tasks may run in between. For example, the final sequence of tasks could be: A, D, E, B, F, C.

Raises:LookupError – after or before is not the name of a task registered in the runner.
get_task(name)

Get the task with a given name or raise LookupError.

run()

Run all the tasks.

tasks

Iterable of tasks contained by the runner.

pix4dengine.task.run_tasks(tasks, on_start=<function _noop>, on_success=<function _noop>, on_error=<function _noop>)

Run a sequence of tasks with optional callbacks.

Parameters:
  • tasks – a sequence of tasks to be executed.
  • on_start – an optional callback to be invoked before the ensemble of tasks is run.
  • on_success – an optional callback to be invoked if the ensemble of tasks completes successfully.
  • on_error – an optional callback to be invoked if any of the tasks produces an error.
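
A minimal sketch of the task system, assuming only the API documented above. The work functions and the error callback are placeholders, and the exact arguments passed to the callbacks are an assumption here.

from pix4dengine.task import Task, TaskRunner

def calibrate():
    print("calibrating")

def densify():
    print("densifying")

def report_error(*args, **kwargs):
    # Accepts anything, since the callback signature is assumed.
    print("task failed")

runner = TaskRunner()
runner.add_task(Task("calibrate", calibrate, on_error=report_error))
# "densify" must run after "calibrate"; other tasks may still be scheduled
# in between the two.
runner.add_task(Task("densify", densify), after="calibrate")
runner.run()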

GCP and mark definition

Data structures for working with GCP and marks.

class pix4dengine.utils.gcp.DefaultAccuracy

Default horizontal and vertical accuracy of 3D GCPs.

XY = 0.02
Z = 0.02
class pix4dengine.utils.gcp.GCP3D

3D Ground Control Point class.

To define a GCP, one must specify (lat, lon, alt), (x, y, z) or both. id and label must be unique within a project. xy_accuracy and z_accuracy are set to default values from DefaultAccuracy if omitted.

Example

GCP3D(label="gcp0", id=0, lat=3.14159, lon=9.51431, alt=42)

Parameters:
  • label – unique label of the GCP.
  • id – unique integer ID of the GCP.
  • x – x coordinate (map coordinates)
  • y – y coordinate (map coordinates)
  • z – z coordinate
  • lat – latitude
  • lon – longitude
  • alt – altitude
  • xy_accuracy – (x, y) accuracy
  • z_accuracy – z accuracy

Create new instance of GCP3D(label, id, x, y, z, lat, lon, alt, xy_accuracy, z_accuracy)

alt

Alias for field number 7

id

Alias for field number 1

label

Alias for field number 0

lat

Alias for field number 5

lon

Alias for field number 6

x

Alias for field number 2

xy_accuracy

Alias for field number 8

y

Alias for field number 3

z

Alias for field number 4

z_accuracy

Alias for field number 9

class pix4dengine.utils.gcp.Mark

Representation of a mark on an image.

Example

Mark(photo="/home/sdkuser/projects/example/images/DSC0001.JPG", x=300, y=200)

Parameters:
  • photo – absolute path to an image file
  • x – x position of the GCP in the image (in pixels)
  • y – y position of the GCP in the image (in pixels)
  • scale – [deprecated] parameter used to derive a weight given to the mark
  • gsd – ground sampling distance

Create new instance of Mark(photo, x, y, scale, gsd)

gsd

Alias for field number 4

photo

Alias for field number 0

scale

Alias for field number 3

x

Alias for field number 1

y

Alias for field number 2
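
A minimal sketch combining the two structures. Coordinates and the image path are illustrative, and attaching GCPs and marks to a project is done through the Project interface, which is not shown here.

from pix4dengine.utils.gcp import GCP3D, Mark

# xy_accuracy and z_accuracy fall back to DefaultAccuracy when omitted.
gcp = GCP3D(label="gcp0", id=0, lat=46.52, lon=6.56, alt=420.0)
mark = Mark(photo="/home/sdkuser/projects/example/images/DSC0001.JPG", x=1250, y=830)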

Processing area

Utilities for defining a processing area.

The processing area can be defined manually in the Project, or a default definition can be inferred from the positions of calibrated cameras.

pix4dengine.utils.processingarea.get_proc_area_definition(project, buffer_width=30.0)

Get a default processing area definition from a calibrated project.

The default processing area is defined as the convex hull covering the positions of the calibrated images. A buffer region is added around this polygon.

Parameters:
  • project (Project) – the project from which we want to extract a processing area definition. The project must be calibrated beforehand.
  • buffer_width (float) – the width of the buffer region around the camera positions, in meters.
Return type:

List[PointXY]

Returns:

list of points defining the default processing area horizontally.

Raises:
  • ProcAreaDefinitionException – the file containing the information on the calibrated images positions cannot be parsed.
  • OutputFilesNotFound – the file with the calibrated positions cannot be found. This usually means that the project has not yet been calibrated.
pix4dengine.utils.processingarea.set_default_proc_area(project, buffer_width=30.0)

Set a default processing area in a project.

The default processing area is defined as the convex hull covering the positions of the calibrated images. A buffer region is added around this polygon.

Parameters:
  • project (Project) – the project for which the default processing area should be defined. The project must be calibrated beforehand.
  • buffer_width (float) – the width of the buffer region around the input points in meters.
Raises:
  • ProcAreaDefinitionException – the file containing the information on the calibrated images positions cannot be parsed.
  • OutputFilesNotFound – the file with the calibrated positions cannot be found. This usually means that the project has not yet been calibrated.
Return type:

None
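
A minimal sketch, assuming a project that has already been calibrated; the project name and work directory are illustrative.

from pix4dengine import open_project
from pix4dengine.utils.processingarea import (
    get_proc_area_definition,
    set_default_proc_area,
)

project = open_project("example", work_dir="/data/projects")

# Inspect the polygon that would be used (a list of PointXY), then apply it.
polygon = get_proc_area_definition(project, buffer_width=50.0)
print(len(polygon), "vertices")
set_default_proc_area(project, buffer_width=50.0)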

Coordinate system constants

Constants used for the coordinate systems.

class pix4dengine.constants.coordsys.CoordinateSystemType

Coordinate system types.

ARBITRARY = '^LOCALCS\\["(.*?)"'
GEOGRAPHIC = '^GEOGCS\\["(.*?)"'
PROJECTED = '^PROJCS\\["(.*?)"'
class pix4dengine.constants.coordsys.Geoid

Geoids available for setting the vertical coordinate system.

GEOID2008 = 'EGM 2008 Geoid'
GEOID84 = 'EGM 84 Geoid'
GEOID96 = 'EGM 96 Geoid'
class pix4dengine.constants.coordsys.LengthUnit

Units of length used by different coordinate systems.

FOOT = 'UNIT\\["foot",0.3048,AUTHORITY\\["EPSG","9002"\\]\\]'
METER = 'UNIT\\["metre",1,AUTHORITY\\["EPSG","9001"\\]\\]'
US_SURVEY_FOOT = 'UNIT\\[(("US survey foot")|("Foot_US")),0.3048006096012192\\d*(,AUTHORITY\\["EPSG","9003"\\])*\\]'
class pix4dengine.constants.coordsys.ProjectCS

Enumeration of coordinate systems used in a project.

GCPS = 'Coordinate system used for the GCPs.'
IMAGES = 'Coordinate system used in the images.'
OUTPUT = 'Coordinate system used for the output files.'

Coordinate systems

Module with tools for working with different coordinate systems.

pix4dengine.coordsys.list_cs(search=None)

Return a list of all available coordinate systems.

Parameters:search (Optional[str]) – if search is defined, only the coordinate systems containing such a string will be returned. The search is not case-sensitive.
Return type:List[str]
class pix4dengine.coordsys.CoordSys(project)

Class to access and modify the coordinate systems of a project.

Parameters:project – an instance of Project.
get_cs_name(project_cs)

Get the name of one of the project’s horizontal coordinate systems.

Parameters:project_cs (ProjectCS) – an instance of ProjectCS defining which horizontal coordinate system to return.
Return type:str
get_cs_wkt(project_cs)

Get the Well-Known Text (WKT) string of one of the project’s horizontal coordinate systems.

Parameters:project_cs (ProjectCS) – an instance of ProjectCS defining which horizontal coordinate system to return.
Return type:str
get_length_unit(project_cs)

Return the unit used to measure the specified horizontal coordinate system.

Parameters:

project_cs (ProjectCS) – an instance of ProjectCS specifying which coordinate system to query.

Return type:

LengthUnit

Returns:

an instance of pix4dengine.constants.coordsys.LengthUnit.

get_vert_cs(project_cs)

Get one of the project’s vertical coordinate systems.

Parameters:project_cs (ProjectCS) – an instance of pix4dengine.constants.coordsys.ProjectCS defining which vertical coordinate system to return.
Return type:Union[float, Geoid, None]
Returns:Three different object types can be returned. If a float is returned, it is the height in meters above the WGS 84 ellipsoid. If a pix4dengine.constants.coordsys.Geoid instance is returned, it is the geoid in use. If None is returned, the coordinate system is arbitrary.
identify(project_cs)

Identify the type of the horizontal coordinate system.

Parameters:project_cs (ProjectCS) – an instance of ProjectCS defining which horizontal coordinate system to query.
Return type:CoordinateSystemType
Returns:an instance of pix4dengine.constants.coordsys.CoordinateSystemType.
Raises:ValueError if the coordinate system type cannot be identified
set_cs_from_name(project_cs, cs_name)

Set one of the project’s horizontal coordinate systems.

Parameters:
  • project_cs (ProjectCS) – an instance of ProjectCS defining which horizontal coordinate system to set.
  • cs_name (str) – name of the coordinate system to set.
Return type:

None

set_cs_from_wkt(project_cs, wkt)

Set one of the project’s horizontal coordinate systems.

Parameters:
  • project_cs (ProjectCS) – an instance of ProjectCS defining which horizontal coordinate system to set.
  • wkt (str) – Well-Known Text string
Raises:

NotImplementedError if project_cs is ProjectCS.IMAGES

Return type:

None

set_vert_cs(project_cs, coord_sys=None)

Set one of the project’s vertical coordinate systems.

Parameters:
  • project_cs (ProjectCS) – an instance of ProjectCS defining which vertical coordinate system to set.
  • coord_sys (Union[float, Geoid, None]) – the vertical coordinate system to set: a float height in meters above the WGS 84 ellipsoid, a Geoid instance, or None for an arbitrary system.
Raises:

NotImplementedError if project_cs is ProjectCS.IMAGES

pix4dengine.coordsys.cs_name_to_wkt(cs_name)

Return a Well-Known Text (WKT) from a coordinate system name.

Parameters:cs_name (str) – coordinate system name (e.g., "JGD2011 / Japan Plane Rectangular CS VIII").
Return type:str
Returns:WKT (e.g., 'PROJCS["JGD2011 / Japan Plane Rectangular CS VIII",GEOGCS["JGD2011"...').
Raises:ValueError when the coordinate system cannot be found.
pix4dengine.coordsys.wkt_to_cs_name(wkt)

Return the coordinate system name from a Well-Known Text (WKT).

Parameters:wkt (str) – WKT (e.g., 'PROJCS["JGD2011 / Japan Plane Rectangular CS VIII",GEOGCS["JGD2011"...').
Return type:Optional[str]
Returns:the coordinate system name (e.g., "JGD2011 / Japan Plane Rectangular CS VIII"), None if not found.
pix4dengine.coordsys.unit_from_wkt(wkt)

Return the unit of length for the given WKT.

Parameters:wkt (str) – a Well-Known Text.
Return type:Optional[LengthUnit]
Returns:One of the units in pix4dengine.constants.coordsys.LengthUnit or None if the unit cannot be determined.
pix4dengine.coordsys.unit_from_cs_name(cs_name)

Return the unit of length for the given coordinate system name.

Parameters:cs_name (str) – a coordinate system name.
Return type:Optional[LengthUnit]
Returns:One of the units in pix4dengine.constants.coordsys.LengthUnit or None if the unit cannot be determined.
Raises:ValueError when the coordinate system is unknown.
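
A minimal sketch of querying and changing a project’s coordinate systems, assuming an opened project; the project name, work directory and the chosen coordinate system are illustrative.

from pix4dengine import open_project
from pix4dengine.constants.coordsys import Geoid, ProjectCS
from pix4dengine.coordsys import CoordSys, cs_name_to_wkt, list_cs

project = open_project("example", work_dir="/data/projects")
cs = CoordSys(project)

# Inspect the current output coordinate system.
print(cs.get_cs_name(ProjectCS.OUTPUT), cs.identify(ProjectCS.OUTPUT))

# Look up a coordinate system by (case-insensitive) substring and apply it.
matches = list_cs("Japan Plane Rectangular CS VIII")
if matches:
    cs.set_cs_from_wkt(ProjectCS.OUTPUT, cs_name_to_wkt(matches[0]))

# Use the EGM 96 geoid as the vertical coordinate system of the output.
cs.set_vert_cs(ProjectCS.OUTPUT, Geoid.GEOID96)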

Project tools

Utilities for use with Project.

class pix4dengine.utils.project.MinMaxRange

Class to represent a range.

This can be used, e.g., for defining the vertical range of a project processing area. See the processing area utilities in pix4dengine.utils.processingarea.

Example

MinMaxRange(min=100, max=150.0)

Parameters:
  • min – minimum height (meters)
  • max – maximum height (meters)

Create new instance of MinMaxRange(min, max)

max

Alias for field number 1

min

Alias for field number 0

class pix4dengine.utils.project.PointXY

Class to represent a point in 2D space.

This can be used, e.g., for defining a project processing area horizontally. See the processing area utilities in pix4dengine.utils.processingarea.

Example

PointXY(x=153.2, y=201.24)

Parameters:
  • x – x position (map coordinates)
  • y – y position (map coordinates)

Create new instance of PointXY(x, y)

x

Alias for field number 0

y

Alias for field number 1

class pix4dengine.utils.project.Version

Three-digit version information.

Create new instance of Version(major, minor, micro)

major

Alias for field number 0

micro

Alias for field number 2

minor

Alias for field number 1

Custom cameras

Module with functions and structures to manage custom cameras.

pix4dengine.camera.user_camera_database(location=None)

Managed context with user database.

This is a context manager, designed to be used in with statements. It provides a safe way to change the camera database for the user, and ensures that the user database is restored to its initial state when exiting the with statement, even if an exception was raised.

Parameters:location (Union[str, Path, None]) – path to the XML file with the user camera database. If None, a new temporary database will be created.
Return type:Generator[Path, None, None]
Returns:Path to the XML file where the user database is located.
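
A minimal sketch of the managed context, assuming an active license session (the camera database functions raise NotLoggedIn otherwise); the image path is illustrative, and add_camera_to_userdb() and get_camera_config_from_image() are documented below.

from pix4dengine.camera import (
    add_camera_to_userdb,
    get_camera_config_from_image,
    user_camera_database,
)

with user_camera_database() as db_path:
    # No location was given, so a new temporary database is created at db_path.
    config, identifier = get_camera_config_from_image("/data/images/IMG_0001.JPG")
    added_as = add_camera_to_userdb(config, camera_identifier=identifier)
    print("camera stored in", db_path, "as", added_as)
# On exit, the previous user database is restored, even if an exception was raised.
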
pix4dengine.camera.add_camera_to_userdb(camera_config: pix4d::calibration::cameracreation::CameraConfig, camera_identifier: Optional[str]=None) → str
Add a new camera to the user database.
Parameters:
  • camera_config (CameraConfig) – object with new camera.
  • camera_identifier (str) – (optional) camera identifier to register the camera under; if not set, the best possible identifier is derived automatically. It is safer to use the exact identifier, which can be retrieved from an image with camera_identifier_from_image().
Return type:

str

Returns:

camera identifier under which the camera was added to the database.

Raises:
  • RuntimeError – if any camera_config parameter is incorrect, or the user database file cannot be saved.
  • NotLoggedIn if the user is not logged in.
pix4dengine.camera.remove_cameras_from_userdb(camera_identifier: str=None) → List[str]

Removes cameras from the user database.

All cameras that are identified by camera_identifier will be removed.

Parameters:

camera_identifier (str) – camera identifier to be removed.

Return type:

List[str]

Returns:

list of camera names or mappings which were removed.

Raises:
  • RuntimeError – if the user camera database cannot be modified.
  • NotLoggedIn if the user is not logged in.
pix4dengine.camera.clean_userdb() → None

Removes all cameras from the user database.

pix4dengine.camera.camera_identifier_from_image(image_path: str) → str

Retrieve a camera identifier from an image file.

A new camera has to be associated with this camera identifier when it is added to the database.

Parameters:image_path (str) – path to an image.
Return type:str
Returns:camera identifier
Raises:RuntimeError – if the file cannot be accessed or parsed.
pix4dengine.camera.get_camera_config_from_image(image_path: str) → Tuple[pix4d::calibration::cameracreation::CameraConfig, str]

Get camera configuration for an image.

Parameters:

image_path (str) – path to an image.

Return type:

(CameraConfig, str)

Returns:

a pair of camera configuration and the camera identifier.

Raises:
  • RuntimeError – if the file cannot be accessed or parsed.
  • NotLoggedIn if the user is not logged in.
pix4dengine.camera.get_camera_config_from_name(camera_identifier: str) → pix4d::calibration::cameracreation::CameraConfig

Get camera configuration from a database by camera identifier.

Parameters:

camera_identifier (str) – Expected format is CameraModel_FocalLength_WidthxHeight or CameraModel_LensModel_FocalLength_WidthxHeight.

Return type:

(CameraConfig)

Returns:

camera object.

Raises:
  • RuntimeError – if the camera was not found.
  • NotLoggedIn if the user is not logged in.
pix4dengine.camera.change_user_database_location(path: str) → None

Set a new path for the user camera database.

Parameters:path – user camera database file location.
Raises:NotLoggedIn if the user is not logged in.
pix4dengine.camera.default_user_database_location() → str

Returns the default path to the user database.

Return type:(str)
Returns:path to the default location for user database.
Raises:NotLoggedIn if the user is not logged in.
pix4dengine.camera.user_database_location() → str

Returns a path to the user database.

Return type:(str)
Returns:path to the user database.
Raises:NotLoggedIn if the user is not logged in.
class pix4dengine.camera.CameraConfig(self: pyengine.camera.CameraConfig) → None
bands

Camera spectral bands.

image_height

Camera image height in pixels.

image_width

Camera image width in pixels.

lens_model

Lens model.

line_readout_time

Rolling shutter readout time for one line in microseconds.

maker_name

Camera maker name.

model_name

Camera model name.

pixel_size_in_um

Camera image pixel size in micrometers.

pixel_values

Range of values for a specific DataType, as <DataType, min, max>, where DataType can be one of: DUnknown, DRGB, DRGBA, DByte, DInt16, DInt32, DFloat32, DFloat64, DUInt16, DUInt32, DUInt12.

sensor

Sensor parameters.

serial_number

Camera serial number.

shutter_type

Camera shutter type. Can be one of ‘Global’, ‘RollingLinear’, ‘RollingIMU’, ‘RollingGeneral’.

velocity_number

Number of linear/angular velocities for general rolling shutter. [default= 1]

vignetting

Configure vignetting correction.

class pix4dengine.camera.VignettingPoly(self: pyengine.camera.VignettingPoly) → None
f_number

Set vignetting for this f-number.

poly

Polynomial coefficients (1D radial or 2D) or CMOS model values for Sequoia.

shape

Array dimensions. Size x, size y.

class pix4dengine.camera.PerspectiveSensorConfig(self: pyengine.camera.PerspectiveSensorConfig) → None
distortion_count

Number of nonzero distortion parameters. [default= 6]

focal_length_in_mm

Focal length in millimeters.

principal_point_x_in_mm

Principal point x-coordinate in millimeters.

principal_point_y_in_mm

Principal point y-coordinate in millimeters.

radial_K1

First radial distortion parameter.

radial_K2

Second radial distortion parameter.

radial_K3

Third radial distortion parameter.

tangential_T1

First tangential distortion parameter.

tangential_T2

Second tangential distortion parameter.

class pix4dengine.camera.BandConfig(self: pyengine.camera.BandConfig, name: str, central_wavelength: float, width: float, weight: float) → None
central_wavelength

Central wavelength in microns.

name

Spectral band name.

weight

Band weight for grayscale mapping.

width

Width of the band in microns.
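
The classes above describe a camera for use with add_camera_to_userdb(). The following sketch assembles a configuration by hand; every value is illustrative, and it is assumed that the attributes listed above can be set on default-constructed objects.

from pix4dengine.camera import BandConfig, CameraConfig, PerspectiveSensorConfig

sensor = PerspectiveSensorConfig()
sensor.focal_length_in_mm = 8.0
sensor.principal_point_x_in_mm = 3.1
sensor.principal_point_y_in_mm = 2.3

camera = CameraConfig()
camera.maker_name = "ExampleMaker"
camera.model_name = "ExampleModel"
camera.image_width = 4000
camera.image_height = 3000
camera.pixel_size_in_um = 1.55
camera.shutter_type = "Global"
camera.sensor = sensor
# BandConfig(name, central_wavelength, width, weight); wavelengths in microns.
camera.bands = [BandConfig("Red", 0.660, 0.040, 1.0),
                BandConfig("NIR", 0.790, 0.040, 0.0)]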

Exceptions

Custom error classes raised by Pix4Dengine.

exception pix4dengine.utils.errors.FailedValidation(message=None)

Validation of the quality report failed; processing stopped.

exception pix4dengine.utils.errors.LoginError(error_msg='Default error message', pix4d_error_code='not found', stdout=None)

Failed to authorize the use of Pix4Dengine via the Pix4D licensing system.

exception pix4dengine.utils.errors.LogoutError(error_msg='Default error message', pix4d_error_code='not found', stdout=None)

Failed to log out of the Pix4D licensing system.

exception pix4dengine.utils.errors.ProcAreaDefinitionException(message=None)

Error while trying to automatically define a processing area.

exception pix4dengine.utils.errors.ProjectCreationError(error_msg='Default error message', pix4d_error_code='not found', stdout=None)

Creation of the project failed.

exception pix4dengine.utils.errors.ProjectOpeningError(message=None)

Opening of the project failed.

exception pix4dengine.utils.errors.ProjectProcessingError(error_msg='Default error message', pix4d_error_code='not found', stdout=None)

Processing of the project failed.

exception pix4dengine.utils.errors.ReportingException(message=None)

Error while parsing STDOUT for reporting.