API Reference¶
This reference contains the code documentation for the modules, classes and functions of the pix4dvortex public API. It is assumed that the user is familiar with its higher-level concepts. Some of these concepts are defined in the Concepts: Data model, algorithms, utilities, exporters and Glossary sections.
Authentication¶
Auth session
- pix4dvortex.session.is_logged_in() bool ¶
Check if authorization for a session has been granted. This check is independent of the method of session acquisition.
- Returns
True if authorized, False otherwise.
- pix4dvortex.session.login(*, client_id: str, client_secret: str, license_key: Optional[str] = None) None ¶
Request authorization for a PIX4Dengine session using the Pix4D license server.
- Parameters
client_id – oauth2 client ID.
client_secret – oauth2 client secret.
license_key – (optional) PIX4Dengine license key to use for the authorization request
- Raises
RuntimeError – on access failure.
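A minimal authentication sketch based on the two calls above. The credential strings are placeholders, not real values:

```python
import pix4dvortex

# Request a PIX4Dengine session only if one is not already active.
if not pix4dvortex.session.is_logged_in():
    try:
        pix4dvortex.session.login(
            client_id="your-client-id",          # placeholder
            client_secret="your-client-secret",  # placeholder
            # license_key="...",  # optional PIX4Dengine license key
        )
    except RuntimeError as err:
        # login() raises RuntimeError on access failure.
        print(f"Authorization failed: {err}")
```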
Data Model¶
OPF data model¶
For more detailed documentation of each data type, refer to the OPF documentation for the main data model types.
PIX4Dengine SDK OPF compatible data model classes.
Module containing a set of data model classes with 1:1 correspondence to OPF data types. In most cases, the data type names, attributes and layout are identical to their OPF counterparts. In some cases there may be small divergences for user convenience. Users are advised to familiarize themselves with the core OPF data model and concepts, and to refer to the documentation of each specific type as needed.
- class pix4dvortex.dmodel.BandInformation¶
OPF band information.
- property central_wavelength_nm¶
- property fwhm_nm¶
- property name¶
- property weight¶
- class pix4dvortex.dmodel.BaseToCanonical(self: pix4dvortex.dmodel.BaseToCanonical, *, shift: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], scale: Annotated[list[float], FixedSize(3)] = [1.0, 1.0, 1.0], swap_xy: bool = False)¶
OPF base_to_canonical.
- property scale¶
An internal vector used to scale coordinates to make the SRS isometric
- property shift¶
An internal vector used to shift coordinates to center the SRS to the scene
- property swap_xy¶
An internal boolean used to swap coordinates to get a right-handed SRS
- class pix4dvortex.dmodel.CRS(self: pix4dvortex.dmodel.CRS, *, definition: str = '', geoid_height: Optional[float] = None)¶
OPF CRS.
- property definition¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.CRS ¶
- property geoid_height¶
The geoid height to be used when the vertical SRS is not supported or cannot be retrieved from WKT
- to_dict(self: pix4dvortex.dmodel.CRS) dict ¶
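Most dmodel classes follow the same to_dict/from_dict serialization pattern, sketched here with CRS. Whether an authority:code string such as "EPSG:4326" is a valid definition is an assumption; a WKT string may be required instead:

```python
import pix4dvortex

# Construct a CRS, serialize it to an OPF-style dict, and rebuild it.
crs = pix4dvortex.dmodel.CRS(definition="EPSG:4326", geoid_height=None)
crs_dict = crs.to_dict()
restored = pix4dvortex.dmodel.CRS.from_dict(crs_dict)
print(restored.definition)
```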
- class pix4dvortex.dmodel.CalibratedCamera¶
OPF calibrated_camera.
- static from_dict(arg0: dict) pix4dvortex.dmodel.CalibratedCamera ¶
- property id¶
- property orientation_deg¶
- property position¶
- property rolling_shutter¶
- property sensor_id¶
- to_dict(self: pix4dvortex.dmodel.CalibratedCamera) dict ¶
- class pix4dvortex.dmodel.CalibratedCameras¶
OPF calibrated_cameras.
- property cameras¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.CalibratedCameras ¶
- property sensors¶
- to_dict(self: pix4dvortex.dmodel.CalibratedCameras) dict ¶
- class pix4dvortex.dmodel.CalibratedControlPoint(self: pix4dvortex.dmodel.CalibratedControlPoint, id: str = '', coordinates: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0])¶
OPF calibrated control point.
- property coordinates¶
The known measured (prior) 3D-position of a point in the scene.
- static from_dict(arg0: dict) pix4dvortex.dmodel.CalibratedControlPoint ¶
- property id¶
The ID of the control point.
- to_dict(self: pix4dvortex.dmodel.CalibratedControlPoint) dict ¶
- class pix4dvortex.dmodel.CalibratedControlPoints(self: pix4dvortex.dmodel.CalibratedControlPoints, *, control_points: list[pix4dvortex.dmodel.CalibratedControlPoint] = [])¶
OPF calibrated control points.
- property control_points¶
Deprecated. Use the points attribute instead.
- static from_dict(arg0: dict) pix4dvortex.dmodel.CalibratedControlPoints ¶
- property points¶
Collection of CalibratedControlPoint objects.
- to_dict(self: pix4dvortex.dmodel.CalibratedControlPoints) dict ¶
- class pix4dvortex.dmodel.CalibratedITP(self: pix4dvortex.dmodel.CalibratedITP, *, id: str, coordinates: Annotated[list[float], FixedSize(3)], calibrated_marks: list[pix4dvortex.dmodel.MarkWithSegments])¶
ITP OPF extension calibrated ITP.
- property calibrated_marks¶
- property coordinates¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.CalibratedITP ¶
- property id¶
- to_dict(self: pix4dvortex.dmodel.CalibratedITP) dict ¶
- class pix4dvortex.dmodel.CalibratedIntersectionTiePoints(self: pix4dvortex.dmodel.CalibratedIntersectionTiePoints, *, points: list[pix4dvortex.dmodel.CalibratedITP])¶
ITP OPF extension calibrated ITPs.
- static from_dict(arg0: dict) pix4dvortex.dmodel.CalibratedIntersectionTiePoints ¶
- hash(self: pix4dvortex.dmodel.CalibratedIntersectionTiePoints) int ¶
- opf_type = 'ext_pix4d_input_intersection_tie_points'¶
- property points¶
- to_dict(self: pix4dvortex.dmodel.CalibratedIntersectionTiePoints) dict ¶
- class pix4dvortex.dmodel.CalibratedRigRelatives¶
OPF calibrated_rig_relatives.
- property rotation_angles_deg¶
- property translation¶
- class pix4dvortex.dmodel.CalibratedSensor¶
OPF calibrated_sensor.
- static from_dict(arg0: dict) pix4dvortex.dmodel.CalibratedSensor ¶
- property id¶
- property internals¶
- property rig_relatives¶
- to_dict(self: pix4dvortex.dmodel.CalibratedSensor) dict ¶
- class pix4dvortex.dmodel.Calibration(self: pix4dvortex.dmodel.Calibration, *, calibrated_cameras: pix4dvortex.dmodel.CalibratedCameras, sparse_pcl: Optional[pix4dvortex.dmodel.GLTFPointCloud] = None, gps_bias: Optional[pix4dvortex.dmodel.GPSBias] = None, calibrated_control_points: Optional[pix4dvortex.dmodel.CalibratedControlPoints] = None, calibrated_itps: Optional[pix4dvortex.dmodel.CalibratedIntersectionTiePoints] = None, features: Optional[pix4dvortex.dmodel.Features] = None, matches: Optional[pix4dvortex.dmodel.Matches] = None, original_matches: Optional[pix4dvortex.dmodel.OriginalMatches] = None)¶
OPF calibration.
- property calibrated_cameras¶
- property calibrated_control_points¶
- property calibrated_itps¶
- property features¶
- property gps_bias¶
- hash(self: pix4dvortex.dmodel.Calibration) int ¶
- property matches¶
- opf_type = 'calibration'¶
- property original_matches¶
- property sparse_pcl¶
- class pix4dvortex.dmodel.Camera¶
OPF camera.
- static from_dict(arg0: dict) pix4dvortex.dmodel.Camera ¶
- property id¶
- property image_orientation¶
- property model_source¶
- property pixel_range¶
- property pixel_type¶
- property sensor_id¶
- to_dict(self: pix4dvortex.dmodel.Camera) dict ¶
- class pix4dvortex.dmodel.CameraList¶
OPF camera_list.
- property cameras¶
Camera UID to image URI mapping
- static from_dict(arg0: dict) pix4dvortex.dmodel.CameraList ¶
- hash(self: pix4dvortex.dmodel.CameraList) int ¶
- opf_type = 'camera_list'¶
- to_dict(self: pix4dvortex.dmodel.CameraList) dict ¶
- property uid_generator¶
The camera UID generator.
- class pix4dvortex.dmodel.CameraOptimizationHints¶
OPF camera_optimization_hints extension.
- static from_dict(arg0: dict) pix4dvortex.dmodel.CameraOptimizationHints ¶
- hash(self: pix4dvortex.dmodel.CameraOptimizationHints) int ¶
- opf_type = 'ext_pix4d_camera_optimization_hints'¶
- to_dict(self: pix4dvortex.dmodel.CameraOptimizationHints) dict ¶
- class pix4dvortex.dmodel.CaptureElement¶
OPF capture element.
- property cameras¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.CaptureElement ¶
- property geolocation¶
- property height_above_takeoff_m¶
- property id¶
- property orientation¶
- property reference_camera_id¶
- property rig_model_source¶
- property time¶
- to_dict(self: pix4dvortex.dmodel.CaptureElement) dict ¶
- class pix4dvortex.dmodel.Edge(self: pix4dvortex.dmodel.Edge, *, v1: int, v2: int, confidence: float)¶
2D segment graphs OPF extension 2D edge.
- property confidence¶
- property v1¶
- property v2¶
- class pix4dvortex.dmodel.Features(self: pix4dvortex.dmodel.Features, *, path: os.PathLike)¶
Features OPF extension image features binary file wrapper.
- hash(self: pix4dvortex.dmodel.Features) int ¶
- opf_type = 'ext_pix4d_features'¶
- property path¶
- class pix4dvortex.dmodel.FisheyeInternals¶
OPF sensor fisheye internals.
- property affine¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.FisheyeInternals ¶
- property is_p0_zero¶
- property is_symmetric_affine¶
- property polynomial¶
- property principal_point_px¶
- to_dict(self: pix4dvortex.dmodel.FisheyeInternals) dict ¶
- type = 'fisheye'¶
- class pix4dvortex.dmodel.GCP(self: pix4dvortex.dmodel.GCP, *, is_checkpoint: bool = False, id: str = '', geolocation: pix4dvortex.dmodel.Geolocation = Geolocation(), marks: list[pix4dvortex.dmodel.Mark] = [])¶
OPF GCP.
- property geolocation¶
- property id¶
The identifier or name of the GCP.
- property is_checkpoint¶
If true, the GCP is provided by the user for quality assessment. It is not passed to the calibration but is used later to compute the re-projection error.
- property marks¶
The set of image points that represent the projections of a 3D-point.
- class pix4dvortex.dmodel.GLTFPointCloud(self: pix4dvortex.dmodel.GLTFPointCloud, *, model_path: os.PathLike)¶
OPF-glTF point cloud wrapper
- copy(self: pix4dvortex.dmodel.GLTFPointCloud, *, out_dir: os.PathLike) pix4dvortex.dmodel.GLTFPointCloud ¶
Return a self-contained copy of this object.
Creates and returns a fully self-contained copy of this object. As a result, all managed GLTF files are copied.
- Parameters
out_dir – Directory to copy internal data files into.
- Raises
RuntimeError – if out_dir is the same as the parent of this object’s model_path().
- empty(self: pix4dvortex.dmodel.GLTFPointCloud) bool ¶
- hash(self: pix4dvortex.dmodel.GLTFPointCloud) int ¶
- model_path(self: pix4dvortex.dmodel.GLTFPointCloud) os.PathLike ¶
- opf_type = 'point_cloud'¶
- size(self: pix4dvortex.dmodel.GLTFPointCloud) int ¶
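A sketch of loading and copying a glTF point cloud with the methods above; the file paths are hypothetical:

```python
import pix4dvortex

# Wrap an existing glTF point cloud file (hypothetical path).
pcl = pix4dvortex.dmodel.GLTFPointCloud(model_path="sparse/point_cloud.gltf")

if not pcl.empty():
    print(f"{pcl.size()} points stored at {pcl.model_path()}")
    # copy() duplicates all managed glTF files into out_dir; it raises
    # RuntimeError if out_dir equals the parent of model_path().
    backup = pcl.copy(out_dir="backup/")
```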
- class pix4dvortex.dmodel.GPSBias¶
OPF gps_bias.
- static from_dict(arg0: dict) pix4dvortex.dmodel.GPSBias ¶
- to_dict(self: pix4dvortex.dmodel.GPSBias) dict ¶
- property transform¶
- class pix4dvortex.dmodel.Geolocation(self: pix4dvortex.dmodel.Geolocation, *, crs: pix4dvortex.dmodel.CRS = CRS(), coordinates: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], sigmas: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0])¶
OPF Geolocation
- property coordinates¶
- property crs¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.Geolocation ¶
- property sigmas¶
- to_dict(self: pix4dvortex.dmodel.Geolocation) dict ¶
- class pix4dvortex.dmodel.ITP(self: pix4dvortex.dmodel.ITP, *, id: str, marks: list[pix4dvortex.dmodel.MarkWithSegments], modified_by_user: bool = False)¶
ITP OPF extension ITP.
- static from_dict(arg0: dict) pix4dvortex.dmodel.ITP ¶
- property id¶
- property marks¶
- property modified_by_user¶
- to_dict(self: pix4dvortex.dmodel.ITP) dict ¶
- class pix4dvortex.dmodel.ITPMarkCreationMethod(self: pix4dvortex.dmodel.ITPMarkCreationMethod, *, type: str = 'automatic')¶
ITP OPF extension mark creation method.
- property type¶
- class pix4dvortex.dmodel.InputCameras¶
OPF input_cameras.
- property captures¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.InputCameras ¶
- hash(self: pix4dvortex.dmodel.InputCameras) int ¶
- opf_type = 'input_cameras'¶
- property sensors¶
- to_dict(self: pix4dvortex.dmodel.InputCameras) dict ¶
- class pix4dvortex.dmodel.InputControlPoints(self: pix4dvortex.dmodel.InputControlPoints, *, gcps: list[pix4dvortex.dmodel.GCP] = [], mtps: list[pix4dvortex.dmodel.MTP] = [])¶
OPF input_control_points.
- static from_dict(arg0: dict) pix4dvortex.dmodel.InputControlPoints ¶
- property gcps¶
- hash(self: pix4dvortex.dmodel.InputControlPoints) int ¶
- property mtps¶
- opf_type = 'input_control_points'¶
- to_dict(self: pix4dvortex.dmodel.InputControlPoints) dict ¶
- class pix4dvortex.dmodel.InputRigRelatives¶
OPF input rig relatives.
- property rotation¶
- property translation¶
- class pix4dvortex.dmodel.IntersectionTiePoints(self: pix4dvortex.dmodel.IntersectionTiePoints, *, itps: list[pix4dvortex.dmodel.ITP])¶
ITP OPF extension ITPs
- static from_dict(arg0: dict) pix4dvortex.dmodel.IntersectionTiePoints ¶
- hash(self: pix4dvortex.dmodel.IntersectionTiePoints) int ¶
- property itps¶
- opf_type = 'ext_pix4d_input_intersection_tie_points'¶
- to_dict(self: pix4dvortex.dmodel.IntersectionTiePoints) dict ¶
- class pix4dvortex.dmodel.MTP(self: pix4dvortex.dmodel.MTP, *, is_checkpoint: bool = False, id: str = '', marks: list[pix4dvortex.dmodel.Mark] = [])¶
OPF MTP.
- property id¶
The identifier or name of the MTP.
- property is_checkpoint¶
If true, the MTP is provided by the user for quality assessment. It is not passed to the calibration but is used later to compute the re-projection error.
- property marks¶
The set of image points that represent the projections of a 3D-point.
- class pix4dvortex.dmodel.Mark(self: pix4dvortex.dmodel.Mark, *, accuracy: float = 0.0, position: Annotated[list[float], FixedSize(2)] = [0.0, 0.0], camera_id: int = 0)¶
OPF mark.
- property accuracy¶
A number representing the accuracy of the click. This will be used by the calibration as a weight for this mark.
- property camera_id¶
The camera ID to reference the image.
- property position¶
The mark position in the image.
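The control point types above compose as sketched below; the IDs, pixel positions and accuracies are made up for illustration:

```python
from pix4dvortex import dmodel

# Two image observations (marks) of the same 3D point,
# identified by the camera IDs of the images they appear in.
marks = [
    dmodel.Mark(accuracy=0.5, position=[1024.0, 768.0], camera_id=0),
    dmodel.Mark(accuracy=0.5, position=[980.5, 801.2], camera_id=1),
]

# A manual tie point referencing those marks...
mtp = dmodel.MTP(id="mtp_1", marks=marks, is_checkpoint=False)

# ...collected into the input_control_points OPF item.
control_points = dmodel.InputControlPoints(gcps=[], mtps=[mtp])
```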
- class pix4dvortex.dmodel.MarkWithSegments(self: pix4dvortex.dmodel.MarkWithSegments, *, camera_id: int, common_endpoint_px: Annotated[list[float], FixedSize(2)], other_endpoints_px: list[Annotated[list[float], FixedSize(2)]], creation_method: pix4dvortex.dmodel.ITPMarkCreationMethod, accuracy: Optional[float] = None)¶
ITP OPF extension mark with segments.
- property accuracy¶
- property camera_id¶
- property common_endpoint_px¶
- property creation_method¶
- property other_endpoints_px¶
- class pix4dvortex.dmodel.Matches(self: pix4dvortex.dmodel.Matches, *, path: os.PathLike)¶
Matches OPF extension image matches binary file wrapper.
- hash(self: pix4dvortex.dmodel.Matches) int ¶
- opf_type = 'ext_pix4d_matches'¶
- property path¶
- class pix4dvortex.dmodel.OmegaPhiKappa(self: pix4dvortex.dmodel.OmegaPhiKappa, *, angles_deg: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], sigmas_deg: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], crs: str = '')¶
OPF Omega-Phi-Kappa camera orientation.
- property angles_deg¶
- property crs¶
- property sigmas_deg¶
- type = 'omega_phi_kappa'¶
- class pix4dvortex.dmodel.OriginalMatches(self: pix4dvortex.dmodel.OriginalMatches, *, path: os.PathLike)¶
Matches OPF extension original image matches binary file wrapper.
- hash(self: pix4dvortex.dmodel.OriginalMatches) int ¶
- opf_type = 'ext_pix4d_original_matches'¶
- property path¶
- class pix4dvortex.dmodel.PerspectiveInternals¶
OPF sensor perspective internals.
- property distortion_type¶
- property focal_length_px¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.PerspectiveInternals ¶
- property principal_point_px¶
- property radial_distortion¶
- property tangential_distortion¶
- to_dict(self: pix4dvortex.dmodel.PerspectiveInternals) dict ¶
- type = 'perspective'¶
- class pix4dvortex.dmodel.ProjectedCameras¶
OPF projected_cameras.
- property captures¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.ProjectedCameras ¶
- hash(self: pix4dvortex.dmodel.ProjectedCameras) int ¶
- opf_type = 'projected_input_cameras'¶
- property sensors¶
- to_dict(self: pix4dvortex.dmodel.ProjectedCameras) dict ¶
- class pix4dvortex.dmodel.ProjectedCapture¶
OPF projected_capture.
- static from_dict(arg0: dict) pix4dvortex.dmodel.ProjectedCapture ¶
- property geolocation¶
- property id¶
- property orientation¶
- to_dict(self: pix4dvortex.dmodel.ProjectedCapture) dict ¶
- class pix4dvortex.dmodel.ProjectedControlPoints(*args, **kwargs)¶
OPF projected control points.
Overloaded function.
__init__(self: pix4dvortex.dmodel.ProjectedControlPoints, *, projected_gcps: list[pix4dvortex.dmodel.ProjectedGCP] = []) -> None
__init__(self: pix4dvortex.dmodel.ProjectedControlPoints, *, input_control_points: pix4dvortex.dmodel.InputControlPoints, scene_ref_frame: pix4dvortex.dmodel.SceneRefFrame) -> None
- static from_dict(arg0: dict) pix4dvortex.dmodel.ProjectedControlPoints ¶
- hash(self: pix4dvortex.dmodel.ProjectedControlPoints) int ¶
- opf_type = 'projected_control_points'¶
- property projected_gcps¶
- to_dict(self: pix4dvortex.dmodel.ProjectedControlPoints) dict ¶
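A sketch of the second overload, which projects input control points into a scene reference frame. Default-constructed inputs are used here only to keep the snippet self-contained; in practice both would come from earlier processing steps:

```python
from pix4dvortex import dmodel

# Stand-ins for objects normally produced earlier in the pipeline.
input_control_points = dmodel.InputControlPoints()
scene_ref_frame = dmodel.SceneRefFrame()

# Project the control points into the scene reference frame.
projected = dmodel.ProjectedControlPoints(
    input_control_points=input_control_points,
    scene_ref_frame=scene_ref_frame,
)
print(projected.to_dict())
```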
- class pix4dvortex.dmodel.ProjectedGCP(self: pix4dvortex.dmodel.ProjectedGCP, *, id: str = '', coordinates: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], sigmas: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0])¶
OPF projected GCP
- property coordinates¶
- property id¶
- property sigmas¶
- class pix4dvortex.dmodel.ProjectedGeolocation¶
OPF projected geolocation.
- property position¶
- property sigmas¶
- class pix4dvortex.dmodel.ProjectedOrientation¶
OPF projected orientation.
- property angles_deg¶
- property sigmas_deg¶
- class pix4dvortex.dmodel.ProjectedRigTranslation¶
OPF projected_rig_translation.
- property sigmas¶
- property values¶
- class pix4dvortex.dmodel.ProjectedSensor¶
OPF projected_sensor.
- static from_dict(arg0: dict) pix4dvortex.dmodel.ProjectedSensor ¶
- property id¶
- property rig_translation¶
- to_dict(self: pix4dvortex.dmodel.ProjectedSensor) dict ¶
- class pix4dvortex.dmodel.RigRelativeRotation¶
OPF rig relative rotation.
- property angles_deg¶
- property sigmas_deg¶
- class pix4dvortex.dmodel.RigRelativeTranslation¶
OPF rig relative translation.
- property sigmas_m¶
- property values_m¶
- class pix4dvortex.dmodel.RigidTransform¶
OPF rigid_transform.
- property rotation_deg¶
- property scale¶
- property translation¶
- class pix4dvortex.dmodel.SceneRefFrame(self: pix4dvortex.dmodel.SceneRefFrame, *, proj_crs: pix4dvortex.dmodel.CRS = CRS(), base_to_canonical: pix4dvortex.dmodel.BaseToCanonical = BaseToCanonical())¶
OPF scene_reference_frame.
- property base_to_canonical¶
Parameters used to convert the projected coordinates into the processing coordinates.
- property crs¶
Information on the projected Spatial Reference System
- static from_dict(arg0: dict) pix4dvortex.dmodel.SceneRefFrame ¶
- hash(self: pix4dvortex.dmodel.SceneRefFrame) int ¶
- opf_type = 'scene_reference_frame'¶
- to_dict(self: pix4dvortex.dmodel.SceneRefFrame) dict ¶
- class pix4dvortex.dmodel.SegmentGraph2D(self: pix4dvortex.dmodel.SegmentGraph2D, *, camera_id: int, vertices: list[pix4dvortex.dmodel.Vertex], edges: list[pix4dvortex.dmodel.Edge])¶
2D segment graphs OPF extension 2D segment graph.
- property camera_id¶
- property edges¶
- property vertices¶
- class pix4dvortex.dmodel.SegmentGraphs2D(self: pix4dvortex.dmodel.SegmentGraphs2D, *, graphs: list[pix4dvortex.dmodel.SegmentGraph2D])¶
2D segment graphs OPF extension item.
- static from_dict(arg0: dict) pix4dvortex.dmodel.SegmentGraphs2D ¶
- property graphs¶
- hash(self: pix4dvortex.dmodel.SegmentGraphs2D) int ¶
- opf_type = 'ext_pix4d_2d_segment_graphs'¶
- to_dict(self: pix4dvortex.dmodel.SegmentGraphs2D) dict ¶
- class pix4dvortex.dmodel.SensorElement¶
OPF sensor element.
- property bands¶
- static from_dict(arg0: dict) pix4dvortex.dmodel.SensorElement ¶
- property id¶
- property image_size_px¶
- property internals¶
- property name¶
- property pixel_size_um¶
- property rig_relatives¶
- property shutter_type¶
- to_dict(self: pix4dvortex.dmodel.SensorElement) dict ¶
- class pix4dvortex.dmodel.SphericalInternals¶
OPF sensor spherical internals.
- static from_dict(arg0: dict) pix4dvortex.dmodel.SphericalInternals ¶
- property principal_point_px¶
- to_dict(self: pix4dvortex.dmodel.SphericalInternals) dict ¶
- type = 'spherical'¶
- class pix4dvortex.dmodel.Vertex(self: pix4dvortex.dmodel.Vertex, *, position: Annotated[list[float], FixedSize(2)], confidence: float)¶
2D segment graphs OPF extension 2D vertex.
- property confidence¶
- property position¶
- class pix4dvortex.dmodel.YawPitchRoll(self: pix4dvortex.dmodel.YawPitchRoll, *, angles_deg: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], sigmas_deg: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0])¶
OPF Yaw-Pitch-Roll camera orientation.
- property angles_deg¶
- property sigmas_deg¶
- type = 'yaw_pitch_roll'¶
- class pix4dvortex.dmodel._SurfaceModelMesh(self: pix4dvortex.dmodel._SurfaceModelMesh)¶
Surface model mesh OPF extension.
- class pix4dvortex.dmodel._RadiometryInputs¶
Radiometry inputs OPF extension.
- class pix4dvortex.dmodel._RadiometrySettings¶
Radiometry settings OPF extension.
pix4dvortex data model¶
Additional pix4dvortex data types. These are often OPF type composites, legacy classes, or a mix of both. Unlike the OPF compatible classes, which are all defined in dmodel, these are defined in different modules, which also contain related processing functions and settings.
For convenience, this section presents the class documentation of the main classes only. Documentation of the full public API of each class is available in the relevant Core Processing sub-sections.
- class pix4dvortex.cameras.InputCameras
Composite of pix4dvortex.dmodel OPF input camera and related types.
This class wraps pix4dvortex.dmodel.InputCameras, pix4dvortex.dmodel.CameraList, pix4dvortex.dmodel.CameraOptimizationHints, and _RadiometryData objects and provides a higher-level interface with convenient data access methods.
- class pix4dvortex.cameras.ProjectedCameras
Composite of pix4dvortex.dmodel OPF projected camera and related types.
This class wraps pix4dvortex.dmodel.ProjectedCameras and pix4dvortex.dmodel.SceneRefFrame, in addition to components of InputCameras, and provides a higher-level interface with convenient data access methods.
- class pix4dvortex.calib._CalibratedScene
Composite of pix4dvortex.dmodel OPF calibration, scene reference frame, input cameras and control points.
This class wraps pix4dvortex.dmodel.Calibration, pix4dvortex.dmodel.SceneRefFrame, pix4dvortex.dmodel.InputCameras, pix4dvortex.dmodel.InputControlPoints, pix4dvortex.dmodel._SurfaceModelMesh, pix4dvortex.dmodel.CameraOptimizationHints, and _RadiometryData objects and provides access to these objects as well as user convenience methods.
Point cloud
- class pix4dvortex.pcl.PointCloud
Bases: GLTFPointCloud
Composite of pix4dvortex.dmodel.GLTFPointCloud and pix4dvortex.dmodel.SceneRefFrame OPF types.
This class is a specialization of pix4dvortex.dmodel.GLTFPointCloud, with a pix4dvortex.dmodel.SceneRefFrame object and additional methods to increase usability. It is intended to model the output of a point cloud densification algorithm, with the pix4dvortex.dmodel.SceneRefFrame representing the bridge between the glTF positions - stored in the “canonical” SRS - and the “base” SRS, typically a real-world “projected” SRS.
- class pix4dvortex.dsm._Tiles
- class pix4dvortex.ortho._Tiles
- class pix4dvortex.mesh._MeshLOD
Level of detail representation of a textured mesh.
- class pix4dvortex.mesh._Texture
- class pix4dvortex.mesh._MeshGeom
Core Processing¶
Input and Calibrated Cameras¶
Camera-related data types, which serve as input to the processing algorithms, and related utilities.
- class pix4dvortex.cameras.DepthConfidenceInfo(self: pix4dvortex.cameras.DepthConfidenceInfo, *, path: os.PathLike, min: float = 0.0, threshold: float = 1.0, max: float = 2.0)¶
- property max¶
- property min¶
- property path¶
- property threshold¶
- class pix4dvortex.cameras.DepthInfo(self: pix4dvortex.cameras.DepthInfo, *, path: os.PathLike, unit_to_meters: Optional[float] = 1.0, confidence_info: Optional[pix4dvortex.cameras.DepthConfidenceInfo] = None)¶
- property confidence_info¶
- property path¶
- property unit_to_meters¶
- class pix4dvortex.cameras.GeoCoordinates(self: pix4dvortex.cameras.GeoCoordinates, *, lat: float = 0, lon: float = 0.0, alt: float = 0.0, lat_accuracy: float = 0.0, lon_accuracy: float = 0.0, alt_accuracy: float = 0.0)¶
Deprecated: Geolocation coordinates and accuracies
Geographic coordinates and accuracies lat[°], lon[°], alt[m], σ(lat[m]), σ(lon[m]), and σ(alt[m])
- Raises
ValueError – if any coordinate or associated accuracy is out of bounds.
- property alt¶
- property alt_accuracy¶
- property lat¶
- property lat_accuracy¶
- property lon¶
- property lon_accuracy¶
- class pix4dvortex.cameras.GeoRotation(self: pix4dvortex.cameras.GeoRotation, *, yaw: float = 0.0, pitch: float = 0.0, roll: float = 0.0, yaw_accuracy: float = 0.0, pitch_accuracy: float = 0.0, roll_accuracy: float = 0.0, unit: str = 'degrees')¶
Deprecated: Yaw-Pitch-Roll rotation angles and accuracies yaw, pitch, roll, σ(yaw), σ(pitch), and σ(roll).
Parameter unit specifies the unit of measurement of the arguments and must be one of “degrees” or “radians”.
- Raises
ValueError – if an invalid angular unit is chosen.
ValueError – if any of the rotation angles or their associated accuracies is out of bounds.
- property pitch¶
- property pitch_accuracy¶
σ(pitch) (radians)
- property roll¶
- property roll_accuracy¶
σ(roll) (radians)
- property yaw¶
- property yaw_accuracy¶
σ(yaw) (radians)
- class pix4dvortex.cameras.GeoTag(*args, **kwargs)¶
Geolocation and orientation in a given coordinate reference system
Overloaded function.
__init__(self: pix4dvortex.cameras.GeoTag, *, geolocation: Optional[pix4dvortex.dmodel.Geolocation] = None, orientation: Optional[Union[pix4dvortex.dmodel.YawPitchRoll, pix4dvortex.dmodel.OmegaPhiKappa]] = None) -> None
Construct from position and orientation.
__init__(self: pix4dvortex.cameras.GeoTag, *, horizontal_srs_code: Optional[str] = None, vertical_srs_code: Optional[str] = None, geo_position: Optional[pix4dvortex.cameras.GeoCoordinates] = None, geo_rotation: Optional[pix4dvortex.cameras.GeoRotation] = None) -> None
Deprecated, use the above initialization instead.
- property geo_position¶
Position ((lat[°], lon[°], alt[m]), (σ(lat[m]), σ(lon[m]), σ(alt[m]))). Deprecated, use ‘geolocation’ instead.
- property geo_rotation¶
Rotation ((yaw, pitch, roll), (σ(yaw), σ(pitch), σ(roll))) in [rad]. Deprecated, use ‘orientation’ instead.
- property geolocation¶
- property horizontal_srs_code¶
Get horizontal CRS component. Deprecated, retrieve the CRS from ‘location’ instead.
- property orientation¶
- property vertical_srs_code¶
Get vertical CRS component. Deprecated, retrieve the CRS from ‘location’ instead.
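A sketch of the non-deprecated GeoTag overload, combining a geolocation and a yaw-pitch-roll orientation. All numeric values and the "EPSG:4326" definition are illustrative assumptions:

```python
from pix4dvortex import dmodel
from pix4dvortex.cameras import GeoTag

# Geographic position with per-axis accuracies (sigmas), illustrative values.
geolocation = dmodel.Geolocation(
    crs=dmodel.CRS(definition="EPSG:4326"),
    coordinates=[46.52, 6.63, 495.0],
    sigmas=[0.05, 0.05, 0.10],
)

# Yaw-Pitch-Roll orientation in degrees.
orientation = dmodel.YawPitchRoll(angles_deg=[90.0, 0.0, 0.0])

tag = GeoTag(geolocation=geolocation, orientation=orientation)
```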
- class pix4dvortex.cameras.InputCameras(*args, **kwargs)¶
Composite of pix4dvortex.dmodel OPF input camera and related types.
This class wraps pix4dvortex.dmodel.InputCameras, pix4dvortex.dmodel.CameraList, pix4dvortex.dmodel.CameraOptimizationHints, and _RadiometryData objects and provides a higher-level interface with convenient data access methods.
Overloaded function.
__init__(self: pix4dvortex.cameras.InputCameras) -> None
__init__(self: pix4dvortex.cameras.InputCameras, *, input_cameras: pix4dvortex.dmodel.InputCameras, camera_list: pix4dvortex.dmodel.CameraList, optimization_hints: Optional[pix4dvortex.dmodel.CameraOptimizationHints] = None, radiometry_data: Optional[pix4dvortex.cameras._RadiometryData] = None) -> None
- get_depth_info(self: pix4dvortex.cameras.InputCameras, *, camera_id: int) Optional[pix4dvortex.cameras.DepthInfo] ¶
Depth information corresponding to camera with given ID.
- Raises
IndexError – if camera_id is not valid.
- get_image_path(self: pix4dvortex.cameras.InputCameras, *, camera_id: int) os.PathLike ¶
Path of image file with given camera ID.
- Raises
IndexError – if camera_id is not valid.
- optimization_hints(self: pix4dvortex.cameras.InputCameras) pix4dvortex.dmodel.CameraOptimizationHints ¶
- radiometry_data(self: pix4dvortex.cameras.InputCameras) Optional[pix4dvortex.cameras._RadiometryData] ¶
- class pix4dvortex.cameras.ProjectedCameras(*args, **kwargs)¶
Composite of pix4dvortex.dmodel OPF projected camera and related types.
This class wraps pix4dvortex.dmodel.ProjectedCameras and pix4dvortex.dmodel.SceneRefFrame, in addition to components of InputCameras, and provides a higher-level interface with convenient data access methods.
Overloaded function.
__init__(self: pix4dvortex.cameras.ProjectedCameras, *, input_cameras: pix4dvortex.cameras.InputCameras, proj_srs: Optional[pix4dvortex.coordsys.SpatialReference] = None, proj_srs_geoid_height: Optional[float] = None) -> None
Create a ProjectedCameras instance.
- Parameters
input_cameras – A container of input camera objects.
proj_srs – (optional) Projected coordinate system to be used for camera calibration. Must be isometric. If not specified, a UTM zone corresponding to the geographic coordinate system of the first image with geolocation is used.
proj_srs_geoid_height – (optional, only accepted for a compound proj_srs without a geoid model) The geoid height (aka geoid undulation approximation) to use when the image coordinates use an SRS that requires conversion to the projected SRS (proj_srs).
- Returns
A ProjectedCameras object. Its coordinate system is proj_srs if specified, or UTM otherwise.
- Raises
RuntimeError – if any of the images is not geolocated.
RuntimeError – if proj_srs is not set and the projected SRS derived from the images is not isometric.
RuntimeError – if proj_srs is set and is not both projected and isometric.
__init__(self: pix4dvortex.cameras.ProjectedCameras, *, projected_cameras: pix4dvortex.dmodel.ProjectedCameras, input_cameras: pix4dvortex.dmodel.InputCameras, camera_list: pix4dvortex.dmodel.CameraList, scene_ref_frame: pix4dvortex.dmodel.SceneRefFrame, optimization_hints: Optional[pix4dvortex.dmodel.CameraOptimizationHints] = None, radiometry_data: Optional[pix4dvortex.cameras._RadiometryData] = None) -> None
Create a ProjectedCameras from OPF objects.
- property captures¶
A list of calibration Capture objects.
- get_depth_info(self: pix4dvortex.cameras.ProjectedCameras, *, camera_id: int) Optional[pix4dvortex.cameras.DepthInfo] ¶
Depth information corresponding to camera with given ID.
- Raises
IndexError – if camera_id is not valid.
- get_image_path(self: pix4dvortex.cameras.ProjectedCameras, *, camera_id: int) os.PathLike ¶
Path of image file with given camera ID.
- Raises
IndexError – if camera_id is not valid.
- optimization_hints(self: pix4dvortex.cameras.ProjectedCameras) pix4dvortex.dmodel.CameraOptimizationHints ¶
- property proj_srs¶
The projected spatial reference system.
- property proj_srs_geoid_height¶
- radiometry_data(self: pix4dvortex.cameras.ProjectedCameras) Optional[pix4dvortex.cameras._RadiometryData] ¶
- property scene_ref_frame¶
Information on the internal processing spatial reference system
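A sketch of the first ProjectedCameras overload. The image paths are hypothetical and the images are assumed to carry geotags; letting proj_srs default selects a UTM zone derived from the first geolocated image:

```python
import pix4dvortex

# Build input cameras from geotagged images (hypothetical paths).
input_cams = pix4dvortex.cameras.make_input_cameras(
    image_info=["flight/IMG_0001.JPG", "flight/IMG_0002.JPG"],
)

# Project into an isometric SRS; raises RuntimeError if any image
# is not geolocated or the derived SRS is not isometric.
projected = pix4dvortex.cameras.ProjectedCameras(input_cameras=input_cams)
print(projected.proj_srs)
```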
- class pix4dvortex.cameras.Settings(self: pix4dvortex.cameras.Settings, *, uid_gen: pix4dvortex.cameras.UIDGen = <UIDGen.BLAKE2B_8_FILE_HASH: 0>, camera_db_path: Optional[os.PathLike] = None, radiometry: Optional[pix4dvortex.cameras.Settings.Radiometry] = None)¶
Configuration settings for InputCameras creation.
- Parameters
uid_gen – (optional) OPF camera UID generator.
radiometry – (optional) experimental radiometric processing settings.
- class Radiometry(self: pix4dvortex.cameras.Settings.Radiometry, *, weather_condition: pix4dvortex.cameras.WeatherCondition = <WeatherCondition.CLEAR_SKY: 0>, target_search_count: int = 3)¶
- property target_search_count¶
Number of captures at the beginning and end to check for reflectance targets.
- property weather_condition¶
Weather conditions under which the data were captured.
- property camera_db_path¶
User defined alternative camera DB path.
- property radiometry¶
Radiometry settings
- to_dict(self: pix4dvortex.cameras.Settings) dict ¶
- property uid_gen¶
OPF camera UID generator
- class pix4dvortex.cameras.UIDGen(self: pix4dvortex.cameras.UIDGen, value: int)¶
Members:
BLAKE2B_8_FILE_HASH
IMG_CONTENT_HASH
- BLAKE2B_8_FILE_HASH = <UIDGen.BLAKE2B_8_FILE_HASH: 0>¶
- IMG_CONTENT_HASH = <UIDGen.IMG_CONTENT_HASH: 1>¶
- property name¶
- property value¶
- class pix4dvortex.cameras.WeatherCondition(self: pix4dvortex.cameras.WeatherCondition, value: int)¶
Members:
CLEAR_SKY : clear sky
OVERCAST : overcast sky
UNKNOWN : unknown weather condition
- CLEAR_SKY = <WeatherCondition.CLEAR_SKY: 0>¶
- OVERCAST = <WeatherCondition.OVERCAST: 1>¶
- UNKNOWN = <WeatherCondition.UNKNOWN: 2>¶
- property name¶
- property value¶
- pix4dvortex.cameras.make_input_cameras(*, image_info: list[os.PathLike], depth_info: Optional[dict[os.PathLike, pix4dvortex.cameras.DepthInfo]] = None, camera_db_path: Optional[os.PathLike] = None, external_geotags: dict[os.PathLike, pix4dvortex.cameras.GeoTag] = {}, settings: pix4dvortex.cameras.Settings = <pix4dvortex.cameras.Settings object at 0x70ce7ac78e30>, logger: _pyvtx.logging.Logger = None, _stats: Callable[[dict], None] = None) pix4dvortex.cameras.InputCameras ¶
Create an InputCameras instance.
The optional external_geotags argument is used to: 1) set geotags for images that are not geolocated. These geotags must contain coordinates and SRS codes, and optionally also rotation data; failing that, a ValueError is raised. 2) update existing geotags. These geotags can contain any combination of SRS code, coordinates and rotation data.
- Parameters
image_info – List of image file paths to create cameras from.
depth_info – (optional) Mapping of image file path to corresponding depth and depth confidence.
camera_db_path – (optional, deprecated) path of the camera database. The embedded database is used if not set. Use settings.camera_db_path instead.
external_geotags – (optional) mapping of image paths to geotags.
settings – (optional) configuration settings.
logger – (optional) logging callable object.
_stats – (optional) call-back accessing image metadata and stats.
- Raises
ValueError – if any of the images causes a camera creation error.
ValueError – if any element of depth_info does not map to an image in image_info.
RuntimeError – if any element of image_info or depth_info does not map to an existing file.
ValueError – if the new geolocation is missing coordinates.
ValueError – if external_geotags does not reference an existing file.
RuntimeError – if any of the mapped geotags does not map to an image in image_info.
ValueError – if two image files are bit-wise identical.
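As a minimal sketch of how such a call fits together: the helper name `build_input_cameras`, the directory layout, and the glob pattern are hypothetical; the calls follow the make_input_cameras and Settings signatures documented above. Passing the pix4dvortex module in as `vtx` keeps the sketch importable without the SDK installed.

```python
# Hedged sketch: helper name, paths, and glob pattern are illustrative only.
from pathlib import Path

def build_input_cameras(vtx, image_dir: Path):
    """Create an InputCameras instance from all JPEGs in image_dir.

    `vtx` is the imported pix4dvortex module.
    """
    images = sorted(image_dir.glob("*.jpg"))
    # Settings follows the documented constructor; BLAKE2B_8_FILE_HASH is
    # the documented default UID generator.
    settings = vtx.cameras.Settings(
        uid_gen=vtx.cameras.UIDGen.BLAKE2B_8_FILE_HASH,
    )
    # Raises ValueError/RuntimeError as listed above (e.g. if two image
    # files are bit-wise identical or a path does not exist).
    return vtx.cameras.make_input_cameras(
        image_info=images,
        settings=settings,
    )
```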
- class pix4dvortex.cameras._RadiometryData(self: pix4dvortex.cameras._RadiometryData, *, _radiometry_inputs: Optional[pix4dvortex.dmodel._RadiometryInputs] = None, _radiometry_settings: Optional[pix4dvortex.dmodel._RadiometrySettings] = None)¶
Composite of the pix4dvortex.dmodel._RadiometryInputs and pix4dvortex.dmodel._RadiometrySettings OPF extension types.
Contains radiometric input from reflectance targets and radiometric corrections for each sensor, as well as the orthomosaic settings used to generate the radiometric input data. Objects of this class are used internally in the ortho and indexmap modules when generating multispectral orthomosaics and reflectance maps.
Camera Calibration¶
Module for calibration utilities.
- class pix4dvortex.calib.ReoptSettings(self: pix4dvortex.calib.ReoptSettings, *, calibration_int_param_opt: pix4dvortex.calib.Settings.OptimIntType = <OptimIntType.All: 2>, calibration_ext_param_opt: pix4dvortex.calib.Settings.OptimExtType = <OptimExtType.All: 2>, use_optimized_internals: bool = True, rematch_additional_settings: Optional[pix4dvortex.calib.ReoptSettings.RematchAdditional] = None, _lever_arm_opt: pix4dvortex.calib.Settings._LeverArmType = <_LeverArmType.NoOffset: 0>)¶
- class RematchAdditional(self: pix4dvortex.calib.ReoptSettings.RematchAdditional, *, image_pair_settings: pix4dvortex.calib.Settings.ImagePairSettings = <pix4dvortex.calib.Settings.ImagePairSettings object at 0x70ce7aceb530>, is_oblique_scene: bool = False)¶
- property image_pair_settings¶
Settings for image pair generation. See ImagePairSettings.
- property is_oblique_scene¶
- to_dict(self: pix4dvortex.calib.ReoptSettings.RematchAdditional) dict ¶
- property calibration_ext_param_opt¶
Type of optimization for external camera parameters. See OptimExtType.
- property calibration_int_param_opt¶
Type of optimization for internal camera parameters. See OptimIntType.
- property rematch_additional_settings¶
- to_dict(self: pix4dvortex.calib.ReoptSettings) dict ¶
- property use_optimized_internals¶
Specifies whether to use the optimized internals from the calibrated scene or the internals from the initial cameras. Use False if you have updated the internals in the initial cameras and want to use their value instead of the optimization results.
- pix4dvortex.calib.calibrate(*, cameras: pix4dvortex.cameras.ProjectedCameras, settings: pix4dvortex.calib.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7acbb1f0>, control_points: Optional[pix4dvortex.dmodel.InputControlPoints] = None, _keypoint_metrics_handler: Callable[[pix4dvortex.calib.analytics._KeypointMetrics], None] = None, logger: _pyvtx.logging.Logger = None, progress_callback: Callable[[str, int], None] = None, stop_function: Callable[[], bool] = None) pix4dvortex.calib._CalibratedScene ¶
Calibrate cameras.
- Parameters
cameras – Projected cameras container.
settings – The calibration settings, see Settings.
resources – [Optional] HW resource configuration parameters.
control_points – [Optional] The input control points, see InputControlPoints. If settings.use_gcp_srs is False, the SRS of the GCPs must be the same as the system used in cameras; otherwise all GCPs must use the same Cartesian and isometric system (e.g. projected, 3D engineering, or a compound of projected and vertical).
_keypoint_metrics_handler – [Optional] A callable to handle an object generated during calibration. Its single argument provides a _KeypointMetrics.
logger – [Optional] Logging callback.
progress_callback – [Optional] Progress callback.
stop_function – [Optional] Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A calibrated scene, see _CalibratedScene.
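The calibration call above can be sketched as follows. This is an illustrative helper, not SDK code: the function name and the chosen settings values are assumptions, while the calibrate and Settings signatures come from this reference.

```python
# Hedged sketch of a calibrate() call; `vtx` is the imported pix4dvortex
# module and `cameras` a ProjectedCameras instance obtained elsewhere.
def calibrate_scene(vtx, cameras):
    # image_scale=0.5 is the documented recommendation for large RGB images;
    # rematch=True adds more matches at the cost of processing time.
    settings = vtx.calib.Settings(
        image_scale=0.5,
        rematch=True,
    )
    return vtx.calib.calibrate(
        cameras=cameras,
        settings=settings,
        # Optional progress callback: receives a task name and a percentage.
        progress_callback=lambda task, pct: print(f"{task}: {pct}%"),
    )
```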
- pix4dvortex.calib.reoptimize(*, cameras: pix4dvortex.cameras.ProjectedCameras, scene: pix4dvortex.calib._CalibratedScene, settings: pix4dvortex.calib.ReoptSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7aca01f0>, control_points: Optional[pix4dvortex.dmodel.InputControlPoints] = None, logger: _pyvtx.logging.Logger = None, progress_callback: Callable[[str, int], None] = None, stop_function: Callable[[], bool] = None) pix4dvortex.calib._CalibratedScene ¶
Reoptimize a scene.
- Parameters
cameras – Projected cameras container.
scene – The calibrated scene.
settings – The reoptimize settings, see ReoptSettings.
resources – [Optional] HW resource configuration parameters.
control_points – [Optional] The input control points, see InputControlPoints.
progress_callback – [Optional] Progress callback.
stop_function – [Optional] Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A calibrated scene, see _CalibratedScene.
- class pix4dvortex.calib.Settings(self: pix4dvortex.calib.Settings, *, keypt_number: Optional[int] = None, image_scale: float = 1.0, matching_algorithm: pix4dvortex.calib.Settings.MatchingAlgorithm = <MatchingAlgorithm.Standard: 0>, image_pair_settings: pix4dvortex.calib.Settings.ImagePairSettings = <pix4dvortex.calib.Settings.ImagePairSettings object at 0x70ce7ace82f0>, use_rig_matching: bool = False, min_matches: int = 20, min_image_distance: float = 0.0, pipeline: pix4dvortex.calib.Settings.CalibrationType = <CalibrationType.Standard: 0>, calibration_int_param_opt: pix4dvortex.calib.Settings.OptimIntType = <OptimIntType.All: 2>, calibration_ext_param_opt: pix4dvortex.calib.Settings.OptimExtType = <OptimExtType.All: 2>, prior_position_confidence: pix4dvortex.calib.Settings.PriorPositionConfidenceType = <PriorPositionConfidenceType.Low: 0>, oblique: bool = True, rematch: bool = False, use_gcp_srs: bool = False, use_itps: bool = False, save_intermediate_results: bool = False, _lever_arm_opt: pix4dvortex.calib.Settings._LeverArmType = <_LeverArmType.NoOffset: 0>, _rig_relative_opt: pix4dvortex.calib.Settings._ApiRigRelativesOptimType = <_ApiRigRelativesOptimType.RotationUsingSubsetOfCaptures: 2>, _use_sensor_characteristics: bool = True)¶
- class ImageDistance(self: pix4dvortex.calib.Settings.ImageDistance, arg0: float)¶
Image distance. The unit of measure is that of the processing SRS (usually meters or feet). See SceneRefFrame.
- to_dict(self: pix4dvortex.calib.Settings.ImageDistance) dict ¶
- property value¶
- class ImageDistanceToMedian(self: pix4dvortex.calib.Settings.ImageDistanceToMedian, arg0: float)¶
Image distance relative to the median distance of consecutive images.
- to_dict(self: pix4dvortex.calib.Settings.ImageDistanceToMedian) dict ¶
- property value¶
- class PriorPositionConfidenceType(self: pix4dvortex.calib.Settings.PriorPositionConfidenceType, value: int)¶
Confidence that the input prior positions for the cameras contain no outlier.
Members:
- Low :
(default) Outlier or inconsistent positions MAY be present in the input.
- High :
Low number of outliers expected.
- High = <PriorPositionConfidenceType.High: 1>¶
- Low = <PriorPositionConfidenceType.Low: 0>¶
- property name¶
- property value¶
- property calibration_ext_param_opt¶
Type of optimization for external camera parameters. See OptimExtType.
- property calibration_int_param_opt¶
Type of optimization for internal camera parameters. See OptimIntType.
- property image_pair_settings¶
Settings for image pair generation. See ImagePairSettings.
- property image_scale¶
Image scale at which features are computed. The scale is a ratio to the initial size of the image. Recommended values:
- 0.5 for RGB images of 40 Mpx and above
- 0.5-1.0 for RGB images from 12 to 40 Mpx
- 1.0-2.0 for images with lower resolutions (multispectral, mobile captures, etc.)
- property keypt_number¶
Target number of keypoints per image for matching. If set to None, the value is determined automatically.
- property matching_algorithm¶
Matching algorithm. See MatchingAlgorithm.
- property min_image_distance¶
Minimum distance (in the xy-plane) for matching or for homography estimation. In most cases it does not make sense to exclude the closest images from matching; the recommended value is therefore 0.0.
- property min_matches¶
Threshold on the minimum number of matches per image pair for the pair to be considered in calibration. The pair is fully discarded if its number of matches is below this threshold. The recommended value is 20.
- property oblique¶
Type of flight plan:
- True for oblique or free flight (default)
- False for nadir flight
- property pipeline¶
Type of calibration pipeline. See CalibrationType and the user guide for more information.
- property prior_position_confidence¶
Confidence that the input prior positions for the cameras contain no outlier.
- property rematch¶
Set True to enable rematching.
Rematching adds more matches after the first part of the initial processing. This usually improves the quality of the results at the cost of increased processing time.
- property save_intermediate_results¶
Save features and matches computed during calibration as binary files. This speeds up subsequent calls to reoptimize. Note: the binary files may have a significant size (of the order of magnitude of the size of the input images).
- to_dict(self: pix4dvortex.calib.Settings) dict ¶
- property use_gcp_srs¶
Use SRS defined in GCPs rather than projected cameras for deriving the scene reference frame definition.
- property use_itps¶
Generate tie points out of structural line intersections to help calibrate the scene.
- property use_rig_matching¶
Combine matches of all cameras in a rig instance when matching. Only relevant for multispectral rig captures with pipeline = LowTexturePlanar together with matching_algorithm = GeometricallyVerified.
- class pix4dvortex.calib.Settings.TEMPLATES¶
Pre-defined settings for commonly used data capture types.
- LARGE¶
Optimized for use with aerial non-multispectral image captures of large scenes.
- FLAT¶
Optimized for aerial nadir data sets of flat, possibly low-texture, scenes. Well suited for multispectral cameras (rigs).
- MAPS_3D¶
Optimized for small (less than 500 images) aerial nadir or oblique data sets with high image overlap acquired in a grid flight plan.
- MODELS_3D¶
Optimized for aerial oblique or terrestrial data sets with high image overlap.
- CATCH¶
Specifically designed for data sets captured by PIX4Dcatch.
- RTK¶
Optimized for RTK aerial non-multispectral image captures of large scenes.
- class pix4dvortex.calib._CalibratedScene(self: pix4dvortex.calib._CalibratedScene, *, calibration: pix4dvortex.dmodel.Calibration = <pix4dvortex.dmodel.Calibration object at 0x70ce7acea730>, scene_ref_frame: pix4dvortex.dmodel.SceneRefFrame = <pix4dvortex.dmodel.SceneRefFrame object at 0x70ce7aca56b0>, input_cameras: pix4dvortex.dmodel.InputCameras = <pix4dvortex.dmodel.InputCameras object at 0x70ce7acd2a70>, ground_mesh: Optional[pix4dvortex.dmodel._SurfaceModelMesh] = None, input_control_points: Optional[pix4dvortex.dmodel.InputControlPoints] = None, segment_graphs_2d: Optional[pix4dvortex.dmodel.SegmentGraphs2D] = None)¶
Composite of pix4dvortex.dmodel OPF calibration, scene reference frame, input cameras and control points.
This class wraps pix4dvortex.dmodel.Calibration, pix4dvortex.dmodel.SceneRefFrame, pix4dvortex.dmodel.InputCameras, pix4dvortex.dmodel.InputControlPoints, pix4dvortex.dmodel._SurfaceModelMesh, pix4dvortex.dmodel.CameraOptimizationHints and _RadiometryData objects, and provides access to these objects as well as user convenience methods.
Construct from a set of OPF objects.
- class pix4dvortex.calib.Settings.CalibrationType(self: pix4dvortex.calib.Settings.CalibrationType, value: int)¶
Type of calibration pipeline. See user guide for more information.
Members:
- Standard :
(default) Standard calibration pipeline.
- Scalable :
Calibration pipeline intended for large scale and corridor captures.
- LowTexturePlanar :
Calibration pipeline designed for aerial nadir images with accurate geolocations and homogeneous or repetitive content of flat-like scenes and rig cameras.
- TrustedLocationOrientation :
Calibration pipeline designed for projects with accurate relative locations and inertial measurement unit (IMU) data. All images must include position and orientation information.
- class pix4dvortex.calib.Settings.ImagePairSettings(self: pix4dvortex.calib.Settings.ImagePairSettings, *, match_all: bool = False, match_use_triangulation: bool = True, _match_use_orientation: bool = False, _cnv_to_meter_factor: float = 1.0, match_mtp_max_image_pair: int = 50, match_inter_sensor_images: int = 0, match_similarity_images: int = 2, match_time_images: int = 2, min_rematch_overlap: float = 0.30000001192092896, match_distance_images: Union[pix4dvortex.calib.Settings.ImageDistance, pix4dvortex.calib.Settings.ImageDistanceToMedian] = <pix4dvortex.calib.Settings.ImageDistanceToMedian object at 0x70ce7ace35b0>, _match_loop: pix4dvortex.calib.Settings._MatchLoopSettings = <pix4dvortex.calib.Settings._MatchLoopSettings object at 0x70ce7ace3570>)¶
- property match_all¶
If True, the algorithm tries to find matches in every image pair combination, effectively ignoring the other ImagePairSettings parameters. This is not recommended due to a severe impact on processing time. The recommended value is False.
- property match_distance_images¶
Match images whose distance is smaller than this value. The distance must be expressed using either the ImageDistance type or the ImageDistanceToMedian type. The distance is 3D if all components are available and 2D (XY plane) otherwise.
Setting a value greater than zero is useful for oblique or terrestrial projects.
- property match_inter_sensor_images¶
This setting was introduced for matching images from multiple camera sensors flown at the same time, when the sensors are not part of a rig or are not sufficiently synchronized. Its purpose is similar to that of match_time_images, but for cameras with multiple sensors. Zero disables this strategy.
- property match_mtp_max_image_pair¶
Strategy generating image pair matches using images connected by control point marks. The strategy limits the number of generated image pairs to at most match_mtp_max_image_pair per control point. Zero disables this strategy.
- property match_similarity_images¶
Matching image pairs based on an internal content similarity algorithm. The number defines the maximum number of image pairs that can be matched based on similarity. Zero value disables this matching image pair strategy.
- property match_time_images¶
Matching of consecutive images based on their capture timestamps. The number defines how many consecutive images (in timestamp order) are considered for pair matching. A value of zero disables this strategy. Typical values: 2-4.
- property match_use_triangulation¶
Strategy using the geolocation of the images to estimate how likely image pairs are to match. The recommended value is True.
- property min_rematch_overlap¶
Minimum relative overlap required for an image pair to be considered in the rematch process. This setting is only relevant when rematch is True. Typical value: 0.3.
- to_dict(self: pix4dvortex.calib.Settings.ImagePairSettings) dict ¶
- class pix4dvortex.calib.Settings.MatchingAlgorithm(self: pix4dvortex.calib.Settings.MatchingAlgorithm, value: int)¶
Members:
- Standard :
Default and adequate for most capture cases.
- GeometricallyVerified :
A slower, but more robust matching strategy. If selected, geometrically inconsistent matches are discarded. The option is useful when many similar features are present throughout the project: rows of plants in a farming field, window corners on a building’s facade, etc.
- class pix4dvortex.calib.Settings.OptimExtType(self: pix4dvortex.calib.Settings.OptimExtType, value: int)¶
Type of optimization for external camera parameters. External camera parameters are position and orientation, and the linear rolling shutter in case the camera model follows the linear rolling shutter model.
Members:
- Motion :
Optimizes rotation and position, but not the rolling shutter.
- All :
(default) Optimizes the rotation and position, as well as the linear rolling shutter in case the camera model follows the linear rolling shutter model.
- class pix4dvortex.calib.Settings.OptimIntType(self: pix4dvortex.calib.Settings.OptimIntType, value: int)¶
Type of optimization for internal camera parameters. Internal camera parameters are camera sensor parameters (e.g. focal length, distortion, etc.).
Members:
- NoOptim :
Does not optimize any of the internal camera parameters. It may be beneficial for large cameras, if already calibrated, and if these calibration parameters are used for processing.
- Leading :
Optimizes the most important internal camera parameters only. This option is recommended to process cameras with a slow rolling shutter speed.
- All :
(default) Optimizes all the internal camera parameters (including the rolling shutter if applicable). It is recommended to use this method when processing images taken with small UAVs, whose cameras are more sensitive to temperature variations and vibrations.
- AllPrior :
Optimizes all the internal camera parameters (including the rolling shutter if applicable), but forces the optimal internal parameters to be close to the initial values. This setting may be useful for difficult-to-calibrate projects where the initial camera parameters are known to be reliable.
Point Cloud Densification¶
Module for densification utilities.
- pix4dvortex.dense.densification(*, scene: pix4dvortex.calib._CalibratedScene, input_cameras: pix4dvortex.cameras.InputCameras, _metrics_handler: Optional[Callable[[str], None]] = None, mask_map: Optional[dict[int, os.PathLike]] = None, roi: Optional[pix4dvortex.geom.Roi2D] = None, settings: pix4dvortex.dense.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ace0c30>, logger: _pyvtx.logging.Logger = None, progress_callback: Callable[[str, int], None] = None, stop_function: Callable[[], bool] = None) tuple[pix4dvortex.pcl.PointCloud, Optional[dict[int, pix4dvortex.cameras.DepthInfo]]] ¶
Generate densified point cloud.
- Parameters
scene – Calibrated scene.
input_cameras – input cameras container. Needed to obtain image path information.
_metrics_handler –
An optional metrics callback. Its single argument provides a JSON string of computed densification metrics. The JSON contains the following fields:
- “pre_track_count_distrib”: a list of track counts indexed by camera count, computed before excluding images from densification
- “post_track_count_distrib”: a list of track counts indexed by camera count, computed after excluding images from densification
mask_map – Mapping of an image file hash to the path of the mask corresponding to this image. A mask is a single-channel black-and-white image. White areas are masked. Points that project onto masked areas will not be considered for densification.
roi – A polygon or multi-polygon defining a 2D region of interest (XY). Points within the (multi-)polygon are considered to belong to the ROI.
settings – Configuration parameters, see Settings.
resources – HW resource configuration parameters.
logger – Logging callback.
progress_callback – Progress callback.
stop_function – Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A tuple of a PointCloud and synthetic depth maps. The synthetic depth maps entry is None if settings.compute_depth_maps is False. The depth maps are a dictionary mapping each camera ID to the corresponding DepthInfo.
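A densification call following the signature above can be sketched like this. The helper name and settings values are illustrative assumptions; the argument names match the reference.

```python
# Hedged sketch of a densification() call; `vtx` is the imported pix4dvortex
# module, `scene` a _CalibratedScene, `input_cameras` an InputCameras.
def densify(vtx, scene, input_cameras):
    settings = vtx.dense.Settings(
        image_scale=1,             # half size images (documented recommendation)
        point_density=2,           # documented default
        compute_depth_maps=False,  # skip synthetic depth maps (~30% faster)
    )
    point_cloud, depth_maps = vtx.dense.densification(
        scene=scene,
        input_cameras=input_cameras,
        settings=settings,
    )
    # depth_maps is None here because compute_depth_maps is False.
    return point_cloud
```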
- class pix4dvortex.dense.Settings(self: pix4dvortex.dense.Settings, *, image_scale: int = 1, point_density: int = 2, min_no_match: int = 3, window_size: int = 7, multi_scale: bool = True, limit_depth: bool = False, regularized_multiscale: bool = False, min_image_size: int = 512, depth_limit_percentile: float = 0.949999988079071, uniformity_threshold: float = 0.03125, compute_depth_maps: bool = False, camera_filter: pix4dvortex.dense.Settings.CameraFilter = <CameraFilter.RigReferenceCameras: 1>, _partition_input_scene: bool = True, _patch_filtering: bool = True)¶
- class CameraFilter(self: pix4dvortex.dense.Settings.CameraFilter, value: int)¶
Members:
NoFilter
RigReferenceCameras : Only use rig reference cameras and no secondary rig cameras.
- NoFilter = <CameraFilter.NoFilter: 0>¶
- RigReferenceCameras = <CameraFilter.RigReferenceCameras: 1>¶
- property name¶
- property value¶
- property camera_filter¶
Filters the cameras passed to the densification.
- property compute_depth_maps¶
If True, synthetic depth maps (i.e. artificial depth information) are computed during densification. Setting this option increases the densification processing time by approximately 30%. Synthetic depth maps can later be used as a constraint in the mesh generation step. This option should be enabled only in specific cases where users want to generate 3D meshes of thin structures, for instance high-tension towers or power lines.
- property depth_limit_percentile¶
If limit_depth is True, this setting is used to compute the depth limit. Considering the distribution of track depths with respect to their visible cameras, the depth limit is set to the corresponding percentile of this distribution. The default value is recommended.
- property image_scale¶
The image scale defines the downsampling factor of the highest resolution image used for processing. The downsampling factor is defined as (1/2)**image_scale. Increasing the image_scale value uses less time and resources, and generates fewer 3D points. When processing vegetation, increasing image_scale can generate more 3D points. Possible values:
- 0: Use the original image size. Does not significantly improve results over half size images.
- 1: Use half size images. This is the recommended value.
- 2: Use quarter size images.
- 3: Use eighth size images.
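The relationship between image_scale and image resolution is plain arithmetic and can be checked directly (this is not an SDK call):

```python
# Each increment of image_scale halves the resolution used for processing,
# per the (1/2)**image_scale formula documented above.
def downsampling_factor(image_scale: int) -> float:
    return (1 / 2) ** image_scale

assert downsampling_factor(0) == 1.0    # original size
assert downsampling_factor(1) == 0.5    # half size (recommended)
assert downsampling_factor(2) == 0.25   # quarter size
assert downsampling_factor(3) == 0.125  # eighth size
```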
- property limit_depth¶
If set to True, limits the depth at which points are reconstructed, avoiding reconstruction of background objects; this is useful for 3D models of objects. The depth limit is estimated from the input cameras, the points, and the depth_limit_percentile parameter. This may reduce the reconstruction of outlying points. Enabling this option is recommended for oblique projects.
- property min_image_size¶
Minimal image size used when densifying with multi_scale enabled. This determines the largest scale at which images will be used. The default value is recommended.
- property min_no_match¶
Minimum number of valid re-projections that a 3D point must have on images to be kept in the point cloud (the minimum number of matches necessary to reconstruct a point). Higher values can reduce noise but also decrease the number of computed 3D points. Possible values:
- 2: Each 3D point has to be correctly re-projected in at least 2 images. Recommended for projects with small overlap, but usually produces a point cloud with more noise and artifacts.
- 3: Each 3D point has to be correctly re-projected in at least 3 images (default value).
- 4: Each 3D point has to be correctly re-projected in at least 4 images.
- 5: Each 3D point has to be correctly re-projected in at least 5 images. Reduces noise and improves the quality of the point cloud, but may compute fewer 3D points. Recommended for oblique imagery projects with high overlap.
- 6: Each 3D point has to be correctly re-projected in at least 6 images. Reduces noise and improves the quality of the point cloud, but may compute fewer 3D points. Recommended for oblique imagery projects with very high overlap.
- property multi_scale¶
When this option is set to True (default value), the algorithm uses lower resolutions of the same images in addition to the resolution chosen by the image_scale parameter. This improves completeness at the expense of increased noise in some cases; in particular, the reconstruction of uniform areas such as roads is improved. This option is generally useful for computing additional 3D points in vegetation areas while keeping details in areas without vegetation.
- property point_density¶
The point density affects the number of generated 3D points. It defines the minimal distance, in pixels, on the image plane between two points reconstructed from the same camera. The distance applies to the image resolution rescaled with image_scale. The number of generated points scales as the inverse power of two of this parameter. Possible values:
- High density: a 3D point is computed for every image_scale pixel. The result is an oversampled point cloud. Processing at high density typically requires more time and resources than processing at optimal density and usually does not significantly improve the results.
- Optimal density: a 3D point is computed for every 4/image_scale pixel. For example, if image_scale is set to half image size, one 3D point is computed every 4/(0.5) == 8 pixels of the original image. This is the recommended value.
- Low density: a 3D point is computed for every 16/image_scale pixel. For example, if image_scale is set to half image size, one 3D point is computed every 16/(0.5) == 32 pixels of the original image. The final point cloud is computed faster and uses fewer resources than at optimal density.
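The pixel-spacing arithmetic in the optimal and low density descriptions can be verified directly (plain arithmetic, not an SDK call; the function names are illustrative):

```python
# Point spacing on the original image, in pixels, as a function of the
# image scale expressed as a size ratio (e.g. 0.5 for half size images).
def spacing_optimal(scale_ratio: float) -> float:
    return 4 / scale_ratio

def spacing_low(scale_ratio: float) -> float:
    return 16 / scale_ratio

assert spacing_optimal(0.5) == 8.0   # one point every 8 px at half size
assert spacing_low(0.5) == 32.0      # one point every 32 px at half size
```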
- property regularized_multiscale¶
Use patches at lower resolution only in case of uniformity failures. This setting is recommended for oblique projects with multiscale enabled, to limit outliers due to sky and water surfaces or other uniform backgrounds (generally with poor or no depth perception). The option has no effect if multi_scale is disabled.
- to_dict(self: pix4dvortex.dense.Settings) dict ¶
- property uniformity_threshold¶
This sets a threshold on the minimum texture content necessary to generate points. The texture content is estimated by a single number in the range [0, 1]. Decreasing this setting may yield a more complete densification at the expense of increased noise, while increasing it may yield fewer points, especially in areas with uniform texture. The default value is recommended.
- property window_size¶
Size of the square grid used for matching the densified points in the original images, in pixels. Possible values:
- 7: Use a 7x7 pixel grid. Suggested for aerial nadir images. The NADIR template uses this value.
- 9: Use a 9x9 pixel grid. Suggested for oblique and terrestrial images. This value is useful for more accurate positioning of the densified points in the original images. The OBLIQUE template uses this value.
- class pix4dvortex.dense.Settings.TEMPLATES¶
Pre-defined settings for commonly used image capture types.
- NADIR¶
Optimized for aerial nadir image capture.
- OBLIQUE¶
Optimized for aerial oblique or terrestrial image capture.
- class pix4dvortex.pcl.PointCloud(*args, **kwargs)¶
Composite of the pix4dvortex.dmodel.GLTFPointCloud and pix4dvortex.dmodel.SceneRefFrame OPF types.
This class is a specialization of pix4dvortex.dmodel.GLTFPointCloud, with a pix4dvortex.dmodel.SceneRefFrame object and additional methods to increase usability. It is intended to model the output of a point cloud densification algorithm, with the pix4dvortex.dmodel.SceneRefFrame representing the bridge between the glTF positions (stored in the “canonical” SRS) and the “base” SRS, typically a real-world “projected” SRS.
Overloaded function.
__init__(self: pix4dvortex.pcl.PointCloud) -> None
__init__(self: pix4dvortex.pcl.PointCloud, *, model_path: os.PathLike, scene_ref_frame: Optional[pix4dvortex.dmodel.SceneRefFrame] = None) -> None
- class Attribute¶
- COLOR = <_pyvtx.pcl.PointCloud.Attribute.Color object>¶
- class Color¶
- NORMAL = <_pyvtx.pcl.PointCloud.Attribute.Normal object>¶
- class Normal¶
- POSITION = <_pyvtx.pcl.PointCloud.Attribute.Position object>¶
- class Position¶
- class Box¶
- max_corner(self: pix4dvortex.pcl.PointCloud.Box) list ¶
- min_corner(self: pix4dvortex.pcl.PointCloud.Box) list ¶
- bounding_box(self: pix4dvortex.pcl.PointCloud) pix4dvortex.pcl.PointCloud.Box ¶
- camera_ids(self: pix4dvortex.pcl.PointCloud) list[int] ¶
- copy(self: pix4dvortex.dmodel.GLTFPointCloud, *, out_dir: os.PathLike) pix4dvortex.dmodel.GLTFPointCloud ¶
Return a self-contained copy of this object.
Creates and returns a fully self-contained copy of this object. As a result, all managed GLTF files are copied.
- Parameters
out_dir – Directory to copy internal data files into.
- Raises
RuntimeError – if out_dir is the same as the parent of this object’s model_path().
- empty(self: pix4dvortex.dmodel.GLTFPointCloud) bool ¶
- hash(self: pix4dvortex.pcl.PointCloud) int ¶
- is_partitioned(self: pix4dvortex.pcl.PointCloud, i: Optional[int] = None) bool ¶
Check if the GLTF is partitioned.
- Parameters
i – the index of the point cloud to check. If None is passed, check whether all point clouds are partitioned.
- Returns
True if partitioned.
- Raises
RuntimeError – if the index is out of range
- model_path(self: pix4dvortex.dmodel.GLTFPointCloud) os.PathLike ¶
- opf_type = 'point_cloud'¶
- partition(self: pix4dvortex.pcl.PointCloud, i: Optional[int] = None, min_points_per_node: int = 64, lod_node_chunk_size: Optional[int] = 250000) None ¶
Partition the GLTF for octree-based fast spatial queries.
Partitioning will reorder the content of the binary buffers in-place and update the GLTF model accordingly. If the point cloud is already partitioned, do nothing.
- Parameters
i – the index of the point cloud to partition. If None is passed, partition all point clouds in the GLTF.
min_points_per_node – Parameter controlling the stopping criterion for spatial partitioning. During octree construction, a node stops subdividing when it contains fewer than min_points_per_node points.
lod_node_chunk_size – Parameter controlling the size of chunks in the level-of-detail (LoD) representation. Specifically, the LoD partitioning is designed so that nodes at the i-th depth level of the octree contain approximately lod_node_chunk_size points in the corresponding i-th LoD chunk.
- Raises
RuntimeError – if the index is out of range
Note
If another point cloud refers to the same buffers on disk, it will be left in an inconsistent state. Partitioning should be done before any other operation.
- scene_ref_frame(self: pix4dvortex.pcl.PointCloud) pix4dvortex.dmodel.SceneRefFrame ¶
- size(self: pix4dvortex.dmodel.GLTFPointCloud) int ¶
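The partitioning workflow above can be sketched as follows. This is a minimal illustration, not a canonical recipe: the model path is a placeholder, the parameter values are the documented defaults, and the SDK import is deferred inside the function so the sketch can be loaded without a pix4dvortex installation.

```python
def partition_for_queries(model_path):
    """Load a GLTF point cloud and partition it for octree-based queries."""
    import pix4dvortex as vtx  # assumes a licensed pix4dvortex installation

    pcl = vtx.pcl.PointCloud(model_path=model_path)
    # Partition before any other operation (see the note above); this is a
    # no-op if the point cloud is already partitioned.
    if not pcl.is_partitioned():
        pcl.partition(min_points_per_node=64, lod_node_chunk_size=250000)
    box = pcl.bounding_box()
    return box.min_corner(), box.max_corner()
```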
DSM/DTM¶
Module for DSM utilities.
- class pix4dvortex.dsm.DTMSettings(self: pix4dvortex.dsm.DTMSettings, *, dsm_settings: pix4dvortex.dsm.Settings, rigidity: pix4dvortex.dsm.Rigidity = <Rigidity.Medium: 1>, filter_threshold: float = 0.5, cloth_sampling_distance: float = 1.0)¶
- property cloth_sampling_distance¶
Inter-point sampling distance used to derive the simulated cloth overlying the base of the point cloud. Values in the range 1.0 to 1.5 are recommended for the majority of use cases.
- property dsm_settings¶
DSM generation parameters.
- property filter_threshold¶
Cut-off threshold for terrain classification. Points whose height above the simulated cloth surface exceeds the threshold are rejected. The remaining points are considered to belong to the terrain. The units of measurement are the same as those of the point cloud.
- property rigidity¶
Tension of simulated cloth overlying the base of the point cloud.
- to_dict(self: pix4dvortex.dsm.DTMSettings) dict ¶
- class pix4dvortex.dsm.Settings(self: pix4dvortex.dsm.Settings, *, method: Union[pix4dvortex.dsm.Settings.Triangulation, pix4dvortex.dsm.Settings.IDW], resolution: float, max_tile_size: int = 4096)¶
- class IDW(*args, **kwargs)¶
Recommended for urban areas, construction sites and buildings. Provides good accuracy, but can generate empty cells and outliers.
- Parameters
gsd – The GSD value as generated by calibration.
dense_settings – The Settings used for densification.
interpolation_nn_count – [Optional] Number of nearest neighbors to use for interpolation.
smoothing_median_radius – [Optional] Median pixel radius distance to use to smooth the output.
Overloaded function.
__init__(self: pix4dvortex.dsm.Settings.IDW, *, gsd: float, dense_settings: pix4dvortex.dense.Settings, smoothing_median_radius: Optional[int] = 12, interpolation_nn_count: Optional[int] = 10) -> None
__init__(self: pix4dvortex.dsm.Settings.IDW, *, pcl_scale: float, smoothing_median_radius: Optional[int] = 12, interpolation_nn_count: Optional[int] = 10) -> None
- property interpolation_nn_count¶
- property smoothing_median_radius¶
- class Triangulation(self: pix4dvortex.dsm.Settings.Triangulation)¶
Recommended for rural areas, agriculture or low texture captures. Provides less accurate DSM, but generates no empty cells.
- property max_tile_size¶
Desired size of the DSM tiles in pixels. Recommended range is 500-8000.
- property method¶
- property resolution¶
The GSD value as generated by calibration. The same value must be used for the DSM, DTM, orthomosaic and GeoTIFF writer.
- to_dict(self: pix4dvortex.dsm.Settings) dict ¶
- pix4dvortex.dsm.gen_tiled_dsm(*, point_cloud: pix4dvortex.pcl.PointCloud, roi: Optional[pix4dvortex.geom.Roi2D] = None, settings: pix4dvortex.dsm.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7acbad30>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.dsm._Tiles ¶
Generate a tiled digital surface model (DSM).
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- pix4dvortex.dsm.gen_tiled_dtm(*, point_cloud: pix4dvortex.pcl.PointCloud, roi: Optional[pix4dvortex.geom.Roi2D] = None, settings: pix4dvortex.dsm.DTMSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7acea330>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.dsm._Tiles ¶
Generate a tiled digital terrain model (DTM).
Generate a DTM by applying a cloth simulation filter (CSF) to a digital surface model (DSM). The CSF is applied during DSM generation on the fly, producing a DTM without the need for intermediate DSM artifacts.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- class pix4dvortex.dsm.Rigidity(self: pix4dvortex.dsm.Rigidity, value: int)¶
Members:
Low
Medium
High
- class pix4dvortex.dsm._Tile¶
- class pix4dvortex.dsm._Tiles¶
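As a hedged sketch of how the DSM and DTM generation above might be combined: the enum access style and the choice of Triangulation are illustrative assumptions, and the numeric values are the documented defaults. The SDK import is deferred so the sketch loads without the SDK installed.

```python
def generate_dsm_and_dtm(point_cloud, gsd):
    """Generate tiled DSM and DTM from a densified point cloud."""
    import pix4dvortex as vtx  # deferred; requires a licensed installation

    # The resolution must stay the same for DSM, DTM, ortho and GeoTIFF writer.
    dsm_settings = vtx.dsm.Settings(
        method=vtx.dsm.Settings.Triangulation(),  # rural / low-texture captures
        resolution=gsd,
        max_tile_size=4096,
    )
    dsm_tiles = vtx.dsm.gen_tiled_dsm(point_cloud=point_cloud, settings=dsm_settings)

    dtm_settings = vtx.dsm.DTMSettings(
        dsm_settings=dsm_settings,
        rigidity=vtx.dsm.Rigidity.Medium,  # enum access style is an assumption
        filter_threshold=0.5,              # same units as the point cloud
        cloth_sampling_distance=1.0,       # 1.0-1.5 recommended for most cases
    )
    dtm_tiles = vtx.dsm.gen_tiled_dtm(point_cloud=point_cloud, settings=dtm_settings)
    return dsm_tiles, dtm_tiles
```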
Orthomosaic¶
Module for orthomosaic generation utilities.
- pix4dvortex.ortho.gen_tiled_orthomosaic(*, cameras: list[_pyvtx.cameras.CameraParameters], input_cameras: pix4dvortex.cameras.InputCameras, dsm_tiles: pix4dvortex.dsm._Tiles, settings: pix4dvortex.ortho.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad02630>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.ortho._Tiles ¶
Generate orthomosaic.
- Parameters
cameras – List of calibrated cameras.
input_cameras – Input cameras object, used to obtain image data.
dsm_tiles – List of DSM tiles.
settings – Configuration parameters
resources – HW resource configuration parameters. Note: use_gpu is experimental in this function and takes effect when using the fast blending algorithm (see settings). It requires Vulkan 1.2 and may have issues on NVIDIA GeForce [Ti] 1050 cards with 4GiB RAM on Windows.
logger – Logging callback.
stop_function – Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- class pix4dvortex.ortho.Settings(self: pix4dvortex.ortho.Settings, *, fill_occlusion_holes: bool = True, blend_ratio: float = 0.10000000149011612, pipeline: pix4dvortex.ortho.Settings.Pipeline = <Pipeline.FAST: 0>, capture_pattern: pix4dvortex.ortho.Settings.CapturePattern = <CapturePattern.NADIR: 0>, pan_sharpening: bool = False)¶
Configuration of the orthomosaic generation algorithm.
- Parameters
fill_occlusion_holes – If True, fill occlusion holes (areas not captured on camera) with the pixels of the nearest image.
blend_ratio – Coefficient determining the size of the area to be blended at the borders of image patches. More details at blend_ratio.
pipeline – Type of the algorithmic pipeline (Pipeline).
capture_pattern – Type of the capture pattern (CapturePattern).
pan_sharpening – If True, enable pan sharpening (requires a panchromatic band).
- property blend_ratio¶
Coefficient determining the size of the area to be blended at the borders of image patches. The value should be in the range 0.0 to 1.0. Value 0.0 means no blending, leading to hard borders. Value 1.0 means full blending of the two nearest images. Values in the range 0.1 to 0.2 are recommended for the majority of use cases.
- property capture_pattern¶
Type of the capture pattern (CapturePattern).
- property fill_occlusion_holes¶
If True, fill occlusion holes (areas not captured on camera) with the pixels of the nearest image.
- property pan_sharpening¶
If True, enable pan sharpening (requires a panchromatic band).
- to_dict(self: pix4dvortex.ortho.Settings) dict ¶
- class pix4dvortex.ortho.Settings.Pipeline(self: pix4dvortex.ortho.Settings.Pipeline, value: int)¶
Type of the algorithmic pipeline to use for the orthomosaic generation.
Members:
FAST : Speed-oriented algorithmic pipeline.
FULL : Quality-oriented algorithmic pipeline.
DEGHOST : Algorithmic pipeline targeted at removal of moving objects.
- class pix4dvortex.ortho.Settings.CapturePattern(self: pix4dvortex.ortho.Settings.CapturePattern, value: int)¶
Type of photography used for image capture.
Members:
NADIR : Nadir photography.
OBLIQUE : Oblique photography.
- class pix4dvortex.ortho._Tile¶
- class pix4dvortex.ortho._Tiles¶
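A minimal sketch combining the settings class and generation call above. The pipeline and capture-pattern choices are illustrative assumptions; cameras, input_cameras and dsm_tiles are expected to come from earlier calibration and DSM steps.

```python
def generate_orthomosaic(cameras, input_cameras, dsm_tiles):
    """Generate a tiled orthomosaic from calibrated cameras and DSM tiles."""
    import pix4dvortex as vtx  # deferred so the sketch loads without the SDK

    settings = vtx.ortho.Settings(
        fill_occlusion_holes=True,
        blend_ratio=0.1,  # 0.1-0.2 recommended for most use cases
        pipeline=vtx.ortho.Settings.Pipeline.FAST,
        capture_pattern=vtx.ortho.Settings.CapturePattern.NADIR,
    )
    return vtx.ortho.gen_tiled_orthomosaic(
        cameras=cameras,
        input_cameras=input_cameras,
        dsm_tiles=dsm_tiles,
        settings=settings,
    )
```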
3D Mesh¶
Module for mesh utilities.
- pix4dvortex.mesh.gen_mesh(*, settings: pix4dvortex.mesh.ScalableSettings, point_cloud: pix4dvortex.pcl.PointCloud, calibrated_cameras: pix4dvortex.dmodel.CalibratedCameras, input_cameras: pix4dvortex.cameras.InputCameras, silhouette_mask_map: dict[int, os.PathLike] = {}, texture_mask_map: dict[int, os.PathLike] = {}, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad0df70>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.mesh._MeshLOD ¶
Generate a high quality level of detail (LOD) textured mesh using a scalable, memory efficient algorithm.
Generate an LOD textured mesh using an out-of-core algorithm that provides higher quality outputs than the combination of gen_mesh_geometry, gen_mesh_texture and gen_textured_mesh_lod at the expense of increased computational time.
The algorithm produces independent high resolution tiles that are post-processed to produce an LOD representation. The out-of-core representation will be stored in resources.work_dir and must be retained to keep the returned object valid.
Geometrical constraints can be optionally defined either by depth and depth confidence information contained in input cameras or by silhouette masks.
The texture settings are applied to the generation of the mesh tile texture. Texture masks may be provided to discard image pixels when generating the texture.
Both silhouette and texture masks must have the same size as the camera image to which they map. The masks will be read as single channel 8-bit images. Pixels with a value greater than 0 are masked.
- Parameters
settings – Configuration parameters of type ScalableSettings. See also SCALABLE_MESH_TEMPLATES.
point_cloud – Densified point cloud.
calibrated_cameras – Calibrated cameras object.
input_cameras – Input cameras object, for accessing RGB and, optionally, depth information associated with images.
silhouette_mask_map – (optional) Mapping of a camera ID to the path of an associated silhouette mask file. The camera ID corresponds to camera.id, where camera is an element of the cameras parameter.
texture_mask_map – (optional) Mapping of a camera ID to the path of an associated texture mask file. The camera ID corresponds to camera.id, where camera is an element of the cameras parameter.
resources – (optional) HW resource configuration parameters of type Resources.
logger – (optional) Logging callback.
stop_function – (optional) Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
An LOD mesh object.
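A hedged sketch of a gen_mesh call starting from one of the pre-defined templates. Using SCALABLE_MESH_TEMPLATES.NADIR directly as the settings object is an assumption made for brevity; inputs come from earlier calibration and densification steps.

```python
def generate_scalable_mesh(point_cloud, calibrated_cameras, input_cameras):
    """Generate an LOD textured mesh with the scalable algorithm."""
    import pix4dvortex as vtx  # deferred so the sketch loads without the SDK

    return vtx.mesh.gen_mesh(
        settings=vtx.mesh.SCALABLE_MESH_TEMPLATES.NADIR,  # template choice illustrative
        point_cloud=point_cloud,
        calibrated_cameras=calibrated_cameras,
        input_cameras=input_cameras,
    )
```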
- pix4dvortex.mesh.gen_mesh_geometry(*, point_cloud: pix4dvortex.pcl.PointCloud, settings: pix4dvortex.mesh.Settings, cameras: list[_pyvtx.cameras.CameraParameters], input_cameras: Optional[pix4dvortex.cameras.InputCameras] = None, mask_map: dict[int, os.PathLike] = {}, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7acf0570>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.mesh._MeshGeom ¶
Generate mesh geometry defining vertices (as x,y,z coordinates) and faces of a mesh.
Constraints can be optionally defined either by depth and depth confidence information contained in input cameras or by masks.
- Parameters
point_cloud – Densified point cloud.
settings – Configuration parameters of type Settings.
cameras – List of calibrated cameras.
input_cameras – (optional) Input cameras object, for accessing depth information associated with images.
mask_map – (optional) Mapping of a camera ID to the path of an associated mask file. The camera ID corresponds to camera.id, where camera is an element of the cameras parameter.
resources – (optional) HW resource configuration parameters of type Resources.
logger – (optional) Logging callback.
stop_function – (optional) Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A mesh geometry object.
- pix4dvortex.mesh.gen_mesh_texture(*, mesh_geom: pix4dvortex.mesh._MeshGeom, cameras: list[_pyvtx.cameras.CameraParameters], input_cameras: pix4dvortex.cameras.InputCameras, settings: pix4dvortex.mesh.TextureSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ac9aab0>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.mesh._Texture ¶
Generate mesh texture.
- Parameters
mesh_geom – Mesh geometry that defines vertices (as x,y,z coordinates) and faces of a mesh.
cameras – List of calibrated cameras.
input_cameras – Input cameras object.
settings – Configuration parameters of type TextureSettings.
resources – (optional) HW resource configuration parameters of type Resources.
logger – (optional) Logging callback.
stop_function – (optional) Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A mesh texture object.
- pix4dvortex.mesh.gen_textured_mesh_lod(*, mesh_geom: pix4dvortex.mesh._MeshGeom, texture: pix4dvortex.mesh._Texture, settings: pix4dvortex.mesh.LODSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7acd1430>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.mesh._MeshLOD ¶
Generate a level of detail (LOD) textured mesh.
- Parameters
mesh_geom – Mesh geometry that defines vertices (as x,y,z coordinates) and faces of a mesh.
texture – Mesh texture opaque data container.
settings – LOD configuration parameters of type LODSettings.
resources – (optional) HW resource configuration parameters of type Resources. Only max_threads is used.
logger – (optional) Logging callback.
stop_function – (optional) Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
An LOD mesh object.
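The three-step alternative to gen_mesh (geometry, then texture, then LOD) might be chained as below. Whether the TEMPLATES members can be passed directly as settings objects is an assumption made for brevity; the template choices themselves are illustrative.

```python
def generate_lod_mesh(point_cloud, cameras, input_cameras):
    """Chain gen_mesh_geometry, gen_mesh_texture and gen_textured_mesh_lod."""
    import pix4dvortex as vtx  # deferred so the sketch loads without the SDK

    geom = vtx.mesh.gen_mesh_geometry(
        point_cloud=point_cloud,
        settings=vtx.mesh.Settings.TEMPLATES.LARGE,  # aerial nadir template
        cameras=cameras,
    )
    texture = vtx.mesh.gen_mesh_texture(
        mesh_geom=geom,
        cameras=cameras,
        input_cameras=input_cameras,
        settings=vtx.mesh.TextureSettings.TEMPLATES.STANDARD,
    )
    return vtx.mesh.gen_textured_mesh_lod(
        mesh_geom=geom,
        texture=texture,
        settings=vtx.mesh.LODSettings(),  # defaults documented above
    )
```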
- class pix4dvortex.mesh.SCALABLE_MESH_TEMPLATES¶
Pre-defined settings for commonly used image capture types.
Contains definitions of objects of type ScalableSettings, tuned for different image capture types.
- CATCH¶
Defaults for Pix4DCatch projects.
- OBLIQUE¶
Defaults for Oblique projects.
- NADIR¶
Defaults for Nadir projects.
- class pix4dvortex.mesh.ScalableSettings¶
Configuration of the scalable mesh generation algorithm.
- class pix4dvortex.mesh.Settings(self: pix4dvortex.mesh.Settings, *, _geom_gen: pix4dvortex.mesh.Settings._GeomGen = <pix4dvortex.mesh.Settings._GeomGen object at 0x70ce7ad0cd30>, _constraints: pix4dvortex.mesh.Settings._Constraints = <pix4dvortex.mesh.Settings._Constraints object at 0x70ce7ad0ccf0>, _small_comp_filter: pix4dvortex.mesh.Settings._SmallCompFilter = <pix4dvortex.mesh.Settings._SmallCompFilter object at 0x70ce7ad0ccb0>, _decimation: pix4dvortex.mesh.Settings._Decimation = <pix4dvortex.mesh.Settings._Decimation object at 0x70ce7acf0030>, _smoothing: pix4dvortex.mesh.Settings._Smoothing = <pix4dvortex.mesh.Settings._Smoothing object at 0x70ce7aca1a30>)¶
Configuration of the mesh geometry generation algorithm.
- to_dict(self: pix4dvortex.mesh.Settings) dict ¶
- class pix4dvortex.mesh.Settings.TEMPLATES¶
Pre-defined settings for commonly used image capture types.
- LARGE¶
Optimized for large scenes and aerial nadir image capture.
- SMALL¶
Optimized for small scenes and aerial oblique or terrestrial image capture.
- TOWER¶
Optimized for tower-like structures.
- class pix4dvortex.mesh.TextureSettings(self: pix4dvortex.mesh.TextureSettings, *, _outlier_threshold: float = 0.009999999776482582, _texture_size: Optional[int] = None)¶
Configuration of the texture generation algorithm.
- to_dict(self: pix4dvortex.mesh.TextureSettings) dict ¶
- class pix4dvortex.mesh.TextureSettings.TEMPLATES¶
Pre-defined settings for different quality of geometry reconstruction and image capture.
- STANDARD¶
Optimized for well-reconstructed geometries and image captures with no or few occluding or moving features.
- DEGHOST¶
Optimized for imperfect geometries and image captures with many occluding or moving features.
- class pix4dvortex.mesh.LODSettings(self: pix4dvortex.mesh.LODSettings, *, _max_n_faces_per_node: int = 100000, _texture_size: int = 1024, _jpeg_quality: int = 90)¶
Configuration of the LOD mesh generation algorithm.
- to_dict(self: pix4dvortex.mesh.LODSettings) dict ¶
- class pix4dvortex.mesh._Texture¶
- class pix4dvortex.mesh._MeshGeom¶
- class pix4dvortex.mesh._MeshLOD¶
Level of detail representation of a textured mesh.
Fast calibration and Orthomosaic generation¶
Fast calibration and orthomosaic generation utilities.
- pix4dvortex.fastalgo.calibrate(*, cameras: pix4dvortex.cameras.ProjectedCameras, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ac9b1b0>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.calib._CalibratedScene ¶
Generate calibrated scene with fast calibration algorithm.
Generate a calibrated scene with a fast algorithm, suited to nadir data captures on flat terrain. The algorithm produces a calibrated scene with a surface model mesh in lieu of a densified point cloud, and can be used directly to generate an orthomosaic with gen_tiled_orthomosaic().
- Parameters
cameras – Projected cameras container.
resources – HW resource configuration parameters.
logger – Logging callback.
stop_function – Cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A calibrated scene with ground mesh and no sparse point cloud. See _CalibratedScene.
- pix4dvortex.fastalgo.gen_tiled_orthomosaic(*, scene: pix4dvortex.calib._CalibratedScene, input_cameras: pix4dvortex.cameras.InputCameras, settings: pix4dvortex.ortho.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7accb9f0>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.ortho._Tiles ¶
Generate orthomosaic from fast calibration output.
Generate orthomosaic from a calibrated scene containing a ground mesh, as generated with calibrate().
- Parameters
scene – A calibrated scene containing a ground mesh.
input_cameras – Input cameras object, used to obtain image data.
settings – Configuration parameters
resources – HW resource configuration parameters. Note: use_gpu is experimental in this function and takes effect when using the fast blending algorithm (see settings). It requires Vulkan 1.2 and may have issues on NVIDIA GeForce [Ti] 1050 cards with 4GiB RAM on Windows.
logger – Logging callback.
stop_function – Cancellation callback.
- Raises
ValueError – if scene does not contain a ground mesh.
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
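The two fastalgo calls above are designed to be chained; a minimal sketch, assuming nadir captures on flat terrain and default orthomosaic settings:

```python
def fast_orthomosaic(projected_cameras, input_cameras):
    """Fast calibration followed by orthomosaic generation."""
    import pix4dvortex as vtx  # deferred so the sketch loads without the SDK

    # The calibrated scene carries a ground mesh instead of a point cloud.
    scene = vtx.fastalgo.calibrate(cameras=projected_cameras)
    return vtx.fastalgo.gen_tiled_orthomosaic(
        scene=scene,
        input_cameras=input_cameras,
        settings=vtx.ortho.Settings(),  # defaults; see the Orthomosaic section
    )
```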
More Processing¶
AutoGCP¶
Automatic GCP detection tools.
AutoGCPs is a set of tools for the automatic detection of control point targets in images with pixel-level accuracy.
It supports three types of targets with black and white Haar-like features: square, diagonal and Aeropoint.
Despite its name, AutoGCPs imposes no restrictions on the use of targets. The functionality is concerned only with detecting targets in images and obtaining an accurate estimate of the position of their marker. The user can then use the information as ground control points (GCPs), check points (CPs) or anything else.
- exception pix4dvortex.autogcp.AutogcpError¶
- class pix4dvortex.autogcp.Settings(self: pix4dvortex.autogcp.Settings, *, xy_uncertainty: float = 5.0, z_uncertainty: float = 10.0)¶
GCP detection algorithm settings.
- Parameters
xy_uncertainty – Absolute horizontal image georeferencing uncertainty.
z_uncertainty – Absolute vertical image georeferencing uncertainty.
Note
The units of the uncertainties are the same as those of the input GCP geolocation.
The default values are optimized in meters and should be scaled accordingly if other units are used.
- to_dict(self: pix4dvortex.autogcp.Settings) dict ¶
- property xy_uncertainty¶
Absolute horizontal image georeferencing uncertainty.
- property z_uncertainty¶
Absolute vertical image georeferencing uncertainty.
- pix4dvortex.autogcp.detect_gcp_marks(*, scene: pix4dvortex.calib._CalibratedScene, input_cameras: pix4dvortex.cameras.InputCameras, input_gcps: list[pix4dvortex.dmodel.GCP], settings: pix4dvortex.autogcp.Settings, logger: _pyvtx.logging.Logger = None) list[pix4dvortex.dmodel.GCP] ¶
Detect GCP marks in images.
- Parameters
scene – Calibrated scene.
input_cameras – Input cameras object, used to obtain image data.
input_gcps – 3D GCP without marks. The GCP coordinates must be in the same coordinate system as the one used to create the projected cameras used as input to camera calibration.
settings – Configuration parameters.
logger – Logging callback.
- Returns
A list of GCP objects with detected marks.
- Raises
ValueError – if input_gcps contain marks.
ValueError – if the CRS of input_gcps and scene is not the same.
AutogcpError – if configuration parameters are invalid or the detection algorithm cannot complete.
Depth Processing¶
Utilities to process LiDAR and synthetic depth maps and depth point clouds.
- class pix4dvortex.depth.DepthCompletionSettings(self: pix4dvortex.depth.DepthCompletionSettings, _initial_dilation_kernel_size: int = 5, _closing_kernel_size: int = 5, _hole_filling_kernel_size: int = 7, _smoothing: bool = False)¶
Configuration parameters for depth map densification algorithm.
- to_dict(self: pix4dvortex.depth.DepthCompletionSettings) dict ¶
- class pix4dvortex.depth.MergeSettings(self: pix4dvortex.depth.MergeSettings, _distance: float = 0.025, _n_neighbors: int = 128, _sor: Optional[pix4dvortex.depth.MergeSettings._SOR] = <pix4dvortex.depth.MergeSettings._SOR object at 0x70ce7c1a1df0>)¶
- to_dict(self: pix4dvortex.depth.MergeSettings) dict ¶
- pix4dvortex.depth.densify(*, settings: pix4dvortex.depth.DepthCompletionSettings = <pix4dvortex.depth.DepthCompletionSettings object at 0x70ce7ad00a30>, sparse_depth_maps: dict[int, pix4dvortex.cameras.DepthInfo], resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad009f0>, logger: _pyvtx.logging.Logger = None) dict[int, pix4dvortex.cameras.DepthInfo] ¶
Generate densified depth maps from sparse depth maps.
- Parameters
settings – The algorithm settings, see DepthCompletionSettings.
sparse_depth_maps – A dictionary mapping each camera ID to its corresponding DepthInfo. The depth maps are assumed to use 0 to mark unknown depth and positive values for known depths. The confidence is ignored by this algorithm.
resources – HW resource configuration parameters.
logger – Logging callback.
- Returns
The densified depth maps, a dictionary mapping each camera ID to its corresponding DepthInfo.
- pix4dvortex.depth.gen_pcl(*, settings: pix4dvortex.depth.Settings = <pix4dvortex.depth.Settings object at 0x70ce7ad006b0>, scene: pix4dvortex.calib._CalibratedScene, input_cameras: pix4dvortex.cameras.InputCameras, roi: Optional[pix4dvortex.geom.Roi2D] = None, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7aca62f0>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.pcl.PointCloud ¶
Generate a depth point cloud from LiDAR depth maps.
- Parameters
settings – Configuration parameters, see Settings.
scene – Calibrated scene.
input_cameras – Input cameras object containing depth maps and, optionally, their confidences.
roi – (optional) 2D region of interest in the XY plane defined as a polygon or a multi-polygon.
resources – HW resource configuration parameters.
logger – (optional) Logging callback.
stop_function – (optional) cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A point cloud object.
- pix4dvortex.depth.pcl_merge(*, pcl: pix4dvortex.pcl.PointCloud, depth_pcl: pix4dvortex.pcl.PointCloud, settings: pix4dvortex.depth.MergeSettings = <pix4dvortex.depth.MergeSettings object at 0x70ce7ad00870>, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad00830>, logger: _pyvtx.logging.Logger = None, stop_function: Callable[[], bool] = None) pix4dvortex.pcl.PointCloud ¶
Merge a densified photogrammetry point cloud with a depth point cloud created from LiDAR depth maps.
- Parameters
pcl – Dense point cloud.
depth_pcl – Depth point cloud.
settings – Configuration parameters, see MergeSettings.
resources – HW resource configuration parameters.
logger – (optional) Logging callback.
stop_function – (optional) cancellation callback.
- Raises
RuntimeError – on failure to process.
pix4dvortex.proc.StopProcessing – on stop_function triggered cancellation.
- Returns
A point cloud object.
- class pix4dvortex.depth.Settings(self: pix4dvortex.depth.Settings, *, _pcl: pix4dvortex.depth.Settings._PCL = <pix4dvortex.depth.Settings._PCL object at 0x70ce7be86a30>, _depth_filter: Optional[pix4dvortex.depth.Settings._DepthFilter] = <pix4dvortex.depth.Settings._DepthFilter object at 0x70ce7acd2e70>)¶
- to_dict(self: pix4dvortex.depth.Settings) dict ¶
- class pix4dvortex.depth.Settings.ConfidenceLevel(self: pix4dvortex.depth.Settings.ConfidenceLevel, value: int)¶
Members:
Low
Medium
High
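A minimal sketch combining gen_pcl and pcl_merge to fuse a LiDAR depth point cloud with a photogrammetry point cloud; default settings are assumed throughout.

```python
def merge_lidar_with_dense(scene, input_cameras, dense_pcl):
    """Build a depth point cloud from LiDAR maps and merge with a dense cloud."""
    import pix4dvortex as vtx  # deferred so the sketch loads without the SDK

    # Generate the depth point cloud from the LiDAR depth maps in input_cameras.
    depth_pcl = vtx.depth.gen_pcl(scene=scene, input_cameras=input_cameras)
    # Merge it with the densified photogrammetry point cloud.
    return vtx.depth.pcl_merge(pcl=dense_pcl, depth_pcl=depth_pcl)
```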
Sky and Water Segmentation¶
Module for sky and water segmentation utilities.
- pix4dvortex.skyseg.gen_segment_masks(*, cameras: list[_pyvtx.cameras.CameraParameters], input_cameras: pix4dvortex.cameras.InputCameras, output_dir: os.PathLike, settings: pix4dvortex.skyseg.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ace9ef0>, logger: _pyvtx.logging.Logger = None) dict[int, os.PathLike] ¶
Mask sky, water or both in images.
- Parameters
cameras – List of calibrated cameras.
input_cameras – Input cameras object, used to obtain image data.
output_dir – Directory the mask image files will be written to.
settings – Configuration parameters, see Settings.
resources – HW resource configuration parameters.
logger – Logging callback.
- Returns
Mapping of a camera ID to the path of the mask corresponding to this image. A mask is a single-channel black-and-white image. White areas are masked.
- Raises
ValueError – if the input data is invalid.
RuntimeError – if the ML model could not be loaded.
RuntimeError – if the specified amount of memory is not enough.
- class pix4dvortex.skyseg.Settings(self: pix4dvortex.skyseg.Settings, *, masking_type: pix4dvortex.skyseg.Settings.MaskingType = <MaskingType.SKY: 0>, mode: pix4dvortex.skyseg.Settings.Mode = <Mode.FULL: 0>)¶
- property masking_type¶
Select which entities to mask in images, see MaskingType.
- to_dict(self: pix4dvortex.skyseg.Settings) dict ¶
- class pix4dvortex.skyseg.Settings.MaskingType(self: pix4dvortex.skyseg.Settings.MaskingType, value: int)¶
Members:
SKY : Identifies sky segments.
WATER : Identifies water segments.
SKY_WATER : Identifies both sky and water segments.
- class pix4dvortex.skyseg.Settings.Mode(self: pix4dvortex.skyseg.Settings.Mode, value: int)¶
Members:
FULL : Full segmentation mode.
FAST : Fast segmentation mode. This option is faster. It is not recommended for images with water segments.
Point Cloud Alignment¶
Utilities for obtaining a point cloud alignment.
- pix4dvortex.pcalign.alignment(*, point_cloud: pix4dvortex.pcl.PointCloud, ref_point_cloud: pix4dvortex.pcl.PointCloud, logger: _pyvtx.logging.Logger = None) _pyvtx.pcalign.Alignment ¶
Get an alignment of a misaligned point cloud to a reference point cloud.
- Parameters
point_cloud – Misaligned point cloud.
ref_point_cloud – Reference point cloud.
- Returns
Object containing a 4-by-4 transformation matrix, aligning the misaligned point cloud to the reference, and the quality of the obtained alignment.
- Raises
RuntimeError – if the spatial reference of the misaligned and the reference point clouds is not the same.
Point Cloud Transformation¶
Affine transformation tools
- pix4dvortex.transform.transform(*args, **kwargs)¶
Overloaded function.
transform(*, transformation: Buffer, point_cloud: pix4dvortex.pcl.PointCloud, work_dir: os.PathLike = PosixPath('/tmp')) -> pix4dvortex.pcl.PointCloud
Transform a point cloud.
- Parameters
transformation – 4-by-4 transformation matrix to apply.
point_cloud – Point cloud to transform.
work_dir – Temporary work space. It will be created if it doesn’t exist. Defaults to system temporary directory.
Note
The spatial reference of the transformation must match that of the point cloud.
- Returns
Transformed point cloud.
transform(*, transformation: Buffer, calib_scene: pix4dvortex.calib._CalibratedScene, work_dir: os.PathLike = PosixPath('/tmp')) -> pix4dvortex.calib._CalibratedScene
Transform a calibrated scene.
- Parameters
transformation – 4-by-4 transformation matrix to apply.
calib_scene – Calibrated scene to transform.
Note
The spatial reference of the transformation must match that of the calibrated scene.
- Returns
Transformed calibrated scene.
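Alignment and transformation combine naturally: compute the 4-by-4 matrix with pcalign.alignment, then apply it with transform.transform. The attribute name used to extract the matrix from the returned Alignment object is a hypothetical assumption.

```python
def align_point_cloud(point_cloud, ref_point_cloud):
    """Align a misaligned point cloud to a reference and apply the result."""
    import pix4dvortex as vtx  # deferred so the sketch loads without the SDK

    # Both point clouds must share the same spatial reference.
    result = vtx.pcalign.alignment(
        point_cloud=point_cloud,
        ref_point_cloud=ref_point_cloud,
    )
    return vtx.transform.transform(
        transformation=result.transformation,  # hypothetical attribute name
        point_cloud=point_cloud,
    )
```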
Index Calculation¶
Orthomosaic index map generation utilities.
- pix4dvortex.indexmap.compute_custom_index(*, input_path: os.PathLike, formula: str, output_path: os.PathLike) None ¶
Generate an index map by applying a formula to an orthomosaic.
Generate a GeoTIFF file containing an index map generated by applying a formula to an input orthomosaic. The input must be georeferenced with a CRS convertible to WGS84.
- Parameters
input_path – Path to a georeferenced GeoTIFF file containing an orthomosaic with bands to be used for index calculation.
formula – A formula to calculate index values from band values.
output_path – Path to a GeoTIFF file containing the index map.
- Raises
ValueError – if input_path does not point to a valid GeoTIFF file.
ValueError – if formula is not correct.
RuntimeError – if the input is not georeferenced and convertible to WGS84.
RuntimeError – for any other errors.
- pix4dvortex.indexmap.get_custom_index_band_names(*, path: os.PathLike) list[str] ¶
Get a list of index band names to use in index calculation formulas.
- Parameters
path – Path to a GeoTIFF file containing an orthomosaic with band information.
- Raises
ValueError – if path does not point to a valid GeoTIFF file.
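As an illustration, an NDVI map could be produced with the pair of functions above. The band names nir and red are assumptions; get_custom_index_band_names reports the names actually available in the input file. The small ndvi helper restates the formula arithmetic in plain Python.

```python
def ndvi(nir, red):
    """Reference arithmetic for the NDVI formula used in the call below."""
    return (nir - red) / (nir + red)

def compute_ndvi_map(ortho_path, output_path):
    """Write an NDVI index map for a georeferenced orthomosaic."""
    import pix4dvortex as vtx  # deferred so the sketch loads without the SDK

    bands = vtx.indexmap.get_custom_index_band_names(path=ortho_path)
    if not {"nir", "red"} <= set(bands):  # assumed band names
        raise ValueError(f"expected nir/red bands, found: {bands}")
    vtx.indexmap.compute_custom_index(
        input_path=ortho_path,
        formula="(nir - red) / (nir + red)",
        output_path=output_path,
    )
```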
Exports¶
Cesium¶
Utilities for exporting mesh data to Cesium format.
- pix4dvortex.io.cesium.write_mesh(*, output_path_prefix: os.PathLike, mesh_lod: pix4dvortex.mesh._MeshLOD, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad0e2b0>, logger: _pyvtx.logging.Logger = None) None ¶
Write an LOD mesh as Cesium 3D Tiles.
Serialize an LOD mesh to the Cesium 3D Tiles file format. The output’s transform property transforms from the tile’s local coordinate system to a target depending on the SRS of the input LOD mesh: the input SRS for an engineering SRS, or EPSG:4978 for a projected SRS.
- Parameters
output_path_prefix – Path to the output files ending with a file name prefix.
mesh_lod – LOD mesh.
resources – Hardware resources. Only work_dir is used.
logger – (optional) Logging callback.
- pix4dvortex.io.cesium.write_pcl(*, output_path_prefix: os.PathLike, point_cloud: pix4dvortex.pcl.PointCloud, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad0e430>, logger: _pyvtx.logging.Logger = None) None ¶
Write a point cloud as Cesium 3D Tiles.
Serialize a point cloud to the Cesium 3D Tiles file format. The output’s transform property transforms from the tile’s local coordinate system to a target depending on the SRS of the input point cloud: the input SRS for an engineering SRS, or EPSG:4978 for a projected SRS.
- Parameters
output_path_prefix – Path to the output files ending with a file name prefix.
point_cloud – PointCloud object to be serialized.
resources – Hardware resources. Only max_threads and work_dir are used.
GeoTIFF¶
GeoTIFF export tools
- pix4dvortex.io.geotiff.to_cog(*, input_path: os.PathLike, output_path: os.PathLike, settings: dict[str, str] = {}) None ¶
Convert a GeoTIFF file into a cloud optimized GeoTIFF (COG) file.
- Parameters
input_path – Path to the input GeoTIFF file.
output_path – Path to the output COG file.
settings – Configuration parameters for the conversion. See the creation options of GDAL’s COG driver for details.
- pix4dvortex.io.geotiff.write_geotiff(*, output_path: os.PathLike, tiles: Union[None, pix4dvortex.dsm._Tiles, pix4dvortex.ortho._Tiles], gsd: Optional[float] = None, settings: pix4dvortex.io.geotiff.Settings = <pix4dvortex.io.geotiff.Settings object at 0x70ce7ad032f0>) None ¶
Write raster tiles into a GeoTIFF file.
- Parameters
output_path – Path to the output file.
tiles – List of either DSM or orthomosaic raster tiles. The tiles must all have the same resolution.
gsd – [optional, deprecated] Ignored. Previously, a GSD given as a pixel size in units of the coordinate system, used to override the resolution of the tile set. Usage was likely to result in incorrectly scaled and placed tiles.
settings – Configuration parameters.
- Raises
ValueError – if tiles is empty.
ValueError – if the tiles do not all have the same resolution.
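Since write_geotiff raises when the tile list is empty or when tiles differ in resolution, a caller can validate up front. The sketch below is illustrative only: the real tile types (pix4dvortex.dsm._Tiles / pix4dvortex.ortho._Tiles) are opaque, and the resolution attribute name used here is an assumption.

```python
def check_tiles(tiles, attr="resolution"):
    """Pre-flight check mirroring the documented ValueError conditions
    of write_geotiff. ``attr`` is a hypothetical attribute name."""
    if not tiles:
        raise ValueError("tiles is empty")
    resolutions = {getattr(t, attr) for t in tiles}
    if len(resolutions) != 1:
        raise ValueError("all tiles must have the same resolution")
```

Running the check before calling write_geotiff turns a late serialization failure into an early, explicit error.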
- pix4dvortex.io.geotiff.write_geotiff_tile(*, output_path: os.PathLike, tile: Union[pix4dvortex.dsm._Tile, pix4dvortex.ortho._Tile], gsd: Optional[float] = None, settings: pix4dvortex.io.geotiff.Settings = <pix4dvortex.io.geotiff.Settings object at 0x70ce7acac030>) None ¶
Write a raster tile into a GeoTIFF file.
- Parameters
output_path – Path to the output file.
tile – Raster tile.
gsd – [optional, deprecated] Ignored. Previously, a GSD given as a pixel size in units of the coordinate system, used to override the resolution of the tile. Usage was likely to result in incorrectly scaled tiles.
settings – Configuration parameters.
- class pix4dvortex.io.geotiff.Settings(self: pix4dvortex.io.geotiff.Settings, *, compression: pix4dvortex.io.geotiff.Settings.Compression = <Compression.LZW: 1>, xml_xmp: str = '', sw: str = '', no_data_value: Optional[float] = -10000.0, extra: dict[str, str] = {})¶
- property compression¶
The compression algorithm.
- property extra¶
Additional configuration parameters for the serialization. See the creation options of GDAL’s GTIFF driver for details.
- property no_data_value¶
Value for empty pixels. The value must be valid for the pixel data type.
- property sw¶
Name of the software that wrote the GeoTIFF file, stored in the TIFF metadata.
- property xml_xmp¶
XML string with XMP data. It can be generated with ExifView.
- class pix4dvortex.io.geotiff.Settings.Compression(self: pix4dvortex.io.geotiff.Settings.Compression, value: int)¶
Members:
NoCompression
LZW
LAS¶
LAS export tools
- pix4dvortex.io.las.write_pcl(*, output_path: os.PathLike, point_cloud: pix4dvortex.pcl.PointCloud, compress: bool = False, las_version: pix4dvortex.io.las.LasVersion = <LasVersion.V1_2: 0>, _point_filter_factory: Callable[[pix4dvortex.pcl._View], Callable[[int], bool]] = None) None ¶
Serialize a point cloud object to a LAS/LAZ file.
The output format (LAS or LAZ) is determined from the file extension if it matches “.las” or “.laz” (case insensitive); otherwise it is determined by the compress parameter.
- Parameters
output_path – The output file path. The format is inferred from the extension if matching “.las” or “.laz” as described above.
point_cloud – The point cloud object to serialize.
compress – Set output format to LAZ/LAS if True/False respectively. Has no effect if format can be inferred from output_path extension.
las_version – LAS version to use in serialization.
_point_filter_factory – Experimental mechanism to inject user-defined point filtering.
- Raises
ValueError – if point_cloud is empty.
ValueError – if point_cloud colors and positions don’t match.
RuntimeError – on failure to write output file.
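The documented format-selection rule of write_pcl (the extension wins if it is “.las” or “.laz”, case insensitive; otherwise the compress flag decides) can be sketched in plain Python. This is a restatement of the rule, not the library’s actual implementation:

```python
from pathlib import Path

def resolve_las_format(output_path, compress=False):
    """Return 'las' or 'laz' following the documented inference rule."""
    ext = Path(output_path).suffix.lower()
    if ext in (".las", ".laz"):
        # extension matches: compress has no effect
        return ext.lstrip(".")
    # extension does not match: compress decides
    return "laz" if compress else "las"
```

For example, an output path of “cloud.LAZ” yields LAZ regardless of compress, while “cloud.bin” with compress=True yields LAZ.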
- class pix4dvortex.io.las.LasVersion(self: pix4dvortex.io.las.LasVersion, value: int)¶
Members:
V1_2
V1_4
OBJ¶
Utilities for exporting mesh data to OBJ format.
- pix4dvortex.io.obj.write_mesh(output_path: os.PathLike, mesh_geom: pix4dvortex.mesh._MeshGeom, texture: pix4dvortex.mesh._Texture, settings: pix4dvortex.io.obj.Settings = <pix4dvortex.io.obj.Settings object at 0x70ce7aca6070>) None ¶
Write a triangular mesh into an OBJ file.
- Parameters
output_path – Path to the output file.
mesh_geom – Mesh geometry defined by a list of vertex coordinates (x,y,z) and a list of mesh faces.
texture – Mesh texture opaque data container.
settings – Configuration settings.
- Raises
RuntimeError – if the mesh data is invalid or inconsistent.
ValueError – if the texture file is invalid.
- class pix4dvortex.io.obj.Settings(self: pix4dvortex.io.obj.Settings, *, texture_fmt: pix4dvortex.io.obj.Settings.TextureFmt = <TextureFmt.JPEG: 0>, jpeg_quality: int = 75)¶
- property jpeg_quality¶
JPEG quality parameter. Ignored if texture_fmt is not JPEG.
- property texture_fmt¶
Format of output texture file.
- class pix4dvortex.io.obj.Settings.TextureFmt(self: pix4dvortex.io.obj.Settings.TextureFmt, value: int)¶
Members:
JPEG
PNG
SLPK¶
Utilities for exporting mesh data to SLPK format.
- pix4dvortex.io.slpk.write_mesh(*, output_path: os.PathLike, mesh_lod: pix4dvortex.mesh._MeshLOD, _geog_cs: Optional[pix4dvortex.coordsys.SpatialReference] = None, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad02130>, logger: _pyvtx.logging.Logger = None) None ¶
Write a triangular LOD mesh into an SLPK file.
- Parameters
output_path – Path to the output file.
mesh_lod – LOD mesh.
_geog_cs – (optional) Geographical coordinate system suitable for geolocating the mesh. Currently, only WGS84 is supported. If omitted, the original input projected spatial reference system (see pix4dvortex.cameras.ProjectedCameras()) is used.
resources – HW resource configuration parameters.
logger – (optional) Logging callback.
- pix4dvortex.io.slpk.write_pcl(*, output_path: os.PathLike, point_cloud: pix4dvortex.pcl.PointCloud, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x70ce7ad00db0>) None ¶
Write a point cloud as an SLPK LOD file.
- Parameters
output_path – Path to the output file.
point_cloud – Point cloud.
resources – HW resource configuration parameters.
OPF (experimental)¶
Experimental project utilities.
Warning
This module is currently experimental. Although breaking changes are not foreseen, they cannot be ruled out. The main functionality is expected to remain unchanged, and any breaking changes should be minor and easy to adapt to.
This module contains experimental OPF project utilities to simplify common OPF project related operations such as:
Storing pix4dvortex data model objects in an OPF project and writing them to disk.
Reading pix4dvortex data model objects from an OPF project on disk.
Bundling an entire OPF project from one location to another.
- class pix4dvortex._project.ProjectWriter(*, id: str, name: str, description: str, root_dir: Optional[PathLike] = None, logger: Optional[Callable[[int, str], None]] = None)¶
Handle storing OPF compatible objects and saving them to files.
- add_item(*, obj: Union[CalibratedIntersectionTiePoints, Calibration, CameraList, CameraOptimizationHints, Features, GLTFPointCloud, InputCameras, InputControlPoints, IntersectionTiePoints, Matches, OriginalMatches, ProjectedCameras, ProjectedControlPoints, SceneRefFrame, SegmentGraphs2D, _RadiometryInputs, _RadiometrySettings, _SurfaceModelMesh], source_ids: Optional[list[str]] = None, labels: Optional[list[str]] = None) str ¶
Store an OPF compatible data object as an OPF project item.
Note
This method is suited for storing objects that have no connection to other OPF items, such as pix4dvortex.dmodel.SceneRefFrame or pix4dvortex.dmodel.CameraList. For more complex objects, prefer storing the corresponding OPF item composite with add_item_composite() instead of manually storing the connected objects and setting their sources.
Serializes an object compatible with an OPF item type into OPF format files and stores the item in the project. Ensures no duplicate objects are stored.
- Parameters
obj – a data object representing an OPF item.
source_ids – OPF items of sources. These must already be stored in the project.
labels – Optional labels to add to stored OPF project item.
- Returns
Unique id of stored item.
- add_item_composite(*, obj: Union[InputCameras, ProjectedCameras, _CalibratedScene, PointCloud], labels: Optional[list[str]] = None) str ¶
Store a composite of multiple OPF compatible objects as OPF project items.
Serializes an object containing multiple OPF compatible sub-objects into OPF format files and stores the items in the project. Sets required item sources. Internally calls add_item() to consistently store the individual OPF items without duplication.
- Parameters
obj – a data object representing a composite of OPF items.
labels – Optional labels to add to stored OPF top project item.
- Returns
Unique id of stored top OPF item.
- Raises
ValueError – if obj is not a supported OPF item composite.
- class pix4dvortex._project.ProjectReader(project_path: PathLike, logger: Optional[Callable[[int, str], None]] = None)¶
Handle loading OPF compatible objects from files.
- load_obj(opf_class, label_pred=None)¶
Load a data object of a given class.
- Parameters
opf_class – the class of the object to be loaded.
label_pred – A predicate taking a list of OPF item labels to narrow search of OPF item types.
If opf_class is an OPF item type, the first object with a positive predicate call will be selected. Ignored if opf_class is a pix4dvortex composite type.
- Returns
An object of type opf_class. If opf_class is an OPF item type and label_pred is set, one or more matching labels are required.
- Raises
ValueError – if opf_class is not an OPF item or composite item.
LookupError – if no items of the right class are found.
RuntimeError – if more than one matching object is found.
- pix4dvortex._project.package_project(*, project_path: PathLike, dst_dir: PathLike) Path ¶
Move all project files to a common root directory and fix image paths.
Moves all files defined by a project file, including images, into a single directory such that they can be packaged and transferred to an external host. Takes care of setting all internal URIs as URI references, with paths relative to the location of the top level project file.
- Parameters
project_path – path of the project file of the project to be packaged.
dst_dir – destination directory to which project files and images are written.
- Returns
The path of the new top level project file.
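The path rewriting that package_project performs can be illustrated with a small stand-alone helper: an absolute file reference becomes a URI reference relative to the directory of the top level project file. The helper name to_relative_uri is hypothetical, and the real function handles many more cases (moving files, images, nested items).

```python
import os
from pathlib import Path, PurePosixPath

def to_relative_uri(file_path, project_file):
    """Rewrite an absolute file path as a URI reference relative to the
    directory containing the project file (illustration only)."""
    rel = os.path.relpath(
        os.path.abspath(file_path),
        os.path.dirname(os.path.abspath(project_file)),
    )
    # URI references always use forward slashes
    return str(PurePosixPath(*Path(rel).parts))
```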
Utilities¶
General utilities¶
Generic utility collection.
- pix4dvortex.util.collect_hw_info()¶
Collect HW info.
- pix4dvortex.util.collect_stats(*, msg_handler=None, task_specific=None)¶
Collect usage stats and pass them to a message handler.
- pix4dvortex.util.hash_id(filename)¶
Calculate a hash-based identifier of an arbitrarily large file.
The identifier is a 64-bit integer built from a BLAKE2b hash of the file contents. The most significant byte corresponds to the beginning of the sequence of hash bytes.
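The described construction can be sketched with the standard library: hash the file with BLAKE2b and interpret the leading hash bytes as a big-endian 64-bit integer. The chunk size and the use of an 8-byte digest are assumptions of this sketch, not guaranteed details of hash_id.

```python
import hashlib

def hash_id_sketch(filename, chunk_size=1 << 20):
    """64-bit identifier from a BLAKE2b hash of the file contents,
    most significant byte first (illustrative sketch of hash_id)."""
    h = hashlib.blake2b(digest_size=8)
    with open(filename, "rb") as f:
        # stream the file so arbitrarily large inputs fit in memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return int.from_bytes(h.digest(), byteorder="big")
```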
- pix4dvortex.util.hw_info()¶
Return a dict with host hardware info.
- pix4dvortex.util.path_to_url(path)¶
Return a file URI from a file path.
- pix4dvortex.util.sha256(filename)¶
Calculate the SHA-256 of an arbitrarily large file.
- pix4dvortex.util.task_info(task)¶
Task function information.
- pix4dvortex.util.timestamp(filename)¶
File modification UTC date-time string in ISO 8601 format.
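A stand-alone equivalent of this behaviour reads the file’s modification time and formats it as an ISO 8601 UTC date-time string; the exact formatting of the real function is assumed.

```python
from datetime import datetime, timezone
from pathlib import Path

def timestamp_sketch(filename):
    """File modification time as an ISO 8601 UTC string (sketch)."""
    mtime = Path(filename).stat().st_mtime
    return datetime.fromtimestamp(mtime, tz=timezone.utc).isoformat()
```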
- pix4dvortex.util.url_to_path(url)¶
Extract the path from a URL.
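Standard-library equivalents of path_to_url and url_to_path look like the following; the real functions may differ in edge cases such as relative paths or non-file schemes.

```python
from pathlib import Path
from urllib.parse import urlparse
from urllib.request import url2pathname

def path_to_url_sketch(path):
    # file URI from a file system path (made absolute first)
    return Path(path).absolute().as_uri()

def url_to_path_sketch(url):
    # path component of a file URI, percent-decoding included
    return url2pathname(urlparse(url).path)
```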
Processing utilities¶
Processing utilities
- class pix4dvortex.proc.Resources(self: pix4dvortex.proc.Resources, *, max_ram: int = 8589934592, max_threads: int = 0, use_gpu: bool = False, max_gpu_mem: int = 4294967296, work_dir: os.PathLike = PosixPath('/tmp'))¶
- property max_gpu_mem¶
Maximum GPU memory in bytes.
- property max_ram¶
Maximum RAM available for processing in bytes. 0 is an invalid value.
- property max_threads¶
Maximum number of threads to use for processing. 0 means use the number of logical cores.
- property use_gpu¶
- property work_dir¶
Temporary work space. It will be created if it doesn’t exist. Defaults to system temporary directory.
- exception pix4dvortex.proc.StopProcessing¶
Geometry¶
Geometry utilities
- class pix4dvortex.geom.Polygon2D(self: pix4dvortex.geom.Polygon2D, *, outer_ring: list[Annotated[list[float], FixedSize(2)]], inner_rings: list[list[Annotated[list[float], FixedSize(2)]]] = [])¶
Two-dimensional polygon.
A two-dimensional polygon is defined by an outer ring, and an optional set of non-overlapping inner rings. A point is within the polygon if it is inside the outer ring and outside the inner rings.
Initialize a Polygon2D from an outer ring and an optional set of inner rings.
Rings have no self-intersections and are non-overlapping.
- Parameters
outer_ring – an iterable of points defining the outer ring.
inner_rings – (optional) an iterable of iterables of points defining the inner rings of the polygon.
- Raises
RuntimeError – if any of the rings is overlapping or self-intersecting.
- is_within(self: pix4dvortex.geom.Polygon2D, point: Annotated[list[float], FixedSize(2)]) bool ¶
- class pix4dvortex.geom.Roi2D(*args, **kwargs)¶
Two-dimensional region of interest (ROI).
A 2D ROI is defined as a set of Polygon2D objects. A point is within the ROI if it is within any of the polygons that define it.
Overloaded function.
__init__(self: pix4dvortex.geom.Roi2D, *, polygon: pix4dvortex.geom.Polygon2D) -> None
Initialize a Roi2D from a Polygon2D.
__init__(self: pix4dvortex.geom.Roi2D, *, polygons: list[pix4dvortex.geom.Polygon2D]) -> None
Initialize a Roi2D from a set of Polygon2Ds.
- Parameters
polygons – an iterable of non-overlapping polygons.
- Raises
RuntimeError – if any of the polygons overlap.
- is_within(self: pix4dvortex.geom.Roi2D, point: Annotated[list[float], FixedSize(2)]) bool ¶
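The membership rules documented above (a point is within a Polygon2D if it is inside the outer ring and outside the inner rings; a point is within a Roi2D if it is within any of its polygons) can be illustrated with a standard ray-casting test. This is not the library’s implementation, and the real is_within may treat boundary points differently.

```python
def point_in_ring(point, ring):
    """Classic even-odd ray-casting test for a point against one ring."""
    x, y = point
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_contains(point, outer_ring, inner_rings=()):
    # Polygon2D rule: inside the outer ring, outside all inner rings
    return point_in_ring(point, outer_ring) and not any(
        point_in_ring(point, ring) for ring in inner_rings)

def roi_contains(point, polygons):
    # Roi2D rule: within the ROI if within any member polygon
    return any(polygon_contains(point, outer, holes)
               for outer, holes in polygons)
```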
Spatial reference systems¶
Coordinate system tools
- class pix4dvortex.coordsys.BaseToCanonCnv(self: pix4dvortex.coordsys.BaseToCanonCnv, *, base_to_canonical: pix4dvortex.dmodel.BaseToCanonical)¶
A converter to transform from the base to the translated canonical SRS.
- convert(*args, **kwargs)¶
Overloaded function.
convert(self: pix4dvortex.coordsys.BaseToCanonCnv, point: Annotated[list[float], FixedSize(2)], *, apply_offset: bool = True) -> list
convert(self: pix4dvortex.coordsys.BaseToCanonCnv, point: Annotated[list[float], FixedSize(3)], *, apply_offset: bool = True) -> list
- convert_inplace(self: pix4dvortex.coordsys.BaseToCanonCnv, arg0: pix4dvortex.geom.Roi2D) None ¶
- class pix4dvortex.coordsys.CanonToBaseCnv(self: pix4dvortex.coordsys.CanonToBaseCnv, *, base_to_canonical: pix4dvortex.dmodel.BaseToCanonical)¶
A converter to transform from the translated canonical to the base SRS.
- convert(*args, **kwargs)¶
Overloaded function.
convert(self: pix4dvortex.coordsys.CanonToBaseCnv, point: Annotated[list[float], FixedSize(2)], *, apply_offset: bool = True) -> list
convert(self: pix4dvortex.coordsys.CanonToBaseCnv, point: Annotated[list[float], FixedSize(3)], *, apply_offset: bool = True) -> list
- convert_inplace(self: pix4dvortex.coordsys.CanonToBaseCnv, arg0: pix4dvortex.geom.Roi2D) None ¶
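The two converter classes above are inverses of each other. As a purely illustrative sketch built from the BaseToCanonical fields (shift, scale, swap_xy), the pair below demonstrates that round-tripping recovers the input; the order of operations chosen here (shift, then scale, then swap) is an assumption of this sketch, not the documented semantics.

```python
def base_to_canon(p, shift, scale, swap_xy=False):
    """Base -> canonical: shift, scale, optionally swap x and y (sketch)."""
    q = [(c + s) * k for c, s, k in zip(p, shift, scale)]
    if swap_xy:
        q[0], q[1] = q[1], q[0]
    return q

def canon_to_base(p, shift, scale, swap_xy=False):
    """Canonical -> base: exact inverse of base_to_canon (sketch)."""
    q = list(p)
    if swap_xy:
        q[0], q[1] = q[1], q[0]
    return [c / k - s for c, s, k in zip(q, shift, scale)]
```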
- class pix4dvortex.coordsys.CoordinateConverter(self: pix4dvortex.coordsys.CoordinateConverter, *, src: pix4dvortex.coordsys.SpatialReference, dst: pix4dvortex.coordsys.SpatialReference, src_geoid_height: Optional[float] = None, dst_geoid_height: Optional[float] = None)¶
Construct a coordinate converter based on two spatial reference systems with optional geoid heights. If a geoid height is provided for an SRS, it supersedes any vertical transformation specific to the vertical component of the SRS in question. Conversions between different ellipsoids are still applied on the vertical axes. It’s important to note that a 0 height is not the same thing as a null height.
- Parameters
src – Source SRS (SRS to start from).
dst – Destination SRS (SRS to go to).
src_geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when src (source SRS) has no geoid.
dst_geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when dst (destination SRS) has no geoid.
- Raises
RuntimeError – if this transformation is not supported.
RuntimeError – if a geoid height is given for a spatial reference system that is not compound or that has a defined geoid model name, or if a bad SRS is given.
RuntimeError – if either src (source SRS) or dst (destination SRS) contains or is an engineering spatial reference system.
- convert(*args, **kwargs)¶
Overloaded function.
convert(self: pix4dvortex.coordsys.CoordinateConverter, arg0: Annotated[list[float], FixedSize(3)]) -> list
convert(self: pix4dvortex.coordsys.CoordinateConverter, arg0: float, arg1: float, arg2: float) -> tuple[float, float, float]
- class pix4dvortex.coordsys.SpatialReference(*args, **kwargs)¶
Overloaded function.
__init__(self: pix4dvortex.coordsys.SpatialReference, wkt: str) -> None
Create a SpatialReference object from a WKT string.
__init__(self: pix4dvortex.coordsys.SpatialReference, *, horizontal_wkt: str, vertical_wkt: str, geoid: Optional[str] = None) -> None
Create a SpatialReference object from horizontal and vertical WKT strings.
- Parameters
horizontal_wkt – Horizontal WKT string.
vertical_wkt – Vertical WKT string.
geoid – (optional) A valid geoid model corresponding to the given vertical SRS. If omitted, an unspecified geoid (if more than one is available for the vertical SRS) is used as default.
- Raises
ValueError – if the geoid model is invalid.
- as_utm(self: pix4dvortex.coordsys.SpatialReference, *, lat: float, lon: float) pix4dvortex.coordsys.SpatialReference ¶
- as_wkt(self: pix4dvortex.coordsys.SpatialReference, *, wkt_convention: pix4dvortex.coordsys.WktConvention = <WktConvention.WKT2_2019: 1>) str ¶
- axes(self: pix4dvortex.coordsys.SpatialReference) list[pix4dvortex.coordsys.SpatialReference.Axis] ¶
Axes of this spatial reference.
- Returns
Array of Axis objects corresponding to the axes of this spatial reference system. The array may be of length 1, 2 or 3 depending on the dimensions of the SRS (vertical, horizontal or compound respectively).
- axes_3d(self: pix4dvortex.coordsys.SpatialReference) Annotated[list[pix4dvortex.coordsys.SpatialReference.Axis], FixedSize(3)] ¶
3D axes of this spatial reference.
- Returns
Length 3 array of Axis objects corresponding to the axes of this spatial reference system. If the SRS is 3-dimensional, returns the same as axes(). If the SRS is 2-dimensional, the 3rd component is assumed to be an ellipsoidal height.
- Raises
RuntimeError – if SRS is vertical and not compound.
RuntimeError – if SRS is 2-dimensional and projected, and non-isometric.
- geoid(self: pix4dvortex.coordsys.SpatialReference) str ¶
The geoid model used by this spatial reference, or an empty string in case of the default geoid model.
- identifier(self: pix4dvortex.coordsys.SpatialReference) str ¶
- is_compound(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_engineering(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_geographic(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_isometric(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_left_handed(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_projected(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_vertical(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- name(self: pix4dvortex.coordsys.SpatialReference) str ¶
- pix4dvortex.coordsys.get_scene_ref_frame(*args, **kwargs)¶
Overloaded function.
get_scene_ref_frame(*, definition: str, shift: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], geoid_height: Optional[float] = None) -> pix4dvortex.dmodel.SceneRefFrame
Creates a SceneRefFrame object from a WKT string and optional shift & geoid_height values.
- Returns
A SceneRefFrame object.
- Parameters
definition – WKT string
shift – (optional) An internal vector used to shift coordinates to center the SRS to the scene.
geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when proj SRS has no geoid.
- Raises
RuntimeError – if SRS is vertical and not compound.
RuntimeError – if SRS is 2-dimensional and projected, and non-isometric.
get_scene_ref_frame(*, proj_srs: pix4dvortex.coordsys.SpatialReference, shift: Annotated[list[float], FixedSize(3)] = [0.0, 0.0, 0.0], geoid_height: Optional[float] = None) -> pix4dvortex.dmodel.SceneRefFrame
Creates a SceneRefFrame object from a SpatialReference object and optional shift & geoid_height values.
- Returns
A SceneRefFrame object.
- Parameters
proj_srs – A SpatialReference object.
shift – (optional) An internal vector used to shift coordinates to center the SRS to the scene.
geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when proj SRS has no geoid.
- Raises
RuntimeError – if SRS is vertical and not compound.
RuntimeError – if SRS is 2-dimensional and projected, and non-isometric.
- pix4dvortex.coordsys.srs_from_code(*args, **kwargs)¶
Overloaded function.
srs_from_code(code: str) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from an authority code string.
- Parameters
code – A valid authority code string (e.g. “EPSG:2056”).
srs_from_code(*, horizontal_code: str, vertical_code: str, geoid: Optional[str] = None) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from horizontal and vertical authority code strings.
- Parameters
horizontal_code – A valid horizontal SRS authority code string (e.g. “EPSG:4326”).
vertical_code – A valid vertical SRS authority code string (e.g. “EPSG:5773”).
geoid – (optional) A valid geoid model corresponding to the given vertical SRS (e.g. EGM96). If omitted, an unspecified geoid (if more than one is available for the vertical SRS) is used as default.
- Raises
ValueError – if the geoid model is invalid.
- pix4dvortex.coordsys.srs_from_epsg(*args, **kwargs)¶
Overloaded function.
srs_from_epsg(epsg: int) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from an EPSG authority code.
- Parameters
epsg – A valid EPSG code (e.g. 2056).
srs_from_epsg(*, horizontal_epsg: int, vertical_epsg: int, geoid: Optional[str] = None) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from horizontal and vertical EPSG authority codes.
- Parameters
horizontal_epsg – A valid horizontal SRS EPSG code (e.g. 4326).
vertical_epsg – A valid vertical SRS EPSG code (e.g. 5773).
geoid – (optional) A valid geoid model corresponding to the given vertical SRS (e.g. EGM96). If omitted, an unspecified geoid (if more than one is available for the vertical SRS) is used as default.
- Raises
ValueError – if the geoid model is invalid.
- pix4dvortex.coordsys.wkt_from_code(code: str, *, wkt_convention: pix4dvortex.coordsys.WktConvention = <WktConvention.WKT2_2019: 1>, wkt_options: pix4dvortex.coordsys.WktExportOptions = <WktExportOptions.DEFAULT: 0>) str ¶
Create a WKT string from an authority code string.
- Parameters
code – A valid authority code string (e.g. “EPSG:2056”).
wkt_convention – (optional) Specifies WKT1 or WKT2 convention.
wkt_options – (optional) Bit mask specifying additional options.
- pix4dvortex.coordsys.wkt_from_epsg(epsg_no: int, *, wkt_convention: pix4dvortex.coordsys.WktConvention = <WktConvention.WKT2_2019: 1>, wkt_options: pix4dvortex.coordsys.WktExportOptions = <WktExportOptions.DEFAULT: 0>) str ¶
Create a WKT string from an EPSG authority code.
- Parameters
epsg_no – A valid EPSG code (e.g. 2056).
wkt_convention – (optional) Specifies WKT1 or WKT2 convention.
wkt_options – (optional) Bit mask specifying additional options.
- class pix4dvortex.coordsys.WktExportOptions(self: pix4dvortex.coordsys.WktExportOptions, value: int)¶
Members:
DEFAULT
MULTILINE
- class pix4dvortex.coordsys.WktConvention(self: pix4dvortex.coordsys.WktConvention, value: int)¶
Members:
WKT1_GDAL
WKT2_2019