API Reference¶
This reference contains the code documentation for the modules, classes and functions of the pix4dvortex public API. It is assumed that the user is familiar with its higher-level concepts. Some of these concepts are defined in the Glossary.
Authentication¶
Auth session
- pix4dvortex.session._login_dongle(dongle_files_directory: os.PathLike) None ¶
Login with a Pix4D dongle. Experimental feature.
- Parameters
dongle_files_directory – Directory where the dongle files are located. Each dongle file must be named <dongle_serial_number>.dongle.json.
- Raises
RuntimeError – if the dongle file can’t be accessed for reading
RuntimeError – if the dongle isn’t plugged in or detected
RuntimeError – if the dongle file doesn’t match the plugged in dongle
RuntimeError – if the dongle file content is corrupted or has unexpected content
- pix4dvortex.session.is_logged_in() bool ¶
Check if authorization for a session has been granted. This check is independent of the method of session acquisition.
- Returns
True if authorized, False otherwise.
- pix4dvortex.session.login(*args, **kwargs)¶
Overloaded function.
login(*, client_id: str, client_secret: str, license_key: Optional[str] = None) -> None
Request authorization for a PIX4Dengine session using the Pix4D license server.
- Parameters
client_id – oauth2 client ID.
client_secret – oauth2 client secret.
license_key – (optional) PIX4Dengine license key to use for the authorization request
- Raises
RuntimeError – on access failure.
login(*, url: str) -> None
Request authorization for a PIX4Dengine session using the embedded license server.
- Parameters
url – Embedded license server URL.
- Raises
RuntimeError – on access failure.
- pix4dvortex.session.logout() bool ¶
Log out of the current session.
- Returns
True if successful, False otherwise.
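The following is a minimal usage sketch of the session workflow. The credentials are placeholders; how they are obtained is not covered by this reference:

    import pix4dvortex as vtx

    # Request authorization from the Pix4D license server (placeholder credentials).
    vtx.session.login(client_id="my-client-id", client_secret="my-client-secret")
    assert vtx.session.is_logged_in()

    # ... run processing ...

    vtx.session.logout()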
Data Model¶
Pix4Dengine data model classes.
- class pix4dvortex.dmodel.BaseToCanonical(self: pix4dvortex.dmodel.BaseToCanonical, *, shift: List[float[3]] = [0.0, 0.0, 0.0], scale: List[float[3]] = [1.0, 1.0, 1.0], swap_xy: bool = False) None ¶
A set of parameters used to transform between the projected SRS and the internal processing SRS.
- property scale¶
An internal vector used to scale coordinates to make the SRS isometric
- property shift¶
An internal vector used to shift coordinates to center the SRS to the scene
- property swap_xy¶
An internal boolean used to swap coordinates to get a right-handed SRS
- class pix4dvortex.dmodel.CRS(self: pix4dvortex.dmodel.CRS, *, definition: str = '', geoid_height: Optional[float] = None) None ¶
Spatial Reference information used to describe a Spatial Reference System
- property definition¶
- property geoid_height¶
The geoid height to be used when the vertical SRS is not supported or cannot be retrieved from WKT
- class pix4dvortex.dmodel.CalibratedControlPoint(self: pix4dvortex.dmodel.CalibratedControlPoint, id: str = '', coordinates: List[float[3]] = [0.0, 0.0, 0.0]) None ¶
- property coordinates¶
The known measured (prior) 3D-position of a point in the scene.
- property id¶
The control point ID.
- class pix4dvortex.dmodel.CalibratedControlPoints(self: pix4dvortex.dmodel.CalibratedControlPoints, *, control_points: List[pix4dvortex.dmodel.CalibratedControlPoint] = []) None ¶
A list of calibrated control points stored in a way that matches the standard control points format.
- property control_points¶
The list of CalibratedControlPoint.
- class pix4dvortex.dmodel.GCP(self: pix4dvortex.dmodel.GCP, *, id: str = '', geolocation: pix4dvortex.dmodel.Geolocation = <pix4dvortex.dmodel.Geolocation object at 0x7f7b695598f0>, marks: List[pix4dvortex.dmodel.Mark] = [], is_checkpoint: bool = False) None ¶
A GCP is like an MTP, but with a known 3D position together with the uncertainty on that position. It represents the known projections, onto different images, of the same known physical 3D position. GCPs are a type of position-constrained tie points, where the known 3D position is expressed in a known coordinate system.
- property geolocation¶
- property id¶
The identifier or name of the GCP.
- property is_checkpoint¶
If true, the GCP is provided by the users for the sake of quality assessment. It will not be passed to the calibration but used later to compute the re-projection error.
- property marks¶
The set of image points that represent the projections of a 3D-point.
- class pix4dvortex.dmodel.Geolocation(self: pix4dvortex.dmodel.Geolocation, *, crs: pix4dvortex.dmodel.CRS = <pix4dvortex.dmodel.CRS object at 0x7f7b695589b0>, coordinates: List[float[3]] = [0.0, 0.0, 0.0], sigmas: List[float[3]] = [0.0, 0.0, 0.0]) None ¶
- property coordinates¶
- property crs¶
- property sigmas¶
- class pix4dvortex.dmodel.InputControlPoints(self: pix4dvortex.dmodel.InputControlPoints, *, gcps: List[pix4dvortex.dmodel.GCP] = [], mtps: List[pix4dvortex.dmodel.MTP] = []) None ¶
A list of input control points stored in a way that matches the standard control points format.
- property gcps¶
- property mtps¶
- class pix4dvortex.dmodel.MTP(self: pix4dvortex.dmodel.MTP, *, is_checkpoint: bool = False, id: str = '', marks: List[pix4dvortex.dmodel.Mark] = []) None ¶
An MTP represents a set of image points that are the projections of a 3D point with unknown 3D position in the scene. For a Manual Tie Point (MTP), these are the manual clicks in the images.
- property id¶
The identifier or name of the MTP.
- property is_checkpoint¶
If true, the MTP is provided by the users for the sake of quality assessment. It will not be passed to the calibration but used later to compute the re-projection error.
- property marks¶
The set of image points that represent the projections of a 3D-point.
- class pix4dvortex.dmodel.Mark(self: pix4dvortex.dmodel.Mark, *, accuracy: float = 0.0, position: List[float[2]] = [0.0, 0.0], camera_id: int = 0) None ¶
A Mark is a reference to a pixel in an image.
- property accuracy¶
A number representing the accuracy of the click. This will be used by the calibration as a weight for this mark.
- property camera_id¶
The camera ID to reference the image.
- property position¶
The mark position in the image.
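A short sketch of how the control point types compose. All IDs, coordinates and the CRS definition string are illustrative values, not taken from a real project:

    from pix4dvortex import dmodel

    # A GCP with a known geolocation and two image marks.
    gcp = dmodel.GCP(
        id="gcp_1",
        geolocation=dmodel.Geolocation(
            crs=dmodel.CRS(definition="EPSG:4326"),  # illustrative definition string
            coordinates=[46.52, 6.56, 420.0],
            sigmas=[0.02, 0.02, 0.04],
        ),
        marks=[
            dmodel.Mark(camera_id=0, position=[1024.5, 768.2], accuracy=0.5),
            dmodel.Mark(camera_id=1, position=[998.1, 812.7], accuracy=0.5),
        ],
    )
    control_points = dmodel.InputControlPoints(gcps=[gcp], mtps=[])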
- class pix4dvortex.dmodel.ProjectedControlPoints(*args, **kwargs)¶
A list of projected control points in OPF format.
Overloaded function.
__init__(self: pix4dvortex.dmodel.ProjectedControlPoints, *, projected_gcps: List[pix4dvortex.dmodel.ProjectedGCP] = []) -> None
__init__(self: pix4dvortex.dmodel.ProjectedControlPoints, *, input_control_points: pix4dvortex.dmodel.InputControlPoints, scene_ref_frame: pix4dvortex.dmodel.SceneRefFrame) -> None
- property projected_gcps¶
- class pix4dvortex.dmodel.ProjectedGCP(self: pix4dvortex.dmodel.ProjectedGCP, *, id: str = '', coordinates: List[float[3]] = [0.0, 0.0, 0.0], sigmas: List[float[3]] = [0.0, 0.0, 0.0]) None ¶
- property coordinates¶
- property id¶
- property sigmas¶
- class pix4dvortex.dmodel.SceneRefFrame(self: pix4dvortex.dmodel.SceneRefFrame, *, proj_crs: pix4dvortex.dmodel.CRS = <pix4dvortex.dmodel.CRS object at 0x7f7b69557a70>, base_to_canonical: pix4dvortex.dmodel.BaseToCanonical = <pix4dvortex.dmodel.BaseToCanonical object at 0x7f7b69557a30>) None ¶
Information set used to describe the processing SRS and the projected SRS
- property base_to_canonical¶
Parameters used to convert the projected coordinates into the processing coordinates.
- property crs¶
Information on the projected Spatial Reference System
Core Processing¶
Input and Calibrated Cameras¶
Camera-related data types, serving as input data to the processing algorithms, and utilities.
- class pix4dvortex.cameras.GeoCoordinates(self: pix4dvortex.cameras.GeoCoordinates, *, lat: float = 0, lon: float = 0.0, alt: float = 0.0, lat_accuracy: float = 0.0, lon_accuracy: float = 0.0, alt_accuracy: float = 0.0) None ¶
LAT/LON/ALT coordinates and accuracies
Geographic coordinates and accuracies lat[°], lon[°], alt[m], σ(lat[m]), σ(lon[m]), and σ(alt[m])
- Raises
ValueError – if any coordinate or associated accuracy is out of bounds.
- property alt¶
- property alt_accuracy¶
- property lat¶
- property lat_accuracy¶
- property lon¶
- property lon_accuracy¶
- class pix4dvortex.cameras.GeoRotation(self: pix4dvortex.cameras.GeoRotation, *, yaw: float, pitch: float, roll: float, yaw_accuracy: float, pitch_accuracy: float, roll_accuracy: float, unit: str = 'degrees') None ¶
YPR rotation angles and accuracies
Rotation angles and accuracies yaw, pitch, roll, σ(yaw), σ(pitch), and σ(roll).
Parameter unit specifies the unit of measurement of the arguments and must be one of “degrees” or “radians”.
- Raises
ValueError – if an invalid angular unit is chosen
ValueError – if any of the rotation angles or their associated accuracies is out of bounds.
- property pitch¶
- property pitch_accuracy¶
σ(pitch) (radians)
- property roll¶
- property roll_accuracy¶
σ(roll) (radians)
- property yaw¶
- property yaw_accuracy¶
σ(yaw) (radians)
- class pix4dvortex.cameras.GeoTag(*args, **kwargs)¶
Overloaded function.
__init__(self: pix4dvortex.cameras.GeoTag, *, horizontal_srs_code: Optional[str] = None, vertical_srs_code: Optional[str] = None, geo_position: Optional[pix4dvortex.cameras.GeoCoordinates] = None, geo_rotation: Optional[pix4dvortex.cameras.GeoRotation] = None) -> None
Geolocation type specifying the position with accuracy in a given geographic reference system and optional rotation data with accuracy
__init__(self: pix4dvortex.cameras.GeoTag, *, geolocation: Optional[pix4dvortex.dmodel.Geolocation] = None, orientation: Optional[pix4dvortex.dmodel.YawPitchRoll] = None) -> None
Geolocation type specifying the position with accuracy in a given geographic reference system and optional rotation data with accuracy
- property geo_position¶
Position ((lat[°], lon[°], alt[m]), (σ(lat[m]), σ(lon[m]), σ(alt[m])))
- property geo_rotation¶
Rotation ((yaw, pitch, roll), (σ(yaw), σ(pitch), σ(roll))) in [rad]
- property horizontal_srs_code¶
- property vertical_srs_code¶
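A sketch of building an external geotag, for example for an image without embedded geolocation. The SRS codes, coordinates and accuracies are illustrative values:

    from pix4dvortex import cameras

    position = cameras.GeoCoordinates(
        lat=46.52, lon=6.56, alt=420.0,
        lat_accuracy=1.0, lon_accuracy=1.0, alt_accuracy=2.0,
    )
    rotation = cameras.GeoRotation(
        yaw=90.0, pitch=0.0, roll=0.0,
        yaw_accuracy=5.0, pitch_accuracy=5.0, roll_accuracy=5.0,
        unit="degrees",
    )
    geotag = cameras.GeoTag(
        horizontal_srs_code="EPSG:4326",   # illustrative SRS codes
        vertical_srs_code="EPSG:5773",
        geo_position=position,
        geo_rotation=rotation,
    )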
- class pix4dvortex.cameras.InputCameras(self: pix4dvortex.cameras.InputCameras) None ¶
- get_depth_info(self: pix4dvortex.cameras.InputCameras, *, camera_id: int) Optional[_vtx_core.cameras.DepthInfo] ¶
Depth information corresponding to camera with given ID.
- Raises
IndexError – if camera_id is not valid.
- get_image_path(self: pix4dvortex.cameras.InputCameras, *, camera_id: int) os.PathLike ¶
Path of image file with given camera ID.
- Raises
IndexError – if camera_id is not valid.
- class pix4dvortex.cameras.ProjectedCameras(self: pix4dvortex.cameras.ProjectedCameras, *, input_cameras: pix4dvortex.cameras.InputCameras, proj_srs: Optional[pix4dvortex.coordsys.SpatialReference] = None, proj_srs_geoid_height: Optional[float] = None) None ¶
Create a ProjectedCameras object.
- Parameters
input_cameras – A container of input camera objects.
proj_srs – (optional) Projected coordinate system to be used for camera calibration. Must be isometric. If not specified, a UTM corresponding to the geographic coordinate system of the first image with geolocation is used.
proj_srs_geoid_height – (optional, only accepted for a compound proj_srs without a geoid model). The geoid height (aka geoid undulation approximation) to use when the image coordinates use an SRS that requires conversion to the projected SRS (proj_srs).
- Returns
A ProjectedCameras object. Its coordinate system is proj_srs if specified, or UTM otherwise.
- Raises
RuntimeError – if any of the images is not geolocated.
RuntimeError – if proj_srs is not set and the projected SRS derived from the images is not isometric.
RuntimeError – if proj_srs is set and is not both projected and isometric.
- property captures¶
A list of calibration Capture objects.
- get_depth_info(self: pix4dvortex.cameras.ProjectedCameras, *, camera_id: int) Optional[_vtx_core.cameras.DepthInfo] ¶
Depth information corresponding to camera with given ID.
- Raises
IndexError – if camera_id is not valid.
- get_image_path(self: pix4dvortex.cameras.ProjectedCameras, *, camera_id: int) os.PathLike ¶
Path of image file with given camera ID.
- Raises
IndexError – if camera_id is not valid.
- property proj_srs¶
The projected spatial reference system.
- property proj_srs_geoid_height¶
- property scene_ref_frame¶
Information on the internal processing spatial reference system
- pix4dvortex.cameras.make_input_cameras(*, image_info: List[os.PathLike], depth_info: Optional[Dict[os.PathLike, _vtx_core.cameras.DepthInfo]] = None, camera_db_path: Optional[os.PathLike] = None, external_geotags: Dict[os.PathLike, pix4dvortex.cameras.GeoTag] = {}, logger: _vtx_core.logging.Logger = None) pix4dvortex.cameras.InputCameras ¶
Create an InputCameras object.
The optional external_geotags argument is used to 1) set geotags for non-geolocated images. Such geotags must contain coordinates and SRS codes, and optionally rotation data; failing that, a ValueError is raised. 2) update existing geotags. These geotags can contain any combination of SRS code, coordinates and rotation data.
- Parameters
image_info – List of image file paths to create cameras from.
depth_info – (optional): Mapping of image file path to corresponding depth and depth confidence.
camera_db_path – (optional) path of camera database. Embedded one is used if not set.
external_geotags – (optional) mapping of image paths to geotags.
logger – (optional) logging callable object.
- Raises
ValueError – if any of the images causes a camera creation error
ValueError – if any element of depth_info does not map to an image in image_info.
RuntimeError – if any element of image_info or depth_info does not map to an existing file.
ValueError – if the new geolocation is missing coordinates.
ValueError – if external_geotags does not reference an existing file.
RuntimeError – if any of the mapped geotags does not map to an image in image_info.
ValueError – if two image files are bit-wise identical.
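A sketch of turning image files into input and projected cameras. The image directory is a placeholder:

    from pathlib import Path
    from pix4dvortex import cameras

    image_paths = sorted(Path("/data/images").glob("*.JPG"))  # placeholder directory
    input_cameras = cameras.make_input_cameras(image_info=image_paths)

    # Let the library derive a UTM projected SRS from the first geolocated image.
    projected_cameras = cameras.ProjectedCameras(input_cameras=input_cameras)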
- pix4dvortex.cameras.version() str ¶
Camera Calibration¶
Module for calibration utilities.
- class pix4dvortex.calib.ReoptSettings(self: pix4dvortex.calib.ReoptSettings, *, calibration_int_param_opt: pix4dvortex.calib.Settings.OptimIntType = <OptimIntType.All: 2>, calibration_ext_param_opt: pix4dvortex.calib.Settings.OptimExtType = <OptimExtType.All: 1>, _lever_arm_opt: pix4dvortex.calib.Settings._LeverArmType = <_LeverArmType.NoOffset: 0>) None ¶
- property calibration_ext_param_opt¶
Type of optimization for external camera parameters. See OptimExtType.
- property calibration_int_param_opt¶
Type of optimization for internal camera parameters. See OptimIntType.
- pix4dvortex.calib.calibrate(*, cameras: pix4dvortex.cameras.ProjectedCameras, settings: pix4dvortex.calib.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b6a490a70>, control_points: Optional[pix4dvortex.dmodel.InputControlPoints] = None, _keypoint_metrics_handler: Callable[[pix4dvortex.calib.analytics._KeypointMetrics], None] = None, logger: _vtx_core.logging.Logger = None, progress_callback: Callable[[str, int], None] = None, stop_function: Callable[[], bool] = <built-in method of PyCapsule object at 0x7f7b69586a30>) _vtx_core.calib._CalibratedScene ¶
Calibrate cameras.
- Parameters
cameras – Projected cameras container.
settings – The calibration settings, see Settings.
resources – [Optional] HW resource configuration parameters.
control_points – [Optional] The input control points, see InputControlPoints.
_keypoint_metrics_handler – [Optional] A callable to handle an object generated during calibration. Its single argument provides a _KeypointMetrics.
logger – [Optional] Logging callback.
progress_callback – [Optional] Progress callback.
stop_function – [Optional] Cancellation callback.
- Returns
A calibrated scene, see _CalibratedScene.
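A sketch of running calibration with default settings; the projected cameras and control points are assumed to come from the examples above, and the control points are optional:

    from pix4dvortex import calib

    settings = calib.Settings(image_scale=1.0, keypt_number=None)
    scene = calib.calibrate(
        cameras=projected_cameras,       # from the cameras example above
        settings=settings,
        control_points=control_points,   # optional, from the data model example
    )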
- pix4dvortex.calib.reoptimize(*, cameras: pix4dvortex.cameras.ProjectedCameras, scene: _vtx_core.calib._CalibratedScene, settings: pix4dvortex.calib.ReoptSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b6956a4b0>, control_points: Optional[pix4dvortex.dmodel.InputControlPoints] = None, logger: _vtx_core.logging.Logger = None, progress_callback: Callable[[str, int], None] = None, stop_function: Callable[[], bool] = <built-in method of PyCapsule object at 0x7f7b69586d90>) _vtx_core.calib._CalibratedScene ¶
Reoptimize a scene.
- Parameters
cameras – Projected cameras container.
scene – The calibrated scene.
settings – The reoptimize settings, see ReoptSettings.
resources – [Optional] HW resource configuration parameters.
control_points – [Optional] The input control points, see InputControlPoints.
logger – [Optional] Logging callback.
progress_callback – [Optional] Progress callback.
stop_function – [Optional] Cancellation callback.
- Returns
A calibrated scene, see _CalibratedScene.
- pix4dvortex.calib.version() str ¶
- class pix4dvortex.calib.Settings(self: pix4dvortex.calib.Settings, *, keypt_number: Optional[int] = None, image_scale: float = 1.0, matching_algorithm: pix4dvortex.calib.Settings.MatchingAlgorithm = <MatchingAlgorithm.Standard: 0>, image_pair_settings: pix4dvortex.calib.Settings.ImagePairSettings = <pix4dvortex.calib.Settings.ImagePairSettings object at 0x7f7b69589bf0>, use_rig_matching: bool = False, min_matches: int = 20, min_image_distance: float = 0.0, pipeline: pix4dvortex.calib.Settings.CalibrationType = <CalibrationType.Standard: 0>, calibration_int_param_opt: pix4dvortex.calib.Settings.OptimIntType = <OptimIntType.All: 2>, calibration_ext_param_opt: pix4dvortex.calib.Settings.OptimExtType = <OptimExtType.All: 1>, oblique: bool = True, rematch: bool = False, _lever_arm_opt: pix4dvortex.calib.Settings._LeverArmType = <_LeverArmType.NoOffset: 0>, _rig_relative_opt: pix4dvortex.calib.Settings._ApiRigRelativesOptimType = <_ApiRigRelativesOptimType.RotationUsingSubsetOfCaptures: 2>) None ¶
- class ImageDistance(self: pix4dvortex.calib.Settings.ImageDistance, arg0: float) None ¶
Image distance. The unit of measurement is that of the processing SRS (usually meters or feet). See SceneRefFrame.
- property value¶
- class ImageDistanceToMedian(self: pix4dvortex.calib.Settings.ImageDistanceToMedian, arg0: float) None ¶
Image distance relative to the median distance between consecutive images
- property value¶
- property calibration_ext_param_opt¶
Type of optimization for external camera parameters. See OptimExtType.
- property calibration_int_param_opt¶
Type of optimization for internal camera parameters. See OptimIntType.
- property image_pair_settings¶
Settings for image pair generation. See ImagePairSettings.
- property image_scale¶
Image scale at which features are computed. The scale is a ratio to the initial size of the image. Recommended values:
0.5 for RGB images of 40 Mpx and above
0.5-1.0 for RGB images from 12 to 40 Mpx
1.0-2.0 for images with lower resolutions (multispectral, mobile captures, etc.)
- property keypt_number¶
Maximum number of key points to extract per image. This number is a target, so the actual number of extracted key points may differ. If the parameter is None, the number of extracted key points is determined automatically by the algorithm. Recommended values are either None or 10000.
- property matching_algorithm¶
Matching algorithm. See MatchingAlgorithm.
- property min_image_distance¶
Minimum distance (in the XY plane) for matching or for homography estimation. In most cases it does not make sense to exclude the closest images from matching, so the recommended value is 0.0.
- property min_matches¶
Threshold for the minimum number of matches per image pair to be considered in calibration. The pair is discarded entirely if its number of matches is less than this threshold. Recommended value is 20.
- property oblique¶
Type of flight plan:
True for oblique or free flight (default)
False for nadir flight
- property pipeline¶
Type of calibration pipeline. See CalibrationType and the user guide for more information.
- property rematch¶
Set True to enable rematching. Rematching adds more matches after the first part of the initial processing. This usually improves the quality of the results at the cost of increased processing time.
- property use_rig_matching¶
Combine matches of all cameras in a rig instance when matching. Only relevant for multispectral rig captures and pipeline = LowTexturePlanar together with matching_algorithm = GeometricallyVerified.
- class pix4dvortex.calib.Settings.TEMPLATES¶
Pre-defined settings for commonly used data capture types.
- LARGE¶
Optimized for use with aerial non-multispectral image captures of large scenes.
- FLAT¶
Optimized for aerial nadir data sets of flat, possibly low-texture, scenes. Well suited for multispectral cameras (rigs).
- MAPS_3D¶
Optimized for small (less than 500 images) aerial nadir or oblique data sets with high image overlap acquired in a grid flight plan.
- MODELS_3D¶
Optimized for aerial oblique or terrestrial data sets with high image overlap.
- CATCH¶
Specifically designed for data sets captured by PIX4Dcatch, possibly using a position precision enhancement device (RTK).
- class pix4dvortex.calib.Settings.CalibrationType(self: pix4dvortex.calib.Settings.CalibrationType, value: int) None ¶
Type of calibration pipeline. See user guide for more information.
Members:
- Standard :
(default) Standard calibration pipeline.
- Scalable :
Calibration pipeline intended for large scale and corridor captures.
- LowTexturePlanar :
Calibration pipeline designed for aerial nadir images with accurate geolocations and homogeneous or repetitive content of flat-like scenes and rig cameras.
- TrustedLocationOrientation :
Calibration pipeline designed for projects with accurate relative locations and inertial measurement (IMU) data. All images must include information about position and orientation.
- class pix4dvortex.calib.Settings.ImagePairSettings(self: pix4dvortex.calib.Settings.ImagePairSettings, *, match_all: bool = False, match_use_triangulation: bool = True, _match_use_orientation: bool = False, _cnv_to_meter_factor: float = 1.0, match_mtp_max_image_pair: int = 50, match_inter_sensor_images: int = 0, match_similarity_images: int = 2, match_time_images: int = 2, min_rematch_overlap: float = 0.30000001192092896, match_distance_images: Union[pix4dvortex.calib.Settings.ImageDistance, pix4dvortex.calib.Settings.ImageDistanceToMedian] = <pix4dvortex.calib.Settings.ImageDistanceToMedian object at 0x7f7b695892f0>, _match_loop: pix4dvortex.calib.Settings._MatchLoopSettings = <pix4dvortex.calib.Settings._MatchLoopSettings object at 0x7f7b695892b0>) None ¶
- property match_all¶
If True, the algorithm tries to find matches in every image pair combination. This effectively ignores the other ImagePairSettings parameters. It is not recommended due to its severe impact on processing time. Recommended value is False.
- property match_distance_images¶
Match images whose distance is smaller than this value. The distance must be expressed using either the ImageDistance type or the ImageDistanceToMedian type. The distance is 3D if all components are available and 2D (XY plane) otherwise. Setting a value greater than zero is useful for oblique or terrestrial projects.
- property match_inter_sensor_images¶
This setting was introduced for matching images from multiple camera sensors flown at the same time, when the sensors are not part of a rig or are not sufficiently synchronized. Its purpose is similar to match_time_images, but for cameras with multiple sensors. Zero disables this strategy.
- property match_mtp_max_image_pair¶
Strategy generating image pair matches using images connected by control point marks. The number of generated image pairs is limited to match_mtp_max_image_pair per control point. Zero disables this strategy.
- property match_similarity_images¶
Matching image pairs based on an internal content similarity algorithm. The number defines the maximum number of image pairs that can be matched based on similarity. Zero value disables this matching image pair strategy.
- property match_time_images¶
Matching of consecutive images based on their capture timestamps. The number defines how many consecutive images (in timestamp order) are considered for pair matching. A value of zero disables this strategy. Typical values: 2-4.
- property match_use_triangulation¶
Strategy using the geolocation of the images to estimate how likely image pairs are to match. Recommended value is True.
- property min_rematch_overlap¶
Minimum relative overlap required for an image pair to be considered in the rematch process. This setting is only relevant when rematch is True. Typical value: 0.3.
- class pix4dvortex.calib.Settings.MatchingAlgorithm(self: pix4dvortex.calib.Settings.MatchingAlgorithm, value: int) None ¶
Members:
- Standard :
Default and adequate for most capture cases.
- GeometricallyVerified :
A slower, but more robust matching strategy. If selected, geometrically inconsistent matches are discarded. The option is useful when many similar features are present throughout the project: rows of plants in a farming field, window corners on a building’s facade, etc.
- class pix4dvortex.calib.Settings.OptimExtType(self: pix4dvortex.calib.Settings.OptimExtType, value: int) None ¶
Type of optimization for external camera parameters. External camera parameters are position and orientation, and the linear rolling shutter in case the camera model follows the linear rolling shutter model.
Members:
- Motion :
Optimizes rotation and position, but not the rolling shutter.
- All :
(default) Optimizes the rotation and position, as well as the linear rolling shutter in case the camera model follows the linear rolling shutter model.
- class pix4dvortex.calib.Settings.OptimIntType(self: pix4dvortex.calib.Settings.OptimIntType, value: int) None ¶
Type of optimization for internal camera parameters. Internal camera parameters are camera sensor parameters (e.g. focal length, distortion, etc.).
Members:
- NoOptim :
Does not optimize any of the internal camera parameters. It may be beneficial for large cameras, if already calibrated, and if these calibration parameters are used for processing.
- Leading :
Optimizes the most important internal camera parameters only. This option is recommended to process cameras with a slow rolling shutter speed.
- All :
(default) Optimizes all the internal camera parameters (including the rolling shutter if applicable). It is recommended to use this method when processing images taken with small UAVs, whose cameras are more sensitive to temperature variations and vibrations.
- AllPrior :
Optimizes all the internal camera parameters (including the rolling shutter if applicable), but forces the optimal internal parameters to be close to the initial values. This setting may be useful for difficult-to-calibrate projects, where the initial camera parameters are known to be reliable.
Point Cloud Densification¶
Module for densification utilities.
- pix4dvortex.dense.densification(*, scene: _vtx_core.calib._CalibratedScene, input_cameras: pix4dvortex.cameras.InputCameras, _exclude_image_set: Optional[Set[int]] = None, _metrics_handler: Optional[Callable[[str], None]] = None, mask_map: Optional[Dict[int, os.PathLike]] = None, roi: Optional[pix4dvortex.geom.Roi2D] = None, settings: pix4dvortex.dense.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b6956b530>, logger: _vtx_core.logging.Logger = None, progress_callback: Callable[[str, int], None] = None, stop_function: Callable[[], bool] = <built-in method of PyCapsule object at 0x7f7b3f2847e0>) Tuple[_vtx_core.pcl.PointCloud, Optional[Dict[int, _vtx_core.cameras.DepthInfo]]] ¶
Generate densified point cloud.
- Parameters
scene – Calibrated scene.
input_cameras – Input cameras container. Needed to obtain image path information.
_exclude_image_set – A set of camera IDs to exclude from densification processing.
_metrics_handler – An optional metrics callback. Its single argument provides a JSON string of computed densification metrics. The JSON contains the following fields:
"pre_track_count_distrib": a list of track counts indexed by camera count, computed before excluding images from densification
"post_track_count_distrib": a list of track counts indexed by camera count, computed after excluding images from densification
mask_map – Mapping of an image file hash to the path of the mask corresponding to this image. A mask is a single-channel black-and-white image. White areas are masked. Points that project onto masked areas will not be considered for densification.
roi – A polygon or multi-polygon defining a 2D region of interest (XY). Points within the (multi-)polygon are considered to belong to the ROI.
settings – Configuration parameters, see Settings.
resources – HW resource configuration parameters.
logger – Logging callback.
progress_callback – Progress callback.
stop_function – Cancellation callback.
- Returns
Tuple of a PointCloud and synthetic depth maps. The synthetic depth maps are None if settings.compute_depth_maps is False. The depth maps are a dictionary mapping each camera ID to the corresponding DepthInfo.
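A sketch of densification using the calibrated scene and input cameras from the previous examples:

    from pix4dvortex import dense

    dense_settings = dense.Settings(image_scale=1, point_density=2, compute_depth_maps=False)
    point_cloud, depth_maps = dense.densification(
        scene=scene,
        input_cameras=input_cameras,
        settings=dense_settings,
    )
    # depth_maps is None because compute_depth_maps is False.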
- pix4dvortex.dense.version() str ¶
- class pix4dvortex.dense.Settings(self: pix4dvortex.dense.Settings, *, image_scale: int = 1, point_density: int = 2, min_no_match: int = 3, window_size: int = 7, multi_scale: bool = True, limit_depth: bool = False, regularized_multiscale: bool = False, min_image_size: int = 512, depth_limit_percentile: float = 0.949999988079071, uniformity_threshold: float = 0.03125, compute_depth_maps: bool = False, _partition_input_scene: bool = True, _patch_filtering: bool = True) None ¶
- property compute_depth_maps¶
If True, synthetic depth maps (i.e. artificial depth information) are computed during densification. Setting this option increases the densification processing time by approximately 30%. Synthetic depth maps can be used later as a constraint in the mesh generation step. This option should be enabled only for specific cases when users try to generate 3D meshes of thin structures, for instance high-tension towers or power lines.
- property depth_limit_percentile¶
If limit_depth is True, this setting is used to compute the depth limit. Considering the distribution of track depths with respect to their visible cameras, the depth limit is set to the corresponding percentile of this distribution. The default value is recommended.
- property image_scale¶
The image scale defines the downsampling factor of the highest resolution image that will be used for the processing. The downsampling factor is defined as (1/2)**image_scale. Increasing the image_scale value uses less time and resources and generates fewer 3D points. When processing vegetation, increasing image_scale can generate more 3D points. Possible values:
0: Use the original image size. Does not significantly improve results over half size images.
1: Use half size images. This is the recommended value.
2: Use quarter size images.
3: Use eighth size images.
- property limit_depth¶
If set to True, limit the depth at which points are reconstructed, avoiding the reconstruction of background objects; useful for 3D models of objects. The depth limit is estimated from the input cameras, points and the depth_limit_percentile parameter. This may reduce the reconstruction of outlying points. It is recommended to enable this option for oblique projects.
- property min_image_size¶
Minimal image size used when densifying with multi_scale enabled. This determines the largest scale at which images will be used. The default value is recommended.
- property min_no_match¶
Minimum number of valid re-projections that a 3D point must have on images to be kept in the point cloud (minimum number of matches necessary to reconstruct a point). Higher values can reduce noise, but also decrease the number of computed 3D points. Possible values:
2: Each 3D point has to be correctly re-projected in at least 2 images. This option is recommended for projects with small overlap, but it usually produces a point cloud with more noise and artifacts.
3: Each 3D point has to be correctly re-projected in at least 3 images (default value).
4: Each 3D point has to be correctly re-projected in at least 4 images.
5: Each 3D point has to be correctly re-projected in at least 5 images. This option reduces the noise and improves the quality of the point cloud, but it might compute fewer 3D points in the final point cloud. It is recommended for oblique imagery projects that have a high overlap.
6: Each 3D point has to be correctly re-projected in at least 6 images. This option reduces the noise and improves the quality of the point cloud, but it might compute fewer 3D points in the final point cloud. It is recommended for oblique imagery projects that have a very high overlap.
- property multi_scale¶
When this option is set to True (default value), the algorithm uses lower resolutions of the same images in addition to the resolution chosen by the image_scale parameter. Using this option results in improved completeness at the expense of increased noise in some cases. In particular the reconstruction of uniform areas such as roads is improved. This option is generally useful for computing additional 3D points in vegetation areas while keeping details in areas without vegetation.
- property point_density¶
The point density has an impact on the number of generated 3D points. It defines the minimal distance on the image plane, in pixels, that two points reconstructed from the same camera can have. The distance applies to the image resolution rescaled with image_scale. The number of generated points scales as the inverse power of two of this parameter. Possible values:
High density: A 3D point is computed for every image_scale pixel. The result will be an oversampled point cloud. Processing at high density typically requires more time and resources than processing at optimal density. Usually, this option does not significantly improve the results.
Optimal density: A 3D point is computed for every 4/image_scale pixel. For example, if image_scale is set to half image size, one 3D point is computed every 4/(0.5) == 8 pixels of the original image. This is the recommended value.
Low density: A 3D point is computed for every 16/image_scale pixel. For example, if image_scale is set to half image size, one 3D point is computed every 16/(0.5) == 32 pixels of the original image. The final point cloud is computed faster and uses fewer resources than with optimal density.
- property regularized_multiscale¶
Use patches at lower resolution only in case of uniformity failures. This setting is recommended for oblique projects with multiscale enabled, to limit outliers due to sky and water surfaces or other uniform backgrounds (generally with poor or no depth perception). The option has no effect if multi_scale is disabled.
- property uniformity_threshold¶
This sets a threshold on the minimum texture content necessary to generate points. The texture content is estimated by a single number in the range [0, 1]. Decreasing this setting may yield a more complete densification at the expense of increased noise, while increasing it may yield fewer points, especially in areas with uniform texture. The default value is recommended.
- property window_size¶
Size of the square grid used for matching the densified points in the original images, in pixels. Possible values:
7: Use a 7x7 pixel grid. This is suggested for aerial nadir images. The NADIR template uses this value.
9: Use a 9x9 pixel grid. This is suggested for oblique and terrestrial images. This value is useful for more accurate positioning of the densified points in the original images. The OBLIQUE template uses this value.
DSM/DTM¶
Module for dsm utilities.
- class pix4dvortex.dsm.DTMSettings(self: pix4dvortex.dsm.DTMSettings, *, dsm_settings: pix4dvortex.dsm.Settings, rigidity: pix4dvortex.dsm.Rigidity = <Rigidity.Medium: 1>, filter_threshold: float = 0.5, cloth_sampling_distance: float = 1.0) None ¶
- property cloth_sampling_distance¶
Inter-point sampling distance used to derive simulated cloth overlying the base of the point cloud. Values in the range 1.0 to 1.5 are recommended for the majority of use cases.
- property dsm_settings¶
DSM generation parameters.
- property filter_threshold¶
Cut-off threshold for terrain classification. Points with height over simulated cloth surface larger than threshold are rejected. The remaining points are considered to belong to the terrain. The units of measurement are the same as those of the point cloud.
- property rigidity¶
Tension of simulated cloth overlying the base of the point cloud.
- class pix4dvortex.dsm.Settings(self: pix4dvortex.dsm.Settings, *, method: Union[pix4dvortex.dsm.Settings.Triangulation, pix4dvortex.dsm.Settings.IDW], resolution: float, max_tile_size: int = 4096) None ¶
- class IDW(*args, **kwargs)¶
Recommended for urban areas, construction sites and buildings. Provides good accuracy, but can generate empty cells and outliers.
- Parameters
gsd – The GSD value as generated by calibration.
dense_settings – The Settings used for densification.
interpolation_nn_count – [Optional] Number of nearest neighbors to use for interpolation.
smoothing_median_radius – [Optional] Median pixel radius distance to use to smooth the output.
Overloaded function.
__init__(self: pix4dvortex.dsm.Settings.IDW, *, gsd: float, dense_settings: pix4dvortex.dense.Settings, smoothing_median_radius: Optional[int] = 12, interpolation_nn_count: Optional[int] = 10) -> None
__init__(self: pix4dvortex.dsm.Settings.IDW, *, pcl_scale: float, smoothing_median_radius: Optional[int] = 12, interpolation_nn_count: Optional[int] = 10) -> None
- property interpolation_nn_count¶
- property smoothing_median_radius¶
- class Triangulation(self: pix4dvortex.dsm.Settings.Triangulation) None ¶
Recommended for rural areas, agriculture or low texture captures. Provides less accurate DSM, but generates no empty cells.
- property max_tile_size¶
Desired size of the DSM tiles in pixels. Recommended range is 500-8000.
- property method¶
- property resolution¶
The GSD value as generated by calibration. The value used must remain the same for the DSM, DTM, Ortho and GeoTIFF writers.
- pix4dvortex.dsm.gen_tiled_dsm(*, point_cloud: _vtx_core.pcl.PointCloud, roi: Optional[pix4dvortex.geom.Roi2D] = None, settings: pix4dvortex.dsm.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b695705f0>, logger: _vtx_core.logging.Logger = None) _vtx_core.dsm.Tiles ¶
- pix4dvortex.dsm.gen_tiled_dtm(*, point_cloud: _vtx_core.pcl.PointCloud, roi: Optional[pix4dvortex.geom.Roi2D] = None, settings: pix4dvortex.dsm.DTMSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b3f289070>, logger: _vtx_core.logging.Logger = None) _vtx_core.dsm.Tiles ¶
Generate a tiled digital terrain model (DTM).
Generate a DTM by applying a cloth simulation filter (CSF) to a digital surface model (DSM). The CSF is applied during DSM generation on the fly, producing a DTM without the need for intermediate DSM artifacts.
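A sketch of generating DSM and DTM tiles from the densified point cloud; the resolution value is illustrative and must match the GSD used elsewhere:

    from pix4dvortex import dsm

    dsm_settings = dsm.Settings(
        method=dsm.Settings.Triangulation(),  # or dsm.Settings.IDW(...)
        resolution=0.05,                      # illustrative GSD in coordinate system units
    )
    dsm_tiles = dsm.gen_tiled_dsm(point_cloud=point_cloud, settings=dsm_settings)

    dtm_settings = dsm.DTMSettings(dsm_settings=dsm_settings)
    dtm_tiles = dsm.gen_tiled_dtm(point_cloud=point_cloud, settings=dtm_settings)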
- pix4dvortex.dsm.version() str ¶
- class pix4dvortex.dsm.Rigidity(self: pix4dvortex.dsm.Rigidity, value: int) None ¶
Members:
Low
Medium
High
Orthomosaic¶
Module for orthomosaic generation utilities.
- pix4dvortex.ortho.gen_tiled_orthomosaic(*, cameras: List[_vtx_core.cameras.CameraParameters], input_cameras: pix4dvortex.cameras.InputCameras, dsm_tiles: _vtx_core.dsm.Tiles, settings: pix4dvortex.ortho.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b69569e30>, logger: _vtx_core.logging.Logger = None) _vtx_core.ortho.Tiles ¶
Generate orthomosaic.
- Parameters
cameras – List of calibrated cameras.
input_cameras – Input cameras object, used to obtain image data.
dsm_tiles – List of DSM tiles.
settings – Configuration parameters
resources – HW resource configuration parameters. Note: use_gpu is experimental in this function and takes effect when using the fast blending algorithm (see settings). It requires Vulkan 1.2 and may have issues on NVIDIA GeForce [Ti] 1050 cards with 4GiB RAM on Windows.
logger – Logging callback.
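A sketch of orthomosaic generation. How the list of calibrated cameras is obtained from the calibrated scene is not shown here; the ellipsis is a placeholder:

    from pix4dvortex import ortho

    calibrated_cameras = ...  # list of calibrated CameraParameters from the calibrated scene

    ortho_settings = ortho.Settings(
        fill_occlusion_holes=True,
        blend_ratio=0.1,
        pipeline=ortho.Settings.Pipeline.FAST,
        capture_pattern=ortho.Settings.CapturePattern.NADIR,
    )
    ortho_tiles = ortho.gen_tiled_orthomosaic(
        cameras=calibrated_cameras,
        input_cameras=input_cameras,
        dsm_tiles=dsm_tiles,
        settings=ortho_settings,
    )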
- pix4dvortex.ortho.get_ortho_reflectance_stats(*, output=None, radiometry_input, **_)¶
Get stats for settings, inputs and output.
- pix4dvortex.ortho.version() str ¶
- class pix4dvortex.ortho.Settings(self: pix4dvortex.ortho.Settings, *, fill_occlusion_holes: bool = True, blend_ratio: float = 0.10000000149011612, pipeline: pix4dvortex.ortho.Settings.Pipeline = <Pipeline.FAST: 0>, capture_pattern: pix4dvortex.ortho.Settings.CapturePattern = <CapturePattern.NADIR: 0>) None ¶
Configuration of the orthomosaic generation algorithm.
- Parameters
fill_occlusion_holes – If True, fill occlusion holes (areas not captured on camera) with the pixels of the nearest image.
blend_ratio – Coefficient determining the size of the area to be blended at the borders of image patches. More details at blend_ratio.
pipeline – Type of the algorithmic pipeline (Pipeline).
capture_pattern – Type of the capture pattern (CapturePattern).
- property blend_ratio¶
Coefficient determining the size of the area to be blended at the borders of image patches. The value should be in the range 0.0 to 1.0. Value 0.0 means no blending, leading to hard borders. Value 1.0 means full blending of the two nearest images. Values in the range 0.1 to 0.2 are recommended for the majority of use cases.
- property capture_pattern¶
Type of the capture pattern (CapturePattern).
- property fill_occlusion_holes¶
If True, fill occlusion holes (areas not captured on camera) with the pixels of the nearest image.
- class pix4dvortex.ortho.Settings.Pipeline(self: pix4dvortex.ortho.Settings.Pipeline, value: int) None ¶
Type of the algorithmic pipeline to use for the orthomosaic generation.
Members:
FAST : Speed-oriented algorithmic pipeline.
FULL : Quality-oriented algorithmic pipeline.
DEGHOST : Algorithmic pipeline targeted at removal of moving objects.
- class pix4dvortex.ortho.Settings.CapturePattern(self: pix4dvortex.ortho.Settings.CapturePattern, value: int) None ¶
Type of photography used for image capture.
Members:
NADIR : Nadir photography.
OBLIQUE : Oblique photography.
3D Mesh¶
Module for mesh utilities.
- pix4dvortex.mesh.gen_mesh_geometry(*, point_cloud: _vtx_core.pcl.PointCloud, settings: pix4dvortex.mesh.Settings, cameras: List[_vtx_core.cameras.CameraParameters], input_cameras: Optional[pix4dvortex.cameras.InputCameras] = None, mask_map: Dict[int, os.PathLike] = {}, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b3f29ccf0>, _roi: Optional[_vtx_core.mesh._ROI] = None, logger: _vtx_core.logging.Logger = None) _vtx_core.mesh._MeshGeom ¶
Generate mesh geometry defining vertices (as x,y,z coordinates) and faces of a mesh.
Constraints can be optionally defined either by depth and depth confidence information contained in input cameras or by masks.
- Parameters
point_cloud – Densified point cloud.
settings – Configuration parameters of type Settings.
cameras – List of calibrated cameras.
input_cameras – (optional) Input cameras object, for accessing depth information associated with images.
mask_map – (optional) Mapping of a camera ID to the path of an associated mask file. The camera ID corresponds to camera.id, where camera is an element of the cameras parameter.
resources – (optional) HW resource configuration parameters of type Resources.
_roi – (optional, experimental) Region of interest object. The reference system is required to be the same as that of the point_cloud positions, that is, the “processing” SRS.
logger – (optional) Logging callback.
- Returns
A mesh geometry object.
- pix4dvortex.mesh.gen_mesh_texture(*, mesh_geom: _vtx_core.mesh._MeshGeom, cameras: List[_vtx_core.cameras.CameraParameters], input_cameras: pix4dvortex.cameras.InputCameras, settings: pix4dvortex.mesh.TextureSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b69554f30>, logger: _vtx_core.logging.Logger = None) _vtx_core.mesh.Texture ¶
Generate mesh texture.
- Parameters
mesh_geom – Mesh geometry that defines vertices (as x,y,z coordinates) and faces of a mesh.
cameras – List of calibrated cameras.
input_cameras – Input cameras object.
settings – Configuration parameters of type TextureSettings.
resources – (optional) HW resource configuration parameters of type Resources.
logger – (optional) Logging callback.
- Returns
A mesh texture object.
- pix4dvortex.mesh.gen_textured_mesh_lod(*, mesh_geom: _vtx_core.mesh._MeshGeom, texture: _vtx_core.mesh.Texture, settings: pix4dvortex.mesh.LODSettings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b69554df0>, logger: _vtx_core.logging.Logger = None) _vtx_core.mesh.MeshLOD ¶
Generate a level of detail (LOD) textured mesh.
- Parameters
mesh_geom – Mesh geometry that defines vertices (as x,y,z coordinates) and faces of a mesh.
texture – Mesh texture opaque data container.
settings – LOD configuration parameters of type LODSettings.
resources – (optional) HW resource configuration parameters of type Resources. Only max_threads is used.
logger – (optional) Logging callback.
- Returns
An LOD mesh object.
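A sketch of the full mesh pipeline with default settings, reusing the point cloud, calibrated cameras and input cameras from the examples above:

    from pix4dvortex import mesh

    geom = mesh.gen_mesh_geometry(
        point_cloud=point_cloud,
        settings=mesh.Settings(),
        cameras=calibrated_cameras,
    )
    texture = mesh.gen_mesh_texture(
        mesh_geom=geom,
        cameras=calibrated_cameras,
        input_cameras=input_cameras,
        settings=mesh.TextureSettings(),
    )
    lod_mesh = mesh.gen_textured_mesh_lod(
        mesh_geom=geom,
        texture=texture,
        settings=mesh.LODSettings(),
    )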
- pix4dvortex.mesh.version() str ¶
- class pix4dvortex.mesh.Settings(self: pix4dvortex.mesh.Settings, *, _geom_gen: pix4dvortex.mesh.Settings._GeomGen = <pix4dvortex.mesh.Settings._GeomGen object at 0x7f7b3f28be70>, _constraints: pix4dvortex.mesh.Settings._Constraints = <pix4dvortex.mesh.Settings._Constraints object at 0x7f7b3f28be30>, _small_comp_filter: pix4dvortex.mesh.Settings._SmallCompFilter = <pix4dvortex.mesh.Settings._SmallCompFilter object at 0x7f7b3f28bdf0>, _decimation: pix4dvortex.mesh.Settings._Decimation = <pix4dvortex.mesh.Settings._Decimation object at 0x7f7b3f28bdb0>, _smoothing: pix4dvortex.mesh.Settings._Smoothing = <pix4dvortex.mesh.Settings._Smoothing object at 0x7f7b3f272bf0>) None ¶
Configuration of the mesh geometry generation algorithm.
- class pix4dvortex.mesh.Settings.TEMPLATES¶
Pre-defined settings for commonly used image capture types.
- LARGE¶
Optimized for large scenes and aerial nadir image capture.
- SMALL¶
Optimized for small scenes and aerial oblique or terrestrial image capture.
- TOWER¶
Optimized for tower-like structures.
- class pix4dvortex.mesh.TextureSettings(self: pix4dvortex.mesh.TextureSettings, *, _outlier_threshold: float = 0.009999999776482582, _texture_size: int = 8192) None ¶
Configuration of the texture generation algorithm.
- class pix4dvortex.mesh.TextureSettings.TEMPLATES¶
Pre-defined settings for different quality of geometry reconstruction and image capture.
- STANDARD¶
Optimized for well-reconstructed geometries and image captures with no or few occluding or moving features.
- DEGHOST¶
Optimized for imperfect geometries and image captures with many occluding or moving features.
- class pix4dvortex.mesh.LODSettings(self: pix4dvortex.mesh.LODSettings, *, _max_n_faces_per_node: int = 100000, _texture_size: int = 1024, _jpeg_quality: int = 90) None ¶
Configuration of the LOD mesh generation algorithm.
More Processing¶
AutoGCP¶
Automatic GCP detection tools.
AutoGCP consists of a set of tools for the automatic detection of control point targets in images with pixel-level accuracy.
It supports three types of targets with black and white Haar-like features: square, diagonal and Aeropoint.
Despite its name, AutoGCP imposes no restrictions on the use of targets. The functionality only concerns itself with detecting targets in images and obtaining an accurate estimate of the position of their markers. The user can then use the information as ground control points (GCPs), checkpoints (CPs) or anything else.
- exception pix4dvortex.autogcp.AutogcpError¶
- class pix4dvortex.autogcp.Settings(self: pix4dvortex.autogcp.Settings, *, xy_uncertainty: float = 5.0, z_uncertainty: float = 10.0) None ¶
GCP detection algorithm settings.
- Parameters
xy_uncertainty – Absolute horizontal image georeferencing uncertainty.
z_uncertainty – Absolute vertical image georeferencing uncertainty.
Note
The units of the uncertainties are the same as those of the input GCP geolocation.
The default values are optimized in meters and should be scaled accordingly if other units are used.
- property xy_uncertainty¶
Absolute horizontal image georeferencing uncertainty.
- property z_uncertainty¶
Absolute vertical image georeferencing uncertainty.
- pix4dvortex.autogcp.detect_gcp_marks(*, scene: _vtx_core.calib._CalibratedScene, input_cameras: pix4dvortex.cameras.InputCameras, input_gcps: List[pix4dvortex.dmodel.GCP], settings: pix4dvortex.autogcp.Settings, logger: _vtx_core.logging.Logger = None) List[pix4dvortex.dmodel.GCP] ¶
Detect GCP marks in images.
- Parameters
scene – Calibrated scene.
input_cameras – Input cameras object, used to obtain image data.
input_gcps – 3D GCP without marks. The GCP coordinates must be in the same coordinate system as the one used to create the projected cameras used as input to camera calibration.
settings – Configuration parameters.
logger – Logging callback.
- Returns
A list of GCP objects with detected marks.
- Raises
ValueError – if input_gcps contain marks.
ValueError – if the CRS of input_gcps and scene is not the same.
AutogcpError – if configuration parameters are invalid or the detection algorithm cannot complete.
- pix4dvortex.autogcp.version() str ¶
Version of the autogcp algorithm API.
Depth Processing¶
Utilities to process LiDAR and synthetic depth maps and depth point clouds.
- class pix4dvortex.depth.DepthCompletionSettings(self: pix4dvortex.depth.DepthCompletionSettings, _initial_dilation_kernel_size: int = 5, _closing_kernel_size: int = 5, _hole_filling_kernel_size: int = 7, _smoothing: bool = False) None ¶
Configuration parameters for depth map densification algorithm.
- class pix4dvortex.depth.MergeSettings(self: pix4dvortex.depth.MergeSettings, _distance: float = 0.025, _n_neighbors: int = 128, _sor: Optional[pix4dvortex.depth.MergeSettings._SOR] = <pix4dvortex.depth.MergeSettings._SOR object at 0x7f7b3f273070>) None ¶
- pix4dvortex.depth.densify(*, sparse_depth_maps: Dict[int, _vtx_core.cameras.DepthInfo], settings: pix4dvortex.depth.DepthCompletionSettings = <pix4dvortex.depth.DepthCompletionSettings object at 0x7f7b3f273bf0>, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b3f273bb0>, logger: _vtx_core.logging.Logger = None) Dict[int, _vtx_core.cameras.DepthInfo] ¶
Generate densified depth maps from sparse depth maps.
- Parameters
sparse_depth_maps – A dictionary mapping each camera ID to its corresponding DepthInfo. The depth maps are assumed to use 0 to mark unknown depth and positive values for known depths. The confidence is ignored by this algorithm.
settings – The algorithm settings, see DepthCompletionSettings.
resources – HW resource configuration parameters.
logger – Logging callback.
- Returns
The densified depth maps, a dictionary mapping each camera ID to its corresponding DepthInfo.
- pix4dvortex.depth.gen_pcl(*, settings: pix4dvortex.depth.Settings = <pix4dvortex.depth.Settings object at 0x7f7b3f273870>, scene: _vtx_core.calib._CalibratedScene, input_cameras: pix4dvortex.cameras.InputCameras, roi: Optional[pix4dvortex.geom.Roi2D] = None, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b3f273830>, logger: _vtx_core.logging.Logger = None) _vtx_core.pcl.PointCloud ¶
Generate a depth point cloud from LiDAR depth maps.
- Parameters
settings – Configuration parameters, see Settings.
scene – Calibrated scene.
input_cameras – Input cameras object containing depth maps and, optionally, their confidences.
roi – (optional) 2D region of interest in the XY plane, defined as a polygon or a multi-polygon.
resources – HW resource configuration parameters.
logger – (optional) Logging callback.
- Returns
A point cloud object.
- pix4dvortex.depth.pcl_merge(*, pcl: _vtx_core.pcl.PointCloud, depth_pcl: _vtx_core.pcl.PointCloud, settings: pix4dvortex.depth.MergeSettings = <pix4dvortex.depth.MergeSettings object at 0x7f7b3f273a70>, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b3f273a30>, logger: _vtx_core.logging.Logger = None) _vtx_core.pcl.PointCloud ¶
Merge a densified photogrammetry point cloud with a depth point cloud created from LiDAR depth maps.
- Parameters
pcl – Dense point cloud.
depth_pcl – Depth point cloud.
settings – Configuration parameters, see MergeSettings.
resources – HW resource configuration parameters.
logger – (optional) Logging callback.
- Returns
A point cloud object.
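A sketch of combining a LiDAR depth point cloud with the photogrammetry point cloud, assuming the input cameras carry depth maps:

    from pix4dvortex import depth

    # Point cloud reconstructed from the LiDAR depth maps carried by the input cameras.
    lidar_pcl = depth.gen_pcl(scene=scene, input_cameras=input_cameras)

    # Merge it with the photogrammetry point cloud from the densification step.
    merged_pcl = depth.pcl_merge(pcl=point_cloud, depth_pcl=lidar_pcl)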
- pix4dvortex.depth.version() str ¶
- class pix4dvortex.depth.Settings(self: pix4dvortex.depth.Settings, *, _pcl: pix4dvortex.depth.Settings._PCL = <pix4dvortex.depth.Settings._PCL object at 0x7f7b3f272cf0>, _depth_filter: Optional[pix4dvortex.depth.Settings._DepthFilter] = <pix4dvortex.depth.Settings._DepthFilter object at 0x7f7b69579f30>) None ¶
- class pix4dvortex.depth.Settings.ConfidenceLevel(self: pix4dvortex.depth.Settings.ConfidenceLevel, value: int) None ¶
Members:
Low
Medium
High
Sky and Water Segmentation¶
Module for sky and water segmentation utilities.
- pix4dvortex.skyseg.gen_segment_masks(*, cameras: List[_vtx_core.cameras.CameraParameters], input_cameras: pix4dvortex.cameras.InputCameras, output_dir: os.PathLike, settings: pix4dvortex.skyseg.Settings, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b69571530>, logger: _vtx_core.logging.Logger = None) Dict[int, os.PathLike] ¶
Mask sky, water or both in images.
- Parameters
cameras – List of calibrated cameras.
input_cameras – Input cameras object, used to obtain image data.
output_dir – Directory the mask image files will be written to.
settings – Configuration parameters, see Settings.
resources – HW resource configuration parameters.
logger – Logging callback.
- Returns
Mapping of a camera ID to the path of the mask corresponding to this image. A mask is a single-channel black-and-white image. White areas are masked.
- Raises
ValueError – if the input data is invalid.
RuntimeError – if the ML model could not be loaded.
RuntimeError – if the specified amount of memory is not enough.
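A sketch of sky masking; the output directory is a placeholder, and the calibrated camera list is assumed to come from the calibrated scene as in the orthomosaic example. The returned mapping can be passed, for instance, as the mask_map argument of the densification step:

    from pathlib import Path
    from pix4dvortex import skyseg

    masks = skyseg.gen_segment_masks(
        cameras=calibrated_cameras,
        input_cameras=input_cameras,
        output_dir=Path("/data/out/masks"),   # placeholder directory
        settings=skyseg.Settings(
            masking_type=skyseg.Settings.MaskingType.SKY,
            mode=skyseg.Settings.Mode.FULL,
        ),
    )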
- pix4dvortex.skyseg.version() str ¶
Version of the skyseg algorithm API.
- class pix4dvortex.skyseg.Settings(self: pix4dvortex.skyseg.Settings, *, masking_type: pix4dvortex.skyseg.Settings.MaskingType = <MaskingType.SKY: 0>, mode: pix4dvortex.skyseg.Settings.Mode = <Mode.FULL: 0>) None ¶
- property masking_type¶
Select which entities to mask in images, see MaskingType.
- class pix4dvortex.skyseg.Settings.MaskingType(self: pix4dvortex.skyseg.Settings.MaskingType, value: int) None ¶
Members:
SKY : Identifies sky segments.
WATER : Identifies water segments.
SKY_WATER : Identifies both sky and water segments.
- class pix4dvortex.skyseg.Settings.Mode(self: pix4dvortex.skyseg.Settings.Mode, value: int) None ¶
Members:
- FULL :
Full segmentation mode.
- FAST :
Fast segmentation mode. This option is faster, but it is not recommended for images with water segments.
Point Cloud Alignment¶
Utilities for obtaining a point cloud alignment.
- pix4dvortex.pcalign.alignment(*, point_cloud: _vtx_core.pcl.PointCloud, ref_point_cloud: _vtx_core.pcl.PointCloud, logger: _vtx_core.logging.Logger = None) _vtx_core.pcalign.Alignment ¶
Get an alignment of a misaligned point cloud to a reference point cloud.
- Parameters
point_cloud – Misaligned point cloud.
ref_point_cloud – Reference point cloud.
- Returns
Object containing a 4-by-4 transformation matrix, aligning the misaligned point cloud to the reference, and the quality of the obtained alignment.
- Raises
RuntimeError – if the spatial reference of the misaligned and the reference point clouds is not the same.
- pix4dvortex.pcalign.version() str ¶
Version of the pcalign algorithm API.
Point Cloud Transformation¶
Affine transformation tools
- pix4dvortex.transform.transform(*args, **kwargs)¶
Overloaded function.
transform(*, transformation: buffer, point_cloud: _vtx_core.pcl.PointCloud, work_dir: os.PathLike = PosixPath('/tmp')) -> _vtx_core.pcl.PointCloud
Transform a point cloud.
- Parameters
transformation – 4-by-4 transformation matrix to apply.
point_cloud – Point cloud to transform.
work_dir – Temporary work space. It will be created if it doesn’t exist. Defaults to system temporary directory.
Note
The spatial reference of the transformation must match that of the point cloud.
- Returns
Transformed point cloud.
transform(*, transformation: buffer, calib_scene: _vtx_core.calib._CalibratedScene) -> _vtx_core.calib._CalibratedScene
Transform a calibrated scene.
- Parameters
transformation – 4-by-4 transformation matrix to apply.
calib_scene – Calibrated scene to transform.
Note
The spatial reference of the transformation must match that of the calibrated scene.
- Returns
Transformed calibrated scene.
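A minimal sketch of applying a 4-by-4 affine transformation to a point cloud; the numpy array is used only because it satisfies the buffer protocol, and point_cloud is a placeholder for an existing PointCloud:
from pathlib import Path
import numpy as np
import pix4dvortex as vtx

# A pure translation expressed in the spatial reference of the point cloud.
transformation = np.array(
    [
        [1.0, 0.0, 0.0, 10.0],
        [0.0, 1.0, 0.0, -5.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ],
    dtype=np.float64,
)
moved = vtx.transform.transform(
    transformation=transformation,
    point_cloud=point_cloud,
    work_dir=Path("work/transform"),
)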
Exports¶
Cesium¶
Utilities for exporting mesh data to Cesium format.
- pix4dvortex.io.cesium.version() str ¶
- pix4dvortex.io.cesium.write_mesh(*, output_path_prefix: os.PathLike, mesh_lod: _vtx_core.mesh.MeshLOD) None ¶
Write an LOD mesh into Cesium files.
- Parameters
output_path_prefix – Path to the output files ending with a file name prefix.
mesh_lod – LOD mesh.
- pix4dvortex.io.cesium.write_pcl(*, output_path_prefix: os.PathLike, point_cloud: _vtx_core.pcl.PointCloud, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b3f29d3f0>) None ¶
Write a point cloud as Cesium LOD files.
- Parameters
output_path_prefix – Path to the output files ending with a file name prefix.
point_cloud – PointCloud object to be serialized.
resources – Hardware resources. Only max_threads and work_dir are used.
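A minimal sketch of the Cesium exports; mesh_lod and point_cloud are placeholders for objects produced by earlier meshing and densification steps, and the output paths are illustrative:
from pathlib import Path
import pix4dvortex as vtx

vtx.io.cesium.write_mesh(
    output_path_prefix=Path("out/cesium/mesh_tiles"),
    mesh_lod=mesh_lod,
)
vtx.io.cesium.write_pcl(
    output_path_prefix=Path("out/cesium/pcl_tiles"),
    point_cloud=point_cloud,
    resources=vtx.proc.Resources(max_threads=8, work_dir=Path("work")),
)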
GeoTIFF¶
GeoTIFF export tools
- pix4dvortex.io.geotiff.to_cog(*, input_path: os.PathLike, output_path: os.PathLike, settings: Dict[str, str] = {}) None ¶
Convert a GeoTIFF file into a cloud optimized GeoTIFF (COG) file.
- Parameters
input_path – Path to the input geotiff file.
output_path – Path to the output COG file.
settings – Configuration parameters for the conversion.
- pix4dvortex.io.geotiff.write_geotiff(*, output_path: os.PathLike, tiles: Union[None, _vtx_core.dsm.Tiles, _vtx_core.ortho.Tiles], gsd: float, settings: pix4dvortex.io.geotiff.Settings = <pix4dvortex.io.geotiff.Settings object at 0x7f7b6957a430>) None ¶
Write raster tiles into a GeoTIFF file.
- Parameters
output_path – Path to the output file.
tiles – List of either DSM or orthomosaic raster tiles.
gsd – Ground sampling distance (GSD): the pixel size in units of the coordinate system.
settings – Configuration parameters.
- pix4dvortex.io.geotiff.write_geotiff_tile(*, output_path: os.PathLike, tile: Union[_vtx_core.dsm.Tile, _vtx_core.ortho.Tile], gsd: Optional[float] = None, settings: pix4dvortex.io.geotiff.Settings = <pix4dvortex.io.geotiff.Settings object at 0x7f7b3f29d570>) None ¶
Write a raster tile into a GeoTIFF file.
- Parameters
output_path – Path to the output file.
tile – Raster tile.
gsd – Ground sampling distance (GSD): the pixel size in units of the coordinate system. If not specified, the resolution of the tile is used.
settings – Configuration parameters.
- class pix4dvortex.io.geotiff.Settings(self: pix4dvortex.io.geotiff.Settings, *, compression: pix4dvortex.io.geotiff.Settings.Compression = <Compression.LZW: 1>, xml_xmp: str = '', sw: str = '', no_data_value: Optional[float] = -10000.0) None ¶
- property compression¶
The compression algorithm.
- property no_data_value¶
Value for empty pixels. The value must be valid for the pixel data type.
- property sw¶
Name of the software that produced the GeoTIFF file, written into the TIFF metadata.
- property xml_xmp¶
XML string with XMP data. It can be generated with ExifView.
- class pix4dvortex.io.geotiff.Settings.Compression(self: pix4dvortex.io.geotiff.Settings.Compression, value: int) None ¶
Members:
NoCompression
LZW
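A minimal sketch of writing DSM tiles to a GeoTIFF and then converting it to a cloud optimized GeoTIFF; dsm_tiles is a placeholder for a DSM Tiles object from an earlier step, and the GSD value is illustrative:
from pathlib import Path
import pix4dvortex as vtx

settings = vtx.io.geotiff.Settings(
    compression=vtx.io.geotiff.Settings.Compression.LZW,
    no_data_value=-10000.0,
)
vtx.io.geotiff.write_geotiff(
    output_path=Path("out/dsm.tif"),
    tiles=dsm_tiles,
    gsd=0.05,  # pixel size in coordinate system units
    settings=settings,
)
vtx.io.geotiff.to_cog(
    input_path=Path("out/dsm.tif"),
    output_path=Path("out/dsm_cog.tif"),
)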
LAS¶
LAS export tools
- pix4dvortex.io.las.write_pcl(*, output_path: os.PathLike, point_cloud: _vtx_core.pcl.PointCloud, compress: bool = False, las_version: pix4dvortex.io.las.LasVersion = <LasVersion.V1_2: 0>, _point_filter_factory: Callable[[_vtx_core.pcl._View], Callable[[int], bool]] = None) None ¶
- class pix4dvortex.io.las.LasVersion(self: pix4dvortex.io.las.LasVersion, value: int) None ¶
Members:
V1_2
V1_4
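A minimal sketch of writing a compressed LAS 1.4 file; point_cloud is a placeholder for an existing PointCloud object:
from pathlib import Path
import pix4dvortex as vtx

vtx.io.las.write_pcl(
    output_path=Path("out/cloud.laz"),
    point_cloud=point_cloud,
    compress=True,  # write a compressed (LAZ) file
    las_version=vtx.io.las.LasVersion.V1_4,
)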
OBJ¶
Utilities for exporting mesh data to OBJ format.
- pix4dvortex.io.obj.version() str ¶
- pix4dvortex.io.obj.write_mesh(output_path: os.PathLike, mesh_geom: _vtx_core.mesh._MeshGeom, texture: _vtx_core.mesh.Texture, settings: pix4dvortex.io.obj.Settings = <pix4dvortex.io.obj.Settings object at 0x7f7b3f29dbf0>) None ¶
Write a triangular mesh into an OBJ file.
- Parameters
output_path – Path to the output file.
mesh_geom – Mesh geometry defined by a list of vertex coordinates (x,y,z) and a list of mesh faces.
texture – Mesh texture opaque data container.
settings – Configuration settings.
- Raises
RuntimeError – if the mesh data is invalid or inconsistent.
ValueError – if the texture file is invalid.
- class pix4dvortex.io.obj.Settings(self: pix4dvortex.io.obj.Settings, *, texture_fmt: pix4dvortex.io.obj.Settings.TextureFmt = <TextureFmt.JPEG: 0>, jpeg_quality: int = 75) None ¶
- property jpeg_quality¶
JPEG quality parameter. Ignored if texture_fmt is not JPEG.
- property texture_fmt¶
Format of output texture file.
- class pix4dvortex.io.obj.Settings.TextureFmt(self: pix4dvortex.io.obj.Settings.TextureFmt, value: int) None ¶
Members:
JPEG
PNG
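A minimal sketch of the OBJ export; mesh_geom and texture are placeholders for outputs of an earlier meshing step, and the output path is illustrative:
from pathlib import Path
import pix4dvortex as vtx

settings = vtx.io.obj.Settings(
    texture_fmt=vtx.io.obj.Settings.TextureFmt.JPEG,
    jpeg_quality=90,
)
vtx.io.obj.write_mesh(
    output_path=Path("out/mesh.obj"),
    mesh_geom=mesh_geom,
    texture=texture,
    settings=settings,
)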
SLPK¶
Utilities for exporting mesh data to SLPK format.
- pix4dvortex.io.slpk.version() str ¶
- pix4dvortex.io.slpk.write_mesh(*, output_path: os.PathLike, mesh_lod: _vtx_core.mesh.MeshLOD, _geog_cs: Optional[pix4dvortex.coordsys.SpatialReference] = None, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b69551bb0>, logger: _vtx_core.logging.Logger = None) None ¶
Write a triangular LOD mesh into an SLPK file.
- Parameters
output_path – Path to the output file.
mesh_lod – LOD mesh.
_geog_cs – (optional) Geographical coordinate system suitable for geolocating the mesh. Currently, only WGS84 is supported. If omitted, the original input projected spatial reference system (see pix4dvortex.cameras.ProjectedCameras()) is used.
resources – HW resource configuration parameters.
logger – Logging callback.
- pix4dvortex.io.slpk.write_pcl(*, output_path: os.PathLike, point_cloud: _vtx_core.pcl.PointCloud, resources: pix4dvortex.proc.Resources = <pix4dvortex.proc.Resources object at 0x7f7b69566ef0>) None ¶
Write a point cloud as an SLPK LOD file.
- Parameters
output_path – Path to the output file.
point_cloud – Point cloud.
resources – HW resource configuration parameters.
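A minimal sketch of the SLPK exports; mesh_lod and point_cloud are placeholders for earlier pipeline outputs, and the output paths are illustrative:
from pathlib import Path
import pix4dvortex as vtx

vtx.io.slpk.write_mesh(
    output_path=Path("out/mesh.slpk"),
    mesh_lod=mesh_lod,
    resources=vtx.proc.Resources(max_threads=8),
)
vtx.io.slpk.write_pcl(
    output_path=Path("out/cloud.slpk"),
    point_cloud=point_cloud,
)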
Utilities¶
General utilities¶
Generic utility collection.
- pix4dvortex.util.collect_hw_info()¶
Collect HW info.
- pix4dvortex.util.collect_stats(*, msg_handler=None, task_specific=None)¶
Collect usage stats and pass them to a message handler.
- pix4dvortex.util.hash_id(filename)¶
Calculate a hash-based identifier of an arbitrarily large file.
The identifier is a 64-bit integer built from a BLAKE2b hash of the file contents. The most significant byte corresponds to the beginning of the sequence of hash bytes.
- pix4dvortex.util.hw_info()¶
Return a dict with host hardware info.
- pix4dvortex.util.path_to_url(path)¶
Return a file URI from a file path.
- pix4dvortex.util.sha256(filename)¶
Calculate the SHA-256 hash of an arbitrarily large file.
- pix4dvortex.util.task_info(task)¶
Task function information.
- pix4dvortex.util.timestamp(filename)¶
File modification UTC date-time string in ISO 8601 format.
- pix4dvortex.util.url_to_path(url)¶
Extract the path from a URL.
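A minimal sketch of the file-oriented utilities above; the image path is illustrative:
import pix4dvortex as vtx

image = "dataset/IMG_0001.JPG"  # illustrative path
print(vtx.util.hash_id(image))      # 64-bit identifier derived from a BLAKE2b hash
print(vtx.util.sha256(image))       # SHA-256 hash of the file
print(vtx.util.timestamp(image))    # modification time, ISO 8601 UTC
print(vtx.util.path_to_url(image))  # file URI for the path
print(vtx.util.hw_info())           # dict with host hardware info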
Processing utilities¶
Processing utilities
- class pix4dvortex.proc.Resources(self: pix4dvortex.proc.Resources, *, max_ram: int = 8589934592, max_threads: int = 0, use_gpu: bool = False, max_gpu_mem: int = 4294967296, work_dir: os.PathLike = PosixPath('/tmp')) None ¶
- property max_gpu_mem¶
Maximum GPU memory in bytes.
- property max_ram¶
Maximum RAM available for processing in bytes. 0 is an invalid value.
- property max_threads¶
Maximum number of threads to use for processing. 0 means use the number of logical cores.
- property use_gpu¶
- property work_dir¶
Temporary work space. It will be created if it doesn’t exist. Defaults to system temporary directory.
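A minimal sketch of a Resources configuration; the limits and work directory are illustrative:
from pathlib import Path
import pix4dvortex as vtx

resources = vtx.proc.Resources(
    max_ram=16 * 1024**3,       # 16 GiB of RAM
    max_threads=0,              # use all logical cores
    use_gpu=True,
    max_gpu_mem=8 * 1024**3,    # 8 GiB of GPU memory
    work_dir=Path("work/tmp"),  # created if it does not exist
)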
Geometry¶
Geometry utilities
- class pix4dvortex.geom.Polygon2D(self: pix4dvortex.geom.Polygon2D, *, outer_ring: List[List[float[2]]], inner_rings: List[List[List[float[2]]]] = []) None ¶
Two-dimensional polygon.
A two-dimensional polygon is defined by an outer ring, and an optional set of non-overlapping inner rings. A point is within the polygon if it is inside the outer ring and outside the inner rings.
Initialize a Polygon2D from an outer ring and an optional set of inner rings.
Rings must not self-intersect and must not overlap.
- Parameters
outer_ring – an iterable of points defining the outer ring.
inner_rings – (optional) an iterable of iterables of points defining the inner rings of the polygon.
- Raises
RuntimeError – if any of the rings overlap or self-intersect.
- is_within(self: pix4dvortex.geom.Polygon2D, point: List[float[2]]) bool ¶
- class pix4dvortex.geom.Roi2D(*args, **kwargs)¶
Two-dimensional region of interest (ROI).
A 2D ROI is defined as a set of Polygon2D objects. A point is within the ROI if it is within any of the polygons that define it.
Overloaded function.
__init__(self: pix4dvortex.geom.Roi2D, *, polygon: pix4dvortex.geom.Polygon2D) -> None
Initialize a Roi2D from a Polygon2D.
__init__(self: pix4dvortex.geom.Roi2D, *, polygons: List[pix4dvortex.geom.Polygon2D]) -> None
Initialize a Roi2D from a set of Polygon2Ds.
- Parameters
polygons – an iterable of non-overlapping polygons.
- Raises
RuntimeError – if any of the polygons overlap.
- is_within(self: pix4dvortex.geom.Roi2D, point: List[float[2]]) bool ¶
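A minimal sketch of the 2D geometry classes, using a unit square with a square hole; the coordinates are illustrative:
import pix4dvortex as vtx

polygon = vtx.geom.Polygon2D(
    outer_ring=[[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]],
    inner_rings=[[[0.4, 0.4], [0.6, 0.4], [0.6, 0.6], [0.4, 0.6]]],
)
roi = vtx.geom.Roi2D(polygon=polygon)

print(polygon.is_within([0.1, 0.1]))  # True: inside the outer ring
print(polygon.is_within([0.5, 0.5]))  # False: inside the inner ring (hole)
print(roi.is_within([0.1, 0.1]))      # True: within one of the ROI polygons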
Spatial reference systems¶
Coordinate system tools
- class pix4dvortex.coordsys.CoordinateConverter(self: pix4dvortex.coordsys.CoordinateConverter, *, src: pix4dvortex.coordsys.SpatialReference, dst: pix4dvortex.coordsys.SpatialReference, src_geoid_height: Optional[float] = None, dst_geoid_height: Optional[float] = None) None ¶
Construct a coordinate converter based on two spatial reference systems with optional geoid heights. If a geoid height is provided for an SRS, it supersedes any vertical transformation specific to the vertical component of that SRS. Conversions between different ellipsoids are still applied on the vertical axes. Note that a height of 0 is not the same as a null (absent) height.
- Parameters
src – Source SRS (SRS to start from).
dst – Destination SRS (SRS to go to).
src_geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when src (source SRS) has no geoid.
dst_geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when dst (destination SRS) has no geoid.
- Raises
RuntimeError – if this transformation is not supported.
RuntimeError – if a geoid height is given for a spatial reference system that is not compound or that already has a defined geoid model name, or if an invalid SRS is given.
RuntimeError – if either src (source SRS) or dst (destination SRS) is, or contains, an engineering spatial reference system.
- convert(*args, **kwargs)¶
Overloaded function.
convert(self: pix4dvortex.coordsys.CoordinateConverter, arg0: List[float[3]]) -> list
convert(self: pix4dvortex.coordsys.CoordinateConverter, arg0: float, arg1: float, arg2: float) -> Tuple[float, float, float]
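A minimal sketch of a conversion from WGS84 geographic coordinates to UTM, using srs_from_epsg documented later in this section; the EPSG codes and coordinates are illustrative, and axis order follows the respective SRS definitions:
import pix4dvortex as vtx

src = vtx.coordsys.srs_from_epsg(4326)   # WGS84 geographic
dst = vtx.coordsys.srs_from_epsg(32632)  # WGS84 / UTM zone 32N
converter = vtx.coordsys.CoordinateConverter(src=src, dst=dst)

# The three-scalar overload returns a tuple; the list overload returns a list.
easting, northing, height = converter.convert(46.5191, 6.5668, 450.0)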
- class pix4dvortex.coordsys.SpatialReference(*args, **kwargs)¶
Overloaded function.
__init__(self: pix4dvortex.coordsys.SpatialReference, wkt: str) -> None
Create a SpatialReference object from a WKT string.
__init__(self: pix4dvortex.coordsys.SpatialReference, *, horizontal_wkt: str, vertical_wkt: str, geoid: Optional[str] = None) -> None
Create a SpatialReference object from horizontal and vertical WKT strings.
- Parameters
horizontal_wkt – Horizontal WKT string.
vertical_wkt – Vertical WKT string.
geoid – (optional) A valid geoid model corresponding to the given vertical SRS. If omitted, an unspecified geoid (if more than one is available for the vertical SRS) is used as default.
- Raises
ValueError – if the geoid model is invalid.
- as_utm(self: pix4dvortex.coordsys.SpatialReference, *, lat: float, lon: float) pix4dvortex.coordsys.SpatialReference ¶
- as_wkt(self: pix4dvortex.coordsys.SpatialReference, *, wkt_convention: pix4dvortex.coordsys.WktConvention = <WktConvention.WKT2_2019: 1>) str ¶
- axes(self: pix4dvortex.coordsys.SpatialReference) List[pix4dvortex.coordsys.SpatialReference.Axis] ¶
Axes of this spatial reference.
- Returns
Array of Axis objects corresponding to the axes of this spatial reference system. The array may be of length 1, 2 or 3 depending on the dimensions of the SRS (vertical, horizontal or compound, respectively).
- axes_3d(self: pix4dvortex.coordsys.SpatialReference) List[pix4dvortex.coordsys.SpatialReference.Axis[3]] ¶
3D axes of this spatial reference.
- Returns
Length-3 array of Axis objects corresponding to the axes of this spatial reference system. If the SRS is 3-dimensional, returns the same as axes(). If the SRS is 2-dimensional, the 3rd component is assumed to be an ellipsoidal height.
- Raises
RuntimeError – if SRS is vertical and not compound.
RuntimeError – if SRS is 2-dimensional and projected, and non-isometric.
- geoid(self: pix4dvortex.coordsys.SpatialReference) str ¶
The geoid model used by this spatial reference, or an empty string in case of the default geoid model.
- is_compound(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_geographic(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_isometric(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_left_handed(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_projected(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- is_vertical(self: pix4dvortex.coordsys.SpatialReference) bool ¶
- pix4dvortex.coordsys.get_scene_ref_frame(*args, **kwargs)¶
Overloaded function.
get_scene_ref_frame(*, definition: str, shift: List[float[3]] = [0.0, 0.0, 0.0], geoid_height: Optional[float] = None) -> pix4dvortex.dmodel.SceneRefFrame
Creates a SceneRefFrame object from a WKT string and optional shift & geoid_height values.
- Returns
A SceneRefFrame object.
- Parameters
definition – WKT string
shift – (optional) An internal vector used to shift coordinates to center the SRS to the scene.
geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when proj SRS has no geoid.
- Raises
RuntimeError – if SRS is vertical and not compound.
RuntimeError – if SRS is 2-dimensional and projected, and non-isometric.
get_scene_ref_frame(*, proj_srs: pix4dvortex.coordsys.SpatialReference, shift: List[float[3]] = [0.0, 0.0, 0.0], geoid_height: Optional[float] = None) -> pix4dvortex.dmodel.SceneRefFrame
Creates a SceneRefFrame object from a SpatialReference object and optional shift & geoid_height values.
- Returns
A SceneRefFrame object.
- Parameters
proj_srs – A SpatialReference object.
shift – (optional) An internal vector used to shift coordinates to center the SRS to the scene.
geoid_height – (optional) Single geoid undulation value (aka geoid_height), to be provided when proj SRS has no geoid.
- Raises
RuntimeError – if SRS is vertical and not compound.
RuntimeError – if SRS is 2-dimensional and projected, and non-isometric.
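A minimal sketch of creating a SceneRefFrame from a SpatialReference; the EPSG code and shift vector are illustrative:
import pix4dvortex as vtx

proj_srs = vtx.coordsys.srs_from_epsg(2056)  # CH1903+ / LV95
scene_ref_frame = vtx.coordsys.get_scene_ref_frame(
    proj_srs=proj_srs,
    shift=[2600000.0, 1200000.0, 0.0],  # recenter coordinates near the scene
)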
- pix4dvortex.coordsys.srs_from_code(*args, **kwargs)¶
Overloaded function.
srs_from_code(code: str) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from an authority code string.
- Parameters
code – A valid authority code string (e.g. “EPSG:2056”).
srs_from_code(*, horizontal_code: str, vertical_code: str, geoid: Optional[str] = None) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from horizontal and vertical authority code strings.
- Parameters
horizontal_code – A valid horizontal SRS authority code string (e.g. “EPSG:4326”).
vertical_code – A valid vertical SRS authority code string (e.g. “EPSG:5773”).
geoid – (optional) A valid geoid model corresponding to the given vertical SRS (e.g. EGM96). If omitted, an unspecified geoid (if more than one is available for the vertical SRS) is used as default.
- Raises
ValueError – if the geoid model is invalid.
- pix4dvortex.coordsys.srs_from_epsg(*args, **kwargs)¶
Overloaded function.
srs_from_epsg(epsg: int) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from an EPSG authority code.
- Parameters
epsg – A valid EPSG code (e.g. 2056).
srs_from_epsg(*, horizontal_epsg: int, vertical_epsg: int, geoid: Optional[str] = None) -> pix4dvortex.coordsys.SpatialReference
Create a SpatialReference object from horizontal and vertical EPSG authority codes.
- Parameters
horizontal_epsg – A valid horizontal SRS EPSG code (e.g. 4326).
vertical_epsg – A valid vertical SRS EPSG code (e.g. 5773).
geoid – (optional) A valid geoid model corresponding to the given vertical SRS (e.g. EGM96). If omitted, an unspecified geoid (if more than one is available for the vertical SRS) is used as default.
- Raises
ValueError – if the geoid model is invalid.
- pix4dvortex.coordsys.wkt_from_code(code: str, *, wkt_convention: pix4dvortex.coordsys.WktConvention = <WktConvention.WKT2_2019: 1>, wkt_options: pix4dvortex.coordsys.WktExportOptions = <WktExportOptions.DEFAULT: 0>) str ¶
Create a WKT string from an authority code string.
- Parameters
code – A valid authority code string (e.g. “EPSG:2056”).
wkt_convention – (optional) Specifies WKT1 or WKT2 convention.
wkt_options – (optional) Bit mask specifying additional options.
- pix4dvortex.coordsys.wkt_from_epsg(epsg_no: int, *, wkt_convention: pix4dvortex.coordsys.WktConvention = <WktConvention.WKT2_2019: 1>, wkt_options: pix4dvortex.coordsys.WktExportOptions = <WktExportOptions.DEFAULT: 0>) str ¶
Create a WKT string from an EPSG authority code.
- Parameters
epsg_no – A valid EPSG code (e.g. 2056).
wkt_convention – (optional) Specifies WKT1 or WKT2 convention.
wkt_options – (optional) Bit mask specifying additional options.
- class pix4dvortex.coordsys.WktExportOptions(self: pix4dvortex.coordsys.WktExportOptions, value: int) None ¶
Members:
DEFAULT
MULTILINE
- class pix4dvortex.coordsys.WktConvention(self: pix4dvortex.coordsys.WktConvention, value: int) None ¶
Members:
WKT1_GDAL
WKT2_2019
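A minimal sketch of exporting an SRS definition to WKT and round-tripping it through SpatialReference; the EPSG code is illustrative:
import pix4dvortex as vtx

wkt = vtx.coordsys.wkt_from_epsg(
    2056,
    wkt_convention=vtx.coordsys.WktConvention.WKT2_2019,
    wkt_options=vtx.coordsys.WktExportOptions.MULTILINE,
)
srs = vtx.coordsys.SpatialReference(wkt)
print(srs.is_projected())  # True for a projected SRS such as EPSG:2056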