PIX4D API


Authentication Guide

This document describes how to get started with the PIX4Dengine Cloud REST API, which gives third-party applications access to the Pix4D cloud service. It allows a PIX4Dengine Cloud client to access their data on the cloud and perform a range of operations on it.

  • For end-to-end security, HTTPS is used for all the APIs
  • The base URL for API endpoints is: https://cloud.pix4d.com
  • Each REST API call needs to be authenticated. See Authentication below
  • The communication format is expected to be JSON, unless otherwise stated

Getting access to PIX4Dengine Cloud REST API

To register the application and start using the PIX4Dengine Cloud REST API, you will need to acquire a PIX4Dengine Cloud license. Please contact us at https://www.pix4d.com/enterprise-contact to request one.

Once the license has been issued, you need a client_id/client_secret pair that represents your application: this is what your application will use to connect to the API and authenticate.

Note that, as its name suggests, client_secret is a password identifying the client application and should therefore be handled securely. To generate a client_id/client_secret pair, follow these steps:

  1. Log in to https://cloud.pix4d.com with your Pix4D account.

  2. Go to your Organization dashboard through https://account.pix4d.com, choosing your Organization and going to the Dashboard.

  3. In the API access section, the list of existing keys is displayed (it will be empty the first time a user logs in):

(screenshot: list of API keys)

  4. Click on Generate new API credentials and select the correct license:

(screenshot: generate credentials)

  5. Click on Generate and a new pair of credentials will be generated. Make sure they are copied as they will not be displayed again:

(screenshot: generated credentials)

To disable existing credentials, delete them by clicking on Delete.

API Authentication

Equipped with the API client_id/client_secret, the first step is to retrieve an authentication token. An authentication token identifies both the application connecting to the API and the Pix4D user this application is connecting on behalf of.

This token must then be passed along every single request that the application makes to the API. It is passed in the HTTP Authorization header, like so:

Authorization: Bearer <ACCESS_TOKEN>

The PIX4Dengine Cloud REST API uses OAuth 2.0, the industry standard for connecting apps and accounts. OAuth 2.0 supports several "authentication flows" to retrieve an authentication token. Pix4D supports several of them, each used for a specific purpose, but for PIX4Dengine Cloud customers only one is relevant: "Client Credentials".

Client Credentials flow

Using this flow, an API client application can get access to its own Pix4D user account (and only to that account). With this method, the authentication is straightforward and only requires the application's client_id/client_secret pair.

The client must send the following HTTP POST request to https://cloud.pix4d.com/oauth2/token/ with a payload containing:

  • grant_type: client_credentials
  • client_id: the client ID of the application that was given to you by your Pix4D contact
  • client_secret: the client_secret of the application that was given to you by your Pix4D contact
  • token_format: jwt

Example:

curl --request POST \
  -d "grant_type=client_credentials&token_format=jwt&client_id=YuB7fu…&client_secret=GMSVvt8dF…" \
  https://cloud.pix4d.com/oauth2/token/

Token information

Token content

The response you receive when performing the authentication request above has the following content (In JSON):

  • access_token: the token value that you will have to include in all your requests
  • token_type: for Pix4D, this is always a Bearer token
  • expires_in: the number of seconds the token is valid for. After this time, API requests using this token will be rejected and you will need to request a new token through the authentication procedure again
  • scope: describes what the token is valid for. In this case, it is always "read write" since you get full access to your own account
{
  "access_token": "<ACCESS_TOKEN>",
  "token_type": "Bearer",
  "expires_in": 36000,
  "scope": "read write"
}

Refreshing the token

When the token expires, you simply need to perform the above authentication procedure again to get a fresh token.
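
As a sketch of how this can be automated, the helper below requests a token and transparently re-authenticates shortly before it expires. It assumes the Python `requests` library is installed; the function name and caching behaviour are illustrative, not part of the API:

import time
import requests

TOKEN_URL = "https://cloud.pix4d.com/oauth2/token/"
_token_cache = {"value": None, "expires_at": 0}

def get_access_token(client_id: str, client_secret: str) -> str:
    """Return a cached token, requesting a new one shortly before the old one expires."""
    if _token_cache["value"] and time.time() < _token_cache["expires_at"] - 60:
        return _token_cache["value"]
    response = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "token_format": "jwt",
    })
    response.raise_for_status()
    payload = response.json()
    _token_cache["value"] = payload["access_token"]
    _token_cache["expires_at"] = time.time() + payload["expires_in"]
    return _token_cache["value"]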

Using the API

For an in-depth description of all possible API commands, please refer to the API documentation. This documentation is only available for users who already have API access.

First Example

In this guide, you will discover how to get a token, upload a new project, get it processed, and access the results.

Prerequisites

API Access

You need a PIX4Dengine Cloud license to authenticate and get access to the API. Please contact your Pix4D reseller to start a trial if you don't have a license already. More information is available on PIX4Dengine Cloud on our product page.

With this license comes your authentication information: the Client ID and Client Secret Key, which are the two values you need in this guide.

Terminal tooling

This guide can be completed from the Terminal using basic tooling:

  • curl command line is used to call the API; it should be included in your OS or docker image
  • aws-cli is used to upload and download the data; you can get it directly from AWS: https://aws.amazon.com/cli/

Photos to process

We will use a set of photos to be processed in this guide. If you don't have a good dataset at your disposal, feel free to download one of our sample datasets, for example the building dataset. Select "Download" then "Input Images" from the UI. This guide assumes you unzip the archive into ./photos.

Authenticate

We are using the OAuth2 Client Credentials flow to generate an access token. Using the Client ID and Client Secret provided with your PIX4Dengine Cloud license, you can get an Access Token using the following curl command:

export PIX4D_CLIENT_ID=__YOUR_CLIENT_ID__
export PIX4D_CLIENT_SECRET=__YOUR_CLIENT_SECRET_KEY__
curl --request POST \
  --url https://cloud.pix4d.com/oauth2/token/ \
  --form client_id=$PIX4D_CLIENT_ID \
  --form client_secret=$PIX4D_CLIENT_SECRET \
  --form grant_type=client_credentials \
  --form token_type=access_token \
  --form token_format=jwt

The response body is a JSON document containing an access_token attribute:

{
  "access_token": "<__PIX4D_ACCESS_TOKEN__>",
  "expires_in": 172800,
  "token_type": "Bearer",
  "scope": "read:cloud write:cloud"
}

For the following example, we will reference the token as PIX4D_ACCESS_TOKEN.

export PIX4D_ACCESS_TOKEN=__PIX4D_ACCESS_TOKEN__

You can learn more about authentication in the reference documentation (see Authentication).

Create a new project

Let's start by creating a project. The only required parameter is the project name, which can be passed as a JSON payload in the request body.

curl --request POST \
  --url https://cloud.pix4d.com/project/api/v3/projects/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json" \
  --data '{"name": "My first project"}'

The response will be a 201 CREATED and the response body will include the project details. Please make a note of the project id and the AWS S3 properties that we will use later.

This is an extract of the relevant JSON properties:

{
  "id": 877866,
  "bucket_name": "prod-pix4d-cloud-default",
  "s3_base_path": "user-123123123121312/project-877866"
}
export PROJECT_ID=<THE PROJECT ID>
export S3_BUCKET=prod-pix4d-cloud-default
export S3_BASE_PATH="user-123123123121312/project-877866"
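
The same project creation can be scripted. The minimal Python sketch below assumes the `requests` library and the PIX4D_ACCESS_TOKEN environment variable from the authentication step:

import os
import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# Create the project; the name is the only required field.
response = requests.post(f"{API_BASE}/projects/", headers=headers,
                         json={"name": "My first project"})
response.raise_for_status()
project = response.json()

# Keep the values needed for the upload step.
project_id = project["id"]
s3_bucket = project["bucket_name"]
s3_base_path = project["s3_base_path"]
print(project_id, s3_bucket, s3_base_path)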

Upload the photos

It is recommended to use the AWS CLI or the Python boto3 library, but other tools can work as well. First, we need to retrieve the AWS S3 credentials associated with this project:

curl --url https://cloud.pix4d.com/project/api/v3/projects/$PROJECT_ID/s3_credentials/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json"

The response contains all the S3 information we need, in particular the access_key, secret_key, and session_token.

{
  "access_key": "ASIATOCJLBKSU2CVJIHR",
  "secret_key": "5OGGBSvn8Sesdu8l...<remainder of secret key>",
  "session_token": "FwoGZXIvYX...<remainder of security token>",
  "expiration": "2021-05-10T21:55:47Z",
  "bucket": "prod-pix4d-cloud-default",
  "key": "user-199a56ab-7ac6-d6d1-4778-5b4d338fc9de/project-883349",
  "server_time": "2021-05-19T09:55:47.357641+00:00",
  "region": "us-east-1"
}

We can store the S3 credentials in our environment so that they will get picked up by the AWS CLI tool.

export AWS_ACCESS_KEY_ID=ASIATOCJLBKSU2CVJIHR
export AWS_SECRET_ACCESS_KEY='5OGGBSvn8Sesdu8l...<remainder of secret key>'
export AWS_SESSION_TOKEN='FwoGZXIvYX...<remainder of security token>'

Make sure to prefix all your desired destination locations with the path returned in the credentials call. This is the only place for which write access is granted. Make sure that the files are proper images and their names include an extension supported by Pix4D.

Provided your images are located in the ./photos folder from the prerequisites, and that it contains only images, you can upload them all at once:

aws s3 cp ./photos/ "s3://${S3_BUCKET}/${S3_BASE_PATH}/" --recursive

If the folder also contains files that are not images, upload the images one by one instead, which may require the use of a script.
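
The upload can also be done from Python. The sketch below assumes the `boto3` and `requests` libraries, the PROJECT_ID and PIX4D_ACCESS_TOKEN environment variables from the previous steps, and JPG photos in ./photos:

import os
from pathlib import Path

import boto3
import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
project_id = os.environ["PROJECT_ID"]
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# Fetch the temporary S3 credentials scoped to this project.
creds = requests.get(f"{API_BASE}/projects/{project_id}/s3_credentials/",
                     headers=headers).json()

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["access_key"],
    aws_secret_access_key=creds["secret_key"],
    aws_session_token=creds["session_token"],
    region_name=creds["region"],
)

# Upload every photo under the project's base path ("key" in the credentials response).
for image in Path("./photos").glob("*.JPG"):
    s3.upload_file(str(image), creds["bucket"], f"{creds['key']}/{image.name}")
    print("uploaded", image.name)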

You then need to register the images in the PIX4Dengine Cloud API so that they will be processed. It is possible to upload files that are not project inputs, and the API cannot know which files are meant as inputs, so each uploaded S3 file must be registered explicitly. Register the files you uploaded (single API call):

curl --request POST --url https://cloud.pix4d.com/project/api/v3/projects/$PROJECT_ID/inputs/bulk_register/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json" \
  --data "{
      \"input_file_keys\": [
        \"${S3_BASE_PATH}/P0350035.JPG\",
        \"${S3_BASE_PATH}/P0360036.JPG\",
        \"${S3_BASE_PATH}/P0370037.JPG\",
        \"${S3_BASE_PATH}/P0380038.JPG\",
        \"${S3_BASE_PATH}/P0390039.JPG\",
        \"${S3_BASE_PATH}/P0400040.JPG\",
        \"${S3_BASE_PATH}/P0410041.JPG\",
        \"${S3_BASE_PATH}/P0420042.JPG\",
        \"${S3_BASE_PATH}/P0430043.JPG\",
        \"${S3_BASE_PATH}/P0440044.JPG\"
      ]
    }"

The response confirms, among other data, how many images have been registered.

{ "nb_images_registered": 10 }

Start processing

You are now ready to start processing your project:

curl --request POST --url https://cloud.pix4d.com/project/api/v3/projects/$PROJECT_ID/start_processing/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json"

Check the processing state

curl --url https://cloud.pix4d.com/project/api/v3/projects/$PROJECT_ID/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json"

A response can look like:

{
  "id": 883349,
  "name": "My first project",
  "display_name": "My first project",
  "project_group_id": null,
  "is_demo": false,
  "is_geolocalized": true,
  "create_date": "2021-05-19T11:53:39.504849+02:00",
  "public_status": "PROCESSING",
  "display_detailed_status": "Waiting for processing",
  "error_reason": "",
  "user_display_name": "Jhon Doe",
  "project_thumb": "<__URL__>",
  "detail_url": "https://cloud.pix4d.com/project/api/v3/projects/883349/",
  "acquisition_date": "2021-05-19T11:53:38.973075+02:00",
  "project_type": "pro",
  "image_count": 10,
  "last_datetime_processing_started": "2021-05-19T10:58:10.514800Z",
  "last_datetime_processing_ended": null,
  "bucket_name": "prod-pix4d-cloud-default",
  "s3_bucket_region": "us-east-1",
  "s3_base_path": "user-188a56ab-7ac6-d6d1-4778-5b4d338fc9de/project-883349",
  "never_delete": false,
  "under_trial": false,
  "uuid": "239ae97821d54f98975bc0afa2fcc72f",
  "coordinate_system": "",
  "outputs": {
    "mesh": { "texture_res": {} },
    "images": {
      "project_thumb": {
        "status": "processed",
        "name": "project_thumb.jpg",
        "s3_key": "user-188a56ab-7ac6-d6d1-4778-5b4d338fc9de/project-883349/thumb/project_thumb.jpg",
        "s3_bucket": "prod-pix4d-cloud-default"
      },
      "reflectance": {}
    },
    "map": { "layers": {}, "bounds": { "sw": [0, 0], "ne": [0, 0] } },
    "bundles": {
      "inputs": {
        "status": "requestable",
        "request_url": "https://cloud.pix4d.com/project/api/v3/projects/883349/inputs/zip/"
      }
    },
    "reports": {}
  },
  "min_zoom": -1,
  "max_zoom": -1,
  "proj_pipeline": ""
}

Once the status changes from PROCESSING to DONE, the project's main outputs are ready to be retrieved.
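
As a sketch (assuming the `requests` library and the PROJECT_ID / PIX4D_ACCESS_TOKEN environment variables from the earlier steps), you could poll for that transition like this:

import os
import time

import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
project_id = os.environ["PROJECT_ID"]
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# Poll the project details until it leaves the PROCESSING state.
while True:
    project = requests.get(f"{API_BASE}/projects/{project_id}/", headers=headers).json()
    status = project["public_status"]
    print(status, project.get("display_detailed_status", ""))
    if status != "PROCESSING":
        break
    time.sleep(60)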

Once the project is processed, by querying its details, it is possible to:

  • Get its PIX4Dcloud visualization page
  • Get its PIX4Dcloud public page link (read-only)
  • Get any asset that was produced (in the form of an S3 link to the asset to be downloaded)

How to process a project

This guide describes all of the different ways to process a project with PIX4Dengine Cloud API. Once the project has been created, in order to process it, the following endpoint must be called:

POST on https://cloud.pix4d.com/project/api/v3/projects/{id}/start_processing/ See the full documentation for this endpoint.

The request body includes:

{
  "tags": ["string"]
}

Depending on the type of processing, not all of the parameters in the request body are required. There are different types of processing:

  1. Nadir images (3d-maps)
     • Flat terrain
  2. Oblique images
  3. Mobile processing
  4. Building reconstruction projects

Unless explicitly specified, processing types are mutually exclusive.

1. Nadir

A faster processing pipeline for nadir datasets with the newest algorithms that yields better results and has better management of coordinate systems. It supports vertical coordinate systems over an ellipsoid, geoid model, or user-defined constant geoid undulation. This pipeline also produces a 3D mesh with improved visualization in the PIX4Dcloud viewer.

This is the default pipeline if nothing is specified.

It can produce the following outputs:

  • Orthophoto
  • DSM
  • Point cloud
  • 3D mesh

Similarly to the other pipelines, it can be selected by using nadir (or equivalently 3d-maps) in the tags parameter of the processing options payload:

{
  "tags": [
    "nadir"
  ]
}
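
For example, a start_processing call selecting this pipeline could look like the following sketch (assuming the `requests` library and the PROJECT_ID / PIX4D_ACCESS_TOKEN values from the first example):

import os
import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
project_id = os.environ["PROJECT_ID"]
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# Start processing with the nadir pipeline explicitly selected.
response = requests.post(f"{API_BASE}/projects/{project_id}/start_processing/",
                         headers=headers, json={"tags": ["nadir"]})
response.raise_for_status()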

Flat terrain

Enables specific settings for a scene that is particularly flat, such as a field with few or no vertical structures (trees or buildings). To enable these settings, use the flat tag in addition to the nadir one above in the tags parameter of the processing options payload:

{
  "tags": [
    "nadir",
    "flat"
  ]
}

Note that this tag is only valid for the nadir processing pipeline (i.e. together with the nadir or 3d-maps tags), not for the oblique or building processing pipelines.

2. Oblique

A faster processing pipeline for oblique datasets with the newest algorithms that yields better results and has better management of coordinate systems (e.g. it supports vertical coordinate systems: ellipsoidal, geoid, or user-defined constant geoid undulation). This pipeline also produces a 3D mesh with improved visualization in the PIX4Dcloud viewer.

It can produce the following outputs:

  • Orthophoto
  • DSM
  • Point cloud
  • 3D mesh

Similarly to the other pipelines, it can be selected by using oblique in the tags parameter of the processing options payload:

{
  "tags": [
    "oblique"
  ]
}

3. Mobile processing

The API provides a third processing pipeline for images (and, optionally, depth data) captured with PIX4Dcatch. This photogrammetric pipeline is optimized for this type of data and produces better and faster results. It can be used with images alone, or, if your mobile device is equipped with a LiDAR scanner, the pipeline will use both images and depth data. The outputs generated after processing are the orthophoto, DSM, point cloud, and 3D mesh. This pipeline is automatically selected when using images captured with PIX4Dcatch.

The LiDAR scanner captures depth information during the image acquisition. These LiDAR points will compensate for the lack of 3D points over reflective and low-texture surfaces.

More information on combining photogrammetry and LiDAR can be found in this article: https://www.pix4d.com/blog/lidar-photogrammetry

4. Building Reconstruction Projects

For images captured from oblique flights around targets with featureless facades, such as walls of a uniform color and texture, the building reconstruction pipeline can provide higher-quality results than standard processing would otherwise.

This pipeline can be selected by passing building in the tags parameter:

{
  "tags": [
    "building"
  ]
}

Limitations

  • Only RGB cameras and nadir flights are supported. See the full list of supported cameras
  • The outputs generated are: Orthophoto, DSM, and log file (no point cloud or 3D mesh is generated)
  • It is not possible to define specific processing options. The computation will always use the default parameters and will produce an ortho and DSM with a default resolution
  • If the pipeline is used in areas with a lot of height changes (urban areas for example), artifacts in the ortho might appear

Errors

In the event of a processing failure, an error code and reason are given in the project details API response. Error codes are intended to be machine-readable, while error reasons are human-readable messages to aid in debugging.

The following table describes the possible error codes and their corresponding reasons:

| Code | Reason | Potential Mitigation |
|------|--------|----------------------|
| 10001 | An unexpected error occurred. | |
| 10002 | Processing exceeded allotted time. | |
| 10003 | Processing exceeded available resources. | |
| 10101 | Failed to create cameras. More information will be available in the processing log. | Refer to the processing log for additional details. |
| 10201 | Failed to calibrate a sufficient number of cameras. | Verify the image quality and overlap. |
| 10402 | Point cloud generation failed. | Verify the image quality and overlap. |
| 10410 | Processing failed. | |
| 10411 | Failed to densify sparse point cloud. | Verify the quality of the calibration. |
| 10501 | 3D textured mesh generation failed. | Ensure that the dense point cloud consists of a single block. |
| 10601 | The input data is not valid. | |
| 10611 | The selected output CRS is not isometric. | Use a valid isometric CRS. |
| 10612 | The selected output CRS is not projected or arbitrary. | Use a valid projected or arbitrary CRS. |
| 10613 | An output CRS cannot be defined without a horizontal CRS component. | Use a valid CRS with a horizontal component. |
| 10614 | A geoid model or geoid height cannot be specified without a CRS vertical component. | Remove the geoid model or add a vertical component to the CRS. |
| 10615 | A geoid height cannot be specified with a geoid model. | Remove the geoid height or add a geoid model to the CRS. |

How to create Sites and assign Projects to them

Here we will guide you through creating a "Site" to group Projects into a timeline view.

The primary use case for this is to group Projects of the same location, created over a time period, for example to track a building construction project.

In the Cloud Frontend you can compare these projects easily to see the differences between any of the processed assets in both 2D and 3D views.

1. Creating a Site

POST on https://cloud.pix4d.com/project/api/v3/project-groups/ specifying a name and setting the project_group_type to bim.

curl --request POST --url https://cloud.pix4d.com/project/api/v3/project_groups/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json" \
  --data '{"name": "Construction site 123", "project_group_type": "bim"}'

This will return the information about the Site (ProjectGroup) you have created, including its id.

{
  "name": "Construction site 123",
  "id": 112233,
  ...
}

2. Assigning an existing Project to a Site

To assign a Project to a Site you can PUT the project into the project_group (a.k.a Site) using the move_batch endpoint. You will need to supply the Organization's uuid in the owner_uuid field.

Note: using the PATCH method on the Project detail endpoint (project/api/v3/projects/<id>/) to move projects is deprecated and will be removed at some point in Q3 2025.

curl --request PUT \
  --url https://cloud.pix4d.com/common/api/v4/drive/move_batch/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header 'content-type: application/json' \
  --data '{
  "owner_uuid": "58101312-de35-4a22-a951-943eba20a041",
  "source_nodes": [
    {
      "type": "project",
      "uuid": "0c26579b-f2ad-4dfb-a253-a4c43dbbbf53"
    }
  ],
  "target_type": "project_group",
  "target_uuid": "50df7f65-f480-4fcc-9f86-32d8d7724689"
}'

3. Removing a Project from a Site

To un-assign a Project from a Site you can PUT the project into the organization (or even a folder) using the move_batch endpoint. You will need to supply the Organization's uuid in the owner_uuid field.

Note: using the PATCH method on the Project detail endpoint (project/api/v3/projects/<id>/) to move projects is deprecated and will be removed at some point in Q3 2025.

curl --request PUT \
  --url https://cloud.pix4d.com/common/api/v4/drive/move_batch/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header 'content-type: application/json' \
  --data '{
  "owner_uuid": "58101312-de35-4a22-a951-943eba20a041",
  "source_nodes": [
    {
      "type": "project",
      "uuid": "0c26579b-f2ad-4dfb-a253-a4c43dbbbf53"
    }
  ],
  "target_type": "organization",
  "target_uuid": "58101312-de35-4a22-a951-943eba20a041"
}'

How to deal with coordinate systems

1. Horizontal Input coordinate system

The images which are uploaded for processing can have two different coordinate systems:

  • WGS84

The image tags include latitude, longitude and ellipsoidal height with respect to WGS84 and will be read automatically by the software. In that case, the input coordinate system is set to WGS84.

  • Arbitrary

The images do not have any geolocation, in which case the input coordinate system is set to arbitrary.

No other input coordinate systems are supported.

2. Horizontal Output coordinate system

This is the coordinate system to which the outputs will refer.

Default output coordinate system

When nothing is specified, the output coordinate system is set up by default:

  • If the input coordinate system is WGS84, the default output coordinate system will be WGS84 / UTM XX, where XX, the UTM zone, depends on the position of the images

  • If the input coordinate system is arbitrary, the output coordinate system will also be arbitrary

User-defined output coordinate system

It is possible to define an output coordinate system when the project is created:

POST on https://cloud.pix4d.com/project/api/v3/projects/

One of the parameters of the request body is coordinate_system which can be either:

  • a full WKT string representing the coordinate system (can be an arbitrary one)
  • a valid EPSG code, given in the format EPSG:2056 (only projected coordinate systems are supported)
  • In order to get the result in an arbitrary system, the WKT string can include either:
    • the value ARBITRARY_METERS that will produce the result in an arbitrary default coordinate system in meters
    • the value ARBITRARY_FEET that will produce the result in an arbitrary default coordinate system in feet
    • the value ARBITRARY_US_FEET that will produce the result in an arbitrary default coordinate system in US survey feet
{
  "coordinate_system": "string"
}
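
As an illustration, creating a project whose outputs are referenced to the Swiss LV95 system by EPSG code could look like the sketch below (assuming the `requests` library and a valid access token; the project name and EPSG code are just examples):

import os
import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# Create a project with EPSG:2056 as the output coordinate system.
response = requests.post(f"{API_BASE}/projects/", headers=headers,
                         json={"name": "Project in LV95",
                               "coordinate_system": "EPSG:2056"})
response.raise_for_status()
print(response.json()["id"])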

3. Vertical coordinate systems

  • Vertical Input coordinate system

The vertical input coordinate system will be ellipsoidal.

  • Vertical output coordinate system

The vertical output coordinate system will always be EGM 96 Geoid.

4. Getting WKT strings of standard CS

spatialreference.org hosts a database of EPSG-registered coordinate systems which should cover most needs related to horizontal CS. For example, to select the Swiss coordinate system CH1903, one would search for it in the database and export it in OGC WKT format to get the WKT string as expected by PIX4D:

PROJCS["CH1903 / LV03",
    GEOGCS["CH1903",
        DATUM["CH1903",
            SPHEROID["Bessel 1841",6377397.155,299.1528128,
                AUTHORITY["EPSG","7004"]],
            AUTHORITY["EPSG","6149"]],
        PRIMEM["Greenwich",0,
            AUTHORITY["EPSG","8901"]],
        UNIT["degree",0.0174532925199433,
            AUTHORITY["EPSG","9122"]],
        AUTHORITY["EPSG","4149"]],
    PROJECTION["Hotine_Oblique_Mercator_Azimuth_Center"],
    PARAMETER["latitude_of_center",46.9524055555556],
    PARAMETER["longitude_of_center",7.43958333333333],
    PARAMETER["azimuth",90],
    PARAMETER["rectified_grid_angle",90],
    PARAMETER["scale_factor",1],
    PARAMETER["false_easting",600000],
    PARAMETER["false_northing",200000],
    UNIT["metre",1,
        AUTHORITY["EPSG","9001"]],
    AXIS["Easting",EAST],
    AXIS["Northing",NORTH],
    AUTHORITY["EPSG","21781"]]

Coordinate systems must comply with the WKT standards:

  • Geographic horizontal system e.g. WGS84, in this case the WKT starts with GEOGCS
  • Projected horizontal system e.g. WGS84 / UTM zone 32, in this case the WKT starts with PROJCS
  • Arbitrary horizontal system, in this case the WKT starts with LOCAL_CS

How to use Ground Control Points and Manual Tie Points

Learn how to use Ground Control Points (GCP) or Manual Tie Points (MTP) and Check Points in the computation.

A Ground Control Point (GCP) is a characteristic point whose coordinates are known. The coordinates have been measured with traditional surveying methods or obtained from other sources (LiDAR, older maps of the area, Web Map Service). GCPs are used to georeference a project and reduce the noise.

A Manual Tie Point (MTP) is a characteristic point whose coordinates are not known, but which is visible and accurately identifiable in several images, e.g. the corner of a wall. It is used to help the photogrammetry process join the images of the scene.

Ground Control Points (GCPs)

The use of GCPs is possible with all PIX4D engines.

Once the project has been created and before processing it, it is possible to pass GCP coordinates which will be used in the computation:

POST on /project/api/v3/projects/{id}/gcp/register ({id} is the project ID)

Request body is as follows:

{
  "gcps": [
    {
        "name": "GCP_123",
        "point_type": "CHECKPOINT",
        "x": 1.23,
        "y": 45.2,
        "z": 445.87,
        "xy_accuracy": 0.02,
        "z_accuracy": 0.02
    },
    {...}
  ]
}
  • "name": It must be unique for each of the points
  • "point_type": It can be either "CHECKPOINT" or "GCP". This article explains the difference between both
  • "x","y","z": Coordinates of the point. They have to refer to the output coordinate system of the project and in the same units as the output coordinate system. Geographical coordinates are not supported
  • "xy_accuracy","z_accuracy": The planimetric and altimetric accuracy of the GCPs or Check Points

Important

The coordinates of a GCP are:

  • "x" : Coordinate in East direction (or West in some specific coordinate systems as in the example below)
  • "y" : Coordinate in North direction (or South in some specific coordinate systems as in the example below)
  • "z" : The altitude with respect to the ellipsoid of the geoid

Although most coordinate systems are defined as above, there are some cases where the orientations of the axes are different. Two examples are a coordinate system in Japan and another one in South Africa:

  • JGD2011 / Japan Plane Rectangular CS VI : The "x" coordinate points to the North and the "y" points to the East. In this case, the x,y coordinates have to be flipped so that the "x" in the request body points to the East and the "y" points to the North.
  • Cape / Lo17 : The "x" coordinate points to the South and the "y" points to the West. In this case, the x,y coordinates have to be flipped so that the "x" in the request body points to the West and the "y" points to the South.

More information about GCPs

Manual Tie Points (MTPs)

MTPs can be created similarly to GCPs, but omitting the georeferencing, and through a different endpoint:

POST on /project/api/v3/projects/{id}/mtp/ ({id} is the project ID)

Request body is as follows:

{
  "mtps": [
    {
        "name": "MTP_456",
        "is_checkpoint": false,
    },
    {...}
  ]
}
  • "name": It must be unique for each of the points in the project.
  • "is_checkpoint": Similar to the GCP point_type. This article explains the difference between both.

GCP and MTP Marks

Once the GCP or MTP data has been registered, it is also necessary to pass the marks of each GCP/MTP, in other words the pixel coordinates of the GCPs/MTPs in each of the images.

For GCP and MTP Marks the gcp field is used for the name of the GCP or MTP.

POST on /project/api/v3/projects/{id}/mark/register/ ({id} is the project ID)

Request body is as follows:

{
  "marks": [
    {
      "gcp": "GCP_123",
      "photo": "user-123/project-354/my_file.jpg",
      "x": 1.23,
      "y": 45.2,
    },
    {...}
  ]
}
  • "gcp": Name of the GCP/MTP registered in the previous step
  • "photo" : Photo s3_key where the GCP/MTP has been marked (each GCP/MTP must be marked in at least two photos)
  • "x" and "y" : Pixel coordinates (units are pixels) of the mark in the Photo

In order to mark the GCP/MTP in the images, it is possible to use PIX4Dmatic or other third party applications.
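
Put together, registering one GCP and its marks could look like the Python sketch below (assuming the `requests` library, the PROJECT_ID / PIX4D_ACCESS_TOKEN environment variables, and image s3_keys taken from the photos endpoint; all coordinate and pixel values are illustrative):

import os
import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
project_id = os.environ["PROJECT_ID"]
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# 1. Register the GCP with its known coordinates (in the output CRS, not geographic).
gcp_payload = {"gcps": [{"name": "GCP_123", "point_type": "GCP",
                         "x": 2533210.12, "y": 1152810.55, "z": 445.87,
                         "xy_accuracy": 0.02, "z_accuracy": 0.02}]}
requests.post(f"{API_BASE}/projects/{project_id}/gcp/register",
              headers=headers, json=gcp_payload).raise_for_status()

# 2. Register the pixel marks of that GCP (each point must be marked in at least two photos).
marks_payload = {"marks": [
    {"gcp": "GCP_123", "photo": "user-123/project-354/P0350035.JPG", "x": 1204.5, "y": 873.2},
    {"gcp": "GCP_123", "photo": "user-123/project-354/P0360036.JPG", "x": 2210.0, "y": 1541.7},
]}
requests.post(f"{API_BASE}/projects/{project_id}/mark/register/",
              headers=headers, json=marks_payload).raise_for_status()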

How to process a project with AutoGCP

This guide describes the specific case of using Pix4D AutoGCP to automatically mark GCPs and process a project with PIX4Dengine Cloud API.

This allows you to simply upload GCP coordinates and images, and the system will take care of marking GCP targets in the images. If you have manually generated the marks data then see this article on how to use that data directly.

Creating the project and uploading images is the same as in other examples.

For information on how to optimally set out your marks on your survey site see this article.

1. Create your project, upload the images and set the processing options

This is the same as in other examples. You must also set the coordinate_system when creating the project. This should be the same coordinate system as your GCPs.

2. Register GCPs

This is done as described in the GCPs section above.

3. Choose the appropriate pipeline for your data

AutoGCP is available when one of the following processing pipelines is used:

4. Processing

Call the start processing endpoint as with all processing.

Processing with AutoGCP will take some extra time, due to the extra compute required to analyse and mark the GCPs on the input images.

How to define a region of interest

This feature allows the user to define a region of interest, which means no reconstructions will be created outside the defined area when processing a project with PIX4Dengine Cloud API.

(screenshot: region of interest in the UI)

Once the project has been created, in order to define the region of interest, the following endpoint must be called:

POST on https://cloud.pix4d.com/project/api/v3/projects/{id}/processing_options/ See the full documentation for this endpoint.

The request body includes:

{
  "area": {
    "plane": {
      "vertices3d": [
        [
          "float",
          "float",
          "float"
        ],
        [
          "float",
          "float",
          "float"
        ],
        [
          "float",
          "float",
          "float"
        ],
        [
          "float",
          "float",
          "float"
        ]
      ],
      "outer_boundary": [
        "int",
        "int",
        "int",
        "int"
      ]
    },
    "thickness": "float"
  }
}

The only required field to set a region of interest is the plane, which consists of:

  • vertices3d defines a list of 3D locations in WGS 84. For now, the altitude or z value of the location is not considered and the defined areas are applied only in the 2D plane.
  • outer_boundary defines the order in which the locations stored in vertices3d are connected when drawing the area.

There is also an optional field inside plane named inner_boundaries to define areas inside the main area (defined with the outer_boundary) that are excluded from processing.

Finally, the thickness field is defined as a limit distance from the plane in the normal direction. If not specified, it is assumed to be infinite (the usual case when limiting the region of interest).
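
As a sketch, sending a simple rectangular region of interest to the processing options endpoint could look like this (assuming the `requests` library and the PROJECT_ID / PIX4D_ACCESS_TOKEN environment variables; the WGS 84 vertices are illustrative):

import os
import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
project_id = os.environ["PROJECT_ID"]
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# A rectangular region of interest defined by four WGS 84 vertices (lon, lat, alt).
roi = {"area": {"plane": {
    "vertices3d": [[3.24830, 43.41521, 0], [3.24841, 43.41526, 0],
                   [3.24847, 43.41516, 0], [3.24834, 43.41513, 0]],
    "outer_boundary": [0, 1, 2, 3]
}}}

response = requests.post(f"{API_BASE}/projects/{project_id}/processing_options/",
                         headers=headers, json=roi)
response.raise_for_status()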

Examples

  1. Standard region of interest
{
  "area": {
    "plane": {
      "vertices3d": [
        [
          3.248295746230309,
          43.415212850276255,
          0
        ],
        [
          3.2484144251306066,
          43.41525557252694,
          0
        ],
        [
          3.248465887662594,
          43.415162880462645,
          0
        ],
        [
          3.2483382815883806,
          43.415126642785765,
          0
        ]
      ],
      "outer_boundary": [
        0,
        1,
        2,
        3
      ]
    }
  }
}
  2. Region of interest with inner excluded zones
{
  "area": {
    "plane": {
      "vertices3d": [
        [
          3.248295746230309,
          43.415212850276255,
          0
        ],
        [
          3.2484144251306066,
          43.41525557252694,
          0
        ],
        [
          3.248465887662594,
          43.415162880462645,
          0
        ],
        [
          3.2483382815883806,
          43.415126642785765,
          0
        ],
        [
          3.248345633378664,
          43.415194540731015,
          0
        ],
        [
          3.248360336959232,
          43.415163261911765,
          0
        ],
        [
          3.248417050769994,
          43.41518500450734,
          0
        ]
      ],
      "outer_boundary": [
        0,
        1,
        2,
        3
      ],
      "inner_boundaries": [
        [
          4,
          5,
          6
        ]
      ]
    }
  }
}

How to compute a volume

A volume can be computed for a given project available on PIX4Dcloud and it is computed between a base surface boundary and the terrain surface.

The base surface boundary is given as a set of vertex coordinates and defines the base plane for the volume calculation.

Before running this computation, make sure that:

  • Project is in a PROCESSED state
  • DSM exists for the project

1. HTTP request

To compute the volume, send a POST request to https://api.webgis.pix4d.com/v1/project/{id}/volumes/.

2. Payload body fields

The payload body fields are contained in the list below.

The payload must be in the same coordinate system and the same units as the project.

| Parameter | Type |
|-----------|------|
| base_surface | String |
| coordinates | Array[Array[Number]] |
| custom_elevation | Number |
  • Base surface

    This parameter selects the base plane for the volume calculation. Accepted values are:

    • average
    • custom
    • fitPlane
    • triangulated
    • highest
    • lowest

    When using custom, the custom_elevation parameter is required (see below).

    More information about the different base surfaces at Menu View > Volumes > Sidebar > Objects

  • Coordinates

    Each set of coordinates refers to a vertex of the boundary and they must be given with respect to the output project coordinate system and in the same units as the projects (meters, feet or US foot).

  • custom_elevation

    Optional. The elevation MUST be provided when the base_surface is set to custom. If custom_elevation is provided, only X,Y vertex coordinates are needed, the Z is the custom_elevation.

  • units

    Optional. If not provided, the preferred units will be used to calculate the volume

    Accepted values are:

    • m (Metres)
    • yd (Yards)
    • ydUS (US Survey Yards)

3. Response body fields

The response parameters will be in the same coordinate system and the same units as the project. If the project is in meters, the volume will be computed in m³. If the project is in feet or US survey feet, the volume will be computed in yd³.

| Parameter | Type |
|-----------|------|
| cut | Number |
| cut_error | Number |
| fill | Number |
| fill_error | Number |
  • cut

    The volume that is above the volume base. The volume is measured between the volume base and the surface defined by the DSM.

  • cut_error

    Error estimation of the cut volume.

  • fill

    The volume that is below the volume base. The volume is measured between the volume base and the surface defined by the DSM.

  • fill_error

    Error estimation of the fill volume.

4. Response and failure modes

HTTP status code 200 - OK

If the request is successful, the body includes volumes and error estimations. The order of the response body fields doesn’t matter and is not guaranteed.

{
  "cut": Number,
  "cut_error": Number,
  "fill": Number,
  "fill_error": Number
}

HTTP status code 504 - Gateway Timeout

{
  "message": "Network error communicating with endpoint"
}
Cause: Likely a networking issue either in the API Gateway, or in any server that failed to route the request properly to the next point
Solution: Resend the request
{
  "message": "Endpoint request timed out"
}
Cause: The request took more than 29 seconds to compute. One of the reasons might be that too many vertices were sent to describe the polygon boundary
Solution: Reduce the number of polygon vertices

HTTP status code 404 - Not found

{
  "title": "404 Not Found"
}
Cause: The project doesn't exist
Solution: Check that the project exists

HTTP status code 400 - Bad request

{
  "title": "No COG DSM found for this project"
}
Cause: The project doesn't have a DSM. There may also be a delay between registering a DSM and the volume calculation being available, as PIX4Dcloud creates the COG DSM
Solution: Check that the project has a DSM. Allow some time for the system to register the DSM and create the COG DSM
{
  "title": "The polygon doesn't overlap the dataset"
}
Cause: The vertices of the defined polygon lie outside the DSM
{
  "title": "Invalid Polygon"
}
Cause: The interpolation of elevations inside the polygon boundary failed
{
  "title": "Too much data requested"
}
Cause: The DSM doesn't have a high enough overview level; as a result it is not possible to extract the maximum amount of data that is defined by the API
Solution: A possible solution could be to reduce the size of the area for which the volume needs to be computed. If that is not a suitable solution or the issue persists, contact Support

5. Examples

For a complete demonstration of a volume computation, see this Jupyter Notebook.

The following examples show the request body to POST to https://api.webgis.pix4d.com/v1/project/{id}/volumes/

Calculate a volume with a triangulated base surface

{
  "base_surface": "triangulated",
  "coordinates": [
    [328726.692, 4688271.030, 159.725],
    [328728.351, 4688298.208, 150.376],
    [328750.527, 4688289.860, 150.250],
    [328744.430, 4688271.645, 150.056]
  ]
}

Calculate a volume with a custom base surface

{
  "base_surface": "triangulated",
  "coordinates": [
    [419430.327, 3469059.806],
    [419429.338, 3469056.812],
    [419431.319, 3469060.816],
    [419433.329, 3469058.803]
  ],
  "custom_elevation": 40.52
}

Note: The coordinates of the vertices are given with respect to the output project coordinate system and in the same units as the projects (meters, feet or US foot).
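
A Python sketch of such a request (assuming the `requests` library, a valid access token, and a processed project id; it reuses the triangulated example above) could be:

import os
import requests

project_id = os.environ["PROJECT_ID"]
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# Compute a volume against a triangulated base surface.
payload = {
    "base_surface": "triangulated",
    "coordinates": [
        [328726.692, 4688271.030, 159.725],
        [328728.351, 4688298.208, 150.376],
        [328750.527, 4688289.860, 150.250],
        [328744.430, 4688271.645, 150.056],
    ],
}

response = requests.post(
    f"https://api.webgis.pix4d.com/v1/project/{project_id}/volumes/",
    headers=headers, json=payload)
response.raise_for_status()
result = response.json()
print("cut:", result["cut"], "fill:", result["fill"])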

How to upload and process data through a ZIP file

Capture

Use PIX4Dcatch to capture your scene.

Once the capture is complete, select "Export all data" to generate the ZIP file.

PIX4Dcloud Upload

Create your project

This is the same as in other examples.

Upload the ZIP file

Instead of uploading images, upload the ZIP file exported from PIX4Dcatch to S3.

aws s3 cp inputs.zip "s3://${S3_BUCKET}/${S3_BASE_PATH}/"

Then, register the ZIP file in place of the images.

curl --request POST \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json" \
  --data \
    "{
      \"input_file_keys\": [
        \"${S3_BASE_PATH}/inputs.zip\"
      ]
    }" \
  https://cloud.pix4d.com/project/api/v3/projects/$PROJECT_ID/inputs/bulk_register/

ZIP Contents

The ZIP file exported from PIX4Dcatch will be compatible with PIX4Dcloud.

If you need to modify the ZIP file's contents, however, there are certain things you should be aware of:

  • It must contain a file at the root level called manifest.json
  • This file must provide a semantic description of all the files in the ZIP archive, except manifest.json itself
  • The manifest.json must conform to the JSON schema

Passing GCPs in the ZIP contents

Optionally, you can also pass the list of GCPs and Marks in the ZIP contents, following the JSON schema defined for it.

The input_control_points.json file must comply with the rules below.

  • The zip bundle must be completely self-consistent. This means all the Images and GCPs referenced by the Marks must be present in the zip itself.
  • The id for each GCP must be unique.
  • The CRS passed in the GCPs list:
    • Only EPSG format is supported
    • The definition property in the CRS should have:
      • A string in the format Authority:code+code where the first code is for a 2D CRS and the second one is for a vertical CRS (e.g. EPSG:4326+5773).
      • A string in the form Authority:code+Authority:code where the first code is for a 2D CRS and the second one is for a vertical CRS (e.g. EPSG:4326+EPSG:5773).
      • A string in the form Authority:code where the code is for a 2D or 3D CRS (e.g. EPSG:4326).
    • In the CRS, an optional geoid_height can be passed. Please note that geoid is not supported in the current schema.
    • All GCPs must contain the same CRS
    • If the output CRS is not already set, the GCP CRS will be used as the output CRS
    • If the output CRS is already set using the processing_options endpoint, the GCP coordinates will be transformed into the output CRS
    • If the output CRS is already set in the project coordinate_system, it will give a validation error.

If any of these requirements are not met, your project will not process and will be marked as errored.

Example

An example zip file layout looks like this:

├── input_control_points.json
├── logs
│   ├── log.json
│   └── test
│       └── abc.txt
├── manifest.json
├── new_sample
│   └── new
│       └── sample_image4.jpg
├── sample
│   └── sample_image3.jpg
├── sample_image1.jpg
└── sample_image2.jpg

The manifest.json from this zip file will contain:

{
    "inputs": [
        "sample_image1.jpg",
        "sample_image2.jpg",
        "sample/sample_image3.jpg",
        "new_sample/new/sample_image4.jpg"
    ],
    "log_files": [
        "logs/log.json",
        "logs/test/abc.txt"
    ],
    "input_control_points": "input_control_points.json"
}

and the input_control_points.json from this zip file will have the list of all GCPs and the Marks as below.

{
    "format": "application/opf-input-control-points+json",
    "version": "0.2",
    "gcps": [
        {
            "id": "gcp0",
            "geolocation": {
                "crs": {
                    "definition": "EPSG:4265+EPSG:5214",
                    "geoid_height": 14
                },
                "coordinates": [
                    1,
                    2,
                    3
                ],
                "sigmas": [
                    5,
                    5,
                    10
                ]
            },
            "marks": [
                {
                    "photo": "sample_image1.jpg",
                    "position_px": [
                        458,
                        668
                    ],
                    "accuracy": 1.0
                }
            ],
            "is_checkpoint": true
        }
    ],
    "mtps": []
}

How to retrieve inputs, outputs and reports

Once the project has been processed, a user can retrieve various data which are stored on the servers.

  • Single files: any output, report, or input image can be requested individually
  • ZIP files: All of the input images and available outputs and reports can be requested together in a ZIP file

Single files

1. Get "s3_key" and "s3_bucket" of the file which you want to retrieve

  • Possible outputs and reports

GET on https://cloud.pix4d.com/project/api/v3/projects/{id}/outputs/

The response body will include the s3_key and s3_bucket of all of the output types. The example below shows the point_cloud output:

{
  "result_type": "point_cloud",
  "output_type": "point_cloud",
  "availability": "done",
  "s3_key": "user-123123123123123123123123/project-741581/test_3dmaps/2_densification/point_cloud/test_3dmaps_group1_densified_point_cloud.las",
  "s3_bucket": "prod-pix4d-cloud-default",
  "s3_region": "us-east-1",
  "output_id": 147817230
}

A list of the main outputs which can be obtained is shown below:

| Result type | Output type | Description |
|-------------|-------------|-------------|
| ortho | ortho | Transparent orthomosaic in TIFF format |
| ortho | ortho_rgba_bundle | ZIP file containing transparent orthomosaic in TIFF format, .prj and .tfw files |
| ortho | ortho_rgb | Opaque orthomosaic in TIFF format |
| ortho | ortho_rgb_bundle | ZIP file containing opaque orthomosaic in TIFF format, .prj and .tfw files |
| ortho | ortho_cloud_optimized | Ortho in Cloud-Optimized GeoTIFF format |
| dsm | dsm | Digital Surface Model (DSM) in TIFF format |
| dsm_cloud_optimized | dsm_cloud_optimized | DSM in Cloud-Optimized GeoTIFF format |
| point_cloud | point_cloud | Generated point cloud in LAS or LAZ format |
| 3d_mesh_obj | 3d_mesh_obj_zip | ZIP file containing .obj, .mtl, and .jpg files |
| 3d_mesh_obj | 3d_mesh_fbx | 3D mesh in FBX format |
| 3d_mesh_obj | b3dm_js | 3D mesh in Cesium format (index file) |
| ndvi | ndvi | Generated NDVI layer in TIFF format |
| quality_report | quality_report | PDF file with information about the process |
| xml_quality_report | xml_quality_report | Quality Report in XML format |
| mapper_log | mapper_log | Log file of the process in text format |
| opf_project | opf_project | OPF project document |

Notes: Depending on the processing options and type of processing used, some outputs might not be generated.

  • Input images

GET on https://cloud.pix4d.com/project/api/v3/projects/{id}/photos/

The response body will include the s3_key and s3_bucket of all of the input images. The example below shows the image IMG_4082.JPG:

{
  "id": 165965565,
  "s3_key": "user-105e0ece-f221-467e-bab0-de5fbf004b61/project-741581/images/IMG_4082.JPG",
  "thumbs_s3_key": {
    "legacy_png_512": "user-105e0ece-f221-467e-bab0-de5fbf004b61/project-741581/photo_thumbnails/images/IMG_4082_thumb.png"
  },
  "s3_bucket": "prod-pix4d-cloud-default",
  "width": 4000,
  "height": 3000,
  "excluded_from_mapper": null
}

2. Get the files from AWS S3

It is recommended to use the AWS CLI or the Python boto3 library, but other tools can work as well. First, retrieve the AWS S3 credentials associated with this project:

GET on https://cloud.pix4d.com/project/api/v3/projects/{ID}/s3_credentials/

The response contains all the S3 information we need, in particular the access_key, secret_key and session_token.

{
  "access_key": "ASIATOCJLBKSU2CVJIHR",
  "secret_key": "5OGGBSvn8Sesdu8l...<remainder of the secret key>",
  "session_token": "FwoGZXIvYX...<remainder of security token>",
  "expiration": "2021-05-10T21:55:47Z",
  "bucket": "prod-pix4d-cloud-default",
  "key": "user-199a56ab-7ac6-d6d1-4778-5b4d338fc9de/project-883349",
  "server_time": "2021-05-19T09:55:47.357641+00:00",
  "region": "us-east-1"
}

The S3 credentials can be stored in our environment, so that they will get picked up by the AWS CLI tool.

export AWS_ACCESS_KEY_ID=ASIATOCJLBKSU2CVJIHR
export AWS_SECRET_ACCESS_KEY='5OGGBSvn8Sesdu8l...<remainder of the secret key>'
export AWS_SESSION_TOKEN='AQoDYXdzEJr...<remainder of security token>'
export AWS_REGION='us-east-1'

Once the credentials have been set, the copy command can be run to get a specific file using its unique "s3_key" and "s3_bucket":

aws s3 cp s3://${S3_BUCKET}/${S3_KEY} ./

As an example, in order to get the point cloud from the example above, install AWS CLI and run the following:

aws s3 cp s3://prod-pix4d-cloud-default/user-123123123123123123123123/project-741581/test_3dmaps/2_densification/point_cloud/test_3dmaps_group1_densified_point_cloud.las ./

It would copy the test_3dmaps_group1_densified_point_cloud.las file from AWS S3 to your working directory.
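
The same lookup and download can be scripted with boto3. The sketch below assumes the `boto3` and `requests` libraries, the PROJECT_ID / PIX4D_ACCESS_TOKEN environment variables, and that the outputs endpoint returns a list of entries shaped like the example above:

import os

import boto3
import requests

API_BASE = "https://cloud.pix4d.com/project/api/v3"
project_id = os.environ["PROJECT_ID"]
headers = {"Authorization": f"Bearer {os.environ['PIX4D_ACCESS_TOKEN']}"}

# Temporary S3 credentials scoped to this project.
creds = requests.get(f"{API_BASE}/projects/{project_id}/s3_credentials/", headers=headers).json()
s3 = boto3.client("s3",
                  aws_access_key_id=creds["access_key"],
                  aws_secret_access_key=creds["secret_key"],
                  aws_session_token=creds["session_token"],
                  region_name=creds["region"])

# Find the point cloud output and download it next to the script.
outputs = requests.get(f"{API_BASE}/projects/{project_id}/outputs/", headers=headers).json()
point_cloud = next(o for o in outputs if o["output_type"] == "point_cloud")
s3.download_file(point_cloud["s3_bucket"], point_cloud["s3_key"],
                 os.path.basename(point_cloud["s3_key"]))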

PIX4D OPF Project

This is the Open Photogrammetry Format from processing the project on PIX4Dcloud.

This can be read by PIX4Dmatic, and also using tooling such as pyopf.

The output type opf_project points to the project.opf top level file. This file then references all the other files in the project with relative paths, allowing you to download them as needed, using the same S3 credentials for a Project as in the above examples.

Once you have downloaded the project.opf you can use pyopf to discover the files it references:

from pyopf import io
from pyopf import project as opf
from pyopf import resolve

# Load the top-level OPF index file downloaded from S3.
index_file = "/some/directory/structure/project.opf"
pix4d_project: opf.Project = io.load(index_file)

# Relative URIs of every resource referenced by the project items.
references = [resource.uri for item in pix4d_project.items for resource in item.resources]

# Resolve the project into typed objects, including the camera lists and image URIs.
objects = resolve.resolve(pix4d_project)
images = [camera.uri for camera_list in objects.camera_list_objs for camera in camera_list.cameras]

print("Prepend the following with your Project S3 prefix and download them:")
print(references)
print(images)

Notes:

  • This is only available for non-deprecated photogrammetry pipelines, and only for projects processed after April 2025.

You can then open the project.opf in PIX4Dmatic or perform further analysis on the OPF documents.

ZIP files

Input images

GET on https://cloud.pix4d.com/project/api/v3/projects/{id}/inputs/zip/

An email will be sent containing a URL to download a ZIP file with all of the input images.

Outputs

GET on https://cloud.pix4d.com/project/api/v3/projects/{id}/outputs/zip/

An email will be sent containing a URL to download a ZIP file with all of the outputs.

How to embed the complete 2D and 3D editor

This feature allows embedding the editor (both the 2D and 3D views) in an iframe. Please contact your sales representative to enable this feature. The user must specify the domain(s) where the PIX4Dcloud editor iframe will be embedded.

1. Get token permission

The feature works only with shared sites and datasets (information about share links can be found in this support article). This means that you first need to generate a token for your project (dataset) or project group (site).

Read-Only Token Permission

Example request:

curl --request POST \
 --header 'Content-Type: application/json' \
 --header 'Accept: application/json' \
 --header 'Authorization: **SECRET**' \
 --data \
    '{
      "enabled": true,
      "write": false,
      "type": "Project",
      "type_id": 1234
    }' \
  https://cloud.pix4d.com/common/api/v3/permission-token/

Example response:

{
  "token": "d8dfbe6d-93b1-4bca-af40-5c469e3530da",
  "enabled": true,
  "write": false,
  "type": "Project",
  "type_id": 1234,
  "creation_date": "2021-06-25T10:17:06.734910+02:00"
}

Read and Write Token Permission

Example request:

curl --request POST \
 --header 'Content-Type: application/json' \
 --header 'Accept: application/json' \
 --header 'Authorization: **SECRET**' \
 --data \
    '{
      "enabled": true,
      "type": "Project",
      "type_id": 1234
    }' \
  https://cloud.pix4d.com/common/api/v3/permission-token/

Example response:

{
  "token": "05a114e1-804f-4e6c-a094-5d3eb80d2119",
  "enabled": true,
  "write": true,
  "type": "Project",
  "type_id": 1234,
  "creation_date": "2021-06-25T10:17:06.734910+02:00"
}

The generated token will have the shape of a UUID v4 (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).

Note: Revoking (disabling) the share link will also disable your embedded editor. To do that, set 'enabled' to 'false':

curl --request PATCH \
 --header 'Content-Type: application/json' \
 --header 'Accept: application/json' \
 --header 'Authorization: **SECRET**' \
 --data \
    '{
      "enabled": false,
      "type": "Project",
      "type_id": 1234
    }' \
  https://cloud.pix4d.com/common/api/v3/permission-token/

2. iframe tag definition

Embedding should be done through an iframe tag (HTML iframe tag)

As an example:

<iframe
  src="https://embed.pix4d.com/cloud/default/dataset/256164/map?shareToken=b77fa15a-14ff-4c6e-b7eb-da05bad16bb2&lang=fr&theme=light"
  referrerpolicy="strict-origin-when-cross-origin"
  allow="geolocation"
  frameborder="0"
  width="100%"
  height="100%"
  allowfullscreen
></iframe>

Required Properties

  • src an embed URL to your site/dataset. In the “URL templates” section you’ll learn how to build different URLs for your users

  • referrerpolicy="strict-origin-when-cross-origin" is the minimal permission that lets us determine whether the white label is allowed to be displayed on the domain

Optional Properties

allow="geolocation" in order to let users use the geolocation feature of the editor.

frameborder, width, height recommended setup to make the editor seem seamless.

3. URL templates

1. Single datasets

https://embed.pix4d.com/cloud/default/dataset/:dataset_id:/:view?:?shareToken={share_token}

dataset_id id of the dataset

view? optional parameter that can be either map or model.

  • if nothing is passed, then we will decide which view is the best for that particular dataset
  • map forces map view
  • model forces model view

share_token the generated share token in the “Manage share link” step

2. Site

https://embed.pix4d.com/cloud/default/site/:site_id:/dataset/:dataset_id:/:view?:?shareToken=:share_token

site_id id of the site

dataset_id id of the dataset

view? optional parameter that can be either map or model.

  • if nothing is passed, then we will decide which view is the best for that particular dataset
  • map forces map view
  • model forces model view

share_token the generated share token in the “Manage share link” step

By defining additional query parameters, you can also define the language and a theme.

theme can be either light or dark. By default it’s dark.

lang en-US or ja or ko or it or es. By default it’s en-US.

For example, to get the embedded editor in French with the theme set to light, it would be:

https://embed.pix4d.com/cloud/default/dataset/256164/map?shareToken=b77fa15a-14ff-4c6e-b7eb-da05bad16bb2&lang=fr&theme=light

4. Testing

To test your local website before publishing to your domain, you can run a server on any port, known locally to your machine as localhost.

For example, if you have the iframe HTML code from the example above in an index.html file, you could run the following Python command to bring up a minimal web server:

python -m http.server 8000

Then navigate to localhost:8000/index.html in your local web browser to test the embed view.

Opening the index.html file without going through a webserver on localhost will fail with a 403: You are not authorized to access this website due to the Referer header not specifying an allowed domain.

Only localhost:* is available to new clients to test.

If you wish to deploy it to your own domain, you must reach out to your Sales representative at PIX4D and supply the domain names you plan to host the embed on (for example, domains for staging and production systems, so you can test before deploying); otherwise you will receive a 403 error.

Annotations Quickstart

The Annotation API allows for programmatic management of annotations. It allows the API user to perform the following operations:

Creating annotations

curl --location --request POST 'https://api.webgis.pix4d.com/v1/annotations/' \
--header 'Authorization: Bearer <insert JWT here>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "annotations": [
    {
      "entity_type": "Project",
      "entity_id": 123456,
      "properties": {
        "name": "My annotation",
        "color": "#FFFFFF80",
        "description": "My first annotation"
      },
      "geometry": {
        "coordinates": [
          2.38,
          57.322,
          0
        ],
        "type": "Point"
      }
    },
    {
      "entity_type": "Project",
      "entity_id": 123456,
      "properties": {
        "name": "My annotation",
        "color": "#FFFFFF80",
        "description": "My second annotation"
      },
      "geometry": {
        "coordinates": [
          3.49,
          68.433,
          0
        ],
        "type": "Point"
      }
    }
  ]
}'

A successful response will have a status of 201 CREATED and a body containing the annotation ids, similar to the one listed below.

{
  "annotations": [
    {
      "annotation_id": "Project_123456_62063898-531b-4389-93f9-ed5126338ff3",
      "success": true
    },
    {
      "annotation_id": "Project_697180_82f70b94-773c-47db-80dd-5576e569548f",
      "success": true
    }
  ]
}

The example above will have created two points in the project coordinate system. To visualise the annotations, go to PIX4Dcloud.

List annotations

curl --location --request GET 'https://api.webgis.pix4d.com/v1/annotations/?entity_type=Project&entity_id=697180' \
--header 'Authorization: Bearer <INSERT_JWT_HERE>'

A successful response will look something like this:

{
  "results": [
    {
      "version": "1.0",
      "entity_id": 697180,
      "entity_type": "Project",
      "id": "Project_697180_31c1e01d-39a9-4926-9054-87934cee3c69",
      "created": "2022-05-17T08:16:49.669183+00:00",
      "modified": "2022-05-17T08:16:49.669186+00:00",
      "tags": [ ... ],
      "geometry": { ... },
      "properties": {
        "visible": true,
        "camera_position": [ ... ],
        "description": "Description",
        "volume": { ... },
        "color_fill": "#00224488",
        "name": "Annotation 0",
        "color": "#00224488"
      },
      "extension": { ... }
    },
    {
      "version": "1.0",
      "entity_id": 697180,
      "entity_type": "Project",
      "id": "Project_697180_4c8ac186-5427-46b5-8347-c1ee374fd10f",
      "created": "2022-05-17T08:16:49.670391+00:00",
      "modified": "2022-05-17T08:16:49.670393+00:00",
      "tags": [ ... ],
      "geometry": { ... },
      "properties": {
        "visible": true,
        "camera_position": [ ... ],
        "description": "Description",
        "volume": { ... }
      },
      "extension": { ... }
    }
  ]
}

Deleting annotations

curl --location --request DELETE 'https://api.webgis.pix4d.com/v1/annotations/' \
--header 'Authorization: Bearer <INSERT JWT HERE>' \
--header 'Content-Type: application/json' \
--data-raw \
'{
  "annotations": [
    "Project_123456_12345678-1234-1234-1234-123456789abc",
    "Project_654321_fedcba98-fedc-fedc-fedc-fedcba987654",
  ]
}'

A successful response will look something like this:

{
  "annotations": [
    {
      "success": true,
      "annotation_id": "Project_123456_12345678-1234-1234-1234-123456789abc",
    },
    {
      "success": true,
      "annotation_id": "Project_654321_fedcba98-fedc-fedc-fedc-fedcba987654",
    }
  ]
}

How to organize projects on the drive

Here we will guide you through using Folders to organize your Datasets (Projects) and Sites (ProjectGroups), then moving resources within the organization, and finally navigating the resulting "resource tree" by listing or searching content.

Folders can be used not only to organize content but also to organize permissions, since access to a Folder (as well as to Projects and Project Groups) can be limited to certain users.

1. Creating a Folder

POST to https://cloud.pix4d.com/common/api/v4/folders/ specifying a name and a parent using parent_type and parent_uuid. parent_type must be one of organization or folder...

curl --request POST \
  --url https://cloud.pix4d.com/common/api/v4/folders/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header "Content-Type: application/json"
  --data '{
    "name": "Customer ABC",
    "parent_uuid": "f6dfd584-c4a2-43ae-9e61-672b7d1f5058",
    "parent_type": "organization"
  }'

This will return the information about the Folder you have created, including its uuid.

{
  "uuid": "65f7d6ab-2e46-4b30-8a3a-38fdaf54307a",
  "name": "Customer ABC"
}

2. Updating a Folder

To rename a Folder you can PATCH it to update the name field.

curl --request PATCH \
  --url https://cloud.pix4d.com/common/api/v4/folders/65f7d6ab-2e46-4b30-8a3a-38fdaf54307a/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header 'content-type: application/json' \
  --data '{
    "name": "Customer ABC Ltd"
  }'

The return value confirms the updated name.

{
  "name": "Customer ABC Ltd"
}

3. Deleting a Folder

To remove a Folder you can send a DELETE request. The Folder will be deleted along with all its descendants in the resource tree.

curl --request DELETE \
  --url https://cloud.pix4d.com/common/api/v4/folders/65f7d6ab-2e46-4b30-8a3a-38fdaf54307a/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" 

4. Creating a Project (or Project Group) inside a Folder

To create a Project (or Project Group) inside a Folder, simply add a parent_type and parent_uuid to the body of the request.

curl --request POST \
  --url https://cloud.pix4d.com/project/api/v3/projects/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header 'content-type: application/json' \
  --data '{
    "name": "Project 6",
    "parent_uuid": "65f7d6ab-2e46-4b30-8a3a-38fdaf54307a",
    "parent_type": "folder"
  }'

5. Moving resources within your organization

Once you have created Folders within your organization, you may want to move existing Projects and Project Groups (or other Folders) into them. To do this, use the move_batch endpoint at https://cloud.pix4d.com/common/api/v4/drive/move_batch/ as shown below.

curl --request PUT \
  --url https://cloud.pix4d.com/common/api/v4/drive/move_batch/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header 'content-type: application/json' \
  --data '{
  "owner_uuid": "58101312-de35-4a22-a951-943eba20a041",
  "source_nodes": [
    {
      "type": "project_group",
      "uuid": "d9c5bec2-e11f-4371-8640-3d1534e8e3b2"
    },
    {
      "type": "folder",
      "uuid": "bee32b9e-360c-404b-81d3-7a44ff88eb66"
    },
    {
      "type": "project",
      "uuid": "1ec0ba95-b16a-4ee5-9353-511ae7e46778"
    }
  ],
  "target_type": "folder",
  "target_uuid": "65f7d6ab-2e46-4b30-8a3a-38fdaf54307a"
}'

6. Listing contents of a Folder / root

Now that you have Projects and Project Groups arranged within Folders, you will find it useful to list these resources according to their location. The list endpoint of the Drive allows you to list resources within a given Folder or those at the "root" of the organization.

curl --request GET \
  --url 'https://cloud.pix4d.com/common/api/v4/drive/folder/e8f6731b-dc00-4e27-afb9-2740fb76c843/' \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}"

The paginated response will look like this:

{
  "count": 4,
  "next": null,
  "previous": null,
  "results": [
    {
      "legacy_id": 97389,
      "uuid": "72cd5e09-38a6-4490-b8e7-97486f01e45d",
      "type": "folder",
      "date": "2025-02-21T13:45:24.978372+01:00",
      "name": "Vancouver",
      "metadata": null
    },
    {
      "legacy_id": 97388,
      "uuid": "a2c58c0f-861f-4240-b646-f35290fc81eb",
      "type": "folder",
      "date": "2025-02-21T13:45:00.906934+01:00",
      "name": "Toronto",
      "metadata": null
    },
    {
      "legacy_id": 440328,
      "uuid": "50df7f65-f480-4fcc-9f86-32d8d7724689",
      "type": "project_group",
      "date": "2025-03-06T17:25:46.890181+01:00",
      "name": "Credit counter",
      "metadata": null
    },
    {
      "legacy_id": 1005051,
      "uuid": "0c26579b-f2ad-4dfb-a253-a4c43dbbbf53",
      "type": "project",
      "date": "2025-03-06T17:23:43.384397+01:00",
      "name": "CreditCounter",
      "metadata": null
    }
  ]
}

7. Searching resources by name

You can also search for resources within an Organization whose name matches a string.

curl --request GET \
  --url 'https://cloud.pix4d.com/common/api/v4/drive/organization/58101312-de35-4a22-a951-943eba20a041/search/?q=inside' \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}"

The paginated response is the same format as for the list endpoint:

{
  "count": 3,
  "next": null,
  "previous": null,
  "results": [
    {
      "legacy_id": 97389,
      "uuid": "72cd5e09-38a6-4490-b8e7-97486f01e45d",
      "type": "folder",
      "date": "2025-02-21T13:45:24.978372+01:00",
      "name": "Vancouver",
      "metadata": null
    },
    {
      "legacy_id": 438767,
      "uuid": "baf79c5a-c402-45c9-9bf8-411eed93507c",
      "type": "project_group",
      "date": "2025-02-21T13:46:02.684781+01:00",
      "name": "Vancouver 2",
      "metadata": null
    },
    {
      "legacy_id": 438766,
      "uuid": "65a6914f-8db8-4ba3-ac31-4d5e388beee3",
      "type": "project_group",
      "date": "2025-02-21T13:45:52.247435+01:00",
      "name": "Vancouver 1",
      "metadata": null
    }
  ]
}

8. Finding the "path" of a given resource within the "resource tree"

Given a particular resource (identified by its type and uuid), you can also retrieve its "path" in the resource tree, which can be used to build breadcrumbs.

curl --request GET \
  --url https://cloud.pix4d.com/common/api/v4/drive/project_group/baf79c5a-c402-45c9-9bf8-411eed93507c/path/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}"

A successful response will be in the following format:

{
  "path": [
    {
      "uuid": "baf79c5a-c402-45c9-9bf8-411eed93507c",
      "type": "project_group",
      "name": "Vancouver 2"
    },
    {
      "uuid": "72cd5e09-38a6-4490-b8e7-97486f01e45d",
      "type": "folder",
      "name": "Vancouver"
    },
    {
      "uuid": "0cda2c1a-e52f-4c54-aec6-e827d09c232a",
      "type": "folder",
      "name": "Canada"
    }
  ],
  "owner": {
    "type": "organization",
    "uuid": "58101312-de35-4a22-a951-943eba20a041",
    "name": "Customer ABC Ltd"
  },
  "more_ancestors": false
}

projects

List projects

List projects the user can access.

Details

A public_status field is provided to clients. It can take the values CREATED, UPLOADED, PROCESSING, DONE, ERROR.

A more detailed status can be found in the display_detailed_status field, but it is intended only for display, not for logic control, as it reflects an internal status whose name and flow might change.

Filtering

It is possible to filter projects returned by their public status using query parameters:

  • public_status for a status to include
  • public_status_exclude for a status to exclude

Multiple values can be used by providing the parameter multiple times with different values.

The fields id, name, display_name can be used in a similar filter/exclude fashion.

For example, if you don't want the demo project, you can set ?is_demo=false

Response

There are two ways to serialize a project:

  • A simple serializer that returns project basic information and is fast
  • A detailed serializer that returns detailed information, including some sub-objects, but that takes longer to compute.

By default, the list REST action uses the simple serializer while retrieve uses the detailed one. You can override the serializer used by passing a serializer query parameter with value simple or detailed.
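For example, assuming the /project/api/v3/projects/ path documented for project creation also serves the list action, a filtered listing request could look like this sketch, returning only processed projects, excluding the demo project, and using the fast simple serializer:

curl --request GET \
  --url 'https://cloud.pix4d.com/project/api/v3/projects/?public_status=DONE&is_demo=false&serializer=simple&page_size=50' \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}"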

Authorizations:
ClientCredentialsAuthentication
query Parameters
page
integer

A page number within the paginated result set.

page_size
integer

Number of results to return per page.

Responses

Response samples

Content type
application/json
{
  • "count": 123,
  • "results": [
    ]
}

Create an empty project

Create an empty project.

name is a mandatory parameter and is limited to 100 characters (it cannot contain slashes, must not start with a dash and cannot end with whitespace). project_type and acquisition_date are optional. project_type can take the following values: pro, bim, model, ag. If project_type is not provided, it defaults to the preferred solution of the user as defined in their profile.

billing_model is one of CLOUD_STANDARD, CRANE, ENGINE_CLOUD or INSPECT.

  • CRANE is accepted only if the project is created through a crane interface on an account that has a CRANE license
  • ENGINE_CLOUD is accepted only if the project is created through a public API interface on an account that has an ENGINE_CLOUD license
  • INSPECT is accepted only if the project is created through an Inspect interface on an account that has an INSPECT license
  • CLOUD_STANDARD is accepted only if the user has a valid license with cloud allowance

Will return a 400 if the billing model is unknown or invalid for the user.

acquisition_date must be given in ISO 8601 format; if it is not provided, it will default to the current time.

The coordinate system is compliant with Open Photogrammetry Format specification (OPF).

If passed, coordinate_system is either:

  • a WKT string version 2 (it includes WKT string version 1)
  • A string in the form Authority:code where the code is for a 2D or 3D CRS (e.g.: EPSG:21781)
  • A string in the format Authority:code+code where the first code is for a 2D CRS and the second one is for a vertical CRS (e.g. EPSG:2056+5728)
  • A string in the form Authority:code+Authority:code where the first code is for a 2D CRS and the second one is for a vertical CRS.

In addition the following values are accepted for arbitrary coordinate systems:

  • the value ARBITRARY_METERS for the software to use an arbitrary default coordinate system in meters
  • the value ARBITRARY_FEET for the software to use an arbitrary default coordinate system in feet
  • the value ARBITRARY_US_FEET for the software to use an arbitrary default coordinate system in us survey feet

If coordinate_system is passed, two optional fields can be added: coordinate_system_geoid_height and coordinate_system_extensions.

The request will return a 400 error code if the coordinate_system is invalid. Unsupported cases:

  • Geographical coordinate systems (example: EPSG 4326)
  • Non-isometric coordinate systems (all axes must be in the same unit of measurement)

In addition please note that to process with PIX4Dmapper:

  • the coordinate system has to be compatible with WKT1.
  • the coordinate system should not have a vertical component.

Organization Management users should specify the 'parent' of the project by passing:

  • parent_type : one of: organization, projectgroup or folder.
    • organization to create a project in the drive root of an organization.
    • folder and projectgroup are used to create projects in a specific folder or project group.
  • parent_uuid : The uuid of the parent.

processing_email_notification is an optional parameter that defaults to true; setting it to false will disable all email notifications related to project processing events (e.g. start of processing, end of processing, etc.).
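For example, a minimal creation request might look like the following sketch (the angle-bracket value is a placeholder, and the billing_model must match a license actually held by your account):

curl --request POST \
  --url https://cloud.pix4d.com/project/api/v3/projects/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "Survey March 2025",
    "project_type": "pro",
    "billing_model": "ENGINE_CLOUD",
    "coordinate_system": "EPSG:2056+5728",
    "parent_type": "organization",
    "parent_uuid": "<ORGANIZATION_UUID>"
  }'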

Authorizations:
ClientCredentialsAuthentication
Request Body schema:
name
required
string [ 1 .. 100 ] characters
project_type
string (SolutionEnum)
Enum: "pro" "bim" "ag" "model" "inspection"
  • pro - Pro
  • bim - BIM
  • ag - Ag
  • model - Model
  • inspection - Inspection
acquisition_date
string <date-time>
project_group_id
integer or null
billing_model
string (BillingModelEnum)
Enum: "CLOUD_STANDARD" "ENGINE_CLOUD" "CRANE" "INSPECT"
  • CLOUD_STANDARD - Cloud Standard
  • ENGINE_CLOUD - Engine Cloud
  • CRANE - Crane
  • INSPECT - Inspect
parent_id
string
owner_uuid
string or null <uuid>
(OwnerTypeCe4Enum (string or null)) or (BlankEnum (any or null)) or (NullEnum (any or null))
processing_email_notification
boolean
Default: true
parent_uuid
string <uuid>
parent_type
string (ProjectCreatorParentTypeEnum)
Enum: "organization" "user" "project_group" "folder"
  • organization - Organization
  • user - Pixuser
  • project_group - Project Group
  • folder - Folder
coordinate_system
string or null
coordinate_system_geoid_height
number or null <double>
coordinate_system_extensions
any or null

Responses

Request samples

Content type
{
  • "name": "string",
  • "project_type": "pro",
  • "acquisition_date": "2019-08-24T14:15:22Z",
  • "project_group_id": 0,
  • "billing_model": "CLOUD_STANDARD",
  • "parent_id": "string",
  • "owner_uuid": "a528e82a-c54a-4046-8831-44d7f9028f54",
  • "owner_type": "ORG_GRP",
  • "processing_email_notification": true,
  • "parent_uuid": "77932ac3-028b-48fa-aaa9-4d11b1d1236a",
  • "parent_type": "organization",
  • "coordinate_system": "string",
  • "coordinate_system_geoid_height": 0,
  • "coordinate_system_extensions": null
}

Response samples

Content type
application/json
{
  • "id": 0,
  • "name": "string",
  • "project_type": "pro",
  • "acquisition_date": "2019-08-24T14:15:22Z",
  • "project_group_id": 0,
  • "billing_model": "CLOUD_STANDARD",
  • "parent_id": "string",
  • "owner_uuid": "a528e82a-c54a-4046-8831-44d7f9028f54",
  • "owner_type": "ORG_GRP",
  • "proj_pipeline": "string",
  • "processing_email_notification": true,
  • "parent_uuid": "77932ac3-028b-48fa-aaa9-4d11b1d1236a",
  • "parent_type": "organization",
  • "coordinate_system": "string",
  • "crs": {
    }
}

Get a project

Get a project.

The user is required to have access to the project.

A public_status field is provided to clients. It can take the values CREATED, UPLOADED, PROCESSING, DONE, ERROR.

A more detailed status can be found in the display_detailed_status field, but it is intended only for display, not for logic control, as it reflects an internal status whose name and flow might change.

There are two ways to serialize a project:

  • A simple serializer that returns project basic information and is fast
  • A detailed serializer that returns detailed information, including some sub-objects, but that takes longer to compute.

By default, the list REST action uses the simple serializer while retrieve uses the detailed one. You can override the serializer used by passing a serializer query parameter with value simple or detailed.

Notes

  • The min_zoom and max_zoom properties found in the object are deprecated. Please use instead the min_zoom and max_zoom properties that are provided for each of the map layers.
  • The display_user_name is an empty string when the owner_uuid represents an organization rather than an individual user. See the RESTful API documentation for POST /project/api/v3/projects/ for more details about owner_uuid.
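As a sketch (assuming the project detail lives under the same /project/api/v3/projects/ path used for creation), retrieving a project with the fast simple serializer could look like:

curl --request GET \
  --url 'https://cloud.pix4d.com/project/api/v3/projects/123456/?serializer=simple' \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}"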
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project.

Responses

Response samples

Content type
application/json
{
  • "embed_urls": {
    },
  • "error_reason": "string",
  • "error_code": 0,
  • "last_datetime_processing_started": "2019-08-24T14:15:22Z",
  • "last_datetime_processing_ended": "2019-08-24T14:15:22Z",
  • "s3_bucket_region": "string",
  • "never_delete": true,
  • "under_trial": true,
  • "source": "string",
  • "owner_uuid": "string",
  • "credits": 0,
  • "crs": {
    },
  • "public_share_token": "c827b6b5-2f34-47ee-824e-a48b2ab6b708",
  • "public_status": "string",
  • "detail_url": "string",
  • "image_count": 0,
  • "public_url": "string",
  • "acquisition_date": "2019-08-24T14:15:22Z",
  • "is_geolocalized": true,
  • "s3_base_path": "string",
  • "display_name": "string",
  • "id": 0,
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "user_display_name": "string",
  • "project_type": "pro",
  • "project_group_id": 0,
  • "create_date": "2019-08-24T14:15:22Z",
  • "name": "string",
  • "bucket_name": "string",
  • "project_thumb": "string",
  • "front_end_public_group_url": "string",
  • "front_end_public_url": "string",
  • "is_demo": true,
  • "display_detailed_status": "string",
  • "coordinate_system": "string",
  • "outputs": "string",
  • "min_zoom": -2147483648,
  • "max_zoom": -2147483648,
  • "proj_pipeline": "string"
}

Update a project

Update the project.

The following attributes will be updated:

  • display_name
  • project_group_id
  • is_geolocalized
  • acquisition_date
  • public_share_token
  • s3_base_path
  • never_delete
  • owner_uuid
  • coordinate_system
  • coordinate_system_geoid_height
  • coordinate_system_extensions
  • min_zoom
  • max_zoom

The coordinate system is compliant with Open Photogrammetry Format specification (OPF).

If passed, coordinate_system is either:

  • a WKT string version 2 (it includes WKT string version 1)
  • A string in the form Authority:code where the code is for a 2D or 3D CRS (e.g.: EPSG:21781)
  • A string in the format Authority:code+code where the first code is for a 2D CRS and the second one is for a vertical CRS (e.g. EPSG:2056+5728)
  • A string in the form Authority:code+Authority:code where the first code is for a 2D CRS and the second one is for a vertical CRS.

In addition the following values are accepted for arbitrary coordinate systems:

  • the value ARBITRARY_METERS for the software to use an arbitrary default coordinate system in meters
  • the value ARBITRARY_FEET for the software to use an arbitrary default coordinate system in feet
  • the value ARBITRARY_US_FEET for the software to use an arbitrary default coordinate system in us survey feet

If coordinate_system is passed, two optional fields can be added: coordinate_system_geoid_height and coordinate_system_extensions.

The request will return a 400 error code if the coordinate_system is invalid. Unsupported cases:

  • Geographical coordinate systems (example: EPSG 4326)
  • Non-isometric coordinate systems (all axes must be in the same unit of measurement)

In addition please note that to process with PIX4Dmapper:

  • the coordinate system has to be compatible with WKT1.
  • the coordinate system should not have a vertical component.
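As an illustration only (assuming a PATCH request against the same /project/api/v3/projects/{id}/ path used to retrieve a project), renaming a project and changing its coordinate system could look like:

curl --request PATCH \
  --url https://cloud.pix4d.com/project/api/v3/projects/123456/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{
    "display_name": "Survey March 2025 (reprocessed)",
    "coordinate_system": "EPSG:21781"
  }'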
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project.

Request Body schema:
never_delete
boolean
owner_uuid
string non-empty
acquisition_date
string <date-time>
is_geolocalized
boolean
s3_base_path
string
display_name
string [ 1 .. 100 ] characters
project_group_id
integer or null
coordinate_system
string or null
coordinate_system_geoid_height
number or null <double>
coordinate_system_extensions
any or null
min_zoom
integer [ -2147483648 .. 2147483647 ]
max_zoom
integer [ -2147483648 .. 2147483647 ]

Responses

Request samples

Content type
{
  • "never_delete": true,
  • "owner_uuid": "string",
  • "acquisition_date": "2019-08-24T14:15:22Z",
  • "is_geolocalized": true,
  • "s3_base_path": "string",
  • "display_name": "string",
  • "project_group_id": 0,
  • "coordinate_system": "string",
  • "coordinate_system_geoid_height": 0,
  • "coordinate_system_extensions": null,
  • "min_zoom": -2147483648,
  • "max_zoom": -2147483648
}

Response samples

Content type
application/json
{
  • "embed_urls": {
    },
  • "error_reason": "string",
  • "error_code": 0,
  • "last_datetime_processing_started": "2019-08-24T14:15:22Z",
  • "last_datetime_processing_ended": "2019-08-24T14:15:22Z",
  • "s3_bucket_region": "string",
  • "never_delete": true,
  • "under_trial": true,
  • "source": "string",
  • "owner_uuid": "string",
  • "credits": 0,
  • "crs": {
    },
  • "public_share_token": "c827b6b5-2f34-47ee-824e-a48b2ab6b708",
  • "public_status": "string",
  • "detail_url": "string",
  • "image_count": 0,
  • "public_url": "string",
  • "acquisition_date": "2019-08-24T14:15:22Z",
  • "is_geolocalized": true,
  • "s3_base_path": "string",
  • "display_name": "string",
  • "id": 0,
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "user_display_name": "string",
  • "project_type": "pro",
  • "project_group_id": 0,
  • "create_date": "2019-08-24T14:15:22Z",
  • "name": "string",
  • "bucket_name": "string",
  • "project_thumb": "string",
  • "front_end_public_group_url": "string",
  • "front_end_public_url": "string",
  • "is_demo": true,
  • "display_detailed_status": "string",
  • "coordinate_system": "string",
  • "outputs": "string",
  • "min_zoom": -2147483648,
  • "max_zoom": -2147483648,
  • "proj_pipeline": "string"
}

Delete the project

Delete the project.

Interrupts processing and deletes the project with all its files. This operation is irreversible.
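A sketch of the corresponding call, assuming the same /project/api/v3/projects/{id}/ path as used to retrieve a project (remember that the deletion cannot be undone):

curl --request DELETE \
  --url https://cloud.pix4d.com/project/api/v3/projects/123456/ \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}"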

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project.

Responses

List the project depth data

List the project depth data

Retrieve all the depth data associated with the project.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "id": 0,
  • "photo": 0,
  • "depth_map_confidence": "string",
  • "depth_map": "string"
}

List extra files

List the project extras.

Returns the list of the project's registered extras (p4d, masks, ...). Each file is a hash that contains several pieces of access information.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "file_key": "string"
}

Register extra files

Register a project extra.

Requests to register a file as a project extra.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
file_key
required
string non-empty

Responses

Request samples

Content type
{
  • "file_key": "string"
}

Response samples

Content type
application/json
{
  • "file_key": "string"
}

List the project GCPs

List the project GCPs.

Return the list of the project's GCPs. In case of the CS being EPSG:4326, x is read as the longitude and y as the latitude.

{
    "gcps": [
        {
            "id": 123,
            "project": 456,
            "name": "GCP_123",
            "point_type": "CHECKPOINT",
            "x": 1.23,
            "y": 45.2,
            "z": 445.87,
            "xy_accuracy": 0.02,
            "z_accuracy": 0.02
        }
    ]
}

In case of error, nothing is registered
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "id": 0,
  • "project": 0,
  • "name": "string",
  • "point_type": "GCP",
  • "x": 0,
  • "y": 0,
  • "z": 0,
  • "xy_accuracy": 0,
  • "z_accuracy": 0
}

Update the project GCP

Update the project GCP.

Authorizations:
ClientCredentialsAuthentication
path Parameters
gcp_name
required
string^\w+$
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
name
string [ 1 .. 200 ] characters
point_type
string (PointTypeEnum)
Enum: "GCP" "CHECKPOINT"
  • GCP - GCP
  • CHECKPOINT - Checkpoint
x
number <double>
y
number <double>
z
number <double>
xy_accuracy
number or null <double>
z_accuracy
number or null <double>

Responses

Request samples

Content type
{
  • "name": "string",
  • "point_type": "GCP",
  • "x": 0,
  • "y": 0,
  • "z": 0,
  • "xy_accuracy": 0,
  • "z_accuracy": 0
}

Response samples

Content type
application/json
{
  • "id": 0,
  • "project": 0,
  • "name": "string",
  • "point_type": "GCP",
  • "x": 0,
  • "y": 0,
  • "z": 0,
  • "xy_accuracy": 0,
  • "z_accuracy": 0
}

Delete the project GCP

Delete the project GCP.

Authorizations:
ClientCredentialsAuthentication
path Parameters
gcp_name
required
string^\w+$
id
required
string^[0-9a-fA-F-]+$

Responses

Create project GCPs in bulk

Create project GCPs in bulk.

The GCP name must be unique for this project. The project must have a coordinate system defined. The GCPs will be read in this coordinate system. Since EPSG:4326 is not supported as a project coordinate system, the GCPs cannot be given in this system (degrees) either.

The project id is taken from the captured URL argument.

Point type is one of [GCP | CHECKPOINT]

Takes a list of gcps like so

{
    "gcps": [
        {
            "name": "GCP_123",
            "point_type": "CHECKPOINT",
            "x": 1.23,
            "y": 45.2,
            "z": 445.87,
            "xy_accuracy": 0.02,
            "z_accuracy": 0.02
        },
        {...}
    ]
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
required
Array of objects (GCPRequest)

Responses

Request samples

Content type
{
  • "gcps": [
    ]
}

Response samples

Content type
application/json
{
  • "gcps": [
    ]
}

Delete the project input

Delete the project input.

Unregisters the input image from the project. Will also delete the image from the S3 storage, if it can be found

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
input_id
required
string^[0-9]+$

Responses

Register project inputs

Register project inputs.

Registers the list of inputs for the project. Inputs are the images and possibly a p4d file. You can also pass the images in zip format, with or without depth_map and depth_map_confidence files.

e.g.

{
    "input_file_keys": [
        "user-123/project-123/.../images.zip"
    ]
}

It's possible to register images with depth data using the inputs key.

This endpoint can be called several times for a single project.

If you are uploading images directly, it is advised to register them in batches of 500 images or less, to avoid timeout issues due to some processing necessary at image registration. Using this API with more than 500 images might work, but there is no guarantee, and we will force it to return an empty images attribute in the response payload to keep the response size manageable.

input_file_keys must be an array of strings, each being the full s3 key of the file

e.g.

{
    "input_file_keys": [
        "user-123/project-123/.../IMG_xxx1.jpg",
        "user-123/project-123/.../IMG_xxx2.jpg"
    ]
}

inputs must be an array of dictionaries, each containing photo and its assets. The values are s3 keys of the corresponding files. For the full list of photo assets see project.constants.INPUT_TYPES.

Depth data is registered only in the case when both depth_map and depth_map_confidence are present! In other cases, depth data input is ignored.

e.g.

{
    "inputs": [
        {
            "photo": "user-123/project-123/.../Image_xxx1.jpg",
            "depth_map_confidence": "user-123/project-123/.../Confidence_xxx1.tiff",
            "depth_map": "user-123/project-123/.../DepthMap_xxx1.tiff"
        }
    ]
}

File keys must be valid s3 keys. Therefore:

  • They should match exactly the keys you used to upload files to s3
  • This means they are prefixed with the user-xxx/project-xxx/ that was returned in the credentials request (any other prefix should have failed when you tried to put the files on s3)

Input files must be valid image files (either passed directly or in zip format) supported by PIX4Dmapper software, named with the proper extension

Returns

  • nb_images_registered (int): the number of input images we read from the payload
  • nb_image_signatures_registered (int): the number of images having a valid signature file
  • nb_depth_data_registered (int): the number of depth data files registered, should be an even number
  • p4d_registered (boolean): whether a p4d config file was found in the payload and registered
  • extra_files_registered (int): the number of non-image inputs registered
  • images (list): a list of the image objects that were just registered

The thumbnail of an image takes time to generate, and therefore the thumbnail link returned might return a 404 for a while before the thumbnail is actually there.

In case of uploading images in zip format, this input is counted as an extra and increments extra_files_registered by 1. The images contained in the zip bundle are not counted individually.

If project processing was already triggered before calling the endpoint, the inputs are not registered and the endpoint returns 400 - Bad Request.

e.g.

{
    "nb_images_registered": 2,
    "extra_files_registered": 0,
    "images": [
        {
            "id": 23167077,
            "temp_url": "https://s3.amazonaws.com/test.pix4d.com/user-123/project-345/potatoes/IMG_170328_124606_0223_RED.TIF?AWSAccessKeyId=AKIAJ4X7DJRPQPFCIMOQ&Signature=slJpeyo0r5Pammg%2FWU61fSdu9hU%3D&Expires=1502786097",
            "s3_key": "user-123/project-345/potatoes/IMG_170328_124606_0223_RED.TIF",
            "file_size": null,
            "thumb_s3_key": null,
            "thumb_url": null,
            "exif": {},
            "s3_bucket": "test.pix4d.com"
        },
        {
            "id": 23167078,
            "temp_url": "https://s3.amazonaws.com/test.pix4d.com/user-123/project-345/potatoes/IMG_170328_124604_0224_RED.TIF?AWSAccessKeyId=AKIAJ4X7DJRPQPFCIMOQ&Signature=DmbjLn6IplbV%2Fb8GgyLOCXIJOEk%3D&Expires=1502786097",
            "s3_key": "user-123/project-345/potatoes/IMG_170328_124604_0224_RED.TIF",
            "file_size": null,
            "thumb_s3_key": null,
            "thumb_url": null,
            "exif": {},
            "s3_bucket": "test.pix4d.com"
        }
    ],
    "p4d_registered": false,
    "nb_image_signatures_registered": 0,
    "nb_depth_data_registered": 0
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
input_file_keys
Array of strings[ items [ 1 .. 1024 ] characters ]
Array of objects (ProjectInputRequest)

Responses

Request samples

Content type
{
  • "input_file_keys": [
    ],
  • "inputs": [
    ]
}

Response samples

Content type
application/json
{
  • "input_file_keys": [
    ],
  • "inputs": [
    ]
}

Get the project inputs as archive

Get the project inputs as archive.

Retrieve the url to download the input zip containing images and p4d file.

Returns

  • 200 if zip exists, specified in "url" in JSON response
  • 202 if zip doesn't exist, it is now being generated, user will be emailed on completion
  • 404 if the project does not exist or does not belong to the user
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "embed_urls": {
    },
  • "error_reason": "string",
  • "error_code": 0,
  • "last_datetime_processing_started": "2019-08-24T14:15:22Z",
  • "last_datetime_processing_ended": "2019-08-24T14:15:22Z",
  • "s3_bucket_region": "string",
  • "never_delete": true,
  • "under_trial": true,
  • "source": "string",
  • "owner_uuid": "string",
  • "credits": 0,
  • "crs": {
    },
  • "public_share_token": "c827b6b5-2f34-47ee-824e-a48b2ab6b708",
  • "public_status": "string",
  • "detail_url": "string",
  • "image_count": 0,
  • "public_url": "string",
  • "acquisition_date": "2019-08-24T14:15:22Z",
  • "is_geolocalized": true,
  • "s3_base_path": "string",
  • "display_name": "string",
  • "id": 0,
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "user_display_name": "string",
  • "project_type": "pro",
  • "project_group_id": 0,
  • "create_date": "2019-08-24T14:15:22Z",
  • "name": "string",
  • "bucket_name": "string",
  • "project_thumb": "string",
  • "front_end_public_group_url": "string",
  • "front_end_public_url": "string",
  • "is_demo": true,
  • "display_detailed_status": "string",
  • "coordinate_system": "string",
  • "outputs": "string",
  • "min_zoom": -2147483648,
  • "max_zoom": -2147483648,
  • "proj_pipeline": "string"
}

List the project marks

List the project marks.

{
    "marks": [
        {
            "id": 130521,
            "gcp": "GCP_123",
            "gcp_id": 96120,
            "photo": "user-123/project-354/my_file.jpg",
            "x": 1.23,
            "y": 45.2
        }
    ]
}
  • gcp: name/label of the tie-point (GCP or MTP) corresponding to the mark.
  • gcp_id: ID of the GCP/MTP corresponding to the mark.
  • x, y: location of the mark in the photo.
  • photo: name of the photo that contains the mark.
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "id": 0,
  • "photo": "string",
  • "gcp": "string",
  • "gcp_id": 0,
  • "x": 0,
  • "y": 0
}

Update the project mark

Update the project mark.

Update a previously registered mark. You can't change the GCP or the photo the mark is registered on. Trying to do so will be a no-op. If you need to do that, you'll have to delete the mark and create a new one.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
mark_id
required
string^[0-9]+$
Request Body schema:
x
number <double>
y
number <double>

Responses

Request samples

Content type
{
  • "x": 0,
  • "y": 0
}

Response samples

Content type
application/json
{
  • "id": 0,
  • "photo": "string",
  • "gcp": "string",
  • "gcp_id": 0,
  • "x": 0,
  • "y": 0
}

Delete the project mark

Delete the project mark.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
mark_id
required
string^[0-9]+$

Responses

Create project marks in bulk

Create project marks in bulk.

Create Marks associated with the project. The Mark photo and GCP must exist and be defined on the same project. Note that gcp may refer to any type of tie-point, i.e. a GCP or an MTP.

Takes a list of marks, with photo being the s3_key of an image of the project, gcp the name of a GCP of the project, and x/y being the positive pixel coordinates of the GCP inside the photo, with respect to the top-left corner:

{
    "marks": [
        {
            "gcp": "GCP_123",
            "photo": "user-123/project-354/my_file.jpg",
            "x": 1.23,
            "y": 45.2,
        },
        {...}
    ]
}

A GCP can be marked only once on a given photo

In case of errors, nothing is registered.

If the error is due to photo(s) and/or gcp(s) that we can't find, those are returned with the format:

{"photos": [], "gcps": []}

If the error is due to an attempt to register a GCP twice on a photo we return

{"detail": "Attempt to create existing mark(s): [['photo', 'gcp'], [...]]"}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
required
Array of objects (MarkRequest)

Responses

Request samples

Content type
{
  • "marks": [
    ]
}

Response samples

Content type
application/json
{
  • "marks": [
    ]
}

List the project MTPs

List the project MTPs.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "mtps": [
    ]
}

Create project MTPs in bulk

Create project MTPs in bulk.

MTP names must be unique per project.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
required
Array of objects (MTPRequest)

Responses

Request samples

Content type
{
  • "mtps": [
    ]
}

Response samples

Content type
application/json
{
  • "mtps": [
    ]
}

Update the project MTP

Update a project MTP.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
mtp_id
required
string^[\d]+$
Request Body schema:
name
string [ 1 .. 200 ] characters
is_checkpoint
boolean
Default: false
x
number or null <double>
y
number or null <double>
z
number or null <double>

Responses

Request samples

Content type
{
  • "name": "string",
  • "is_checkpoint": false,
  • "x": 0,
  • "y": 0,
  • "z": 0
}

Response samples

Content type
application/json
{
  • "name": "string",
  • "project": 0,
  • "id": 0,
  • "is_checkpoint": false,
  • "x": 0,
  • "y": 0,
  • "z": 0
}

Delete a project MTP

Delete a project MTP.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
mtp_id
required
string^[\d]+$

Responses

List the project outputs

List the project outputs.

Returns a list of all the project's outputs -- files built from the processing results.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "output": "string",
  • "output_type": "ortho_rgba_bundle"
}

Register a project output

Register a project output

Output is the file path. It is expected to be the full s3 key (starts with user-123/project-456)

Passing an output_type is optional. If one is passed it must be valid. If none is passed, the type is derived from the file path.

If an output of the same type already existed, the new output replaces the old one.

NOTE: If you are uploading a 3D object/texture/material, upload all three in the same request through the bulk endpoint, NOT this endpoint, to ensure downstream items that require all three remain consistent.

Returns

  • 200 if all went fine
  • 404 if the project does not exist or does not belong to the user
  • 403 if the user does not have a valid license to create content on the cloud
  • 400 if the output_type is invalid, or could not be derived from the output path
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
output
required
string [ 1 .. 1024 ] characters
output_type
string (OutputOutputTypeEnum)
Enum: "ortho_rgba_bundle" "ortho_rgb_bundle" "gaussian_splatting" "gaussian_splatting_display" "gaussian_splatting_potree" "3d_mesh_draco" "3d_mesh_fbx" "3d_mesh_mat" "3d_mesh_obj" "3d_mesh_obj_zip" "3d_mesh_slpk" "3d_mesh_texture_2048" "3d_mesh_texture_8192" "3d_mesh_texture_16384" "3d_mesh_texture" "3d_mesh_thumb" "b3dm_js" "b3dm_js_identity" "calibrated_camera_parameters" "calibrated_camera_parameters_json" "calibrated_external_camera_parameters" "calibrated_external_camera_parameters_error" "calibrated_external_camera_parameters_error_json" "cell_tower_analytics" "contour_files_bundle" "dsm_bundle" "dsm_cloud_optimized" "dsm_cloud_optimized_display" "dsm" "dsm_metadata" "dsm_preview" "dsm_thumb" "dtm_bundle" "dtm" "gcp_quality_report" "ifc" "input_zip" "inspect_report_json" "inspect_report_pdf" "project_report_json" "project_report_pdf" "i_construction" "log_file_bundle" "mapper_log" "ndvi" "ndvi_metadata" "ndvi_thumb" "ortho_cloud_optimized" "ortho_cloud_optimized_display" "ortho" "ortho_metadata" "ortho_rgb" "ortho_thumb" "point_cloud_bundle" "point_cloud_gltf" "point_cloud" "potree_js" "potree_metadata" "point_cloud_slpk" "project_offset" "project_thumb" "project_wkt" "project_zip" "quality_report" "reflectance_green" "reflectance_maps_bundle" "reflectance_nir" "reflectance_red" "reflectance_reg" "xml_quality_report" "opf_project" "calibrated_cameras" "input_cameras" "opf_scene_ref_frame" "json_quality_report"
  • ortho_rgba_bundle - Orthomosaic RGBA Bundle
  • ortho_rgb_bundle - Orthomosaic RGB Bundle
  • gaussian_splatting - Gaussian Splatting
  • gaussian_splatting_display - Gaussian Splatting Display
  • gaussian_splatting_potree - Gaussian Splatting Potree
  • 3d_mesh_draco - 3D Mesh - Draco
  • 3d_mesh_fbx - 3D Mesh - FBX
  • 3d_mesh_mat - 3D Mesh - Mat
  • 3d_mesh_obj - 3D Mesh - Obj
  • 3d_mesh_obj_zip - 3D Mesh - OBJ Zip
  • 3d_mesh_slpk - 3D Mesh - SLPK
  • 3d_mesh_texture_2048 - 3D Mesh - Texture (2048)
  • 3d_mesh_texture_8192 - 3D Mesh - Texture (8192)
  • 3d_mesh_texture_16384 - 3D Mesh - Texture (16384)
  • 3d_mesh_texture - 3D Mesh - Texture
  • 3d_mesh_thumb - 3D Mesh - thumb
  • b3dm_js - Batched 3D mesh index
  • b3dm_js_identity - Batched 3D mesh index (identity transform)
  • calibrated_camera_parameters - Calibrated Camera Parameters
  • calibrated_camera_parameters_json - Calibrated Camera Parameters JSON
  • calibrated_external_camera_parameters - Calibrated External Camera Parameters
  • calibrated_external_camera_parameters_error - Calibrated External Camera Parameters Error
  • calibrated_external_camera_parameters_error_json - Calibrated External Camera Parameters Error JSON
  • cell_tower_analytics - Cell Tower Analytics
  • contour_files_bundle - Contour Files Bundle
  • dsm_bundle - DSM Bundle
  • dsm_cloud_optimized - DSM_COG
  • dsm_cloud_optimized_display - DSM COG Display
  • dsm - DSM
  • dsm_metadata - DSM Metadata
  • dsm_preview - DSM Preview
  • dsm_thumb - DSM Thumbnail
  • dtm_bundle - DTM Bundle
  • dtm - DTM
  • gcp_quality_report - Auto GCP Quality Report
  • ifc - IFC
  • input_zip - Input Zip
  • inspect_report_json - Inspect JSON Report
  • inspect_report_pdf - Inspect PDF Report
  • project_report_json - Project JSON Report
  • project_report_pdf - Project PDF Report
  • i_construction - iConstruction
  • log_file_bundle - Log File Bundle
  • mapper_log - Processing log
  • ndvi - NDVI
  • ndvi_metadata - NDVI Metadata
  • ndvi_thumb - NDVI Thumbnail
  • ortho_cloud_optimized - ORTHO_COG
  • ortho_cloud_optimized_display - Ortho COG Display
  • ortho - Orthomosaic
  • ortho_metadata - Orthomosaic Metadata
  • ortho_rgb - Orthomosaic RGB
  • ortho_thumb - Orthomosaic Thumbnail
  • point_cloud_bundle - Point Cloud Bundle
  • point_cloud_gltf - Point Cloud glTF
  • point_cloud - Point Cloud
  • potree_js - Point Cloud Potree Index
  • potree_metadata - Point Cloud Potree Metadata JSON
  • point_cloud_slpk - SLPK Point Cloud
  • project_offset - Project Offset Coordinates
  • project_thumb - Project Thumb
  • project_wkt - Project WKT Projection
  • project_zip - Project Zip
  • quality_report - Quality Report
  • reflectance_green - Reflectance green
  • reflectance_maps_bundle - Reflectance Maps Bundle
  • reflectance_nir - Reflectance nir
  • reflectance_red - Reflectance red
  • reflectance_reg - Reflectance red edge
  • xml_quality_report - XML Quality Report
  • opf_project - OPF Project Index
  • calibrated_cameras - Calibrated Cameras
  • input_cameras - Input Cameras
  • opf_scene_ref_frame - OPF Scene Reference Frame
  • json_quality_report - JSON Quality Report

Responses

Request samples

Content type
{
  • "output": "string",
  • "output_type": "ortho_rgba_bundle"
}

Response samples

Content type
application/json
{
  • "output": "string",
  • "output_type": "ortho_rgba_bundle"
}

Get a project output

Get a project output.

Returns a 302 redirect to the file stored on S3 for direct download. Uses the S3 accelerated endpoints where applicable.
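Because this endpoint answers with a 302 redirect, a command-line download needs to follow the redirect. A sketch is shown below; the endpoint path is written as a placeholder, so substitute the actual path of this operation with your project and output ids:

# -L follows the 302 redirect to the signed S3 URL, -o saves the file locally
curl -L --request GET \
  --url "https://cloud.pix4d.com/<PROJECT_OUTPUT_PATH>" \
  --header "Authorization: Bearer ${PIX4D_ACCESS_TOKEN}" \
  -o downloaded_output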

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
output_id
required
string^[0-9]+$

Responses

Output deletion

Given project_id and output_id, delete an output of the project.

Returns

  • 204 if the output has been deleted successfully
  • 403 if the user lacks permission to delete outputs for the given project
  • 404 if the output does not exist.
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
output_id
required
string^[0-9]+$

Responses

Bulk register project outputs

Register project outputs in bulk

Output is the file path. It is expected to be the full s3 key (starts with user-123/project-456).

Passing an output_type is optional. If one is passed it must be valid. If none is passed, the type is derived from the file path.

If an output of the same type already existed, the new output replaces the old one.

  • outputs is a list of output specifiers.

    An output consists of:

    • output the path on S3
    • output_type (optional) see note above

For example:

{
  "outputs": [
     {
       "output": "some/s3/key/file1.ext"
     },
     {
       "output": "some/s3/key/file2.ext",
       "output_type": "some_type"
     }
  ]
}

Returns

  • 200 if all went fine
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
required
Array of objects (OutputRequest) non-empty

Responses

Request samples

Content type
{
  • "outputs": [
    ]
}

Response samples

Content type
application/json
{
  • "outputs": [
    ]
}

Get the project outputs as archive

Get the project outputs as archive.

Retrieve the url to download the project zip containing almost everything.

Returns

  • 200 if zip exists, specified in "url" in JSON response
  • 202 if zip doesn't exist, it is now being generated, user will be emailed on completion
  • 404 if the project does not exist or does not belong to the user
  • 400 if the project output size is over PROJECT_ZIP_OUTPUT_SIZE_LIMIT
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "embed_urls": {
    },
  • "error_reason": "string",
  • "error_code": 0,
  • "last_datetime_processing_started": "2019-08-24T14:15:22Z",
  • "last_datetime_processing_ended": "2019-08-24T14:15:22Z",
  • "s3_bucket_region": "string",
  • "never_delete": true,
  • "under_trial": true,
  • "source": "string",
  • "owner_uuid": "string",
  • "credits": 0,
  • "crs": {
    },
  • "public_share_token": "c827b6b5-2f34-47ee-824e-a48b2ab6b708",
  • "public_status": "string",
  • "detail_url": "string",
  • "image_count": 0,
  • "public_url": "string",
  • "acquisition_date": "2019-08-24T14:15:22Z",
  • "is_geolocalized": true,
  • "s3_base_path": "string",
  • "display_name": "string",
  • "id": 0,
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "user_display_name": "string",
  • "project_type": "pro",
  • "project_group_id": 0,
  • "create_date": "2019-08-24T14:15:22Z",
  • "name": "string",
  • "bucket_name": "string",
  • "project_thumb": "string",
  • "front_end_public_group_url": "string",
  • "front_end_public_url": "string",
  • "is_demo": true,
  • "display_detailed_status": "string",
  • "coordinate_system": "string",
  • "outputs": "string",
  • "min_zoom": -2147483648,
  • "max_zoom": -2147483648,
  • "proj_pipeline": "string"
}

List project photos

List the project photos.

Returns the paginated list of the project's registered photos. Each photo is a map that contains several pieces of access information.

You can use a photo_ids query param (a comma-separated list of photo ids) if you need the details of a known subset of photos.

You can pass an ordering query parameter to specify on which field the results should be ordered

Supported fields for ordering:

  • image (default)
  • id

excluded_from_mapper is a flag used for photos that are part of the project but not taken into account in the photogrammetry processing. A value of true means that they are indeed not considered for processing, while a value of false or null means that they behave as default photos, i.e. are included in the photogrammetry.

Sample response

{
    "count": 12,
    "next": "https://....",
    "previous": "https://...",
    "results": [
        {
            "id": 123,
            "s3_key": "foo/bar/baz.png",
            "thumbs_s3_key": {
                "small": "foo/bar/baz_thumb_1.jpg"
            },
            "s3_bucket": "bucket_name",
            "width": 125,
            "height": 156,
            "excluded_from_mapper": null
        }
    ]
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Return a photo

Get the project photo

excluded_from_mapper is a flag used for photos that are part of the project but not taken into account in the photogrammetry processing. A value of true means that they are indeed not considered for processing, while a value of false or null means that they behave as default photos, i.e. are included in the photogrammetry.

Sample response

{
    "id": 123,
    "s3_key": "foo/bar/baz.png",
    "thumbs_s3_key": {
        "small": "foo/bar/baz_thumb_1.jpg"
    },
    "s3_bucket": "bucket_name",
    "width": 125,
    "height": 156,
    "excluded_from_mapper": null
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
photo_id
required
string^[0-9a-f-]+$

Responses

Response samples

Content type
application/json
{
  • "id": 0,
  • "s3_key": "string",
  • "thumbs_s3_key": "string",
  • "s3_bucket": "string",
  • "width": 0,
  • "height": 0,
  • "excluded_from_mapper": true
}

Update a photo metadata

Update the project photo

excluded_from_mapper is a flag used for photos that are part of the project but not taken into account in the photogrammetry processing. A value of true means that they are indeed not considered for processing, while a value of false or null means that they behave as default photos, i.e. are included in the photogrammetry.

Sample response

{
    "id": 123,
    "s3_key": "foo/bar/baz.png",
    "thumbs_s3_key": {
        "small": "foo/bar/baz_thumb_1.jpg"
    },
    "s3_bucket": "bucket_name",
    "width": 125,
    "height": 156,
    "excluded_from_mapper": null
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
photo_id
required
string^[0-9a-f-]+$
Request Body schema:
excluded_from_mapper
boolean or null

Responses

Request samples

Content type
{
  • "excluded_from_mapper": true
}

Response samples

Content type
application/json
{
  • "id": 0,
  • "s3_key": "string",
  • "thumbs_s3_key": "string",
  • "s3_bucket": "string",
  • "width": 0,
  • "height": 0,
  • "excluded_from_mapper": true
}

Delete the depth data

Delete the depth data

Unregister the depth data associated with the photo and deletes it from the S3 storage.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
photo_id
required
string^[0-9]+$

Responses

Return a photo EXIF data

Get EXIF of the photo

Returns the EXIF of the photo, if available

{
    "exif": { STANDARD_EXIF_OBJECT }
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
photo_id
required
string^[0-9]+$

Responses

Return a photo XMP data

Get XMP of the photo

Returns the XMP of the photo, if any data is available.

XMP data does not follow any spec, so each manufacturer stores data differently. Currently we support Parrot Anafi and DJI (with gimbal).

Note that the XMP data contains float angles in degrees, as strings. They can be prefixed by a '+' or '-' sign (or no sign). The list of attributes is not guaranteed, meaning you can have anywhere from zero to multiple attributes. This is only an example of some attributes:

{
    "xmp": {
        "yaw": "-6.00",
        "pitch": "+7.123440",
        "roll": "0.00"
    }
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project.

photo_id
required
string^[0-9a-f-]+$

Responses

Get the project processing options

Get the project processing options.

RESTful interaction with the project's processing options. A shortcut to define processing options is to pass them to the start_processing endpoint.

Retrieve the processing options.

This endpoint returns processing options in the output, for their specific format see the documentation of the set processing options (POST) endpoint.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$

Responses

Response samples

Content type
application/json
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Set the project processing options

Set the project processing options.

RESTful interaction with the project's processing options. A shortcut to define processing options is to pass them to the start_processing endpoint.

Create/overwrite processing options. Any option not passed in the request body will use the default value.

Processing options

  • tags: keywords that describe the input data and can influence the type of processing performed. Supported tags are:
    • rtk: GPS image data is captured using an RTK device.
    • building: input images are from oblique flights around objects with little texture.
    • 3d-maps or nadir: process with settings optimised for nadir images.
    • oblique: process with settings optimised for oblique images.
    • half-scale: scale down data processed in non-PIX4Dmapper pipelines.
    • quarter-scale: scale down data processed in non-PIX4Dmapper pipelines even further.
    • flat: used together with the nadir tag, to process datasets of flat terrains.
  • output_cs_horizontal (Pix4D internal): EPSG code of the horizontal output coordinate system (CS). Expected to be set if any other output_cs_* parameter is set.
  • output_cs_vertical (Pix4D internal): EPSG code of the vertical output CS. Must be defined with either output_cs_geoid or output_cs_geoid_height.
  • output_cs_geoid (Pix4D internal): name of the geoid to use with the output CS.
  • output_cs_geoid_height (Pix4D internal): the constant geoid height in meters to use with the output CS. Note that currently this constant cannot be used for the US, Myanmar and Liberia.
  • outputs: keywords that describe the desired output types to be generated. These cannot be used in conjunction with any of the (deprecated) template arguments listed below. If no outputs are passed, default ones will be generated. Supported values are:
    • ortho: Orthomosaic geotiff
    • dsm: DSM geotiff
    • point_cloud: Point Cloud las or laz depending on the context and calibrated camera parameters file
    • mesh: Mesh as OBJ/material/texture files and offset xyz file
    • gaussian_splatting: Gaussian Splatting 3D model
  • formats: dictionary where the key is an output and the value is a list of valid extensions, e.g.
    "formats": {"point_cloud": ["laz", "slpk"]},
    
  • area: Region of interest defined in OPF format with the coordinates in WGS84. See https://github.com/Pix4D/opf-spec/blob/main/schema/plane.schema.json. Example:
    "plane": {
        "vertices3d": [
            [
                3.2483144217356084,
                43.41515239449451,
                0
            ],
            ...
        ],
        "outer_boundary": [
            0,
            ...
        ]
    },
    "thickness": 10
    
  • [DEPRECATED] standard_template: the name of a valid Pix4Dmapper default template. List available inside Pix4Dmapper. Incompatible with the custom_template_s3_key option. Not applied to projects coming from Pix4Dmapper (that already contain a fully configured p4d configuration file)
  • [DEPRECATED] custom_template_s3_key: the full S3 key of the template file (.tmpl) for Pix4Dmapper to use. Incompatible with the standard_template option. Expected to be set if custom_template_s3_bucket is set. Not applied to projects coming from Pix4Dmapper (that already contain a fully configured p4d configuration file)
  • [DEPRECATED] custom_template_s3_bucket: the bucket where the template file can be found. Expected to be set if custom_template_s3_key is set.
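
As referenced above, the options can be created or overwritten with a single POST. A minimal sketch with curl, assuming the endpoint path referenced in the start_processing documentation (POST /project/api/v3/projects/{id}/processing_options/); the token, project ID and option values are illustrative:

# Set tags, outputs and formats in one request; omitted options keep their defaults.
curl --request POST \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"tags": ["nadir", "flat"], "outputs": ["ortho", "dsm", "point_cloud"], "formats": {"point_cloud": ["laz"]}}' \
  https://cloud.pix4d.com/project/api/v3/projects/<PROJECT_ID>/processing_options/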
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
tags
Array of strings or null (TagsEnum)
Enum: "rtk" "building" "3d-maps" "nadir" "flat" "oblique" "half-scale" "quarter-scale" "high-confidence-positions"

Project data classifications tags. Valid tags are: ['rtk', 'building', '3d-maps', 'nadir', 'flat', 'oblique', 'half-scale', 'quarter-scale', 'high-confidence-positions']

output_cs_horizontal
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output horizontal coordinate system

output_cs_vertical
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output vertical coordinate system

output_cs_geoid
string or null <= 50 characters
output_cs_geoid_height
number or null <double>
outputs
Array of strings or null (OutputsEnum)
Enum: "ortho" "dsm" "point_cloud" "mesh" "gaussian_splatting"

Outputs to be created when processing a project. Valid outputs are: ['ortho', 'dsm', 'point_cloud', 'mesh', 'gaussian_splatting']

object (FormatsFieldRequest)

Specify formats for the requested outputs.

Values for each output can define one or more formats from the available choices.

Example: {"formats": {"point_cloud": ["laz", "slpk"]}}

object (AreaRequest)
custom_template_s3_key
string or null <= 1024 characters
Deprecated

The S3 key for a valid pix4dmapper template .tmpl file

custom_template_s3_bucket
string or null <= 63 characters
Deprecated

The S3 bucket for a valid pix4dmapper template .tmpl file

(StandardTemplateEnum (string or null)) or (NullEnum (any or null))
Deprecated

Responses

Request samples

Content type
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Response samples

Content type
application/json
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Update the project processing options

Update the project processing options.

RESTful interaction with the project's processing options. A shortcut to define processing options is to pass them to the start_processing endpoint.

Create/overwrite processing options. Any option not passed in the request body will use the default value.

This endpoint accepts parameters in the payload, for their specific format see the documentation of the set processing options (POST) endpoint.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
tags
Array of strings or null (TagsEnum)
Enum: "rtk" "building" "3d-maps" "nadir" "flat" "oblique" "half-scale" "quarter-scale" "high-confidence-positions"

Project data classifications tags. Valid tags are: ['rtk', 'building', '3d-maps', 'nadir', 'flat', 'oblique', 'half-scale', 'quarter-scale', 'high-confidence-positions']

output_cs_horizontal
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output horizontal coordinate system

output_cs_vertical
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output vertical coordinate system

output_cs_geoid
string or null <= 50 characters
output_cs_geoid_height
number or null <double>
outputs
Array of strings or null (OutputsEnum)
Enum: "ortho" "dsm" "point_cloud" "mesh" "gaussian_splatting"

Outputs to be created when processing a project. Valid outputs are: ['ortho', 'dsm', 'point_cloud', 'mesh', 'gaussian_splatting']

object (FormatsFieldRequest)

Specify formats for the requested outputs.

Values for each output can define one or more formats from the available choices.

Example: {"formats": {"point_cloud": ["laz", "slpk"]}}

object (AreaRequest)
custom_template_s3_key
string or null <= 1024 characters
Deprecated

The S3 key for a valid pix4dmapper template .tmpl file

custom_template_s3_bucket
string or null <= 63 characters
Deprecated

The S3 bucket for a valid pix4dmapper template .tmpl file

(StandardTemplateEnum (string or null)) or (NullEnum (any or null))
Deprecated

Responses

Request samples

Content type
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Response samples

Content type
application/json
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Update the project processing options

Update the project processing options.

RESTful interaction with the project's processing options. A shortcut to define processing options is to pass them to the start_processing endpoint.

Updates only the options passed in the payload. If called before a POST, default values are used for all other options, as projects are assumed to always use default options.

This endpoint accepts parameters in the payload, for their specific format see the documentation of the set processing options (POST) endpoint.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string^[0-9a-fA-F-]+$
Request Body schema:
tags
Array of strings or null (TagsEnum)
Enum: "rtk" "building" "3d-maps" "nadir" "flat" "oblique" "half-scale" "quarter-scale" "high-confidence-positions"

Project data classifications tags. Valid tags are: ['rtk', 'building', '3d-maps', 'nadir', 'flat', 'oblique', 'half-scale', 'quarter-scale', 'high-confidence-positions']

output_cs_horizontal
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output horizontal coordinate system

output_cs_vertical
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output vertical coordinate system

output_cs_geoid
string or null <= 50 characters
output_cs_geoid_height
number or null <double>
outputs
Array of strings or null (OutputsEnum)
Enum: "ortho" "dsm" "point_cloud" "mesh" "gaussian_splatting"

Outputs to be created when processing a project. Valid outputs are: ['ortho', 'dsm', 'point_cloud', 'mesh', 'gaussian_splatting']

object (FormatsFieldRequest)

Specify formats for the requested outputs.

Values for each output can define one or more formats from the available choices.

Example: {"formats": {"point_cloud": ["laz", "slpk"]}}

object (AreaRequest)
custom_template_s3_key
string or null <= 1024 characters
Deprecated

The S3 key for a valid pix4dmapper template .tmpl file

custom_template_s3_bucket
string or null <= 63 characters
Deprecated

The S3 bucket for a valid pix4dmapper template .tmpl file

(StandardTemplateEnum (string or null)) or (NullEnum (any or null))
Deprecated

Responses

Request samples

Content type
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Response samples

Content type
application/json
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Get S3 credentials

Get the project S3 credentials.

Returns temporary AWS S3 credentials to access the project folder. These credentials are either read-write or read-only, depending on the user's rights to access the project.

The key returned is the only place in AWS S3 where you are allowed to write for that project with the returned credentials. When uploading and registering files, make sure that all your paths are prefixed with this location.
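
As a sketch of how the returned credentials can be used with the AWS CLI (field names taken from the response sample below; file names and paths are illustrative):

# Export the temporary credentials returned by this endpoint.
export AWS_ACCESS_KEY_ID="<access_key>"
export AWS_SECRET_ACCESS_KEY="<secret_key>"
export AWS_SESSION_TOKEN="<session_token>"

# Upload an image under the returned key prefix, the only writable location for the project.
aws s3 cp ./images/IMG_0001.JPG "s3://<bucket>/<key>/images/IMG_0001.JPG"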

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project.

Responses

Response samples

Content type
application/json
{
  • "access_key": "string",
  • "secret_key": "string",
  • "session_token": "string",
  • "expiration": "2019-08-24T14:15:22Z",
  • "bucket": "string",
  • "key": "string",
  • "server_time": "2019-08-24T14:15:22Z",
  • "region": "string"
}

Launch processing of the project

Expects input images to have been uploaded and registered (returns a 400 if no images are registered). It will also fail if the project is already processing or has been deleted.

This endpoint also checks your licenses and will refuse to start if the number of images in the project exceeds the number allowed for your project.

If GCPs are registered in the project, then it will also validate that either coordinate_system is set in the project or output_cs_* options are set in the processing_options.

Returns an estimate of the project processing time in seconds (estimated_time) if the project is not processed with credits. Otherwise, it returns 202 ACCEPTED.

You can pass an optional boolean query parameter send_email to disable automatic emails for the project. The default value is True, i.e. email notifications are sent.

You can pass a payload of processing options (all optional) that will be passed to pix4dmapper. For more details about the processing options, see the processing options RESTful API documentation (POST /project/api/v3/projects/{id}/processing_options/)
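
A minimal sketch of a start_processing call with email notifications disabled and a small options payload; the token, project ID and option values are illustrative:

# Start processing with nadir-optimised settings and no notification emails.
curl --request POST \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"tags": ["nadir"], "outputs": ["ortho", "dsm"]}' \
  "https://cloud.pix4d.com/project/api/v3/projects/<PROJECT_ID>/start_processing/?send_email=false"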

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project.

Request Body schema:
tags
Array of strings or null (TagsEnum)
Enum: "rtk" "building" "3d-maps" "nadir" "flat" "oblique" "half-scale" "quarter-scale" "high-confidence-positions"

Project data classifications tags. Valid tags are: ['rtk', 'building', '3d-maps', 'nadir', 'flat', 'oblique', 'half-scale', 'quarter-scale', 'high-confidence-positions']

output_cs_horizontal
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output horizontal coordinate system

output_cs_vertical
integer or null [ 1024 .. 32767 ]

EPSG code of the desired output vertical coordinate system

output_cs_geoid
string or null <= 50 characters
output_cs_geoid_height
number or null <double>
outputs
Array of strings or null (OutputsEnum)
Enum: "ortho" "dsm" "point_cloud" "mesh" "gaussian_splatting"

Outputs to be created when processing a project. Valid outputs are: ['ortho', 'dsm', 'point_cloud', 'mesh', 'gaussian_splatting']

object (FormatsFieldRequest)

Specify formats for the requested outputs.

Values for each output can define one or more formats from the available choices.

Example: {"formats": {"point_cloud": ["laz", "slpk"]}}

object (AreaRequest)
custom_template_s3_key
string or null <= 1024 characters
Deprecated

The S3 key for a valid pix4dmapper template .tmpl file

custom_template_s3_bucket
string or null <= 63 characters
Deprecated

The S3 bucket for a valid pix4dmapper template .tmpl file

(StandardTemplateEnum (string or null)) or (NullEnum (any or null))
Deprecated

Responses

Request samples

Content type
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Response samples

Content type
application/json
{
  • "tags": [
    ],
  • "output_cs_horizontal": 1024,
  • "output_cs_vertical": 1024,
  • "output_cs_geoid": "string",
  • "output_cs_geoid_height": 0,
  • "outputs": [
    ],
  • "formats": {
    },
  • "area": {
    },
  • "custom_template_s3_key": "string",
  • "custom_template_s3_bucket": "string",
  • "standard_template": "3d-maps"
}

Return S3 Credentials of multiple projects

Retrieve temporary AWS S3 credentials to access the projects and/or project groups passed in the payload. The credentials are always read-only.

This endpoint returns only the credentials, not the additional bucket/region/key information as in the s3_credentials endpoint.

The input payload can contain two keys: project_ids, a list of integer project IDs (not empty if provided), and project_group_ids, a list of integer project group IDs (not empty if provided). The total number of projects and project groups is limited to a small number (about 10) due to the maximum length of AWS IAM policies. If the limit is exceeded, a 400 error is raised with an error_code of TOO_MANY_PROJECTS.

If any of the projects and/or project groups in the list are:

  • not found, returns 404
  • not accessible, returns 403
Authorizations:
ClientCredentialsAuthentication
Request Body schema:
project_ids
Array of integers non-empty
project_group_ids
Array of integers non-empty

Responses

Request samples

Content type
{
  • "project_ids": [
    ],
  • "project_group_ids": [
    ]
}

Response samples

Content type
application/json
{
  • "project_ids": [
    ],
  • "project_group_ids": [
    ]
}

Validate the WKT string

Validate the WKT string.

Validation is done according to pix4dmapper requirements. Takes a JSON payload as input, with wkt_string containing the WKT to validate:

{
    "wkt_string": "PROJCS[\"Custom coordinate system CUSTOM_OBLIQUE...
}
Authorizations:
ClientCredentialsAuthentication
Request Body schema:
wkt_string
required
string non-empty

Responses

Request samples

Content type
{
  • "wkt_string": "string"
}

Response samples

Content type
application/json
{
  • "wkt_string": "string"
}

project_groups

List project groups

List project groups.

Authorizations:
ClientCredentialsAuthentication
query Parameters
ordering
string

Which field to use when ordering the results.

page
integer

A page number within the paginated result set.

page_size
integer

Number of results to return per page.

search
string

A search term.

Responses

Response samples

Content type
application/json
{
  • "count": 123,
  • "results": [
    ]
}

Create a project group

Create a project group.

The coordinate system is compliant with Open Photogrammetry Format specification (OPF).

If passed, coordinate_system is either:

  • a WKT string version 2 (it includes WKT string version 1)
  • A string in the form Authority:code where the code is for a 2D or 3D CRS (e.g.: EPSG:21781)
  • A string in the format Authority:code+code where the first code is for a 2D CRS and the second one is for a vertical CRS (e.g. EPSG:2056+5728)
  • A string in the form Authority:code+Authority:code where the first code is for a 2D CRS and the second one is for a vertical CRS.

In addition the following values are accepted for arbitrary coordinate systems:

  • the value ARBITRARY_METERS for the software to use an arbitrary default coordinate system in meters
  • the value ARBITRARY_FEET for the software to use an arbitrary default coordinate system in feet
  • the value ARBITRARY_US_FEET for the software to use an arbitrary default coordinate system in us survey feet

If coordinate_system is passed, two optional fields can be added: coordinate_system_geoid_height and coordinate_system_extensions. An example request body is sketched at the end of this description.

Returns a 400 error code if the coordinate_system is invalid. Unsupported cases:

  • Geographical coordinate systems (example: EPSG 4326)
  • Non isometric coordinate systems (all axes must be in the same unit of measurement)

If a coordinate system is provided to a Project Group, then new projects created in this group will inherit this coordinate system by default (if not specified on the project).

project_group_type can take the following values: pro, bim, model, ag.

Organization Management users should specify the 'parent' of the project group by passing:

  • parent_type : one of: organization or folder.
    • organization to create a project group in the drive root of an organization.
    • folder is used to create project groups in a specific folder.
  • parent_uuid : The uuid of the parent.
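
As referenced above, an example request body for creating a project group at the root of an organization, using the EPSG:2056+5728 coordinate system mentioned earlier (field names follow the request body schema below; the name and uuid values are illustrative):

{
  "name": "Survey 2025",
  "project_group_type": "pro",
  "coordinate_system": "EPSG:2056+5728",
  "parent_type": "organization",
  "parent_uuid": "<ORGANIZATION_UUID>"
}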
Authorizations:
ClientCredentialsAuthentication
Request Body schema:
name
required
string [ 1 .. 100 ] characters
project_group_type
required
string (SolutionEnum)
Enum: "pro" "bim" "ag" "model" "inspection"
  • pro - Pro
  • bim - BIM
  • ag - Ag
  • model - Model
  • inspection - Inspection
owner_uuid
string or null <uuid>
(OwnerTypeCe4Enum (string or null)) or (NullEnum (any or null))
parent_id
string non-empty
parent_uuid
string <uuid>
parent_type
string (ProjectGroupParentTypeEnum)
Enum: "organization" "user" "folder"
  • organization - Organization
  • user - Pixuser
  • folder - Folder
coordinate_system
string or null
coordinate_system_geoid_height
number or null <double>
coordinate_system_extensions
any or null

Responses

Request samples

Content type
{
  • "name": "string",
  • "project_group_type": "pro",
  • "owner_uuid": "a528e82a-c54a-4046-8831-44d7f9028f54",
  • "owner_type": "ORG_GRP",
  • "parent_id": "string",
  • "parent_uuid": "77932ac3-028b-48fa-aaa9-4d11b1d1236a",
  • "parent_type": "organization",
  • "coordinate_system": "string",
  • "coordinate_system_geoid_height": 0,
  • "coordinate_system_extensions": null
}

Response samples

Content type
application/json
{
  • "id": 0,
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "name": "string",
  • "project_group_type": "pro",
  • "is_aligned": false,
  • "is_demo": true,
  • "project_list_url": "string",
  • "latest_project_date": "2019-08-24T14:15:22Z",
  • "latest_project_url": "string",
  • "latest_project_id": 0,
  • "latitude": 0,
  • "longitude": 0,
  • "project_group_thumb": "string",
  • "project_count": "string",
  • "public_share_token": "string",
  • "project_status_count": "string",
  • "owner_uuid": "a528e82a-c54a-4046-8831-44d7f9028f54",
  • "owner_type": "ORG_GRP",
  • "parent_id": "string",
  • "parent_uuid": "77932ac3-028b-48fa-aaa9-4d11b1d1236a",
  • "parent_type": "organization",
  • "coordinate_system": "string",
  • "crs": {
    }
}

Get the project group

Retrieve a project group.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project group.

Responses

Response samples

Content type
application/json
{
  • "id": 0,
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "name": "string",
  • "project_group_type": "pro",
  • "is_aligned": false,
  • "is_demo": true,
  • "project_list_url": "string",
  • "latest_project_date": "2019-08-24T14:15:22Z",
  • "latest_project_url": "string",
  • "latest_project_id": 0,
  • "latitude": 0,
  • "longitude": 0,
  • "project_group_thumb": "string",
  • "project_count": "string",
  • "public_share_token": "string",
  • "project_status_count": "string",
  • "owner_uuid": "a528e82a-c54a-4046-8831-44d7f9028f54",
  • "owner_type": "ORG_GRP",
  • "parent_id": "string",
  • "parent_uuid": "77932ac3-028b-48fa-aaa9-4d11b1d1236a",
  • "parent_type": "organization",
  • "coordinate_system": "string",
  • "crs": {
    }
}

Update the project group

Partially update a project group.

The coordinate system is compliant with Open Photogrammetry Format specification (OPF).

If passed, coordinate_system is either:

  • a WKT string version 2 (it includes WKT string version 1)
  • A string in the form Authority:code where the code is for a 2D or 3D CRS (e.g.: EPSG:21781)
  • A string in the format Authority:code+code where the first code is for a 2D CRS and the second one is for a vertical CRS (e.g. EPSG:2056+5728)
  • A string in the form Authority:code+Authority:code where the first code is for a 2D CRS and the second one is for a vertical CRS.

In addition the following values are accepted for arbitrary coordinate systems:

  • the value ARBITRARY_METERS for the software to use an arbitrary default coordinate system in meters
  • the value ARBITRARY_FEET for the software to use an arbitrary default coordinate system in feet
  • the value ARBITRARY_US_FEET for the software to use an arbitrary default coordinate system in us survey feet

If coordinate_system is passed, two optional fields can be added: coordinate_system_geoid_height and coordinate_system_extensions.

Returns a 400 error code if the coordinate_system is invalid. Unsupported cases:

  • Geographical coordinate systems (example: EPSG 4326)
  • Non isometric coordinate systems (all axes must be in the same unit of measurement)

If a coordinate system is provided to a Project Group, then new projects created in this group will inherit this coordinate system by default (if not specified on the project).

project_group_type can take the following values: pro, bim, ag.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project group.

Request Body schema:
name
string [ 1 .. 100 ] characters
project_group_type
string (SolutionEnum)
Enum: "pro" "bim" "ag" "model" "inspection"
  • pro - Pro
  • bim - BIM
  • ag - Ag
  • model - Model
  • inspection - Inspection
owner_uuid
string or null <uuid>
(OwnerTypeCe4Enum (string or null)) or (NullEnum (any or null))
parent_id
string non-empty
parent_uuid
string <uuid>
parent_type
string (ProjectGroupParentTypeEnum)
Enum: "organization" "user" "folder"
  • organization - Organization
  • user - Pixuser
  • folder - Folder
coordinate_system
string or null
coordinate_system_geoid_height
number or null <double>
coordinate_system_extensions
any or null

Responses

Request samples

Content type
{
  • "name": "string",
  • "project_group_type": "pro",
  • "owner_uuid": "a528e82a-c54a-4046-8831-44d7f9028f54",
  • "owner_type": "ORG_GRP",
  • "parent_id": "string",
  • "parent_uuid": "77932ac3-028b-48fa-aaa9-4d11b1d1236a",
  • "parent_type": "organization",
  • "coordinate_system": "string",
  • "coordinate_system_geoid_height": 0,
  • "coordinate_system_extensions": null
}

Response samples

Content type
application/json
{
  • "id": 0,
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "name": "string",
  • "project_group_type": "pro",
  • "is_aligned": false,
  • "is_demo": true,
  • "project_list_url": "string",
  • "latest_project_date": "2019-08-24T14:15:22Z",
  • "latest_project_url": "string",
  • "latest_project_id": 0,
  • "latitude": 0,
  • "longitude": 0,
  • "project_group_thumb": "string",
  • "project_count": "string",
  • "public_share_token": "string",
  • "project_status_count": "string",
  • "owner_uuid": "a528e82a-c54a-4046-8831-44d7f9028f54",
  • "owner_type": "ORG_GRP",
  • "parent_id": "string",
  • "parent_uuid": "77932ac3-028b-48fa-aaa9-4d11b1d1236a",
  • "parent_type": "organization",
  • "coordinate_system": "string",
  • "crs": {
    }
}

Delete the project group

Delete the project group

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project group.

Responses

Get S3 credentials

Get the project group S3 credentials

Retrieve temporary AWS S3 credentials to access the project group folder. These credentials are either read-write or read-only, depending on the user's rights to access the project group.

The bucket and the region returned in the response are specific to the project group, and do not necessarily match the storage location of the projects belonging to the group.

Response

{
  "access_key": "foo",
  "secret_key": "secret",
  "session_token": "session_token",
  "expiration": 17200,
  "bucket": "project-group-bucket",
  "key": "S3-prefix-of-the-project-group-bucket"
  "region": "project-group-region"
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
integer

A unique integer value identifying this project group.

Responses

Response samples

Content type
application/json
{
  • "access_key": "string",
  • "secret_key": "string",
  • "session_token": "string",
  • "expiration": "2019-08-24T14:15:22Z",
  • "bucket": "string",
  • "key": "string",
  • "server_time": "2019-08-24T14:15:22Z",
  • "region": "string"
}

user_account

Retrieve user information

Returns the currently logged-in user's information.

For external system references, the user uuid should be used, not the user id

preferred_units can take the values metric or imperial. license_per_solution_summary contains a value for each solution, which can be:

  • OK: the user has a valid license for this solution (trial, or OTC with S&U or rental)
  • OTC_EXPIRED: the user has an OTC license for this solution but with expired S&U
  • NONE: the user has no valid license for this solution

Note: license_per_solution_summary does not contain a detailed/granular view of the user permissions. Use the permission endpoint for that.

Authorizations:
ClientCredentialsAuthentication

Responses

Response samples

Content type
application/json
{
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "first_name": "string",
  • "last_name": "string",
  • "email": "string",
  • "preferred_units": "metric",
  • "is_staff": "string",
  • "is_partner": "string",
  • "solution": "pro",
  • "license_per_solution_summary": "string",
  • "is_confirmed": "string",
  • "preferred_language": "string",
  • "portal_type": "string",
  • "testing_group": "string",
  • "trial_blacklist_reason": "string",
  • "country": "AF",
  • "city": "string",
  • "zip": "string",
  • "title": "string",
  • "phone": "string",
  • "preferred_theme": "dark",
  • "default_organization": "570b8884-2314-432f-8691-2fac663f140c",
  • "region": "string",
  • "preferred_infra": 0,
  • "is_active": "string",
  • "is_eum_enabled": true,
  • "is_free_domain": true,
  • "auto_topup_store_product": "string",
  • "auto_topup_order_reference": "string",
  • "hubspot_id": "string"
}

permission_tokens

List the tokens

List tokens created for your projects and project groups, or the ones owned by your organization. Tokens can be filtered using query parameters and available fields of the token.

For example, to filter tokens by project type, you can add ?type=Project to the URL.

When used within an organization, one of the following must be specified:

  • organization (with owner_uuid), or
  • resource (with type and type_id).
Authorizations:
ClientCredentialsAuthentication
query Parameters
enabled
boolean
page
integer

A page number within the paginated result set.

page_size
integer

Number of results to return per page.

type
string
Enum: "Project" "ProjectGroup"
  • Project - Project
  • ProjectGroup - Project Group
type_id
integer
write
boolean

Responses

Response samples

Content type
application/json
{}

Create a token

Create a token for a project or project group owned by you or by your organization.

  • type: Project or ProjectGroup.
  • type_id: ID of the project or project group.
  • write: true for read/write tokens. false for read-only tokens.

Payload format:

{
   "type": "Project",
   "type_id": 123,
   "write": true,
   "enabled": true
}

It is recommended to use tokens instead of the deprecated public_url found in projects and project groups.

Authorizations:
ClientCredentialsAuthentication
Request Body schema:
token
string non-empty
enabled
boolean
write
boolean
type
required
string (Type6f2Enum)
Enum: "Project" "ProjectGroup"
  • Project - Project
  • ProjectGroup - Project Group
type_id
required
integer [ -2147483648 .. 2147483647 ]

Responses

Request samples

Content type
{
  • "token": "string",
  • "enabled": true,
  • "write": true,
  • "type": "Project",
  • "type_id": -2147483648
}

Response samples

Content type
application/json
{
  • "token": "string",
  • "enabled": true,
  • "write": true,
  • "type": "Project",
  • "type_id": -2147483648,
  • "creation_date": "2019-08-24T14:15:22Z"
}

Get a token.

Permission token used for sharing access.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string <uuid>

A UUID string identifying this permission token.

Responses

Response samples

Content type
application/json
{
  • "token": "string",
  • "enabled": true,
  • "write": true,
  • "type": "Project",
  • "type_id": 0,
  • "creation_date": "2019-08-24T14:15:22Z",
  • "created_by": "ee824cad-d7a6-4f48-87dc-e8461a9201c4"
}

Update a token

Modify the enabled or write fields of the token.

Payload format:

{
   "write": false,
   "enabled": true
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string <uuid>

A UUID string identifying this permission token.

Request Body schema:
token
string non-empty
enabled
boolean
write
boolean

Responses

Request samples

Content type
{
  • "token": "string",
  • "enabled": true,
  • "write": true
}

Response samples

Content type
application/json
{
  • "token": "string",
  • "enabled": true,
  • "write": true,
  • "type": "Project",
  • "type_id": 0,
  • "creation_date": "2019-08-24T14:15:22Z",
  • "created_by": "ee824cad-d7a6-4f48-87dc-e8461a9201c4"
}

Delete a token

Permission token used for sharing access.

Authorizations:
ClientCredentialsAuthentication
path Parameters
id
required
string <uuid>

A UUID string identifying this permission token.

Responses

annotations

Create new annotations

Create new annotations by passing an array of annotations.

Authorizations:
ShareTokenOAuth2
Request Body schema: application/json
required
Array of objects (AnnotationCreate)

Responses

Request samples

Content type
application/json
{
  • "annotations": [
    ]
}

Response samples

Content type
application/json
{
  • "annotations": [
    ]
}

Search for annotations

List annotations according to specific criteria.

Authorizations:
ShareTokenOAuth2
query Parameters
entity_type
required
string
entity_id
required
integer
last_key
string
page_size
integer
shareToken
string

Responses

Response samples

Content type
application/json
{
  • "next": "string",
  • "results": [
    ]
}

Delete annotations

Delete a list of annotations.

Authorizations:
ShareTokenOAuth2
Request Body schema: application/json
annotations
required
Array of strings (AnnotationId)

Responses

Request samples

Content type
application/json
{
  • "annotations": [
    ]
}

Response samples

Content type
application/json
{
  • "annotations": [
    ]
}

Overwrite an existing annotation

Authorizations:
ShareTokenOAuth2
path Parameters
annotation_id
required
string
Request Body schema: application/json
entity_type
required
string
Value: "Project"

The main entity that the given annotation will refer to.

entity_id
required
integer
object
required
GeoJSONPoint (object) or GeoJSONLineString (object) or GeoJSONPolygon (object) or Circle (object) (Geometries)
object (AnnotationProperties)
Array of objects (AnnotationAttachments)
tags
Array of strings (AnnotationTags)
version
string (AnnotationVersion)

version of an annotation schema in the format of MAJOR.MINOR

Responses

Request samples

Content type
application/json
{
  • "entity_type": "Project",
  • "entity_id": 0,
  • "extension": { },
  • "geometry": {
    },
  • "properties": {
    },
  • "attachments": [
    ],
  • "tags": [
    ],
  • "version": "1.0"
}

Response samples

Content type
application/json
{
  • "entity_type": "Project",
  • "entity_id": 0,
  • "extension": { },
  • "geometry": {
    },
  • "properties": {
    },
  • "attachments": [
    ],
  • "tags": [
    ],
  • "version": "1.0",
  • "id": "Project_123456_12345678-1234-1234-1234-123456789abc",
  • "created": "2021-09-16T07:05:39.610209+00:00",
  • "modified": "2021-09-16T07:05:39.610209+00:00"
}

Delete an existing annotation

Authorizations:
ShareTokenOAuth2
path Parameters
annotation_id
required
string

Responses

Response samples

Content type
application/json
{
  • "title": "'site' is not a valid EntityType"
}

Delete all annotations for an entity

Delete all annotations for a given entity

Authorizations:
ShareTokenOAuth2
query Parameters
entity_type
required
string
entity_id
required
integer
shareToken
string

Responses

Response samples

Content type
application/json
{
  • "title": "Validation Error",
  • "errors": {
    }
}

drive

List folders, projects and project groups

Lists the folders, projects and project groups in the drive root or in a specific folder. This endpoint lists elements only one level deep and is not used to list the hierarchical order of a folder tree. If a user has access within the Organization, but not at the Organization root, then listing the children of the Organization will return the "access points" granted within the organization (though note that any access points nested inside another access point will not be returned).

Path variables:

  • parent_type : It must be one of: organization, user, folder. organization and user are used to list elements in the drive root. folder is used to list elements in a specific folder.
  • parent_uuid : The uuid identifying the organization, user or folder.

    Query parameters:

  • include_projects: Used to filter the elements by projects. By default, it is set to True.
  • include_project_groups: Used to filter the elements by project_groups. By default, it is set to True.
  • ordering: Used to sort the elements either by name or by date. The usage is one of these: ordering=date, ordering=-date, ordering=name, ordering=-name
  • full_metadata: Used to retrieve additional metadata of the drive object. The value is true or false, case-insensitive.

    Responses:

    200: Successful
    400: Invalid full_metadata param
    403: The user is not permitted to access the parent
    404: parent_type or parent_uuid is invalid or does not exist
Authorizations:
ClientCredentialsAuthentication
path Parameters
parent_type
required
string
Enum: "folder" "organization" "user"
parent_uuid
required
string <uuid>
query Parameters
full_metadata
boolean
Default: false
include_project_groups
boolean
Default: true
include_projects
boolean
Default: true
ordering
string
Enum: "-date" "-name" "date" "name"
page
integer
page_size
integer

Responses

Response samples

Content type
application/json
{}

Retrieve the ancestors of a resource

Returns a list of nodes which are the closest ancestors to the specified node.

Users without a role at the organization will have the list limited to the nodes to which they have access within the organization.

Path variables:

  • parent_type : It must be one of: organization, user, folder, project or project_group.
  • parent_uuid : The uuid identifying the organization, user, folder, project or project_group.

    Query parameters:

  • max_path_items: The maximum number of path items (including the current node) to return for the given node. The default value is shown in the example below.

    Responses:

    200: Successful
    400: max_path_items param is invalid
    403: The user is not permitted to access the parent
    404: parent_type or parent_uuid is invalid or does not exist
Authorizations:
ClientCredentialsAuthentication
path Parameters
parent_type
required
string
Enum: "folder" "organization" "project" "project_group" "user"
parent_uuid
required
string <uuid>
query Parameters
max_path_items
integer
Default: 3

Responses

Response samples

Content type
application/json
{
  • "path": [
    ],
  • "owner": {
    },
  • "more_ancestors": true
}

Search for folders, project and project groups

Lists folders, projects and project groups whose (display) names match the supplied query parameter. Searching is case-insensitive and done to any depth in the resource tree of the organization or user specified by the parent_type and parent_uuid parameters.

Path variables:

  • parent_type : It must be either organization or user.
  • parent_uuid : The uuid identifying the organization or user.

    Query parameters:

  • ordering: Used to sort the elements either by name or by date in ascending or descending order, with an initial '-' indicating descending order. e.g. -date or name.
  • full_metadata: Used to retrieve additional metadata of the drive object. The value is true or false, case-insensitive.
  • q: The search text.
  • exclude_grouped_projects: If true then the search results will not include the projects from inside groups.
  • include_projects: Used to filter the elements by projects. By default, it is set to True.
  • include_project_groups: Used to filter the elements by project_groups. By default, it is set to True.

    Responses:

    200: Successful
    400: Invalid full_metadata query param
    403: The user is not permitted to access the parent
    404: parent_type or parent_uuid is invalid or does not exist
Authorizations:
ClientCredentialsAuthentication
path Parameters
parent_type
required
string
Enum: "organization" "user"
parent_uuid
required
string <uuid>
query Parameters
exclude_grouped_projects
boolean
Default: false
full_metadata
boolean
Default: false
include_project_groups
boolean
Default: true
include_projects
boolean
Default: true
ordering
string
Enum: "-date" "-name" "date" "name"
page
integer
page_size
integer
q
required
string

Responses

Response samples

Content type
application/json
{
  • "count": 1,
  • "next": null,
  • "previous": null,
  • "results": []
}

Moves multiple nodes within an organization or user

Moves multiple nodes within an organization.

  • source_nodes : The nodes that will be moved to the target. The source_nodes list must not be empty or contain duplicates, and all nodes must belong to the same parent. Each node is specified as described by SourceNodeRequest in the request body schema below.
  • target_type : The type of the parent where nodes will be moved.
  • target_uuid : The uuid of the parent where nodes will be moved.
  • owner_uuid : The uuid of the organization inside which the nodes are being located. (This is used to check the user has permissions to do this move).
Authorizations:
ClientCredentialsAuthentication
Request Body schema:
owner_uuid
required
string <uuid>
required
Array of objects (SourceNodeRequest)
target_type
required
string (MoveBatchTargetTypeEnum)
Enum: "organization" "user" "folder" "project_group"
  • organization - organization
  • user - user
  • folder - folder
  • project_group - project_group
target_uuid
required
string <uuid>

Responses

Request samples

Content type
{
  • "source_nodes": [
    ],
  • "target_type": "folder",
  • "target_uuid": "941f971b-bb21-47a2-a1da-6b2604ea9429",
  • "owner_uuid": "e55e1729-0467-49c3-82b3-73e9dd88d41e"
}

Create a folder

Creates a new folder inside either the root of the drive or another folder. Required fields in request body:

  • name (max length 255) : Folder name
  • parent_type : The type of the parent of the folder to be created. It must be one of: organization or folder
  • parent_uuid : The uuid of the parent of the folder to be created. It must identify the organization or folder given as parent_type in the request body.

    Responses:

    201: Folder creation successful
    400: parent_type or parent_uuid is invalid
    403: The user is not permitted to access the parent.
Authorizations:
ClientCredentialsAuthentication
Request Body schema:
name
required
string [ 1 .. 255 ] characters
parent_uuid
required
string <uuid>
parent_type
required
string
Enum: "folder" "organization" "user"
  • folder - Folder
  • organization - Organization
  • user - Pixuser

Responses

Request samples

Content type
{
  • "name": "08-june-14:03",
  • "parent_uuid": "d32ad3ae-ab2e-444a-bde3-55e9e01b9401",
  • "parent_type": "organization"
}

Response samples

Content type
application/json
{
  • "uuid": "43d218ce-62f6-42be-b765-c304f1229b11",
  • "name": "example-folder-name"
}

Update a folder

Updates a folder's properties. Currently, we only allow updating the name property. Required fields in request body:

  • name (max length 255) : Folder name

    Responses:

    200: Update successful.
    400: Invalid request body (for example: folder name is too long)
    403: The user is not permitted to access the folder.
Authorizations:
ClientCredentialsAuthentication
path Parameters
uuid
required
string <uuid>

UUID of the folder to update

Request Body schema:
name
string [ 1 .. 255 ] characters

Responses

Request samples

Content type
{
  • "name": "example_name"
}

Response samples

Content type
application/json
{
  • "name": "example_name"
}

Delete a folder and its contents

Deletes a folder along with all its descendants in the resource tree.

Responses:

204: Delete successful.
403: The folder exists, but the user is not allowed to access it
404: The folder does not exist

Authorizations:
ClientCredentialsAuthentication
path Parameters
uuid
required
string <uuid>

UUID of the folder to delete

Responses

access

List users access to a resource

List users granted access to a resource.

Returns a list of members. If there are none, the list will be empty. An additional accessrole_uuid will be returned for each of them, which can be used with the remove_membership / update_role endpoints.

Path variables:

  • resource_type : It must be one of: organization, folder, project or project_group.
  • resource_uuid : The uuid identifying the organization, folder, project or project_group.

Query variables:

  • page: a page number within the paginated results
  • page_size: number of results to return per page
  • name: An (optional) search text which limits the returned results to users who match it in any of their email, first or last name.
  • role: One or more roles that, if specified, limit the response to users with those roles.

Returns HTTP status:

  • 200 on success,
  • 400 if the input is invalid,
  • 403 if the resource_uuid is invalid, the resource does not exist, or the requesting user does not have access to the organization owning it.

Response body in case of an HTTP 200 success response

{
    "count": 3,
    "next": null,
    "previous": null,
    "results": [
        {
            "email": "bob@example.com",
            "access_type": "ORGANIZATION",
            "first_name": "Bob",
            "last_name": "Jones",
            "role": "OWNER",
            "accessrole_uuid": "e30b1f48-f05e-41c2-be14-dcb53792bd2d",
            "accessor_uuid": "b0034aa0-15b3-4699-96e9-7c73ec6265cb"
        },
        {
            "email": "charlie@example.com",
            "access_type": "INHERITED",
            "first_name": "Charlie",
            "last_name": "Forthright",
            "role": "READER",
            "accessrole_uuid": "505b2461-f0dd-4a67-9695-a9abd74747b7",
            "accessor_uuid": "3dfd0afa-936f-49bc-b9e4-7722b0282207"
        },
        {
            "email": "alice@example.com",
            "access_type": "DIRECT",
            "first_name": "Alice",
            "last_name": "Smith",
            "role": "EDITOR",
            "accessrole_uuid": "eabd166e-d2fa-4d19-9df6-15f2b0f1e80f",
            "accessor_uuid": "5cbc4c40-a572-4bd6-b0be-761713fe7d9f"
        }
    ]
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
resource_type
required
string^[a-z][a-z0-9\-_]*$
resource_uuid
required
string^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]...
query Parameters
page
integer
page_size
integer

Responses

Response samples

Content type
application/json
{}

List the roles the current user can assign to other users to access the resource

Returns the list of roles the current user can assign to other users to access the resource. If an assignee_uuid is provided, then the list of roles returned will be limited to roles that the assignee does NOT already have at a higher resource in the tree.

The roles returned will not include roles which would be redundant given the user’s current roles higher up the path.

Response body in case of an HTTP 200 success response

{
  "roles": [
     "MANAGER", "EDITOR"
  ]
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
resource_type
required
string^[a-z][a-z0-9\-_]*$
resource_uuid
required
string^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]...

Responses

Response samples

Content type
application/json
{
  • "roles": [
    ]
}

Get the details of a user's access to a resource

Get the details of a user's access for a given resource type and UUID.

Returns the details of a member accessing the given resource.

Path variables:

  • resource_type : It must be one of: organization, folder, project or project_group.
  • resource_uuid : The uuid identifying the organization, folder, project or project_group.

Returns HTTP status:

  • 200 on success,
  • 400 if the input is invalid,
  • 403 if the resource_uuid is invalid, the resource does not exist, or the requesting user does not have access to the organization owning it.

Response body in case of an HTTP 200 success response

{
    "access_type": "INHERITED",
    "role": "OWNER",
    "uuid": "e30b1f48-f05e-41c2-be14-dcb53792bd2d",
    "resource_uuid": "b3c1d98a-f4e6-4d58-ae3e-0cbba97e5e79",
    "resource_type": "organization",
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
resource_type
required
string^[a-z][a-z0-9\-_]*$
resource_uuid
required
string^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]...

Responses

Response samples

Content type
application/json
{
  • "access_type": "string",
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "role": "string",
  • "resource_uuid": "9a3c106a-0244-4962-b5f2-052f4eb77461",
  • "resource_type": "string"
}

List pending invitations to a resource

List invitations to a resource.

Returns a list of invitations. If there are none, the list will be empty.

Path variables:

  • resource_type : It must be one of: organization, folder, project or project_group.
  • resource_uuid : The uuid identifying the organization, folder, project or project_group.

Query variables:

  • page: a page number within the paginated results
  • page_size: number of results to return per page
  • name: An (optional) search text which limits the returned results to invited users who match it in their email.
  • role: One or more roles that, if specified, limit the response to users with those roles.

Returns HTTP status:

  • 200 on success,
  • 400 if the input is invalid,
  • 403 if the resource_uuid is invalid, the resource does not exist, or the requesting user does not have access to the organization owning it.

Response body in case of an HTTP 200 success response

{
    "count": 3,
    "next": null,
    "previous": null,
    "results": [
        {
            "email": "bob@example.com",
            "uuid": "f4d30216-1de8-4483-9e81-da37fd15f282",
            "expires_on": "2021-03-15T14:38:18.493014Z",
            "invitation_status": "pending",
            "role": "EDITOR"
        }
    ]
}
Authorizations:
ClientCredentialsAuthentication
path Parameters
resource_type
required
string^[a-z][a-z0-9\-_]*$
resource_uuid
required
string^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]...

Responses

Response samples

Content type
application/json
{
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "expires_on": "2019-08-24T14:15:22Z",
  • "role": "string",
  • "email": "string",
  • "invitation_status": "string"
}

Invites users or yet to be users to access a resource, OR updates the role of such users.

Invites users, or people who are not yet users, to access a resource, or updates the role of such users.

An email address should be provided for each person invited. Because users are specified by email address the caller does not need to know whether they are members of the organization or not.

A role parameter specifies a role that is to be applied to all users.

Finally, a redirect_url parameter specifies a page (using https and in the pix4d.com domain) that a user will be redirected to upon accepting the invitation.

Optionally, a resource_name can be provided; it is used in the invitation email for external resources.

One of two emails will be sent to each user depending on whether they are members of the organization or not.

Path variables:

  • resource_type : It must be one of: organization, folder, project or project_group.
  • resource_uuid : The uuid identifying the resource.

Returns HTTP status:

  • 201 on success,
  • 400 if the input is invalid,
  • 403 if the resource_uuid is invalid, the resource does not exist, or the requesting user does not have access to the organization owning it.

Response body in case of an HTTP 200 success response

[
    {
        "email": "newly-created@example.com",
        "access_type": "DIRECT",
        "first_name": "Bob",
        "last_name": "Jones",
        "role": "MANAGER",
        "accessrole_uuid": "e30b1f48-f05e-41c2-be14-dcb53792bd2d"
    },
    {
        "email": "invited@pix3d.com",
        "expires_on": "2023-07-17T00:02:07.282222Z",
        "invitation_status": "pending",
        "role": "MANAGER",
        "uuid": "564bc4f1-43a0-4be2-908d-11442cb73efb"
    }
]
Authorizations:
ClientCredentialsAuthentication
path Parameters
resource_type
required
string^[a-z][a-z0-9\-_]*$
resource_uuid
required
string^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]...
Request Body schema:
invitees
required
Array of strings <email> [ 1 .. 500 ] items [ items <email > non-empty ]
role
required
string (RoleEnum)
Enum: "EDITOR" "MANAGER" "OWNER" "READER"
  • EDITOR - Editor
  • MANAGER - Manager
  • OWNER - Owner
  • READER - Reader
resource_name
string non-empty

Responses

Request samples

Content type
{
  • "invitees": [
    ],
  • "role": "EDITOR",
  • "resource_name": "string"
}

Response samples

Content type
application/json
{
  • "uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f",
  • "expires_on": "2019-08-24T14:15:22Z",
  • "role": "string",
  • "email": "string",
  • "invitation_status": "string"
}

Remove access to a resource for a user

Remove access to a resource for a user specified by accessrole uuid. If the resource is an organization then this will remove all access for the given user. Otherwise, for other resource types, note that this only removes access at the resource specified. The user may still be able to access the resource via access set at the organization or an intermediate resource.

Path variables:

  • resource_type : It must be one of: organization, folder, project or project_group.
  • resource_uuid : The uuid identifying the organization, folder, project or project_group.
  • accessrole_uuid : uuid of the access role to be removed.

Returns HTTP status:

  • 204 on success,
  • 400 if the request is invalid: either an invalid accessrole_uuid supplied for membership or the accessrole passed did not correspond to the resource specified,
  • 403 if the member does not exist, is not part of the organization, or if a user tries to remove themselves.
Authorizations:
ClientCredentialsAuthentication
path Parameters
accessrole_uuid
required
string^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]...
resource_type
required
string^[a-z][a-z0-9\-_]*$
resource_uuid
required
string^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]...

Responses

Release notes

Release notes: 19-05-2025

  • Added access management for resources.
  • Added narrative documentation for using Folder and Drive endpoints.

Release notes: 06-05-2025

  • Added information about how to download OPF projects.

Release notes: 01-05-2025

  • Removed references to the discontinued vegetation pipeline.

Release notes: 10-04-2025

  • Added a more detailed explanation about Sites and assigning Projects to them.

Release notes: 02-04-2025

  • Included the Folder and Drive endpoints.

Release notes: 14-03-2025

  • Moved the documentation for PIX4Dmapper on PIX4Dcloud to a deprecated section, as processing using this engine is being phased out. Provisional date for its removal is January 2026.

Release notes: 13-01-2025

  • The region of interest is now included among the processing options to limit the extent of the reconstructions that will be created when processing a project.

Release notes: 03-10-2024

  • Removed the deprecated AutoGCP for PIX4Dmapper pipeline endpoint documentation. AutoGCP is now available through the Nadir and Oblique pipelines instead.

Release notes: 06-03-2024

  • Included the MTP endpoints and added information to the GCP example to explain the usage.

Release notes: 16-01-2023

  • Update to use Spectacular library to generate project and user_data documentation.

Release notes: 18-05-2022

Release notes: 15-12-2021

  • Support of multiple credentials under one PIX4Dengine cloud license (See Authentication)

Release notes: 12-10-2021

On October 12th 2021, the Release Notes section was inaugurated. From now on, new implementations will be listed in this section together with the date when they are released.

As of today, the features included in the API are:

  • Create projects or sites
  • Process projects:
    • Using the standard PIX4Dmapper processing pipeline
      • With default or custom templates
      • An output coordinate system can be defined
      • GCPs and AutoGCPs are supported
  • Compute volumes
  • Download and retrieve the results
  • Embed the complete 2D/3D editor
  • Generate a sharing link

Deprecated Documentation

As our products evolve, we occasionally need to remove and decommission functionality.

To ensure continuity for our API clients, documentation will continue to be available here until the feature is finally removed.

We recommend reaching out to your PIX4D Sales/Support contact for assistance with choosing the best migration method away from these deprecated services.

PIX4Dmapper processing

PIX4Dmapper processing : EOL Jan 2026

POST on https://cloud.pix4d.com/project/api/v3/projects/{id}/start_processing/. See the full documentation for this endpoint.

The body request includes:

{
  "custom_template_s3_key": "string",
  "custom_template_s3_bucket": "string",
  "standard_template": "string",
  "tags": ["string"]
}

Standard processing with PIX4Dmapper

  • With a default template
  • With a user-defined template

Processing with PIX4Dmapper allows computing with different sets of parameters, called "templates". Depending on the type of flight and the desired outputs, a different template can be selected.

Processing with a default template

There are different default templates that can be used by PIX4Dmapper. Detailed information about all of them can be found in this support article

To use a default template, pass the name of the template in the value of the standard_template key in the request body.

{
  "standard_template": "<template-name>"
}

The strings which correspond to each of the default templates are listed below:

  • 3d-maps : 3D Maps
  • 3d-maps-rapid : 3D Maps - Rapid/Low Res
  • 3d-models : 3D Models
  • 3d-models-rapid : 3D Models - Rapid/Low Res
  • ag-modified-camera : Ag Modified Camera
  • ag-modified-camera-rapid : Ag Modified Camera - Rapid/Low Res
  • ag-multispectral : Ag Multispectral
  • ag-rgb : Ag RGB
  • ag-rgb-rapid : Ag RGB - Rapid/Low Res
  • thermal-camera : Thermal Camera
  • thermomap-camera : ThermoMAP Camera

For example, in order to process a project with the standard 3D model template:

Send a POST request to https://cloud.pix4d.com/project/api/v3/projects/{id}/start_processing/ with the following body:

{
  "standard_template": "3d-models"
}
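
A curl sketch of the same request (the token and project ID are placeholders):

# Start PIX4Dmapper processing with the standard "3D Models" template.
curl --request POST \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"standard_template": "3d-models"}' \
  https://cloud.pix4d.com/project/api/v3/projects/<PROJECT_ID>/start_processing/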

Processing with a user-defined template

Even though there are several default templates that cover most use cases, it is also possible to create your own template for processing.

Information regarding how to create a template file (*.tmpl) can be found in this support article

It is recommended to use the PIX4Dmapper user interface as much as possible to create templates and export them to a file. PIX4Dmapper can be downloaded from the download-page.

Once the .tmpl file exists, there are two possibilities:

  • It is uploaded to an existing customer's S3 bucket (with "public read" permissions for our system to be able to access it)
  • It is uploaded to our S3 bucket. In this case, it has to be done for each project

In both cases, in order to process a project with a user-defined template:

The request body must specify the custom_template_s3_key and the custom_template_s3_bucket:

{
  "custom_template_s3_key": "string",
  "custom_template_s3_bucket": "string"
}

Errors

PIX4Dmapper pipelines provide error reasons only, not error codes.

How to process a project with an existing .p4d file

The API offers the option to upload an existing .p4d file and process with it. If that is the case, the information in that file will be taken into account during the processing.

In order to process with an existing .p4d file, the workflow is as follows:

1. Create the .p4d file

It is recommended to use the PIX4Dmapper user interface as much as possible to create the .p4d file. Learn how to download Pix4Dmapper on this article

In order to create the project, open PIX4Dmapper and follow the steps explained on this support page

2. Upload and register the photos

Please follow the steps explained in the Upload the photos section of the first example guide.

3. Upload and register the .p4d file

  • Upload the .p4d file to the correct S3 bucket and S3 key

Provided your .p4d file is located in the folder $HOME/p4d, you can use the AWS CLI and type:

aws s3 cp ./p4d/some_filename.p4d "s3://${S3_BUCKET}/${S3_BASE_PATH}/"

Make sure the S3 bucket and S3 key are correct.

  • Register the .p4d file

POST on https://cloud.pix4d.com/project/api/v3/projects/{id}/extras/

Request body:

{
  "file_key": "${S3_BASE_PATH}/some_filename.p4d"
}
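
A curl sketch of the registration call (the token, project ID and S3 base path are placeholders, matching the upload step above):

# Register the uploaded .p4d file with the project.
curl --request POST \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"file_key": "<S3_BASE_PATH>/some_filename.p4d"}' \
  https://cloud.pix4d.com/project/api/v3/projects/<PROJECT_ID>/extras/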

4. Process the project

POST on https://cloud.pix4d.com/project/api/v3/projects/{id}/start_processing/ ({id} is the project ID)

All the information contained in the .p4d file will be used in the computation: PIX4Dmapper version, coordinate systems, processing options, etc. This article explains the processing options which can be selected by the user:

  • Point cloud and 3D mesh are produced if Step 2 is selected
  • DSM and Orthophoto are produced if Step 3 is selected
  • If only Step 1 is selected, there will be no outputs; only the calibration step will be computed