Edge API Function Reference

  1. Configuration and Calibration
    1. get_projection_matrix
    2. patch_configuration_by_product
    3. patch_configuration
    4. load_configuration
    5. handeye_calibration
  2. Image Acquisition
    1. trigger
    2. save_image
  3. Result Retrieval
    1. get_pose
    2. get_number_of_detections
    3. estimate_surface_distance

Configuration and Calibration

get_projection_matrix

get_projection_matrix(product_id)

Triggers an image capture and updates the intrinsic camera parameters associated with a product. For optimal precision, it is recommended to run intrinsic calibration just before arming a product and running inference.

Arguments:

  • product_id (str): Identifier of the product for which the calibration is performed.

Return Value:

A 3×3 intrinsic projection matrix, flattened in column-major order.
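
For illustration, a minimal sketch of unpacking the return value with NumPy, assuming the Edge API functions are available in the calling scope (the product id is a placeholder):

import numpy as np

# Capture an image and retrieve the flattened intrinsics
# ('my-product' is a placeholder product id).
flat = get_projection_matrix('my-product')

# The nine values are in column-major order, so reshape with order='F'.
K = np.array(flat).reshape((3, 3), order='F')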

patch_configuration_by_product

patch_configuration_by_product(product_id, service, workflow, key, value)

Overwrites a (nested) value in a configuration. The configuration to edit is retrieved based on a product id, service name, and workflow. If the configuration does not exist, it is created.

Arguments:

  • product_id (string): Database id of the product to which the configuration belongs.
  • service (string): Service to which the configuration belongs.
  • workflow (string): Workflow to which the configuration belongs.
  • key (string): Path of the property in the configuration object to patch.
  • value: Desired new value.
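
A hypothetical example; the service name, workflow, and dotted key path below are illustrative assumptions, not values defined by this reference:

# Set a nested threshold in a product's workflow configuration.
# Service name, workflow, and key path are illustrative only.
patch_configuration_by_product('my-product-id',
                               'detection.vathos.net',
                               'votenet',
                               'postprocessing.score_threshold',
                               0.5)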

patch_configuration

patch_configuration(config_id, key, value)

Overwrites a (nested) value in a configuration.

The configurations for a product, device, and service can be obtained through calls against the accompanying REST API.

Arguments:

  • config_id (string): Database id of the configuration to patch.
  • key (string): Path of the property in the configuration object to patch.
  • value: Desired new value.
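
For example, with a configuration id previously obtained from the REST API (the id and key path below are placeholders):

# 'my-config-id' would be retrieved via the accompanying REST API;
# the key path and value are illustrative.
patch_configuration('my-config-id', 'camera.exposure', 15000)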

load_configuration

load_configuration(product, workflow)

Triggers a reload of the inference pipeline's configuration.

Arguments:

  • product (string): Product identifier.
  • workflow (string): Name of the workflow to configure.
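
For example (product identifier and workflow name are placeholders):

# Reload the configuration of the 'votenet' workflow for a product.
load_configuration('my-product', 'votenet')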

handeye_calibration

handeye_calibration(session,
                    product_id,
                    representation='vector',
                    degrees=False,
                    eye_in_hand=False,
                    pattern_side_length=0.025,
                    pattern_width=10,
                    pattern_height=7,
                    service='2d.handeye.calibration.vathos.net')

Starts a hand-eye calibration based on a series of input images and poses.

Arguments:

  • session (str): A common session name under which the images for this calibration were stored.
  • product_id (str): Identifier of the product for which the calibration is performed.
  • representation (str): Representation of the rotational part of the pose returned. See the scipy reference for a list of supported values.
  • degrees (bool): True if rotational part of the pose is defined in degrees, False if defined in radians.
  • eye_in_hand (bool): True if the camera is mounted on the end-effector of the robot, False for static cameras.
  • pattern_side_length (float): Length of squares in the pattern in meters.
  • pattern_width (int): Number of inner corners of the pattern in horizontal direction.
  • pattern_height (int): Number of inner corners of the pattern in vertical direction.
  • service (str): Either '2d.handeye.calibration.vathos.net' or '3d.handeye.calibration.vathos.net' depending on whether depth should be considered during calibration.

Return Value:

A list of 6 real numbers representing the desired pose. The pose uses the same rotation parametrization and units as the input poses.
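
A sketch of a complete call, assuming images were previously stored under a common session name with save_image; all values below are illustrative:

pose = handeye_calibration(
    'calibration-session-1',     # session name used when saving the images
    'my-product',
    representation='vector',     # axis-angle rotation
    degrees=False,
    eye_in_hand=True,            # camera mounted on the end-effector
    pattern_side_length=0.025,   # 25 mm squares
    pattern_width=10,            # inner corners, horizontal
    pattern_height=7,            # inner corners, vertical
    service='3d.handeye.calibration.vathos.net')
# pose is a list of 6 floats in the same representation/units as the inputs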

Image Acquisition

trigger

trigger(workflow, wait_for_camera=False, modality='depth')

Triggers the inference pipeline asynchronously, i.e., the function returns as soon as image capture has started, without waiting for the first detection to arrive. Polling mechanisms may be built into the service-specific functions that are called to obtain the result of an image analysis.

Arguments:

  • workflow (string): Name of inference pipeline to run.
  • wait_for_camera (boolean): True if this function should return only after image acquisition is complete. This is useful when the camera is mounted on the end-effector of the robot and moving it while capture is in progress would cause image blur. False if the function should return immediately after triggering (e.g., when the camera is static).
  • modality (string): Either 'depth' or 'rgb'. Selects the pipeline to trigger in case two sensors of different modalities are connected.
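
For example, for a wrist-mounted camera (the workflow name is a placeholder):

# Block until acquisition is complete so the robot can move without
# causing motion blur; results are retrieved later, e.g. via get_pose.
trigger('votenet', wait_for_camera=True, modality='depth')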

save_image

save_image(stream_no, pose, representation='vector', degrees=False)

Triggers an image capture and stores the current camera pose with it. Note that only the data is stored; no analysis takes place.

Arguments:

  • stream_no (integer): The number of the stream (e.g., depth or RGB) to save. When a negative integer is passed, all streams are saved. This is necessary for setting service='3d.handeye.calibration.vathos.net' in handeye_calibration.
  • pose (list): List containing 6 floats, representing the target TCP pose. The first three values denote the position in space, the last three a rotation/orientation encoded as defined by representation.
  • representation (string): Representation of the rotational part of the pose. Currently only the keyword "vector" for the axis-angle representation and "ZYX" for the global Euler angles used by KUKA robots are supported.
  • degrees (boolean): True if rotational part of the pose is defined in degrees, False otherwise.

Return Value:

A list of strings, each representing an id of the stored image’s metadata.
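
A sketch of saving all streams together with the current TCP pose, e.g. while collecting hand-eye calibration data; the pose values are illustrative:

# A negative stream_no saves all streams (needed for 3D hand-eye calibration).
# Pose: x, y, z in meters, followed by an axis-angle rotation vector in radians.
ids = save_image(-1,
                 [0.40, -0.10, 0.55, 0.0, 3.1416, 0.0],
                 representation='vector',
                 degrees=False)
print(ids)  # metadata ids of the stored images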

Result Retrieval

get_pose

get_pose(workflow='votenet', representation='vector', degrees=False, timeout=10)

Pops a detection from the detection queue.

New object detections arising from the analysis of a single image are stored in a queue as soon as they are ready. This enables the system to continue image analysis while the robot processes the first detected item. This function retrieves the object detection with the highest priority from the queue. Different priorities can be configured, such as distance from the camera, longitudinal position on a conveyor belt, or confidence.

Arguments:

  • workflow (string): Inference service to address request to.
  • representation (string): Representation of the rotational part of the pose returned. See the scipy reference for a list of supported values.
  • degrees (boolean): True if rotational part of the pose is defined in degrees, False if defined in radians.
  • timeout (int): Maximum time to wait for new detections to be computed.

Return Value:

A list of 14 floats representing the target TCP pose, i.e., the desired grip for the detected object:

  • Elements 1-3: position in space.
  • Elements 4-6: orientation encoded as defined by representation.
  • Element 7: stable state of the detected product.
  • Element 8: product index if a classification is performed.
  • Elements 9-14: offset of the selected grip from the default grip.

If no more detections are available at the moment, elements 7 and 8 are set to -1. When multiple grips are defined, the service looks for the first one that is free of collisions; the first in the series of allowed grips is assumed to be the default grip. The offset elements describe the position/orientation of the selected grip relative to the default grip, which is useful for compensating grip changes at place time. When only one grip is defined per state, the last 6 elements of the return value are all 0.

All dimensional variables processed or returned by the API are assumed to be in the SI base unit meters. Most robot vendors, however, choose to represent length in millimeters. Conversion between the two is the responsibility of the client application.
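
A sketch of retrieving one detection and converting the position to millimeters for a robot controller; the workflow name is a placeholder:

result = get_pose(workflow='votenet', representation='vector', timeout=10)

if result[6] == -1:
    # No more detections available for the current image.
    pass
else:
    position_mm = [1000.0 * v for v in result[0:3]]  # meters -> millimeters
    orientation = result[3:6]        # encoded as defined by representation
    state, product_index = result[6], result[7]
    grip_offset = result[8:14]       # all zeros if only one grip per state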

get_number_of_detections

get_number_of_detections(workflow='votenet')

Returns the size of the detection queue.

New object detections arising from the analysis of a single image are stored in a queue as soon as they are ready. This enables the system to continue image analysis while the robot processes the first detected item. This function returns the number of computed (but not yet retrieved) detections for the current image.

Arguments:

  • workflow (string): Inference service to address request to.

Return Value:

An integer value equalling the size of the detection queue.
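
For example, the detection queue can be drained as follows (the workflow name is a placeholder):

# Retrieve every detection computed for the current image.
for _ in range(get_number_of_detections('votenet')):
    detection = get_pose(workflow='votenet')
    # ... hand the pose over to the robot program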

estimate_surface_distance

estimate_surface_distance(radius)

Returns the distance of the closest point in a circular region around the image center.

This function can be helpful, for example, for estimating the rough fill level of a magazine.

Arguments:

  • radius (float): Radius of the circular region in meters.

Return Value:

Distance of the closest point within the circular region around the image center.
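
A sketch of a rough fill-level check; the radius and the empty-magazine distance below are illustrative:

# Distance from the camera to the closest surface point near the image center.
distance = estimate_surface_distance(0.05)  # 5 cm radius

empty_distance = 0.80                   # distance to the empty magazine bottom, meters
fill_height = empty_distance - distance  # rough height of the contents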

