
MclassGetResultEntry

| Board | Supported |
|---|---|
| Host System | Yes |
| V4L2 | Yes |
| Clarity UHD | Yes |
| Concord PoE | No |
| GenTL | Yes |
| GevIQ | Yes |
| GigE Vision | Yes |
| Indio | No |
| Iris GTX | Yes |
| Radient eV-CL | Yes |
| Rapixo CL | Yes |
| Rapixo CoF | Yes |
| Rapixo CXP | Yes |
| USB3 Vision | Yes |

Get results from an entry in a dataset.

Syntax

```c
void MclassGetResultEntry(
   AIL_ID    DatasetContextClassId, //in
   AIL_INT64 EntryIndex,            //in
   AIL_UUID  EntryKey,              //in
   AIL_INT64 TaskType,              //in
   AIL_INT64 LabelOrIndex,          //in
   AIL_INT64 ResultType,            //in
   void      *ResultArrayPtr        //out
)
```

Description

This function retrieves results of the specified type from an entry in an images or a features dataset context.

Before retrieving results from a dataset, you must call MclassTrain or MclassPredict with it. If you want to get results from a dataset after calling MclassTrain, you must copy the dataset-specific results from the result buffer that MclassTrain produces to a dataset (MclassCopyResult) before calling MclassGetResultEntry. This copy is not required for MclassPredict, since it can write results directly in the dataset.

Parameters

DatasetContextClassId (in, AIL_ID)

Specifies the identifier of the images or features dataset context from which to get results. This context is allocated using MclassAlloc with M_DATASET_IMAGES or M_DATASET_FEATURES. Before retrieving results, you must have either called MclassTrain and copied the results using MclassCopyResult, or called MclassPredict with this dataset.

EntryIndex (in, AIL_INT64)

Specifies the index of the entry for which to get results.

For specifying the entry's index

| Value | Description |
|---|---|
| M_DEFAULT | Specifies that the EntryIndex parameter is not required. |
| Value >= 0 | Specifies the index of an entry. |

EntryKey (in, AIL_UUID)

Specifies the key (AIL_UUID) of the entry for which to get results. The key is defined as an Aurora Imaging Library universal unique identifier (UUID).

For specifying the entry's key

| Value | Description |
|---|---|
| M_DEFAULT_KEY | Specifies that the EntryKey parameter is not required. |
| AIL_UUID Value | Specifies the key (AIL_UUID) of an entry. |

TaskType (in, AIL_INT64)

Specifies the task (classification, segmentation, object detection, or anomaly detection) for which to get results. An entry in an images dataset can contain classification, segmentation, object detection, or anomaly detection results. An entry in a features dataset can only contain classification results.

For specifying the type of task for which to get results

| Value | Description |
|---|---|
| M_DEFAULT | Same as M_AUTO. |
| M_ANOMALY_DETECTION | Specifies that you are getting anomaly detection results. |
| M_AUTO | Automatically determines the task for which to get results, given the content of the dataset used (DatasetContextClassId). |
| M_CLASSIFICATION | Specifies that you are getting classification results (image or feature). |
| M_OBJECT_DETECTION | Specifies that you are getting object detection results. |
| M_SEGMENTATION | Specifies that you are getting segmentation results. |

M_AUTO resolves as follows:

If the selected dataset is an images dataset and it contains only one type of result, M_AUTO is the same as the corresponding task: M_CLASSIFICATION for image classification results, M_SEGMENTATION for segmentation results, M_OBJECT_DETECTION for object detection results, and M_ANOMALY_DETECTION for anomaly detection results.

If the dataset contains two types of results, M_AUTO is the same as M_SEGMENTATION (if segmentation results are present) or M_CLASSIFICATION (if no segmentation results are present). If a result type (ResultType) is specified that only applies to object detection, M_AUTO is the same as M_OBJECT_DETECTION.

If the dataset contains image classification, segmentation, object detection, and anomaly detection results, M_AUTO is the same as M_SEGMENTATION.

If the selected dataset is a features dataset, M_AUTO is the same as M_CLASSIFICATION, since segmentation and object detection are not supported for a features dataset.

To determine whether M_CLASSIFICATION, M_SEGMENTATION, M_OBJECT_DETECTION, or M_ANOMALY_DETECTION results are available in the dataset, use the M_PREDICT_INFO result. Typically, a dataset should be set up for a specific task (image classification, segmentation, object detection, or anomaly detection).
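The M_AUTO resolution rules above can be sketched as a small decision function. This is a self-contained C illustration; `ResolveAutoTask` and the `Task` enum are hypothetical names, not part of the Aurora Imaging Library, and the object-detection-specific ResultType override is omitted for brevity.

```c
#include <stdbool.h>

/* Hypothetical task codes for illustration; the library defines its own
   M_CLASSIFICATION, M_SEGMENTATION, M_OBJECT_DETECTION, and
   M_ANOMALY_DETECTION constants. */
typedef enum
{
   TASK_CLASSIFICATION,
   TASK_SEGMENTATION,
   TASK_OBJECT_DETECTION,
   TASK_ANOMALY_DETECTION
} Task;

/* Sketch of the M_AUTO rules described above. The real resolution also
   considers whether the requested ResultType applies only to object
   detection; that case is not modeled here. */
Task ResolveAutoTask(bool IsFeaturesDataset,
                     bool HasClassification,
                     bool HasSegmentation,
                     bool HasObjectDetection,
                     bool HasAnomalyDetection)
{
   /* Features datasets only support classification. */
   if (IsFeaturesDataset)
      return TASK_CLASSIFICATION;

   int Count = (int)HasClassification + (int)HasSegmentation +
               (int)HasObjectDetection + (int)HasAnomalyDetection;

   /* Only one result type present: resolve to the matching task. */
   if (Count == 1)
   {
      if (HasSegmentation)     return TASK_SEGMENTATION;
      if (HasObjectDetection)  return TASK_OBJECT_DETECTION;
      if (HasAnomalyDetection) return TASK_ANOMALY_DETECTION;
      return TASK_CLASSIFICATION;
   }

   /* Two or more result types: segmentation wins if present,
      otherwise classification. */
   return HasSegmentation ? TASK_SEGMENTATION : TASK_CLASSIFICATION;
}
```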

LabelOrIndex (in, AIL_INT64)

Specifies the class for which to get the entry result, or specifies that you are getting a general entry result. The values in this table are available for all tasks unless otherwise specified. Set this parameter to one of the following values.

For specifying the class for which to get the entry result, or to get a general entry result

| Value | Description |
|---|---|
| M_DEFAULT | Same as M_GENERAL. |
| M_GENERAL | Specifies to get a general entry result. This is the default value. |
| M_CLASS_INDEX() | Specifies the index of the class for which to get the entry result. To get the number of classes, use M_NUMBER_OF_CLASSES. |
| M_INSTANCE_INDEX() | Specifies the index of the instance for which to get results. You can get the number of included instances in the result buffer using M_NUMBER_OF_INSTANCES. This is only available for images datasets with object detection results. |
| M_ALL_INSTANCES | Specifies to retrieve results for all instances. This is only available for images datasets with object detection results. |

ResultType (in, AIL_INT64)

Specifies the type of result to retrieve from the dataset entry.

ResultArrayPtr (out, void *)

Specifies the address of the array in which to write results.

Parameter Associations

For retrieving general entry results (images or features dataset)

To retrieve general results from an entry in the images or features dataset (DatasetContextClassId), the ResultType parameter can be set to one of the following values. In this case, you must set the LabelOrIndex parameter to M_GENERAL unless otherwise specified. You can retrieve results for any task (TaskType).


M_BEST_CLASS_INDEX

Retrieves the index of the class with the highest score for the entry. The index of the class begins at 0.

For classification, M_BEST_CLASS_INDEX retrieves one index.

For segmentation, M_BEST_CLASS_INDEX retrieves an array of values (indices) that can be arranged in a 2D image, where the size is equal to M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y. Each value is the index of the best class score for that (X, Y) pixel location.

For object detection, to retrieve the class index with the highest score for all instances, set the LabelOrIndex parameter to M_ALL_INSTANCES. The size of the retrieved array is equal to the number of instances (MclassGetResultEntry with M_NUMBER_OF_INSTANCES and LabelOrIndex set to M_GENERAL), and the array contains the class index with the highest score for each instance. To retrieve the class index with the highest score for all instances of a given class, set the LabelOrIndex parameter to M_CLASS_INDEX(); the size of the retrieved array is then equal to the number of instances of that class (MclassGetResultEntry with M_NUMBER_OF_INSTANCES and LabelOrIndex set to M_CLASS_INDEX()). To retrieve the class index with the highest score for a specific instance, set the LabelOrIndex parameter to M_INSTANCE_INDEX().

| Value | Description |
|---|---|
| Value >= 0 | Specifies the index of the class with the highest score for the entry. |
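For segmentation, the relationship between the planar score volume and the best-index image can be sketched as follows. This is a self-contained C illustration with made-up data; `BestClassIndexImage` is a hypothetical helper, not a library function.

```c
#include <stddef.h>

/* For each (X, Y) location, find the class with the highest score in a
   planar score volume: SizeX * SizeY scores for class 0 come first, then
   SizeX * SizeY scores for class 1, and so on. The output array holds one
   class index per pixel, as for M_BEST_CLASS_INDEX with segmentation. */
void BestClassIndexImage(const double *Scores, int SizeX, int SizeY,
                         int NumClasses, int *BestIndex)
{
   for (int Y = 0; Y < SizeY; Y++)
      for (int X = 0; X < SizeX; X++)
      {
         int    Best      = 0;
         double BestScore = Scores[(size_t)Y * SizeX + X];
         for (int C = 1; C < NumClasses; C++)
         {
            double S = Scores[((size_t)C * SizeX * SizeY) +
                              ((size_t)Y * SizeX) + X];
            if (S > BestScore) { BestScore = S; Best = C; }
         }
         BestIndex[Y * SizeX + X] = Best;
      }
}
```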

M_BEST_CLASS_SCORE

Retrieves the highest class score for the entry.

For classification, M_BEST_CLASS_SCORE retrieves one score.

For segmentation, M_BEST_CLASS_SCORE retrieves an array of values (scores) that can be arranged in a 2D image, where the size is equal to M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y. Each value is the best class score for that (X, Y) pixel location.

For object detection, to retrieve the highest score for all instances, set the LabelOrIndex parameter to M_ALL_INSTANCES. The size of the retrieved array is equal to the number of instances (MclassGetResultEntry with M_NUMBER_OF_INSTANCES and LabelOrIndex set to M_GENERAL), and the array contains the highest score for each instance. To retrieve the highest score for all instances of a given class, set the LabelOrIndex parameter to M_CLASS_INDEX(); the size of the retrieved array is then equal to the number of instances of that class (MclassGetResultEntry with M_NUMBER_OF_INSTANCES and LabelOrIndex set to M_CLASS_INDEX()). To retrieve the highest score for a specific instance, set the LabelOrIndex parameter to M_INSTANCE_INDEX().

| Value | Description |
|---|---|
| 0.0 <= Value <= 100.0 | Specifies the best class score. |

M_IMAGE_PREDICTED_ANOMALOUS

Retrieves the anomaly class index at the image level.

| Value | Description |
|---|---|
| M_FALSE | Specifies that the image is normal. |
| M_TRUE | Specifies that the image is anomalous. |

M_IMAGE_SCORE

Retrieves the anomaly score of the image.

| Value | Description |
|---|---|
| 0.0 <= Value <= 100.0 | Specifies the image's anomaly score. |

M_MASK_IMAGE

Retrieves the anomaly class index for each pixel in the image.

| Value | Description |
|---|---|
| M_FALSE | Specifies that the pixel is normal. |
| M_TRUE | Specifies that the pixel is anomalous. |

M_NUMBER_OF_CLASSES

Retrieves the total number of classes available. This refers to the number of outputs in the classifier's last layer.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the number of classes. |

M_PIXEL_SCORES

Retrieves the anomaly score of each pixel in the image.

| Value | Description |
|---|---|
| 0.0 <= Value <= 100.0 | Specifies the pixel's anomaly score. |

For retrieving general entry results (images dataset)

To retrieve general results from an entry in the images dataset (DatasetContextClassId), the ResultType parameter can be set to one of the following values. In this case, you must set the DatasetContextClassId parameter to the identifier of an images dataset, and the LabelOrIndex parameter to M_GENERAL. You can retrieve results for any task (TaskType).


M_BEST_INDEX_IMAGE_TYPE

Retrieves the image type to provide when calling MclassDraw or MclassDrawEntry with M_DRAW_BEST_INDEX_IMAGE or M_DRAW_BEST_INDEX_CONTOUR_IMAGE.

| Value | Description |
|---|---|
| M_UNSIGNED + 8 | Specifies that the image type is 8-bit unsigned. |
| M_UNSIGNED + 16 | Specifies that the image type is 16-bit unsigned. |

M_CLASSIFIER_PREDEFINED_TYPE

Retrieves the type of the predefined classifier.

| Value | Description |
|---|---|
| M_ADNET | Specifies an ADNet classifier context for anomaly detection. |
| M_CSNET_COLOR_XL | Specifies an extra large CSNet classifier context that is for color images. |
| M_CSNET_M | Specifies a medium CSNet classifier context. |
| M_CSNET_MONO_XL | Specifies an extra large CSNet classifier context that is for monochrome images. |
| M_CSNET_S | Specifies a small CSNet classifier context. |
| M_CSNET_XL | Specifies an extra large CSNet classifier context. |
| M_CUSTOM | Specifies a custom classifier context. This is a predefined classifier for image classification (M_CLASSIFICATION), segmentation (M_SEGMENTATION), or object detection (M_OBJECT_DETECTION). |
| M_FCNET_COLOR_XL | Specifies an extra large legacy FCNet classifier context that is for color images. |
| M_FCNET_M | Specifies a medium legacy FCNet classifier context. |
| M_FCNET_MONO_XL | Specifies an extra large legacy FCNet classifier context that is for monochrome images. |
| M_FCNET_S | Specifies a small legacy FCNet classifier context. |
| M_FCNET_XL | Specifies an extra large legacy FCNet classifier context. |
| M_ICNET_COLOR_XL | Specifies an extra large ICNet classifier context that is for color images. |
| M_ICNET_M | Specifies a medium ICNet classifier context. |
| M_ICNET_MONO_XL | Specifies an extra large ICNet classifier context that is for monochrome images. |
| M_ICNET_S | Specifies a small ICNet classifier context. |
| M_ICNET_XL | Specifies an extra large ICNet classifier context. |
| M_ODNET | Specifies an ODNet classifier context. |
| M_USER_ONNX | Specifies an ONNX classifier context. |

M_PREDICT_INFO

Retrieves whether the entry contains results from the specified task (TaskType). For example, if your task is segmentation, you will get M_TRUE if the entry contains segmentation results and M_FALSE if it does not. When using M_PREDICT_INFO, you must specify a specific task (you cannot set the TaskType parameter to M_AUTO or M_DEFAULT). If you never called MclassTrain or MclassPredict with the specified dataset (DatasetContextClassId), M_PREDICT_INFO always returns M_FALSE.

| Value | Description |
|---|---|
| M_FALSE | Specifies that the entry does not contain results from the specified task (M_CLASSIFICATION, M_SEGMENTATION, M_OBJECT_DETECTION, or M_ANOMALY_DETECTION). |
| M_TRUE | Specifies that the entry contains results from the specified task (M_CLASSIFICATION, M_SEGMENTATION, M_OBJECT_DETECTION, or M_ANOMALY_DETECTION). |

M_PREDICTION_SIZE_X

Retrieves the number of class scores available for the entry, along the X-direction.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the number of class scores available for the entry, along the X-direction. For image classification (which uses the entire entry image), M_PREDICTION_SIZE_X always returns 1. |

M_PREDICTION_SIZE_Y

Retrieves the number of class scores available for the entry, along the Y-direction.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the number of class scores available for the entry, along the Y-direction. For image classification (which uses the entire entry image), M_PREDICTION_SIZE_Y always returns 1. |

M_RECEPTIVE_FIELD_OFFSET_X

Retrieves the offset along the X-axis, from the top-left corner of the target image, needed to place the first class score at the center of its receptive field.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the receptive field offset along the X-axis. |

M_RECEPTIVE_FIELD_OFFSET_Y

Retrieves the offset along the Y-axis, from the top-left corner of the target image, needed to place the first class score at the center of its receptive field.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the receptive field offset along the Y-axis. |

M_RECEPTIVE_FIELD_SIZE_X

Retrieves the size of the receptive field along the X-axis.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the size of the receptive field along the X-axis. |

M_RECEPTIVE_FIELD_SIZE_Y

Retrieves the size of the receptive field along the Y-axis.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the size of the receptive field along the Y-axis. |

M_RECEPTIVE_FIELD_STRIDE_X

Retrieves the stride (spacing) between receptive field centers in the target image, along the X-axis.

| Value | Description |
|---|---|
| Value >= 0.0 | Specifies the stride between receptive fields along the X-axis. |

M_RECEPTIVE_FIELD_STRIDE_Y

Retrieves the stride (spacing) between receptive field centers in the target image, along the Y-axis.

| Value | Description |
|---|---|
| Value >= 0.0 | Specifies the stride between receptive fields along the Y-axis. |

For retrieving general or class-specific entry results

To retrieve general or class-specific results from an entry in the images dataset (DatasetContextClassId), the ResultType parameter can be set to one of the following values. In this case, you can set the LabelOrIndex parameter to M_GENERAL or a specific class. Unless otherwise specified, you can retrieve results for any task (TaskType) or dataset (images or features).


M_NUMBER_OF_INSTANCES

Retrieves the number of instances. M_NUMBER_OF_INSTANCES is only available for M_OBJECT_DETECTION tasks.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the number of instances. |

M_NUMBER_OF_PREDICTIONS

Retrieves the total number of class scores. For general results, the total number of class scores in an images dataset corresponds to M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y * M_NUMBER_OF_CLASSES. The total number of scores in a features dataset is equal to the number of classes. For class-specific results, the total number of scores in an images dataset corresponds to M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y. The total number of scores for a features dataset is one. The class score data that gets returned is organized planar-wise. Specifically, the first batch of M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y scores are for the first class, and the subsequent batches are for the remaining classes.

| Value | Description |
|---|---|
| Value >= 0 | Specifies the total number of class scores. |

M_PREDICTION_SCORES

Retrieves the score of every class. Scores are returned in an array; the number of returned scores is retrievable with M_NUMBER_OF_PREDICTIONS.

For general results, the returned values are organized planar-wise in a 3D volume of size M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y * M_NUMBER_OF_CLASSES. Each band contains the scores of one given class. For example, the index in the returned array of the score of class C at pixel (X, Y) is: Index = (C * M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y) + (Y * M_PREDICTION_SIZE_X) + X.

For class-specific results, the returned values are organized in a vector that corresponds to a 2D volume of size M_PREDICTION_SIZE_X * M_PREDICTION_SIZE_Y. For example, the index in the returned array of the score at location (X, Y) is: Index = (Y * M_PREDICTION_SIZE_X) + X.

| Value | Description |
|---|---|
| 0.0 <= Value <= 100.0 | Specifies all of the class scores. |
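The planar indexing described above can be written out directly. This is a sketch; `PredictionScoreIndex` is a hypothetical helper that mirrors the Index formula, not a library function.

```c
#include <stddef.h>

/* Index of the score of class C at pixel (X, Y) in the planar array
   returned by M_PREDICTION_SCORES: all SizeX * SizeY scores of class 0
   come first, then class 1, and so on. */
size_t PredictionScoreIndex(size_t C, size_t X, size_t Y,
                            size_t SizeX, size_t SizeY)
{
   return (C * SizeX * SizeY) + (Y * SizeX) + X;
}
```

For class-specific results, the same expression with C = 0 reduces to Index = (Y * SizeX) + X.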

For retrieving results for a specific class, a specific object detection instance, or all instances (images dataset)

To retrieve results from a specific class, a specific object detection instance, or all instances from an entry in an images dataset (DatasetContextClassId), the ResultType parameter can be set to one of the following values. In this case, you must set the LabelOrIndex parameter to M_CLASS_INDEX(), M_INSTANCE_INDEX(), or M_ALL_INSTANCES and you must set the TaskType parameter to M_OBJECT_DETECTION.


M_BOX_4_CORNERS

Retrieves an array containing the coordinates of the four corners of the specified bounding boxes. The 8 coordinates of the first box (b1) are grouped together, followed by the 8 coordinates of the second box (b2), and so on for all specified boxes, in the order b1x1, b1y1, b1x2, b1y2, b1x3, b1y3, b1x4, b1y4, b2x1, b2y1, b2x2, b2y2, b2x3, b2y3, b2x4, b2y4, etc. The size of the array is equal to the number of specified instances (M_NUMBER_OF_INSTANCES) multiplied by 8.

[Image: MclassBox4Corners.png]
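The corner layout can be sketched as a small accessor. This is a self-contained C illustration; `GetBoxCorner` is a hypothetical helper showing the 8-values-per-box ordering, not a library function.

```c
#include <stddef.h>

/* The M_BOX_4_CORNERS array stores 8 values per box, in the order
   x1, y1, x2, y2, x3, y3, x4, y4 for box 0, then box 1, and so on.
   Fetch corner `Corner` (0..3) of box `Box`. */
void GetBoxCorner(const double *Corners, size_t Box, size_t Corner,
                  double *Xp, double *Yp)
{
   *Xp = Corners[(Box * 8) + (Corner * 2)];
   *Yp = Corners[(Box * 8) + (Corner * 2) + 1];
}
```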


M_CENTER_X

Retrieves the X-coordinates of the centers of the specified bounding boxes. When retrieving results for all instances, or a specific class, the X-coordinates are returned in an array. The size of the array can be retrieved with M_NUMBER_OF_INSTANCES. Only one coordinate will be returned if you are retrieving results for a specific instance.


M_CENTER_Y

Retrieves the Y-coordinates for the centers of the specified bounding boxes. When retrieving results for all instances, or a specific class, the Y-coordinates are returned in an array. The size of the array can be retrieved with M_NUMBER_OF_INSTANCES. Only one coordinate will be returned if you are retrieving results for a specific instance.


M_HEIGHT

Retrieves the heights of the specified bounding boxes. When retrieving results for all instances, or a specific class, the heights are returned in an array. The size of the array can be retrieved with M_NUMBER_OF_INSTANCES. Only one height will be returned if you are retrieving results for a specific instance.


M_WIDTH

Retrieves the widths of the specified bounding boxes. When retrieving results for all instances, or a specific class, the widths are returned in an array. The size of the array can be retrieved with M_NUMBER_OF_INSTANCES. Only one width will be returned if you are retrieving results for a specific instance.

Combination Constants — For determining the required number of elements in the array (array size)

Optional, cannot be used alone.

Usage: You can add the following value to the above-mentioned values to determine the required number of elements in the array (array size).

M_NB_ELEMENTS

Retrieves the required array size (number of elements) to store the returned values.

Combination Constants — For determining whether results are available to be returned

Optional.

Usage: You can add the following value to the above-mentioned values to determine whether a result is available to be returned.

M_AVAILABLE

Retrieves whether the requested result type is available for retrieval.

| Value | Description |
|---|---|
| M_FALSE | Specifies that the requested result type is not available. |
| M_TRUE | Specifies that the requested result type is available. |

Combination Constants — For specifying the data type

Optional.

Usage: You can add one of the following values to the above-mentioned values to cast the requested results to the required data type.

M_TYPE_AIL_DOUBLE

Casts the requested results to an AIL_DOUBLE.

M_TYPE_AIL_ID

Casts the requested results to an AIL_ID.

M_TYPE_AIL_INT

Casts the requested results to an AIL_INT.

M_TYPE_AIL_INT32

Casts the requested results to an AIL_INT32.

M_TYPE_AIL_INT64

Casts the requested results to an AIL_INT64.

This ResultType value is unavailable if your dataset results (DatasetContextClassId) were produced using MclassPredict with an ONNX classifier (that is, calling MclassInquire with M_CLASSIFIER_PREDEFINED_TYPE must not return M_USER_ONNX).

To calculate the center of the receptive field of each result returned for M_PREDICTION_SCORES at position (Y, X) in the target image, use the following formula: (X * M_RECEPTIVE_FIELD_STRIDE_X + M_RECEPTIVE_FIELD_OFFSET_X, Y * M_RECEPTIVE_FIELD_STRIDE_Y + M_RECEPTIVE_FIELD_OFFSET_Y).
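The formula above can be written out as follows. This is a sketch; `ReceptiveFieldCenter` is a hypothetical helper, and in practice the stride and offset values would come from the M_RECEPTIVE_FIELD_* results.

```c
/* Center of the receptive field of the score at grid position (X, Y):
   Cx = X * StrideX + OffsetX, Cy = Y * StrideY + OffsetY. */
void ReceptiveFieldCenter(double X, double Y,
                          double StrideX, double StrideY,
                          double OffsetX, double OffsetY,
                          double *Cx, double *Cy)
{
   *Cx = X * StrideX + OffsetX;
   *Cy = Y * StrideY + OffsetY;
}
```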

This is a predefined classifier for image classification (M_CLASSIFICATION).

This is a predefined classifier for segmentation (M_SEGMENTATION).

This is a predefined classifier for image classification (M_CLASSIFICATION) or segmentation (M_SEGMENTATION).

Copyright © 2026 Zebra Technologies.