```
Recognition {
  SFFloat maxRange       100    # [0, inf)
  SFInt32 maxObjects     -1     # {-1, [0, inf)}
  SFInt32 occlusion      1      # {0, 1, 2}
  SFColor frameColor     1 0 0  # any color
  SFInt32 frameThickness 1      # [0, inf)
  SFBool  segmentation   FALSE  # {TRUE, FALSE}
}
```
The Recognition node provides a Camera device with object recognition capability.
When a Camera device has a Recognition node in its `recognition` field, it is able to recognize which objects are present in the camera image.
Only Solid nodes whose `recognitionColors` field is not empty can be recognized by the camera.
Defining the `boundingObject` of the Solid may help compute a more precise, tighter-fitting recognized size.
Additionally, the Recognition node provides segmentation functionality to generate segmentation ground-truth images displaying the recognized objects.
In the segmentation image, each pixel is colored using the first item of the `recognitionColors` field of the corresponding object rendered by the Camera device.
The segmentation image can be used as ground-truth data, i.e. validated data, since it classifies exactly the recognized objects.
An example of a segmentation image is shown in the following figure: on the left is the Camera image and on the right the corresponding segmentation image.
The pixels corresponding to the cereal boxes, which have an empty `recognitionColors` field, and to the background are not classified and are rendered in black.
%figure "Recognition Segmentation Image"
%end
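The per-pixel coloring rule described above can be sketched in plain Python. This is an illustrative model, not the Webots implementation: `segmentation_image`, `hit_map`, and `recognition_colors` are hypothetical names, and the "hit map" of object ids per pixel stands in for the renderer's output.

```python
# Sketch of the segmentation ground-truth rule: each pixel takes the FIRST
# entry of the hit object's recognitionColors; pixels hitting nothing, or an
# object with an empty recognitionColors list, stay black (unclassified).

BLACK = (0, 0, 0)

def segmentation_image(hit_map, recognition_colors):
    """hit_map: 2D list of object ids (or None) per pixel.
    recognition_colors: dict mapping object id -> list of RGB tuples."""
    image = []
    for row in hit_map:
        out_row = []
        for obj_id in row:
            colors = recognition_colors.get(obj_id, [])
            # First recognitionColors entry wins; empty list -> unclassified.
            out_row.append(colors[0] if colors else BLACK)
        image.append(out_row)
    return image

hits = [[1, 1, None],
        [2, None, 3]]
# Object 3 has an empty recognitionColors field, so it renders black,
# like the cereal boxes in the figure above.
colors = {1: [(255, 0, 0)], 2: [(0, 255, 0), (0, 0, 255)], 3: []}
print(segmentation_image(hits, colors))
```

Only the first color of `recognitionColors` matters for segmentation, which is why object 2's second color is ignored in the sketch.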
> **Note**: A known limitation of the object recognition functionality applies to large objects, such as floors, that extend all around the device: they might be only partially detected.
- The `maxRange` field defines the maximum distance at which an object can be recognized. Objects farther than `maxRange` are not recognized.

- The `maxObjects` field defines the maximum number of objects detected by the camera. `-1` means no limit. If more objects are visible to the camera, only the `maxObjects` biggest ones (considering pixel size) are recognized.

- The `occlusion` field defines if occlusions between the camera and the object should be checked and the accuracy that will be used. If the `occlusion` field is set to `0`, then the occlusion computation will be disabled. Disabling the occlusion can be useful to allow the camera to see through thin or transparent objects that may hide the object we are interested in, but it can lead to recognized objects that are not really visible to the camera. If the `occlusion` field is set to `1`, only the center of the object is taken into account to compute if the object is visible or not. Otherwise, if the accuracy is set to `2`, the outbound of the object is used to compute if the object is visible. Note that increasing the `occlusion` field value decreases the simulation speed.

- The `frameColor` field defines the color used to frame the objects recognized by the camera in its overlay.

- The `frameThickness` field defines the thickness in pixels of the frames in the camera overlay. `0` means no object frame in the camera overlay.

- The `segmentation` field defines if a segmentation ground-truth image is generated based on the `Solid.recognitionColors` field value. The background and objects with an empty `recognitionColors` field are rendered in black.
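How `maxRange` and `maxObjects` jointly constrain the recognized set can be sketched as follows. This is a hypothetical helper, not the Webots API: candidates beyond `maxRange` are dropped first, and if more than `maxObjects` remain, only the biggest ones by pixel size are kept (`-1` meaning no limit).

```python
# Sketch (hypothetical helper, not the Webots API) of the maxRange and
# maxObjects field semantics described above.

def recognize(candidates, max_range, max_objects):
    """candidates: list of (name, distance, pixel_size) tuples."""
    # Objects farther than maxRange are never recognized.
    in_range = [c for c in candidates if c[1] <= max_range]
    if max_objects == -1:  # -1 means no limit on the number of objects
        return in_range
    # Otherwise keep only the max_objects biggest ones by pixel size.
    return sorted(in_range, key=lambda c: c[2], reverse=True)[:max_objects]

objects = [("can", 2.0, 400), ("box", 5.0, 900), ("ball", 150.0, 50)]
print(recognize(objects, 100, -1))  # "ball" is beyond maxRange
print(recognize(objects, 100, 1))   # only the biggest remaining object
```

With the defaults (`maxRange 100`, `maxObjects -1`), every in-range object with a non-empty `recognitionColors` field is reported.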
