
Model Tracker

Tracking configuration parameters for setting up a model tracker.

The "modelTracker" uses model-based tracking for determining the camera pose relative to a real-world object.

Configuration File Parameters

The following parameters can be set inside the tracking configuration file:

Each parameter below is listed in the form: Parameter, Type, Default value, Runtime set-/getable, Example.

modelURI string not defined YES "modelURI": "http://mysuperserver/mymodel.ply"

URI to the mandatory surface-model.

The following file formats are generally supported: 3D, 3DS, AC, ASE, B3D, BLEND, BVH, COB, CSM, DAE, DXF, FBX, GLB, GLTF, HMP, IFC, IRR, IRRMESH, LWO, LWS, LXO, MD2, MD3, MD5, MDC, MDL, MS3D, NDO, NFF, OBJ, OFF, OGEX, PK3, PLY, Q3D, Q3S, RAW, SCN, SMD, STL, TER, VTA, X, XGL, XML and ZGL.

Using FBX files can lead to problems if the file uses an internal scaling factor. Depending on the model loader you are using, the resulting 3D model may be scaled by that factor. In particular, the scaling factor is interpreted differently by our library than by Unity.

We recommend using OBJ or PLY files, because these formats are well tested, can easily be produced using MeshLab (http://meshlab.sourceforge.net/), and, in the case of PLY, store the data in binary form.

http URIs are currently not supported on UWP, which includes HoloLens. If you require this feature, please contact us.

lineModelURI string not defined NO "lineModelURI": "project-dir:/mylinemodel.obj"

URI to the line-model. If this is not defined, the line-model will be generated automatically at runtime from the surface-model. The only supported file format is OBJ.

metaioModelURI string not defined NO "metaioModelURI": "project-dir:/SurfaceModel.obj"

URI to a surface-model created using the Metaio Creator (6.0.1). The metaioModelURI can be used instead of the modelURI.

metaioLineModelURI string not defined NO "metaioLineModelURI": "project-dir:/LineModel.obj"

URI to the line-model created using Metaio Creator (6.0.1). The metaioLineModelURI can be used instead of lineModelURI.

occlusionModelURI string not defined NO "occlusionModelURI": "project-dir:/box.obj"

Optional URI to an occlusion model. The occlusion model can be used to occlude parts of the generated line-model. This can be handy if the 3D model used to generate the line-model (modelURI) contains structures which do not reliably match the real-world geometry. For example, the hubcaps of a car might always look different, because one can't control the rotation of the tires. In that case you can use a simple occlusion model to cover the problematic parts. This should improve the tracking results, because the generated line-model will not include those unreliable structures.

Supports the same file formats as the modelURI parameter.

initDataURI string not defined NO "initDataURI": "project-dir:InitData.binz"

URI to previously recorded initialization data. This allows an automatic initialization from multiple views.

lineModelOcclusionTest boolean true NO "lineModelOcclusionTest": false

If a static line-model is used (i.e. "lineModelURI" is defined), this flag can be used to turn the occlusion test on and off. The occlusion test selects only the lines of the static line-model that are visible from a certain camera view. The model defined with "modelURI" is used to perform the occlusion test.

initPose json not defined YES
"initPose": {
"type": "visionlib",
"t": [-0.1, -0.2, 6.1],
"q": [0.1, -0.1, 0.7, -0.2]
}

This defines the initial pose for the initialization phase. The initPose contains a "type" property; if it is omitted, the type "visionlib" is assumed.

A "visionlib" initPose consists of a translation $t$ (vector t) and a rotation $R$ (quaternion q). Please notice, that the transformation $(R,t)$ does not represent the position and orientation of the camera directly. Instead it represents the transformation of a 3D point $P_w$ from world coordinates into a 3D point $P_c$ in camera coordinates: $P_c = RP_w + t$. In the internal VisionLib coordinate system the x-axis points right, the y-axis points down and the z-axis points forward. Instead of specifying t and q directly in the configuration file, you can also set the "uri" property to the path of a separate JSON file (e.g. "uri": "project-dir:IninitialPose.json"), which must contain one object with t and q.

InitPose type "metaio" allows you to import an initial pose XML file created using the Metaio Creator (6.0.1) or compatible tools. This file usually is called "InitialPose.xml". Simply set the "uri" property to the path of this XML file (e.g. "uri": "project-dir:IninitialPose.xml"). Alternatively you can copy the translation (<Translation>) and rotation <Rotation> values from the external file into t and q, just like for the "visionlib" type. Please notice, that importing the "Tracking.xml" file does not work.

metric string mm NO "metric": "cm"

This should be set to the unit of measurement used by your model. Valid values are metric scales ("mm", "cm", "dm", "m" or "km") and imperial scales ("in", "ft", "yd", "ch", "fur", "ml").

useColor boolean false NO "useColor": true

This option allows VisionLib to better distinguish colored edges while tracking. It is only useful for objects whose edges actually differ in color. It can increase the tracking quality, at the cost of more memory and processing power.

keyFrameDistance number [0.001..100000.0] 100.0 YES "keyFrameDistance": 200

The minimum distance between keyframes in mm. The line-model will only be generated for certain keyframes. A higher value therefore improves the performance at the cost of a lower precision; lower values cost more performance but increase the precision.
keyFrameRotation float 20.0 YES "keyFrameRotation": 10.0

Controls the sensitivity of keyframe generation with respect to the camera rotation, in degrees. You should not change this parameter unless you know what you are doing! Lower values cost more performance but increase the precision.

laplaceThreshold number [0.0001..100000.0] 5.0 YES "laplaceThreshold": 7

Threshold in mm for generating the line-model. This value specifies the minimum depth distance between two neighboring pixels necessary to be recognized as an edge.

normalThreshold number [0.0001..1000.0] 1000.0 YES "normalThreshold": 1000.0

Threshold for generating the line-model. This value specifies the minimum normal difference between two neighboring pixels necessary to be recognized as an edge. Usually we set this threshold very high, because from our experience normal-based lines can't be recognized very reliably. It might make sense to use lower values for certain models, though.

lineGradientThreshold number [0...256] 40 YES "lineGradientThreshold": 50

Threshold for edge candidates in the image. High values will only consider pixels with high contrast as candidates. Low values will also consider other pixels. This is a trade-off. If there are too many candidates the algorithm might choose the wrong pixels. If there are not enough candidates the line-model might not stick to the object in the image.

lineSearchLengthInit (deprecated) number [3...1000] 15 YES "lineSearchLengthInit": 17

The model-based tracker projects the 3D line-model into the camera image and searches for edge pixels orthogonal to the projected lines. The "lineSearchLengthInit" specifies the length of the orthogonal search lines in pixels during the initialization phase. Please use lineSearchLengthInitRelative and lineSearchLengthTrackingRelative instead, since those parameters work independently of the camera resolution.
lineSearchLengthTracking (deprecated) number [3...1000] 15 YES

"lineSearchLengthTracking": 17

Same as "lineSearchLengthInit", but used during the tracking phase.

lineSearchLengthInitRelative number [0.00625...1.0] 0.05 YES "lineSearchLengthInitRelative": 0.07

The model-based tracker projects the 3D line-model into the camera image and searches for edge pixels orthogonal to the projected lines. The "lineSearchLengthInitRelative" specifies the length of the orthogonal search lines during the initialization phase, as a fraction of the minimum camera resolution. Please use lineSearchLengthInitRelative and lineSearchLengthTrackingRelative instead of the non-relative parameters, since they work independently of the camera resolution.
lineSearchLengthTrackingRelative number [0.00625...1.0] 0.03125 YES

"lineSearchLengthTrackingRelative": 0.03125

Same as "lineSearchLengthInitRelative", but used during the tracking phase.

minNumOfCorrespondences integer [5...100000] 50 YES "minNumOfCorrespondences": 100

The minimum number of found correspondences between the projected line-model and the edge pixels in the camera image. If there are not enough correspondences the tracking result will get marked as invalid. This is a trade-off. If the value is too high, you can't track objects which only take little space in the image (e.g. because the user is too far away from the object). If the value is too low, the algorithm might start tracking the wrong object.

maxNumOfCorrespondences integer [-1...100000] -1 NO "maxNumOfCorrespondences": -1

This option restricts the VisionLib to use a certain number of correspondences for the optimization of the pose. In scenarios where models with many edges are tracked, you can restrict the processing effort using this parameter. If you experience lag due to large models, this parameter can help increase the runtime performance. However, decreasing the parameter too much will lead to a less precise pose. If set to a value below 1 (as with the default of -1), all correspondences will be taken into account.

minInlierRatioInit float [0.1..1.0] 0.6 YES "minInlierRatioInit": 0.76

This is a quality threshold for validating the tracking result during the initialization phase. The value highly depends on your scenario. If the line-model matches the real object perfectly and there is no occlusion, a high value is recommended. However, usually the line-model doesn't perfectly match the real object, which is why a lower value might work better. Then again, if the value is too low, the algorithm might start tracking the wrong object. In our experience it is better if the "minInlierRatioInit" value is higher than the value for "minInlierRatioTracking", because it's difficult for the algorithm to recover from a bad initialization.

minInlierRatioTracking float [0.1..1.0] 0.55 YES "minInlierRatioTracking": 0.76

This is a quality threshold for validating the tracking result after the initialization phase. The value highly depends on your scenario. If the line-model matches the real object perfectly and there is no occlusion, a high value is recommended. However, usually the line-model doesn't perfectly match the real object, which is why a lower value might work better. Then again, if the value is too low, the algorithm might start tracking the wrong object. In our experience it is better if the "minInlierRatioTracking" value is lower than the value for "minInlierRatioInit", because we don't want the tracking to fail after the initialization due to effects like motion blur.
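Following this advice, a typical combination sets the initialization threshold above the tracking threshold; the concrete values below are purely illustrative:

"minInlierRatioInit": 0.7,
"minInlierRatioTracking": 0.55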

maxFramesFeaturePrediction integer [1-5000] 30 YES, except when using ARKit "maxFramesFeaturePrediction": 25

This value defines the maximum number of frames which are used for predicting the pose when the model tracking step cannot be successfully validated. The prediction is useful for fast camera movements and motion blur. The model tracking step might fail in these cases, but the prediction will provide a rough solution. The prediction is not robust and might drift away. If the maximum number of predicted frames is exceeded, the re-initialization step will be invoked.

extendibleTracking boolean false NO "extendibleTracking": true

If this value is set to true, the model-based tracking will be extended with SLAM-based tracking. This allows you to continue tracking even if the model isn't visible in the camera image anymore. The user needs to perform a SLAM dance, i.e. translate and rotate the camera so that there is enough baseline for the feature reconstruction. NOTE: On iOS devices with A9 or newer processors, ARKit will be used for returning the SLAM pose; on Android devices, ARCore will be used.

disableExternalSLAM boolean false NO "disableExternalSLAM": true

Only on iOS and Android devices: if extendibleTracking has been set to true, ARKit/ARCore will be used on these systems. If you want to stick to the internal VisionLib SLAM solution, or want to read image sequences, you can turn this option on. It has no effect on platforms other than iOS and Android.

legacyCameraMode boolean false NO "legacyCameraMode": true

Only on Android devices: if set, VisionLib will not use ARCore to acquire images, even if it is present. The intrinsic calibration will not be retrieved from ARCore. Use this if ARCore is present but the performance of the device is too limited to use it together with VisionLib tracking.

minCornerness integer 25 NO "minCornerness": 30

Minimum cornerness for the feature detector of the SLAM module. This parameter is only used when extendibleTracking is set to true.

minFeatureDistance integer [1..100] 10 NO "minFeatureDistance": 30

Minimum distance of neighboring features for the feature detector of the SLAM module. With this parameter the number of used features can be controlled. This parameter is only used when extendibleTracking is set to true.

minTriangulationAngle float [1..50] 5.0 NO "minTriangulationAngle": 10

Minimum angle in degrees for the triangulation of feature points during feature reconstruction. This parameter is only used when extendibleTracking is set to true.

usePoseFiltering boolean false NO "usePoseFiltering": true

This option enables the temporal filtering of the final result of the estimated camera pose. Pose filtering can be useful if a very smooth camera pose is desired. A drawback of the filtering might be an observable lag between the camera image and the camera pose.

poseFilteringSmoothness float [0.001..1000] 0.25 NO "poseFilteringSmoothness": 0.5

This value defines the smoothness of the pose filter. Lower values make the filter smoother; higher values reduce the filter's lag.

debugLevel integer [0..1] 0 NO "debugLevel": 1

This value specifies the amount of visual output for debugging purposes. Debug level 0 generates no debug information at all; this mode is faster and should be used for a final release. Debug level 1 produces some images for visualizing several pieces of debug information. Debug level 2 is a mode for internal debugging purposes; only use it if you know what you want to do with it. Enabling this feature can significantly harm the performance of the tracking pipeline.

showLineModel boolean false YES "showLineModel": true

This option allows you to draw the line-model into the camera image.

The line-model will get drawn during all tracking states. If you need more fine-grained control, take a look at the showLineModelTracked, showLineModelCritical and showLineModelLost parameters.

The showLineModelColor, showLineModelTrackedColor, showLineModelCriticalColor and showLineModelLostColor parameters can be used to change the color used for drawing the line-model.

Please note that the result might not be visually appealing and glitched graphics should be expected. Therefore, providing your own visualization should be preferred.

showLineModelColor array not defined NO "showLineModelColor": [0, 0, 255]

Color used for drawing the line-model, if any of the showLineModel parameters was set to true.

This will use the same color for all tracking states. If you need more fine-grained control, then take a look at the showLineModelTrackedColor, showLineModelCriticalColor and showLineModelLostColor parameters.

This parameter must be an array with three numeric values between 0 and 255. The first value represents the red-component, the second value the green-component and the third value the blue-component ([red, green, blue]).

showLineModelTracked boolean false YES "showLineModelTracked": true

This option allows you to draw the line-model into the camera image while the object is tracked successfully.

Please note that the result might not be visually appealing and glitched graphics should be expected. Therefore, providing your own visualization should be preferred.

showLineModelTrackedInvalid boolean false NO "showLineModelTrackedInvalid": true

This option is for debugging your generated lines and how well they roughly fit your model. When enabled and a model is tracked, lines that (by a rough estimate) cannot be tracked will be displayed in the critical color.

You can use this to identify model areas that are not suitable for tracking. It is recommended to turn this feature off in production use.

showLineModelTrackedColor array [0, 255, 0] NO "showLineModelTrackedColor": [0, 0, 255]

Color used for drawing the line-model while the object is tracked successfully (showLineModel or showLineModelTracked must be set to true).

This parameter must be an array with three numeric values between 0 and 255. The first value represents the red-component, the second value the green-component and the third value the blue-component ([red, green, blue]).

showLineModelCritical boolean false YES "showLineModelCritical": true

This option allows you to draw the line-model into the camera image while the tracking is critical.

Please note that the result might not be visually appealing and glitched graphics should be expected. Therefore, providing your own visualization should be preferred.

showLineModelCriticalColor array [255, 255, 0] NO "showLineModelCriticalColor": [0, 255, 255]

Color used for drawing the line-model while the tracking is critical (showLineModel or showLineModelCritical must be set to true).

This parameter must be an array with three numeric values between 0 and 255. The first value represents the red-component, the second value the green-component and the third value the blue-component ([red, green, blue]).

showLineModelLost boolean false YES "showLineModelLost": true

This option allows you to draw the line-model into the camera image while the tracked object is lost.

Please note that the result might not be visually appealing and glitched graphics should be expected. Therefore, providing your own visualization should be preferred.

showLineModelLostColor array [255, 0, 0] NO "showLineModelLostColor": [255, 0, 255]

Color used for drawing the line-model while the tracked object is lost (showLineModel or showLineModelLost must be set to true).

This parameter must be an array with three numeric values between 0 and 255. The first value represents the red-component, the second value the green-component and the third value the blue-component ([red, green, blue]).

synchronous boolean false NO "synchronous": true

This parameter exists ONLY FOR TESTING PURPOSES. Don't set it to true unless you really know what you are doing! Usually the tracking utilizes multiple threads. If this parameter is set to true, the whole tracking process will run in a single thread. This will reduce the performance, but in combination with the synchronous worker interface it allows you to get deterministic tracking results. This is useful if you want to get the same tracking results for a sequence of images independent of the current processor and GPU utilization.

enableTorch boolean false YES "enableTorch": true

If this value is set to true, the light (torch) of the camera will be enabled (if available). For now, this function can only be used on iOS.

lineModelBufferSize integer 10 YES "lineModelBufferSize": 100

This value defines the buffer size for caching line models. The generated hypotheses are cached in this structure and can be serialized when lineModelPersistence is enabled.

lineModelGeneration boolean true YES "lineModelGeneration": true

Controls the generation of the hypotheses used for tracking. If disabled, you need to have loaded or previously generated hypotheses in memory in order to allow tracking. Please keep in mind that hypotheses are position dependent and might change for every viewpoint with respect to the tracked model.

lineModelRenderSize integer 0 NO "lineModelRenderSize": 2048

Resolution (width and height) in pixel of the internally rendered hypothesis image. Set to 0 (default) to use the resolution of the tracking image. Using a high resolution (e.g. 2048) will result in more detailed line models. This is especially noticeable in regions with many small details in close proximity. With a low resolution those details would simply get interpreted as noise and will get filtered out. Using a high resolution is a trade-off, because it might improve the line model appearance, but it will deteriorate the performance at the same time. Using a low resolution (e.g. 512) will result in worse line models, but with improved efficiency.

lineModelPersistence boolean false NO "lineModelPersistence": true

When enabled, the writeInitData and readInitData commands additionally try to write/read the generated hypotheses along with the init data. In conjunction with the lineModelGeneration parameter and the lineModelBufferSize, you can pre-load/learn line models for the tracked object without the need to generate hypotheses on the fly.
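For instance, a configuration for pre-learning and persisting line models might combine these parameters as follows (the values are illustrative, not recommendations):

"lineModelGeneration": true,
"lineModelBufferSize": 100,
"lineModelPersistence": true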

learnTemplates boolean true YES "learnTemplates": false

Controls whether the algorithm learns new init data at runtime.

staticScene boolean false NO "staticScene": false

When tracking unchangeable (static) scenes on devices with additional SLAM capabilities (iOS with extendibleTracking, or HoloLens), it can be useful to turn this flag on. This knowledge helps VisionLib stabilize the tracking.

simulateExternalSLAM boolean false NO "simulateExternalSLAM": true

If you have recorded an image sequence along with extendibleTracking, it is possible to simulate the whole sequence on a desktop machine. Enabling this flag replays the device recording through the exact same pipeline that would run on the device itself. However, this applies only to tracking; additional data, like detected planes and ARWorldMap data, cannot be used.

constraint (type: "1DRotation") object undefined YES*

"constraint":
{
"type": "1DRotation",
"parameters": {
"up_world": {
"x": 0,
"y": 1,
"z": 0
},
"up_model": {
"x": 0,
"y": 1,
"z": 0
},
"center_model": {
"x": 0,
"y": 0,
"z": 0
}
}
}

Makes sure that the up vector of the model and the up vector of the world are aligned.
Enabling this constraint will lead to any tracking result transformation CameraFromModel fulfilling up_world = WorldFromCamera * CameraFromModel * up_model, where WorldFromCamera is the inverse SLAM transformation.
When the model is initially rotated to fulfill the constraint, it will be rotated around center_model.
Note: This feature can only be used when external SLAM is available or simulateExternalSLAM is switched on.
The constraint can also be set or updated at runtime with the command set1DRotationConstraint. The param of this command is the same as the content of parameters above. To disable the constraint at runtime, you can use the command disableConstraints without any param.
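A runtime update might then look like the sketch below. The param content mirrors the parameters object from above; the {"name", "param"} envelope is our assumption about the worker's JSON command format and may differ in your SDK version:

{
    "name": "set1DRotationConstraint",
    "param": {
        "up_world": { "x": 0, "y": 1, "z": 0 },
        "up_model": { "x": 0, "y": 1, "z": 0 },
        "center_model": { "x": 0, "y": 0, "z": 0 }
    }
}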

You can find some additional parameters in the image recorder configuration description. These allow you to record an image sequence on your device.

Runtime parameters

Of the parameters mentioned above, the following can also be accessed at runtime using the "setAttribute" and "getAttribute" JSON commands of the Worker interface, or by using the VLRuntimeParameters_ModelTracker_v1 prefab in Unity:

  • keyFrameDistance
  • keyFrameRotation
  • laplaceThreshold: Changing this will only have an effect for future keyframes.
  • normalThreshold: Changing this will only have an effect for future keyframes.
  • lineSearchLengthInit
  • lineSearchLengthTracking
  • lineSearchLengthInitRelative
  • lineSearchLengthTrackingRelative
  • minInlierRatioInit
  • minInlierRatioTracking
  • ...

The "setInitPose" and "getInitPose" JSON commands can be used to change "initPose" as well.