Release date: 22.10.2020
Release info: General Release
This is a major update for VisionLib which adds:

- ARFoundation integration. Learn more about this feature here: AR Foundation Support (Beta).
- The Plane Constrained Mode, demonstrated in the AdvancedModelTracking scene. To use it in a custom scene, add the VLPlaneConstrainedMode component to a GameObject (see the sketch below).
- A new VLTrackingConfiguration component in Unity to hold the most important file references and to enable the use of URIs.

Thank you to all our users for giving feedback.
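As a minimal sketch of the component setup mentioned above: the VLPlaneConstrainedMode component can be added to a GameObject in the Inspector or from script. The script below assumes only that VLPlaneConstrainedMode is a regular Unity component, as stated in the list; any further configuration of the mode is not shown.

    using UnityEngine;

    // Attaches the VLPlaneConstrainedMode component at startup, which is
    // equivalent to adding it manually in the Inspector.
    public class PlaneConstrainedModeSetup : MonoBehaviour
    {
        void Awake()
        {
            if (GetComponent<VLPlaneConstrainedMode>() == null)
            {
                gameObject.AddComponent<VLPlaneConstrainedMode>();
            }
        }
    }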
- Added the ARFoundationModelTracking example scene to the Experimental package.
- Added the UEyeCameraTracking example scene, which includes the possibility to calibrate the camera and to adjust some camera settings.
- Added the possibility to enable and disable the Plane Constrained Mode in the AdvancedModelTracking scene while using external SLAM.
- Added Constrain to Plane and Disable Constraint to the HoloLens example scenes to enable and disable the Plane Constrained Mode.
- The tracking configuration can now be set in the VLTrackingConfiguration component via a URI. This includes the possibility to use schemes and queries.
- Added license and calibration file references to the VLTrackingConfiguration component. They can also be set using a URI.
- Added the project-dir scheme.
- Reset Init Data will now clear all re-initialization data, not only the static one.
- Fixed a VL_ERROR_LICENSE_INVALID_FEATURE issue when trying to use a device of type video.
- Fixed an issue where project_dir wasn't set correctly when the query string contained / or \.
- Added the streaming-assets-dir scheme, which can be used to set the URI of the tracking configuration, license file or calibration relative to the StreamingAssets folder. Find more information on URIs with schemes here: File Access. Example URIs are sketched below.
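The following sketch only illustrates how such URIs look. The scheme prefixes (streaming-assets-dir:, project-dir:) are the ones named in these notes; the file name and the query parameter are hypothetical placeholders, not shipped VisionLib assets or documented parameters.

    // Hypothetical example URIs; only the scheme prefixes come from this
    // release, the file name and query parameter are placeholders.
    public static class TrackingUriExamples
    {
        // Resolved relative to the StreamingAssets folder.
        public const string FromStreamingAssets =
            "streaming-assets-dir:MyTrackingConfig.vl";

        // Resolved relative to the project directory.
        public const string FromProjectDir =
            "project-dir:MyTrackingConfig.vl";

        // A query can be appended to a URI.
        public const string WithQuery =
            "project-dir:MyTrackingConfig.vl?myParameter=true";
    }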
- VLTrackingAnchors, which are used in multi-model tracking, can now be enabled and disabled at runtime.
- Added a smoothTime parameter to VLTrackingAnchor to smoothly update anchor transforms (i.e. prevent or reduce jumping); see the sketch below.
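A hedged sketch of both anchor changes, assuming VLTrackingAnchor is a regular MonoBehaviour and that smoothTime is exposed as a public field; the exact member layout is not documented here and may differ.

    using UnityEngine;

    public class AnchorRuntimeControl : MonoBehaviour
    {
        public VLTrackingAnchor anchor;

        void Start()
        {
            // Assumption: smoothTime is a public field (in seconds) used to
            // smooth anchor transform updates and reduce jumping.
            anchor.smoothTime = 0.1f;
        }

        // Enable or disable the anchor at runtime, e.g. from a UI button.
        public void SetAnchorEnabled(bool isEnabled)
        {
            anchor.enabled = isEnabled;
        }
    }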
- The SimpleCameraCalibration scene now provides a Start and a Stop button and directly logs the calibration results to the console when pressing Calibrate.
- The deviceInfo is no longer logged. Instead, the selected camera's name, deviceID and its available formats are logged.
- Fixed an issue with the VLTrackingConfiguration at runtime while Auto Start Tracking is set.
- Fixed an issue where the VLBackgroundImage ended up in the wrong scene when loading a VisionLib scene additively.
- When using the resolution high on iOS with ARKit SLAM, the display image will no longer be scaled down.
- Added the possibility to set the resolution (e.g. 640x480 or high) inside the camera section of the tracking configuration on Windows. For details see Configuration File Reference.
- Changed the transformation format in the result of getModelData from arrays to objects. Before: {"s": [1, 1, 1], "t": [0, 0, 0], "r": [0, 0, 0, 1]} Now: {"s": {"x": 1, "y": 1, "z": 1}, "t": {"x": 0, "y": 0, "z": 0}, "r": {"x": 0, "y": 0, "z": 0, "w": 1}} Note: This change allows direct deserialization into Unity Vector and Quaternion objects; see the sketch below.
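A minimal sketch of the deserialization mentioned in the note above, using Unity's JsonUtility. The wrapper class simply mirrors the keys s, t and r of the object shown above; the surrounding structure of the getModelData result is not shown here.

    using UnityEngine;

    // Mirrors the transform object shown above: scale "s", translation "t",
    // rotation "r". JsonUtility maps {"x": ..., "y": ..., "z": ...} (and "w")
    // directly onto Vector3 and Quaternion fields.
    [System.Serializable]
    public class ModelTransformData
    {
        public Vector3 s;
        public Vector3 t;
        public Quaternion r;
    }

    public static class ModelTransformParsing
    {
        // json is expected to be a single transform object, e.g.
        // {"s": {"x":1,"y":1,"z":1}, "t": {"x":0,"y":0,"z":0},
        //  "r": {"x":0,"y":0,"z":0,"w":1}}
        public static ModelTransformData Parse(string json)
        {
            return JsonUtility.FromJson<ModelTransformData>(json);
        }
    }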
- File paths set in the VLWorkerBehaviour can now contain schemes, e.g. local-storage-dir: or streaming-assets-dir:.
- Removed the VLWorkerBehaviour.baseDir parameter in the VLCamera and the VLHoloLensTracker.
- Use VLIssues instead of VLTrackingIssues.
- Use OnIssuesTriggered instead of OnTrackerInitializedWithIssues. This event will only be triggered if an issue occurred.
- Use the corresponding functions of VLWorker instead of VLAbstractApplicationWrapper.
- Renamed the LogLevel Log to Mute. While using LogLevel Mute, VisionLib will not log any message at all.
- Renamed VLInitCameraBehaviour.overwriteOnLoad to VLInitCameraBehaviour.usePoseFromTrackingConfig.
- VLUnitySdk.GetVersionString, VLUnitySdk.GetVersionHashString, and VLUnitySdk.GetVersionTimestampString now directly return the requested string and throw an InvalidOperationException on failure; see the sketch below.
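A hedged usage sketch for the changed version functions. Only the function names and the exception type come from the entry above; the parameterless call signature is an assumption.

    using System;
    using UnityEngine;

    public class VisionLibVersionLogger : MonoBehaviour
    {
        void Start()
        {
            try
            {
                // Assumed parameterless calls that directly return the string.
                Debug.Log("VisionLib version: " + VLUnitySdk.GetVersionString());
                Debug.Log("Version hash: " + VLUnitySdk.GetVersionHashString());
                Debug.Log("Version timestamp: " + VLUnitySdk.GetVersionTimestampString());
            }
            catch (InvalidOperationException e)
            {
                // Thrown on failure, as described above.
                Debug.LogWarning("Could not query the VisionLib version: " + e.Message);
            }
        }
    }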
- The CameraCalibrationExampleBehaviour will now make use of a VLTrackingConfiguration in the scene. If you have used this behaviour in a custom scene, please add a VLTrackingConfiguration component to the hierarchy and reference your tracking configuration file in its public field to upgrade your scene. Check the Use Input Selection and Auto Start Tracking options to achieve the previous behaviour.
- Added the vlSDKUtil_registerScheme function to programmatically create schemes at runtime.
- Renamed VLTrackingIssues to VLIssues and onTrackerInitializedWithWarningIssues to onIssuesTriggered in the Objective-C interface.
- vlAbstractApplicationWrapper is no longer required. Please create the vlWorker using a nullptr instead of a vlAbstractApplication pointer.
- For vlAbstractApplicationWrapper related functions, please use the corresponding commands or JSON commands of vlWorker instead.
- In addition to the vlSDK::vlSDK target, the config also defines a VLSDK_FRAMEWORK_DIR variable, which contains the full path to the vlSDK.framework directory. This can be used to copy the framework into the application bundle structure.
- Renamed the VL_LOG_LOG LogLevel enum value to VL_LOG_MUTE. While using LogLevel VL_LOG_MUTE, VisionLib will not log any message at all. If vlLog is called with LogLevel Mute, the message will not be logged.
- The output of vlAbstractApplicationWrapper_GetLicenseInformation and vlWorker_GetLicenseInformation will now be indented.
- Removed vlSDKiOS.h from the Objective-C interface. If you included it before, include vlSDK_Apple.h instead.
- Added vlSimilarityTransformWrapper_t to wrap similarity transforms. Such transforms have a scale factor in addition to rotation and translation, in comparison to transforms represented by vlExtrinsicDataWrapper_t, which only rotate and translate.
- Added vlWorker_AddNodeDataSimilarityTransformListener and vlWorker_RemoveNodeDataSimilarityTransformListener to add and remove node listeners for SimilarityTransform events.
- Added vlWorker_AddWorldFromCameraTransformListener and vlWorker_RemoveWorldFromCameraTransformListener to register and unregister extrinsic data listeners for the transformation from the camera to the world coordinate system, as provided by SLAM (like ARCore, ARKit or on HoloLens). If no SLAM approach is used, we assume that the camera is stationary and the models move in the world. We therefore set the WorldFromCameraTransform to identity in this case.
- Added vlWorker_GetNodeSimilarityTransformSync and vlWorker_SetNodeSimilarityTransformSync for synchronous access to node SimilarityTransforms.
- vlWorker_SetNodeImageSync, vlWorker_SetNodeExtrinsicDataSync, vlWorker_SetNodeIntrinsicDataSync and vlWorker_SetNodeSimilarityTransformSync now also return false if the node does not exist or the key has the wrong type. They previously returned true in those cases.
- Added vlSDKUtil_getModelHash to write the model hash of the given model into a buffer. This function is useful for requesting a license for a specific model programmatically.
- Removed vlWorker_AddAnchorToWorldTransformListener and vlWorker_RemoveAnchorToWorldTransformListener. Added vlWorker_AddWorldFromAnchorTransformListener and vlWorker_RemoveWorldFromAnchorTransformListener instead. Those transformations are also from anchor to world coordinate system; however, they are represented as a vlSimilarityTransformWrapper_t.
- Added vlWorker_LoadPlugin to load a specified VisionLib plugin (e.g. VideoUEye).