The following example project demonstrates the general usage of the plugin by setting up model-based tracking using an iPhone.
Please have an iPhone (or any other object) and its 3D model at hand. A webcam is also required and should be connected to your PC before starting Unity.
Be sure to have the vlUnitySDK.unitypackage
file, which contains the Unity VisionLib SDK, and a valid license.xml
file at hand.
You will also need Unity 2018.4 LTS or higher (we recommend using 2018.4 LTS).
First, start Unity. In the launcher window, click NEW in the upper right, name your project, and select its location in your file system. Remember this location, as you will need it later. Make sure that 3D is selected as the template. Click Create project at the bottom right.
Click on Assets in Unity's top menu bar, followed by Import Package and Custom Package.... In the pop-up window, look for a file named vlUnitySDK.unitypackage and open it.
Right-click on Main Camera in the Hierarchy panel on the left and select Delete. Then navigate in the Project panel to the VisionLib/Utilities/Prefabs/Camera directory. Select the VLCamera and VLInitCamera prefabs and drag-and-drop them into the Hierarchy panel on the left.
Note: You can use the VLCamera as the main camera in your scene and set its transform freely in edit mode. Just note that its transformation will be automatically overwritten during runtime.
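If you want to convince yourself of this behavior, you can attach a small helper to the VLCamera that prints its pose every frame while the scene is running. This is only an illustrative sketch using standard Unity calls; the class name is made up for this tutorial:

using UnityEngine;

// Illustrative helper: attach to the VLCamera to observe that its pose
// is overwritten by the tracking once the scene is running.
public class CameraPoseLogger : MonoBehaviour
{
    private void Update()
    {
        Debug.Log("VLCamera pose: " + transform.position + " / " + transform.rotation.eulerAngles);
    }
}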
Go to your project's directory, which you chose in the first step (e.g. via macOS Finder or Windows Explorer), and navigate to StreamingAssets/VisionLib/Examples/ModelTracking. Create a new text file and name it TutorialModelTracker.vl for this example. Make sure your operating system is set to display file extensions, so you rename the whole file name instead of only the part before the extension. Open the file in a text editor, insert the following content, then save and close it. This content describes a model-based tracker, for which you need the model file (obj, stl, ...) of the object you intend to track. For more information on the format, please refer to the Configuration File reference. For this example to work properly, you need to change the value of modelURI to your model's filename. Also, the value of the metric field may need to be adjusted to the scaling of your model file.
{ "type": "VisionLibTrackerConfig", "version": 1, "meta": { "name": "TutorialModelTracker", "description": "Simple object tracking demonstrator configuration file", "author": "VisionLib" }, "tracker": { "type": "modelTracker", "version": 1, "parameters": { "modelURI": "project_dir:TutorialModel.obj", //<--- replace with your models filename "useColor": true, "metric": "mm", //<--- use the correct scale for your model "initPose": { "t": [1.075873932e-05, -2.691710303e-05, 186.6348404], "q": [-0.71556605, -0.008785564998, 0.0007537285788, 0.6984894228] }, "keyFrameDistance": 50, "laplaceThreshold": 5, "normalThreshold": 1000, "lineGradientThreshold": 20, "lineSearchLengthTracking": 15, "minNumOfCorrespondences": 50, "minInlierRatioInit": 0.9, "minInlierRatioTracking": 0.8 } } }
Right-click on an empty space in the Project panel at the bottom (ideally in the Assets/Scripts directory), select Create and then C# Script. Create a new GameObject with the name VLTrackingStart and assign the script to it.
Open the previously created script. Obtain a reference to the VLCamera's VLWorkerBehaviour in C# by inheriting from VLWorkerReferenceBehaviour and calling InitWorkerReference() in the Start function. Start tracking via the received reference's StartTracking method, passing the name of your tracking configuration file as the only parameter (i.e. StartTracking("Examples/ModelTracking/TutorialModelTracker.vl")) in the Start method's body. Call StopTracking() in OnDestroy's body. Your script should look like the following:
using UnityEngine;

public class Tutorial : VLWorkerReferenceBehaviour
{
    private void Start()
    {
        // Obtain the reference to the VLCamera's VLWorkerBehaviour
        InitWorkerReference();
        // Start tracking with the tutorial configuration file
        workerBehaviour.StartTracking("Examples/ModelTracking/TutorialModelTracker.vl");
    }

    private void OnDestroy()
    {
        workerBehaviour.StopTracking();
    }
}
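As an optional extension, built only from the same calls used above, you could restart the tracking on a key press, for example after repositioning the object. This is just a sketch; the class name and key choice are assumptions made for this tutorial:

using UnityEngine;

// Optional sketch: same calls as above, plus a key to restart tracking.
public class TutorialRestart : VLWorkerReferenceBehaviour
{
    private const string ConfigFile = "Examples/ModelTracking/TutorialModelTracker.vl";

    private void Start()
    {
        InitWorkerReference();
        workerBehaviour.StartTracking(ConfigFile);
    }

    private void Update()
    {
        // Press R to stop and start the tracker again
        if (Input.GetKeyDown(KeyCode.R))
        {
            workerBehaviour.StopTracking();
            workerBehaviour.StartTracking(ConfigFile);
        }
    }

    private void OnDestroy()
    {
        workerBehaviour.StopTracking();
    }
}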
Return to the Unity window. The next step is to add objects to your scene. Navigate to the Models folder in the Project panel (Assets/Models), right-click on an empty space there, and import your model as a custom asset. Drag the model from the Project panel into your scene. Adjust the object's scaling as you like so it is not too small. The scene is rendered where the tracked real-world object is located in the camera stream, so it is wise to place your objects close to and above the origin (0|0|0) of the coordinate system in order to be able to see your scene.
Object tracking starts when a correspondence between the real object and the model on screen is found. The initial view defines the viewpoint from which the real-world object must be observed to initiate the tracking. The most intuitive way to configure this initial viewpoint is to use the VLInitCamera. Un-check the field Use Pose From Tracking Config of the VLWorkerBehaviour script to use the pose of the VLInitCamera in Unity instead of the values of the initPose field from the configuration file. This allows you to easily use the placement of the VLInitCamera as an initial pose for the tracking.
A good starting point is to look at the iPhone from above, along the y-axis. The camera is therefore placed above the model, with a slight offset along the z-axis because the model is not completely centered (0|400|50). The distance of the camera to the model needs to be adjusted to the size of the model. Now rotate the camera around the x-axis (90|0|0) to get the object into the field of view. If it is placed correctly, the model to be tracked is visible in the Camera Preview of the VLInitCamera. Please note that moving the object model instead of the VLInitCamera (except for correcting the coordinate system) will produce a misalignment of the augmentation during runtime.
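If you prefer to set this initial pose from a script instead of the Inspector, the following sketch uses only standard Unity transform calls and mirrors the tutorial values; attach it to the VLInitCamera. The class name is an assumption, the numbers will need adjusting for your model, and whether a pose set at runtime is picked up depends on when VisionLib reads the VLInitCamera, so the Inspector remains the recommended route:

using UnityEngine;

// Sketch: place the VLInitCamera above the model and tilt it downwards,
// mirroring the values used in this tutorial.
public class SetInitialViewpoint : MonoBehaviour
{
    private void Start()
    {
        transform.position = new Vector3(0f, 400f, 50f);     // above the model, slight z offset
        transform.rotation = Quaternion.Euler(90f, 0f, 0f);  // look down along the y-axis
    }
}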
Run your project by clicking the play button. Put the object you chose to track on the table and point the camera at it. As soon as VisionLib has detected the object, you should see the model augmented on it in Unity. Try different positions and angles if VisionLib doesn't recognize the object immediately.
In case VisionLib does not track your object as expected, you can enable further debugging information. In particular, you can obtain a visualization of the contour of the object VisionLib is looking for by adding the following line to the parameters section of your tracking configuration:
"showLineModel": true
Rerun your project with this setting; a green contour of your object should now be visible. Because the contour is rendered directly into the video stream, your object in Unity might block the view, so if no contour is visible you can temporarily hide the object in Unity. This also makes it easy to spot when the model in Unity is misaligned with the object VisionLib is looking for (for example, a rotation of 90 degrees around the y-axis).
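Another frequent cause of tracking not starting at all is a wrong file name or path for the configuration file. The following is only a hypothetical sanity check using standard Unity and .NET calls; the class name is an assumption, and File.Exists works only where StreamingAssets is a plain folder (e.g. in the editor and desktop builds):

using System.IO;
using UnityEngine;

// Hypothetical check: warn if the tutorial's tracking configuration is missing.
public class ConfigFileCheck : MonoBehaviour
{
    private void Start()
    {
        // Mirrors the location used in this tutorial
        string path = Path.Combine(
            Application.streamingAssetsPath,
            "VisionLib/Examples/ModelTracking/TutorialModelTracker.vl");

        if (!File.Exists(path))
        {
            Debug.LogWarning("Tracking configuration not found: " + path);
        }
    }
}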