Augmented Video for iOS
This tutorial will guide you through the creation of a simple AR application that detects and tracks an image marker stored in the app assets and plays a video on it.
Note: our SDK is designed for use with iOS SDK 10.2+ and needs an actual device to run; you can't use simulators. This is important!
A complete Visual Studio solution can be downloaded from GitHub. The following explanations are based on the code found there.
If you haven't already studied the Local Planar Marker tutorial, please do so now. It introduces a few concepts that are important and useful for all apps developed with Pikkart's SDK on Xamarin, such as how to set license information, how to download and integrate the SDK libraries with your code, plus basic information about recognition and related event handlers.
In order for the tutorial to compile correctly, it was necessary to add two NuGet packages to the main project:
- Pikkart.Ar.Sdk
- OpenTK
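If you are building your own project rather than using the downloaded solution, installing them from the NuGet Package Manager Console would look roughly like this (versions omitted; package IDs as listed above):

Install-Package Pikkart.Ar.Sdk
Install-Package OpenTK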
You should see them inside the package folder as follows:
Inside the solution you will also find a PikkartVideoPlayerBindingLibrary project that lets you instantiate and manage a video player in your apps through a small, simple API.
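The video-related members exercised later in this tutorial (on the sample's VideoMesh class, which builds on that player) give an idea of what is available. This is only an informal summary of the calls shown further down, not an API reference, and the parameter names are descriptive placeholders:

// Calls used later in this tutorial (parameter names are placeholders):
_videoMesh.InitMesh(videoPathOrUrl, keyframePath, seekPositionMs, autostart, externalPlayer);
_videoMesh.DrawMesh(ref mvMatrix, ref pMatrix);  // render the video plane with the given matrices
_videoMesh.isPlaying();                          // true while the video is playing
_videoMesh.playOrPauseVideo();                   // toggle playback
_videoMesh.pauseVideo();                         // pause playback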
The solution also contains the media.bundle and markers.bundle packages. The latter contains the marker data that the app uses to determine where to position the video player inside the camera frame; the former contains the resources used by this app (the video file to be played, the video player UI controls, and so on).
Another folder contains the marker image that you will need to print or open with an image viewer when testing the app.
Let's analyze the most important bits of code. The PikkartVideoPlayer class encapsulates an AVPlayer and an OpenGL rendering surface to which video frames are redirected; it is the main class managing our video/audio data. The VideoMesh class is our 3D object (a simple plane textured with our video) that will be drawn on top of the tracked image.
Our RecognitionViewController class is where the recognition happens, in the very same fashion as in the Local Planar Marker tutorial (we won't repeat that here). We keep a VideoMesh object to draw, and we create and initialize it in the ViewDidLoad method:
public class RecognitionViewController : PKTRecognitionController, IPKTIRecognitionListener
{
    ...
    VideoMesh _videoMesh = null;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        if (View is GLKView)
        {
            // The VideoMesh needs a current GL context before it can create its GL resources.
            EAGLContext.SetCurrentContext(((GLKView)View).Context);
            _videoMesh = new VideoMesh(this);

            // Resolve the media bundle shipped with the app and load the video and keyframe from it.
            string mediaBundlePath = NSBundle.MainBundle.BundlePath + "/media.bundle";
            NSBundle mediaBundle = new NSBundle(mediaBundlePath);
            _videoMesh.InitMesh(mediaBundle.PathForResource("pikkart_video", "mp4"),
                                mediaBundle.PathForResource("pikkart_keyframe", "png"),
                                0, false, null);
        }
    }
The VideoMesh is created and initialized by passing it the URL of the video file (either a locally stored file or a web URL), the path to a keyframe image shown to the user when the video is not playing (a locally stored image, in this case inside the media bundle), a starting seek position (in milliseconds), and whether the video should start automatically on first detection. You can also pass a reference to an external PikkartVideoPlayer object if you don't want the VideoMesh to use its internal one.
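For example, a variant that streams a remote video and starts it automatically on first detection might look like the following sketch; the URL is a hypothetical placeholder, while the parameters follow the InitMesh call shown above:

// Hypothetical variant (not from the sample): stream a remote video and autostart it.
_videoMesh.InitMesh("https://example.com/my_video.mp4",                     // video file path or web URL (placeholder)
                    mediaBundle.PathForResource("pikkart_keyframe", "png"), // keyframe shown while not playing
                    0,                                                      // starting seek position, in milliseconds
                    true,                                                   // autostart on first detection
                    null);                                                  // null = use the internal PikkartVideoPlayer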
We have a couple of support functions that modify and rotate the projection matrix depending on the device screen orientation:
bool computeModelViewProjectionMatrix(ref float[] mvpMatrix)
bool computeModelViewProjectionMatrix(ref float[] mvMatrix, ref float[] pMatrix)
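Both helpers are part of the sample project on GitHub. Purely as an illustration of the idea (this is not the sample's actual implementation), rotating a row-major 4x4 projection matrix around the viewing axis by the screen-orientation angle could be done like this:

// Illustrative sketch only: pre-multiply a row-major 4x4 projection matrix by a
// Z-axis rotation matching the UI orientation angle (0, 90, 180 or 270 degrees).
static float[] RotateProjectionForOrientation(float[] projection, int angleDegrees)
{
    double rad = angleDegrees * Math.PI / 180.0;
    float c = (float)Math.Cos(rad), s = (float)Math.Sin(rad);
    float[] rotation = {
        c, -s, 0, 0,
        s,  c, 0, 0,
        0,  0, 1, 0,
        0,  0, 0, 1
    };
    float[] result = new float[16];
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++)
            for (int k = 0; k < 4; k++)
                result[row * 4 + col] += rotation[row * 4 + k] * projection[k * 4 + col];
    return result;
}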
Next, let's look at the DrawInRect method, where we draw the video mesh whenever we recognize a specific marker. We also make sure to keep drawing the video even if tracking of the marker is lost:
public override void DrawInRect(GLKView view, CoreGraphics.CGRect rect)
{
    if (!isActive()) return;

    // While the marker is tracked, draw the video mesh on top of it.
    if (isTracking())
    {
        if (CurrentMarker != null)
        {
            if (CurrentMarker.Id == "3_517")
            {
                float[] mvMatrix = new float[16];
                float[] pMatrix = new float[16];
                if (computeModelViewProjectionMatrix(ref mvMatrix, ref pMatrix))
                {
                    _videoMesh.DrawMesh(ref mvMatrix, ref pMatrix);
                    RenderUtils.CheckGLError();
                }
            }
        }
    }

    // If tracking is lost while the video is playing, keep drawing it as a billboard
    // in front of the camera, using a fixed modelview matrix per screen orientation.
    if (!isTracking() && _videoMesh.isPlaying())
    {
        float[] mvMatrix = new float[16];
        float[] pMatrix = new float[16];
        computeProjectionMatrix(ref pMatrix);
        if (Angle == 0)
        {
            mvMatrix[0] = 1.0f;  mvMatrix[1] = 0.0f;  mvMatrix[2] = 0.0f;   mvMatrix[3] = -0.5f;
            mvMatrix[4] = 0.0f;  mvMatrix[5] = -1.0f; mvMatrix[6] = 0.0f;   mvMatrix[7] = 0.4f;
            mvMatrix[8] = 0.0f;  mvMatrix[9] = 0.0f;  mvMatrix[10] = -1.0f; mvMatrix[11] = -1.3f;
            mvMatrix[12] = 0.0f; mvMatrix[13] = 0.0f; mvMatrix[14] = 0.0f;  mvMatrix[15] = 1.0f;
        }
        else if (Angle == 90)
        {
            mvMatrix[0] = 0.0f;  mvMatrix[1] = 1.0f;  mvMatrix[2] = 0.0f;   mvMatrix[3] = -0.5f;
            mvMatrix[4] = 1.0f;  mvMatrix[5] = 0.0f;  mvMatrix[6] = 0.0f;   mvMatrix[7] = -0.5f;
            mvMatrix[8] = 0.0f;  mvMatrix[9] = 0.0f;  mvMatrix[10] = -1.0f; mvMatrix[11] = -1.8f;
            mvMatrix[12] = 0.0f; mvMatrix[13] = 0.0f; mvMatrix[14] = 0.0f;  mvMatrix[15] = 1.0f;
        }
        else if (Angle == 180)
        {
            mvMatrix[0] = -1.0f; mvMatrix[1] = 0.0f;  mvMatrix[2] = 0.0f;   mvMatrix[3] = 0.5f;
            mvMatrix[4] = 0.0f;  mvMatrix[5] = 1.0f;  mvMatrix[6] = 0.0f;   mvMatrix[7] = -0.4f;
            mvMatrix[8] = 0.0f;  mvMatrix[9] = 0.0f;  mvMatrix[10] = -1.0f; mvMatrix[11] = -1.3f;
            mvMatrix[12] = 0.0f; mvMatrix[13] = 0.0f; mvMatrix[14] = 0.0f;  mvMatrix[15] = 1.0f;
        }
        else if (Angle == 270)
        {
            mvMatrix[0] = 0.0f;  mvMatrix[1] = -1.0f; mvMatrix[2] = 0.0f;   mvMatrix[3] = 0.5f;
            mvMatrix[4] = -1.0f; mvMatrix[5] = 0.0f;  mvMatrix[6] = 0.0f;   mvMatrix[7] = 0.5f;
            mvMatrix[8] = 0.0f;  mvMatrix[9] = 0.0f;  mvMatrix[10] = -1.0f; mvMatrix[11] = -1.8f;
            mvMatrix[12] = 0.0f; mvMatrix[13] = 0.0f; mvMatrix[14] = 0.0f;  mvMatrix[15] = 1.0f;
        }
        _videoMesh.DrawMesh(ref mvMatrix, ref pMatrix);
        RenderUtils.CheckGLError();
    }

    GL.Finish();
}
For our RecognitionViewController class, we added two support functions that either play or pause the video. These are used to respond to user interaction and to app state changes (mainly ViewWillDisappear):
void playOrPauseVideo() { _videoMesh.playOrPauseVideo(); }
void pauseVideo() { _videoMesh.pauseVideo(); }
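Wiring pauseVideo() to the app state change mentioned above could look like the following sketch (not verbatim from the sample): pause playback whenever the view is about to disappear.

public override void ViewWillDisappear(bool animated)
{
    base.ViewWillDisappear(animated);
    // Stop audio/video playback when the recognition view goes away.
    pauseVideo();
}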
Finally, we added a UITapGestureRecognizer to detect taps on the screen and use them to play or pause the video:
public override void ViewDidLoad()
{
    base.ViewDidLoad();
    ...
    // Toggle video playback on tap.
    UITapGestureRecognizer tapGesture = new UITapGestureRecognizer((obj) => { playOrPauseVideo(); });
    View.AddGestureRecognizer(tapGesture);
    ...
}
And that's all, folks, for this tutorial. Once compiled and run on a device, you should see something like this when targeting one of our test markers:
If you want more details on how our VideoMesh is built and how our AR video playback works, take a look at the code and read the comments in the VideoMesh class.