iOS SDK: Local Planar Marker

This tutorial will guide you through the creation of a simple AR application that detects and tracks an image marker stored in the app assets and draws a simple 3D model on it. 

Note: Our SDK is designed for use with Xcode 8.2+ and supports iOS SDK 10.2+. The code below is written in Swift 4.0.
A complete Xcode project can be downloaded from GitHub. The following explanations are based on the code found there.
Important for those using the demo license: since you are limited to a single bundle identifier for your apps (com.pikkart.trial), it is advisable to delete any app previously created with the Pikkart SDK from your device before installing a new one.

Before you can run this example, you need to do the following:

  1. Decompress the SDK zip file you downloaded from the Pikkart developer site and copy the pikkartAR.framework file into your Xcode project directory. Add it to the project as an existing file.
  2. Go to the "General" tab and add a few items to the "Linked Frameworks and Libraries" section. Note: only the pikkartAR.framework file is provided in the zip file; all the other frameworks are already included with Xcode.
  3. Set a wildcard provisioning profile (important!) and your own team/signing certificate in the project.
  4. Provide your own SDK license, adding it to the project directory and main group.

First things first, a little recap from the Getting Started tutorial: the recognition process is started in the viewDidLoad method by calling the startRecognition method, as follows:

override func viewDidLoad() {
    [..]
    // Do any additional setup after loading the view, typically from a nib.
    let authInfo: PKTCloudRecognitionInfo = PKTCloudRecognitionInfo(databaseName: "")
    let options: PKTRecognitionOptions = PKTRecognitionOptions(recognitionStorage: .PKTLOCAL, andMode: .PKTRECOGNITION_CONTINUOS_SCAN, andCloudAuthInfo: authInfo)
    // start recognition
    self.startRecognition(options, andRecognitionCallback: self)
}

Now we need to print the test marker image (001_small.jpg), or display it in an image viewer on screen, in order to have something to recognize, just as we did in the very first tutorial.
Compile and run the app on a device, framing the test marker with the camera. Whenever an event that you have registered a handler for occurs, you will see the corresponding alert or console message. A sketch of such handlers is shown below.
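
The handlers in question live on the recognition callback object passed to startRecognition (self in the snippet above). The following is only a sketch of what they might look like: the method names (markerFound, markerNotFound, markerTrackingLost) are assumptions based on the Getting Started tutorial, so check the recognition callback protocol in the pikkartAR.framework headers or the GitHub sample project for the exact signatures.

// Hedged sketch: the callback names below are assumptions, verify them against
// the recognition callback protocol declared in the pikkartAR.framework headers.
extension ViewController {

    // Assumed callback: a marker has been recognized in the camera frame
    func markerFound(_ marker: PKTMarker!) {
        print("Marker found: \(marker.markerId ?? "unknown")")
    }

    // Assumed callback: no marker was recognized
    func markerNotFound() {
        print("No marker found")
    }

    // Assumed callback: tracking of a previously found marker was lost
    func markerTrackingLost(_ markerId: String!) {
        print("Tracking lost for marker: \(markerId ?? "unknown")")
    }
}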

In the first tutorial, the app only showed a black screen while it sampled images from the camera and did its magic in the background.  Now it's time to show the world and add rendered graphics on top of our marker.

Our ViewController.swift inherits from PKTRecognitionController, which in turn descends from GLKViewController. This means that our view controller already has an OpenGL animation loop where we can draw our 3D objects. In ViewController.swift, we are going to override the glkView(_ view: GLKView, drawIn rect: CGRect) method, which is called automatically by the iOS GLKit animation loop renderer:

//MARK: GLView rendering callback 
override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        
        if (!self.isActive()) {
            return
        }
        // Prepare the drawable area (standard iOS GL stuff)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
        // Render the last processed camera frame and prepare the viewport for our 3D content
        self.renderCamera(withViewPortSize: CGSize(width: _ViewportWidth, height: _ViewportHeight), andAngle: Int32(_Angle))

        // The rendering only needs to happen if we have a marker to render on
        if self.isTracking() {
            if let currentMarker = self.getCurrentMarker() {
                // We are looking for a specific marker
                if (currentMarker.markerId! == "3_543") {
                    // Model-View-Projection matrix, as explained here:
                    // http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#the-model-view-and-projection-matrices
                    var mvpMatrix = [Float](repeating: 0, count: 16)
                    
                    // This should always return true while a marker is being tracked:
                    // computeModelViewProjectionMatrix checks where the marker is and how
                    // it is positioned in the real world, and fills mvpMatrix with the
                    // result (which is the real reason we call it).
                    if (self.computeModelViewProjectionMatrix(&mvpMatrix))
                    {
                        // Do the actual rendering. Not explained here as it's all about OpenGL on iOS.
                        _monkeyMesh!.DrawMesh(&mvpMatrix)
                        RenderUtils.checkGLError()
                    }
                }
            }
        }
        glFinish();
}

In this example we have added a couple of support functions, such as computeModelViewProjectionMatrix, which computes the model-view-projection matrix that will be used by the OpenGL renderer, starting from the projection and marker attitude/position matrices obtained from Pikkart's AR SDK, and returns true on success. You can read more about model-view-projection matrices here: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#the-model-view-and-projection-matrices
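
For reference, here is a minimal sketch of how such a helper could be built on top of the SDK's matrix getters listed further below. It assumes the getters fill a pointer to a column-major 4x4 float matrix (the usual OpenGL layout) and uses an inout parameter so the matrix can be filled in place; the argument labels may differ depending on how the framework header is imported, so treat this as a sketch and refer to the GitHub project for the exact implementation.

// Hedged sketch of a computeModelViewProjectionMatrix helper. It assumes the
// SDK getters return column-major 4x4 matrices; drop or adjust the
// matrixPointer: labels if the imported signatures differ.
internal func computeModelViewProjectionMatrix(_ mvpMatrix: inout [Float]) -> Bool {
    guard self.isTracking(), mvpMatrix.count == 16 else { return false }

    var projectionPtr: UnsafeMutablePointer<Float>? = nil
    var modelViewPtr: UnsafeMutablePointer<Float>? = nil
    self.getCurrentProjectionMatrix(matrixPointer: &projectionPtr)  // camera projection matrix
    self.getCurrentModelViewMatrix(matrixPointer: &modelViewPtr)    // marker attitude/position

    guard let projection = projectionPtr, let modelView = modelViewPtr else { return false }

    // MVP = Projection * ModelView, using column-major indexing
    for col in 0..<4 {
        for row in 0..<4 {
            var sum: Float = 0
            for k in 0..<4 {
                sum += projection[k * 4 + row] * modelView[col * 4 + k]
            }
            mvpMatrix[col * 4 + row] = sum
        }
    }
    return true
}

With a helper declared this way, the call inside glkView above is written as self.computeModelViewProjectionMatrix(&mvpMatrix), passing the matrix as an inout argument.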

Feel free to use such methods in your own code!

We make use of three important methods of our PKTRecognitionController class:

  • open func getCurrentProjectionMatrix(matrixPointer: UnsafeMutablePointer<UnsafeMutablePointer<Float>?>!)
  • open func getCurrentModelViewMatrix(matrixPointer: UnsafeMutablePointer<UnsafeMutablePointer<Float>?>!)
  • open func isTracking() -> Bool

Other important functions are:

  • open func getCurrentMarker() -> PKTMarker!
  • open func renderCamera(withViewPortSize viewPortSize: CGSize, andAngle angle: Int32)

The first returns the currently tracked marker. The PKTMarker class contains information about the tracked image, most importantly its associated custom data (this will be explained in the next tutorial). The renderCamera(withViewPortSize:andAngle:) function renders the last processed camera image; it is what enables the app to display the actual camera view, and it must be called inside your OpenGL context rendering cycle.

At the end of the recognition process, you should see something like this: