Android SDK >
Local Planar Marker

This tutorial will guide you through the creation of a simple AR application that detects and tracks an image marker stored in the app assets and draws a simple 3D model on it. Our SDK is designed for use with Android Studio (version 2.1+) and supports Android SDK level 15+ (21+ for the 64-bit version). The Android NDK is not required.

First, create a new application in Android Studio.

Write your app name (e.g. PikkartARTutorial) and company domain, choose the project location in the Create New Project wizard, and click Next.

We will create an empty activity for this tutorial: select it and click Next.

Give the Activity main class and Activity Layout a name and click Finish.

After a short while Android Studio will show you a new window with your app skeleton and base classes created by the app wizard.

Now it's time to set up your app for development with Pikkart's AR SDK, in much the same way as described in the Getting Started section.

Copy the file <pikkart_sdk_dir>/libs/pikkart_ar_sdk.aar into the libs folder of your module. (project root/app/libs)

Open build.gradle of your app module, add pikkart_ar_sdk.aar as a dependency, and tell Gradle to search the libs folder by modifying the gradle file as below. If you have already purchased a license, set the applicationId to the package name you provided when buying your Pikkart AR license; otherwise use "com.pikkart.trial" if you're using the trial license provided by default when you register as a developer. Since version 3.5.8 our SDK relies on Android Volley for network access, so also add the dependency implementation 'com.android.volley:volley:1.1.1' to the app dependencies.

android {
    ...
    defaultConfig {
        applicationId "com.pikkart.trial"
        ...
    }
    ...
}
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation(name: 'pikkart_ar_sdk', ext: 'aar')
    implementation 'com.android.volley:volley:1.1.1'
    ...
}
repositories {
    flatDir {
        dirs 'libs'
    }
}

Android Studio will ask you to sync Gradle after the updates to the build.gradle file; sync and wait for the process to finish.

Copy the license file we provided into your app assets dir (<android project root>/app/src/main/assets/; create the assets dir if it doesn't exist). Also copy the sample markers and media dirs (<pikkart_sample_dir>/sample/assets/markers/ and <pikkart_sample_dir>/sample/assets/media/) into your app project assets dir (you can download the sample package here).

Add the following permissions and features to your AndroidManifest.xml:


<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" android:required="true" />
<uses-feature android:glEsVersion="0x00020000" android:required="true" />

The activity holding Pikkart's AR components must set android:configChanges="screenSize|orientation" in the AndroidManifest.xml, as in the following example:

<activity android:name="com.yourcompany.yourapp.Your_AR_Activity" 
android:configChanges="screenSize|orientation">

In order to support Android 6.0+ we have to slightly modify the MainActivity.java code to ask the user for permission to access the camera and to read/write external storage.

Create a private function initLayout() and move onCreate's setContentView there.

private void initLayout() { 
    setContentView(R.layout.activity_main); 
}

Now add a new function that will ask the user for permissions. The following function requests the CAMERA, WRITE_EXTERNAL_STORAGE and READ_EXTERNAL_STORAGE permissions. It uses ActivityCompat.checkSelfPermission to check whether each requested permission has already been granted; if not, it uses ActivityCompat.requestPermissions to request the missing ones. The function receives a unique integer code to identify the request. Create a class member variable (e.g. called m_permissionCode) and assign a unique integer value to it.

private void checkPermissions(int code) {
    String[] permissions_required = new String[] { 
                Manifest.permission.CAMERA, 
                Manifest.permission.WRITE_EXTERNAL_STORAGE,
                Manifest.permission.READ_EXTERNAL_STORAGE };

    List<String> permissions_not_granted_list = new ArrayList<>();
    for (String permission : permissions_required) {
        if (ActivityCompat.checkSelfPermission(getApplicationContext(), permission) != PackageManager.PERMISSION_GRANTED) {
            permissions_not_granted_list.add(permission);
        }
    }
    if (permissions_not_granted_list.size() > 0) {
            String[] permissions = new String[permissions_not_granted_list.size()];
            permissions_not_granted_list.toArray(permissions);
            ActivityCompat.requestPermissions(this, permissions, code);
    }
    else {
        initLayout();
    }
}

Now we have to implement a callback function that will be called by the OS once the user has granted or dismissed the requested permissions. This implementation cycles through all requested permissions and checks that every one of them has been granted:

@Override
public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) {
    if(requestCode==m_permissionCode) {
        boolean ok = true;
        for(int i=0;i<grantResults.length;++i) {
            ok = ok && (grantResults[i]==PackageManager.PERMISSION_GRANTED);
        }
        if(ok) {
            initLayout();
        }
        else {
            Toast.makeText(this, "Error: required permissions not granted!", Toast.LENGTH_SHORT).show();
            finish();
        }
    }
}

We can then modify our onCreate function this way:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    //if not Android 6+ run the app
    if (Build.VERSION.SDK_INT < 23) {
        initLayout();
    }
    else {
        checkPermissions(m_permissionCode);
    }
}

Your main activity class should now look like this:

package <your.package.name>;

import android.Manifest;
import android.content.pm.PackageManager;
import android.os.Build;
import android.support.v4.app.ActivityCompat;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.Toast;
import java.util.ArrayList;
import java.util.List;

public class MainActivity extends AppCompatActivity {
    private int m_permissionCode = 100; // unique permission request code

    private void initLayout() {
        setContentView(R.layout.activity_main);
    }

    private void checkPermissions(int code) {
        // require permission to access camera, read and write external storage
        String[] permissions_required = new String[] {
                Manifest.permission.CAMERA,
                Manifest.permission.WRITE_EXTERNAL_STORAGE,
                Manifest.permission.READ_EXTERNAL_STORAGE };

        // check if permissions have been granted
        List<String> permissions_not_granted_list = new ArrayList<>();
        for (String permission : permissions_required) {
            if (ActivityCompat.checkSelfPermission(getApplicationContext(), permission) != PackageManager.PERMISSION_GRANTED) {
                permissions_not_granted_list.add(permission);
            }
        }
        // permissions not granted
        if (permissions_not_granted_list.size() > 0) {
            String[] permissions = new String[permissions_not_granted_list.size()];
            permissions_not_granted_list.toArray(permissions);
            ActivityCompat.requestPermissions(this, permissions, code);
        }
        else { // if all permissions have been granted
            initLayout();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) {
        // this is the answer to our permission request (our permissioncode)
        if (requestCode==m_permissionCode) {
            // check if all have been granted
            boolean ok = true;
            for(int i=0;i<grantResults.length;++i) {
                ok = ok && (grantResults[i]==PackageManager.PERMISSION_GRANTED);
            }
            if (ok) {
                // if all have been granted, continue
                initLayout();
            }
            else {
                // exit if not all required permissions have been granted
                Toast.makeText(this, "Error: required permissions not granted!", Toast.LENGTH_SHORT).show();
                finish();
            }
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        //if not Android 6+ run the app
        if (Build.VERSION.SDK_INT < 23) {
            initLayout();
        }
        else { // otherwise ask for permissions
            checkPermissions(m_permissionCode);
        }
    }
}

Now it's time to add and enable Pikkart SDK's AR functionalities.

First of all, change the root layout of the app from the default ConstraintLayout to a RelativeLayout; you can change it through your app layout XML file (usually found inside app/res/layout in the Android Studio project browser). Then add Pikkart's AR RecognitionFragment to your AR activity and delete the sample TextView auto-generated by Android Studio.

<?xml version="1.0" encoding="utf-8"?> 
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="pikkart.com.pikkarttutorial_10_17.MainActivity"> 
    <fragment android:layout_width="match_parent" android:layout_height="match_parent"
        android:id="@+id/ar_fragment" android:name="com.pikkart.ar.recognition.RecognitionFragment" />
</RelativeLayout>

Your layout should look like this (with preview):

Now it's time to start the recognition process. We can do it inside our initLayout() by adding

RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
t_cameraFragment.startRecognition(
    new RecognitionOptions(
        RecognitionOptions.RecognitionStorage.LOCAL,
        RecognitionOptions.RecognitionMode.CONTINUOUS_SCAN,
        new CloudRecognitionInfo(new String[]{})
    ), 
    this);

The startRecognition() function requires two parameters: a RecognitionOptions object and a reference to an object implementing the IRecognitionListener interface. The RecognitionOptions object can be created on the fly using a three-parameter constructor. The first parameter, of type RecognitionOptions.RecognitionStorage, indicates whether to use LOCAL recognition mode or GLOBAL (cloud recognition + local markers) mode. The second parameter, of type RecognitionOptions.RecognitionMode, indicates whether to use CONTINUOUS_SCAN recognition of image markers or TAP_ON_SCAN mode (the latter requires user input to launch a new marker search). The last parameter is a CloudRecognitionInfo object holding the names of the cloud databases in which to search for markers; it's relevant only when using GLOBAL storage and will be explained in the next tutorial.

We also need to add an additional function that will restart the recognition process when the app execution is resumed by the user. Add the following function to your main activity code:

@Override
public void onResume() {
    super.onResume();
    RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
    if(t_cameraFragment!=null) t_cameraFragment.startRecognition(
            new RecognitionOptions(
                    RecognitionOptions.RecognitionStorage.LOCAL, 
                    RecognitionOptions.RecognitionMode.CONTINUOUS_SCAN, 
                    new CloudRecognitionInfo(new String[]{})
            ), this);
}

In the two blocks of code above we passed a reference to our main activity class to the startRecognition() function. The function requires an IRecognitionListener interface, so we need to implement that interface in our activity class. First we have to modify our MainActivity class definition a bit:

public class MainActivity extends AppCompatActivity implements IRecognitionListener {
    ....
}

Then we have to implement the following callback functions:

  • This function is called every time a cloud image search is initiated. In this tutorial it will never be called as we are working LOCAL only.
    @Override
    public void executingCloudSearch() {
    }
  • This function is called when the cloud image search fails to find a target image in the camera frame. In this tutorial it will never be called as we are working LOCAL only.
    @Override
    public void cloudMarkerNotFound() {
    }
  • This function is called when the app is set to do cloud operations or image searches (RecognitionOptions.RecognitionStorage.GLOBAL) but no internet connection is available.
    @Override
    public void internetConnectionNeeded() {
    }
  • This function is called every time a marker is found and tracking starts. In this tutorial we show a toast informing the user.
    @Override
    public void markerFound(Marker marker) {
        Toast.makeText(this, "PikkartAR: found marker " + marker.getId(),
                    Toast.LENGTH_SHORT).show();
    }
  • This function is called every time a marker is not found (after searching through all local markers, or after a tap-to-scan failed)
    @Override 
    public void markerNotFound() { 
    }
  • This function is called as soon as the current marker tracking is lost. In this tutorial we show a toast informing the user.
    @Override
    public void markerTrackingLost(String i) {
        Toast.makeText(this, "PikkartAR: lost tracking of marker " + i, Toast.LENGTH_SHORT).show();
    }
  • This function is called by our SDK to check if a connection is available. As we are working LOCAL only, we always return false to force our SDK to work offline.
    @Override 
    public boolean isConnectionAvailable(Context context) { 
        return false; 
    }
  • This function is called when an ARLogo is found. Pikkart's ARLogo is a new technology that embeds binary codes inside an image, allowing app developers to unleash different AR experiences from visually similar marker images (i.e. images that look the same but have different binary codes embedded in them). For an explanation of how to use this function and the ARLogo, see the ARLogo tutorial and the ARLogo presentation page.
    @Override 
    public void ARLogoFound(String markerId, int code) { 
    //TODO: add your code here  
    }

You can run the application now. Print one of the test marker images (<pikkart_sample_dir>/markers_printable/), compile and run the app on a device and try it. The app currently doesn't show anything; it just samples images from the camera and does its magic in the background. It's time to add some visuals. In order to do that we need to create an OpenGL view, set it up, and render the camera view and additional augmented content. We have created some helper classes, included in the sample package, to help you set up the rendering process. To use them, copy the java classes inside the folder <pikkart_sample_dir>/sample/rendering/ into the folder <your_app_root>/app/src/main/java/<DOMAIN>/<APPLICATIONNAME> (the folder where your MainActivity is). You will find three new java classes in your app project as in the following image:

Change the package name in the three classes to match the package name of your MainActivity.

package com.pikkart.tutorial.rendering;
package <your.package.name>;

GLTextureView is a widget class that combines an Android TextureView with an OpenGL context, managing its setup, rendering, etc. RenderUtils contains various OpenGL helper functions (shader compilation etc.) and some matrix operations. The Mesh class manages a 3D mesh (loading from a JSON file and rendering).
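The RenderUtils matrix helpers ship ready-made in the sample package, so you don't need to write them yourself. As a rough sketch of the semantics the tutorial code assumes (a hypothetical standalone version; flat float arrays in row-major order), a generic matrix multiply and a 4x4 identity helper might look like:

```java
public class MatrixSketch {
    // Multiply an (aRows x aCols) matrix A by a (bRows x bCols) matrix B,
    // both stored row-major in flat arrays, writing A*B into out.
    // Mirrors the argument order of RenderUtils.matrixMultiply used in the tutorial.
    public static void matrixMultiply(int aRows, int aCols, float[] a,
                                      int bRows, int bCols, float[] b, float[] out) {
        for (int r = 0; r < aRows; r++) {
            for (int c = 0; c < bCols; c++) {
                float sum = 0.0f;
                for (int k = 0; k < aCols; k++) {
                    sum += a[r * aCols + k] * b[k * bCols + c];
                }
                out[r * bCols + c] = sum;
            }
        }
    }

    // Fill a 16-element array with the 4x4 identity matrix,
    // as a matrix44Identity-style helper would.
    public static void matrix44Identity(float[] m) {
        for (int i = 0; i < 16; i++) {
            m[i] = (i % 5 == 0) ? 1.0f : 0.0f;
        }
    }
}
```

Multiplying by the identity leaves a matrix unchanged, which is a quick way to sanity-check the storage convention.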

In order to create our own 3D renderer we extend the GLTextureView class and create a modified version that performs our custom rendering. We use a structure that is fairly common in the world of 3D rendering on Android: the GLTextureView class manages Android UI related stuff, the setup of an OpenGL rendering context and a few other things. For the actual rendering it defines an interface (GLTextureView.Renderer) with a few callback functions that are called during the various phases of the OpenGL context setup, the Android view setup and the rendering cycle. Before creating our own derived version of GLTextureView we will define our rendering class implementing the GLTextureView.Renderer interface.
This class implements 4 callback functions:

  • public void onSurfaceCreated(GL10 gl, EGLConfig config) that is called every time the OpenGL rendering surface is created or recreated. When this is called all OpenGL related stuff (buffers, VBOs, textures etc.) must be recreated as well.
  • public void onSurfaceChanged(GL10 gl, int width, int height) is called every time the OpenGL surface is changed, without destroying it.
  • public void onSurfaceDestroyed() is called when the OpenGL surface is destroyed. After this point you can clean up and delete your objects.
  • public void onDrawFrame(GL10 gl) is called on every rendering cycle. This is where the actual OpenGL rendering happens.

In our implementation we have added a couple of support functions, such as public boolean computeModelViewProjectionMatrix(float[] mvpMatrix), which computes the model-view-projection matrix used by the OpenGL renderer, starting from the projection and marker attitude/position matrices obtained from Pikkart's AR SDK.

We make use of 3 important static functions of our RecognitionFragment class:

  • RecognitionFragment.getCurrentProjectionMatrix()
  • RecognitionFragment.getCurrentModelViewMatrix()
  • RecognitionFragment.isTracking()

Other important static functions are:

  • public static Marker RecognitionFragment.getCurrentMarker()
  • public static void RecognitionFragment.renderCamera(int viewportWidth, int viewportHeight, int angle)

The first returns the currently tracked Marker. The Marker class contains information about the tracked image: its download date, its update date and, more importantly, its associated custom data (this will be explained in the next tutorial). The renderCamera function renders the last processed camera image; it's the function that enables the app to display the actual camera view. It must be called inside your OpenGL rendering cycle, usually as one of the first calls inside the public void onDrawFrame(GL10 gl) function of our Renderer implementation.

Create a new java class in your Android project (in the same folder as the MainActivity), name it ARRenderer. The following is the full implementation of our Renderer class implementing the GLTextureView.Renderer interface (copy and paste the following code inside the ARRenderer class and remember to set the correct package name).


package <your.package.name>;
import android.content.Context;
import com.pikkart.ar.recognition.RecognitionFragment;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class ARRenderer implements GLTextureView.Renderer {
    public boolean IsActive = false;
    //the rendering viewport dimensions
    private int ViewportWidth;
    private int  ViewportHeight;
    //normalized screen orientation (0=landscape, 90=portrait, 180=inverse landscape, 270=inverse portrait)
    private int Angle;
    //
    private Context context;
    //the 3d object we will render on the marker
    private Mesh monkeyMesh = null;

    /* Constructor. */
    public ARRenderer(Context con) {
        context = con;
    }

    /** Called when the surface is created or recreated.
     * Reinitialize OpenGL related stuff here*/
    public void onSurfaceCreated(GL10 gl, EGLConfig config)  {
        gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        //Here we create the 3D object and initialize textures, shaders, etc.
        monkeyMesh = new Mesh();
        monkeyMesh.InitMesh(context.getAssets(),"media/monkey.json", "media/texture.png");
    }

    /** Called when the surface changed size. */
    public void onSurfaceChanged(GL10 gl, int width, int height) {
    }

    /** Called when the surface is destroyed. */
    public void onSurfaceDestroyed() {
    }

    /** Here we compute the model-view-projection matrix for OpenGL rendering
     * from the model-view and projection matrix computed by Pikkart's AR SDK.
     * the projection matrix is rotated accordingly to the screen orientation */
    public boolean computeModelViewProjectionMatrix(float[] mvpMatrix) {
        RenderUtils.matrix44Identity(mvpMatrix);

        float w = (float)640;
        float h = (float)480;

        float ar = (float)ViewportHeight / (float)ViewportWidth;
        if (ViewportHeight > ViewportWidth) ar = 1.0f / ar;
        float h1 = h, w1 = w;
        if (ar < h/w)
            h1 = w * ar;
        else
            w1 = h / ar;

        float a = 0.f, b = 0.f;
        switch (Angle) {
            case 0: a = 1.f; b = 0.f;
                break;
            case  90: a = 0.f; b = 1.f;
                break;
            case 180: a = -1.f; b = 0.f;
                break;
            case 270: a = 0.f; b = -1.f;
                break;
            default: break;
        }

        float[] angleMatrix = new float[16];

        angleMatrix[0] = a; angleMatrix[1] = b; angleMatrix[2]=0.0f; angleMatrix[3] = 0.0f;
        angleMatrix[4] = -b; angleMatrix[5] = a; angleMatrix[6] = 0.0f; angleMatrix[7] = 0.0f;
        angleMatrix[8] = 0.0f; angleMatrix[9] = 0.0f; angleMatrix[10] = 1.0f; angleMatrix[11] = 0.0f;
        angleMatrix[12] = 0.0f; angleMatrix[13] = 0.0f; angleMatrix[14] = 0.0f; angleMatrix[15] = 1.0f;

        float [] projectionMatrix = RecognitionFragment.getCurrentProjectionMatrix().clone();
        projectionMatrix[5] = projectionMatrix[5] * (h / h1);

        float [] correctedProjection = new float[16];

        RenderUtils.matrixMultiply(4,4,angleMatrix,4,4,projectionMatrix,correctedProjection);

        if ( RecognitionFragment.isTracking() ) {
            float [] modelviewMatrix = RecognitionFragment.getCurrentModelViewMatrix();
            float [] temp_mvp = new float[16];
            RenderUtils.matrixMultiply(4,4,correctedProjection,4,4,modelviewMatrix, temp_mvp);
            RenderUtils.matrix44Transpose(temp_mvp,mvpMatrix);
            return true;
        }
        return false;
    }

    /** Called to draw the current frame. */
    public void onDrawFrame(GL10 gl) {
        if (!IsActive) return;

        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        // Call our native function to render camera content
        RecognitionFragment.renderCamera(ViewportWidth, ViewportHeight, Angle);

        float[] mvpMatrix = new float[16];
        if(computeModelViewProjectionMatrix(mvpMatrix)) {
            //draw our 3d mesh on top of the marker
            monkeyMesh.DrawMesh(mvpMatrix);
            RenderUtils.checkGLError("completed Monkey head Render");
        }

        gl.glFinish();
    }

    /* this will be called by our GLTextureView-derived class to update screen sizes and orientation */
    public void UpdateViewport(int viewportWidth, int viewportHeight, int angle) {
        ViewportWidth = viewportWidth;
        ViewportHeight = viewportHeight;
        Angle = angle;
    }
}
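The (a, b) pair selected by the switch statement in computeModelViewProjectionMatrix above is just the cosine and sine of the normalized screen rotation: angleMatrix is a rotation about the Z axis that re-orients the projection for the current screen orientation. A small standalone sketch of that correspondence (hypothetical helper, not part of the SDK):

```java
public class AngleSketch {
    // Returns {a, b} = {cos(angle), sin(angle)} for the normalized screen
    // orientations used by the renderer (0, 90, 180 and 270 degrees).
    // Rounding removes the tiny floating-point residue at the cardinal angles.
    public static float[] rotationCoefficients(int angle) {
        double rad = Math.toRadians(angle);
        float a = Math.round(Math.cos(rad));
        float b = Math.round(Math.sin(rad));
        return new float[]{a, b};
    }
}
```

For angle 90 this yields a = 0 and b = 1, matching the corresponding case of the switch.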

Create a new java class in your Android project (in the same folder as the MainActivity), name it ARView.  

The implementation of our OpenGL AR view class, derived from our GLTextureView class, is more straightforward (copy and paste the following code inside the ARView class and remember to set the correct package name).

package <your.package.name>;
import android.content.Context;
import android.content.res.Configuration;
import android.os.Build;
import android.util.DisplayMetrics;
import android.util.Log;
import android.view.Display;
import android.view.Gravity;
import android.view.WindowManager;
import android.widget.FrameLayout;
import android.widget.RelativeLayout;
import  java.lang.reflect.Method;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;

public class ARView extends GLTextureView
{
    private Context _context;
    //our renderer implementation
    private  ARRenderer _renderer;

    /* Called when device configuration has changed */
    @Override
    protected void onConfigurationChanged(Configuration newConfig) {
        //here we force our layout to fill the parent
        if (getParent() instanceof FrameLayout) {
            setLayoutParams(new FrameLayout.LayoutParams(FrameLayout.LayoutParams.MATCH_PARENT,
                    FrameLayout.LayoutParams.MATCH_PARENT, Gravity.CENTER));
        }
        else if (getParent() instanceof RelativeLayout) {
            setLayoutParams(new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT,
                    RelativeLayout.LayoutParams.MATCH_PARENT));
        }
    }

    /* Called when layout is created or modified (i.e. because of device rotation changes etc.) */
    @Override
    protected void onLayout(boolean changed, int left, int top, int right, int bottom) {
        if(!changed) return;
        int angle = 0;
        //here we compute a normalized orientation independent of the device class (tablet or phone)
        //so that an angle of 0 is always landscape, 90 always portrait etc.
        Display display = ((WindowManager) _context.getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
        int rotation = display.getRotation();
        if(getResources().getConfiguration().orientation == Configuration.ORIENTATION_LANDSCAPE) {
            switch (rotation) {
                case 0:
                    angle = 0;
                    break;
                case 1:
                    angle = 0;
                    break;
                case 2:
                    angle = 180;
                    break;
                case 3:
                    angle = 180;
                    break;
                default:
                    break;
            }
        } else {
            switch (rotation) {
                case 0:
                    angle = 90;
                    break;
                case 1:
                    angle = 270;
                    break;
                case 2:
                    angle = 270;
                    break;
                case 3:
                    angle = 90;
                    break;
                default:
                    break;
            }
        }

        int realWidth;
        int realHeight;
        if(Build.VERSION.SDK_INT >= 17) {
            //new pleasant way to get real metrics
            DisplayMetrics realMetrics = new DisplayMetrics();
            display.getRealMetrics(realMetrics);
            realWidth = realMetrics.widthPixels;
            realHeight = realMetrics.heightPixels;

        } else if (Build.VERSION.SDK_INT >= 14) {
            //reflection for this weird in-between time
            try {
                Method mGetRawH = Display.class.getMethod("getRawHeight");
                Method mGetRawW = Display.class.getMethod("getRawWidth");
                realWidth = (Integer) mGetRawW.invoke(display);
                realHeight = (Integer) mGetRawH.invoke(display);
            } catch (Exception e) {
                //this may not be 100% accurate, but it's all we've got
                realWidth = display.getWidth();
                realHeight = display.getHeight();
            }
        } else  {
            //This should be close, as lower API devices should not have window navigation bars
            realWidth = display.getWidth();
            realHeight = display.getHeight();
        }
        _renderer.UpdateViewport(right-left, bottom-top, angle);
    }

    /* Constructor. */
    public  ARView(Context context) {
        super(context);
        _context = context;
        init();
        _renderer = new ARRenderer(this._context);
        setRenderer(_renderer);
        ((ARRenderer)_renderer).IsActive = true;
        setOpaque(true);
    }

    /* Initialization. */
    public void init() {
        setEGLContextFactory(new ContextFactory());
        setEGLConfigChooser(new ConfigChooser(8, 8, 8, 0, 16, 0));
    }

    /* Checks the OpenGL error.*/
    private static void checkEglError(String prompt, EGL10 egl) {
        int error;
        while ((error = egl.eglGetError()) != EGL10.EGL_SUCCESS) {
            Log.e("PikkartCore3", String.format("%s: EGL error: 0x%x", prompt, error));
        }
    }

    /* A private class that manages the creation of OpenGL contexts. Pretty standard stuff*/
    private static class ContextFactory implements EGLContextFactory {
        private static int EGL_CONTEXT_CLIENT_VERSION = 0x3098;

        public EGLContext createContext(EGL10 egl, EGLDisplay display, EGLConfig eglConfig) {
            EGLContext context;
            //Log.i("PikkartCore3","Creating OpenGL ES 2.0 context");
            checkEglError("Before eglCreateContext", egl);
            int[] attrib_list_gl20 = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE};
            context = egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, attrib_list_gl20);
            checkEglError("After eglCreateContext", egl);
            return context;
        }

        public void destroyContext(EGL10 egl, EGLDisplay display, EGLContext context) {
            egl.eglDestroyContext(display, context);
        }
    }

    /* A private class that manages the config chooser. Pretty standard stuff */
    private static class ConfigChooser implements EGLConfigChooser {
        public ConfigChooser(int r, int g, int b, int a, int depth, int stencil) {
            mRedSize = r;
            mGreenSize = g;
            mBlueSize = b;
            mAlphaSize = a;
            mDepthSize = depth;
            mStencilSize = stencil;
        }

        private EGLConfig getMatchingConfig(EGL10 egl, EGLDisplay display, int[] configAttribs) {
            // Get the number of minimally matching EGL configurations
            int[] num_config = new int[1];
            egl.eglChooseConfig(display, configAttribs, null, 0, num_config);
            int numConfigs = num_config[0];
            if (numConfigs <= 0)
                throw new IllegalArgumentException("No matching EGL configs");
            // Allocate then read the array of minimally matching EGL configs
            EGLConfig[] configs = new EGLConfig[numConfigs];
            egl.eglChooseConfig(display, configAttribs, configs, numConfigs, num_config);
            // Now return the "best" one
            return chooseConfig(egl, display, configs);
        }

        public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
            // This EGL config specification is used to specify 2.0 com.pikkart.ar.rendering. We use a minimum size of 4 bits for
            // red/green/blue, but will perform actual matching in chooseConfig() below.
            final int EGL_OPENGL_ES2_BIT = 0x0004;
            final int[] s_configAttribs_gl20 = {EGL10.EGL_RED_SIZE, 4, EGL10.EGL_GREEN_SIZE, 4, EGL10.EGL_BLUE_SIZE, 4,
                    EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL10.EGL_NONE};
            return getMatchingConfig(egl, display, s_configAttribs_gl20);
        }

        public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display, EGLConfig[] configs) {
            boolean bFoundDepth = false;
            for (EGLConfig config : configs) {
                int d = findConfigAttrib(egl, display, config, EGL10.EGL_DEPTH_SIZE, 0);
                if (d == mDepthSize) bFoundDepth = true;
            }
            if (bFoundDepth == false) mDepthSize = 16; //min value
            for (EGLConfig config : configs) {
                int  d = findConfigAttrib(egl, display, config, EGL10.EGL_DEPTH_SIZE, 0);
                int s = findConfigAttrib(egl, display, config, EGL10.EGL_STENCIL_SIZE, 0);
                // We need at least mDepthSize and mStencilSize bits
                if (d < mDepthSize || s < mStencilSize)
                    continue;
                // We want an *exact* match for red/green/blue/alpha
                int r = findConfigAttrib(egl, display, config, EGL10.EGL_RED_SIZE, 0);
                int g = findConfigAttrib(egl, display, config, EGL10.EGL_GREEN_SIZE, 0);
                int b = findConfigAttrib(egl, display, config, EGL10.EGL_BLUE_SIZE, 0);
                int  a = findConfigAttrib(egl, display, config, EGL10.EGL_ALPHA_SIZE, 0);

                if (r == mRedSize && g == mGreenSize && b == mBlueSize && a == mAlphaSize)
                    return config;
            }

            return null;
        }

        private int findConfigAttrib(EGL10 egl, EGLDisplay display, EGLConfig config, int attribute, int defaultValue) {
            if (egl.eglGetConfigAttrib(display, config, attribute, mValue))
                return mValue[0];
            return defaultValue;
        }

        // Subclasses can adjust these values:
        protected int mRedSize;
        protected int mGreenSize;
        protected int mBlueSize;
        protected int mAlphaSize;
        protected int mDepthSize;
        protected int mStencilSize;
        private int[] mValue = new int[1];
    }
}
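The orientation normalization performed in onLayout above can be summarized as a pure function of the configuration orientation and the display rotation; a standalone sketch (hypothetical helper, not part of the SDK):

```java
public class OrientationSketch {
    // Maps (landscape configuration?, Surface rotation index 0..3) to the
    // normalized angle convention used by ARView.onLayout:
    // 0 = landscape, 90 = portrait, 180 = inverse landscape, 270 = inverse portrait.
    public static int normalizedAngle(boolean landscape, int rotation) {
        if (landscape) {
            return (rotation == 2 || rotation == 3) ? 180 : 0;
        }
        return (rotation == 1 || rotation == 2) ? 270 : 90;
    }
}
```

This mapping makes the rest of the rendering code independent of whether the device's natural orientation is landscape (most tablets) or portrait (most phones).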

We are almost set. Now we need to add our ARView class to our app, on top of Pikkart's AR RecognitionFragment. Simply modify the initLayout function of the MainActivity class as follows:

private void initLayout() {
    setContentView(R.layout.activity_main);

    ARView arView = new ARView(this);
    addContentView(arView, new FrameLayout.LayoutParams(FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT));

    RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
    t_cameraFragment.startRecognition(
                new RecognitionOptions(
                        RecognitionOptions.RecognitionStorage.LOCAL,
                        RecognitionOptions.RecognitionMode.CONTINUOUS_SCAN,
                        new CloudRecognitionInfo(new String[]{})
                ),
                this);
}

You can now run your app on a real device. Remember to print a physical copy of a marker! The end result of our tutorial should look like this when tracking a marker:

A complete Android Studio project can be found on GitHub at https://github.com/pikkart-support/SDKSample_LocalPlanarMarker (remember to add your license file in the assets folder!).