Local Planar Marker for Android

This tutorial will guide you through the creation of a simple AR application that detects and tracks an image marker stored in the app assets and draws a simple 3D model on it. 
Our SDK is designed for use with Visual Studio (versions 2015+) and supports Android SDK level 15+ (21+ for the 64-bit version). The Android NDK is not required.

Note: if you prefer to read existing, working code, a complete Visual Studio project can be found on GitHub at https://github.com/pikkart-support/XamarinAndroid_LocalPlanarMarker (remember to add your license file in the assets folder!).

First, create a new application in Visual Studio using the Blank App (Android) template.

Write your app name, choose the project location in the Create New Project wizard and click OK.

When the project is ready, close the GettingStarted.Xamarin page and open the MainActivity.cs file that Xamarin created. Now it's time to set up your app for development with Pikkart's AR SDK.
Open the Properties menu and, in the Android Manifest tab, set the Package name: if you have already purchased a license, use the package name you provided when buying your Pikkart AR license; otherwise use "com.pikkart.trial" if you're using the trial license provided by default when you register as a developer.

Now right-click the References node of the project and click Manage NuGet Packages...

In the NuGet page, type Pikkart.ArSdk in the search bar to find our SDK.
Select the Pikkart.ArSdk package and, in the detail page, click Install.

After the package has been added you should see Com.Pikkart.AR.Geo and Com.Pikkart.AR.Recognition assemblies on the References menu.
Now install the Xamarin.Android.Support.v4 package (v25.1.0+) too.
Then copy the license file we provided you into your app's Assets directory (create the Assets directory at the root of the project if it doesn't exist). Also copy the sample markers and media directories (<pikkart_sample_dir>/Assets/markers/ and <pikkart_sample_dir>/Assets/media/) into your app project's Assets directory (you can download the sample package here).
Add the following permissions in the Android Manifest tab in the project properties:
- CAMERA
- READ_EXTERNAL_STORAGE
- WRITE_EXTERNAL_STORAGE
and in the AndroidManifest.xml file add the following features:


<uses-feature android:name="android.hardware.camera" android:required="true" />
<uses-feature android:glEsVersion="0x00020000" android:required="true" />
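If you prefer to edit AndroidManifest.xml by hand instead of using the Permissions list in the project properties, the three permissions above correspond to standard <uses-permission> entries (these names are part of the stock Android manifest schema, not specific to Pikkart's SDK):

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```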

The activity hosting Pikkart's AR components must set ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation and an AppCompat theme in the MainActivity.cs class declaration, as in the following example:

[Activity(Label = "Pikkart_SDK_tutorial", MainLauncher = true, ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation)]
public class MainActivity : Activity
{ 
    ...
}

In order to support Android 6.0+ we have to slightly modify the MainActivity.cs code to ask the user at runtime for camera access and read/write access to external storage.

Create a private method InitLayout() and move OnCreate's SetContentView call there.

private void InitLayout()
{
    SetContentView(Resource.Layout.Main);
}

Now add a new method that will ask the user for permissions. The following method asks for the Camera, WriteExternalStorage and ReadExternalStorage permissions. It uses ActivityCompat.CheckSelfPermission to check whether the requested permissions have already been granted; if not, it uses ActivityCompat.RequestPermissions to request the missing ones. The method receives a unique integer code identifying the request: create a class member variable (e.g. m_permissionCode) and assign a unique integer value to it.

private void CheckPermissions(int code)
{
    string[] permissions_required = new string[] {
    Manifest.Permission.Camera,
    Manifest.Permission.WriteExternalStorage,
    Manifest.Permission.ReadExternalStorage
        };

    List<string> permissions_not_granted_list = new List<string>();
    foreach (string permission in permissions_required)
    {
        if (ActivityCompat.CheckSelfPermission(ApplicationContext, permission) != Permission.Granted)
        {
            permissions_not_granted_list.Add(permission);
        }
    }
    if (permissions_not_granted_list.Count > 0)
    {
        string[] permissions = new string[permissions_not_granted_list.Count];
        permissions = permissions_not_granted_list.ToArray();
        ActivityCompat.RequestPermissions(this, permissions, code);
    }
    else
    {
        InitLayout();
    }
}

Now we have to implement a callback method that the OS will call once the user has granted or dismissed the requested permissions. This implementation cycles through all requested permissions and checks that all of them have been granted:

public override void OnRequestPermissionsResult(int requestCode, string[] permissions, [GeneratedEnum] Permission[] grantResults)
{
    if (requestCode == m_permissionCode)
    {
        bool ok = true;
        for (int i = 0; i < grantResults.Length; ++i)
        {
            ok = ok && (grantResults[i] == Permission.Granted);
        }
        if (ok)
        {
            InitLayout();
        }
        else
        {
            Toast.MakeText(this, "Error: required permissions not granted!", ToastLength.Short).Show();
            Finish();
        }
    }
}

We can then modify our OnCreate method this way:

protected override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);

    if (Build.VERSION.SdkInt < BuildVersionCodes.M)
    {
        // before Android 6.0 no runtime permission request is needed, just init the app
        InitLayout();
    }
    else
    {
        CheckPermissions(m_permissionCode);
    }
}

Your main activity should look like this:

using Android.App;
using Android.OS;
using Android.Content.PM;
using Android.Support.V7.App;
using Android;
using System.Collections.Generic;
using Android.Support.V4.App;
using Android.Widget;
using Android.Runtime;

namespace Pikkart_SDK_tutorial
{
    [Activity(Label = "Pikkart_SDK_tutorial", MainLauncher = true, ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation)]
    public class MainActivity : Activity
    {
        const int m_permissionCode = 100;

        private void InitLayout()
        {
            SetContentView(Resource.Layout.Main);
        }

        private void CheckPermissions(int code)
        {
            string[] permissions_required = new string[] {
            Manifest.Permission.Camera,
            Manifest.Permission.WriteExternalStorage,
            Manifest.Permission.ReadExternalStorage
                };

            List<string> permissions_not_granted_list = new List<string>();
            foreach(string permission in permissions_required)
            {
                if(ActivityCompat.CheckSelfPermission(ApplicationContext, permission) != Permission.Granted)
                {
                    permissions_not_granted_list.Add(permission);
                }
            }
            if (permissions_not_granted_list.Count > 0)
            {
                string[] permissions = new string[permissions_not_granted_list.Count];
                permissions = permissions_not_granted_list.ToArray();
                ActivityCompat.RequestPermissions(this, permissions, code);
            }
            else
            {
                InitLayout();
            }
        }

        public override void OnRequestPermissionsResult(int requestCode, string[] permissions, [GeneratedEnum] Permission[] grantResults)
        {
            if (requestCode == m_permissionCode)
            {
                bool ok = true;
                for (int i = 0; i < grantResults.Length; ++i)
                {
                    ok = ok && (grantResults[i] == Permission.Granted);
                }
                if (ok)
                {
                    InitLayout();
                }
                else
                {
                    Toast.MakeText(this, "Error: required permissions not granted!", ToastLength.Short).Show();
                    Finish();
                }
            }
        }

        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);

            if (Build.VERSION.SdkInt < BuildVersionCodes.M)
            {
                //you don’t have to do anything, just init your app
                InitLayout();
            }
            else
            {
                CheckPermissions(m_permissionCode);
            }
        }
    }
}

Now it's time to add and enable Pikkart SDK's AR functionalities.

First of all, change the root layout of the app from the default LinearLayout to a RelativeLayout; you can do this in your app's layout XML file (usually found inside Resources/layout in the Visual Studio Solution Explorer). Then add Pikkart's AR RecognitionFragment to your AR activity:


<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
  <fragment
      android:layout_width="match_parent"
      android:layout_height="match_parent"
      android:id="@+id/ar_fragment"
      android:name="com.pikkart.ar.recognition.RecognitionFragment" />
</RelativeLayout>

Now it's time to start the recognition process. We can do it inside our InitLayout() by adding:

RecognitionFragment cameraFragment = FragmentManager.FindFragmentById<RecognitionFragment>(Resource.Id.ar_fragment);
cameraFragment.StartRecognition(
    new RecognitionOptions(
        RecognitionOptions.RecognitionStorage.Local, 
        RecognitionOptions.RecognitionMode.ContinuousScan,
    new CloudRecognitionInfo(new string[] { })
    ), 
    this);

The StartRecognition() method requires as parameters a RecognitionOptions object and a reference to an object implementing the IRecognitionListener interface. The RecognitionOptions object can be created on the fly using a 3-parameter constructor. The first parameter, a RecognitionOptions.RecognitionStorage value, indicates whether to use Local recognition mode or Global (cloud recognition + local markers) mode. The second parameter, of type RecognitionOptions.RecognitionMode, indicates whether to use continuous scan/recognition of image markers or TapToScan mode (the latter requires user input to launch a new marker search). The last parameter is a CloudRecognitionInfo object holding the names of the cloud databases in which to search for markers; it is only useful when using Global as RecognitionStorage and will be explained in the next tutorial.

In the block of code above we passed a reference to our main activity class to the StartRecognition() method. The function requires an IRecognitionListener, so we need to implement that interface in our activity class. First we have to modify our MainActivity definition a bit:

public class MainActivity : Activity, IRecognitionListener {
    ....
}

Then we have to implement the following callback functions:

  • This function is called every time a cloud image search is initiated. In this tutorial it will never be called as we are working Local only.
    public void ExecutingCloudSearch() {
    }
  • This function is called when the cloud image search fails to find a target image in the camera frame. In this tutorial it will never be called as we are working Local only.
    public void CloudMarkerNotFound() {
    }
  • This function is called when the app is set to do cloud operations or image search (RecognitionOptions.RecognitionStorage.Global) but no internet connection is available.
    public void InternetConnectionNeeded() {
    }
  • This function is called every time a marker is found and tracking starts. In this tutorial we show a toast informing the user.
    public void MarkerFound(Marker p0)
    {
        Toast.MakeText(this, "PikkartAR: found marker " + p0.Id,
            ToastLength.Short).Show();
    }
    
  • This function is called every time a marker is not found (after searching through all local markers, or after a tap-to-scan failed)
    public void MarkerNotFound() { 
    }
  • This function is called as soon as the current marker tracking is lost. In this tutorial we show a toast informing the user.
    public void MarkerTrackingLost(string p0) {
        Toast.MakeText(this, "PikkartAR: lost tracking of marker " + p0, ToastLength.Short).Show();
    }
  • This function is called by our SDK to check if connection is available. As we are working Local only we always return false to force our SDK to work offline only.
    public bool IsConnectionAvailable(Context p0) { 
        return false; 
    }
  • This function is called when an ARLogo is found. Pikkart's ARLogo is a new technology that allows embedding binary codes inside an image. It lets app developers unleash different AR experiences from visually similar marker images (i.e. images that look the same but have different binary codes embedded in them). For an explanation on how to use this function and the ARLogo, see the ARLogo tutorial and the ARLogo presentation page.
    public void ARLogoFound(string p0, int p1) { 
        //TODO: add your code here  
    }

You can run the application now. Print one of the test marker images (<pikkart_sample_dir>/printable_markers/), compile and run the app on a device and try it. The app currently doesn't show anything; it just samples images from the camera and does its magic in the background. When you frame a marker with your camera, a toast saying "Marker found" should appear. Success!

 

It's time to add some visuals. In order to do that we need to create an OpenGL view, set it up and render the camera view and additional augmented content. We have created some helper classes (included in the sample package) to help you set up the rendering process. To use them, copy the C# classes inside the Pikkart sample package into the root of the project. You will find three new C# classes in your app project as in the following image:

Change the namespace in the three classes to match the namespace of your app, i.e. from

namespace PikkartSample.Droid

to

namespace Pikkart_SDK_tutorial

GLTextureView is a widget class that combines an Android TextureView and an OpenGL context, managing its setup, rendering, etc. RenderUtils contains various OpenGL helper functions (shader compilation, etc.) and some matrix operations. The Mesh class manages a 3D mesh (loading it from a JSON file and rendering it).
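As a rough illustration of what such matrix helpers do (this is a self-contained sketch, not the actual RenderUtils code), a row-major 4x4 multiply and transpose, of the kind used later to build the model-view-projection matrix, can be written as:

```csharp
using System;
using System.Diagnostics;

class MatrixDemo
{
    // Multiply two row-major 4x4 matrices: result = a * b.
    static float[] Multiply44(float[] a, float[] b)
    {
        var r = new float[16];
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
            {
                float sum = 0f;
                for (int k = 0; k < 4; ++k)
                    sum += a[row * 4 + k] * b[k * 4 + col];
                r[row * 4 + col] = sum;
            }
        return r;
    }

    // Transpose a 4x4 matrix (swaps row-major and column-major layouts).
    static float[] Transpose44(float[] m)
    {
        var r = new float[16];
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                r[col * 4 + row] = m[row * 4 + col];
        return r;
    }

    static void Main()
    {
        // A 90-degree rotation around Z: the same shape as the angleMatrix
        // built in the renderer below with a = cos(angle), b = sin(angle).
        float a = 0f, b = 1f;
        float[] rot = {
             a,  b, 0f, 0f,
            -b,  a, 0f, 0f,
            0f, 0f, 1f, 0f,
            0f, 0f, 0f, 1f };
        float[] identity = {
            1f, 0f, 0f, 0f,
            0f, 1f, 0f, 0f,
            0f, 0f, 1f, 0f,
            0f, 0f, 0f, 1f };

        // Multiplying by the identity leaves the rotation unchanged.
        float[] product = Multiply44(rot, identity);
        Debug.Assert(product[1] == 1f && product[4] == -1f);

        // Transposing twice returns the original matrix.
        float[] twice = Transpose44(Transpose44(rot));
        Debug.Assert(twice[4] == -1f);

        Console.WriteLine("ok");
    }
}
```

The renderer shown later combines the SDK's projection and model-view matrices with this kind of multiply, then transposes the result into the column-major layout OpenGL expects.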

In order to create our own 3D renderer we have to extend the GLTextureView class and create a modified version that performs our custom rendering. We use a structure that is fairly common in the world of 3D rendering on Android: the GLTextureView class manages Android UI related stuff, the set-up of an OpenGL rendering context and a few other things. For the actual rendering it defines an interface (GLTextureView.Renderer) with a few callback functions that are called during the various phases of the OpenGL context set-up, the Android view set-up and the rendering cycle. Before creating our own derived version of GLTextureView, we will define our rendering class implementing the GLTextureView.Renderer interface.
This class implements 4 callback methods:

  • public void OnSurfaceCreated(IGL10 gl, EGLConfig config) is called every time the OpenGL rendering surface is created or recreated. When this happens, all OpenGL related stuff (buffers, VBOs, textures, etc.) must be recreated as well.
  • public void onSurfaceChanged(IGL10 gl, int width, int height) is called every time the OpenGL surface changes size, without the surface being destroyed.
  • public void onSurfaceDestroyed() is called when the OpenGL surface is destroyed. Usually after this point you can clean up and delete your objects.
  • public void onDrawFrame(IGL10 gl) is called on every rendering cycle. This is where the actual OpenGL rendering happens.

In our implementation we have added a couple of support methods, such as public bool computeModelViewProjectionMatrix(float[] mvpMatrix), which computes the model-view-projection matrix to be used by the OpenGL renderer, starting from the projection and marker attitude/position matrices obtained from Pikkart's AR SDK.

We make use of 3 important static methods and attributes of the RecognitionFragment class:

  • RecognitionFragment.GetCurrentProjectionMatrix()
  • RecognitionFragment.GetCurrentModelViewMatrix()
  • RecognitionFragment.IsTracking { get; }

Other important static methods and attributes are:

  • public static Marker RecognitionFragment.CurrentMarker { get; }
  • public static void RecognitionFragment.RenderCamera(int viewportWidth, int viewportHeight, int angle)

The first returns the currently tracked Marker. The Marker class contains information about the tracked image: its download date, its update date and, more importantly, its associated custom data (this will be explained in the next tutorial). The RenderCamera method renders the last processed camera image; it's what enables the app to render the actual camera view. It must be called inside your OpenGL rendering cycle, usually as one of the first calls inside the onDrawFrame(IGL10 gl) function of our Renderer implementation.

Create a new C# class in your Android project (in the same folder as the MainActivity) and name it ARRenderer. The following is the full implementation of our Renderer class implementing the GLTextureView.Renderer interface (copy and paste the following code inside the ARRenderer class and remember to set the correct namespace).


using Android;
using Android.Content;
using Javax.Microedition.Khronos.Opengles;
using Com.Pikkart.AR.Recognition;
using Javax.Microedition.Khronos.Egl;
using System.Threading.Tasks;
using Android.App;
using System;

namespace Pikkart_SDK_tutorial
{
    public class ARRenderer : GLTextureView.Renderer
    {
        public bool IsActive = false;
        //the rendering viewport dimensions
        private int ViewportWidth;
        private int ViewportHeight;
        //normalized screen orientation (0=landscape, 90=portrait, 180=inverse landscape, 270=inverse portrait)
        private int Angle;
        //
        private Context context;
        //the 3d object we will render on the marker
        private Mesh monkeyMesh = null;

        ProgressDialog progressDialog;

        /* Constructor. */
        public ARRenderer(Context con)
        {
            context = con;
            progressDialog = new ProgressDialog(con);
        }

        /** Called when the surface is created or recreated. 
          * Reinitialize OpenGL related stuff here*/
        public void OnSurfaceCreated(IGL10 gl, EGLConfig config)
        {
            gl.GlClearColor(1.0f, 1.0f, 1.0f, 1.0f);
            //Here we create the 3D object and initialize textures, shaders, etc.

            Task.Run(async () =>
            {
                try
                {
                    InitMeshes();
                }
                catch (OperationCanceledException ex)
                {
                    Console.WriteLine($"init failed: {ex.Message}");
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
            });
        }

        private void InitMeshes()
        {
            ((Activity)context).RunOnUiThread(() =>
            {
                progressDialog = ProgressDialog.Show(context, "Loading textures", "The 3D template textures of this tutorial have not been loaded yet", true);
            });

            monkeyMesh = new Mesh();
            monkeyMesh.InitMesh(context.Assets, "media/monkey.json", "media/texture.png");

            if (progressDialog != null)
                progressDialog.Dismiss();
        }

        /** Called when the surface changed size. */
        public void onSurfaceChanged(IGL10 gl, int width, int height)
        {
        }

        /** Called when the surface is destroyed. */
        public void onSurfaceDestroyed()
        {
        }

        /** Here we compute the model-view-projection matrix for OpenGL rendering
          * from the model-view and projection matrix computed by Pikkart's AR SDK.
          * the projection matrix is rotated accordingly to the screen orientation */
        public bool computeModelViewProjectionMatrix(float[] mvpMatrix)
        {
            RenderUtils.matrix44Identity(mvpMatrix);

            float w = (float)640;
            float h = (float)480;

            float ar = (float)ViewportHeight / (float)ViewportWidth;
            if (ViewportHeight > ViewportWidth) ar = 1.0f / ar;
            float h1 = h, w1 = w;
            if (ar < h / w)
                h1 = w * ar;
            else
                w1 = h / ar;

            float a = 0f, b = 0f;
            switch (Angle)
            {
                case 0:
                    a = 1f; b = 0f;
                    break;
                case 90:
                    a = 0f; b = 1f;
                    break;
                case 180:
                    a = -1f; b = 0f;
                    break;
                case 270:
                    a = 0f; b = -1f;
                    break;
                default: break;
            }

            float[] angleMatrix = new float[16];

            angleMatrix[0] = a; angleMatrix[1] = b; angleMatrix[2] = 0.0f; angleMatrix[3] = 0.0f;
            angleMatrix[4] = -b; angleMatrix[5] = a; angleMatrix[6] = 0.0f; angleMatrix[7] = 0.0f;
            angleMatrix[8] = 0.0f; angleMatrix[9] = 0.0f; angleMatrix[10] = 1.0f; angleMatrix[11] = 0.0f;
            angleMatrix[12] = 0.0f; angleMatrix[13] = 0.0f; angleMatrix[14] = 0.0f; angleMatrix[15] = 1.0f;

            float[] projectionMatrix =(float[]) RecognitionFragment.GetCurrentProjectionMatrix().Clone();
            projectionMatrix[5] = projectionMatrix[5] * (h / h1);

            float[] correctedProjection = new float[16];

            RenderUtils.matrixMultiply(4, 4, angleMatrix, 4, 4, projectionMatrix, correctedProjection);

            if (RecognitionFragment.IsTracking)
            {
                float[] modelviewMatrix = RecognitionFragment.GetCurrentModelViewMatrix();
                float[] temp_mvp = new float[16];
                RenderUtils.matrixMultiply(4, 4, correctedProjection, 4, 4, modelviewMatrix, temp_mvp);
                RenderUtils.matrix44Transpose(temp_mvp, mvpMatrix);
                return true;
            }
            return false;
        }

        /** Called to draw the current frame. */
        public void onDrawFrame(IGL10 gl)
        {
            if (!IsActive) return;

            gl.GlClear(GL10.GlColorBufferBit | GL10.GlDepthBufferBit);

            // Call our native function to render camera content
            RecognitionFragment.RenderCamera(ViewportWidth, ViewportHeight, Angle);

            float[] mvpMatrix = new float[16];
            if (computeModelViewProjectionMatrix(mvpMatrix))
            {
                if (monkeyMesh != null && monkeyMesh.MeshLoaded)
                {
                    if (monkeyMesh.GLLoaded)
                    {
                        //draw our 3d mesh on top of the marker
                        monkeyMesh.DrawMesh(mvpMatrix);

                    }
                    else
                        monkeyMesh.InitMeshGL();

                    RenderUtils.CheckGLError("completed Monkey head Render");
                }

            }

            gl.GlFinish();
        }

        /* this will be called by our GLTextureView-derived class to update screen sizes and orientation */
        public void UpdateViewport(int viewportWidth, int viewportHeight, int angle)
        {
            ViewportWidth = viewportWidth;
            ViewportHeight = viewportHeight;
            Angle = angle;
        }
    }
}

Create a new C# class in your Android project (in the same folder as the MainActivity) and name it ARView.

The implementation of our OpenGL AR view class, derived from our GLTextureView class, is more straightforward (copy and paste the following code inside the ARView class and remember to set the correct namespace).

using System;
using Android.Content;
using Android.OS;
using Android.Runtime;
using Android.Views;
using Android.Widget;
using Android.Content.Res;
using Android.Util;
using Javax.Microedition.Khronos.Egl;

namespace Pikkart_SDK_tutorial
{
    class ARView : GLTextureView
    {
        private Context _context;
        //our renderer implementation
        private ARRenderer _renderer;

        /* Called when device configuration has changed */
        protected override void OnConfigurationChanged(Configuration newConfig)
        {
            //here we force our layout to fill the parent
            if (Parent is FrameLayout)
            {
                LayoutParameters = new FrameLayout.LayoutParams(FrameLayout.LayoutParams.MatchParent,
                        FrameLayout.LayoutParams.MatchParent, GravityFlags.Center);
            }
            else if (Parent is RelativeLayout)
            {
                LayoutParameters = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MatchParent,
                        RelativeLayout.LayoutParams.MatchParent);
            }
        }

        /* Called when layout is created or modified (i.e. because of device rotation changes etc.) */

        protected override void OnLayout(bool changed, int left, int top, int right, int bottom)
        {
            if (!changed) return;
            int angle = 0;
            //here we compute a normalized orientation independent of the device class (tablet or phone)
            //so that an angle of 0 is always landscape, 90 always portrait etc.
            var windowmanager = _context.GetSystemService(Context.WindowService).JavaCast<IWindowManager>();
            Display display = windowmanager.DefaultDisplay;
            int rotation = (int)display.Rotation;
            if (Resources.Configuration.Orientation == Android.Content.Res.Orientation.Landscape)
            {
                switch (rotation)
                {
                    case 0:
                        angle = 0;
                        break;
                    case 1:
                        angle = 0;
                        break;
                    case 2:
                        angle = 180;
                        break;
                    case 3:
                        angle = 180;
                        break;
                    default:
                        break;
                }
            }
            else
            {
                switch (rotation)
                {
                    case 0:
                        angle = 90;
                        break;
                    case 1:
                        angle = 270;
                        break;
                    case 2:
                        angle = 270;
                        break;
                    case 3:
                        angle = 90;
                        break;
                    default:
                        break;
                }
            }

            int realWidth;
            int realHeight;
            if ((int)Build.VERSION.SdkInt >= 17)
            {
                //new pleasant way to get real metrics
                DisplayMetrics realMetrics = new DisplayMetrics();
                display.GetRealMetrics(realMetrics);
                realWidth = realMetrics.WidthPixels;
                realHeight = realMetrics.HeightPixels;

            }
            else if ((int)Build.VERSION.SdkInt >= 14)
            {
                //reflection for this weird in-between time
                try
                {
                    Java.Lang.Reflect.Method mGetRawH = Display.Class.GetMethod("getRawHeight");
                    var mGetRawW = Display.Class.GetMethod("getRawWidth");
                    realWidth = (int)mGetRawW.Invoke(display);
                    realHeight = (int)mGetRawH.Invoke(display);
                }
                catch (Exception e)
                {
                    //this may not be 100% accurate, but it's all we've got
                    realWidth = display.Width;
                    realHeight = display.Height;
                }
            }
            else
            {
                //This should be close, as lower API devices should not have window navigation bars
                realWidth = display.Width;
                realHeight = display.Height;
            }
            _renderer.UpdateViewport(right - left, bottom - top, angle);
        }

        /* Constructor. */
        public ARView(Context context) : base(context)
        {

            _context = context;
            init();
            _renderer = new ARRenderer(this._context);
            setRenderer(_renderer);
            ((ARRenderer)_renderer).IsActive = true;
            SetOpaque(true);
        }

        /* Initialization. */
        public void init()
        {
            setEGLContextFactory(new ContextFactory());
            setEGLConfigChooser(new ConfigChooser(8, 8, 8, 0, 16, 0));
        }

        /* Checks the OpenGL error.*/
        private static void checkEglError(String prompt, IEGL10 egl)
        {
            int error;
            while ((error = egl.EglGetError()) != EGL10.EglSuccess)
            {
                Log.Error("PikkartCore3", string.Format("{0}: EGL error: 0x{1:X}", prompt, error));
            }
        }

        /* A private class that manages the creation of OpenGL contexts. Pretty standard stuff*/
        private class ContextFactory : EGLContextFactory
        {
            private static int EGL_CONTEXT_CLIENT_VERSION = 0x3098;

            public EGLContext createContext(IEGL10 egl, EGLDisplay display, EGLConfig eglConfig)
            {
                EGLContext context;
                //Log.i("PikkartCore3","Creating OpenGL ES 2.0 context");
                checkEglError("Before eglCreateContext", egl);
                int[] attrib_list_gl20 = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EglNone };
                context = egl.EglCreateContext(display, eglConfig, EGL10.EglNoContext, attrib_list_gl20);
                checkEglError("After eglCreateContext", egl);
                return context;
            }

            public void destroyContext(IEGL10 egl, EGLDisplay display, EGLContext context)
            {
                egl.EglDestroyContext(display, context);
            }
        }

        /* A private class that manages the the config chooser. Pretty standard stuff */
        private class ConfigChooser : EGLConfigChooser
        {
            public ConfigChooser(int r, int g, int b, int a, int depth, int stencil)
            {
                mRedSize = r;
                mGreenSize = g;
                mBlueSize = b;
                mAlphaSize = a;
                mDepthSize = depth;
                mStencilSize = stencil;
            }

            private EGLConfig getMatchingConfig(IEGL10 egl, EGLDisplay display, int[] configAttribs)
            {
                // Get the number of minimally matching EGL configurations
                int[] num_config = new int[1];
                egl.EglChooseConfig(display, configAttribs, null, 0, num_config);
                int numConfigs = num_config[0];
                if (numConfigs <= 0)
                    throw new Exception("No matching EGL configs");
                // Allocate then read the array of minimally matching EGL configs
                EGLConfig[] configs = new EGLConfig[numConfigs];
                egl.EglChooseConfig(display, configAttribs, configs, numConfigs, num_config);
                // Now return the "best" one
                return chooseConfig(egl, display, configs);
            }

            public EGLConfig chooseConfig(IEGL10 egl, EGLDisplay display)
            {
                // This EGL config specification is used to specify 2.0 com.pikkart.ar.rendering. We use a minimum size of 4 bits for
                // red/green/blue, but will perform actual matching in chooseConfig() below.
                int EGL_OPENGL_ES2_BIT = 0x0004;
                int[] s_configAttribs_gl20 = {EGL10.EglRedSize, 4, EGL10.EglGreenSize, 4, EGL10.EglBlueSize, 4,
                    EGL10.EglRenderableType, EGL_OPENGL_ES2_BIT, EGL10.EglNone};
                return getMatchingConfig(egl, display, s_configAttribs_gl20);
            }

            public EGLConfig chooseConfig(IEGL10 egl, EGLDisplay display, EGLConfig[] configs)
            {
                bool bFoundDepth = false;
                foreach (EGLConfig config in configs)
                {
                    int d = findConfigAttrib(egl, display, config, EGL10.EglDepthSize, 0);
                    if (d == mDepthSize) bFoundDepth = true;
                }
                if (bFoundDepth == false) mDepthSize = 16; //min value
                foreach (EGLConfig config in configs)
                {
                    int d = findConfigAttrib(egl, display, config, EGL10.EglDepthSize, 0);
                    int s = findConfigAttrib(egl, display, config, EGL10.EglStencilSize, 0);
                    // We need at least mDepthSize and mStencilSize bits
                    if (d < mDepthSize || s < mStencilSize)
                        continue;
                    // We want an *exact* match for red/green/blue/alpha
                    int r = findConfigAttrib(egl, display, config, EGL10.EglRedSize, 0);
                    int g = findConfigAttrib(egl, display, config, EGL10.EglGreenSize, 0);
                    int b = findConfigAttrib(egl, display, config, EGL10.EglBlueSize, 0);
                    int a = findConfigAttrib(egl, display, config, EGL10.EglAlphaSize, 0);

                    if (r == mRedSize && g == mGreenSize && b == mBlueSize && a == mAlphaSize)
                        return config;
                }

                return null;
            }

            private int findConfigAttrib(IEGL10 egl, EGLDisplay display, EGLConfig config, int attribute, int defaultValue)
            {
                if (egl.EglGetConfigAttrib(display, config, attribute, mValue))
                    return mValue[0];
                return defaultValue;
            }

            // Subclasses can adjust these values:
            protected int mRedSize;
            protected int mGreenSize;
            protected int mBlueSize;
            protected int mAlphaSize;
            protected int mDepthSize;
            protected int mStencilSize;
            private int[] mValue = new int[1];
        }
    }
}

 

We are almost set. Now we need to add our ARView class to our app, on top of Pikkart's AR RecognitionFragment. Simply modify the InitLayout method of the MainActivity class as follows:


private void InitLayout()
{
    SetContentView(Resource.Layout.Main);

    ARView arView = new ARView(this);
    AddContentView(arView, new FrameLayout.LayoutParams(FrameLayout.LayoutParams.MatchParent, FrameLayout.LayoutParams.MatchParent));

    RecognitionFragment cameraFragment = FragmentManager.FindFragmentById<RecognitionFragment>(Resource.Id.ar_fragment);
    cameraFragment.StartRecognition(
        new RecognitionOptions(
            RecognitionOptions.RecognitionStorage.Local, 
            RecognitionOptions.RecognitionMode.ContinuousScan,
        new CloudRecognitionInfo(new string[] { })
        ), 
        this);
}

You can now run your app on a real device. Remember to print a physical copy of a marker! The end result of our tutorial should look like this when tracking a marker: