Android SDK >
Cloud Planar Marker

For this tutorial we will start from the application we developed at the end of the first tutorial (Local Planar Marker).
Feel free to make a copy so that you can keep both on your computer. If you would rather not modify the old example, you can download the project described here on Github @ https://github.com/pikkart-support/SDKSample_CloudPlanarMarker (remember to add your license file in the assets folder!).

The first thing we need to do is update the application manifest (AndroidManifest.xml) and add the following network-related permissions:

<uses-permission android:name="android.permission.INTERNET" /> 
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> 
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /> 

The full manifest file should look like this:
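A minimal sketch of the resulting manifest is shown below. The package name and theme are placeholders, and the CAMERA permission is assumed to be already present from the first tutorial; keep whatever values your project already uses:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.cloudplanarmarker"> <!-- placeholder package name -->

    <!-- CAMERA was already required by the first (local marker) tutorial -->
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />

    <application
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
```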

We now need to change the way recognition is launched. To allow better control over the number of cloud recognition calls, when Pikkart's AR SDK is set to GLOBAL marker search the app developer can select the TAP_TO_SCAN mode. This way each call to the startRecognition method starts a single short burst of calls to our cloud recognition service. We simply need to change our MainActivity like this:

First we need to implement the View.OnTouchListener interface in order to detect user taps on the screen and start a marker search. Our MainActivity class declaration becomes:

public class MainActivity extends AppCompatActivity implements IRecognitionListener, View.OnTouchListener

Then we change the initLayout function. You could delete the call to the startRecognition function entirely and rely on tap-to-scan alone; in this tutorial we keep it with the LOCAL storage option, so that the app starts recognizing local markers as soon as the app layout has been created and the camera activated. We also add a class member variable m_arView to hold a reference to our ARView view that is created in the initLayout function.

For this tutorial we change the function like this:

    ...

private ARView m_arView = null; 

    ...

private void initLayout()
{
    setContentView(R.layout.activity_main);

    m_arView = new ARView(this);
    // Register this activity as the touch listener of the AR view
    m_arView.setOnTouchListener(this);
    addContentView(m_arView, new FrameLayout.LayoutParams(
            FrameLayout.LayoutParams.MATCH_PARENT,
            FrameLayout.LayoutParams.MATCH_PARENT));

    RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
    // Start continuous recognition of local markers; cloud (GLOBAL) searches
    // are triggered on demand in doRecognition()
    t_cameraFragment.startRecognition(
            new RecognitionOptions(
                    RecognitionOptions.RecognitionStorage.LOCAL,
                    RecognitionOptions.RecognitionMode.CONTINUOUS_SCAN,
                    new CloudRecognitionInfo(new String[]{})
            ),
            this);
}

Compared with the former tutorial, the addition here is the call to the rendering view's setOnTouchListener function, which registers the MainActivity class as a touch listener of the ARView object.

We also change the onResume() method in the same way:

@Override
public void onResume() {
    super.onResume();
    RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
    if(t_cameraFragment!=null) t_cameraFragment.startRecognition(
            new RecognitionOptions(
                    RecognitionOptions.RecognitionStorage.LOCAL,
                    RecognitionOptions.RecognitionMode.CONTINUOUS_SCAN,
                    new CloudRecognitionInfo(new String[]{})
            ), this);
}

We add a new function that will start the recognition process on demand (when the user taps the screen):

private void doRecognition()
{
    RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
    t_cameraFragment.startRecognition(
            new RecognitionOptions(
                    RecognitionOptions.RecognitionStorage.GLOBAL,
                    RecognitionOptions.RecognitionMode.TAP_TO_SCAN,
                    new CloudRecognitionInfo(new String[]{})
            ),
            this);
}

Notice that we set the RecognitionStorage option to GLOBAL (meaning cloud marker search plus local cache) and the recognition mode to TAP_TO_SCAN. The last parameter tells the system which of our cloud databases to search: it is a list of database ids (as plain strings), the marker database cloud codes we took note of in the previous tutorial, so you can add as many databases as you want. You can also leave the list empty, and the system will search inside all your app's databases.
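For example, to restrict the search to a single cloud database you would pass its cloud code; "YOUR_DB_CLOUD_CODE" below is a placeholder for the code you noted in the previous tutorial:

```java
// Search only the given cloud database ("YOUR_DB_CLOUD_CODE" is a placeholder
// for the marker database cloud code from the previous tutorial)
new CloudRecognitionInfo(new String[]{ "YOUR_DB_CLOUD_CODE" })
```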

Now we modify the isConnectionAvailable member method of our MainActivity class (a method required by the IRecognitionListener interface). In this method we need to check whether an internet connection is available:

@Override
public boolean isConnectionAvailable(Context context)
{
    ConnectivityManager connectivityManager = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
    NetworkInfo activeNetworkInfo = connectivityManager.getActiveNetworkInfo();
    return activeNetworkInfo != null && activeNetworkInfo.isConnected();
}

This is standard stuff from the Android documentation.

The last bit we need to change in our MainActivity class is the implementation of the View.OnTouchListener interface. This function will simply call our doRecognition method:

@Override
public boolean onTouch(View view, MotionEvent motionEvent) 
{
    doRecognition();
    return false;
}
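Since every accepted tap triggers a burst of cloud calls, you may want to ignore rapid repeated taps. Here is a minimal sketch of such a throttle in plain Java; TapThrottle is our own helper class, not part of Pikkart's SDK:

```java
// Simple throttle: allow a new cloud scan at most once every minIntervalMs.
// This helper is an illustration, not part of Pikkart's AR SDK.
public class TapThrottle {
    private final long minIntervalMs;
    private long lastScanMs = Long.MIN_VALUE;

    public TapThrottle(long minIntervalMs) {
        this.minIntervalMs = minIntervalMs;
    }

    // Returns true if enough time has passed since the last accepted tap,
    // and records this tap as the new reference point.
    public boolean shouldScan(long nowMs) {
        if (lastScanMs != Long.MIN_VALUE && nowMs - lastScanMs < minIntervalMs) {
            return false; // too soon, ignore this tap
        }
        lastScanMs = nowMs;
        return true;
    }
}
```

With this in place, onTouch could call `if (m_throttle.shouldScan(System.currentTimeMillis())) doRecognition();` instead of calling doRecognition unconditionally.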

And that is all you have to do to use our cloud recognition service. If you want to add a little more flavour, you can modify the ARRenderer class as follows in order to show a different monkey head on the cloud markers:

First we add a new class member variable to store our blue monkey head:

private Mesh blue_monkeyMesh = null; 

We initialize the new object inside our onSurfaceCreated method by adding:

blue_monkeyMesh = new Mesh();
blue_monkeyMesh.InitMesh(context.getAssets(), "media/monkey.json", "media/texture2.png");

Lastly we change the onDrawFrame method:

public void onDrawFrame(GL10 gl)
{
    if (!IsActive) return;
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    // Call our native function to render camera content
    RecognitionFragment.renderCamera(ViewportWidth, ViewportHeight, Angle);
    // Render custom content
    if(RecognitionFragment.isTracking()) {
        Marker currentMarker = RecognitionFragment.getCurrentMarker();
        float[] mvpMatrix = new float[16];
        if (computeModelViewProjectionMatrix(mvpMatrix)) {
            if(currentMarker.getId().compareTo("3_522")==0 || currentMarker.getId().compareTo("3_754")==0 || currentMarker.getId().compareTo("3_1836")==0) {
                monkeyMesh.DrawMesh(mvpMatrix);
            }
            else {
                blue_monkeyMesh.DrawMesh(mvpMatrix);                
            }
            RenderUtils.checkGLError("completed Monkey head Render");
        }
    }
    gl.glFinish();
}

Here we draw the yellow monkey head on the local markers (the ones in the markers folder) and a blue monkey head on the cloud markers.
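If the list of local marker ids grows, the id comparison chain in onDrawFrame becomes unwieldy. A small sketch of how it could be factored into a helper, using the three ids from the sample above (MarkerMeshes is our own name, not an SDK class):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Mirrors the id check in onDrawFrame: local markers get the yellow monkey
// head, everything else (i.e. cloud markers) gets the blue one.
// This helper class is an illustration, not part of Pikkart's AR SDK.
public class MarkerMeshes {
    private static final Set<String> LOCAL_MARKER_IDS =
            new HashSet<>(Arrays.asList("3_522", "3_754", "3_1836"));

    public static boolean isLocalMarker(String markerId) {
        return LOCAL_MARKER_IDS.contains(markerId);
    }
}
```

The branch in onDrawFrame would then reduce to `(MarkerMeshes.isLocalMarker(currentMarker.getId()) ? monkeyMesh : blue_monkeyMesh).DrawMesh(mvpMatrix);`.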

The final result should be similar to this:

You can find a complete project for this tutorial on Github @ https://github.com/pikkart-support/SDKSample_CloudPlanarMarker (remember to add your license file in the assets folder!).