Android SDK >
Augmented Video

Using the Local Planar Marker tutorial as a base, we are now going to add an augmented video to our app. The video will appear on top of a marker and show one of Pikkart's video ads.

Note: if you prefer to look at existing, working code, you can download a complete project for augmented video from GitHub at https://github.com/pikkart-support/SDKSample_AugmentedVideo (remember to add your license file in the assets folder!).

First, we need to copy some premade base classes from our sample package into the project directory.

Copy the content of the folder <pikkart_sample_dir>/sample/video/ into the folder <your_app_root>/app/src/main/java/<DOMAIN>/<APPLICATIONNAME> (the folder where your MainActivity is). You will find three new Java classes in your app project: FullscreenVideoPlayer, PikkartVideoPlayer and VideoMesh.

Change the package name in the three video classes to match the package name of your MainActivity, replacing

package com.pikkart.tutorial.rendering;

with

package <your.package.name>;

The FullscreenVideoPlayer class is a fullscreen video player activity, needed in case an Android device doesn't support AR videos. The PikkartVideoPlayer class encapsulates an Android MediaPlayer and an OpenGL rendering surface to which video frames are redirected; this is the main class managing our video/audio data. The VideoMesh class is our 3D object (a simple plane with our video as texture) that will be drawn on top of the tracked image.
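For reference, the part of the VideoMesh surface we will actually call in this tutorial looks roughly like the outline below. This is a hypothetical summary based only on the calls made later in this tutorial, not the actual source; return types and anything not shown here are guesses, so consult the copied classes for the real signatures.

import android.content.res.AssetManager;

// Hypothetical outline of the VideoMesh members used in this tutorial (not the real source).
public interface VideoMeshOutline {
    // movieUrl can be an asset path or a web URL; keyframeUrl points to the image shown while
    // the video is not playing; seekPosition is in milliseconds; autostart plays the video on
    // first detection; pikkartPlayer may be null to let the mesh use its internal PikkartVideoPlayer.
    void InitMesh(AssetManager am, String movieUrl, String keyframeUrl,
                  int seekPosition, boolean autostart, PikkartVideoPlayer pikkartPlayer);

    // draws the video plane (or the keyframe) with the given model-view and projection matrices
    void DrawMesh(float[] mvMatrix, float[] pMatrix);

    // playback control, used from the touch handler and the activity lifecycle callbacks
    void playOrPauseVideo();
    void pauseVideo();
}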

Make sure the sample markers and media folders from the sample package (<pikkart_sample_dir>/sample/assets/markers/ and <pikkart_sample_dir>/sample/assets/media/) are copied into your app assets folder.

Once everything is set up, we need to change a few bits of code in our ARRenderer class. First we add a new member variable for the VideoMesh object we want to draw, then we create and initialize it in the onSurfaceCreated method:

public class ARRenderer implements GLTextureView.Renderer {
...
// the VideoMesh that will be drawn on top of the video marker
private VideoMesh videoMesh = null;

public void onSurfaceCreated(GL10 gl, EGLConfig config)  {
    gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    monkeyMesh=new Mesh();
    monkeyMesh.InitMesh(context.getAssets(),"media/monkey.json", "media/texture.png");

    // create the video mesh: video file, keyframe image, seek position (ms), autostart flag, external player (null = use the internal one)
    videoMesh = new VideoMesh((Activity)context);
    videoMesh.InitMesh(context.getAssets(), "media/pikkart_video.mp4", "media/pikkart_keyframe.png", 0, false, null);
}

The VideoMesh is created and initialized by passing it a reference to the current app AssetManager, the URL of the video file (either a locally stored file or a web URL), the path to a keyframe image that will be shown to the user when the video is not playing (a locally stored image, in this case inside the media folder of the app assets), a starting seek position (in milliseconds), and whether the video has to auto start on first detection. You can also pass a reference to an external PikkartVideoPlayer object if you don't want the VideoMesh object to use its internal one.
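For example, if you wanted to stream a remote video, start playback five seconds in and have it start automatically on first detection, the call could look like the following sketch (the URL and values are hypothetical; the parameter order is the one shown above):

videoMesh = new VideoMesh((Activity)context);
videoMesh.InitMesh(context.getAssets(),
        "https://www.example.com/my_video.mp4", // hypothetical remote video URL instead of an asset path
        "media/pikkart_keyframe.png",           // keyframe image shown while the video is not playing
        5000,                                   // starting seek position: 5 seconds into the video
        true,                                   // auto start the video on first detection of the marker
        null);                                  // null: let the VideoMesh use its internal PikkartVideoPlayer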

Next we add another support function that computes the model-view matrix and a projection matrix adjusted and rotated according to the device screen orientation:

public boolean computeModelViewProjectionMatrix(float[] mvMatrix, float[] pMatrix) {
    RenderUtils.matrix44Identity(mvMatrix);
    RenderUtils.matrix44Identity(pMatrix);

    // reference frame size and viewport aspect ratio used to correct the projection
    float w = (float)640;
    float h = (float)480;
    float ar = (float)ViewportHeight / (float)ViewportWidth;
    if (ViewportHeight > ViewportWidth) ar = 1.0f / ar;
    float h1 = h, w1 = w;
    if (ar < h/w)
        h1 = w * ar;
    else
        w1 = h / ar;

    // rotation around the Z axis by the screen orientation angle (a = cos, b = sin)
    float a = 0.f, b = 0.f;
    switch (Angle) {
        case 0: a = 1.f; b = 0.f;
            break;
        case 90: a = 0.f; b = 1.f;
            break;
        case 180: a = -1.f; b = 0.f;
            break;
        case 270: a = 0.f; b = -1.f;
            break;
        default: break;
    }
    float[] angleMatrix = new float[16];
    angleMatrix[0] = a; angleMatrix[1] = b; angleMatrix[2]=0.0f; angleMatrix[3] = 0.0f;
    angleMatrix[4] = -b; angleMatrix[5] = a; angleMatrix[6] = 0.0f; angleMatrix[7] = 0.0f;
    angleMatrix[8] = 0.0f; angleMatrix[9] = 0.0f; angleMatrix[10] = 1.0f; angleMatrix[11] = 0.0f;
    angleMatrix[12] = 0.0f; angleMatrix[13] = 0.0f; angleMatrix[14] = 0.0f; angleMatrix[15] = 1.0f;

    // scale the SDK projection matrix vertically to match the viewport aspect ratio
    float[] projectionMatrix = RecognitionFragment.getCurrentProjectionMatrix().clone();
    projectionMatrix[5] = projectionMatrix[5] * (h / h1);

    RenderUtils.matrixMultiply(4,4,angleMatrix,4,4,projectionMatrix,pMatrix);

    // when the marker is being tracked, copy its pose into the model-view matrix
    if( RecognitionFragment.isTracking() ) {
        float[] tMatrix = RecognitionFragment.getCurrentModelViewMatrix();
        mvMatrix[0]=tMatrix[0]; mvMatrix[1]=tMatrix[1]; mvMatrix[2]=tMatrix[2]; mvMatrix[3]=tMatrix[3];
        mvMatrix[4]=tMatrix[4]; mvMatrix[5]=tMatrix[5]; mvMatrix[6]=tMatrix[6]; mvMatrix[7]=tMatrix[7];
        mvMatrix[8]=tMatrix[8]; mvMatrix[9]=tMatrix[9]; mvMatrix[10]=tMatrix[10]; mvMatrix[11]=tMatrix[11];
        mvMatrix[12]=tMatrix[12]; mvMatrix[13]=tMatrix[13]; mvMatrix[14]=tMatrix[14]; mvMatrix[15]=tMatrix[15];
        return true;
    }
    return false;
}
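The ViewportWidth, ViewportHeight and Angle members used above are existing fields of our ARRenderer (they are also passed to RecognitionFragment.renderCamera in onDrawFrame below). As a reminder of where such values typically come from, here is a minimal sketch, assuming the GLTextureView.Renderer interface mirrors GLSurfaceView.Renderer; the displayRotationToAngle helper is hypothetical and not part of the SDK:

public void onSurfaceChanged(GL10 gl, int width, int height) {
    // store the viewport size used by computeModelViewProjectionMatrix and renderCamera
    ViewportWidth = width;
    ViewportHeight = height;
}

// Hypothetical helper: map the Android display rotation to the 0/90/180/270 Angle used above.
public static int displayRotationToAngle(int displayRotation) {
    switch (displayRotation) {
        case android.view.Surface.ROTATION_0:   return 0;
        case android.view.Surface.ROTATION_90:  return 90;
        case android.view.Surface.ROTATION_180: return 180;
        case android.view.Surface.ROTATION_270: return 270;
        default: return 0;
    }
}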
 

We then modify the onDrawFrame method: when the recognized marker is the video marker we draw the video mesh on top of it, otherwise we draw the monkey mesh from the previous tutorial:

/** Called to draw the current frame. */
public void onDrawFrame(GL10 gl) {
    if (!IsActive) return;

    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    // Call our native function to render camera content
    RecognitionFragment.renderCamera(ViewportWidth, ViewportHeight, Angle);

    if(RecognitionFragment.isTracking()) {
        Marker currentMarker = RecognitionFragment.getCurrentMarker();
        //Here we decide which 3d object to draw and we draw it
        if(currentMarker.getId().compareTo("3_522")==0){
            float[] mvMatrix = new float[16];
            float[] pMatrix = new float[16];
            if (computeModelViewProjectionMatrix(mvMatrix, pMatrix)) {
                videoMesh.DrawMesh(mvMatrix, pMatrix);
                RenderUtils.checkGLError("completed video mesh Render");
            }
        }
        else {
            float[] mvpMatrix = new float[16];
            if (computeModelViewProjectionMatrix(mvpMatrix)) {
                monkeyMesh.DrawMesh(mvpMatrix);
                RenderUtils.checkGLError("completed Monkey head Render");
            }
        }
    }
    gl.glFinish();
}

Lastly, for our ARRenderer class, we add two support functions that either play or pause the video. These will be used by our MainActivity to respond to user interaction and app state changes (mainly onPause and onResume):

public void playOrPauseVideo() {
    if(videoMesh!=null) videoMesh.playOrPauseVideo();
}

public void pauseVideo() { if(videoMesh!=null) videoMesh.pauseVideo(); }

It's time to modify the ARView class. It just requires a few additional functions: one to respond to user input (a tap on the screen) and two to pause video playback:

@Override
public boolean onTouchEvent(MotionEvent e){
    if(e.getAction()==MotionEvent.ACTION_UP)
        _renderer.playOrPauseVideo();
    return true;
}

@Override
public void onPause() {
    super.onPause();
    _renderer.pauseVideo();
}

public void pauseVideo() { if(_renderer!=null) _renderer.pauseVideo(); }

Lastly, we need to handle the app state correctly in our MainActivity class. First add a class member variable m_arView holding a reference to the ARView created in the activity's initLayout function, then override onResume and onPause so that recognition and the renderer (with its video) are correctly resumed and paused, and pause the video in the markerTrackingLost callback:

public class MainActivity extends AppCompatActivity implements IRecognitionListener {
    ...
    private ARView m_arView = null;
    ...
    private void initLayout()
    {
        setContentView(R.layout.activity_main);

        m_arView = new ARView(this);
        addContentView(m_arView, new FrameLayout.LayoutParams(FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT));

        RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
        t_cameraFragment.startRecognition(
                new RecognitionOptions(
                        RecognitionManager.RecognitionStorage.LOCAL,
                        RecognitionManager.RecognitionMode.CONTINUOUS_SCAN,
                        new CloudRecognitionInfo(new String[]{})
                ),
                this);
    }
    ...

    @Override
    public void onResume() {
        super.onResume();
        //restart recognition on app resume
        RecognitionFragment t_cameraFragment = ((RecognitionFragment) getFragmentManager().findFragmentById(R.id.ar_fragment));
        if(t_cameraFragment!=null) t_cameraFragment.startRecognition(
                new RecognitionOptions(
                    RecognitionManager.RecognitionStorage.LOCAL,
                    RecognitionManager.RecognitionMode.CONTINUOUS_SCAN,
                    new CloudRecognitionInfo(new String[]{})
                ), this);
        //resume our renderer
        if(m_arView!=null) m_arView.onResume();
    }

    @Override
    public void onPause() {
        super.onPause();
        //pause our renderer and associated videos
        if(m_arView!=null) m_arView.onPause();
    }
    @Override
    public void markerTrackingLost(String s) {
        m_arView.pauseVideo();
    }

 

And that's all, folks, for this tutorial. Once compiled and run on a device, you should see the video playing on top of the marker when targeting one of our test markers.

If you want more details on how our VideoMesh is made and how our AR video playback works, take a look at the code and read the comments in the VideoMesh and PikkartVideoPlayer classes.