Documentation

Input Plugins API

The input plugins API provides a means to alter the inputs and outputs of the Wikitude Native SDK. For the input case, custom frame data from arbitrary sources can be supplied to the Wikitude SDK Native API for processing. Conversely, for the output case, the default rendering of the Wikitude SDK Native API can be substituted with more advanced implementations. Both cases are illustrated in two separate samples.

Unity interface

Input plugins are enabled by setting the Has Input Module toggle in the Plugin script to true. Once enabled, a number of additional options appear that allow you to configure how the input plugin behaves.

  1. The Requests Camera Frame Rendering toggle controls whether the SDK renders the camera image to the screen. If you want to do your own camera rendering, turn this toggle off.
  2. The Invert Frame toggle flips the frame vertically. The SDK expects the first row of pixels to correspond to the top of the image, because this is how the native cameras provide the data. However, when accessing the texture data from a Unity texture (including WebCamTexture) with GetPixels32(), the first row of the data corresponds to the bottom of the image. You can set this toggle to automatically flip the image from the Unity layout to the one expected by the SDK, as illustrated by the sketch after this list. This option is only available when the ColorSpace is RGBA.
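
As a minimal illustration of what this conversion does, the following sketch flips a Color32 buffer in place. The helper name FlipFrameVertically and its parameters are illustrative and not part of the SDK.

// Illustrative helper: swaps pixel rows so that the first row ends up at
// the top, converting a buffer returned by GetPixels32() to the layout
// the SDK expects. Color32 comes from UnityEngine.
private static void FlipFrameVertically(Color32[] pixels, int width, int height) {
    var rowBuffer = new Color32[width];
    for (int top = 0, bottom = height - 1; top < bottom; ++top, --bottom) {
        int topOffset = top * width;
        int bottomOffset = bottom * width;
        // Swap the top row with the matching bottom row.
        System.Array.Copy(pixels, topOffset, rowBuffer, 0, width);
        System.Array.Copy(pixels, bottomOffset, pixels, topOffset, width);
        System.Array.Copy(rowBuffer, 0, pixels, bottomOffset, width);
    }
}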

Simple Input Plugin sample

The first sample shows how to grab the camera feed with Unity and send it to the Wikitude SDK for processing and rendering. The logic of the sample is contained in the SimpleInputPluginController script.

When the OnInputPluginRegistered event is called, we initialize the buffer required to store the frame data. In the Update function, once we get a valid frame, we read its pixels using the GetPixels32(Color32[]) method provided by the WebCamTexture class. To avoid additional copies of the data, we can obtain the native pointer to the buffer directly and send it to the SDK. The SDK only reads from this pointer for the duration of the call, so you don't need to keep the pointer around.
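
Before looking at that method, here is a rough sketch of the surrounding setup; the field names and the camera startup logic are assumptions for illustration rather than the verbatim sample code.

private WebCamTexture _feed;
private Color32[] _pixels;
private int _frameDataSize;
private long _frameIndex;

public void OnInputPluginRegistered() {
    _feed = new WebCamTexture();
    _feed.Play();
    // Note: width/height are only reliable once the first frame has arrived.
    _pixels = new Color32[_feed.width * _feed.height];
    _frameDataSize = _pixels.Length * 4;   // four bytes per RGBA pixel
}

private void Update() {
    // Only forward frames once the camera has delivered new data.
    if (_feed != null && _feed.didUpdateThisFrame) {
        _feed.GetPixels32(_pixels);
        SendNewCameraFrame();
    }
}

SendNewCameraFrame then pins the buffer and hands the resulting pointer to the SDK: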

private void SendNewCameraFrame() {
    GCHandle handle = default(GCHandle);
    try {
        // Pin the managed pixel buffer so the GC can't move it while
        // the SDK reads from the raw pointer.
        handle = GCHandle.Alloc(_pixels, GCHandleType.Pinned);
        IntPtr frameData = handle.AddrOfPinnedObject();

        // Describe the frame so the SDK can interpret the raw bytes.
        var metadata = new ColorCameraFrameMetadata();
        metadata.HorizontalFieldOfView = 58.0f;
        metadata.Width = _feed.width;
        metadata.Height = _feed.height;
        metadata.CameraPosition = CaptureDevicePosition.Back;
        metadata.ColorSpace = FrameColorSpace.RGBA;
        metadata.TimestampScale = 1;

        // RGBA data is packed into a single plane, four bytes per pixel.
        var plane = new CameraFramePlane();
        plane.Data = frameData;
        plane.DataSize = (uint)_frameDataSize;
        plane.PixelStride = 4;
        plane.RowStride = _feed.width;
        var planes = new List<CameraFramePlane>();
        planes.Add(plane);

        var cameraFrame = new CameraFrame(++_frameIndex, 0, metadata, planes);
        InputPlugin.NotifyNewCameraFrame(cameraFrame);
    } finally {
        // Unpin the buffer again; the SDK has finished reading by now.
        if (handle.IsAllocated) {
            handle.Free();
        }
    }
}

Custom Rendering samples

The second sample works very similarly to the first one, except that the frame is also sent to another script, CustomCameraRenderer.cs, which renders the camera frame with a custom edge detection shader. This script is placed on the camera and uses a CommandBuffer to instruct Unity to blit the camera texture to the screen using a custom material.

// _currentFrame holds the latest camera texture, EffectMaterial the edge
// detection shader; eventForBlit determines at which point in the camera's
// render loop the blit is executed.
_drawFrameBuffer = new CommandBuffer();
_drawFrameBuffer.Blit(_currentFrame, BuiltinRenderTextureType.CameraTarget, EffectMaterial);
camera.AddCommandBuffer(eventForBlit, _drawFrameBuffer);
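
For context, a hedged sketch of how such a command buffer might be attached and removed in a component follows; the CameraEvent choice and the OnEnable/OnDisable placement are assumptions, not the exact sample code.

using UnityEngine;
using UnityEngine.Rendering;

public class CustomCameraRenderer : MonoBehaviour {
    public Material EffectMaterial;    // material using the edge detection shader
    private Texture _currentFrame;     // assigned elsewhere with the latest camera frame
    private CommandBuffer _drawFrameBuffer;
    // Blit before opaque geometry so the scene draws on top of the frame.
    private const CameraEvent eventForBlit = CameraEvent.BeforeForwardOpaque;

    private void OnEnable() {
        // In the real sample the buffer is rebuilt whenever _currentFrame changes.
        _drawFrameBuffer = new CommandBuffer { name = "Draw camera frame" };
        _drawFrameBuffer.Blit(_currentFrame, BuiltinRenderTextureType.CameraTarget, EffectMaterial);
        GetComponent<Camera>().AddCommandBuffer(eventForBlit, _drawFrameBuffer);
    }

    private void OnDisable() {
        // Remove the buffer again, otherwise the camera keeps executing it.
        GetComponent<Camera>().RemoveCommandBuffer(eventForBlit, _drawFrameBuffer);
    }
}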

The script also handles drawing the camera frame when the aspect ratio of the feed doesn't match the aspect ratio of the screen, as sketched below.
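
One common approach (a sketch, not the sample's exact code) is to compute which fraction of the frame stays visible and pass it to the material as a UV scale; the _UVScale property name is hypothetical.

// Returns the fraction of the frame that remains visible so that the
// frame fills the screen without distortion (aspect-fill cropping).
private static Vector2 ComputeVisibleUVScale(float frameWidth, float frameHeight) {
    float frameAspect = frameWidth / frameHeight;
    float screenAspect = (float)Screen.width / Screen.height;
    if (screenAspect > frameAspect) {
        // Screen is wider than the frame: fill the width, crop top and bottom.
        return new Vector2(1.0f, frameAspect / screenAspect);
    } else {
        // Screen is taller than the frame: fill the height, crop the sides.
        return new Vector2(screenAspect / frameAspect, 1.0f);
    }
}

The result could then be applied with EffectMaterial.SetVector("_UVScale", scale) and used in the shader to sample a centered sub-rectangle of the camera texture.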