Windows Developers Can Use the HTML5 and JavaScript App Model With the Kinect for Windows SDK 1.8

Posted on Tuesday, September 24, 2013 by STUART PARKERSON, Global Sales

Windows developers can access the interactivity of Kinect for Windows through the new HTML5/JavaScript app model. Updates in the SDK include color capture for Kinect Fusion scans, a new API for background removal, and other improvements.

Some of the new features of the Kinect for Windows SDK 1.8 include:

New Background Removal
A new green-screen background removal API removes the background behind the user so that it can be replaced with an artificial background. This feature is useful for advertising, augmented reality gaming, training and simulation, and other immersive experiences that place the user in a different virtual environment.
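
As an illustration, here is a minimal sketch of compositing a background-removed user frame over a replacement backdrop on an HTML5 canvas. The userFrame shape below is an assumption for illustration only, not the SDK's actual stream format.

```javascript
// Minimal sketch: composite a background-removed user frame over a
// replacement backdrop on an HTML5 canvas. The userFrame shape is a
// hypothetical stand-in for whatever the SDK's stream delivers.
const canvas = document.getElementById("stage");
const ctx = canvas.getContext("2d");

const backdrop = new Image();
backdrop.src = "beach.jpg"; // any artificial background image

function render(userFrame) {
  // userFrame: { width, height, pixels }, where pixels is a Uint8ClampedArray
  // of RGBA data with alpha = 0 everywhere except the segmented user.
  ctx.drawImage(backdrop, 0, 0, canvas.width, canvas.height);

  // putImageData ignores alpha compositing, so draw the user layer through
  // an offscreen canvas to get proper blending over the backdrop.
  const layer = document.createElement("canvas");
  layer.width = userFrame.width;
  layer.height = userFrame.height;
  const imageData = new ImageData(userFrame.pixels, userFrame.width, userFrame.height);
  layer.getContext("2d").putImageData(imageData, 0, 0);
  ctx.drawImage(layer, 0, 0, canvas.width, canvas.height);
}
```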

Realistic Color Capture with Kinect Fusion
The new Kinect Fusion API scans the color of the scene along with the depth information, so it can capture an object's color along with its three-dimensional (3D) model. The API also produces a texture map for the mesh created from the scan. The result is a full-fidelity 3D model of a scan, including color, which can be used for full-color 3D printing or to create accurate 3D assets for games, CAD, and other applications.
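
For a sense of how such a model might be consumed downstream, here is a minimal sketch that serializes a mesh with per-vertex colors to ASCII PLY, a format many full-color 3D printing and CAD tools accept. The mesh structure is an assumption for illustration, not the SDK's own data type.

```javascript
// Minimal sketch: write a colored mesh to ASCII PLY. The mesh shape
// { vertices: [{x, y, z, r, g, b}], faces: [[i, j, k]] } is assumed
// for illustration, not the Kinect Fusion API's own structure.
function meshToPly(mesh) {
  const lines = [
    "ply",
    "format ascii 1.0",
    `element vertex ${mesh.vertices.length}`,
    "property float x",
    "property float y",
    "property float z",
    "property uchar red",
    "property uchar green",
    "property uchar blue",
    `element face ${mesh.faces.length}`,
    "property list uchar int vertex_indices",
    "end_header",
  ];
  for (const v of mesh.vertices) {
    lines.push(`${v.x} ${v.y} ${v.z} ${v.r} ${v.g} ${v.b}`);
  }
  for (const f of mesh.faces) {
    lines.push(`3 ${f[0]} ${f[1]} ${f[2]}`); // triangles only
  }
  return lines.join("\n");
}
```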

Improved Tracking Algorithm with Kinect Fusion
This updated algorithm makes it easier to scan a scene and to maintain a lock on it as the camera moves, yielding more reliable and consistent scanning.

HTML Interaction Sample
This sample demonstrates Kinect-enabled buttons, simple user engagement, and the use of a background removal stream in HTML5. It allows developers to use HTML5 and JavaScript to implement Kinect-enabled user interfaces, which was not possible previously, making it easier for developers to work in whatever programming languages they prefer and to integrate Kinect for Windows into their existing solutions.
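
A minimal sketch of the idea follows. The SDK sample pairs a local web server with a JavaScript client library; the kinectClient object and event shape below are hypothetical stand-ins, not the SDK's actual API.

```javascript
// Minimal sketch: drive an ordinary HTML button with a Kinect hand pointer.
// kinectClient and its event shape are hypothetical, for illustration only.
const button = document.getElementById("play-button");

kinectClient.onHandPointer(function (pointer) {
  // pointer: { x, y, isPressed } in normalized screen coordinates (assumed).
  const rect = button.getBoundingClientRect();
  const px = pointer.x * window.innerWidth;
  const py = pointer.y * window.innerHeight;
  const over =
    px >= rect.left && px <= rect.right &&
    py >= rect.top && py <= rect.bottom;

  button.classList.toggle("kinect-hover", over); // hover feedback via CSS
  if (over && pointer.isPressed) {
    button.click(); // reuse the button's ordinary click handler
  }
});
```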

Multiple-sensor Kinect Fusion Sample
This sample shows developers how to use two sensors simultaneously to scan a person or object from both sides, making it possible to construct a 3D model without moving the sensor or the object. It demonstrates calibration between two Kinect for Windows sensors and shows how to use the Kinect Fusion APIs with multiple depth snapshots. It is ideal for retail experiences and other public kiosks that do not have an attendant available to scan by hand.
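
The core of such a merge is a rigid transform that maps the second sensor's points into the first sensor's coordinate frame. Here is a minimal sketch under that assumption; the calibration matrix and point arrays are placeholders, and a real setup would estimate the matrix from a shared calibration target.

```javascript
// Minimal sketch: bring points from a second sensor into the first sensor's
// coordinate frame with a 4x4 rigid transform from calibration.
// calibrationMatrix, sensorAPoints, and sensorBPoints are assumed inputs.
function transformPoint(m, p) {
  // m: 16-element row-major 4x4 matrix; p: { x, y, z }
  return {
    x: m[0] * p.x + m[1] * p.y + m[2] * p.z + m[3],
    y: m[4] * p.x + m[5] * p.y + m[6] * p.z + m[7],
    z: m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11],
  };
}

// Merge both depth snapshots into one point set for reconstruction.
const merged = sensorAPoints.concat(
  sensorBPoints.map((p) => transformPoint(calibrationMatrix, p))
);
```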

Adaptive UI Sample
This sample demonstrates how to build an application that adapts itself to the distance between the user and the screen, from gesturing at a distance to touching a touchscreen. The algorithm in this sample uses the physical dimensions and positions of the screen and sensor to determine the best ergonomic position on the screen for touch controls, as well as ways the UI can adapt as the user approaches the screen or moves farther away. As a result, the touch interface and visual display adapt to the user's position and height, which lets users interact with large touchscreen displays comfortably. The display can also be adapted for more than one user.
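
A minimal sketch of the distance-based switching follows. The thresholds and the userDistanceMeters input are illustrative assumptions, not values taken from the SDK sample.

```javascript
// Minimal sketch: pick an interaction mode from the user's distance to the
// screen. Thresholds are illustrative, not the SDK sample's values.
function interactionMode(userDistanceMeters) {
  if (userDistanceMeters < 0.8) return "touch";   // close enough to touch
  if (userDistanceMeters < 2.5) return "gesture"; // mid-range hand pointer
  return "attract";                               // far away: larger visuals
}

function updateUi(userDistanceMeters) {
  // CSS rules keyed on data-mode can resize controls and reposition them
  // to an ergonomic height as the mode changes.
  document.body.dataset.mode = interactionMode(userDistanceMeters);
}
```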

The Human Interface Guidelines (HIG) have also been updated with guidance that complements the new Adaptive UI sample.

