5/20/2019 7:26:29 AM
Self serve AI platform has launched
App Developer Magazine

Richard Harris in Artificial Intelligence, Monday, May 20, 2019

A new self-serve AI platform has just launched that allows developers to build edge-based solutions without a background in artificial intelligence.

Xnor.ai has launched AI2GO, a self-serve platform that enables developers, device creators and companies to build smart, edge-based solutions without training or background in AI. AI2GO is available now and contains more than a hundred fully-trained models optimized to run on resource-constrained devices such as mobile devices, wearables, smart cameras, remote sensors and more. AI2GO models are being used today to build solutions for retail analytics, smart home, and industrial IoT.

Ali Farhadi, co-founder and CXO of Xnor.ai, told us in an email: “By providing access to deep learning that can readily run on-device, we believe we afford all companies regardless of team, budget or hardware the opportunity to participate in this new era of AI innovation. AI2GO enables this vision through a platform of a large number of models, running on a large number of devices, that are able to operate under a large number of constraints.”

Before AI2GO, AI relied almost exclusively on expensive hardware running in the cloud and was restricted to a handful of companies in the world. Even with the tools that were available then, building AI products required knowledge and expertise in deep learning to design, train and implement solutions. Deploying these models at the edge required solving for a whole host of constraints, including memory, power, and latency, which made development for on-device AI almost impossible.

AI2GO promises to change the scale and speed at which AI solutions can be built. As the first platform to offer hundreds of fully trained edge AI models with state-of-the-art accuracy, developers no longer need to worry about data collection, annotation, training, model architecture or performance optimization. They simply download the complete solution and are ready to go. Enterprise customers using Xnor will continue to benefit from its custom-trained, highest-performance models. In the coming months, the AI2GO platform will provide enterprise customers access to fully optimized models along with additional custom features, including automated training and re-training, and performance optimization for large-scale development teams.

The release of AI2GO is a continuation of Xnor’s mission to bring AI Everywhere to Everyone. In 2017, Xnor demonstrated you could remove the cost barrier by running deep learning on $5 hardware; in 2019, it removed the barrier of power with solar powered AI; now, with AI2GO, Xnor is removing the barrier of AI expertise.

Using AI2GO is simple. First, the user selects their preferred hardware (Raspberry Pi, Linux, Ambarella, etc.), then chooses an AI use case, for example, a “pet classifier for a home security camera,” a “person detector for a dash cam,” or a “person segmenter for video conferencing applications.” Because AI2GO models are designed to run in resource-constrained environments, Xnor lets the user tune their model for latency (milliseconds) and memory footprint (megabytes) to fit within their own set of constraints. Once the user has specified their constraints, the available models are listed, ranked by accuracy. The user can then download an Xnor Bundle (XB), a module containing a deep learning model and an inference engine. Xnor also provides an accompanying SDK that includes code samples, demo applications, benchmarking tools and technical documentation, making it simple for anyone to start building a smart application.
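The constraint-tuning step described above amounts to filtering a model catalog by a latency and memory budget, then ranking the survivors by accuracy. The sketch below illustrates that selection logic in plain Python; the catalog entries, field names and numbers are hypothetical and do not reflect the actual AI2GO catalog or SDK API.

```python
# Illustrative sketch of constraint-based model selection, as described
# in the AI2GO workflow. All names and numbers here are hypothetical.

def select_models(catalog, max_latency_ms, max_memory_mb):
    """Return the models that fit the budget, ranked by accuracy (best first)."""
    candidates = [
        m for m in catalog
        if m["latency_ms"] <= max_latency_ms and m["memory_mb"] <= max_memory_mb
    ]
    return sorted(candidates, key=lambda m: m["accuracy"], reverse=True)

# A made-up catalog of person-detector variants for a dash-cam use case.
catalog = [
    {"name": "person-detector-small",  "latency_ms": 12, "memory_mb": 4,  "accuracy": 0.81},
    {"name": "person-detector-medium", "latency_ms": 28, "memory_mb": 16, "accuracy": 0.88},
    {"name": "person-detector-large",  "latency_ms": 65, "memory_mb": 64, "accuracy": 0.93},
]

# With a 30 ms / 32 MB budget, only the small and medium variants qualify,
# and the medium one ranks first on accuracy.
for model in select_models(catalog, max_latency_ms=30, max_memory_mb=32):
    print(model["name"])
```

The key point is the ordering of operations: hard constraints (latency, memory) prune the candidate set first, and accuracy only decides among the models that actually fit the device.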

Xnor will be hosting events to onboard new users to AI2GO in the coming months, including at the Embedded Vision Summit in California, May 20-23, at booth #419.