Artificial Intelligence training just got faster

Posted on Monday, November 12, 2018 by RICHARD HARRIS, Executive Editor

Researchers at Hong Kong Baptist University (HKBU) have partnered with a team from Tencent Machine Learning to create a new technique for training Artificial Intelligence (AI) machines faster than ever before while maintaining accuracy, breaking the world record in the process.

During the experiment, the team succeeded in training two popular deep neural networks, AlexNet and ResNet-50, in just 4 minutes and 6.6 minutes respectively. The previous fastest times were 11 minutes for AlexNet and 15 minutes for ResNet-50.

AlexNet and ResNet-50 are deep neural networks trained on ImageNet, a large-scale dataset for visual recognition. Once trained, the system was able to recognise and label an object in a given photo. The result is significantly faster than previous records and outperforms all other existing systems.

Machine learning is a set of mathematical approaches which enables computers to “learn” from data, without explicitly being programmed by humans. The resulting algorithms can then be applied to a variety of data and visual recognition tasks used in AI.
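The idea of "learning from data" can be made concrete with a minimal sketch: below, a line y = w*x + b is fitted from examples alone, so the mapping is recovered by the algorithm rather than hand-coded. This is an illustrative example, not part of the HKBU/Tencent work.

```python
import numpy as np

# Minimal sketch of "learning from data": fit y = w*x + b from examples.
# The rule (w = 3, b = 1) is hidden in the data, never programmed explicitly.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100)
y = 3.0 * x + 1.0                       # hidden rule the learner must recover

# Closed-form least squares: solve for [w, b] minimising squared error.
A = np.stack([x, np.ones_like(x)], axis=1)
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
# The fitted w and b closely match the hidden rule: learned, not programmed.
```

Deep neural networks such as AlexNet and ResNet-50 do the same thing at vastly larger scale, with millions of parameters instead of two.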

The HKBU team comprises Professor Chu Xiaowen and PhD student Shi Shaohuai, both from the Department of Computer Science.

Professor Chu said: “We have proposed a new optimised training method that significantly improves the best output without losing accuracy. In AI training, researchers strive to train their networks faster, but this can lead to a decrease in accuracy. As a result, training machine-learning models at high speed while maintaining accuracy and precision is a vital goal for scientists.”

Professor Chu said the time required to train AI machines is affected by both computing time and communication time. The research team attained breakthroughs in both aspects to create this record-breaking achievement.

This included adopting a simpler number format, FP16, in place of the conventional FP32, making computation much faster without losing accuracy. As communication time is affected by the size of data blocks, the team also devised a communication technique named "tensor fusion", which combines smaller pieces of data into larger ones, optimising the transmission pattern and thereby improving the efficiency of communication during AI training.
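The two ideas can be sketched in a few lines of NumPy. This is a hedged illustration of the general techniques (mixed-precision updates with an FP32 master copy, and packing small gradient tensors into one buffer before transmission), not the team's actual implementation; all function names here are hypothetical.

```python
import numpy as np

# 1) Mixed precision: compute gradients in FP16, but accumulate updates into
#    an FP32 "master" copy so tiny increments are not lost to rounding.
master_weights = np.zeros(4, dtype=np.float32)   # FP32 master copy
lr = 1e-3

def sgd_step(grad_fp16):
    """Apply one SGD update; return an FP16 view of the weights for compute."""
    global master_weights
    master_weights -= lr * grad_fp16.astype(np.float32)  # accumulate in FP32
    return master_weights.astype(np.float16)             # FP16 copy for compute

# 2) Tensor fusion: pack many small gradient tensors into one contiguous
#    buffer, so one large message is communicated instead of many small ones.
def fuse(tensors):
    shapes = [t.shape for t in tensors]
    flat = np.concatenate([t.ravel() for t in tensors])  # single big buffer
    return flat, shapes

def unfuse(flat, shapes):
    out, offset = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(flat[offset:offset + n].reshape(s))
        offset += n
    return out

grads = [np.ones((2, 2), dtype=np.float16), np.ones(3, dtype=np.float16)]
flat, shapes = fuse(grads)       # 7 elements travel as one message
restored = unfuse(flat, shapes)  # receivers recover the original tensors
```

In a real distributed setup, the fused buffer would be handed to a collective operation such as all-reduce; fusing amortises per-message latency, which is what makes communication more efficient.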

