We all know that algorithms are getting smarter every day, but are they also getting greener?
Not at all, and that’s becoming a significant problem. As a result, researchers are working hard to discover new ways to develop smaller algorithms. In this article, we’re going to discuss why Tiny AI is an important step in securing a future for AI.
Artificial intelligence has delivered many breakthroughs over the last few years. Deep learning is one of them, powering many A.I. systems that achieve high accuracy. Thanks to deep learning, algorithms can scan medical images and identify tumors, navigate autonomous cars through complex traffic patterns, and translate literature from almost any language into another.
Every day A.I. gets more accurate, but there is a hidden environmental cost to high accuracy.
Researchers at the University of Massachusetts Amherst recently conducted a study that revealed how energy-intensive training an algorithm can be. According to the study, training a single algorithm can emit five times the lifetime carbon dioxide emissions of an average car, or the equivalent of about 300 round-trip flights between New York and San Francisco. In the pursuit of high accuracy, we seem to have lost our focus on energy efficiency.
Roy Schwartz, a research scientist at the Allen Institute, and his co-authors suggest in a paper called Green A.I. that A.I. researchers should strive to make energy efficiency an evaluation criterion alongside accuracy and other measures.
A recent Slate article quotes Schwartz saying “We don’t want to reach a state where A.I. will become a significant contributor to global warming.”
That’s where Tiny A.I. might help.
What is Tiny AI?
Tiny AI is the term used to describe the AI research community’s efforts to reduce the size of algorithms, especially those that require large datasets and significant computational power. Tiny AI researchers develop techniques, known as distillation methods, that not only shrink a model but do so while accelerating inference and maintaining high accuracy. Using these distillation methods, a model can be scaled down significantly, by factors of up to 10x. Moreover, a much smaller algorithm can be deployed at the edge, making decisions on the device rather than sending data to the cloud.
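To make the idea of distillation concrete, here is a minimal sketch of its core ingredient: a small “student” model is trained to match the softened output probabilities of a large “teacher” model. This follows the general knowledge-distillation recipe (temperature-scaled softmax plus a KL-divergence loss, as in Hinton et al.); the specific function names and logit values below are illustrative, not taken from any particular Tiny AI system.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the
    distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened
    predictions, scaled by T^2 as in the standard formulation.
    Minimizing this pushes the student toward the teacher's behavior."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return temperature ** 2 * sum(
        pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student)
    )

# Hypothetical logits for one input: the loss is zero when the student
# exactly matches the teacher, and positive when it does not.
teacher = [4.0, 1.0, 0.5]
perfect_student = [4.0, 1.0, 0.5]
imperfect_student = [2.0, 2.0, 1.0]
print(distillation_loss(perfect_student, teacher))    # ~0.0
print(distillation_loss(imperfect_student, teacher))  # > 0
```

In practice this soft-target loss is usually combined with the ordinary task loss on ground-truth labels, and the student’s architecture is chosen to be far smaller than the teacher’s, which is where the size and speed savings come from.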
Take BERT as an example. BERT is a pre-trained language model (PLM) developed by Jacob Devlin and his team at Google. The algorithm is very useful because it helps you write: unlike previous models, BERT understands words in context, so it can make writing suggestions or finish your sentences.
But BERT is a large model. MIT Technology Review reported that the larger version of BERT has 340 million parameters, and that training it a single time required as much electricity as a U.S. household consumes in 50 days.
BERT became a perfect target for Tiny A.I. researchers. In a recent paper, researchers at Huawei claimed that they were able to reduce the size of BERT by a factor of 7.5 while making inference 9.4x faster.
They called their new model TinyBERT. But how good is TinyBERT compared to BERT? The authors claim that TinyBERT achieves 96% of the performance of its teacher, BERT.
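For a rough sense of what those factors mean in practice, the quoted figures can be combined in a back-of-the-envelope calculation. Note this is only illustrative: the 340-million-parameter figure refers to the larger BERT variant, while the published TinyBERT was distilled from a smaller one, so the numbers below are a sketch, not TinyBERT’s actual size.

```python
# Back-of-the-envelope sketch using the figures quoted in the article.
bert_params = 340_000_000   # larger BERT variant, per MIT Technology Review
size_reduction = 7.5        # Huawei's reported size-reduction factor
speedup = 9.4               # Huawei's reported inference speedup

hypothetical_params = bert_params / size_reduction
print(f"~{hypothetical_params / 1e6:.0f}M parameters, "
      f"{speedup}x faster inference")  # ~45M parameters, 9.4x faster inference
```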
As these advances evolve, we will see many benefits from Tiny A.I. On the one hand, existing services like voice assistants and cameras won’t need to shuttle data between the cloud and local devices.
On the other hand, Tiny AI will make it possible to deploy complex algorithms on edge devices: medical image analysis on a smartphone, for example, or autonomous driving without a cloud connection. On top of that, keeping your data on the device improves privacy and security as well.
Considering the explosive growth of A.I., it’s important that researchers and engineers study and measure the environmental implications of training and deploying their algorithms.
Let’s not just strive to build more accurate models; let’s also consider their impact on the environment. Otherwise, we may find ourselves with one more technology that damages our planet.