Integer addition algorithm could reduce energy needs of AI by 95%
From TechXplore
![](https://scx2.b-cdn.net/gfx/news/2024/researchers-find-a-way-1.jpg)
16-bit and 8-bit floating-point formats defined in IEEE 754 and used on various hardware for tensor computations, alongside the 16-bit integer format. MSB stands for most significant bit; LSB stands for least significant bit. Credit: arXiv (2024). DOI: 10.48550/arxiv.2410.00907
___________________________________________________________________________
A team of engineers at AI inference technology company BitEnergy AI reports a method to reduce the energy needs of AI applications by 95%. The group has published a
paper describing their new technique on the arXiv preprint server.
...
The new technique is basic: instead of using complex floating-point multiplication (FPM), the method uses integer addition. Applications use FPM to handle extremely large or small numbers with high precision, but it is also the most energy-intensive part of AI number crunching.
The researchers call their new method Linear-Complexity Multiplication; it works by approximating FPMs with integer additions. They claim that testing thus far has shown the new approach reduces electricity demand by 95%.
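To get a feel for why adding integers can stand in for multiplying floats, consider a classic related trick: because an IEEE 754 bit pattern encodes the exponent (a logarithm) in its high bits, adding two raw bit patterns and subtracting the exponent bias roughly multiplies the values. The sketch below illustrates this general add-instead-of-multiply idea only; it is Mitchell's logarithmic approximation, not the paper's exact L-Mul algorithm, and the function name `approx_mul` is invented for illustration.

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a float32 as its raw 32-bit integer pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit integer pattern as a float32."""
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    """Approximate a * b (for positive floats) with one integer addition.

    Adding the bit patterns sums the exponents and, approximately, the
    log-mantissas; subtracting the bias constant (the bit pattern of 1.0f)
    corrects for the exponent bias being counted twice. Worst-case relative
    error of this particular approximation is roughly 11%.
    """
    BIAS = 0x3F800000  # bit pattern of 1.0 in float32
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - BIAS)

print(approx_mul(3.0, 5.0))  # close to, but not exactly, 15.0
print(approx_mul(2.0, 4.0))  # exact for powers of two
```

Powers of two come out exact because their mantissa bits are all zero, so the only arithmetic is the exponent addition. The appeal for AI hardware is that an integer adder costs far less silicon area and energy than a floating-point multiplier.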
The one drawback is that it requires hardware different from that currently in use. But the research team also notes that the new type of hardware has already been designed, built and tested.