Contemporary computing and communications are best characterized by exponential growth: growth in the number of computers, smart mobile devices, data created, and data transmitted. By 2025, it is expected that there will be over 50 billion IoT (Internet of Things) devices and 150 zettabytes of data (a zettabyte is a billion trillion bytes). The number of network-connected devices is growing even faster, driven by mobile computing, wearables, and IoT, with over 20 billion IoT devices expected to be network-connected.

New networks (e.g. 5G) are being introduced; however, they are costly to deploy, and many IoT devices cannot use them due to inherent limits on CPU horsepower and battery power. Alternative technologies such as LPWANs (low power wide area networks) and 2G/3G cellular networks are more likely to support these devices; such networks are designed for low power consumption but offer only limited bandwidth.

The only viable solution to the massive growth in data transmitted and stored is to fundamentally change the encoding method. In short – send less data!

While compression algorithms can reduce the size of the data payload, compression has inherent disadvantages. Algorithms take time to run (computational latency). Compression is inherently sensitive to error-prone networks (e.g. wireless and satellite), where a single corrupted bit can render an entire compressed block unrecoverable. It is ineffective for small data packages, as illustrated in the brief sketch below. And entire files must be decompressed before they can be accessed or searched, a lengthy process. Compression is simply NOT optimized for IoT.
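
To make the small-packet limitation concrete, here is a minimal sketch using Python's standard zlib module, chosen purely as a representative general-purpose compressor. For a short telemetry-style payload, the compressor's fixed header, checksum, and block-framing overhead typically leave the output no smaller, and often larger, than the original.

```python
import zlib

# A short, telemetry-style payload of the kind an IoT sensor might send (24 bytes).
payload = b'{"t":22.4,"h":41,"id":7}'

# Compress at the highest level; for a payload this small the fixed per-stream
# overhead (header, checksum, block framing) dominates any savings.
compressed = zlib.compress(payload, level=9)

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes")  # typically no smaller than the original here
```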

AtomBeam addresses all of these issues with a unique approach that uses machine learning and artificial intelligence (AI) to learn, encode, and reduce the data size for transmission and storage, all with minimal computational latency and a host of other benefits.
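
AtomBeam's actual technology is proprietary and is not described here; the sketch below is only a hypothetical illustration of the general idea of learned codebook substitution, in which frequently occurring byte patterns are learned from representative training data ahead of time and replaced with short codes at transmission time. All function names, the 0xFF token format, and the fixed pattern length are illustrative assumptions, not AtomBeam's format or API.

```python
from collections import Counter

PATTERN_LEN = 4   # illustrative fixed pattern length
MARKER = 0xFF     # token marker byte; MARKER,MARKER escapes a literal 0xFF

def build_codebook(training_messages, max_entries=255):
    """Count fixed-length byte patterns in training data and keep the most common."""
    counts = Counter()
    for msg in training_messages:
        for i in range(len(msg) - PATTERN_LEN + 1):
            counts[msg[i:i + PATTERN_LEN]] += 1
    return {p: idx for idx, (p, _) in enumerate(counts.most_common(max_entries))}

def encode(msg, codebook):
    """Replace known patterns with a 2-byte token (marker + index); copy other bytes."""
    out = bytearray()
    i = 0
    while i < len(msg):
        chunk = msg[i:i + PATTERN_LEN]
        if len(chunk) == PATTERN_LEN and chunk in codebook:
            out += bytes([MARKER, codebook[chunk]])
            i += PATTERN_LEN
        elif msg[i] == MARKER:
            out += bytes([MARKER, MARKER])   # escape a literal 0xFF
            i += 1
        else:
            out.append(msg[i])
            i += 1
    return bytes(out)

def decode(data, codebook):
    """Invert encode(): expand tokens back into their original patterns."""
    reverse = {idx: p for p, idx in codebook.items()}
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == MARKER:
            out += bytes([MARKER]) if data[i + 1] == MARKER else reverse[data[i + 1]]
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

# Tiny demo: the codebook is learned once from representative traffic and shared
# by sender and receiver; new messages shrink because they reuse the same patterns.
training = [b'{"temp":22.4,"hum":41}', b'{"temp":21.9,"hum":44}', b'{"temp":23.0,"hum":39}']
codebook = build_codebook(training)
message = b'{"temp":22.7,"hum":40}'
encoded = encode(message, codebook)
assert decode(encoded, codebook) == message
print(len(message), "->", len(encoded), "bytes")
```

Because the codebook is learned in advance and shared by sender and receiver, encoding in this style reduces to simple lookups, which is why such approaches can keep computational latency low and remain effective on packets far too small for conventional compression.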