Q: What is the unique innovation you offer in your data compression solution? Is it the aspect of offloading the dictionary generation to a more powerful computer to minimize load on the IoT nodes, or is the algorithm itself unique in some way?
A: Thanks for your question. Our chief scientist, Josh Cooper, a mathematician and Professor at the University of South Carolina, is glad to answer it. His response follows.
Whenever we get one of these "isn't this a 50-year-old idea" questions, my instinct is to respond with something like: sure, Huffman coding is old, and data reduction by rewriting (aka "source coding") is even older. We're doing it in a new way designed for 21st-century data needs, and making it work correctly and efficiently requires quite a bit more technology and algorithmic apparatus. Several elements are novel:

- Using two-layer (primary + secondary) Huffman codes to guarantee that every possible input can be encoded.
- Modeling data in advance using ML is a very new paradigm, and applying it in a lossless data-reduction setting is completely new.
- Selecting algorithmic parameters/hyperparameters to optimize performance on particular hardware, or under special computational operating constraints, is new to data reduction.
- Data reduction in pure-streaming/instant-on mode is novel, and it is exactly what IoT, sensors, and similar devices need.
- Small-footprint, low-latency, memory-efficient runtime executables for data encoding and decoding on lightweight processors are new for data reduction.

Much of what AtomBeam brings to the table consists of major qualitative advancements that solve new problems. Yes, there were cars 100 years ago, but self-driving electric vehicles, with all the communications, controls, computation, and power infrastructure they entail, are clearly an enormously valuable advancement.
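The answer doesn't spell out how the two-layer code works, but the general idea of a primary code plus a fallback that guarantees every input is encodable can be sketched. What follows is a minimal illustration of that idea, not AtomBeam's actual algorithm: it assumes an escape-symbol mechanism, where a primary Huffman code is trained offline on representative data and any byte outside the trained alphabet is emitted as an escape code followed by the raw byte.

```python
import heapq
from collections import Counter

def build_huffman(freqs):
    """Build a prefix-free Huffman code (symbol -> bitstring) from a frequency map."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    if len(heap) == 1:  # degenerate single-symbol alphabet
        codes[next(iter(freqs))] = "0"
        return codes
    counter = len(heap)  # tiebreaker so tuples never compare the symbol lists
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1:
            codes[s] = "0" + codes[s]
        for s in s2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, counter, s1 + s2))
        counter += 1
    return codes

ESC = object()  # escape symbol: routes untrained bytes to the secondary layer

def encode(data, primary):
    """Primary Huffman code for trained bytes; escape + 8 raw bits for
    anything else, so every possible input can be encoded."""
    out = []
    for b in data:
        if b in primary:
            out.append(primary[b])
        else:
            out.append(primary[ESC] + format(b, "08b"))
    return "".join(out)

def decode(bits, primary):
    """Invert encode(); prefix-freeness makes the scan unambiguous."""
    rev = {v: k for k, v in primary.items()}
    out, buf, i = bytearray(), "", 0
    while i < len(bits):
        buf += bits[i]
        i += 1
        if buf in rev:
            sym = rev[buf]
            buf = ""
            if sym is ESC:           # secondary layer: next 8 bits are a raw byte
                out.append(int(bits[i:i + 8], 2))
                i += 8
            else:
                out.append(sym)
    return bytes(out)

# "Offline" training step: build the primary code from sample data,
# reserving a slot for the escape symbol.
training = Counter(b"aaabbbccd")
training[ESC] = 1
primary = build_huffman(training)
```

With this table, `encode(b"abz", primary)` handles the byte `z` even though it never appeared in training, and `decode` recovers the original input exactly.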