Q: Is the technology all built in-house? Who built the machine learning (ML) models and how much data have you generated to train them?
A: Yes, we build all of our technology ourselves, with the exception of the front end/UI, which is handled by four excellent, dedicated Bolivia-based contractors who are AtomBeam team members in all but name: they take equity as a significant part of their compensation and are fully integrated into the development process. The software was architected by Josh Cooper, PhD, a full professor of mathematics at the University of South Carolina, and was built by a team of highly experienced and qualified mathematicians and engineers. As for the amount of data needed for training, it varies considerably by dataset. In most cases a few hundred kilobytes is enough, sometimes even less, though we prefer to see a few megabytes in the sample. We have trained on as little as 10 kB, but that is not enough to generate good Codebooks; we recommend a data sample of at least 250 kB.
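The sizing guidance above can be sketched as a simple pre-flight check. This is only an illustration: the function name and the 2 MB "preferred" cutoff are our assumptions; only the 250 kB recommended minimum and the too-small 10 kB case come from the answer itself.

```python
# Illustrative sample-size check based on the guidance above.
MIN_RECOMMENDED_BYTES = 250 * 1024  # recommended minimum sample: 250 kB
PREFERRED_BYTES = 2 * 1024 * 1024   # "a few megabytes" preferred (2 MB assumed here)

def assess_sample_size(num_bytes: int) -> str:
    """Classify a training sample by size per the thresholds above."""
    if num_bytes < MIN_RECOMMENDED_BYTES:
        return "too small: unlikely to yield good Codebooks"
    if num_bytes < PREFERRED_BYTES:
        return "acceptable: meets the 250 kB recommended minimum"
    return "good: preferred sample size"

if __name__ == "__main__":
    # e.g. a 10 kB sample (like our smallest-ever training run) is flagged as too small
    for size in (10 * 1024, 300 * 1024, 5 * 1024 * 1024):
        print(f"{size:>8} bytes -> {assess_sample_size(size)}")
```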