Autonomio development strives to establish an industry gold standard in three respects:

>>  Flexibility in terms of data ingestion
>>  Out-of-the-box text classification capability
>>  Minimal cognitive load required for effective use

The problem of data

Data scientists spend a significant fraction of their time transforming data into a shape and type the model will accept. This is a particular headache for inexperienced and less computer-savvy researchers. Our belief is that the baseline use case should be the ability to throw any data at the model and still get a result that indicates how much predictive potential the input signals hold for the prediction in question.
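The kind of "accept anything" ingestion described above can be sketched in a few lines. This is an illustrative stand-in, not Autonomio's actual code: every column that parses as numeric is kept as floats, and everything else is label-encoded, so an arbitrary mixed-type table becomes a numeric matrix a model can consume.

```python
# Illustrative sketch (not Autonomio's actual implementation): coerce a
# table of mixed-type columns into a purely numeric matrix, so arbitrary
# input data can be fed to a model without manual cleaning.

def coerce_numeric(rows):
    """Turn a list of equal-length rows into numeric columns.

    Columns whose values all parse as floats are kept as floats;
    any other column is label-encoded (each distinct value gets
    an integer code in order of first appearance).
    """
    cols = list(zip(*rows))
    out_cols = []
    for col in cols:
        try:
            out_cols.append([float(v) for v in col])
        except (TypeError, ValueError):
            codes = {}
            out_cols.append([codes.setdefault(v, len(codes)) for v in col])
    return [list(r) for r in zip(*out_cols)]

rows = [["3.5", "red", 1],
        ["2.0", "blue", 0],
        ["7.25", "red", 1]]
print(coerce_numeric(rows))
# [[3.5, 0, 1.0], [2.0, 1, 0.0], [7.25, 0, 1.0]]
```

A production version would also need to handle missing values and per-column exceptions, but the principle is the same: never reject input, always degrade gracefully to something the model can use.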

The problem of unstructured data

Because most of the data in the world is text, Autonomio has a particular focus on dealing with unstructured data. NLP techniques have been widely used for a range of purposes, sometimes with a high degree of success. What is not clear from the glorified success stories is that unstructured data remains more often a cost factor than a benefit. By combining word2vec and deep learning, Autonomio makes it possible to train and deploy a state-of-the-art text classification neural network in minutes, across a wide range of applications and languages.
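The word2vec-plus-classifier pipeline can be sketched end to end in plain Python. This is an illustrative toy, not Autonomio's implementation: real trained word2vec embeddings are replaced by deterministic pseudo-random vectors, each document is embedded as the average of its word vectors, and a logistic-regression head (standing in for the neural network) is trained on top.

```python
# Illustrative sketch of the word2vec + classifier pipeline (NOT
# Autonomio's actual code). Deterministic pseudo-random vectors stand
# in for trained word2vec embeddings.
import math
import random

DIM = 16  # embedding dimensionality

def word_vec(word):
    # Stand-in for a word2vec lookup: a fixed pseudo-random vector per word.
    rng = random.Random(word)
    return [rng.uniform(-1, 1) for _ in range(DIM)]

def doc_vec(text):
    # Embed a document as the average of its word vectors.
    vecs = [word_vec(w) for w in text.lower().split()]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def train(texts, labels, epochs=200, lr=0.5):
    # Logistic regression trained with per-example gradient descent.
    X = [doc_vec(t) for t in texts]
    w, b = [0.0] * DIM, 0.0
    for _ in range(epochs):
        for x, y in zip(X, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, text):
    w, b = model
    x = doc_vec(text)
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

model = train(["great product love it", "terrible waste of money",
               "love this great value", "terrible product money wasted"],
              [1, 0, 1, 0])
print(predict(model, "great product love it"))  # leans toward 1 (positive)
print(predict(model, "terrible waste of money"))  # leans toward 0 (negative)
```

In the real pipeline the embeddings are learned from the corpus and the classifier is a deep network, but the division of labor is the same: word2vec turns text into dense vectors, and the network learns the mapping from vectors to labels.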

The problem of cognitive load

Traditionally, data science tools, and especially those related to machine learning, have been inaccessible to most researchers. Tools such as Keras and TensorFlow, on which Autonomio depends, still require significant effort from new users before they reach a successful result for the first time.