“We cannot solve our problems with the same thinking we used to create them” – Albert Einstein

Autonomio is an augmented intelligence workbench that takes the power of advanced deep learning and, for the first time, makes it accessible to non-programmers.

Autonomio uses the TensorFlow backend through the Keras deep learning abstraction layer and introduces a new paradigm for ease of use. Autonomio makes previously hard-to-access data science workflows available to everyone who can start a computer and a web browser. Until now, this level of capability has been available only to a handful of computer-savvy researchers.

Because the browser-based user interface is accompanied by a fully-featured Jupyter Notebook, Autonomio is also suitable for advanced practitioners, introducing a novel set of conveniences that cut down workflow complexity and cognitive overhead.

Autonomio is the brainchild of Mikko Kotila and Amit Phalsankar, each with more than 10 years of individual R&D in natural language processing, machine vision, and signals intelligence, and close to 20 years of individual experience in data analytics.

The codebase is maintained as open source under the MIT license by Botlab, a non-profit foundation based out of Boston and Helsinki.

read the docs
get the code


Autonomio development strives toward establishing the industry gold standard in three aspects:

>>  Flexibility in terms of data ingestion
>>  Out-of-the-box text classification capability
>>  Minimal cognitive load required for effective use

The problem of data

Data scientists spend a significant fraction of their time transforming data in various ways to get its shape and type into a format the model accepts. This is a particular headache for inexperienced and less computer-savvy researchers. Our belief is that the starting use case is to be able to throw any data at the model and still get a result that indicates how much potential the input signals have for making the prediction in question.

The problem of unstructured data

Because most of the data in the world is text, Autonomio has a particular focus on dealing with unstructured data. NLP techniques have been widely used for a range of purposes, sometimes with a high degree of success. What is not clear from the glorified success stories is that unstructured data is still largely more of a cost factor than a benefit. By combining word2vec and deep learning, Autonomio makes it possible to train and deploy a state-of-the-art text classification neural network in minutes, across a wide range of applications and languages.
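The word2vec-plus-classifier approach can be sketched in miniature. The following is an illustrative outline only, not Autonomio's implementation: the toy word-vector table, the `embed` helper, and the tiny logistic classifier are all assumptions made for the example. In practice the vectors would come from a trained word2vec model and the classifier would be a Keras neural network.

```python
import numpy as np

# Toy stand-in for a word2vec lookup table. Real word2vec vectors are
# learned from a corpus; these are hand-picked so the example runs alone.
VECTORS = {
    "good": np.array([1.0, 0.2]),
    "great": np.array([0.9, 0.1]),
    "bad": np.array([-1.0, 0.3]),
    "awful": np.array([-0.8, 0.4]),
}

def embed(text):
    """Average the vectors of known words: the simplest way to turn a
    variable-length text into a fixed-size input for a model."""
    vecs = [VECTORS[w] for w in text.lower().split() if w in VECTORS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

# Tiny logistic-regression classifier trained by gradient descent --
# a minimal stand-in for the deep network trained on top of word vectors.
X = np.array([embed(t) for t in ["good great", "great", "bad awful", "awful bad"]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad = p - y                          # gradient of the log loss
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()

def classify(text):
    """Return 1 (positive class) or 0 (negative class) for a text."""
    return int(1 / (1 + np.exp(-(embed(text) @ w + b))) > 0.5)
```

The averaging step is the key simplification: once every document is a fixed-length vector, any standard classifier can be trained on top, which is what makes the pipeline language-agnostic.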

The problem of cognitive load

Traditionally, data science tools, and especially those related to machine learning, have been inaccessible to most researchers. Tools such as Keras and TensorFlow, which Autonomio depends on, still require significant effort from new users to reach a successful result for the first time.



1/ Getting Started

Deep learning with Autonomio is as easy and intuitive as learning to play an interesting computer game. If data plays an important role in your daily life, it will be better than any game.

>>  Run in a docker container for zero-hassle experience
>>  Option for both Autonomio GUI and Jupyter based use
>>  Available for install through PyPi or Git source

2/ Workflow

>>  Access through GUI or intuitive single-command namespace
>>  Optimized for Notebook use
>>  Train and deploy a neural network in minutes
>>  Never touch code (if you don’t want to)
>>  Access through browser as self-hosted SaaS

3/ Features

>>  TensorFlow backend with Keras abstractions
>>  word2vec based word vectorization
>>  language agnostic classification capability
>>  semantic support for over 10 languages through spaCy
>>  Both ‘x’ and ‘y’ can be text objects (e.g. tweet text and category labels)
>>  Integrated plotting with special focus on deep learning outputs
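The feature that both 'x' and 'y' can be text objects implies a label-encoding step somewhere in the pipeline. The sketch below shows the general idea in plain Python; it is an illustration of the concept, not Autonomio's internal code, and the `encode_labels` helper is a name invented for this example.

```python
def encode_labels(labels):
    """Map free-text category labels to integer classes.

    Returns the integer-encoded labels, a label->int lookup table,
    and the ordered class list for decoding predictions back to text.
    """
    classes = sorted(set(labels))
    to_int = {c: i for i, c in enumerate(classes)}
    return [to_int[c] for c in labels], to_int, classes

# 'y' given as text labels, e.g. tweet categories:
y_text = ["sports", "politics", "sports", "tech"]
y_int, to_int, classes = encode_labels(y_text)
# y_int is [1, 0, 1, 2]; classes is ['politics', 'sports', 'tech']

# Decoding model output back to the original text labels:
decoded = [classes[i] for i in y_int]
```

This round trip (text labels in, integers for training, text labels out) is what lets a user hand over a raw two-column table of tweets and category names without any manual preprocessing.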


Autonomio Roadmap

“It takes a long journey to know the horse’s strength” – Chinese Proverb

For the foreseeable future, Autonomio R&D efforts are focused on three aspects:

>>  CORE
>>  STATS
>>  NON-STATS

The main distinction between these three is that CORE and STATS have the potential to affect the results, whereas NON-STATS never can. The naming convention …


Love. Some of us struggled for over a decade to do a fraction of what we can now do in a day by leveraging the amazing open-source tools in the PyData and Python deep learning ecosystems. None of what we’re doing is even remotely possible without those ecosystems thriving the way they do.

We highly encourage you to consider becoming a contributor to one of these wonderful projects. In fact, it’s more important that you contribute to them than to Autonomio.