“It takes a long journey to know the horse’s strength” – Chinese Proverb
For the foreseeable future, Autonomio R&D efforts are focused on three aspects: CORE, STATS, and NON-STATS.
The main distinction between these three is that CORE and STATS have the potential to affect the results, whereas NON-STATS never can. The naming convention was agreed upon to make this dichotomy explicit. Below is a brief outline of each aspect.
AUTONOMIO CORE CAPABILITIES
Here our goal is to remove all doubt on the user's part regarding the integrity and reliability of the system:
– ensure that outputs have 100% integrity
– move testing to “expected output validation”
– establish 100% code coverage
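To make the "expected output validation" goal concrete, below is a minimal sketch of what such a test looks like: rather than checking implementation details, each test pins the exact output a known input must produce. The function names and behavior are hypothetical illustrations, not Autonomio's actual API.

```python
def transform_docs(docs):
    """Toy stand-in for a text-preparation step: lowercase and tokenize.

    Hypothetical example function, not part of Autonomio.
    """
    return [doc.lower().split() for doc in docs]


def test_transform_docs_expected_output():
    # The full expected output is asserted verbatim; any change in
    # behavior, however small, fails the test.
    expected = [["hello", "world"], ["deep", "learning"]]
    assert transform_docs(["Hello World", "Deep LEARNING"]) == expected
```

A test runner such as pytest would pick up the `test_` function automatically; combined with a coverage tool, the same suite serves the 100% code coverage goal.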
AUTONOMIO STATS CAPABILITIES
Here our goal is to push deep learning implementation to the next level. The focus is on significantly expanding the supported workflow in comparison to currently available platforms (Keras etc.). In practice this means extending the workflow to cover the steps the user takes just before and just after using a typical deep learning system. This effort will mainly consist of two separate parts:
– a deep learning based abstraction layer that automatically configures the model for optimal output
– an abstraction layer that performs robust validation far beyond the means presented in current systems or literature
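The two parts above can be sketched together under stated assumptions: automatic model configuration framed as a search over candidate settings, and robust validation framed as averaging over repeated random train/test splits rather than a single holdout. To keep the example self-contained, the "model" is a trivial threshold classifier; a real abstraction layer would wrap Keras models instead. All names here are illustrative, not Autonomio's API.

```python
import random


def score_threshold(xs, ys, threshold):
    """Accuracy of a fixed-threshold classifier: predict 1 when x > threshold."""
    correct = sum((x > threshold) == bool(y) for x, y in zip(xs, ys))
    return correct / len(xs)


def repeated_validation(xs, ys, threshold, rounds=20, test_frac=0.3, seed=0):
    """Robust validation: average accuracy over repeated random test splits."""
    rng = random.Random(seed)
    data = list(zip(xs, ys))
    n_test = max(1, int(len(data) * test_frac))
    scores = []
    for _ in range(rounds):
        rng.shuffle(data)
        test = data[:n_test]
        scores.append(score_threshold([x for x, _ in test],
                                      [y for _, y in test], threshold))
    return sum(scores) / len(scores)


def auto_configure(xs, ys, candidates):
    """Automatic configuration: pick the candidate with the best validated score."""
    return max(candidates, key=lambda t: repeated_validation(xs, ys, t))
```

The design point is the separation of concerns: the user supplies data and a space of candidate configurations, and the abstraction layer owns both the search and the validation protocol, so neither has to be hand-rolled per project.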
Philosophically speaking, the focus is on moving away from the idea of artificial intelligence or augmented intelligence, towards autonomous intelligence capability.
AUTONOMIO NON-STATS CAPABILITIES
Here our goal is to reduce barriers to everyday use of deep learning, state-of-the-art language processing, and particularly the seamless integration of the two. The research and development focused on this third aspect leverages means other than those commonly considered in deep learning technology development: namely workflow/process automation, visualization, various other UX factors, and the modernization of documentation into something that is easily accessible as part of common workflows, yet never distracts advanced users.
– create a layperson-ready version of key aspects of the Keras documentation (losses, etc.)
– run a design thinking workshop to identify key data scientist needs
– screen record deep learning workflows and quantify the results