Solving the Problem of Problem Solving

We have to re-think the way we think about solving problems. As the problems we deal with increase in complexity, the methods we use to solve them must evolve as well. And how do we do that? By turning to nature, as the great masters of the past did to unravel the mysteries of the human mind.

Solving problems harder than we can imagine

Having worked in the field of Artificial Intelligence since 2004, I finally feel confident that we have the technological capabilities we need to solve very hard problems, including problems we’re not even thinking about yet. We’re not thinking about those problems because we don’t know how. That’s where AI, and particularly deep learning, comes in.

The issue is that we humans don’t have a well-established method for solving problems, so any machine intelligence approach would be burdened by our inability to deal with hard problems effectively. In this article, I propose a simple guideline that will help us address this problem, giving us the ground we need to empower machine intelligence solutions to support us in the areas where we most desperately need it. I will loosely use the example of what I see as one of the biggest and most pressing problems: extreme poverty.

How to Solve Really Hard Problems: Self-Organising Structures

First there is the main workgroup, a group of people (or a single individual) who identifies and articulates what the actual problem statement is. For example: “the extremely poor could be lifted out of poverty, adding $5 trillion in new annual spending to the global economy”. This group is the upstream source for all the other groups, and does nothing except work on the problem definition itself.

Then there is a second group that works on breaking down the problem, dissecting it into its constituents. This group will never work on anything except identifying the parts that make up the problem set. For example, “information deprivation”, “easily avoidable negative health outcomes” and “lack of skills and jobs” are the main factors associated with remaining in poverty.

Then there is a group that further dissects each of these into contributing causes, more groups that cover each cause from the angle of relevant interventions, and so forth. Each group is completely disconnected from everything except its own scope and its immediate upstream and downstream connecting nodes.

Basically, all these groups do is break their subset of the problem down into ever more actionable items. So far nobody is thinking about solutions. At this point, we could be 10, 20 or even 100 layers down in the problem-solving structure, each layer consisting of one or more humans or neural networks working either individually or in co-operation. The point so far is to stay as far away as possible from any idea of a solution.

At some point the solution layers kick in and start to look at actual ways to replace the negative causes with positive ones. These groups don’t think about things like feasibility; more groups will be introduced later to take care of that. At that point the prioritization groups, the feasibility groups, and various quality assurance groups, together with many others, are introduced into this novel intelligence ecosystem.

Every now and then a random group is introduced, like a virus or bacterium, that tries to break down the system entirely, attacking it in various ways to test how much staying power the structure has. Cancer-like mutation functions are also introduced, randomly or otherwise, and serve the function of creating mutations that can then be adopted or dropped depending on how well they perform in various simulations.
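
The text does not prescribe how such mutation functions would be implemented; as a loose sketch of the adopt-or-drop idea, a mutation could be a random change to a configuration that is kept only if it scores better in a simulation (the configuration and the simulate() scoring function below are purely hypothetical):

```python
# A loose sketch of the "adopted or dropped" mutation idea described above.
# The configuration, the mutation, and the simulate() score are hypothetical.
import random

def simulate(config):
    """Hypothetical fitness score for a configuration (higher is better)."""
    return -sum((x - 3) ** 2 for x in config)

def mutate(config):
    """Randomly perturb one element of the configuration."""
    mutated = list(config)
    i = random.randrange(len(mutated))
    mutated[i] += random.uniform(-1, 1)
    return mutated

config = [0.0, 0.0, 0.0]
for _ in range(1000):
    candidate = mutate(config)
    # Adopt the mutation only if it performs better in the simulation;
    # otherwise drop it and keep the current configuration.
    if simulate(candidate) > simulate(config):
        config = candidate

print(config)  # drifts towards the optimum at [3, 3, 3]
```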

Nature’s Way

Each of these groups is an organization of people and machines. It could be just one person or machine, or it could be many. At this point, you should think of them as parts of a self-organizing structure, each on its own having its activity founded on the principles of self-organization.

Each group feeds the one below it with tasks and gives feedback one step up. This way, all the groups are continuously adjusting their organization based on the input coming to them. It is a flux: nothing is fixed, nothing ever stays the same, and the only rule is interconnectedness.
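
To make the shape of this structure a little more concrete, here is a minimal sketch of my own (not something the text prescribes): each group knows only its own scope, passes tasks to its downstream groups, and returns feedback one step upstream.

```python
# A minimal, illustrative sketch of the cascading structure described above:
# each group works only within its own scope, feeds tasks downstream, and
# returns feedback one step upstream. The Group class is hypothetical.

class Group:
    def __init__(self, scope):
        self.scope = scope        # the only thing this group ever works on
        self.downstream = []      # the groups it feeds with tasks

    def add_downstream(self, group):
        self.downstream.append(group)
        return group

    def work(self, task):
        """Narrow the task to this group's scope, pass it downstream,
        and aggregate the feedback that comes back up."""
        subtask = f"{task} -> {self.scope}"
        feedback = [g.work(subtask) for g in self.downstream]
        return {"scope": self.scope, "task": subtask, "feedback": feedback}

# Example: a problem-definition group feeding a dissection group,
# which in turn feeds an interventions group.
root = Group("problem definition")
causes = root.add_downstream(Group("contributing causes"))
causes.add_downstream(Group("relevant interventions"))

print(root.work("extreme poverty"))
```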

Some of the groups could overlap, so it would not take that many people to solve problems using this approach. If you look at nature, these are the kinds of structures you will always find: cascading flux, one organization doing something that gives feedback to the organization it is part of, while also assigning another organization to do something else. Nature is infinitely efficient, so if we want to tap into that kind of power to find meaningful and practical solutions to our hardest problems, it seems obvious that we must mimic it.

For some reason, we humans try to solve problems on our own, or else we try to solve them in large groups where we all work more or less on the same or similar things, as “peers”.

Further, there is no element of continuous self-organization; instead, things have a tendency to reach a certain kind of stasis: a point at which the organizer (the problem solver) forgets what got them motivated in the first place, namely the curiosity driven by the uncertain nature of things. Once a satisfying result is achieved, it is almost as if the nature of things were no longer uncertain.

There is always a sense of glorification that comes with problem solving, and that ends up messing things up further. You want to be able to say “I did this” and “I solved this”. But in the way things organize in nature, no such moment exists. No moment is more certain than another; there is no “absolute zero” at which something suddenly becomes complete and stops. Modern science shows the same with respect to the movement of atoms.

That’s what solving a problem fundamentally is about: an organization of some kind, the identification and mapping out of an organization that was not previously known, or was lost for some reason. It is never more than a temporary moment in an endless flux. To tell oneself anything else is to betray the true spirit of problem solving. Nothing can ever be truly and completely known.

We still haven’t solved the most important problem of all: how to solve problems.

Autonomio Roadmap

“It takes a long journey to know the horse’s strength” – Chinese Proverb

For the foreseeable future, Autonomio R&D efforts are focused on three aspects:

– CORE
– STATS
– NON-STATS

The main distinction between these three is that CORE and STATS have the potential to affect the results, whereas NON-STATS never can. The naming convention has been agreed upon to make this dichotomy clear. Below I will provide a brief outline of each aspect.

AUTONOMIO CORE

Here our goal is to remove all doubt from the user regarding the integrity and reliability of the system:

– ensure that outputs have 100% integrity
– move testing to “expected output validation” (a rough sketch follows this list)
– establish 100% code coverage
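
What “expected output validation” would look like in practice is not spelled out here; as a rough sketch, the idea would be to record a known-good output once and have tests assert that a full run keeps reproducing it, rather than only checking that individual functions execute. The pipeline below is a stand-in, not Autonomio’s actual API.

```python
# A rough sketch of "expected output validation": record a known-good output
# once, then assert that the full pipeline keeps reproducing it. The pipeline
# here is a stand-in, not Autonomio's API; a real system would also fix its
# random seeds so that outputs are reproducible.
import numpy as np

def pipeline(data):
    """Stand-in for an end-to-end run of the system (deterministic here)."""
    weights = np.array([0.2, 0.3, 0.5])
    return data @ weights

def test_expected_output():
    data = np.ones((4, 3))
    output = pipeline(data)
    # Validate the actual values of the output, not just that the call worked.
    expected = np.full(4, 1.0)  # 0.2 + 0.3 + 0.5 per row, recorded as known-good
    np.testing.assert_allclose(output, expected)
```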

AUTONOMIO STATS CAPABILITIES

Here our goal is to push deep learning implementation to the next level. The focus is to significantly expand the supported workflow compared to currently available platforms (Keras etc.). In practice this means extending the workflow to cover the things a user does just before and just after using a typical deep learning system. This effort will mainly consist of two separate parts:

– a deep learning based abstraction layer that automatically configures the model for optimal output (a minimal sketch follows this list)
– an abstraction layer that performs robust validation far beyond the means presented in current systems or literature
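
How Autonomio will implement this is not described here; as a minimal sketch of what “automatically configures the model for optimal output” could mean, one can scan a small space of candidate configurations and keep the one with the best validation score. The layer sizes, dropout values, and dataset are arbitrary, and the example assumes TensorFlow 2.x with its bundled Keras.

```python
# A minimal sketch (not Autonomio's actual implementation) of automatic model
# configuration: try a handful of candidate configurations and keep the one
# with the best validation accuracy. A real system would search far more
# intelligently than this brute-force scan.
import itertools
from tensorflow import keras  # assumes TensorFlow 2.x

def build_model(n_units, dropout, input_dim):
    """Build a small binary classifier from one candidate configuration."""
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        keras.layers.Dense(n_units, activation="relu"),
        keras.layers.Dropout(dropout),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def auto_configure(x, y, units_options, dropout_options):
    """Return the configuration with the best validation accuracy."""
    best_score, best_params = 0.0, None
    for n_units, dropout in itertools.product(units_options, dropout_options):
        model = build_model(n_units, dropout, input_dim=x.shape[1])
        history = model.fit(x, y, validation_split=0.3, epochs=10, verbose=0)
        score = max(history.history["val_accuracy"])
        if score > best_score:
            best_score, best_params = score, (n_units, dropout)
    return best_params, best_score

# Hypothetical usage:
# params, score = auto_configure(x_train, y_train, [8, 32, 128], [0.0, 0.25, 0.5])
```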

Philosophically speaking, the focus is on moving away from the idea of artificial intelligence or augmented intelligence, towards autonomous intelligence capability.

AUTONOMIO NON-STATS CAPABILITIES

Here our goal is to reduce barriers to everyday use of deep learning, state-of-the-art language processing, and particularly the seamless integration of the two. The research and development focused on this third aspect leverages means other than those commonly considered in deep learning technology development: namely workflow/process automation, visualization, various other UX factors, and the modernization of documentation into something that is easily accessible as part of common workflows without ever distracting advanced users.

– create a layperson-ready version of key aspects of the Keras documentation (losses, etc.)
– run a design thinking workshop to identify key data scientist needs
– screen record deep learning workflows and quantify the results

Pseudo-randomness is the opposite of randomness

How to Randomness

When we say “pick a random number”, the obvious problem is that true randomization is not possible, so you’re stuck with pseudo-randomization. And that is not all; there is another problem as well.

If I pick a random number between 0 and 100, your odds of guessing it are 1/100. If I pick a number from the range 0 to 1000 and tell you the range, your odds of guessing it are 1/1000. But what if I don’t tell you that the range is 0 to 1000? The number is still just one of 1000 integers, but you can’t know that, so for you it becomes as hard as guessing a number between 0 and infinity.
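
A tiny simulation of the first half of this argument: when the range is known, the chance of guessing a uniformly drawn number in one try is simply 1/N, so it shrinks as the range grows (the range sizes here are just illustrative).

```python
# Simulate the guessing odds described above: the chance of matching a
# uniformly drawn secret in a single guess is approximately 1/range_size.
import random

def guess_once(n_trials, range_size):
    """Fraction of trials where one uniform guess matches the secret number."""
    hits = sum(random.randrange(range_size) == random.randrange(range_size)
               for _ in range(n_trials))
    return hits / n_trials

for size in (100, 1000):
    print(size, guess_once(200_000, size))  # roughly 1/100 and 1/1000
```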

That’s the problem. How do you pick a number randomly from infinity?

To summarize: we have randomness, which is void of any tampering. It is the entire number space as it is. Actually, it’s wrong to even say “number space”, because that implies limits; indeed, it is infinity. Now let’s take a number out of that by some means. We’re cutting the entropy down from maximum (pure randomness) to something that is no longer random. We’ve introduced order where there was no order before. We’ve fundamentally altered the system by tampering with it, and now there is no randomness left.

There is still entropy, but that is not the same thing as “random”. Actually, entropy here is a measure of order and not a measure of randomness; it is a measure of how far we’ve come from randomness.

The Randomness Paradox

Because we try to create randomness out of randomness, we end up with the opposite of randomness, in varying degrees. The only way to “take a random number” is to apply some form of order, i.e. a lack of entropy, to a random system. The process is actually one of taking “order”, not of taking “a random number”. What we call pseudo-randomization is a process of adding order into randomness.
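
A concrete way to see this: a pseudo-random generator is a deterministic process, so seeding it is exactly the act of adding order, and the same seed reproduces the same “random” sequence every time.

```python
# Seeding a pseudo-random generator "adds order": the same seed always
# produces exactly the same sequence, so nothing truly random remains.
import random

random.seed(42)
first = [random.randint(0, 100) for _ in range(5)]

random.seed(42)
second = [random.randint(0, 100) for _ in range(5)]

print(first)
print(second)
print(first == second)  # True: the sequence is fully determined by the seed
```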

We can’t make randomness, or anything like it, because randomness is the actual natural state of everything. It is our language, and the methods based on language, such as mathematics, that reveal the relative side of the ultimate, which is the random state.