BRAVE NEW COMPUTING

Moore’s law gives us roughly twice the computational output every 18 months. More precisely, the transistor density of the processing unit doubles about every 18 months. That is essentially the pace of R&D Intel has committed to, and the company has done a good job, an excellent one in fact, of keeping up with the pace set out by its iconic founder, Gordon Moore. There is much talk about Moore’s law coming to its inevitable end, but that is not the real problem and should not be the focus of attention. This article is about the problem we should be focused on instead, and it proposes a very high-level solution, or at least a way of looking at the problem.
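
To put rough numbers on that pace, here is a minimal sketch, assuming only the 18-month doubling period as stated (and saying nothing about where the physical limits lie):

    # A rough sketch of the growth an 18-month doubling period implies
    # (the period is taken from the figure used in this article).
    def density_multiplier(years, doubling_period_years=1.5):
        return 2 ** (years / doubling_period_years)

    for years in (3, 6, 15):
        print(f"after {years:2d} years: ~{density_multiplier(years):,.0f}x")
    # after  3 years: ~4x
    # after  6 years: ~16x
    # after 15 years: ~1,024x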

Efficiency of computational output is intimately connected with progress in scientific research and technological advancement. Various fields of research contribute to advances in computing, and advances in computing contribute to advances across various fields of scientific research, namely those that benefit from computational output.

Unlike Intel’s commitment to systematic advancement at the hardware level of computing, no comparable effort exists within the software industry. The isolated exceptions that do exist are mostly lost in the noise. Instead we have adopted a model where continuous increases in output come at the cost of more resources. For some reason we have learned to associate scale with ever-increasing cost.

We’re Not Equipped to Write Code

The way we write code to tell the underlying electronic framework (the computer) what to do seems fundamentally wrong. In practice, we are rarely telling the hardware what to do. Instead we talk to a piece of code that talks to another piece of code that talks to yet another piece of code…

It is like talking to someone who speaks Japanese, but instead of having just one translator from Japanese to English, you have a translator from Japanese to Chinese, another from Chinese to Russian, one from Russian to Finnish, and then perhaps one who translates from Finnish to English. Even though you basically know this is what happened along the way, you still feel as if you understand each other directly.

It is very rare that any one individual understands all the different layers that sit between their code and the underlying hardware. With the advent of web development, many programmers are no longer intimately familiar with the principles of electronics and binary logic. That kind of thinking is mostly associated with the square-eyed nerds of the 70s, the time before nerds became “in”.
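
As a small, concrete illustration of those layers, consider a sketch assuming a standard CPython interpreter: even a one-line function is translated into bytecode before the interpreter, the operating system or the hardware ever get involved.

    import dis

    def add(a, b):
        # One line of "human" code...
        return a + b

    # ...is first compiled to interpreter bytecode, which is executed by a
    # C program, which was itself compiled to the machine instructions the
    # processor finally runs.
    dis.dis(add)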

The closer code sits to the hardware, the more efficient it can be. The hardware’s own operations represent the theoretical maximum level of efficiency, and the closer the code gets to that level, the more of that potential it can realise. That is the level where we end up doing only what we really have to: resources are used on a “need” rather than a “can” basis. Think of it as “just enough data” as opposed to “big data”.
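
A minimal sketch of the effect, again assuming a standard CPython interpreter: the same arithmetic can be timed once as an interpreted Python loop and once through the built-in sum, which runs inside the interpreter’s compiled C code, one layer closer to the machine.

    import timeit

    numbers = list(range(1_000_000))

    def loop_sum(values):
        # Every iteration passes through the interpreter's generic
        # bytecode machinery.
        total = 0
        for value in values:
            total += value
        return total

    loop_time = timeit.timeit(lambda: loop_sum(numbers), number=10)
    builtin_time = timeit.timeit(lambda: sum(numbers), number=10)

    print(f"interpreted loop: {loop_time:.3f} s")
    print(f"built-in sum:     {builtin_time:.3f} s")

The absolute numbers vary from machine to machine, but the gap between the two gives a small taste of how much work the layers above the hardware add to an operation that is trivial at the machine level.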

While this seems obvious, it is not our current mindset at all. We tend to go overboard with things and then correct them later, when they break down or the situation becomes unbearable for one reason or another. In many cases we no longer get the chance to really fix things once they break.

Currently all of this contributes to an unnecessary slowdown of progress. Most importantly, that slowdown comes at the cost of advances in scientific research and technological development.

A New Kind of Computer Program

In any computing solution, three different aspects meet: the concept, the use of hardware and the use of software. We humans come up with great concepts, and we are very good at executing them (even if inefficiently) to the point where the required output becomes available by means of computing.

Basically we know how to tell the machine what to do, but not how to do it efficiently. We typically lack understanding of the underlying hardware and are not capable of communicating intuitively in the language of binary machines.

In an ordinary consumer PC, billions of events take place every second. We are not good with speed, and we are even worse with many things happening simultaneously. Furthermore, we are terrible at seeing the actual causes and constituents that led to a particular thing or event; we mostly ignore all of that and generalise instead. The faculties required to efficiently manage the kind of complexity found in computational operations are simply not typical of our cognition.
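
To make “billions of events” concrete, a back-of-envelope sketch; the clock rate and core count here are illustrative assumptions, not measurements:

    # Assumed figures for an ordinary consumer PC; purely illustrative.
    clock_hz = 3.0e9   # a 3 GHz clock
    cores = 4          # four cores

    cycles_per_second = clock_hz * cores
    print(f"~{cycles_per_second:.1e} clock cycles per second")  # ~1.2e+10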

On the other hand, we are very good at mapping out processes and systems, and at coming up with improvements to them at a relatively high level. We are also very good at all kinds of training and, more generally, at passing on our knowledge and understanding. It is the machine itself that is ideally suited to managing the fundamental level of code (and coding).

The discussion about intelligence and machines does not have to take place in any kind of superiority context. On the contrary, it can be about helping machines become more responsive to our needs. If we look at the advances in the relevant fields of science and technology over the past decades, it seems more likely that we are heading towards co-existing with intelligent machines than towards becoming subordinate to machines with “greater than human intelligence”.

These “hardware agents” promise first to manage our code, with development inevitably taking them towards creating code and writing entire programs. In this way the machine can be equipped to help us solve our problems in previously unimagined ways, taking us to frontiers of science entirely unknown to us at this point in our development. In his seminal paper “Man-Computer Symbiosis”, the legendary master of computing, J. C. R. Licklider, made this very point over 50 years ago.

It is fair to say that the world of computing suffers from many inefficiencies. Based on some of the latest findings from our proprietary research, much more could be done to increase computational efficiency. We have every reason to believe that such a goal can be reached without any additional investment in hardware. The problem of computing currently seems to be more a matter of “too much” than of “too little”.

A plausible solution seems to be one where we work together with the machine until the machine is able to surpass our skill in writing code. It seems likely that in this equation, the conceptual work will remain our responsibility long into the future. Some of the higher levels of code might also be better written by us, thanks to our intuitive abilities with visual representations.

In this proposed model, we humans retain our role as creators of programs, while all of the hardware-level coding is done by the machine. All of the hardware-level maintenance routines are also taken care of by the machine. This will help us come up with currently unimaginable concepts for computer programs and execute them with ease. The machine will help us operate those programs at the theoretical maximum level of efficiency and reliability.

Absent the cognitive disadvantages also discussed in part in this article, the machine has the potential to write code we could not have dreamed of, providing us with the support we need to solve problems of science and technology, and eventually helping us solve problems we did not even know existed.

To embark on this journey of brand new computing, we start by carefully investigating the way computer programs are created and maintained, and the way those programs ultimately interact with the underlying hardware. We identify what is currently inferior in our situation, the causes that lead to it, the superior situation we strive towards, and how to reach that superior situation in practice.