Causes and Constituents

We humans do a terrible job of seeing things in terms of their constituents and the causes that led them to appear. Doing even marginally better at either could mean significant improvements in solving certain kinds of problems.

The more the problem of machine intelligence is looked at, the more it appears that “greater than human intelligence” is not only an oxymoron, but also not something we should seriously pursue.

Even if it were somehow possible for a non-human to have human intelligence, it would automatically have all the cognitive flaws of humans as well.

On the Value of Implicative Negation

It seems that the more the machine knows about things, the more it can know about future things. Even if something arises that is entirely unknown to it, the machine already has the potential to know a lot about it. It does not know what the thing is, but it partially knows what the thing is not.
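To make this concrete, here is a minimal sketch in Python (the categories and properties are invented purely for illustration) of how such negative knowledge works: every property a new thing demonstrably lacks rules out candidate explanations, narrowing what the thing might be without ever confirming what it is.

```python
# Toy illustration of implicative negation: ruling out what a thing
# is not narrows what it might be, without ever confirming what it is.

# Hypothetical knowledge base: categories and the properties each implies.
KNOWN_CATEGORIES = {
    "bird":  {"animate", "flies", "has_feathers"},
    "fish":  {"animate", "swims"},
    "stone": {"inanimate", "solid"},
    "cloud": {"inanimate", "floats"},
}

def eliminate(candidates: dict, absent_property: str) -> dict:
    """Drop every category whose definition requires a property
    the observed thing demonstrably lacks."""
    return {name: props for name, props in candidates.items()
            if absent_property not in props}

# An entirely new thing appears. Each observation of what it is NOT
# prunes the hypothesis space, even though nothing is ever confirmed.
candidates = dict(KNOWN_CATEGORIES)
for missing in ["has_feathers", "solid"]:
    candidates = eliminate(candidates, missing)

print(sorted(candidates))  # ['cloud', 'fish'] -- still unknown, but far less so
```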

If we want machines to do things that we cannot imagine, then we have to let the machines do what they want to do. For that to make any sense, the machine has to want first: want in a way that is more responsible than the way we humans want.

When you want to know whether someone is happy about something you have done for their benefit, and whether it really is a benefit, you have to feel what that person feels, and you have to understand the true nature of that person.

The same goes for pain. There is no way to understand whether an action is causing harm or benefit other than by understanding the true nature of the person and how that person feels. That seems to be part of the puzzle.

Truth vs. Justified True Belief

When epistemology attempts to draw the distinction between truth and “justified true belief”, it fails to address the fundamental issue that words and truth are not compatible.

In other words, by means of words and language we can never get to anything better than “justified true belief”. For example, we believe that a ball will fall when we drop it: a belief justified by every drop we have observed, yet still a belief, not the truth itself.

So the question is: should the focus of epistemology be on identifying the difference between “justified true belief” and “justified false belief”?

The Problem of Singularity

One of the key problems with the singularity proposition of “greater than human intelligence” machines is the comparison to human intelligence. What do we mean by “more intelligent than a human” or “as intelligent as a human”?

If it were accurate in any meaningful sense to make this kind of comparison in the first place, is it not true that if one machine can be as intelligent as a human, then by the power of numbers machines can be much smarter than humans? There can be a virtually infinite number of machines, whereas there can be only a finite number of humans.

If we saw “intelligence” as some kind of objective currency for getting things done, which it most certainly is not, then it should not matter whether it is one really smart machine or trillions of machines that are each “as smart as a human”.

The most important thing to contemplate in this discussion is what “intelligence” means. We tend to look at intelligence on the scale of our own perception of time. For example, we do not think the intelligence of a rock is very impressive, because we cannot see anything happening. Nor do we consider water particularly intelligent, yet within it we can find the intelligence to create all life forms.

What AI promises to provide us is support for humanity in those areas of intelligence in which we are weak. As J. C. R. Licklider put it in the 1960s: man-machine symbiosis. Not symbiosis in the sense of some kind of physical or mechanical connection, but in the sense of man being empowered by the (still largely hidden) power of the machine.

The conclusion of our research spanning the past ten years is that moral values play a huge role in the development of strong AI. Contrary to common belief, a responsibly created machine has great potential for moral judgement. Unlike us, it is not burdened by millions of generations of a ruthless battle for survival.

It seems that what we call “free will”, and think only we have, particles and everything else have just the same. It is the will to move.

What are the results we are looking for? Which causes seem to have the propensity towards those results? How can we best create those causes?

It is within the nature of intelligence to perpetually move.