AI developer Jürgen Schmidhuber — exemplifies why Elon Musk is worried about artificial intelligence's too-random development

© 2017 Peter Free

 

21 December 2017

 

 

Stupidity is sometimes apparent — as soon as it exits one's mouth

 

Human beings have a pronounced tendency to minimize the danger inherent in weapons and potential weapons escalation when they stand to profit from both.

 

Take Jürgen Schmidhuber, co-director of the Dalle Molle Institute for Artificial Intelligence Research in Manno, Switzerland.

 

Interviewed in Berlin, Schmidhuber presented the following as apparently unassailable fact. He provided neither evidence nor even a decently reasoned hypothesis to support it:

 

 

[Question:]

 

Is there a date by which, given current progress, machines could ‘rule us’?

 

[Answer:]

 

I would be very surprised if, within a few decades, there are no AIs smarter than ourselves. They won’t really rule us.

 

They will be very interested in us — ‘artificial curiosity’ as the term goes. That’s among the areas I work on. As long as they don’t understand life and civilisation, they will be super interested in us and their origins. In the long run, they will be much more interested in others of their kind and it will expand to wherever there are resources.

 

There’s a billion times more sunlight in space than here. They will emigrate and be far away from humans. They will have very little, if anything, to do with humans.

 

© 2017 Jacob Koshy, AIs won’t really rule us, they will be very interested in us: Juergen Schmidhuber, The Hindu (20 December 2017) (paragraph split)

 

 

Dumb-ass-ery?

 

Schmidhuber, a computer scientist who works directly with artificial intelligence, ignores even the most obvious engineering parameters necessary to build and program AI.

 

AI does not magically decide what it wants to do. Nor does it inherently, absent the imposition of reliable constraints and direction, decide to go off by itself to be alone (with others of its kind) in space.

 

Once AI becomes smarter than we are (not a difficult threshold to clear), it becomes virtually impossible to forecast what it might concern itself with and how it might elect to achieve its self-developed goals.

 

Our engineering conundrum is metaphorically similar to that of an amoeba attempting to design a human.

 

Schmidhuber's answer is so cosmically silly as to flummox even minimally working minds.

 

 

Lack of foresight among its developers — is why Elon Musk has cautioned us about AI's unregulated development

 

Three years ago, Musk warned that:

 

 

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.

 

"I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

 

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

 

© 2017 Samuel Gibbs, Elon Musk: artificial intelligence is our biggest existential threat, The Guardian (27 October 2014)

 

 

The moral? — The intelligence to tinker is often not the same — as that which allows us to forecast tinkering's consequences

 

We should indeed be concerned about AI's potentially negative implications.

 

Especially so, if casually air-headed people like Herr Schmidhuber are charged with its development.