3-minute read - by Sander Hofman, December 22, 2015
The exponential growth in computing power described by Moore’s Law is enabling the development of smarter robots, self-driving cars and autonomous drones. But gloomy scenarios in popular culture fuel the fear that the continued development of artificial intelligence (AI) might one day end human civilization at the hands of our own superintelligent creation. And if Stephen Hawking, Bill Gates and Elon Musk worry about it, why shouldn’t we?
“If superintelligence takes over, a 'Moore’s Law company' like ASML would surely carry part of the blame,” Pim Haselager says with a wink. But the associate professor of artificial intelligence at Radboud University in Nijmegen, the Netherlands, is quick to add: “Let’s not get paranoid. Even though computational speed might increase rapidly for many years to come, it’s highly questionable whether our capacity to build better models of cognition will increase at the same pace. There’s a famous quote in AI land: ‘If our brain was so simple that we could understand it, we would be so stupid that we couldn’t.’”
In November 2015, Haselager spoke to a packed auditorium on the ASML campus in Veldhoven, the Netherlands. Some 150 scientists and engineers had come to hear him discuss Nick Bostrom’s book ‘Superintelligence’ in an ASML Tech Talk, part of a series of science- and technology-focused talks that has run since 2009.
“What is clear is that the question about the relationship between artificial intelligence and human intelligence is becoming inevitable,” Haselager says, showing a graph of what Bostrom calls the ‘intelligence explosion’.