Jotting this down because I think it’s an idea worth exploring. I was considering adding it to my talk for Thursday, but it doesn’t really fit.
I was going to mention that a great resource for re-learning how to communicate is Nonviolent Communication by Marshall Rosenberg, and that got me pondering Elon Musk’s recent comments on being wary of AI.
So the idea is this: how do we think we’re going to design benevolent AI when the language of our culture is war? Meaning: we’ve been at war for all of my adult life, about 20 years. We use adversarial language to communicate. If AI is going to be programmed by us, and this is the way we communicate, why do we think it won’t turn out like us?
The other thought is that maybe we should think about AI more like nuclear weapons. We haven’t had a nuclear war because we know the destructive scale would be immense, and mutually assured destruction keeps anyone from turning to nuclear weapons first. The issue with this comparison is that the destruction from a nuclear weapon is tangible: physical things go away. With AI, the changes would be behavioral and slower moving, and we could pass a point beyond which we no longer understand the outcomes of our decisions.
Some links:
http://www.livescience.com/48481-elon-musk-artificial-intelligence-threat.html
Slate articles have gotten so terrible it’s not even funny.
The comments in this one are better than the actual article.
“I don’t need your civil war.”