Bezos said there ought to be some oversight. "I don't know what the solution should be," he said, "but smart people need to be thinking about that."
It could perhaps be modeled after the Geneva Conventions, Bezos suggested, the internationally recognized treaties protecting medical workers and prisoners of war in conflict zones.
"It would have to be a big treaty," he said — "something that would help regulate these weapons, because they're actually, they have a lot of issues.
"So that one I think is genuinely scary," reiterated Bezos.
On the other hand, Bezos is not concerned that some omnipotent artificial intelligence will conquer humans.
"The idea that there is going to be a general AI overlord that subjugates us or kills us all, I think, is not something to worry about. I think that is overhyped," said Bezos.
"First of all, we are nowhere close to knowing how to build a general AI — something that could set its own objectives," the Amazon CEO said.
Right now we have "narrow AI," explained Bezos, where machine intelligence assists in a specific task. Billionaire tech titan Elon Musk has made a similar distinction: Narrow, functional artificial intelligence is what is used in self-driving cars, according to Musk, while general AI "literally has a million times more compute power and an open-ended utility function."
Further, continued Bezos, "I think it is unlikely that such a thing's first instincts would be to exterminate us. That would seem surprising to me."
Instead, Bezos said it is "much more likely it will help us. ... We are perfectly capable of hurting ourselves. We could use some help. So I am optimistic about that one and certainly don't think we need to worry about it today."
In contrast, Tesla and SpaceX CEO Musk said at the South by Southwest festival in Austin, "We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one."
And according to the late, legendary physicist Stephen Hawking, "Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," he said in November. "Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."