Elon Musk, DeepMind founders, and other tech leaders sign pledge not to develop lethal AI weapons [US media]



A remote-controlled robot equipped with a machine gun under development by the United States Marine Corps.

Tech leaders, including Elon Musk and the three co-founders of Google’s AI subsidiary DeepMind, have signed a pledge promising to not develop “lethal autonomous weapons.”

It’s the latest move from an unofficial and global coalition of researchers and executives that’s opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to “[select] and [engage] targets without human intervention” pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life “should never be delegated to a machine.” On the pragmatic front, they say that the spread of such weaponry would be “dangerously destabilizing for every country and individual.”

The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and it was organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity. The institute has previously helped issue letters from some of the same individuals, calling on the United Nations to consider new regulations for what are known as lethal autonomous weapons, or LAWS. This, however, is the first time those involved have pledged individually to not develop such technology.



So far, attempts to muster support for the international regulation of autonomous weapons have been ineffectual. Campaigners have suggested that LAWS should be subject to restrictions, similar to those placed on chemical weapons and landmines. But note that it’s incredibly difficult to draw a line between what does and does not constitute an autonomous system. For example, a gun turret could target individuals but not fire on them, with a human “in the loop” simply rubber-stamping its decisions.

They also point out that enforcing such laws would be a huge challenge, as the technology to develop AI weaponry is already widespread. Additionally, the countries most involved in developing this technology (like the US and China) have no real incentive not to do so.



AI is already being developed to analyze video footage from military drones.

Paul Scharre, a military analyst who has written a book on the future of warfare and AI, told The Verge that the pledge was unlikely to have an effect on international policy, and that such documents did not do a good enough job of teasing out the intricacies of this debate. “What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons,” said Scharre.

He also added that most governments were in agreement with the pledge’s main promise — that individuals should not develop AI systems that target individuals — and that the “cat is already out of the bag” on military AI used for defense. “At least 30 nations have supervised autonomous weapons used to defend against rocket and missile attack,” said Scharre. “The real debate is in the middle space, which the press release is somewhat ambiguous on.”

However, while international regulations might not be coming anytime soon, recent events have shown that collective activism like today’s pledge can make a difference. Google, for example, was rocked by employee protests after it was revealed that the company was helping develop non-lethal AI drone tools for the Pentagon. Weeks later, it published new research guidelines, promising not to develop AI weapon systems. A threatened boycott of South Korea’s KAIST university had similar results, with KAIST’s president promising not to develop military AI “counter to human dignity including autonomous weapons lacking meaningful human control.”

In both cases, it’s reasonable to point out that the organizations involved are not stopping themselves from developing military AI tools with other, non-lethal uses. But a promise not to put a computer solely in charge of killing is better than no promise at all.


OkBTC
Thank you to those who signed, but I’m most concerned about those who won’t sign, and who work in secret to develop AGI without giving any thought to creating friendly AI. Good start though.



GreatNorthWeb
...and then we will be forced to create our own autonomous killers.

candleboy_
Human operators will never go out of style though. Drones are susceptible to EMP, humans are not.

BreakdancingMammal
I'm pretty sure they can EMP-proof a drone. Panasonic Toughbooks have been EMP proof for a while now.

G4ME
It’s the atomic bomb all over again.

Azonata
And once someone has it, everyone will get it. Do people really believe that the US military will let ethics stand in the way of losing out to China and Russia?

abedfilms
There is zero possibility the US military would wait until someone else has it before developing their own. Of course they are developing it, even if in secret.

gqtrees
Exactly. Elon and all these guys can say we won’t do it... but somewhere on a military base, some R&D group has some of the top talent coding away on exactly this. Most likely.

xtense
Around 1890-ish, the czar of Russia tried the same thing: signing a pact stating that everyone was quite pleased with the level of destruction weapons had reached. He was pushing this idea because Russia was almost bankrupt and didn’t have money to spend on weapons technology development. 25-ish years later, we know what happened anyway. You can’t stop progress, be it military or technological. Your best bet is to educate people to understand the impact the misuse of technology has had throughout history.

griffdog82
Yeah, after the Biological Weapons Convention, Russia still managed to produce three tons of smallpox.
I doubt a treaty can magically stop this.




Demigod787
Honestly, I doubt this would stop governments like the US, China, or Israel from incorporating AI into their military sectors. It’s a fact that once a country starts using weapons with AI, they’d have an overwhelming advantage. Which country can accept being threatened like that?
