Artificial intelligence: it can sometimes hallucinate [US media]

The passenger noticed the stop sign, but the car he was riding in kept accelerating, and panic set in. He saw a train hurtling towards him and shouted to the driver in the front seat, before remembering that the car had no driver. The train slammed into the driverless car at 120 miles per hour, killing him instantly.

This is an imagined scene, but it reflects a real flaw in today's AI. In recent years, machines have increasingly been shown to suffer audiovisual illusions: when their recognition systems are disturbed by carefully crafted "noise", they hallucinate. In the worst case, those hallucinations could be as dangerous as the scenario above, in which a stop sign that is obvious to any human goes unrecognised by the machine.



More recently, Athalye and his colleagues turned their attention to physical objects. By slightly tweaking the texture and colouring of these, the team could fool the AI into thinking they were something else. In one case a baseball was misclassified as an espresso, and in another a 3D-printed turtle was mistaken for a rifle. They were able to produce some 200 other examples of 3D-printed objects that tricked the computer in similar ways. As we begin to put robots in our homes, autonomous drones in our skies and self-driving vehicles on our streets, this starts to throw up some worrying possibilities.
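Attacks of this kind are typically built by following the model's own gradients. A minimal sketch of the idea, using a made-up linear classifier in place of a real image network (the weights, input size and epsilon below are all illustrative assumptions, not the team's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: a linear model over a
# flattened 32x32x3 "image". score > 0 means "stop sign".
# (Entirely hypothetical weights -- nothing here is trained.)
d = 3072
w = rng.normal(size=d)

def predict(x):
    return "stop sign" if x @ w > 0 else "other"

# A clean input the model classifies confidently.
x_clean = w / np.linalg.norm(w)

# Fast-gradient-sign-style attack: for a linear model, the gradient
# of the score with respect to the input is simply w, so nudge every
# pixel by at most epsilon against that gradient.
epsilon = 0.05
x_adv = x_clean - epsilon * np.sign(w)

print(predict(x_clean))   # "stop sign"
print(predict(x_adv))     # "other" -- the small nudge flips the label
```

For a deep network the gradient would come from backpropagation rather than being the weight vector itself, but the principle is the same: many small, coordinated pixel changes add up to a large change in the score.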

“At first this started off as a curiosity,” says Athalye. “Now, however, people are looking at it as a potential security issue as these systems are increasingly being deployed in the real world.”




To Carlini, such adversarial examples “conclusively prove that machine learning has not yet reached human ability even on very simple tasks”.


Under the skin


Neural networks are loosely based on how the brain processes visual information and learns from it. Imagine a young child learning what a cat is: as they encounter more and more of these creatures, they will start noticing patterns – that this blob called a cat has four legs, soft fur, two pointy ears, almond shaped eyes and a long fluffy tail. Inside the child’s visual cortex (the section of the brain that processes visual information), there are successive layers of neurons that fire in response to visual details, such as horizontal and vertical lines, enabling the child to construct a neural ‘picture’ of the world and learn from it.


Neural networks work in a similar way. Data flows through successive layers of artificial neurons until after being trained on hundreds or thousands of examples of the same thing (usually labelled by a human), the network starts to spot patterns which enable it to predict what it is viewing. The most sophisticated of these systems employ ‘deep-learning’ which means they possess more of these layers.

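The flow of data through successive layers can be sketched in a few lines of NumPy. This is a hypothetical, untrained toy (the layer sizes and random weights are invented for illustration), not any production classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A miniature "deep" network: three layers of artificial neurons.
# Real image classifiers work the same way, just with millions of
# weights learned from labelled examples.
layer_sizes = [784, 128, 32, 10]   # e.g. a 28x28 image -> 10 classes
weights = [rng.normal(scale=0.1, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Data flows through successive layers; each layer fires (via
    # ReLU) in response to patterns in the previous layer's output.
    for w in weights[:-1]:
        x = relu(w @ x)
    return softmax(weights[-1] @ x)   # class probabilities

x = rng.random(784)                   # stand-in for a flattened image
probs = forward(x)
print(probs.argmax())                 # the class the network "sees"
```

Training would repeatedly adjust `weights` on labelled examples until the output probabilities match the labels; "deep learning" simply means stacking more such layers.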



“Definitely it is a step in the right direction,” says Madry. While this approach does seem to make frameworks more robust, it probably has limits as there are numerous ways you could tweak the appearance of an image or object to generate confusion.


A truly robust image classifier would replicate what ‘similarity’ means to a human: it would understand that a child’s doodle of a cat represents the same thing as a photo of a cat and a real-life moving cat. Impressive as deep learning neural networks are, they are still no match for the human brain when it comes to classifying objects, making sense of their environment or dealing with the unexpected.


If we want to develop truly intelligent machines that can function in real world scenarios, perhaps we should go back to the human brain to better understand how it solves these issues.


Binding problem




In their desire to keep things simple, engineers building artificial neural frameworks have ignored several properties of real neurons – the importance of which is only beginning to become clear. Neurons communicate by sending action potentials or ‘spikes’ down the length of their bodies, which creates a time delay in their transmission. There’s also variability between individual neurons in the rate at which they transmit information – some are quick, some slow. Many neurons seem to pay close attention to the timing of the impulses they receive when deciding whether to fire themselves.

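The timing sensitivity described above can be illustrated with a textbook leaky integrate-and-fire model, a standard simplification of a spiking neuron (the constants and inputs below are arbitrary assumptions, not any researcher's actual model):

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane potential
# leaks back towards rest, integrates incoming current, and emits a
# 'spike' when it crosses threshold -- so *when* inputs arrive
# matters, not just how much input arrives overall.
def simulate(input_current, dt=1.0, tau=10.0, threshold=1.0):
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leak + integration
        if v >= threshold:
            spikes.append(t)          # record the spike time (in steps)
            v = 0.0                   # reset after firing
    return spikes

steps = 100
# The same total input, delivered with different timing:
clustered = np.zeros(steps); clustered[10:15] = 0.5   # 5 pulses together
spread = np.zeros(steps);    spread[::20] = 0.5       # 5 pulses spread out

print(simulate(clustered))  # clustered pulses drive the neuron to fire
print(simulate(spread))     # the same charge, spread out, leaks away
```

Five closely spaced pulses push the potential over threshold and produce a spike, while the same five pulses spread out leak away before they can accumulate; an artificial neuron that simply sums weighted inputs ignores this distinction.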

“Artificial neural networks have this property that all neurons are exactly the same, but the variety of morphologically different neurons in the brain suggests to me that this is not irrelevant,” says Jeffrey Bowers, a neuroscientist at the University of Bristol who is investigating which aspects of brain function aren’t being captured by current neural networks.




“Our hypothesis is that the feature binding representations present in the visual brain, and replicated in our biological spiking neural networks, may play an important role in contributing to the robustness of biological vision, including the recognition of objects, faces and human behaviours,” says Stringer.


Stringer’s team is now seeking evidence for the existence of such neurons in real human brains. They are also developing ‘hybrid’ neural networks that incorporate this new information to see if they produce a more robust form of machine learning.


“Whether this is what happens in the real brain is unclear at this point, but it is certainly intriguing, and highlights some interesting possibilities,” says Bowers.




“It is becoming ever clearer that the way the brain works is quite different to how our existing deep learning models work,” he says. “So, this indeed might end up being a completely different path to achieving success. It is hard to say how viable it is and what the timeframe needed to achieve success here is.”


In the meantime, we may need to avoid placing too much trust in the AI-powered robots, cars and programmes that we will be increasingly exposed to. You just never know if it might be hallucinating.

