By Julia Bossmann (Chinese translation by Zhou Zhen)
Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.
[2] Tech giants such as Alphabet¹, Amazon, Facebook, IBM and Microsoft—as well as individuals like Stephen Hawking and Elon Musk²—believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?

¹ Alphabet Inc. is a holding company based in California, USA. The company was formerly Google; after the restructuring, Google became its largest subsidiary.
² Elon Musk is CEO and lead designer of SpaceX, and is known for co-founding Tesla Motors and PayPal.
[3] The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the preindustrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.
[4] Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.
[5] This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.
[6] If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.
[7] Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
[8] We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.
[9] The more powerful a technology becomes, the more it can be used for nefarious purposes as well as good ones. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.
[10] It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle”³ that can fulfill wishes, but with terrible unforeseen consequences.

³ In English, “let the genie out of the bottle” is itself a metaphor for allowing something evil to happen that cannot then be stopped.
[11] In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer—by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.
[12] The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.
[13] This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.
[14] While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
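The dog-training analogy can be made concrete with a minimal reward-learning sketch. Everything here is illustrative rather than from the article: the action names, reward values and the simple bandit-style update are assumptions chosen to show the core idea that rewarded behaviour is repeated more often.

```python
import random

def train(rewards, episodes=2000, epsilon=0.1, seed=0):
    """Toy reinforcement learning in the dog-training spirit:
    actions that earn a (virtual) reward get chosen more often.

    `rewards` maps each action name to the reward it yields;
    rewards are assumed deterministic to keep the sketch short."""
    rng = random.Random(seed)
    actions = list(rewards)
    value = {a: 0.0 for a in actions}   # learned estimate per action
    count = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < epsilon:      # explore a random action...
            a = rng.choice(actions)
        else:                           # ...or exploit the best so far
            a = max(actions, key=value.get)
        r = rewards[a]                  # the virtual reward
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # incremental average
    return max(actions, key=value.get)

print(train({"sit": 1.0, "bark": 0.2, "roll over": 0.5}))  # → sit
```

After enough episodes the agent settles on the action with the highest reward, much as a dog settles on the trick that reliably earns a treat.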
[15] Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
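The survive-combine-delete cycle described above can be sketched as a toy genetic algorithm. The fitness function (counting 1-bits), the population size and the mutation rate are all illustrative assumptions, not details from the article.

```python
import random

def evolve(n_bits=12, pop_size=20, generations=60, seed=1):
    """Toy genetic algorithm. Many instances exist at once; each
    generation only the fittest half "survive" and recombine into
    new children, while the unsuccessful instances are deleted.
    Fitness here is simply the number of 1-bits, an illustrative
    stand-in for any measure of success."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)      # rank by fitness
        survivors = pop[: pop_size // 2]     # the rest are deleted
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)  # pick two parents
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]        # single-point crossover
            if rng.random() < 0.25:          # occasional mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

best = evolve()
print(sum(best))  # fitness of the fittest surviving instance
```

Over the generations the population's best fitness can only rise, because survivors carry over unchanged; the "deleted" instances are precisely the ones the essay asks us to think about.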
[16] Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?
[17] Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us. ■