
Want to Get Humans to Trust Robots? Let Them Dance 人机共舞，建立信任

2024-02-19 11:42:53　萨姆·琼斯/文　袁峰/译
英语世界 2024年2期


    A performance with living and mechanical partners can teach researchers how to design more relatable1 bots.

人机搭档表演能教会研究人员如何设计更让人认同的机器人。

A dancer shrouded in shades of blue rises to her feet and steps forward on stage. Under a spotlight, she gazes at her partner: a tall, sleek2 robotic arm. As they dance together, the machine’s fluid movements make it seem less stereotypically3 robotic—and, researchers hope, more trustworthy.

舞者从深浅不一的蓝色光晕中站起身来，走上舞台。聚光灯下，她凝视着舞伴——一架颀长、优美的机械臂。人机共舞时，机械臂动作流畅，看起来并不刻板机械，研究人员希望这也会让它看上去更可靠。

    “When a human moves one joint, it isn’t the only thing that moves. The rest of our body follows along,” says Amit Rogel, a music technology graduate researcher at Georgia Institute of Technology. “There’s this slight continuity that almost all animals have, and this is really what makes us feel human in our movements.” Rogel programmed this subtle follow-through4 into robotic arms to help create FOREST, a performance collaboration between researchers at Georgia Tech, dancers at Kennesaw State University and a group of robots.

“当人活动关节时，不只是关节在动，身体其他部位也顺势而动。”佐治亚理工学院音乐技术研究生研究员阿米特·罗杰尔说，“几乎所有动物都具有这种细微的动作连贯性，而这确实让我们感觉自己是人而非机器。”罗杰尔将这种微妙的顺势动作编入机械臂的程序，助力创作“福雷斯特”——佐治亚理工学院研究人员、肯尼索州立大学舞者和一组机器人三方协作的表演项目。
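The “slight continuity” Rogel describes—joints further down the body trailing the one that leads—can be sketched as a simple per-joint easing pass. The function name, gains, and decay factor below are hypothetical illustration only, not the FOREST project’s actual controller:

```python
def follow_through(targets, current, alpha=0.4, decay=0.7):
    """One control tick of a joint chain: each joint eases toward its
    target angle, and joints further down the chain respond with a
    progressively smaller gain, so the rest of the 'body' lags slightly
    behind the leading joint -- the whole-body continuity described above.
    (Illustrative sketch; gains and decay are made-up values.)"""
    new_angles = []
    for i, (target, cur) in enumerate(zip(targets, current)):
        gain = alpha * (decay ** i)  # downstream joints lag more
        new_angles.append(cur + gain * (target - cur))
    return new_angles
```

Calling this once per control tick lets a sudden target change ripple smoothly down the arm instead of snapping every joint at once.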

The goal is not only to create a memorable performance, but to put into practice what the researchers have learned about building trust between humans and robots. Robots are already widely used, and the number of collaborative robots—which work with humans on tasks such as tending factory machines and inspecting manufacturing equipment—is expected to climb significantly in the coming years. But although they are becoming more common, trust in them is still low—and this makes humans more reluctant to work with them. “People may not understand how the robot operates, nor what it wants to accomplish,” says Harold Soh, a computer scientist at the National University of Singapore. He was not involved in the project, but his work focuses on human-robot interaction and developing more trustworthy collaborative robots.

该项目的目标不仅是创作令人难忘的表演，而且还要将研究人员对建立人类与机器人互信的认识付诸实践。机器人已得到广泛应用，与人类协同执行照看工厂机器和检查制造设备等任务的协作机器人的数量有望在未来几年大幅攀升。但尽管机器人日益常见，人们对它们的信任度却仍很低，因而越加不愿意与其协作。“人们可能不了解机器人如何运作，也不明白它想要完成什么任务。”新加坡国立大学计算机科学家苏顺鸿表示。他虽未参与该项目，但其工作侧重于人类与机器人交互和开发更值得信赖的协作机器人。

    Although humans love cute fictional machines like R2-D2 or WALL-E, the best real-world robot for a given task may not have the friendliest looks, or move in the most appealing way. “Calibrating5 trust can be difficult when the robot’s appearance and behavior are markedly different from humans,” Soh says. However, he adds, even a disembodied6 robot arm can be designed to act in a way that makes it more relatable7. “Conveying emotion and social messages via a combination of sound and motion is a compelling approach that can make interactions more fluent and natural,” he explains.

人类喜爱R2-D2或WALL-E之类可爱的科幻机器人，但现实世界中执行特定任务的最佳机器人未必是外表最友善或动作最迷人的。“当机器人的外表和行为与人类迥然不同时，难以通过调试建立信任。”苏顺鸿说，但他又指出，即便是无躯体的机械臂，也可设计得行为举止更让人认同。他解释说：“通过声音与动作相结合的方式表达情感和传达社交信息具有说服力，能使交互更加顺畅、自然。”

    That’s why the Georgia Tech team decided to program nonhumanoid8 machines to appear to convey emotion, through both motion and sound. Rogel’s latest work in this area builds off years of research. For instance, to figure out which sounds best convey specific emotions, Georgia Tech researchers asked singers and guitarists to look at a diagram called an “emotion wheel,” pick an emotion, and then sing or play notes to match that feeling. The researchers then trained a machine learning model—one they planned to embed in the robots—on the resulting data set. They wanted to allow the robots to produce a vast range of sounds, some more complex than others. “You could say, ‘I want it to be a little bit happy, a little excited and a little bit calm,’” says project collaborator Gil Weinberg, director of Georgia Tech’s Center for Music Technology.

正因如此，佐治亚理工学院团队决定为非人形机器编制程序，使其看似能通过动作和声音表达情感。罗杰尔在这一领域的最新工作建立在多年研究的基础上。例如，为了弄清哪些声音最能表达特定情感，佐治亚理工学院研究人员让多名歌手和吉他手查看“情感轮盘”示意图，挑选一种情感，然后咏唱或演奏匹配的乐音来表达该情感。然后，研究人员运用由此获得的数据集训练一个机器学习模型——他们计划将该模型嵌入机器人。他们想让机器人发出各种各样的声音，其中一些声音比其他声音更复杂。佐治亚理工学院音乐技术中心主任、项目协作者吉尔·温伯格说：“你可以说，‘我希望它有些许快乐、些许兴奋、些许平静。’”
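Weinberg’s “a little bit happy, a little excited and a little bit calm” suggests blending several emotions into one set of sound parameters. A minimal sketch of such a blend is below; the preset numbers and parameter names are invented for illustration (the project’s real mapping was learned from the singers’ emotion-wheel recordings):

```python
# Hypothetical emotion-to-sound presets (made-up values for illustration;
# not the learned model described in the article).
PRESETS = {
    "happy":   {"pitch": 1.2, "tempo": 1.3, "brightness": 0.8},
    "excited": {"pitch": 1.4, "tempo": 1.6, "brightness": 0.9},
    "calm":    {"pitch": 0.9, "tempo": 0.7, "brightness": 0.4},
}

def blend_emotions(weights):
    """Mix several emotions ('a little happy, a little excited, a little
    calm') into one set of synthesis parameters via a weighted average."""
    total = sum(weights.values())
    params = {"pitch": 0.0, "tempo": 0.0, "brightness": 0.0}
    for emotion, weight in weights.items():
        for key in params:
            params[key] += (weight / total) * PRESETS[emotion][key]
    return params
```

A calm-heavy mixture then yields a slower, darker sound than an excited-heavy one, giving a continuous range of expression rather than four fixed categories.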

    Next, the team worked to tie those sounds to movement. In 2020, the researchers had demonstrated that combining movement with emotion-based sound improved trust in robotic arms in a virtual setting (a requirement fostered by the pandemic). But that experiment only needed the robots to perform four different gestures to convey four different emotions. To broaden a machine’s emotional-movement options for his new study, which has been conditionally accepted for publication in Frontiers in Robotics and AI, Rogel waded through9 research related to human body language. “For each one of those body language [elements], I looked at how to adapt that to a robotic movement,” he says. Then, dancers affiliated with Kennesaw State University helped the scientists refine those movements. As the performers moved in ways intended to convey emotion, Rogel and fellow researchers recorded them with cameras and motion-capture suits, and subsequently generated algorithms so that the robots could match those movements. “I would ask [Rogel], ‘can you make the robots breathe?’ And the next week, the arms would be kind of ‘inhaling’ and ‘exhaling,’” says Kennesaw State University dance professor Ivan Pulinkala.

接下来，研究团队将这些声音与动作结合起来。2020年，研究人员证明，将动作与情感性声音相结合，增进了人们在虚拟环境中对机械臂的信任（这是疫情催生的要求）。但这项实验只需机器人做出四种不同手势来表达四种不同情感。罗杰尔费力研读与人类肢体语言相关的研究，从而在自己的新研究中拓宽了机器的情感动作选项。该研究已被《机器人与人工智能前沿》杂志拟录用。他表示：“对于其中每一种肢体语言[元素]，我都在研究如何使其适配机器人动作。”随后，肯尼索州立大学的舞者们协助科研人员优化了这些动作。当表演者舞动传情时，罗杰尔与其他研究人员用相机和动作捕捉服予以记录，然后生成算法，以便机器人能适配这些动作。“我会问（罗杰尔），‘你能让机器人呼吸吗？’于是在下一周，机械臂就会做‘吸气’‘呼气’动作。”肯尼索州立大学舞蹈学教授伊万·普林卡拉说。

    Pulinkala choreographed10 the FOREST performance, which put into practice what the researcher-dancer team learned about creating and deploying emotion-based sounds and movements. “My approach was to kind of breathe a sense of life into the robots and have the dancers [appear] more ‘mechanized,’” Pulinkala says, reflecting on the start of the collaboration. “I asked, ‘How can the robots have more emotional physicality11? And how does a dancer then respond to that?’”

普林卡拉编排了这场“福雷斯特”表演，将研究人员与舞者联合团队在创作和运用富于情感的声音和动作方面的认识付诸实践。“我的做法是给机器人注入生命感，而让舞者[显得]更具‘机械感’。”普林卡拉回想合作之初时说，“我自问，‘机器人如何才能有更多的情感性体征？舞者对此又作何回应？’”

    According to the dancers, this resulted in machines that seemed a little more like people. Christina Massad, a freelance professional dancer and an alumna of Kennesaw State University, recalls going into the project thinking she would be dancing around the robots—not with them. But she says her mindset shifted as soon as she saw the fluidity of the robots’ movements, and she quickly started viewing them as more than machines. “In one of the first rehearsals, I accidentally bumped into one, and I immediately told it, ‘Oh my gosh, I’m so sorry,’” she says. “Amit laughed and told me, ‘It’s okay, it’s just a robot.’ But it felt like more than a robot.”

舞者们说，这使机器看上去更有点像人了。克里斯蒂娜·马萨德是一名自由职业舞者，也是肯尼索州立大学校友。她记得加入这个项目时，还以为自己会围着机器人跳舞，而不是与其共舞。但她说，看到机器人动作流畅，她的看法顿时改变，随即不再将它们视为单纯的机器。“在最初的一次排练中，我不小心撞到一个机器人，立马就对它说，‘天哪，真对不起。’”她说，“阿米特笑着对我说，‘没事，它只是个机器人。’可它给人的感觉却不仅仅是机器人。”

    Soh says he finds the performance fascinating and thinks it could bring value to the field of human-robot relationships. “The formation and dynamics of trust in human-robot teams is not well-understood,” he says, “and this work may shed light on the evolution of trust in teams.”

苏顺鸿说，他觉得这场表演引人入胜，认为它或可为人类与机器人关系研究领域带来价值。“人们尚未充分了解人类与机器人组合中信任的形成和发展变化。”他说，“这项工作可使人进一步了解人机组合中信任的演化。”

（译者为“《英语世界》杯”翻译大赛获奖者。）

1 relatable 能让人认同的，能让人产生共鸣的。
2 sleek 线条流畅的，造型优美的。
3 stereotypically 模式化地，刻板地。
4 follow-through 顺势动作。
5 calibrate 调谐，调适。
6 disembodied 脱离躯体的；由看不见的人发出的。
7 relatable 可明白的，可理解的。
8 nonhumanoid 非人形的，非类人的。
9 wade through 艰难地处理，费力地阅读。
10 choreograph 设计舞蹈动作，编舞。
11 physicality 身体特征，肉体性。
