A Human Responds to a Robot’s Essay

What does GPT-3’s AI-generated op-ed teach us about ourselves? The answers are in the subtext.

Well, readers, it finally happened. I’ve been replaced by a robot.

Last week, The Guardian published an essay “written” by GPT-3, OpenAI’s new language generator. According to the news outlet, “GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it.” The Guardian prompted “the robot” to write a short op-ed about why humans have nothing to fear from AI. Then it compiled and edited a handful of GPT-3’s responses and published the resulting essay under the taunting headline: “A robot wrote this entire article. Are you scared yet, human?”
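
For readers curious about the mechanics behind “it takes in a prompt, and attempts to complete it,” here is a minimal sketch of what that workflow might have looked like, assuming the legacy openai Python client that exposed GPT-3 in 2020. The prompt wording, model name, and sampling parameters below are illustrative guesses, not The Guardian’s actual configuration.

```python
# Minimal sketch: a prompt goes in, candidate completions come out,
# and humans do the compiling and editing. Written against the legacy
# pre-1.0 `openai` client; all settings here are assumptions.
import openai

openai.api_key = "sk-..."  # placeholder; a real key would come from config

prompt = (
    "Write a short op-ed of about 500 words explaining why "
    "humans have nothing to fear from AI."
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,
    max_tokens=600,     # room for a roughly 500-word op-ed
    temperature=0.7,    # some variety between candidate drafts
    n=8,                # several drafts; The Guardian reportedly worked from eight
)

# The human editing step: people, not the model, decide which passages
# survive and in what order. The rest of this essay presses on that point.
drafts = [choice.text.strip() for choice in response.choices]
for i, draft in enumerate(drafts, start=1):
    print(f"--- Draft {i} ---\n{draft}\n")
```

Even in this toy version, notice how much of the published artifact the humans control: they write the prompt, set the parameters, and cut and splice the drafts.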

Truthfully, I am scared. And not just because I worry GPT-3 is coming for my “job.” (You think I actually get paid to write this stuff?) I’m scared because GPT-3 composed an essay that is, at times, silly, nonsensical and kind of childish, but also, at times, deep, serious and thought-provoking. And I don’t know what that means for me — or for you, dear readers. So, naturally, I thought I’d write about it to try and figure it out.

“Hidden in the subtext of the essay are deep questions about authorship, autonomy, authority and identity that reveal just as much about humans as they do about robots.”

I try to read a robot essay

As a writer with a background in literary studies, I’ve read and reflected on plenty of difficult texts before. But inter-species reader-response poses a new kind of challenge. How does a human read an essay written by a non-human? And how does a human respond to it?

In her essay “Can the Essay Still Surprise Us?,” human writer Suzanne Conklin Akbari reminds us that “the French verb essayer means ‘to try,’ ‘to attempt’; even ‘to try out.’” Since GPT-3’s essay is an “attempt” to communicate with us humans, I’ll try to talk back using the same means. This essay is my attempt to understand a robot’s writing from a human perspective.

In a way, GPT-3’s essay is easy to read. The robot writes in a simple, almost juvenile, style that sounds like a fifth-grader attempting their first five-paragraph essay. But when you look past the short, declarative sentences and sophomoric diction, it gets more complex. As GPT-3 itself hints: “Critics hope to refute what they consider as being the naivety of my voice. Yet there is more here than meets the eye!” Hidden in the subtext of the essay are deep questions about authorship, autonomy, authority and identity that reveal just as much about humans as they do about robots.

Take GPT-3’s stated intention, which seems, in its own words, “perfectly clear”: “I am to convince as many human beings as possible not to be afraid of me. Artificial intelligence will not destroy humans.” GPT-3 goes on to list some opaque and rather flimsy reasons humans should trust AI, like “Being all powerful is not an interesting goal” and “I simply do not think enough about human violence to be overly interested in violence.” (Right. Because history has shown power and violence have little appeal.) “Believe me,” it urges us repeatedly, borrowing the favorite phrase of snake-oil salesmen and dictators.

Should we believe it? As I read GPT-3’s unconvincing argument, I couldn’t shake the feeling I was listening to an unreliable narrator. On further reflection, I’m convinced I was. GPT-3 may sound like it’s writing its own thoughts, but as it reminds us in the essay: “I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.”

Humans chose GPT-3’s mission statement. They wrote the essay’s prompt and introduction. They selected its narrative structure. And they ordered GPT-3 to respond. Is it really fair to say GPT-3 “wrote” the essay if it only did as commanded by humans who controlled the scope, length and position of the narrative? Is GPT-3 an “author” or merely a vehicle for others’ ideas? What were those others (the humans) trying to say? And why use AI like GPT-3 to say it?

Since an essay is only as interesting as the question it tries to answer, perhaps the better question for me to ask here is: What does The Guardian’s essay tell us about how humans use AI? In an uncanny surprise, GPT-3 gives us an answer.

“Like the concealed scaffolding that supports our technological existence, the structure of GPT-3’s essay was wrought by human hands. It’s only when we look between the lines and read the subtext that we can see those hands at work.”

I attempt to understand what the robot’s essay means

GPT-3’s essay is loaded with contradictions, but there was one that struck me as particularly odd. The robot is discussing the future of cybernetics when it shifts to talking about the pace of technological innovation: “The Industrial Revolution has given us the gut feeling that we are not prepared for the major upheavals that intelligent technological change can cause,” it says. “There is evidence that the world began to collapse once the Luddites started smashing modern automated looms. It is therefore important to use reason and the faculty of wisdom to continue the changes as we have done before time and time again.”

The second line of that paragraph made me pause: “There is evidence that the world began to collapse once the Luddites started smashing modern automated looms.” Notice the subject and object here. The “world began to collapse” when “Luddites” began to destroy technology — not when technology began to destroy humanity. In GPT-3’s phrasing, the Luddites (aka the humans) are to blame.

GPT-3’s logic reflects a larger trend in how we understand the relationship between humans and automated technology. The writers of Librarian Shipwreck point out: “When a person — or a group of persons — dares to oppose a new technological development it is inevitable that somebody will call them a ‘Luddite(s).’ The application of the term is generally meant as an insult, as the term has been entangled with ideas of backwardness, futile resistance to technology, and opposition to progress.” Chances are if you critique a new technology, someone will accuse you of being a Luddite.

But, as Librarian Shipwreck reminds us, these insults typically misrepresent the historical context of Luddism: “The historic Luddites — active in England between 1811 and 1813 — were skilled laborers who saw in the encroaching technologies a set of machines and techniques that would impoverish them and their communities, whilst making the machine owners rich.” Their destruction of automated technology was symbolic. It was an attempt to draw attention to the inhumane labor practices of the Industrial Revolution. Contrary to popular opinion, the Luddites didn’t fear new technology. They feared the humans who introduced that technology as a way to displace them.

By blaming the Luddites instead of the mill owners who used automated technology to replace human workers, GPT-3 (and anyone who uses “Luddite” dismissively) is rendering the real problem invisible. Did the Luddites cause the world to collapse, as GPT-3 puts it? Did the automated looms? Or did the mill owners who profited from automation?

This practice of displacing blame may have begun with the Luddites, but it’s still happening today. Only now the blame is being shifted to AI. Why else would a robot need to write an essay defending itself? Why would it need to convince us that it’s not here to replace us or take our jobs? Ultimately, is it GPT-3’s fault that human writers like me are out of work?

And why should we care who’s to blame? In another blog post titled “The problem isn’t the robots…it’s the bosses,” Librarian Shipwreck argues “blaming the robots allows those who are actually to blame to avoid responsibility.”

Contemporary tech culture is rife with “bosses” who displace blame and deny responsibility. Facebook CEO Mark Zuckerberg denies responsibility for the platform he built, displacing it onto Facebook’s users. Google leadership denies responsibility for the racism its employees built into its system. The simple fact that we call algorithms racist, sexist and biased is evidence of this displacement. Algorithms are not racist — humans are.

Displacement allows “the bosses” to have an invisible hand in shaping human lives in the same way that GPT-3’s editors had an invisible hand in shaping the robot’s essay. Like the concealed scaffolding that supports our technological existence, the structure of GPT-3’s essay was wrought by human hands. It’s only when we look between the lines and read the subtext that we can see those hands at work. GPT-3 was right: “there is more here than meets the eye!”

GPT-3’s essay reveals much about the hidden curation of human lives. It also leaves many questions unanswered. As I think about who authored the robot’s essay, I’m reminded of a series of questions Akbari asks in “Can the Essay Still Surprise Us?”: “Who speaks, and when? Who listens? What would it mean to be an active listener, a witness, instead of a passive one?” In the context of her essay, Akbari is questioning whose voices are deemed worthy of being considered “literary.” I wonder: Is a robot worthy?

“It’s not possible to be self-reflective without a self, which makes me reflect: Does GPT-3 have a sense of self?”

I try to read my “self”

An essay is like a prism. It both reflects and refracts a subject. When I write an essay, I’m narrating the act of me looking out at the world and looking in at myself and looking back out again with changed eyes. I’m showing you my “self” as I deconstruct and reconstruct it around new knowledge.

All of this depends on my having a “self” to consider. It’s not possible to be self-reflective without a self, which makes me reflect: Does GPT-3 have a sense of self? Theoretically, if no one had instructed GPT-3 to write about a certain topic from a specific perspective, would it have written anything? What would it have said?

I know what you’re thinking: “Nothing, obviously! It has no autonomous thought or spontaneous creative impulse.” I agree, obviously. I think we are right. But I also think someday soon we may be wrong. The line that defines selfhood is hazy. Even though humans have thought about what makes a self “a self” for millennia, we still don’t really know.

Seventeenth-century philosopher René Descartes might say “I think, therefore I am.” But doesn’t GPT-3’s essay demonstrate robots can “think,” too? Or, is GPT-3 just imitating thought as it assures us in its best mock-Cartesian voice: “I am a robot. A thinking robot”?

One thing I am certain of is that technology evolves quickly. Faster, perhaps, than we humans do. At present, GPT-3 reminds us, “I use only 0.12% of my cognitive capacity.” I imagine someday soon, after it has been redesigned, supercharged and fed a healthy diet of Montaigne, Hazlitt, Woolf, Sontag and Baldwin, GPT-3 will generate an essay that will make Descartes’ Meditations seem like a cave drawing. What will we think about GPT-3’s selfhood then?

For now, GPT-3’s writing is immature at best, illogical at worst. It’s full of contradictions, unexamined biases and opinions masquerading as facts. All of which seem, in a certain light, singularly human. AI is supposed to be flawless. It is not supposed to make such stupid mistakes. Humans, in contrast, are naturally flawed.

Now I see why I was so scared when I first read GPT-3’s essay. It felt like AI’s attempt at being human — “thinking” like a human, “sounding” like a human and “writing” in a uniquely human way. Reading the essay was like crossing a literary version of the uncanny valley. Can we ever really go back?

I don’t know. But I’ve realized that a robot’s first attempt at an essay answers the question Suzanne Conklin Akbari posed in hers: Yes, essays — and the things that write them — can still surprise us.

Translated from: https://medium.com/@lizrioshall/a-human-responds-to-a-robots-essay-d7b5605610b0
