AI Ethics: First, Do No Harm

Laura (a fictional character) is a busy mom of two. Her already crazy schedule went out of whack when COVID-19 moved work into the home. Every day feels like an exercise in primal survival. To minimize mental strain, she defaults to autopilot mode whenever the stakes are low and don't call for her undivided attention. Having an old-school butler would be a saving grace. Since that's not a financial option, Laura leans on whatever virtual helpers she can afford. Amazon ships her breakfast fare, Netflix entertains her kids during meetings with the C-suite folks, and DoorDash delivers warm dinner plates that bring the family together. With these virtual helpers, she can at least turn off her decision-making muscles for trivial things and focus on the big picture. Virtual helpers don't always make the best choices, but who has the time to keep score? Laura occasionally gets non-dairy milk instead of her regular 2%. She sometimes finds the kids halfway through a horror movie even though such content is off-limits. And every now and then, the DoorDash chef forgets that Laura's order is gluten-free, leading to the all-too-familiar emergency frozen dinner. The recommender AI means well.

The Recommender Generation

Our lives are running on autopilot, and for many of us that's a much-needed respite. Yet there are many unanswered questions about the influence and long-term consequences of recommendations made by algorithms. The "recommender" generation is probably suffering from boiling frog syndrome. The boiling frog is a fable about a frog being slowly boiled alive. The premise is that if you drop a frog into boiling water, it will jump out; but if you put it in lukewarm water and slowly bring it to a boil, the frog will not perceive the danger and will be cooked to death. The fable is a metaphor for our inability to react to harmful threats that arise gradually rather than suddenly.

Ben leads awakening workshops for individuals looking to optimize their human potential. Over the years, Ben has coached over 500 people from all over the world. He recently confessed his concern about people's willingness to defer problem-solving to a higher authority. People seem too comfortable adopting solutions to their life problems when a credentialed individual proposes them. The fact that this person has known their specific life circumstances for a mere few hours, or sometimes not at all (think YouTube videos), is treated as a negligible detail. Why are we so quick to relinquish autonomy to a higher authority, even one that merely comes across as trustworthy? And what are the implications of this innate human behavior in a world where AI assistants become prevalent in our social and private lives?

“We can’t solve problems by using the same kind of thinking we used when we created them.” ~ Albert Einstein

AI scientists are mirroring human flaws in the AI universe; after all, robots reflect their creators. We know from research in behavioral economics that humans have innate biases. Daniel Kahneman, a Nobel laureate in the field, brought to light a few of the many shortcomings in the human decision-making process. Anchoring, for example, is a cognitive bias in which an individual depends too heavily on an initial piece of information (the "anchor") when making subsequent judgments. Once the value of this anchor is established, it becomes a yardstick for all future arguments: we assimilate information that aligns with the anchor while dismissing information that does not. Another is the availability bias, a mental shortcut that relies on the immediate examples that come to mind when evaluating a specific decision. If you can easily recall something, it must be important, or at least more important than alternatives that don't come to mind as readily. People weigh recent information more heavily in their judgments, forming new opinions biased by the latest news.
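
As a toy illustration (not from the original article), here is a minimal Python sketch of how an AI system can inherit something like the availability bias. The function name and the decay scheme are hypothetical; the point is only that a scorer which privileges recent interactions will "recall" the latest items the way we recall the latest news.

```python
# Hypothetical sketch: a recency-weighted scorer that mirrors the
# availability bias. Items seen most recently dominate the score,
# much as recent examples dominate human judgment.

def availability_score(interactions, decay=0.5):
    """Score items by exponentially decayed recency of interaction.

    interactions: list of item ids, oldest first.
    Returns a dict of item -> score; higher means it "comes to mind" faster.
    """
    scores = {}
    for age, item in enumerate(reversed(interactions)):
        # age 0 is the most recent interaction; weight decays with age.
        scores[item] = scores.get(item, 0.0) + decay ** age
    return scores

history = ["news", "cooking", "news", "horror", "horror"]
print(availability_score(history))
# {'horror': 1.5, 'news': 0.3125, 'cooking': 0.125}
# "horror" dominates purely because it was seen last, not because it
# reflects the user's long-term preferences.
```

A recommender built on such a score would, like Laura's virtual helpers, keep serving whatever happened most recently rather than what actually matters.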

Seeking Inspiration for a Code of Ethics

Doctors have a moral obligation to improve the health of all people. Since ancient times, doctors have had to abide by rules and guiding principles. The standard oath of ethics in medicine is the Hippocratic Oath, which requires a new physician to swear to ethical standards that include medical confidentiality and non-maleficence. Medical oaths have evolved over time, with the most significant revision, the Declaration of Geneva, emerging after World War II. Swearing a revised form of the medical oath remains a rite of passage for medical graduates in many countries.

Should AI scientists define guiding principles to address the ethics, values, and compliance of their work? Such an oath would make scientists aware of their social and moral responsibilities. The idea of an ethical code of practice for professions outside medicine is not a novelty. Similar to the Hippocratic Oath in medicine, the Archimedean Oath is an ethical code of practice for engineers, proposed in 1990 by a group of students at the École Polytechnique Fédérale de Lausanne (EPFL). Over time, the Archimedean Oath gained modest adoption in several European engineering schools. Scientists have their own oath as well, the Hippocratic Oath for Scientists, proposed by Sir Joseph Rotblat in his 1995 Nobel Peace Prize acceptance speech.

Guidebook for Ethical AI

Much as medicine impacts people's wellbeing, AI systems will selectively influence our life experience. AI adoption in the real world happens so seamlessly that we barely notice. Are we suffering from boiling frog syndrome? The jury is still out. Like any tool, AI can be used to do good or to cause harm. For example, a quick Google search on AI for hiring brings up positive headlines like "Using AI to eliminate bias from hiring" alongside negative ones like "AI-assisted hiring is biased. Here's how to make it more fair."
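
To make the "biased hiring" headline concrete, here is a small, hypothetical Python sketch of one way auditors quantify such bias: the demographic parity gap, the difference in the rate at which a model advances candidates from two groups. The data and names below are invented for illustration; real audits use actual model outputs and richer fairness metrics.

```python
# Hypothetical sketch: quantifying one notion of bias in a hiring model.
# The groups and decisions are toy data; the demographic parity gap is
# just one of several fairness metrics used in practice.

def selection_rate(decisions):
    """Fraction of candidates the model recommends to advance."""
    return sum(decisions) / len(decisions)

# Toy model outputs (1 = advance, 0 = reject) for two candidate groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% advance
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 25.0% advance

gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity gap: {gap:.3f}")  # 0.375, far from the 0.0 ideal
```

A gap near zero suggests the model advances both groups at similar rates; a large gap is a signal to investigate, though equal rates alone do not prove the model is fair.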

The stage is open for proposals on an AI code of ethics. An effective program should bring together diverse stakeholders from industry, academia, and government. Such an interdisciplinary committee will enable us to design a future worth living in.

Original article: https://towardsdatascience.com/ai-ethics-first-do-no-harm-23fbff93017a
