In the present moment, nothing is more apparent than the acute need to actively counteract persistent prejudice, false information and stigma wherever we see them. And in building for a new generation of internet users, we must seriously consider how our biases, implicit or explicit, shape the technology we leverage, and what principles should guide us. Read the thoughts of Grace Health founder and CEO Thérèse Mannheimer below.
Automating manual tasks to enable scale is no small feat. We know this. But when dealing with health data it is an even more daunting task, and to make sure we are on top of our game, we regularly discuss and problematise how to manage the implications of transferring tasks from a human to a machine. Below is our approach at Grace Health. It’s not the definitive truth, but an insight into our effort to keep the dialogue sober and realistic.
With a vision to improve women’s health across the world, we have our work cut out for us. By chatting with our automated health assistant, women are able to track and better understand their period, get friendly notifications and predictions about their cycle, and get answers to the most common health questions. Coming up, we will also provide access to medical assistance in the privacy of her own phone, and connect the dots, bringing products and services all the way to her doorstep (last month we launched pharmacy delivery for our users in Accra, Ghana). When reaching out to, and hoping to connect with, a global market of women, scaling is of the utmost importance. This is why utilising tech, in our case AI, to reach as many as possible is a no-brainer. Easy in theory, difficult in practice.
Now to our point.
People in the West have a tendency to view their perspective as the right one and the way everyone should live, which of course is ignorant and wrong. However, there are perspectives and ideologies that markets could usefully adopt from each other, and our stance is that a more liberal approach to education and rights around sexual and reproductive health is one of them.
Let’s get back to the title of this piece, “Why non-biased AI doesn’t exist”. So what does ‘non-biased’ even mean? The term unbiased literally means not biased; in short, neutral (as in not taking sides), whereas non-biased means completely free from bias. To be unbiased, you have to be 100% fair: you can’t have favourites or opinions that would colour or shape your judgment. Artificial intelligence (AI), on the other hand, is an area of computer science that emphasises the creation of intelligent machines that work and react like humans, replicating behaviour such as problem solving, reasoning, perception and planning. All traits that humans hone over time, largely through our experiences and notions. Also known as bias.
Machines can be taught to act and react like humans only if they have abundant information about the world. Artificial intelligence models must be given access to objects, categories, properties and the relations between them to implement knowledge engineering. Here is where I argue that the bias slips in.
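To make that knowledge-engineering step concrete, here is a minimal, hypothetical sketch in Python. Every name in it is invented for illustration and is not Grace Health’s data model; the point is that simply choosing which objects, categories and relations exist is already an act of curation, and therefore of bias.

```python
# Hypothetical sketch of the knowledge-engineering step described above.
# The topics, categories and relation triples are invented for illustration.

KNOWLEDGE_BASE = {
    "objects": ["period", "cycle", "contraception", "pregnancy"],
    "categories": {
        # Filing contraception under "health" rather than, say, "morality"
        # is already an editorial decision: the curator's worldview, i.e. bias.
        "health": ["period", "cycle", "contraception", "pregnancy"],
    },
    "relations": [
        # (subject, relation, object) triples the system may reason over.
        ("contraception", "prevents", "pregnancy"),
        ("cycle", "predicts", "period"),
    ],
}

def known_facts_about(topic):
    """Return every triple mentioning the topic. The system can only
    'know' what the curated triples contain, nothing more."""
    return [t for t in KNOWLEDGE_BASE["relations"] if topic in t]

print(known_facts_about("contraception"))
# -> [('contraception', 'prevents', 'pregnancy')]
```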
IBM states on their website: “AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem. But we believe that bias can be tamed and that the AI systems that will tackle bias will be the most successful.” We agree, with the emphasis on tamed.
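A toy sketch of that point, with made-up data rather than a real training pipeline: when nine out of ten training examples come from one perspective, even the simplest possible model learns to answer from that perspective, every time.

```python
from collections import Counter

# Toy illustration of the "bad data" point, using invented labels: the
# simplest possible "model" trained on skewed examples reproduces the skew.

# Hypothetical training set: 9 of 10 answers are written from one
# cultural perspective.
training_labels = ["perspective_A"] * 9 + ["perspective_B"]

def train_majority_model(labels):
    """Return a 'model' that always predicts the most common training label."""
    most_common_label, _count = Counter(labels).most_common(1)[0]
    return lambda query: most_common_label

model = train_majority_model(training_labels)
print(model("any question at all"))  # -> perspective_A, every time
```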
When a human develops a service or machine, they automatically transfer not only their biases and prejudices, but also their whole value base and perception of the world. What is right? To whom? When? Where? When you automate human behaviour, it is almost impossible not to teach it to mimic human behaviour and deduction principles, which in turn implies bias.
Well, is bias always detrimental? We don’t necessarily think so. Bias and predisposed opinions skew the way we make decisions, but they also give us a framework for making sense of the world. We’re not necessarily trying to answer the ethical question, but rather to shine a light on the complexity and potential of using AI to replicate humans, and why the discourse is needed from time to time.
We base our company and product on three key ideas:
Every person has the right to make informed decisions for herself.
Every person has the right to love who they want to.
Rape or violence is never OK. It is criminal and should be reported.
These are, basically, predisposed opinions on what is right and wrong; in other words, bias.
In the case of Grace Health, we want the woman, HER, to decide what will and will not happen to her; to choose who she trusts and who she doesn’t. This is, unfortunately, not the standard for millions of women around the world, and the sheer thought of it can be perceived as foreign. Sometimes even uncomfortable or alien.

This is OUR paradigm. It is the foundation on which we make our decisions. It is the reason why we exist; a belief system, if you will. It informs and helps us decide what content to write and what features to create. This belief is based on a set of “rights” and “wrongs” and therefore on biases. And since we are creating a service using a machine learning model that bases its assumptions on this truth, it is inherently biased. The difference here is that the bias is chosen and conscious, not random or incidental.
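To illustrate what “chosen and conscious” can mean in practice, here is a hypothetical sketch; the function and the rules are ours to imagine, not Grace Health’s actual code. The value base is encoded as explicit, reviewable rules that sit on top of the model, so the bias is visible and auditable rather than buried in the training data.

```python
# A hedged sketch of "chosen and conscious" bias: values written down as
# explicit rules layered on top of the model, rather than hiding implicitly
# in the data. Names and rules are illustrative assumptions only.

CHOSEN_PRINCIPLES = {
    "informed_choice": "Every person has the right to make informed decisions for herself.",
    "love": "Every person has the right to love who they want to.",
    "violence": "Rape or violence is never OK. It is criminal and should be reported.",
}

def apply_principles(user_message, model_reply):
    """Post-process a model reply so it always reflects the stated values.
    Because the rules are explicit, the bias is auditable and debatable."""
    if any(word in user_message.lower() for word in ("rape", "violence")):
        # The chosen stance overrides whatever the model happened to generate.
        return CHOSEN_PRINCIPLES["violence"]
    return model_reply

print(apply_principles("someone used violence against me", "generic model reply"))
# -> Rape or violence is never OK. It is criminal and should be reported.
```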
Uncomplicated? No. Inserting this bias brings along other types of negative and involuntary bias based on norms: e.g. the hetero norm, the couple norm, the freedom norm, etc., and this is a continuing part of our challenge. On top of this, we always need to question ourselves and the values around us.
Building the first digital women’s health clinic for the next billion users, Grace Health wants to enable access not only for the women of today, but for generations to come.
Interested in reading more about our work on bias and AI? We have been working with some of the experts in the field on a project funded by the Swedish innovation agency Vinnova; read more here.
Originally published at: https://medium.com/grace-health-insights/why-non-biased-ai-doesnt-exist-ed4fe90442fb