
An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence (excerpt, p. 28)


… learn more autonomously. This suggests that individuals should expect more artificial agents to mirror human biases.

The second angle of the algorithmic bias problem often applies when working with policy or social questions. This is the difficulty of defining ground truth or identifying robust guiding principles. Our ground truth or criteria for judging correctness are often culturally or socially informed, as the IBM Watson and Google autocomplete examples illustrate. Learning algorithms would need to optimize over some measure of social acceptability in addition to whatever performance metrics they are optimizing internally to perform a task. This dual optimization easily leads to dilemmas. In fact, the recent work on fair algorithms shows that there is usually a trade-off between accuracy and fairness. Enforcing fairness constraints can mean actively occluding or smearing informative variables. This can reduce the strength of algorithmic inference. (A toy sketch at the end of this excerpt illustrates the effect.)

Another angle on the problem is that judgments in the space of social behavior are often fuzzy, rather than well-defined binary criteria.[1] This angle elaborates on the second point. The examples presented earlier show fuzzy cultural norms ("do not swear," "do not bear false witness," "present a balanced perspective") influencing human judgment of correct algorithmic behavior. We are able to learn to navigate complex fuzzy relationships, such as governments and laws, often relying on subjective evaluations to do this. Systems that rely on quantified reasoning (such as most artificial agents) can mimic the effect but often require careful design to do so. Capturing this nuance may require more than just computer and data scientists.

Another system has evolved over centuries to answer policy questions subject to fuzzy social norms and conflicting reports or data: the law. Grimmelmann and Narayanan (2016) pointed out that, while …

[1] Here, fuzzy has a precise meaning, referring to properties and sets that have inexact boundaries of definition. It is based on the idea of multivalued (i.e., not binary) logic and set membership pioneered by such thinkers as Max Black (in his work on vague sets) and Lotfi Zadeh (in his work on fuzzy logic). As a concrete example, think about the set of temperatures you would call "warm." The border between the temperature sets "warm" and "not-warm" is inexact. In the swearing AI examples we discuss, swearing is neither absolutely forbidden nor absolutely permissible; its social acceptability exists on a spectrum.
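To make the footnote's temperature example concrete, here is a minimal Python sketch (an illustration added to this excerpt, not taken from the report) of a fuzzy membership function for the set "warm." The trapezoidal shape and the breakpoints are arbitrary choices; the only point is that membership is a degree in [0, 1] rather than a binary test.

    def warm_membership(temp_c, a=12.0, b=18.0, c=26.0, d=32.0):
        """Trapezoidal fuzzy membership of a temperature in the set 'warm'.

        Returns a degree in [0, 1]: 0 below a or above d (where 'warm'
        shades into 'hot'), 1 between b and c, with linear ramps in
        between. The breakpoints a, b, c, d are invented for illustration.
        """
        if temp_c <= a or temp_c >= d:
            return 0.0
        if b <= temp_c <= c:
            return 1.0
        if temp_c < b:
            return (temp_c - a) / (b - a)
        return (d - temp_c) / (d - c)

    for t in [5, 14, 18, 22, 28, 31, 35]:
        print(f"{t:3d} degC -> membership in 'warm': {warm_membership(t):.2f}")

A classical (crisp) set would return only 0 or 1; the graded output is what lets a system built on quantified reasoning represent the inexact border the footnote describes.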

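Returning to the accuracy-fairness trade-off discussed above, the sketch below (again an added illustration, not the report's method) trains a logistic regression by gradient descent on synthetic data in which one informative feature, x1, is correlated with a protected group attribute g, and adds a demographic-parity penalty weighted by lam. Every name and the data-generating process are invented for the example, and demographic parity is only one of several fairness criteria studied in this literature.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: x1 is informative but correlated with group g;
    # x2 is informative and group-neutral.
    n = 2000
    g = rng.integers(0, 2, size=n)
    x1 = rng.normal(loc=1.5 * g, scale=1.0, size=n)
    x2 = rng.normal(size=n)
    y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.75).astype(float)
    X = np.column_stack([x1, x2, np.ones(n)])   # last column = intercept

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(X, y, g, lam, steps=3000, lr=0.1):
        # Minimize logistic loss + lam * (gap in group-mean predictions)^2,
        # a simple demographic-parity penalty. lam = 0 recovers the
        # ordinary, accuracy-only model.
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = sigmoid(X @ w)
            grad = X.T @ (p - y) / len(y)        # logistic-loss gradient
            gap = p[g == 1].mean() - p[g == 0].mean()
            s = p * (1.0 - p)                    # derivative of sigmoid
            d_gap = (X[g == 1] * s[g == 1, None]).mean(axis=0) \
                  - (X[g == 0] * s[g == 0, None]).mean(axis=0)
            w -= lr * (grad + lam * 2.0 * gap * d_gap)
        return w

    for lam in [0.0, 1.0, 5.0, 20.0]:
        w = train(X, y, g, lam)
        p = sigmoid(X @ w)
        acc = ((p > 0.5) == y).mean()
        gap = abs(p[g == 1].mean() - p[g == 0].mean())
        print(f"lam={lam:5.1f}  accuracy={acc:.3f}  parity gap={gap:.3f}")

On this data, raising lam shrinks the gap between group-mean predictions but tends to lower accuracy, because the penalty pushes the model to discount x1, the informative but group-correlated variable: the "occluding or smearing" of informative variables that the text describes.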