Intelligence in Our Image: Risks of Bias and Errors in Artificial Intelligence


Text from PDF page 29

The Problem in Focus: Factors and Remedies

While cryptocurrencies and algorithmic ("smart") contracts might excel at enforcing binary property rights, property rights in the real world are fuzzy and contentious. Similar concerns apply to algorithms: what we consider proper algorithmic behavior can sometimes be defined only imprecisely. The law has evolved to adjudicate such fuzzy complexities. U.S. law also recognizes that procedures that are reasonable on their face can have an adverse and disparate impact. An understanding of this concept of disparate impact is only slowly spreading through the algorithm research community. There is a growing body of work on the social and legal impact of data and algorithms (Gangadharan, Eubanks, and Barocas, 2015), and a growing body of evidence shows that algorithms do not automatically treat diverse populations fairly and equitably simply by virtue of being reasonable algorithms (Barocas and Selbst, 2016; DeDeo, 2015; Dwork et al., 2012; Feldman et al., 2015; Hardt, 2014).

Other Technical Factors

Technical factors besides those we have already discussed also promote algorithmic bias. Machine learning algorithms have trouble handling sample-size disparities. This is a direct consequence of the fact that machine learning algorithms are inherently statistical methods and are therefore subject to statistical sample-size laws. Learning algorithms may also have difficulty capturing specific cultural effects when the population is strongly segmented. This is related to the problem of statistical inference on highly nonstationary training data (particularly when default models do not account for nonstationary effects).

Sample-Size Disparity

Machine learning algorithms are statistical estimation methods. Their measures of estimation error often vary in inverse proportion to data sample size. This means that these methods will typically be more error-prone on low-representation training classes than on others. A credit-estimation algorithm would be more error-prone on subpopulations …
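The sample-size effect described above can be illustrated with a small simulation. The sketch below (an illustration, not taken from the report; the group sizes and the true rate are assumed for the example) repeatedly estimates the same underlying rate from a large "majority" sample and a small "minority" sample and compares the average estimation error, which scales roughly as 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: both groups share the same true underlying rate,
# but the training data contains far fewer examples of the minority group.
true_rate = 0.2
n_majority, n_minority = 10_000, 100
n_trials = 2_000

def mean_abs_error(n: int, trials: int) -> float:
    """Average absolute error of the estimated rate over repeated samples of size n."""
    estimates = rng.binomial(n, true_rate, size=trials) / n
    return float(np.abs(estimates - true_rate).mean())

err_majority = mean_abs_error(n_majority, n_trials)
err_minority = mean_abs_error(n_minority, n_trials)

print(f"mean abs. error, majority (n={n_majority}): {err_majority:.4f}")
print(f"mean abs. error, minority (n={n_minority}): {err_minority:.4f}")

# With 100x less data, the minority-group estimate is roughly 10x noisier,
# consistent with the ~1/sqrt(n) scaling of statistical estimation error.
```

Any estimator with this statistical character, including a learned credit model, will tend to be less reliable for groups that are underrepresented in its training data, even when the underlying relationship being estimated is identical across groups.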


Source file: RAND_RR1744.pdf
