recently, Angwin et al. (2016) reported on extreme systematic bias in a criminal risk assessment algorithm in widespread use in sentencing hearings across the country. Citron and Pasquale (2014) wrote about what they call the scored society and its pitfalls. By scored society, they mean the current state in which unregulated, opaque, and sometimes hidden algorithms produce authoritative scores of individual reputation that mediate access to opportunity. These scores include credit, criminal, and employability scores. Citron and Pasquale particularly focused on how such systems violate reasonable expectations of due process, especially expectations of fairness, accuracy, and the existence of avenues for redress. They argue that algorithmic credit scoring has not reduced bias and discriminatory practices. On the contrary, such scores serve to legitimize bias already existing in the data (and software) used to train the algorithms.

Pasquale (2015) followed a similar line of inquiry with an exhaustive report on the extensive use of unregulated algorithms in three specific areas: reputation management (e.g., credit scoring), search engine behavior, and the financial markets.

Barocas and Selbst (2016) wrote a recent influential article addressing the fundamental question of whether big data can result in fair or neutral behavior in algorithms.11 They argue that the answer to this question is a firm negative without reforming how big data and the associated algorithms are applied. Barocas and Selbst (and other researchers in this field) borrow disparate impact from legal doctrine originally introduced in the 1960s and 1970s to test the fairness of employment practices.12 The authors use the term to denote the systematic disadvantages artificial agents impose on subgroups based on patterns learned via procedures that appear reasonable and nondiscriminatory at face value. Gandy (2010) used rational discrimination to refer to an identical concept. He was arguing for the need of regulation.

11 Barocas and Selbst use big data as an umbrella term for the massive data sets and algorithmic techniques used to analyze such data.

12 In Griggs v. Duke Power Co. (1971), the U.S. Supreme Court ruled against Duke Power Company's practice of using certain types of employment tests and requirements that were unrelated to the job. Such tests may be innocuous at face value. But, on closer inspection, these tests "operate invidiously" to discriminate on the basis of race.
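The disparate impact doctrine discussed above has a common quantitative formulation: the "four-fifths rule" from the EEOC's employment-selection guidelines, under which a selection rate for one subgroup below 80 percent of the most-favored subgroup's rate is taken as evidence of adverse impact. The following sketch illustrates that test on made-up outcome data; the group labels and numbers are purely hypothetical, and this is an illustration of the rule rather than any specific scoring system described in the text.

```python
# Sketch of the "four-fifths rule" for flagging disparate impact.
# Outcomes are binary: 1 = favorable decision (e.g., a passing score).
# All data below are invented for illustration.

def selection_rate(outcomes):
    """Fraction of individuals who received the favorable outcome."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (always <= 1)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical outcomes for two subgroups of ten applicants each.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate = 0.8
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # selection rate = 0.4

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))   # 0.5
print(ratio < 0.8)       # True: below the four-fifths threshold
```

Note that a procedure can fail this test without any protected attribute appearing in its inputs, which is precisely the point Barocas and Selbst make: facially neutral rules can still produce systematically skewed outcomes.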