Intelligence in Our Image: Risks of Bias and Errors in Artificial Intelligence

was forced to make SABRE more transparent after antitrust proceedings shed light on these concerns.

Friedman and Nissenbaum (1996) also examined the history of the algorithm for the National Resident Match Program, which matches medical residents to hospitals throughout the United States. The algorithm's seemingly equitable assignment rules favored hospital preferences over resident preferences and single residents over married residents.[9] Friedman and Nissenbaum also looked at the British Nationality Act Program, which was designed to encode British citizenship law. The act itself has fairness issues, and any faithful algorithm implementing the act inherits, and thus magnifies, those issues. The more interesting point was that the British Nationality Act Program presented authoritative responses that hid relevant legal options in the act from nonexpert users. The program's responses were procedurally correct, but translating the law into an exact algorithm lost important nuances.

The systems Friedman and Nissenbaum reported on were the larger, industrial-scale systems common in the early days of personal computing and the internet. The exponential growth of the internet and the personal computer user base expanded the scope of these problems. Algorithms began to mediate more of our interactions with information. Google is the canonical case in point. Google's search and advertising placement algorithms digested massive amounts of user-generated data to learn to optimize service for users (both regular users and advertising entities). Such systems were some of the first to expose the results of learning algorithms to widespread personal consumption.

[9] This example refers to the National Resident Match Program matching algorithm in use before Alvin Roth changed it in the mid-1990s (Roth, 1996). That matching procedure was an incarnation of the stable matching algorithm first formalized by David Gale and Lloyd Shapley (Gale and Shapley, 1962). It is stable in the sense that no hospital and resident pair can jointly do better, given both groups' stated preference orders. However, contrary to initial claims, the program led to stable matches that guaranteed the hospitals their best acceptable choices but only guaranteed acceptable choices for students (sometimes their least acceptable ones).
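The asymmetry described in the footnote falls directly out of which side proposes in Gale and Shapley's deferred acceptance procedure: the proposing side receives its best achievable stable outcome. The sketch below is a minimal Python illustration of the hospital-proposing variant, simplified to one-to-one matching (one slot per hospital); the preference lists and the names H1, H2, a, and b are hypothetical, not taken from the report.

    # Deferred acceptance (Gale and Shapley, 1962), hospital-proposing,
    # simplified to one-to-one matching. The proposing side obtains its
    # best achievable stable partner; the receiving side may get its worst.
    def deferred_acceptance(proposer_prefs, receiver_prefs):
        # Lower rank number means more preferred; used to judge incoming offers.
        rank = {r: {p: i for i, p in enumerate(prefs)}
                for r, prefs in receiver_prefs.items()}
        free = list(proposer_prefs)          # proposers with no tentative match
        next_choice = {p: 0 for p in free}   # next preference index to try
        match = {}                           # receiver -> tentatively held proposer

        while free:
            p = free.pop()
            r = proposer_prefs[p][next_choice[p]]
            next_choice[p] += 1
            if r not in match:
                match[r] = p                 # receiver tentatively accepts
            elif rank[r][p] < rank[r][match[r]]:
                free.append(match[r])        # receiver trades up; old proposer freed
                match[r] = p
            else:
                free.append(p)               # rejected; p tries its next preference

        return {p: r for r, p in match.items()}

    # Hypothetical preferences chosen so the two sides' optima diverge.
    hospitals = {"H1": ["a", "b"], "H2": ["b", "a"]}
    residents = {"a": ["H2", "H1"], "b": ["H1", "H2"]}

    # Hospitals propose: each hospital gets its first choice (H1 -> a, H2 -> b),
    # and each resident is left with his or her last choice.
    print(deferred_acceptance(hospitals, residents))

Swapping the arguments so that residents propose yields the resident-optimal stable match on the same preferences (a -> H2, b -> H1), which is why the choice of proposing side matters for the fairness question the footnote raises.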

PDF Image | Intelligence in Our Image Risks of Bias and Errors in Artificial Intelligence

intelligence-our-image-risks-bias-and-errors-artificial-inte-019

PDF Search Title:

Intelligence in Our Image Risks of Bias and Errors in Artificial Intelligence

Original File Name Searched:

RAND_RR1744.pdf

DIY PDF Search: Google It | Yahoo | Bing

Cruise Ship Reviews | Luxury Resort | Jet | Yacht | and Travel Tech More Info

Cruising Review Topics and Articles More Info

Software based on Filemaker for the travel industry More Info

The Burgenstock Resort: Reviews on CruisingReview website... More Info

Resort Reviews: World Class resorts... More Info

The Riffelalp Resort: Reviews on CruisingReview website... More Info

CONTACT TEL: 608-238-6001 Email: greg@cruisingreview.com | RSS | AMP