
Intelligence in Our Image: Risks of Bias and Errors in Artificial Intelligence


Text from PDF page 18:

their behavior is causally determined by human specification. [7] The term misbehaving algorithm is only a metaphor for referring to artificial agents whose results lead to incorrect, inequitable, or dangerous consequences.

The history of such misbehaving artificial agents extends at least as far back as the advent of ubiquitous computing systems. Batya Friedman and the philosopher Helen Nissenbaum (1996) discussed bias concerns in the use of computer systems for tasks as diverse as scheduling, employment matching, flight routing, and automated legal aid for immigration. Friedman and Nissenbaum's discussion was nominally about the use of computer systems. But their critique was aimed at the procedures these systems used to generate their results: algorithms. Friedman and Nissenbaum's analyses reported inequitable or biased behavior in these algorithms and proposed a systematic framework for thinking about such biases.

Friedman and Nissenbaum (1996) wrote about the Semi-Automated Business Reservations Environment (SABRE) flight booking system, which American Airlines had sponsored (see also Sandvig et al., 2014). SABRE provided an industry-changing service. It was one of the first algorithmic systems to provide flight listings and routing information for airline flights in the United States. But its default information sorting behavior took advantage of typical user behavior to create a systematic anticompetitive bias for its sponsor. [8] SABRE always presented agents with flights from American Airlines on the first page, even when other airlines had cheaper or more-direct flights for the same query. Nonpreferred flights were often relegated to the second and later pages, which agents rarely reached. American Airlines

[7] Many of the debates over liability in automated systems revolve around this question: What degree of AI autonomy is sufficient to limit the ethical responsibility of human administrators for the consequences of the AI's actions? For example, to what extent is a company, such as Google, Facebook, or Tesla, liable for unforeseeable second-, third-, or higher-order effects of using its automated systems? How do we delimit foreseeability for a system that is necessarily (at least for now) opaque? The Google defamation example we offer later shows law courts beginning to grapple with such questions on the limits of liability.

[8] American Airlines' employees took to calling the practice screen science.
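The sorting bias described above can be made concrete with a short sketch. The Python below is a hypothetical illustration, not SABRE's actual implementation (which is not public): a "sponsor-first" sort key ranks every flight from the sponsoring carrier ahead of all other carriers before considering stops or price, so a short first page shows only the sponsor even when competitors are cheaper or more direct. All carrier names, prices, and the page size are invented for illustration.

from dataclasses import dataclass

SPONSOR = "American"  # hypothetical sponsor label, for illustration only

@dataclass
class Flight:
    carrier: str
    price: float
    stops: int

def neutral_key(f: Flight):
    # Rank purely on traveler-relevant criteria: fewer stops, then lower price.
    return (f.stops, f.price)

def sponsor_first_key(f: Flight):
    # Same criteria, except every non-sponsor flight sorts after every
    # sponsor flight (False < True), regardless of price or directness.
    return (f.carrier != SPONSOR, f.stops, f.price)

# Invented sample data: two competitors offer cheaper, more-direct flights.
flights = [
    Flight("Budget Air", 99.0, 0),
    Flight("Coastal", 120.0, 0),
    Flight(SPONSOR, 180.0, 0),
    Flight(SPONSOR, 149.0, 1),
]

PAGE_SIZE = 2  # assumed first-page size; agents rarely look past page one

biased = sorted(flights, key=sponsor_first_key)[:PAGE_SIZE]
print([f.carrier for f in biased])    # ['American', 'American']

neutral = sorted(flights, key=neutral_key)[:PAGE_SIZE]
print([f.carrier for f in neutral])   # ['Budget Air', 'Coastal']

Under the sponsor-first ordering, no individual listing is false, yet the cheapest and most-direct flights never reach the first page; a user who stops at page one sees only the sponsor. The bias lives entirely in the default ordering, which is what made it hard for users to notice.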
