Intelligence in Our Image: Risks of Bias and Errors in Artificial Intelligence

Text from PDF Page: 017

algorithm is a function of how correct its implementation is. For example, does an algorithm for calculating tips correctly implement percentage multiplication and addition? Does an algorithm for calculating a tax burden take proper account of taxable income and apply the right rules according to the tax code? Did the sorting algorithm actually sort the entire data set or ignore parts of it? These are questions concerning concrete, sometimes objectively verifiable concepts.

But the validity of a learning algorithm is a somewhat different creature. It is a function of both the correctness of its implementation (what algorithm designers tend to focus on) and the correctness of its learned behavior (what lay users care about). As a recent example, take Microsoft's AI chatbot, Tay. The algorithms behind Tay were properly implemented and enabled it to converse in a compellingly human way with Twitter users. Extensive testing in controlled environments raised no flags. A key feature of its behavior was the ability to learn from and respond to users' inclinations by ingesting user data. That feature enabled Twitter users to manipulate Tay's behavior, causing the chatbot to make a series of offensive statements (Lee, 2016). Neither its experience nor its data took novelty in a new context into account.

This type of vulnerability is not unique to this example. Learning algorithms tend to be vulnerable to characteristics of their training data. This is a feature of these algorithms: the ability to adapt in the face of changing input. But algorithmic adaptation in response to input data also presents an attack vector for malicious users. This data diet vulnerability in learning algorithms is a recurring theme.

"Misbehaving" Algorithms: A Brief Review

As artificial agents take a larger role in decisionmaking processes, more attention needs to be paid to the effects of fallible and misbehaving artificial agents. Artificial agents are, by definition, not human.
Moral judgment typically requires an element of choice, empathy, or agency in the actor. There can be no meaningful morality associated with artificial agents;
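The data diet vulnerability described above lends itself to a brief illustration. The sketch below is purely hypothetical (the class name, inputs, and tokens are invented for the example, and it does not reflect how Tay was actually implemented): an online learner that updates on every user interaction, with no validation of its inputs, can be steered by a small group of coordinated users.

```python
# Hypothetical sketch of the "data diet" vulnerability: a learner that
# naively adapts to every input it is fed can be poisoned by users.
from collections import Counter

class OnlineWordModel:
    """Toy online learner: tracks word frequencies in user messages."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def ingest(self, message: str) -> None:
        # No filtering or validation: every input shifts the model.
        self.counts.update(message.lower().split())

    def most_likely_word(self) -> str:
        # The model's "behavior" reflects whatever dominated its inputs.
        return self.counts.most_common(1)[0][0]

model = OnlineWordModel()

# Benign traffic establishes normal behavior.
for msg in ["hello there", "nice weather today", "hello friend"]:
    model.ingest(msg)

# A small group of coordinated users repeats a hostile token;
# because inputs are ingested uncritically, it quickly dominates.
for _ in range(10):
    model.ingest("offensive_token")

print(model.most_likely_word())  # the poisoned token now dominates
```

The implementation here is correct in the narrow sense (it counts words exactly as specified), yet its learned behavior is wrong for its context. That gap between implementation correctness and behavioral validity is the distinction the passage above draws.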

Original file: RAND_RR1744.pdf