Intelligence in Our Image: Risks of Bias and Errors in Artificial Intelligence

a dish or steps for calculating your federal tax burden. Church and Turing’s definitions lead directly to the common conception of algorithms as just code for crunching numbers.

The late Marvin Minsky (1961) and other pioneering AI thinkers (such as John McCarthy and Frank Rosenblatt) whose work followed Church and Turing were thinking about a different aspect of algorithms: empowering computing systems with the gift of intelligence. A prime hallmark of intelligence is the ability to adapt or learn inductively from “experience” (i.e., data). Their efforts led to the formulation of learning algorithms for training computing systems to learn and/or create useful internal models of the world. These algorithms also consist of rote sequential computational procedures at the microscopic level. The difference is that they are not just crunching numbers through static mathematical models; they update their behavior iteratively based on models tuned in response to their experience (input data) and performance metrics.3 Yet the problem of learning remains notoriously hard.4 Many of the initial algorithms tried to mimic biological behaviors.5 The grand goal was (and still is) to create autonomous AI capable of using such advanced learning algorithms to rival or exceed fluid human intelligence. Such systems are often called general AI systems in current discussions. Commercial successes—such as Google’s recent AlphaGo triumph (Silver et al., 2016) and Micro-

3 Valiant (2013) argues that evolution itself is a type of learning algorithm, iteratively adapting biological and social traits to improve a reproductive-fitness performance metric.

4 The problem of learning to distinguish between truth and falsehood from experience is more formally known as the problem of induction. The central question is whether generalizations based on limited past experience can justifiably be applied to new scenarios. Philosophers have given much thought to this problem. David Hume in particular expressed concerns about the use of induction for learning about causality (Hume, 2000, Sec. VII). Bertrand Russell explains the point with the example of a chicken that has learned, inductively from extensive past experience, to identify the farmer as the cause of its daily meals; it has no reason to expect the farmer to be the cause of its final demise (Russell, 2001).

5 There was an initial flurry of interaction between AI pioneers and psychologists (both behaviorist and physiologically inclined) to try to understand how animals learn new behaviors.
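The distinction the passage draws between a static, number-crunching procedure and a learning algorithm that tunes an internal model against a performance metric can be made concrete with a short sketch. The Python below is a minimal illustration added here, not code from the report; the perceptron-style update rule, the function names, and the toy data are all assumptions chosen only to show behavior changing iteratively with experience.

    # Minimal sketch contrasting a static algorithm with a learning algorithm.
    # The perceptron-style update rule is an illustrative assumption, not a
    # method prescribed by the report.

    def static_algorithm(income, rate=0.2):
        """A fixed computational procedure: the same inputs always yield the same output."""
        return income * rate

    def train_learner(examples, epochs=10, lr=0.1):
        """A learning algorithm: the weights (internal model) are tuned iteratively
        in response to experience (labeled examples) and a performance metric (errors)."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            errors = 0
            for (x1, x2), label in examples:          # "experience" = input data
                prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                error = label - prediction            # performance feedback
                if error != 0:
                    w[0] += lr * error * x1           # update the internal model
                    w[1] += lr * error * x2
                    b += lr * error
                    errors += 1
            if errors == 0:                           # behavior has adapted to the data
                break
        return w, b

    # Usage: the learner separates two clusters of points it was never explicitly
    # programmed to recognize; its behavior emerges from the data it sees.
    data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
    weights, bias = train_learner(data)
    print(weights, bias)

Running the sketch, the learner’s weights start at zero and shift on each pass until its predictions match the labeled examples; that is the sense in which its behavior is tuned by experience and a performance metric rather than fixed in advance, as in static_algorithm.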
