An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence

…regulatory constraints on decision support systems to address the runaway negative externalities (such as cumulative disadvantages) these systems foster. Barocas and Nissenbaum (2014) discussed how algorithms and big data also circumvent any legal privacy guarantees we have grown to expect. The standard safeguard against algorithmic disparate impact effects is to hide sensitive data fields (such as gender and race) from learning algorithms. But the literature on modern reidentification techniques recognizes that learning algorithms can implicitly reconstruct sensitive fields and use these probabilistically inferred proxy variables for discriminatory classification (DeDeo, 2015; Feldman et al., 2015). The power of these inference techniques only grows as more data sets are added to the training base (Ohm, 2010). This poses a problem for regulation; it is possible to legislate against the explicit use of protected information (such as race and gender in the Equal Employment Opportunity and Fair Housing acts).13 But it is harder to legislate against the use of probabilistically inferred information. Pasquale (2015) reported that data fusion agencies already take advantage of this regulatory loophole.

Algorithm designers and researchers have recently begun to work on technical approaches to certify and/or remove algorithmic disparate impacts. Feldman et al. (2015) presented an approach to certifying that a classification algorithm is fair according to U.S. legal standards. Their correction procedure performs rank-preserving modifications to the input data to control disparate impact. DeDeo (2015) presented a method for modifying an algorithm's output to decorrelate it from protected variables. Dwork et al. (2012) leveraged some of Dwork's own insights on privacy (Dwork, 2008a; Dwork, 2008b) to develop a theoretical framework for fair classification algorithms. This approach looks for context-sensitive fair similarity metrics for comparing and classifying individuals regardless of protected category membership.

13 The Fair Housing Act was set forth in Title VIII of the Civil Rights Act of 1968 and is codified in 42 U.S. Code 3504–3606.
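The proxy-variable risk described above can be made concrete with a short sketch. The following Python code is a hypothetical illustration rather than any cited author's method: it trains an auxiliary classifier to predict a withheld protected attribute from the remaining features, and cross-validated accuracy well above the majority-class baseline indicates that the attribute is redundantly encoded and available as a proxy. The dataset, column names, and model choice are assumptions for illustration.

# Illustrative sketch only (not the cited authors' procedure): estimate how
# well a "hidden" protected attribute can be reconstructed from the features
# a model is actually allowed to see. High reconstruction accuracy means
# downstream models can discriminate through proxies even though the
# protected field is never given to them directly.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_reconstruction_score(df: pd.DataFrame, protected_col: str) -> float:
    # Cross-validated accuracy of predicting the protected attribute from all
    # other columns; compare against the majority-class baseline rate.
    X = pd.get_dummies(df.drop(columns=[protected_col]))
    y = df[protected_col]
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5).mean()

# Hypothetical usage (file and column names are assumed):
# applicants = pd.read_csv("applicants.csv")  # e.g., zip_code, income, race, ...
# print(proxy_reconstruction_score(applicants, protected_col="race"))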
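The U.S. legal standard that certification approaches such as Feldman et al.'s invoke is commonly operationalized as the four-fifths (80 percent) rule: the rate of favorable outcomes for the least-favored group should be at least 80 percent of the rate for the most-favored group. The sketch below computes that ratio for a classifier's predictions. It is a minimal illustration of the standard under stated assumptions, not Feldman et al.'s certification or repair algorithm, and the variable names and edge-case handling are assumptions.

# Minimal sketch of a disparate impact check based on the four-fifths rule;
# not Feldman et al.'s algorithm. y_pred holds a model's decisions and
# `groups` holds the protected attribute (used only for auditing, never as a
# model input).
import numpy as np

def disparate_impact_ratio(y_pred: np.ndarray, groups: np.ndarray,
                           favorable: int = 1) -> float:
    # Ratio of the lowest group-wise favorable-outcome rate to the highest.
    # Under the four-fifths rule, a ratio below 0.8 is treated as evidence
    # of disparate impact.
    rates = [np.mean(y_pred[groups == g] == favorable) for g in np.unique(groups)]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0

# Hypothetical usage:
# ratio = disparate_impact_ratio(y_pred, groups)
# print("passes the 80% rule" if ratio >= 0.8 else "evidence of disparate impact")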
