An Intelligence in Our Image

Latanya Sweeney and Nick Diakopoulos pioneered the study of misbehavior in Google systems (Sweeney, 2013; Diakopoulos, 2013; Diakopoulos, 2016). Their work exposed instances of algorithmic defamation in Google searches and ads. Diakopoulos discussed a canonical example of such algorithmic defamation in which search engine autocompletion routines, fed a steady diet of historical user queries, learn to make incorrect defamatory or bigoted associations about people or groups of people.10 Sweeney showed that such learned negative associations affect Google's targeted ads. In her example, merely searching for certain types of names led to advertising for criminal justice services, such as bail bonds or criminal record checking. Diakopoulos's examples included consistently defamatory associations for searches related to transgender issues.

Studies like Sweeney's and Diakopoulos's are archetypes in the growing field of data and algorithmic journalism. More news and research articles chronicle the many missteps of the algorithms that affect different parts of our lives, online and off. IBM's Jeopardy-winning AI, Watson, famously had to have its excessive swearing habit corrected after its learning algorithms ingested some unsavory data (Madrigal, 2013). There have also been reports on the effects of Waze's traffic routing algorithms on urban traffic patterns (Bradley, 2015). One revealing book describes the quirks of the data and algorithms underlying the popular OkCupid dating service (Rudder, 2014). More recently, former Facebook contractors revealed that Facebook's news feed trend algorithm was actually the result of subjective input from a human panel (Tufekci, 2016).

Others have begun writing on the effects of algorithms in governance, public policy, and messy social issues. Artificial agents have to contend with another layer of complexity and peril in these spaces.
Bad behavior here could have far-reaching, populationwide, and generational consequences. Citron (2007) reported on how the spread of algorithmic decisionmaking to legal domains deprives citizens of due process. More

10 Germany now holds Google partially responsible for the correctness of its autocomplete suggestions (Diakopoulos, 2013).