A Blog by Jonathan Low

 

Apr 12, 2015

Algorithmic Accountability: The Science of Knowing When Data-Driven Decisions Are Wrong

As a species we tend to have a fear of numbers. We learn, as babies, to communicate with words. Our survival to adulthood depends on knowing how to do that.

Numeracy is learned, generally in a traditional educational setting, and some people are better at it than others. One might argue that entire professions have chosen their careers based on an aversion to numbers (are you running with me, advertisers, PR specialists and marketers?), or at least they did before data became central to everything, everywhere.

But here we are. Data are us. And our fear of inadequacy, our ignorance or our resentment tends to encourage deferring to data. If it's a number, it must be right, because numbers are numbers, not some slick face-man trying to put one over on us. Except, of course, that someone had to produce those numbers, and she might just have an attitude or a point of view or, heaven forfend, an agenda. Which means that data can represent whatever its generator or interpreter wants it to, just like the translation of a book from the original Serbo-Croatian or Urdu.

So, as the following article explains, a new school of risk management known as algorithmic accountability has sprung up to attempt to measure and assess the potential for such data-driven decisions to go wrong. Now all we need to know is who is holding the algorithmic accountants accountable. JL

Steve Lohr reports in the New York Times:

Big companies and start-ups are beginning to use technology in decisions like medical diagnosis, crime prevention and loan approvals. The application of data science to such fields raises questions of when close human supervision of an algorithm’s results is needed.
Armies of the finest minds in computer science have dedicated themselves to improving the odds of making a sale. The Internet-era abundance of data and clever software has opened the door to tailored marketing, targeted advertising and personalized product recommendations.
Shake your head if you like, but that’s no small thing. Just look at the technology-driven shake-up in the advertising, media and retail industries.
This automated decision-making is designed to take the human out of the equation, but it is an all-too-human impulse to want someone looking over the result spewed out of the computer. Many data quants see marketing as a low-risk — and, yes, lucrative — petri dish in which to hone the tools of an emerging science. “What happens if my algorithm is wrong? Someone sees the wrong ad,” said Claudia Perlich, a data scientist who works for an ad-targeting start-up. “What’s the harm? It’s not a false positive for breast cancer.”
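Perlich's point is about asymmetric error costs: the same mistake rate is trivial in ad targeting and grave in medical screening. A minimal sketch, with entirely invented confusion counts and per-error dollar costs, makes the asymmetry concrete:

```python
# Hypothetical illustration of the asymmetric-cost point above.
# All counts and dollar figures below are invented for illustration.

def expected_cost(false_positives, false_negatives, cost_fp, cost_fn):
    """Total expected cost of a classifier's mistakes."""
    return false_positives * cost_fp + false_negatives * cost_fn

# Identical error counts out of 10,000 decisions...
fp, fn = 300, 50

# ...but very different stakes per mistake.
ad_cost = expected_cost(fp, fn, cost_fp=0.01, cost_fn=0.05)
screening_cost = expected_cost(fp, fn, cost_fp=500, cost_fn=50_000)

print(f"ad targeting:      ${ad_cost:,.2f}")       # someone sees the wrong ad
print(f"medical screening: ${screening_cost:,.2f}") # a missed or false diagnosis
```

The algorithm's accuracy is identical in both rows; only the domain changes the harm, which is why "close human supervision" matters more in some applications than others.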
But the stakes are rising as the methods and mind-set of data science spread across the economy and society. Big companies and start-ups are beginning to use the technology in decisions like medical diagnosis, crime prevention and loan approvals. The application of data science to such fields raises questions of when close human supervision of an algorithm’s results is needed.
These questions are spurring a branch of academic study known as algorithmic accountability. Public interest and civil rights organizations are scrutinizing the implications of data science, both the pitfalls and the potential. In the foreword to a report last September, “Civil Rights, Big Data and Our Algorithmic Future,” Wade Henderson, president of The Leadership Conference on Civil and Human Rights, wrote, “Big data can and should bring greater safety, economic opportunity and convenience to all people.”
Take consumer lending, a market with several big data start-ups. Its methods amount to a digital-age twist on the most basic tenet of banking: Know your customer. By harvesting data sources like social network connections, or even by looking at how an applicant fills out online forms, the new data lenders say they can know borrowers as never before, and more accurately predict whether they will repay than they could have by simply looking at a person’s credit history.
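The lending approach described above amounts to scoring applicants on a mix of traditional and nontraditional signals. As a rough sketch, here is a toy logistic scoring model; the feature names and weights are invented for illustration, since a real lender would learn them from historical repayment data:

```python
import math

# Invented features and weights, sketching the "know your customer"
# scoring idea: a logistic model combining a traditional credit signal
# with hypothetical nontraditional ones (form-filling care, network data).
WEIGHTS = {
    "credit_history_score": 2.0,  # traditional banking signal
    "form_fill_care": 1.0,        # how carefully the online form was completed
    "network_signal": 0.5,        # hypothetical social-connection feature
}
BIAS = -1.5

def repayment_probability(applicant):
    """Estimated probability the applicant repays, via a logistic score."""
    z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

careful = {"credit_history_score": 0.9, "form_fill_care": 0.8, "network_signal": 0.6}
sloppy  = {"credit_history_score": 0.9, "form_fill_care": 0.1, "network_signal": 0.6}

# Identical credit histories; the nontraditional signals shift the score.
print(round(repayment_probability(careful), 3))
print(round(repayment_probability(sloppy), 3))
```

The design choice worth noting is exactly what algorithmic accountability scrutinizes: once nontraditional inputs carry weight, two applicants with identical credit histories can receive different decisions for reasons neither can easily inspect.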
