A Blog by Jonathan Low

 

Feb 6, 2013

How Biases Distort Online Ad Delivery

We are becoming aware that the information, choices and ads we receive online are the product of our past searches, purchases and behaviors.

What is also becoming apparent is that there are biases built into some of these selections. These biases are harmful in several ways. First, they may limit our choices and narrow our opportunities in ways that are convenient for those designing the algorithms that govern them, but restrictive in terms of our own potential. Second, they may lead to sub-optimal and unproductive results because of the factors they are driven to exclude. And finally, they may actually discriminate against certain types of people in certain situations on the basis of nothing more than vocabulary, such as a name.

As the following article explains, some of those biases are far more explicit than we might have imagined possible in this day and age. Gender, race, religion, health, weight, height, education and a raft of other data we may consider incidental to the decision at hand may attain outsized importance within the confines of a search.

Part of the reason for this is inherent in the culture. Part of it may be by design. The companies that build the models dictating these choices vehemently deny that they tolerate bias; they may even genuinely try to guard against it. The larger problem may be that our culture has encoded these biases into its systems in ways that make them difficult to expose and delete.

The real risk is that a society that self-limits is a society - and economy - that is prone to underperformance, for whatever reason. JL

The MIT Technology Review reports:
“Have you ever been arrested? Imagine the question not appearing in the solitude of your thoughts as you read this paper, but appearing explicitly whenever someone queries your name in a search engine.” So begins Latanya Sweeney at Harvard University in a compelling paper arguing that racial discrimination plagues online ad delivery.

Many people will have experience of Googling friends, colleagues and relatives to find out about their online presence: the websites on which they appear, their pictures, hobbies and so on.

Sweeney’s interest is in the ads that appear alongside these results. When she entered her name in Google an ad appeared with the wording:

“Latanya Sweeney, Arrested? 1) Enter name and state. 2) Access full background checks instantly. www.instantcheckmate.com”

The wording is suggestive: it implies that Latanya Sweeney has a criminal record, the details of which can be accessed by clicking on the ad. But after following the link and paying the necessary subscription fee, Sweeney says she found no record of arrest.

What’s interesting is that Sweeney’s first name is itself suggestive, in this case that she is black. The question Sweeney asks is whether a similar search on a name suggestive of a white racial profile also serves up ads mentioning arrest records.

The answer is a powerful wake-up call. Sweeney says she has evidence that black-identifying names are up to 25 per cent more likely to be served with an arrest-related ad. “There is discrimination in delivery of these ads,” she concludes.

Sweeney gathered this evidence by collecting more than 2,000 names that were suggestive of race. For example, first names such as Trevon, Lakisha and Darnell suggest the owner is black, while names like Laurie, Brendan and Katie suggest the owner is white.

She then entered these, plus surnames, into Google.com and Reuters.com and examined the ads they returned. Most names generated ads for public records. However, black-identifying names turned out to be much more likely than white-identifying names to generate ads that included the word “arrest” (60 per cent versus 48 per cent). All of these ads came from www.instantcheckmate.com.

She says the results are statistically significant, with less than a 0.1 per cent probability that they arose by chance.
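To see what a claim like that involves, here is a minimal sketch of a two-proportion significance test in Python. The 60 and 48 per cent rates come from the article; the group sizes are invented for illustration, since the article does not give per-group counts, and this is not necessarily the exact test Sweeney used.

```python
# Illustrative two-proportion z-test on the reported rates (60% vs 48%).
# The group sizes below are hypothetical; Sweeney's paper reports the
# actual counts, which the article does not reproduce.
from math import sqrt, erf

n_black, n_white = 1000, 1000   # assumed sample sizes (not from the article)
p_black, p_white = 0.60, 0.48   # arrest-ad rates reported in the article

x_black = p_black * n_black     # implied counts of arrest-related ads
x_white = p_white * n_white

# Pooled proportion under the null hypothesis of no difference
p_pool = (x_black + x_white) / (n_black + n_white)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_black + 1 / n_white))
z = (p_black - p_white) / se

# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.2g}")
```

With these assumed group sizes the gap between 60 and 48 per cent gives a z-score above 5 and a p-value far below 0.1 per cent, which is consistent with the significance level the article reports.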

On Reuters.com, black-identifying names were 25 per cent more likely to be served with an arrest-related ad.

That’s an extraordinary result, and one that raises more questions than it answers. The biggest puzzle, of course, is what causes the ads to be served up in this pattern. Here the opacity of Google’s Adsense service obscures matters considerably.

Sweeney says there are essentially three possibilities. One is that www.instantcheckmate.com has set up its arrest-related ads to be served against black-identifying names. Another is that Google has somehow biased its ad-serving mechanism in this way.

A more insidious explanation is that society as a whole is to blame. If Google’s Adsense service learns which ad combinations are more effective, it would at first serve the arrest-related ads against all names at random. But this would change if it discovered that click-throughs are more likely when these ads appear against a black-identifying name. In other words, the results would merely reflect the discriminatory pattern of clicks from ordinary people, as the sketch below illustrates.
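A toy simulation can make this feedback loop concrete. The epsilon-greedy policy, the click-through rates and the group labels below are all illustrative assumptions rather than anything known about Adsense; the point is only that a revenue-maximising learner with no notion of race can end up reproducing a bias that originates in user clicks.

```python
# Toy simulation of the feedback loop described above: a click-optimizing
# ad server that never sees race as an input can still learn a
# discriminatory policy if users click arrest-related ads more often for
# one group of names. This is a hypothetical epsilon-greedy bandit, not
# Google's actual system.
import random

random.seed(0)

GROUPS = ["black-identifying", "white-identifying"]
ADS = ["arrest-related", "neutral"]

# Assumed click-through rates: the only bias lives in user behavior.
CTR = {
    ("black-identifying", "arrest-related"): 0.06,
    ("black-identifying", "neutral"):        0.04,
    ("white-identifying", "arrest-related"): 0.03,
    ("white-identifying", "neutral"):        0.05,
}

clicks = {key: 0 for key in CTR}  # observed clicks per (group, ad)
shows = {key: 0 for key in CTR}   # impressions per (group, ad)

def serve(group, epsilon=0.1):
    """Pick the ad with the best observed CTR for this group; explore sometimes."""
    if random.random() < epsilon:
        return random.choice(ADS)
    return max(ADS, key=lambda ad: clicks[(group, ad)] / shows[(group, ad)]
               if shows[(group, ad)] else 0.0)

for _ in range(100_000):
    group = random.choice(GROUPS)
    ad = serve(group)
    shows[(group, ad)] += 1
    if random.random() < CTR[(group, ad)]:  # simulated user click
        clicks[(group, ad)] += 1

for group in GROUPS:
    total = sum(shows[(group, ad)] for ad in ADS)
    share = shows[(group, "arrest-related")] / total
    print(f"{group}: arrest-related ads shown {share:.0%} of the time")
```

Under these assumed click rates, the learner ends up showing the arrest-related ad the vast majority of the time for one group and only rarely for the other, even though race never appears anywhere in its inputs.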

Of course, we can’t know without greater insight into the black box that is Google Adsense.

Clearly, Sweeney has discovered a serious problem here, given the impact an online presence can have on an individual’s employment prospects.

Whatever the cause, Sweeney says technology may offer some kind of solution. If the algorithms behind Adsense can reason about maximising revenues, she says, they ought to be able to reason about the legal and social consequences of certain patterns of click-throughs.
