Jill Priluck reports in the New Yorker:
On the day after Easter this year, an online poster retailer named David Topkins became the first e-commerce executive to be prosecuted under antitrust law. In a complaint that was scant on details, the U.S. Department of Justice’s San Francisco division charged Topkins with one count of price-fixing, in violation of the Sherman Act. The department alleged that Topkins, the founder of Poster Revolution, which was purchased in 2012 by Art.com, had conspired with other sellers between September of 2013 and January of last year to fix the prices of certain posters sold on Amazon Marketplace. Topkins pleaded guilty and agreed to pay a twenty-thousand-dollar fine. “We will not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the Internet using complex pricing algorithms,” Assistant Attorney General Bill Baer, of the department’s antitrust division, said. “American consumers have the right to a free and fair marketplace online, as well as in brick and mortar businesses.”
Casual observers might wonder why, for its first Sherman Act antitrust case against an online-sales executive, the Department of Justice targeted a relatively small-time retailer in the wall-décor industry. After all, Silicon Valley is no doubt replete with e-commerce executives who have colluded to bend rules and harm consumers. And the department’s case rested on allegations of fairly standard price-fixing behavior: according to prosecutors, Topkins and his co-conspirators, who were unnamed in the complaint, collected, exchanged, monitored, and discussed how much to charge for posters that were sold, distributed, and paid for on Amazon Marketplace from California to other states. Coupled with the details of the complaint, however, Baer’s statement suggests that prosecutors might have been interested in a tool underlying Topkins’s apparent misdeeds: an algorithm he had coded to instruct his company’s software to set prices.
The first section of the Sherman Antitrust Act, which was passed in 1890, amid the heyday of American oil, steel, and railroad monopolies, suggests the breadth of the activities that prosecutors and regulators have traditionally been able to challenge. “Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal,” it states. Since the Sherman Act was bolstered, in 1914, with the passage of the Clayton Act, the country’s antitrust apparatus has allowed the federal government to go after all kinds of businesses, and has typically encompassed new industries as they have emerged. Algorithm-driven (or bot-driven) selling, however, poses a new and formidable challenge to existing antitrust laws. If the practice hasn’t yet become a full-blown conundrum for prosecutors and regulators, the Topkins case suggests that it soon might. In securing a guilty plea, the Department of Justice was apparently able to rely on evidence of a “meeting of the minds” among co-conspirators. Topkins’s algorithm wasn’t an impediment to prosecution, because the seller had otherwise demonstrated a will to collude with other parties and then coded the algorithm to carry out the agreement. But often there is no evidence of a prior agreement when computers are in play, which means that antitrust prosecutions involving algorithms could be harder to prove in the future.
It’s likely that bot-driven price-fixing is more prevalent than the lack of prosecutions suggests. Algorithms are in high demand, and robotic sellers can combine with other automated pricing and selling mechanisms to monitor human activity and mine data in retail, services, and other areas, with few or no people involved. They can also make pricing predictions and decisions, reacting seamlessly to changes in the marketplace. Uber’s infamous surge pricing, for example, uses an algorithm to push up prices or, as Uber would put it, to balance supply and demand when many cars are needed simultaneously. When such algorithms go deeply awry, we notice: recall when, in 2011, Amazon priced “The Making of a Fly,” a paperback biology textbook, at $1,730,045.91, and that, during a snowstorm in 2013, Uber charged Jessica Seinfeld, the wife of Jerry Seinfeld, four hundred and fifteen dollars to drop off her kids at a sleepover and a bar mitzvah.
In a working paper published by the University of Oxford Centre for Competition Law and Policy, the researchers Ariel Ezrachi and Maurice E. Stucke explain that Uber’s algorithm can lead to horizontal collusion if it gives rise to an “alternative universe” that pushes up prices based on the perceived market value of a ride, rather than its actual market value. A human need not be involved. When Uber was criticized for the rise in rates that led to Jessica Seinfeld’s expensive trip, the C.E.O. and co-founder Travis Kalanick argued that the company’s algorithms, not the people working for it, were responsible. “We are not setting the price. The market is setting the price. We have algorithms to determine what that market is,” he said.
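Kalanick’s claim — that an algorithm, not a person, sets the price — can be illustrated with a toy multiplier. This sketch is entirely hypothetical (Uber’s actual formula is not public); it simply scales a base fare by the ratio of demand to supply:

```python
def surge_multiplier(requests, available_cars, cap=3.0):
    """Toy surge pricing: scale the fare by the ratio of ride requests
    to available cars, floored at 1.0 and capped. Not Uber's real formula."""
    if available_cars <= 0:
        return cap
    return min(cap, max(1.0, requests / available_cars))

base_fare = 20.0
# Supply matches demand: no surge.
print(base_fare * surge_multiplier(requests=10, available_cars=10))   # 20.0
# A snowstorm: demand swamps supply, and the fare hits the cap.
print(base_fare * surge_multiplier(requests=100, available_cars=5))   # 60.0
```

Nothing in the rule refers to what a ride “should” cost; the price emerges mechanically from the ratio, which is the sense in which “the market,” via the algorithm, sets it.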
Ezrachi and Stucke suggest other ways in which algorithms can behave as cartels. One of these would involve “predictable agents,” which are designed to deliver predictable outcomes in response to market conditions. According to Ezrachi and Stucke, these agents, when adopted by multiple actors, “more easily reach a tacit agreement, detect breaches and punish deviations.” The result is a “conscious parallelism” that leads to higher prices. The pair also identify the potential for price-fixing by “smart machines,” which are programmed to achieve an outcome that they pursue via self-learning and experimentation. Evidence of intent would be difficult to establish in both types of cases, especially ones involving smart machines.
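The “predictable agents” scenario can be sketched in a few lines of Python. In this toy model — a hypothetical rule, not drawn from any real seller’s code — two sellers each run the same transparent strategy: match the rival’s price whenever it is higher, and never price below cost. One seller probing upward then ratchets both prices up with no communication at all:

```python
def predictable_agent(own_price, rival_price, cost):
    """A deliberately transparent pricing rule: match the rival whenever
    it charges more, and never price below cost."""
    return max(own_price, rival_price, cost)

price_a, price_b, cost = 10.0, 10.0, 6.0
for _ in range(5):
    price_a += 0.50                                      # seller A probes upward
    price_b = predictable_agent(price_b, price_a, cost)  # seller B's rule matches
print(price_a, price_b)  # 12.5 12.5 -- parallel pricing, no messages exchanged
```

Because each rule is public and predictable, neither seller ever has to talk to the other — which is exactly why “conscious parallelism” is so hard to reach with the traditional evidence of an agreement.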
Salil Mehra, a professor at Temple University’s Beasley School of Law, notes in a forthcoming article in the Minnesota Law Review that the potential prosecution-proofing that algorithms provide to companies may leave cartels not only more likely to form but also more durable. Economists typically assert that cartels dissolve naturally after members cheat or become irrational. When computers are the actors, though, detection is faster and not prone to human errors or failings, making defection less likely. Automated participants can identify price changes more quickly, allowing defectors who lower prices at the expense of the group to be sifted out earlier. Given this dynamic, participants have little incentive to either “cheat” the group or to leave it. Put another way, computers are likely to handle the classic prisoner’s dilemma better than humans.
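Mehra’s point about the prisoner’s dilemma can be made concrete with a small simulation (the payoff numbers here are invented for illustration). With instantaneous detection, a “grim trigger” bot punishes any price cut immediately, so the one-round gain from defecting never outweighs the profits lost to permanent punishment:

```python
HIGH, LOW, BONUS = 10.0, 4.0, 2.0  # cartel profit, competitive profit, one-off gain from undercutting

def total_profit(defect_round=None, rounds=20):
    """Grim trigger: both bots charge the cartel price until a defection
    is detected, then both revert to the competitive price forever.
    Detection is instantaneous -- no human lag or error."""
    total, punished = 0.0, False
    for t in range(rounds):
        if punished:
            total += LOW
        elif t == defect_round:
            total += HIGH + BONUS  # steal extra sales for one round
            punished = True        # rival bots spot the price cut at once
        else:
            total += HIGH
    return total

print(total_profit())                # never defect: 200.0
print(total_profit(defect_round=5))  # defect once: 118.0 -- cheating doesn't pay
```

With humans, slow or error-prone detection shrinks the punishment and makes cheating tempting; with bots, the punishment is swift and certain, which is why the cartel holds.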
For decades, a movement has been under way to establish a broader legal definition of the level of intent and agreement that a cartel entails. Mehra cites the example of Richard A. Posner, later a judge on the Seventh Circuit Court of Appeals, who in 1968 pointed out the potential problem of companies making pricing decisions with an eye to maintaining “healthy” prices in the industry generally, even if firms aren’t explicitly colluding with competitors. “Tacit collusion is not an unconscious state,” Posner wrote. The late Harvard Law School professor Donald F. Turner, too, advocated for collusion to be defined along broader lines.
Decades on, the law has yet to address the potential for bots to exploit the gaps in antitrust law. Ezrachi and Stucke suggest that, in such cases, officials might have to “delve into the heart of the algorithm and establish whether it is designed in a way that would or may lead to exploitation.” It may also be possible for lawyers to point to the adverse effects of the algorithm, irrespective of how it was designed.
There might be room, too, for regulators to jump into the fray. Cartel cases are politically popular: according to the University of Michigan Law School’s Daniel Crane, both the Bush and Obama Administrations brought a steady stream of price-fixing and other fraud complaints. And in fact, just last month, the Federal Trade Commission created the Office of Technology, Research and Investigation, which will explore the effect of algorithms on markets. Should that lead to even more scrutiny of algorithm-based price-fixing, the bots may find that they’re no match for human will.