Apr 10, 2015

Will Algorithmic Misbehavior Prediction Really Make Businesses Like JPMorgan Less Bad?

Everything else has been outsourced; why not morality?

Since data has ostensibly delivered us from uncertainty, perhaps it can deliver us from evil. At least that is the 'solution' being proposed at an increasing number of institutions faced with chronic wrongdoing.

The problem, the righteous intone, is not so much that this sort of behavior is bad - who can afford to be judgmental in a highly competitive economy, after all? - but that the few bad apples who are caught mean that we all suffer as a result. Primarily because it cuts into bonuses.

So an algorithm that identifies probable wrongdoers based on behavioral clues could weed out those prone to what is euphemistically termed poor decision-making. The reality, however, may be that such statistical indicators simply identify those most likely to get caught because they are insufficiently skilled at covering their tracks. That is certainly not a bad thing from the corporate perspective. If changing behavior is actually the goal, though, everyone in an organization takes their cues from the top - not from an algorithm. JL

Matt Levine reports in Bloomberg:

Sometimes banks do bad things and then they get in trouble. The form of trouble that they get in is, for the most part, that they have to pay big fines. Many people find this to be an unsatisfying form of trouble. In particular, a fine looks a lot like a price, and raises the specter that big banks are just running around paying for the right to do bad things. People don't like the idea that penalties for misconduct might be just a "cost of doing business" for big banks. Back when Bank of America was getting fined billions of dollars every two weeks for mortgage badness, I took this line myself: Constant negotiated fines suck the moral dimension out of bank punishments, making it easier for bankers to rationalize misbehavior as just a losing trade rather than as something about which they should be ashamed.
Here is a Bloomberg News story about JPMorgan's use of algorithms "to identify rogue employees before they go astray." It is partly interesting for the algorithms, which will look at "dozens of inputs, including whether workers skip compliance classes, violate personal trading rules or breach market-risk limits," and then "refine those data points to help predict patterns of behavior." I suppose there is a certain creepiness to the algorithms -- "We’re taking technology that was built for counter-terrorism and using it against human language, because that’s where intentions are shown," says a guy, creepily -- though I don't want to overstate that. I used to work at a bank that blocked curse words in e-mail, because cursing in e-mail is apparently an early warning sign of a propensity to structure bad synthetic CDO-squareds. I am not especially troubled by the notion that people who break rules or skip compliance classes should be monitored a bit more closely for compliance. Seems like a pretty sensible algorithm to me.
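For the flavor of how such a model might work - and this is a toy sketch, not JPMorgan's actual system, whose inputs and methods aren't public - you could score employees on exactly the signals the article names. Every feature, weight, and data point below is an invented assumption:

```python
# A minimal sketch of behavioral risk-scoring, NOT JPMorgan's system.
# Features, weights, and labels are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-employee counts of the signals the article mentions:
# skipped compliance classes, personal-trading violations, risk-limit breaches.
X = np.column_stack([
    rng.poisson(0.5, n),   # compliance classes skipped
    rng.poisson(0.2, n),   # personal trading rule violations
    rng.poisson(0.1, n),   # market-risk limit breaches
])

# Synthetic labels: past misconduct gets likelier as the signals pile up.
logits = -3.0 + 0.8 * X[:, 0] + 1.2 * X[:, 1] + 1.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# The output is not a verdict, just a ranking: flag the highest scores
# for closer compliance monitoring.
scores = model.predict_proba(X)[:, 1]
flagged = np.argsort(scores)[-10:]
print("employees to review:", flagged)
```

The point of the design is modest: nobody gets fired by the model; the top of the ranking just gets watched a bit more closely, which is all the article claims the real system does.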
What I think is more interesting, though, is what drove JPMorgan into the arms of the algorithms:
A February memo from executives including Chief Operating Officer Matt Zames urged employees to flag compliance concerns to managers and reminded them that scandals hurt bonuses for everyone. 
And this:
Meeting the company’s financial targets depends on reducing legal bills. The investment bank’s return on equity will rise to 13 percent from last year’s 10 percent largely by cutting legal and other expenses, according to a February presentation.
Talk about taking the moral dimension out of bank punishments! The most unhinged fantasies of big-bank evil do not encompass a PowerPoint slide weighing the costs and benefits, from a bonus and/or return-on-equity perspective, of not committing any more fraud. Let's go look at the slide:
[Slide: JPMorgan investment bank ROE walk, from the February presentation]
JPMorgan managed to translate, "Let's stop manipulating currencies and electricity and whatever else we were going to manipulate" into a green bar with a 2.5% label on it. It's so beautifully bland.
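As a back-of-envelope matter, you can see how a green bar like that gets built. Here is a hedged sketch of the arithmetic; the equity and tax-rate figures below are placeholder assumptions, not numbers from the presentation:

```python
# Back-of-envelope: how much legal expense would have to go away to add
# 2.5 points of ROE? Equity and tax rate are hypothetical placeholders.
equity = 60e9         # assumed allocated equity for the investment bank
tax_rate = 0.30       # assumed effective tax rate
roe_gain = 0.025      # the slide's 2.5-point contribution

after_tax_savings = roe_gain * equity            # ROE = net income / equity
pretax_savings = after_tax_savings / (1 - tax_rate)
print(f"after-tax savings needed: ${after_tax_savings / 1e9:.1f}B")
print(f"pre-tax legal cost cut:   ${pretax_savings / 1e9:.1f}B")
```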
Here's the thing though. When you run a big bank, you will have misbehavior. Any big group of people will include some people who do bad things. That's not a problem of banking; it's a problem of statistics. Here is Warren Buffett:
Somebody is doing something today at Berkshire that you and I would be unhappy about if we knew of it. That's inevitable: We now employ more than 330,000 people and the chances of that number getting through the day without any bad behavior occurring is nil.
And he's jolly and beloved; imagine Jamie Dimon saying that.
On the other hand, the nice thing about banks is that they are good at managing problems of statistics. It's their whole business. A bank is a collection of weighted random number generators. You make a lot of loans, in the absolute certainty that some of them will default, but you have processes and algorithms and approaches to calibrate the defaults and keep them to a manageable level and price them appropriately. You trade stocks and bonds and currencies and derivatives, using a whole host of hotly debated statistical risk management models to reduce the chances that those trades will blow you up. Banking, like insurance, is a business built around knowing that bad things will happen, but trying to mitigate their effects with statistical methods.
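To make the statistics point concrete, here is a toy simulation of that loan-portfolio logic - every number in it is made up, but the shape of the argument is real: individual defaults are unpredictable, portfolio-level losses are not, so you price for them:

```python
# Toy illustration of "a bank is a collection of weighted random number
# generators." All parameters are made-up assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_loans = 10_000
principal = 100_000          # per loan
p_default = 0.02             # assumed annual default probability
loss_given_default = 0.4     # assumed fraction of principal lost

# Expected loss per loan: the cost you know is coming and price into the spread.
expected_loss = p_default * loss_given_default * principal
print(f"expected loss per loan: ${expected_loss:,.0f}")

# Simulate one year: some loans certainly default, roughly as predicted.
defaults = rng.random(n_loans) < p_default
realized_loss = defaults.sum() * loss_given_default * principal
print(f"defaults: {defaults.sum()}, realized loss: ${realized_loss:,.0f}")
```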
All JPMorgan is doing with its fancy algorithms is applying that same reasoning to its own employees' misbehavior. It knows its employees will do bad stuff. It knows that will cost it money, dragging down its ROE and its employees' bonuses. The goal is to minimize those costs and maximize ROE (and bonuses). That is not a noble goal, exactly, but it happens to coincide with the regulatory goal of JPMorgan not doing bad things.
When you phrase that regulatory goal as a moral absolute, it is impossible: Someone, somewhere, will do bad things. Telling banks to do something impossible just makes them grumpy and sneery; it encourages distrust of regulators and strong rhetorical zero-tolerance policies rather than practical efforts to manage known risks. But turning misbehavior into a statistical problem, a cost of doing business, makes it tractable. Banks know how to cut costs! Just get an algorithm in there, clean it right up.
You could object. The bank's goal is not to minimize legal costs, but rather to optimize them; a trade that increases expected profit more than it increases expected legal risk is good for ROE, and so JPMorgan's algorithms might allow it even if regulators wouldn't. But that is in some sense a trivial problem; all regulators have to do is set the expected fine at or higher than the expected social cost of JPMorgan's misbehavior, and then JPMorgan will go off and solve for a level of misbehavior that is both individually and socially optimal.

And any algorithm, like the regulatory regime it implements, is subject to gaming; the most determined JPMorgan wrongdoers will keep it clean over e-mail and show up to all their compliance classes in order to get away with the big stuff. Here is a considerably less optimistic article about JPMorgan's use of compliance algorithms, which I once characterized as showing "that JPMorgan's obligation is not so much to catch money laundering as it is to persuade regulators -- both before and after any money laundering occurs -- that it has the right systems in place to detect money laundering." Algorithms and procedures and records are great for demonstrating compliance, whether or not they're any good for creating compliance.
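To see why the calibration argument works in principle, here is a stylized sketch of the fines-as-prices logic, with purely hypothetical numbers: if the expected fine equals the social harm, the bank's private calculation and the social one give the same answer:

```python
# Stylized fines-as-prices logic; every number here is hypothetical.

def bank_does_trade(profit, p_caught, fine):
    """The bank acts iff expected profit exceeds expected penalty."""
    return profit > p_caught * fine

social_harm = 1_000_000   # assumed harm a bad trade imposes on others
p_caught = 0.10           # assumed ex ante chance regulators detect it

# Calibrate the fine so the expected penalty equals the social harm.
fine = social_harm / p_caught   # $10,000,000

for profit in (500_000, 2_000_000):
    private = bank_does_trade(profit, p_caught, fine)
    social = profit > social_harm   # trade is socially worthwhile
    print(f"profit ${profit:,}: bank does it: {private} | socially optimal: {social}")
```

With the fine set that way, the two columns always agree - which is the sense in which the problem is "trivial," and footnote 7's point is that getting the harm and the detection probability right is the part that isn't.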
Still I mostly find JPMorgan's algorithms pretty cheery. Regulators didn't like some stuff JPMorgan was doing. They communicated their dislike to JPMorgan in a language it understood (money). JPMorgan reacted to that communication by setting about solving the problem in the ways it understands (algorithms, computers, statistics). The system, in its lurchy and inscrutable way, is working.
  1. There is a famous behavioral economics paper on day-care centers; you may know it from "Freakonomics." The result is that if you tell people to show up at 4 p.m. to pick up their kids from day care, they mostly do, but sometimes don't. But if you fine them for being late, then they're more likely to be late; the fine is just a cost of doing day-care. The paper's title is "A Fine Is a Price."
  2. Not a JPMorgan guy, by the way, but "Tim Estes, chief executive officer of Digital Reasoning Systems Inc.," which "counts Goldman Sachs Group Inc. and Credit Suisse Group AG as clients and investors, but not JPMorgan." This isn't just JPMorgan: Everyone's getting into the algorithmic-misbehavior-prediction business.
  3. Disclosure: As you might know if you have a Bloomberg terminal, my current employer also monitors e-mails for cursing.
  4. So I sympathize with Lee Hale, the global head of sanctions at HSBC, who notoriously said that "some big breach, some regulatory breach" at the bank was "cast-iron certain" in the future. Of course it is! It's cast-iron certain at every bank! The probability of some big regulatory breach occurring at some point in the future at a large entity subject to hundreds of complex and overlapping regulatory regimes is one. 
  5. Basel bank capital regulation even requires a layer of operational risk capital, basically requiring banks to calculate (and have equity against) their risk of future rogue trading and fraud. 
  6. This assumes that the algorithms are designed to maximize return on equity, which is a stretch; I'm sure that they're mostly designed for compliance, and that the optimization is pretty loose. The Zames memo also reminds executives that the problems go beyond fines and cause reputational damage, though I suppose you could try to quantify that too. 
  7. I mean, ha ha ha. It is not a trivial problem in practice. It seems quite plausible that financial-crisis-related mortgage-fraud fines were too small, by several orders of magnitude, relative to the harm caused by mortgage fraud. On the other hand I suspect that the FX-manipulation fines are too big, though I could be persuaded otherwise. In particular the FX fines look much bigger than the banks' profits from FX manipulation, but (1) their profits may not correctly measure the costs they imposed and (2) I have no way of measuring their ex ante probability of getting caught.
