Greg Satell reports in Digital Tonto:
Some look at the growing excitement around causal AI and scoff that it is just common sense. But our inability to encode a basic understanding of cause-and-effect relationships into our algorithms has been an impediment to making machines truly smart. Causal approaches to AI are being deployed in healthcare to understand the causes of complex diseases and evaluate which interventions may be most effective. As systems detect confounders and imagine counterfactuals, we'll be able to evaluate not only the effectiveness of interventions that have been tried, but also of those that haven't, which will help us come up with better solutions.
In 2011, IBM's Watson system beat the best human players on the game show Jeopardy! Since then, machines have shown that they can outperform skilled professionals in everything from basic legal work to diagnosing breast cancer. It seems that machines just get smarter and smarter all the time.

Yet that is largely an illusion. While even a very young human child understands the basic concept of cause and effect, computers rely on correlations. In effect, while a computer can associate the sun rising with the day breaking, it doesn't understand that one causes the other, which limits how helpful computers can be.

That's beginning to change. A group of researchers, led by artificial intelligence pioneer Judea Pearl, is working to help computers understand cause and effect based on a new causal calculus. The effort is still in its nascent stages, but if they're successful we could be entering a new era in which machines not only answer questions, but help us pose new ones.

Observation and Association
Most of what we know comes from inductive reasoning. We make some observations and associate those observations with specific outcomes. For example, if we see animals going to drink at a watering hole every morning, we would expect to see them at the same watering hole in the future. Many animals share this type of low-level reasoning and use it for hunting.

Over time, humans learned how to store these observations as data, and that has helped us make associations on a much larger scale. In the early years of data mining, data was used to make very basic types of predictions, such as the likelihood that somebody buying beer at a grocery store will also want to buy something else, like potato chips or diapers.

The achievement of the last decade or so is that advances in algorithms, such as neural networks, have allowed us to make much more complex associations. To take one example, systems that have observed thousands of mammograms have learned to identify the ones that show a tumor with a very high degree of accuracy.

However, and this is a crucial point, the system that detects cancer doesn't "know" it's cancer. It doesn't associate the mammogram with an underlying cause, such as a gene mutation or lifestyle choice, nor can it suggest a specific intervention, such as chemotherapy. Perhaps most importantly, it can't imagine other possibilities and suggest alternative tests.

Confounding Intervention
The reason that correlation is often very different from causality is the presence of something called a confounding factor. For example, we might find a correlation between high readings on a thermometer and ice cream sales and conclude that if we put the thermometer next to a heater, we can raise sales of ice cream.

I know that seems silly, but problems with confounding factors arise in the real world all the time. Data bias is especially problematic. If we find a correlation between certain teachers and low test scores, we might assume that those teachers are causing the low test scores when, in actuality, they may be great teachers who work with problematic students.

Another example is the high degree of correlation between criminal activity and certain geographical areas, where poverty is a confounding factor. If we use zip codes to predict recidivism rates, we are likely to give longer sentences and deny parole to people because they are poor, while those from more privileged backgrounds get off easy.

These are not at all theoretical examples. In fact, they happen all the time, which is why caring, competent teachers can, and do, get fired for those particular qualities and people from disadvantaged backgrounds get mistreated by the justice system. Even worse, as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it's so important that we design our systems to be auditable, explainable and transparent.

Imagining A Counterfactual
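The thermometer-and-ice-cream example above is easy to make concrete. Here is a minimal simulation, in plain Python with every number invented for illustration: a hidden common cause (temperature) drives both the thermometer reading and ice cream sales, so the two are strongly correlated even though neither causes the other, and "intervening" on the thermometer leaves sales untouched.

```python
import random

random.seed(0)

# Hidden common cause: the day's temperature, in degrees Celsius.
temperature = [random.uniform(10, 35) for _ in range(10_000)]

# Both observed variables are driven by temperature plus noise;
# neither causes the other.
reading = [t + random.gauss(0, 1) for t in temperature]      # thermometer
sales = [20 * t + random.gauss(0, 30) for t in temperature]  # ice cream sales

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Observationally, readings and sales move together almost perfectly.
print(f"observed correlation: {corr(reading, sales):.2f}")  # close to 1

# Intervention: put the thermometer next to a heater, so its reading
# no longer depends on temperature. Sales, of course, do not change.
forced = [60 + random.gauss(0, 1) for _ in temperature]
print(f"after intervention:   {corr(forced, sales):.2f}")   # close to 0
```

A purely correlational model cannot tell these two situations apart; only a model that distinguishes observing a variable from setting it can predict that the second correlation collapses.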
Another confusing thing about causation is that not all causes are the same. Some causes are sufficient in themselves to produce an effect, while others are necessary, but not sufficient. Obviously, if we intend to make some progress we need to figure out what type of cause we're dealing with. The way to do that is by imagining a different set of facts.

Let's return to the example of teachers and test scores. Once we have controlled for problematic students, we can begin to ask if lousy teachers are enough to produce poor test scores or if there are other necessary causes, such as poor materials, decrepit facilities, incompetent administrators and so on. We do this by imagining counterfactuals, such as "What if there were better materials, facilities and administrators?"

Humans naturally imagine counterfactuals all the time. We wonder what would be different if we took another job, moved to a better neighborhood or ordered something else for lunch. Machines, however, have great difficulty with things like counterfactuals, confounders and other elements of causality because there has been no standard way to express them mathematically.

That, in a nutshell, is what Judea Pearl and his colleagues have been working on over the past 25 years, and many believe that the project is finally ready to bear fruit. Combining humans' innate ability to imagine counterfactuals with machines' ability to crunch almost limitless amounts of data could be a real game changer.

Moving Towards Smarter Machines
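The adjustment described above, controlling for problematic students before judging the teachers, can be sketched in a few lines of plain Python. This is a toy simulation with every number invented for illustration; the technique it demonstrates is a simple stratified form of what Pearl's framework calls backdoor adjustment.

```python
import random

random.seed(1)

# Toy model (all numbers invented). A student's background is a
# confounder: it influences both which teacher the student is assigned
# to and how well the student scores. The two teachers here are equally
# good, so any gap we observe in raw averages is spurious.
records = []  # (teacher, struggling, score)
for _ in range(20_000):
    struggling = random.random() < 0.5
    # Struggling students are mostly assigned to teacher "B".
    prob_b = 0.8 if struggling else 0.2
    teacher = "B" if random.random() < prob_b else "A"
    base = 60 if struggling else 85
    records.append((teacher, struggling, base + random.gauss(0, 5)))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison of raw averages: teacher B looks far worse.
naive = {t: mean([s for tt, _, s in records if tt == t]) for t in "AB"}

def adjusted(teacher):
    """Stratify by the confounder, then reweight each stratum's mean
    by how common that stratum is in the whole population."""
    total = 0.0
    for group in (True, False):
        stratum = [s for t, g, s in records if t == teacher and g == group]
        weight = sum(1 for _, g, _ in records if g == group) / len(records)
        total += weight * mean(stratum)
    return total

print(f"naive gap:    {naive['A'] - naive['B']:.1f}")       # large
print(f"adjusted gap: {adjusted('A') - adjusted('B'):.1f}")  # near zero
```

The naive comparison would get teacher B fired; the adjusted comparison shows the two teachers are indistinguishable once student background is accounted for.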
Make no mistake, AI systems' ability to detect patterns has proven to be amazingly useful. In fields ranging from genomics to materials science, researchers can scour massive databases and identify associations that a human would be unlikely to detect manually. Those associations can then be studied further to validate whether they are useful or not.

Still, the fact that our machines don't grasp even simple truths, such as that thermometers don't drive ice cream sales, limits their effectiveness. As we learn how to design our systems to detect confounders and imagine counterfactuals, we'll be able to evaluate not only the effectiveness of interventions that have been tried, but also those that haven't, which will help us come up with better solutions to important problems.

For example, in a 2019 study the Congressional Budget Office estimated that raising the national minimum wage to $15 per hour would result in a decrease in employment of anywhere from zero to four million workers, based on a number of observational studies. That's an enormous range. However, if we were able to identify and mitigate confounders, we could narrow down the possibilities and make better decisions.

While still nascent, the causal revolution in AI is already underway. QuantumBlack, a McKinsey company, recently announced the launch of CausalNex, an open source library designed to identify cause-and-effect relationships in organizations, such as what makes salespeople more productive. Causal approaches to AI are also being deployed in healthcare to understand the causes of complex diseases such as cancer and evaluate which interventions may be the most effective.

Some look at the growing excitement around causal AI and scoff that it is just common sense. But that is exactly the point. Our historic inability to encode a basic understanding of cause-and-effect relationships into our algorithms has been a serious impediment to making machines truly smart. Clearly, we need to do better than merely fitting curves to data.