A Blog by Jonathan Low

 

Jun 28, 2013

The Era of Cognitive Computing

Humans have always tried to improve their tools. From wood to stone to metal. And now, telescoping time, from tabulating to programming to cognition.

We have gone from developing tools that help us overcome our physical limitations to creating those that enhance our intellectual abilities. We are evolving these devices to move beyond instruction to cognition so that they can recognize threats or opportunities and then help us choose the best options.

This process is driven by our understanding of how much we don't know. And by our need to refine the reams of raw data we now have available into something more useful. There is simply too much of it for us to make sense of on our own, so we are employing machines to assist us with that task. Not just to identify and categorize, but to interpret and project. This is a bit scary for us, because we always assumed that the ability to make judgements was what separated us from other species. Asking for assistance with that feels like an admission of weakness to some, and it stokes fears that the Luddites were right: that our ability to make the kind of middle-class living to which we have become accustomed is threatened.

To others, who can see past the immediate question of technology adoption, the transition is liberating, because in their optimism they believe that this frees humans to pursue other tasks and dreams that may be more productive and, ultimately, more beneficial. JL

Irving Wladawsky-Berger comments in his blog:

Tools have played a central role in human evolution since our ancestors first developed hand axes and similar stone tools a few million years ago.  Ever since, we’ve been co-evolving right alongside the tools we create.  “We shape our tools and they in turn shape us,” observed noted author and educator Marshall McLuhan in the 1960s.
The Industrial Revolution led to dramatic improvements in productivity and standard of living over the past two hundred years.  This is due largely to the machines we invented to make up for our physical limitations - the steam engines that enhanced our physical power, the railroads and cars that made up for our slow speed, and the airplanes that gave us the ability to fly.
Similarly, for the past several decades computers have been augmenting our intelligence and problem-solving capabilities.  And, according to IBM’s John Kelly and Steve Hamm, there is much more to come.  In Smart Machines: IBM’s Watson and the Era of Cognitive Computing, Research director John Kelly and writer and strategist Steve Hamm note that “We are at the dawn of a major shift in the evolution of technology.  The changes that are coming over the next two decades will transform the way people live and work, just as the computing revolution has transformed the human landscape over the past half century.  We call this the era of cognitive computing.”
Their book will be published this fall.  In this recently released excerpt, Kelly and Hamm point out that cognitive systems represent the third era in the history of computing.  In the first era, computers were essentially tabulating machines that counted things.  The tabulating era began in the 19th century with the work of Charles Babbage, Herman Hollerith and other inventors.  These early computers were used in a number of applications, including national population censuses and the control of looms and other industrial machines.  Next came the programmable computing era, which emerged in the 1940s.  Most computers in use today are based on the von Neumann architectural principles laid out in 1945 by mathematician John von Neumann.  Any problem that can be expressed as a set of instructions can be codified in software and executed in such stored-program machines.  This architecture has worked very well for many different kinds of scientific, business, government and consumer applications.  But its very strength, the ability to break down a problem into a set of instructions to be embedded in software, is proving to be its key limitation in the emerging world of big data.
Digital technologies are now found all around us, from the billions of mobile devices carried by almost every person on the planet to the explosive growth of what McKinsey is now calling the Internet of All Things.  These digital devices are generating gigantic amounts of information every second of every hour of every day, and we are now asking our computers to help us make sense of all this data.  What is it telling us about the environment we live in?  How can we use it to make better decisions?  Can it help us understand our incredibly complex economies and societies?
This kind of data-driven computing is radically different from the instruction-driven computing we’ve been living with for decades.  And, in fact, such data-driven, sense-making, insight-extracting, problem-solving cognitive computers seem to have more in common with the structure of the human brain than with the architecture of a classic Von Neumann machine.  But, while inspired by the way our brains process and make sense of information, the objective of cognitive machines is not to think like a human, something we barely understand.
Rather, we want our cognitive machines to deal with large amounts and varieties of unstructured information in real time.  Our brains have evolved to do so quite well over millions of years.  But our brains can’t keep up with the huge volumes of information now coming at us from all sides.  So, just as we invented industrial machines to help us overcome our physical limitations, we now need to develop a new generation of machines to help us get around our cognitive limitations.
The quest for machines that exhibit the kind of problem-solving intelligence we associate with humans is not new.  Artificial intelligence (AI) was one of the hottest areas in computer science in the 1960s and 1970s.  Many of the AI leaders in those days were convinced that you could build a machine as intelligent as a human being within a couple of decades.  They were trying to do so by somehow programming the machines to exhibit intelligent behavior, even though to this day we have no idea what intelligence is, let alone how to translate intelligence into a set of instructions to be executed by a machine.  In the 1980s, the Japanese government even mounted a major national program, the Fifth Generation Computer Project, to develop highly sophisticated AI-like machines and programming languages.  After years of unfulfilled promises, a so-called AI winter of reduced interest and funding set in.
But, while these ambitious AI approaches met with disappointment, a more applied, focused use of AI techniques was making progress, such as the use of natural language processing with limited vocabularies in voice response systems and the development of industrial robots for manufacturing applications.  The biggest breakthrough in these engineering-oriented, AI-ish applications occurred when we switched paradigms.  Instead of trying to program computers to act intelligently - an approach that had not worked so well in the past - we embraced a statistical, brute-force approach based on analyzing vast amounts of information using powerful computers and sophisticated algorithms.
We discovered that such a statistical, information-based approach produced something akin to intelligence or knowledge.  Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely.  The more information you had, the more powerful the supercomputers, the more sophisticated the algorithms, the better the results.  Deep Blue, IBM’s chess-playing supercomputer, demonstrated the power of such a statistical, brute-force approach by beating then-reigning world chess champion Garry Kasparov in a celebrated match in May 1997.
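To make the paradigm shift concrete, here is a minimal Python sketch of the statistical idea: instead of hand-coding rules for intelligent behavior, the system counts evidence in labeled examples and lets those counts drive its decisions.  The toy corpus and the naive Bayes scoring below are illustrative assumptions only, not how Deep Blue or any IBM system actually works.

```python
# Minimal sketch of the statistical paradigm: no hand-written rules,
# just counts gathered from labeled examples (a toy naive Bayes classifier).
from collections import Counter, defaultdict
import math

# Illustrative training data; any labeled text corpus would do.
training = [
    ("the engine roared down the track", "trains"),
    ("the locomotive pulled ten freight cars", "trains"),
    ("the pitcher threw a perfect game", "baseball"),
    ("the batter hit a home run", "baseball"),
]

word_counts = defaultdict(Counter)   # per-label word frequencies
label_counts = Counter()             # how often each label appears

for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    """Score each label by log P(label) + sum of log P(word | label)."""
    words = text.split()
    vocab = len(set(w for c in word_counts.values() for w in c))
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in words:
            # Laplace smoothing so unseen words don't zero out a label.
            score += math.log((word_counts[label][w] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("the freight train left the station"))  # -> "trains"
```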
Since that time, analyzing or searching large amounts of information has become increasingly important and commonplace in a wide variety of disciplines.  Today, most of us use search engines as the primary mechanism for finding information on the World Wide Web.  It is amazing how useful these mostly keyword-based approaches have proven to be in everyday use.  And, beyond these word-oriented search engines, statistical, information-based systems are being extended in a number of directions.
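The keyword search mentioned above rests on a very simple data structure, the inverted index: each word points to the documents that contain it, and a query is answered by intersecting those lists.  A toy sketch, with a made-up three-document corpus (real engines add ranking, stemming and much more):

```python
# Toy inverted index: the core data structure behind keyword search.
from collections import defaultdict

documents = {                       # illustrative corpus
    1: "cognitive computing augments human decision making",
    2: "steam engines augmented human physical power",
    3: "search engines rank documents by keyword relevance",
}

index = defaultdict(set)            # word -> set of document ids
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return ids of documents containing every query word."""
    word_sets = [index[w] for w in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

print(search("human power"))        # -> {2}
```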
In February 2011, Watson, IBM’s question-answering system, won the Jeopardy! Challenge against the two best human Jeopardy! players.  Watson demonstrated that computers could now extract meaning from the unstructured knowledge developed by humans in books, articles, newspapers, web sites, social media, and anything written in natural language.  Watson dealt with the information much as a human would, analyzing multiple options at the same time, considering the probability that each option was the answer to the problem it was dealing with, and then selecting the option with the highest probability of being right.
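Watson's actual pipeline is far more elaborate, but the decision step described here - generate candidate answers, attach a confidence to each, and answer only when the best one is confident enough - can be sketched in a few lines.  The candidates, confidences and threshold below are invented purely for illustration:

```python
# Sketch of the "weigh many hypotheses, pick the most probable" step.
# In a real system each confidence would come from many scoring models;
# here the numbers are simply assumed.

def pick_answer(candidates, threshold=0.5):
    """Return the highest-confidence candidate, or None if no candidate
    is confident enough to risk an answer (like declining to buzz in)."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return (best, confidence) if confidence >= threshold else None

# Hypothetical candidate answers for a clue, with made-up confidences.
candidates = {
    "Isaac Newton": 0.87,
    "Gottfried Leibniz": 0.09,
    "Robert Hooke": 0.04,
}

print(pick_answer(candidates))   # -> ('Isaac Newton', 0.87)
```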
That weighing of options is pretty much how human experts make decisions in endeavors like medical diagnosis, financial advice, customer service or strategy formulation.  Human experts will typically consider a few of the most likely options based on the knowledge they have and make a decision.  They will generally be right most of the time, but may have trouble when faced with a new or infrequently occurring problem beyond the scope of the most likely options.  Also, we all have biases based on our personal experiences that make it hard to consider options beyond the scope of our intuition.
A cognitive system, on the other hand, can analyze many thousands of options at the same time, including the large number of infrequently occurring ones as well as ones that the expert has never seen before.  It evaluates the probability of each option being the answer to the problem, and then comes up with the most likely options, that is, those with the highest probabilities.  Moreover, the cognitive system has access to huge amounts of information of all kinds, both structured and unstructured, including not only books and documents, but also speech, pictures, videos and so on.  These cognitive systems are truly beginning to augment our human cognitive capabilities much as earlier machines have augmented our physical ones.  “Cognitive systems will extract insights from data sources from which we acquire almost no insight today, such as population-wide health care records, or from new sources of information, such as sensors monitoring pollution in delicate marine environments,” write Kelly and Hamm.  “Such systems will still sometimes be programmed by people using if A, then B logic, but programmers won’t have to anticipate every procedure and every rule that will be required.  Instead, computers will be equipped with interpretive capabilities that will make it possible for them to learn from the data and evolve over time as they gain new knowledge or as the demands on them change.”
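One way to read "learn from the data and evolve over time" is incremental updating: rather than following fixed if-A-then-B rules, the system revises its estimates every time a new observation arrives.  A purely illustrative sketch, with a hypothetical category name and a made-up stream of outcomes:

```python
# Minimal sketch of behavior that evolves with data rather than rules:
# a running estimate that is revised with every new observation.

class EvolvingEstimator:
    """Keeps a per-category success rate and updates it incrementally."""

    def __init__(self):
        self.successes = {}
        self.trials = {}

    def observe(self, category, outcome):
        """Record one new observation (outcome is True or False)."""
        self.trials[category] = self.trials.get(category, 0) + 1
        self.successes[category] = self.successes.get(category, 0) + int(outcome)

    def estimate(self, category):
        """Current probability estimate, refined as data accumulates."""
        if self.trials.get(category, 0) == 0:
            return 0.5                      # no evidence yet: stay neutral
        return self.successes[category] / self.trials[category]

model = EvolvingEstimator()
for outcome in [True, True, False, True]:   # made-up stream of outcomes
    model.observe("treatment_a", outcome)

print(model.estimate("treatment_a"))        # -> 0.75, and it keeps shifting
```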
“The goal isn’t to replicate human brains, though.  This isn’t about replacing human thinking with machine thinking.  Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results - each bringing their own superior skills to the partnership.  The machines will be more rational and analytic - and, of course, possess encyclopedic memories and tremendous computational abilities.  People will provide judgment, intuition, empathy, a moral compass and human creativity.”
In the end, this is today’s version of the quest that drove our ancestors to start developing stone tools a few million years ago, and that inspired the inventors of the many machines developed over the past few hundred years.  We just want our smart machines to make us smarter.
