
Making AI intelligent

Author Kartik Hosanagar, a Wharton professor and tech entrepreneur, shows in his new book, A Human's Guide to Machine Intelligence, how algorithms and Artificial Intelligence have become inescapable parts of our lives, and how we can harness them to our advantage. Excerpts:

On May 28, 1783, Hungarian inventor Wolfgang von Kempelen wrote a letter to Benjamin Franklin, inviting him to the Paris hotel where Kempelen was staying to view "a very interesting machine" that had "mystified all the members of the Académie des sciences." Franklin, serving at the time as U.S. Ambassador to France, accepted the invitation and soon declared Kempelen a genius. The machine, called the Mechanical Turk, was an automated chess-playing robot bedecked in fur-lined robes and a turban. It was shown throughout Europe for the next eight decades, playing against such eminences as Napoleon Bonaparte, Edgar Allan Poe, and Franklin himself.


Usually, it won – though it was defeated by François-André Danican Philidor, the European chess champion of the era, who nonetheless admitted it was the most tiring game he had ever played. A young Charles Babbage, whom many credit with creating the first conceptual design for a programmable computer, was so enthralled that, years later, he harked back to the Mechanical Turk in arguing that chess was among the most compelling applications for his Analytical Engine.

By then, Kempelen's machine had a number of detractors, and indeed, it turned out that the device had never been an automaton at all, but rather "an elaborate hoax involving a hidden human player, an articulated mechanical arm, and a series of magnetic linkages." Still, scientists and engineers had believed in it, perhaps because the dream of artificial intelligence is itself so enchanting.

It took another 150 years or so before that fantasy began to be approached with the tools of science rather than of trickery.

In 1950, the true scientific foundations for AI were laid when the English mathematician Alan Turing published a paper posing a simple question: "Can machines think?" Turing imagined a scenario in which a computer might chat with humans and trick them into believing that it, too, was human. This hypothesized imitation game became known as the Turing test, and established an ambitious milestone by which to measure the intelligence of machines thereafter.



Soon after, the American computer scientist John McCarthy proposed a workshop that would engage with just this quest, seeking to discover how to make machines solve the kinds of problems that only humans were assumed to be capable of solving. And yet McCarthy struggled to raise funds. Turing's question may have been provocative, but how one might actually address it was a prospect that many people could not get their heads around. A representative from The Rockefeller Foundation, to which McCarthy appealed for money, observed, "The general feeling here is that this new field of mathematical models for thought . . . is still difficult to grasp very clearly." Nevertheless, the foundation reluctantly gave him $7,500 to organize the event.

At the time, researchers in this area had been focusing on narrow fields with names such as automata studies, cybernetics, and information processing. In McCarthy's view, none of these research areas encompassed the significance of the revolution ahead, and none had a name that would help outsiders understand the enormity of what was being studied. In one attempt to address this, he urged the mathematician Claude Shannon to change the title of a book he'd written, which was to be called Automata Studies. McCarthy deemed this too conservative; Shannon rejected the suggestion. But McCarthy got his chance when naming his conference, which he called the Dartmouth Summer Research Project on Artificial Intelligence, thus, historians believe, coining the term. "Calling it AI made it extremely ambitious, and it inspired many people to enter the field, which has been responsible for a lot of the progress. At the same time, it also created these highly inflated expectations," Pedro Domingos, a computer scientist at the University of Washington, Seattle, noted during our recent conversation.

The workshop eventually took place in the summer of 1956 and drew about twenty experts. From it emerged the scaffolding on which AI today is built: the recognition that human-level intelligence is the gold standard to aim for in machines. "I think the main thing [the workshop established] was the concept of artificial intelligence as a branch of science. Just this inspired many people to pursue AI goals in their own ways," McCarthy later remarked. The conference confirmed that Turing's questions, however enormous, were the ones that best framed this new field: What is thinking, what are machines, where do the two meet, how, and to what end?

By then Alan Turing had died tragically at the age of forty-two, of cyanide poisoning – whether as the result of an accident or suicide remains unclear. He never attended any large gathering of AI scientists. However, the workshop was attended by another pioneer, Herb Simon, who in 1978 would receive the Nobel Prize in Economics. His most important contribution to economics was pointing out the deficiency in the dominant economic model of decision making at the time – and still a commonly used model in microeconomics – which held that people make perfectly rational decisions to maximize their utility. Instead he suggested that, because of practical constraints such as limited time and the cognitive burden of decision making, people often seek a satisfactory solution rather than the perfectly optimal one. Simon's notion of bounded rationality was a cornerstone for the field of behavioral economics. He also won the Turing Award, which is often described as the Nobel of computer science, for his contributions to the founding of AI.

A few months prior to attending the Dartmouth workshop, Simon told one of his classes that "over the Christmas holiday, Al Newell and I invented a thinking machine." Simon and Newell had built the first symbolic software program, which they called the Logic Theorist. The software proved the theorems presented in the seminal three-volume Principia Mathematica, by Bertrand Russell and Alfred North Whitehead, and even "[proved] one theorem more elegantly than had Russell and Whitehead," according to one historian of science. In response to this tremendous feat of engineering, Russell himself observed, "I [only] wish Whitehead and I had known of this possibility before we both wasted ten years doing it by hand."

By 1959 Newell and Simon had built the first general problem solver that could tackle a broad class of problems expressed as well-formed formulae. For many observers, this software program demonstrated that artificial intelligence could be created by humans, cementing Simon and colleagues' place as pioneers in the AI revolution.

By the mid-1960s, the AI community's ambitions had grown. "Machines will be capable, within twenty years, of doing any work a man can do," Simon declared. By 1967 Mac Hack VI, developed by engineers at MIT, became the first computer to enter a human chess tournament and win a game. At the time, the AI community used a number of different approaches to build intelligent systems. Some, such as Simon, relied on rules of logic. Others used statistical techniques to infer probabilities of events based on data. (For a familiar contemporary example, if an email contains words such as "free money" and "get out of debt," then the probability that the email is spam rises.) And still others used neural networks, a technique inspired by how a network of neurons in the human brain fires to create new knowledge or validate existing knowledge. However, this approach lost favor in the community in 1969 when Marvin Minsky, an AI pioneer and one of the attendees of the original Dartmouth conference, along with his colleague Seymour Papert, published a book, Perceptrons, outlining the limitations of neural networks. Their criticisms soon became commonly held beliefs, and most people in the research community dropped neural networks in favor of other approaches. In hindsight, the understanding of how best to build neural networks was limited, and the computing resources that were available back then were insufficient for such sophisticated techniques. "Your brain is the best supercomputer on earth, and people were trying to do this through the computer that they had back then. They ran a little bit ahead of themselves," says Domingos.
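To make the neural-network idea concrete, here is a minimal sketch in Python – an illustration of the general technique, not code from the book – of a single perceptron of the kind Minsky and Papert analysed: one artificial neuron that learns a rule by nudging its weights whenever it misclassifies an example. The function name train_perceptron, the logical-AND data, and the learning rate are assumptions chosen purely for illustration.

```python
# A minimal sketch (illustrative, not from the book) of a single perceptron:
# it adjusts its weights only when it makes a mistake.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of (x1, x2) pairs; labels: 0 or 1 for each pair."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - prediction      # -1, 0, or +1
            w1 += lr * error * x1            # weights change only on errors
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# The logical AND function is linearly separable, so the perceptron learns it.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w1, w2, bias = train_perceptron(samples, labels)
print([1 if w1 * x1 + w2 * x2 + bias > 0 else 0 for x1, x2 in samples])  # [0, 0, 0, 1]
```

A single neuron like this can learn only linearly separable rules such as AND; it cannot learn XOR, which is exactly the kind of limitation Minsky and Papert highlighted. Multi-layer networks overcome it, but the training methods and computing power needed to make them work arrived only decades later.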

During the following decade considerable funding went into the field, but like the 1970s itself, this was a period of overpromising and under-delivering. By the 1980s financial backers from both government and industry grew frustrated at not seeing grand applications of AI come to fruition. So an "AI winter" set in. Funding for AI plummeted and was directed instead to other areas of computer science, such as networking, databases, and information retrieval. Media coverage of AI decreased as well. Creating machine intelligence was, it turned out, a harder problem than its advocates anticipated. Humans sometimes take their own intelligence for granted, forgetting that evolution spent hundreds of millions of years refining it – and that process is far from complete. The field had to set more realistic near-term goals.

(Excerpted with permission from A Human's Guide to Machine Intelligence, written by Kartik Hosanagar, published by Penguin Random House. The excerpt here is a part of a chapter titled, 'Algorithms Become Intelligent: A brief history of AI'.)
