Professor challenges 'smart' machine thinking
Asim Roy, an information systems professor at the W. P. Carey School of Business, was on sabbatical at Stanford University in 1991 when several years of thinking about the operation of the brain and artificial intelligence systems inspired him to act.
In a message to the leading Connectionist scholars, he threw down the gauntlet, challenging the prevailing school of thought and thereby the very foundations of the technologies behind “smart” machines and artificial intelligence.
“There was a Connectionist mailing list (online) and I just came flat-out and said, ‘Hey, all of your theories of brain-like learning don’t make sense,’ ” Roy says.
Roy’s colleagues around the world did not take kindly to his blunt, confrontational challenge. Roy says some researchers reacted badly – and some dissenters even walked out on his presentations.
“It’s hard to upset a science,” he says.
The conflict broke out at the intersection of a number of disciplines: cognition and learning, neuroscience, computer science, robotics, artificial intelligence and philosophy.
The prevailing wisdom in artificial intelligence is that humans learn by storing a system of rules. Thus, if one were learning to hit a tennis ball, one would be told to grip the racket at a certain place, in a certain way, with a certain pressure; to move one’s shoulder, arm and wrist in just the right way; to look at the ball in a specific way and place; and so on, given one instance and one set of conditions.
If the ball were to bounce just the slightest bit faster, slower, higher or lower, countless new rules would need to be called forth and applied from one’s memory. Combining all of the possible permutations of a bouncing ball, codifying every possible rule, would become an endless task. Although such rules can be very effective in limited cases, it would take enormous computing machines to store every rule needed to perform a certain task.
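As a rough illustration of why the rules multiply so quickly (the ball properties and categories below are invented for this sketch, not taken from Roy's work), consider storing one rule per combination of just four coarsely described conditions:

```python
from itertools import product

# Hypothetical, coarse discretization of only four ball properties;
# a real rule-based system would need far finer distinctions.
speeds = ["slow", "medium", "fast"]
heights = ["low", "waist", "high"]
spins = ["flat", "topspin", "slice"]
depths = ["short", "deep", "wide"]

# One stored rule per combination of conditions.
rules = {
    combo: "swing adjusted for a %s, %s, %s, %s ball" % combo
    for combo in product(speeds, heights, spins, depths)
}

print(len(rules))  # 3 * 3 * 3 * 3 = 81 rules for only four coarse properties
```

Three categories per property already yields 81 rules; describing a real bouncing ball with realistic precision pushes the table toward the "endless task" the article describes.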
Both Roy and the Connectionists reject this rule-based view. What they have sought is an understanding of how best to copy what the human brain seems to do: connect experiences and understandings and learn from them.
Roy says that if you were to teach someone every possible way to hit a tennis ball without ever letting them swing a racket and connect with a ball, that person still wouldn’t be able to play tennis, because human learning comes from data generated by actually practicing a task.
Connectionists believe that this learning comes from the most basic of building blocks in the neural network: neurons. Rather than storing an incomprehensible number of rules, the brain stores tiny little bits of data and, depending on how they are connected, crafts solutions.
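A minimal sketch of that idea (a textbook perceptron, not a model from Roy's paper or any specific Connectionist system): a single artificial "neuron" learns the logical AND of two inputs by repeatedly adjusting its connection weights in response to examples, rather than being handed an explicit rule.

```python
# Training examples: inputs and the target output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights, one per input
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(20):                # repeated "practice" on the same data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # Local updates: each weight changes based only on its own input
        # and the error signal, with no explicit rule stored anywhere.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the learned weights reproduce AND on all four inputs.
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# → [0, 0, 0, 1]
```

The "knowledge" here lives entirely in the final weight values, which is the Connectionist picture in miniature: behavior emerges from connection strengths shaped by data.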
The human brain may not be the best model upon which to pattern learning in machines, but Roy notes that it still outperforms today’s computers – and if researchers are to craft a human-like machine, it should be patterned after human faculties.
Roy’s control theory holds that while there are connections to be made between neurons (or, in the case of a computer network, neural nodes), there is also a higher-level controller organizing the system.
It’s been nearly 10 years since he first began work on an academic paper defending his theory. During this time, he was ostracized for his work, and after a half-dozen rejections, revisions and resubmissions, the journal IEEE Transactions on Systems, Man and Cybernetics (Part A: Systems and Humans) is set to publish his paper, titled “Connectionism, Controllers and a Brain Theory,” in early 2009.
In this paper, Roy postulates that there are parts of the brain that control other parts. To the dismay of Connectionists, he proves his theory partly by showing that Connectionist brain-like learning systems actually use higher-level controllers to guide the learning in their systems, contrary to the widespread belief that they use only local controllers at the level of the neurons.
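To paraphrase Roy's observation with an invented example (this sketch is mine, not code from his paper): the "local" weight updates in a typical learner sit inside an outer training loop that behaves like a controller. It chooses the step size, monitors a global error measure, and decides when learning stops.

```python
# Training data for a simple linear target (t = x1 + 2*x2), invented
# purely for this illustration.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 2.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 3.0)]
w, b = [0.0, 0.0], 0.0

lr = 0.1                          # controller decision: step size
errors = []
for epoch in range(50):           # controller decision: training duration
    total_error = 0.0             # the controller watches a *global* quantity
    for (x1, x2), t in data:
        y = w[0] * x1 + w[1] * x2 + b     # local computation at the "neuron"
        err = t - y
        w[0] += lr * err * x1             # local, neuron-level weight updates
        w[1] += lr * err * x2
        b += lr * err
        total_error += err * err
    errors.append(total_error)
    if total_error < 1e-6:        # controller decision: stopping criterion
        break
    lr *= 0.99                    # controller decision: step-size schedule

print(errors[-1] < errors[0])  # the outer loop steadily drove error down
```

Roy's structural analysis makes essentially this point about published Connectionist algorithms: the "autonomous" neuron-level updates only succeed because an external, higher-level process is steering them.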
“A new theory is on the table, and it practically invalidates Connectionism,” says Roy of the paper’s acceptance.
Although there still are many skeptics, a number of scientists have lined up behind Roy since his paper was accepted for publication.
“Professor Roy’s paper goes to the core of the inherent limitations of commonly accepted theories of brain function and organization, and points the way to a new hybrid framework that combines the insights of existing theories to overcome their shortcomings,” says Christian Lebiere of Carnegie Mellon University.
Lebiere’s book, “The Atomic Components of Thought” (co-written with John Anderson, also of Carnegie Mellon), presents a unified theory of a cognitive architecture, one that competes with connectionism as a theory of the brain.
What form Roy’s controller takes is still a point of speculation; at the theory’s early stage, the controller is little more than a generic, guiding “ghost in the machine.” Roy’s work is not based on brain imaging scans or laboratory dissections but is more theoretical and logic-based.
“What I did was structurally analyze Connectionist algorithms to prove that they actually use control theoretic notions, even though they deny it,” Roy says. “Plus, I added some neuroscience evidence.”
In the forthcoming article, Roy uses a series of basic analogies that compare brain functions (and well-known algorithms in his field) to more everyday interactions to explain what he sees as inherent flaws in Connectionist thinking.
The Connectionists would counter that, in such an interaction, the human is not controlling the device but rather is part of a mutual give-and-take dynamic.
While Roy is not a neuroscientist, he highlights findings from other research that support his control theory. Roy says his theory does not suggest that there is a single executive controller in the brain; instead, it suggests that there very well could be “multiple distributed controllers” controlling various subsystems or modules of the brain.
This significant shift in thinking about how the brain works and learns eventually could have an impact on the design of industrial robots. Roy’s new control theory will not revolutionize the thinking behind artificial intelligence overnight. Instead, he says it may be decades before the new philosophy effectively changes the way computers operate.