Active Logic, Metacognitive Computation, and Mind

Toward Human-Level Cognitive Adequacy
Our long-range aim is to design and implement common sense in a computer.


If you would like to learn more about Active Logic, we suggest you start with one of our primers.

General Aims

We propose to design and implement common sense in a computer.

Common sense in a computer is a bit hard to define, but the idea we are aiming at is comparable to human-level common sense (often understood as distinct from expert or special cleverness). For instance, solving the mutilated checkerboard problem takes a special clever insight, and thus is not what we have in mind. But note that much of the AI community would not draw the definitional lines as we have; for that reason, we sometimes use the expression “cognitive adequacy” to refer to our conception. This is intended to suggest a kind of general-purpose reasoning ability that will serve the agent to “get along” (learning as it goes) in a wide and unpredicted range of environments. Consequently, one hallmark of common sense is the ability to recognize, and initiate appropriate responses to, novelty, error, and confusion. Examples of such responses include learning from mistakes, aligning action with reasoning and vice versa, and seeking (and taking) advice.

A closely related hallmark is the ability to reason about anything whatever (that is brought to one’s attention). This does not mean being clever about it, or knowing much about it, or being able to draw significant conclusions; it can mean as little as realizing that the topic is not understood, asking for more information, and learning appropriately from whatever advice is given. That might seem like very little, if our aim were to have clever solutions to tricky problems. But consider this: virtually no AI programs exhibit even that “little” amount of elementary common sense; they are not able to know when they are confused, let alone seek—and use—clarifying data. On the other hand, cleverness—in highly limited domains and for tightly specified representations—has been built into many programs, a kind of “idiot savantry” that fails utterly when outside those narrow strictures.

One large piece of what is needed for cognitive adequacy, then, is what we call “perturbation tolerance:” the ability to keep going adequately when subjected to unanticipated changes. This includes changes to the knowledge base (KB); e.g., the changes might introduce inconsistencies, or make a goal impossible or ambiguous. Worse, the knowledge representation (KR) system might change (new terms, new meanings for old terms, different notational conventions, etc.), especially if other agents are involved; and of course there are typos (missing parentheses and the like) that appear to defy any prearranged methodology. And there are changes to physical sensors and effectors, and to how things in the world work.

Then what is it to “keep going adequately” in the face of such changes? Among other things, this will require (i) never “hanging” or “breaking;” (ii) recognizing when there is a difficulty to be addressed; (iii) making an assessment of options to deal with the difficulty; (iv) choosing and putting an option into action. Such a suite of abilities will require keeping track of one’s own history of activity, including one’s own past reasoning. Such an agent will then, in Nilsson’s phrase, have a lifetime of its own, and keeping track of its own processes and history will allow it to look back at what it is doing and use that knowledge to guide its upcoming behavior.
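
As a toy illustration of abilities (i)–(iv) and of such a history of one's own past reasoning, consider the sketch below. The representation (step-stamped strings in a flat list) is our own simplification for brevity, not the active-logic formalism itself.

```python
# A toy, step-stamped reasoning trace (a simplified stand-in, not the
# active-logic formalism). The agent records when each belief was derived,
# and can inspect its own trace to notice a direct contradiction as a
# difficulty to address, rather than hanging or breaking.

history = []                          # (step, formula) pairs: the agent's own trace

def add_belief(step: int, formula: str) -> None:
    history.append((step, formula))

def note_difficulties() -> list:
    """Ability (ii): recognize difficulties by inspecting one's own history."""
    held = {f for _, f in history}
    return [f for f in held if f.startswith("not ") and f[4:] in held]

add_belief(1, "door_is_open")
add_belief(2, "not door_is_open")     # a later, conflicting observation
print(note_difficulties())            # -> ['not door_is_open']
# Abilities (iii)-(iv) would then weigh options (re-sense, ask for advice,
# withdraw one of the conflicting beliefs) and put the chosen one into action.
```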

People tend to do this well. Is this ability an evolutionary hodgepodge, a holistic amalgam of countless parts with little or no intelligible structure? Or might there be some few key modular features that provide this “adequacy?” We think there is strong evidence for the latter, and we have a specific hypothesis about what it is and how to build it into a computer. In a nutshell, we propose what we call the metacognitive loop (MCL) as the essential distinguishing feature of commonsense reasoning. And we claim that the state of the art is very nearly where it needs to be to allow this to be designed and implemented.

Our postulated “metacognitive loop”—in both human and machine commonsense reasoning—allows humans (and should allow machines) to function effectively in novel situations, by noting errors and adopting strategies for dealing with them. The loop has three main steps: (i) monitor events for a possible anomaly, (ii) assess its type and possible strategies for dealing with it, and (iii) guide one or more strategies into place while continuing to monitor (looping back to step i) for new anomalies that may arise, either as part of the strategy underway or otherwise.
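
The following is a minimal sketch, in Python, of how such a note-assess-guide loop might be organized. The class names, the expectation-vs-observation monitor, and the strategy table are illustrative assumptions, not a specification of any existing MCL implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Anomaly:
    kind: str                 # e.g. "expectation_violation", "unknown_word"
    details: dict = field(default_factory=dict)


@dataclass
class Agent:
    expectations: dict        # what the agent currently believes should hold
    observations: dict        # what it actually observes
    strategies: dict          # anomaly kind -> repair function
    history: list = field(default_factory=list)

    def monitor(self) -> Optional[Anomaly]:
        """(i) Note: report the first expectation that observation violates."""
        for key, expected in self.expectations.items():
            if self.observations.get(key) != expected:
                return Anomaly("expectation_violation", {"key": key})
        return None

    def assess(self, anomaly: Anomaly) -> Optional[Callable]:
        """(ii) Assess: look up a repair strategy for this kind of anomaly."""
        return self.strategies.get(anomaly.kind)

    def guide(self, strategy: Callable, anomaly: Anomaly) -> None:
        """(iii) Guide: enact the strategy, logging it in the agent's history."""
        self.history.append((anomaly.kind, strategy.__name__))
        strategy(self, anomaly)

    def mcl_step(self) -> None:
        """One pass of the loop; intended to run continually alongside reasoning."""
        anomaly = self.monitor()
        if anomaly is not None:
            strategy = self.assess(anomaly)
            if strategy is not None:
                self.guide(strategy, anomaly)
```

A strategy here is just a function applied to the agent and the anomaly, e.g. one that re-reads a sensor, retracts a belief, or asks a human for clarification; the point is only the shape of the loop, not any particular repair.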

People clearly use something very like MCL to keep an even keel in the face of a confusing, shifting world. At the level of individual experience this is a no-brainer: we often notice things amiss and take appropriate action. There is also empirical evidence for this in studies of human learning strategies, where, for instance, an individual tasked with memorizing a list of foreign-language word-meaning pairs will make judgments of relative difficulty along the way and use them in framing study strategies.

We suspect that MCL is profoundly involved in many human behaviors well beyond formal learning situations, and that there is a specialized MCL module that carries out such activity on a nearly continual basis, without which we would be everyday incompetents—although perhaps idiot savants—as opposed to everyday commonsense reasoners. However, while we are interested in gathering additional information about MCL behaviors in humans, our main focus is on taking this as motivation for building a similar capability into computers.

A major application domain that we are using in this work is natural language human-computer dialog. Errors of miscommunication are prevalent in dialog, especially human-computer dialog. Our work to date indicates that something analogous to our loop is not only active in human dialog but can also be a powerful tool for automated systems. One example is the learning of new words: if a word is used that the system does not know, this can be processed as an anomaly, which in turn can trigger various strategies, including asking for help (“What does that word mean?”). Other examples include disambiguation, and appropriate retraction of implicatures and presuppositions. The long-range aim is a computer system that, via dialog with humans, learns a much larger vocabulary in the process of noting and correcting its misunderstandings, akin to a foreigner learning a new language.
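
As an illustration of that word-learning case, here is a small standalone sketch; the tiny lexicon, function names, and prompt are hypothetical stand-ins, not the project's actual dialog system.

```python
# An unknown word is treated as an anomaly whose repair strategy is to
# seek advice ("What does that word mean?") and learn from the answer.

lexicon = {"turn": "rotate", "left": "to port"}      # hypothetical starting vocabulary

def monitor_utterance(utterance: str):
    """Note an anomaly: return the first word the system does not know, if any."""
    unknown = [w for w in utterance.lower().split() if w not in lexicon]
    return unknown[0] if unknown else None

def ask_for_help(word: str) -> None:
    """Strategy: ask the human interlocutor, then extend the lexicon."""
    meaning = input(f"What does '{word}' mean? ")
    lexicon[word] = meaning

word = monitor_utterance("turn starboard")
if word is not None:
    ask_for_help(word)     # the system asks, and its vocabulary grows
```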