Toward Human-Level Cognitive Adequacy
Our long-range aim is to design and implement common sense in a computer.
If you would like to learn more about Active Logic, we suggest
you start with one of our primers.
It is easy to produce examples of applications that would benefit from the addition of a reasoning component: from a system that could appropriately adjust conditions on a manufacturing line to maintain quality, to one that could help travelers make sensible vacation plans. What is desired is a combination of the benefits of a human expert, able to apply knowledge and experience to novel situations, and of a computer, able to do this more quickly, for more users, and at less expense than the expert. However, the real world is dynamic, complex, and not completely knowable. Its state changes constantly, sometimes drastically; complete modeling of even its narrowest aspects is often computationally intractable; and new things are always being discovered. Thus any model of the world, if it is to remain accurate [1], must itself be capable of dynamic responses to the world. Further, because no model is complete, any use to which the model is put (e.g. to serve as the basis for predictions, or for the derivation of facts not currently represented in the model) will produce uncertain results. Changes in, and discoveries about, the world will require not just revision of the model; they may require reconsideration of any predictions, conclusions, or generalizations in which the revised beliefs played a role, and may even require alterations in the methods or rules by which those derivations were generated.
Responses to these issues fall into roughly two categories: those that favor work with simplified models (e.g. microworlds, or formal domains like mathematics), and those that favor work with simplified reasoners (e.g. heuristics, or subsumption-architecture robotics). Each approach has advantages: the former allows the application of rigorous, certain, and theoretically justifiable methods; the latter can perform in real (or realistic) environments. Both approaches will no doubt always be required, and preferences for one over the other are generally relative to the problem domain, but the division roughly corresponds to that between formal and implementational research into machine reasoning. Even this division is not completely firm: there are implementational studies based on (formal or informal) theories (e.g. CYC, SOAR, OSCAR) [Lenat and Guha, 1990; Lenat et al., 1990], and there are theories framed with attention toward implementation (e.g. predicate circumscription). Formal/theoretical work tends to focus on very narrow problems (and even on very special cases of very narrow problems) while trying to get them “right” in a very strict sense. In contrast, implementational work tends to aim at fairly broad ranges of behavior, with the focus less on getting it “right” than on getting it to “work” within some acceptable range of performance. It is sometimes urged that this gap is intrinsic to the topic: intelligence is not a unitary thing for which there will be a single theory, but rather a “society” of sub-intelligences (some algorithmic and strictly rule-governed, others heuristic and inexact, still others based on reactivity and pattern-recognition) whose overall behavior cannot be reduced to useful characterizing and predictive principles.
Active Logic is a formal architecture that is more closely tied to implementational constraints than is usual for formalisms, and it has been used to solve a number of commonsense problems in a unified manner [Elgot-Drapkin, 1988; Elgot-Drapkin et al., 1988; Elgot-Drapkin and Perlis, 1990; Bhatia et al., 2001]. In particular, Active Logic seeks to apply theoretically justifiable, principled (logic-based) methods of reasoning to dynamic and uncertain (and to this extent real-world) contexts. Instead of aiming at optimal solutions to isolated, well-specified, and temporally narrow problems, Active Logic was developed to permit satisficing solutions to under-specified and temporally extended problems, much closer to real-world needs.
In order to bridge this gap, however, we need to be aware of the challenges that face a logic-based formalism if it is to be applied to real-world contexts; we need to know, that is, how the problem of uncertainty will express itself. Two aspects of the problem of uncertainty in logic are worth mentioning in particular: the consistency check problem, and the swamping problem.
One obvious way to deal with uncertainty and incomplete knowledge is to make assumptions: in the absence of opposing evidence, assume such-and-such; e.g. if it is a bird, assume it can fly. This is sometimes called default reasoning. However, the situation is not so straightforward, because (assuming that we want to maintain a consistent knowledge base) we will need to check whether the default assumption is in fact consistent with our current knowledge state. This means not only that its negation must not appear in our belief set, but, and here's the rub, that its negation must not be logically entailed by those beliefs. Yet there is no general procedure for determining whether a given belief is consistent with a given set of beliefs; more generally, there is no procedure to determine whether a given set of beliefs is itself consistent. This is the consistency check problem.

This problem has two consequences worth stating more explicitly. (1) For any sufficiently complex knowledge base that was not produced by logical rules from a database known to be consistent, and/or to which non-entailed facts are to be added, it will not be possible to know whether it is consistent, nor to use principled methods to maintain consistency. Contradictions are in this sense practically inevitable. (2) It is not possible to know, for any given proposition, whether that proposition is derivable from current knowledge. Yet traditional approaches to commonsense reasoning operate on a broad sense of “know” or “believe,” such that an agent is said to know what is currently in its belief set as well as anything derivable from that set. Since it is not possible to predict (or to know) what an agent does or does not “know” on the basis of its current beliefs, one must wait until processing is complete (until one has produced the set of all derivable theorems) to discover what the agent knows. Not only is this not cognitively plausible, but under conditions of uncertainty, in which formulas are periodically added to and removed from the knowledge base, there can never be a time when processing is complete.
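To make the naive version of this concrete, here is a minimal sketch in Python (the names Default and apply_defaults, and the encoding of literals as strings, are our own illustrative inventions, not part of any existing system). It fires a default only when the explicit negation of its conclusion is absent from the belief set; the stronger check, that the negation is not entailed, is exactly what the consistency check problem shows to be unavailable in general.

    # A minimal sketch of default reasoning over propositional literals.
    # The consistency check here is deliberately shallow: it looks only
    # for an explicit negation in the belief set, because checking full
    # logical entailment is, in general, undecidable -- the consistency
    # check problem described above.

    from dataclasses import dataclass

    def negate(literal: str) -> str:
        """Return the negation of a literal, e.g. p <-> ~p."""
        return literal[1:] if literal.startswith("~") else "~" + literal

    @dataclass
    class Default:
        prerequisite: str   # e.g. "bird(tweety)"
        conclusion: str     # e.g. "flies(tweety)"

    def apply_defaults(beliefs: set[str], defaults: list[Default]) -> set[str]:
        """Add each default conclusion whose prerequisite holds and whose
        negation is not explicitly believed."""
        new = set(beliefs)
        for d in defaults:
            if d.prerequisite in new and negate(d.conclusion) not in new:
                new.add(d.conclusion)
        return new

    # Tweety flies by default; Opus does not, since ~flies(opus) is known.
    beliefs = {"bird(tweety)", "bird(opus)", "~flies(opus)"}
    defaults = [Default("bird(tweety)", "flies(tweety)"),
                Default("bird(opus)", "flies(opus)")]
    print(apply_defaults(beliefs, defaults))

Note that the sketch stays silent on the hard case: if ~flies(tweety) were derivable from the beliefs without being explicitly listed among them, the default would fire anyway, and the knowledge base would become inconsistent.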
This brings us to the swamping problem. In addition to the obvious practical reasons for wanting to maintain consistency (if we query our knowledge base, we would generally prefer to get an answer, and not an answer and its negation), there is another, more theoretical reason: from a contradiction, everything follows. More technically, given a contradiction, all well-formed formulas (wffs) are entailed as theorems. This is the swamping problem, for it means that a knowledge base containing a contradiction will eventually contain all possible propositions, which would seem to hamper its usefulness as a knowledge base, not to mention occupy a good deal of memory. It is worth noting, however, that this is a practical problem only insofar as our interest is imagined to lie exclusively in the end-state of the reasoning system: all implementations of standard logic draw conclusions in steps, and it may be some time before any given knowledge base is effectively swamped; time enough to detect and address the problem.
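The classical derivation behind this claim is short. Given both halves of a contradiction, P and ¬P, an arbitrary formula Q follows:

    1. P          (one half of the contradiction)
    2. P ∨ Q      (disjunction introduction, from 1)
    3. ¬P         (the other half of the contradiction)
    4. Q          (disjunctive syllogism, from 2 and 3)

Since Q was arbitrary, every wff of the language is a theorem of the contradictory knowledge base.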
What is wanted, then, for real-world applications, is a model of logical reasoning that can: make assumptions in the face of incomplete knowledge, and revise them (and anything derived from them) as new information arrives; keep track of what it has actually concluded so far, rather than of an uncomputable set of all entailments; and tolerate contradictions when they arise, detecting and recovering from them in the course of reasoning rather than being swamped by them. Active Logic is designed to meet these desiderata.
Motivated in part by the thought that human reasoning takes place step-wise, in time, and that this feature supports human mental flexibility, Active Logic works by combining inference rules with a constantly evolving measure of time (a “Now”) that can itself be referenced in those rules. As an example, from Now(t), which says that the time is now t, one infers Now(t+1), for the fact of an inference implies that time (at least one “time-step”) has passed. All the inference rules in Active Logic work temporally in this way: at each time-step all possible one-step inferences are made, and only propositions derived at time t are available for inferences at time t+1. Special persistence rules ensure that every theorem α present at time t implies itself at time t+1; likewise, special rules ensure that if the knowledge base contains both a theorem α and its negation ¬α, these theorems and their consequences are “distrusted,” so that they are neither carried forward themselves nor used in further inference. [2]
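As a rough illustration of these mechanics (and only that: this toy Python sketch is our own, not an actual Active Logic implementation), consider a propositional language of literals and simple implications. One simplification to note: where Active Logic retains distrusted formulas so that they can be reasoned about (see note [2]), the sketch simply drops them.

    # A toy sketch of one Active Logic time-step. Literals are strings
    # ("p", "~p"); implications are tuples ("->", p, q); the clock is a
    # ("Now", t) formula. All of this encoding is our own invention.

    def negate(p):
        return p[1:] if p.startswith("~") else "~" + p

    def step(t, kb):
        """Map the beliefs held at time t to the beliefs held at t+1."""
        nxt = set()
        # Distrust: a literal directly contradicted at time t is neither
        # inherited nor used in inference at t+1. (Unlike the real system,
        # this sketch drops distrusted formulas entirely.)
        trusted = {f for f in kb
                   if not (isinstance(f, str) and negate(f) in kb)}
        # Persistence: every trusted formula implies itself at t+1.
        nxt |= trusted
        # One-step inference (modus ponens) over trusted formulas only;
        # its conclusions become available at t+1, not before.
        for f in trusted:
            if isinstance(f, tuple) and f[0] == "->" and f[1] in trusted:
                nxt.add(f[2])
        # The clock: from Now(t), infer Now(t+1).
        nxt.add(("Now", t + 1))
        nxt.discard(("Now", t))
        return nxt

    kb = {("Now", 0), "p", ("->", "p", "q"), "r", "~r"}
    print(step(0, kb))   # q appears; r and ~r are distrusted and dropped

Running the sketch from {Now(0), p, p→q, r, ¬r} yields q by modus ponens at the next step, while the contradictory pair r, ¬r contributes nothing further.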
From these features come others, most notably: because only finitely many one-step inferences are drawn at each time-step, what the agent knows at any given time is a well-defined, computable set, so that knowing no longer depends on an unbounded closure under entailment; and because directly contradictory formulas are distrusted rather than reasoned from, a contradiction no longer swamps the knowledge base, but becomes something the agent can notice and repair in the course of its ongoing reasoning.
[1] Or become accurate: for any model will contain mistakes.
[2] However, they are maintained in the knowledge base so that, although they cannot be used to reason with, they can be reasoned about. For details on the mechanisms involved here, see the more technical introduction.