PhD Proposal: Encoding and Exploiting Context in Logic-Based AI Systems
IRB-4107
Traditional symbolic approaches to AI suffer from a scaling problem: as the number of facts in a knowledge base grows, the number of possible inferences a reasoner can draw grows at a much faster rate. One strategy for addressing this problem is to incorporate contextual metadata so that the reasoner can evaluate the relevance of its facts to the current situation and prioritize consideration of the most relevant ones.

For my dissertation, I will explore and classify available forms of context, paying special attention to spatial context. I will investigate how context can be acquired and encoded, and describe a procedure for augmenting a formal logic with contextual metadata (resulting in what I have termed a "C-Logic"). Finally, I will design an architecture for a logical reasoner that directs inference using context. I plan to produce a demo implementation of such a reasoner as a proof of concept, but this is not the focus of the work.

The specific method I propose for encoding context is to embed facts in a vector space whose dimensions correspond to different forms of context; this enables the use of nearest-neighbor queries to retrieve the facts closest to the embedding of the agent's current context. Time permitting, I would also like to experiment with potential roles for machine learning in the acquisition and quantification of contextual information.
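As an illustration of the proposed encoding, the sketch below shows how nearest-neighbor retrieval over context embeddings might work. The fact set, the three context dimensions, and the `nearest_facts` function are all hypothetical choices made for this example, not the proposal's actual design; a real system would learn or engineer its dimensions rather than hand-assign them.

```python
import numpy as np

# Hypothetical knowledge base: each fact is paired with a context
# embedding. The three dimensions here stand in for, e.g., spatial
# location, time of day, and task relevance (illustrative only).
facts = [
    "the stove is hot",
    "the library closes at 21:00",
    "the car is parked on level 3",
]
fact_contexts = np.array([
    [0.9, 0.2, 0.1],   # kitchen, morning, cooking
    [0.1, 0.8, 0.3],   # campus, evening, studying
    [0.4, 0.5, 0.9],   # garage, afternoon, driving
])

def nearest_facts(query: np.ndarray, k: int = 2) -> list[str]:
    """Return the k facts whose context embeddings are closest
    (by Euclidean distance) to the agent's current context."""
    dists = np.linalg.norm(fact_contexts - query, axis=1)
    order = np.argsort(dists)[:k]
    return [facts[i] for i in order]

# Agent currently in the kitchen, in the morning, cooking:
print(nearest_facts(np.array([0.85, 0.25, 0.15]), k=1))
# → ['the stove is hot']
```

A brute-force linear scan suffices at this scale; for large knowledge bases the same query could be served by an approximate nearest-neighbor index.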