Re: Some references about ontology and analogy: SUO redux
I have followed this thread and find myself agreeing with almost everything John
Sowa has said, but I think he is leaving out one important ingredient -- the
topic of the SUO list: we are still missing a standard upper ontology.
John F. Sowa wrote:
> Offline> What would be sufficient to be successful?
> As I said in the slides (challenge.pdf), current systems of
> deduction are already at the level where they can compete with
> (or sometimes surpass) the ability of the average human. Just
> adding more formal axioms and definitions and more efficient
> theorem provers won't do anything for the problem of knowledge soup.
> The areas where current systems are woefully inadequate compared
> to the average human are on the left side of the cognitive cycle.
> Those are the areas related to learning -- which Peirce called
> induction and abduction. Current systems of data mining, for
> example, are better than humans in finding statistical patterns
> in a relational database, but the really hard work is analyzing
> the real-world situation in order to determine what to put in
> a relational database. No learning systems today can even come
> close to doing anything like that.
Yes, the other forms of reasoning, including analogy, are important, and
Doug Lenat's earlier papers strongly emphasized the importance of analogy and
the goal of enabling analogy in Cyc. Perhaps VivoMind's approach is more potent.
But it is still the approach of a single vendor, and it has the same limitation
from the point of view of a community that hopes to advance the art and science
of computer intelligence together: when representations are not widely adopted
throughout the community, the lessons that can be reused elsewhere are only of
a general nature. Most of the effort expended will be useful only to VivoMind
and those who use its specific knowledge representation and reasoning program.
To advance any science, it is important to be able to share and re-use
results. Those who are concerned with exploring the potential for specific
types of reasoning or analogizing, or how to include contexts in a reasoning
system, can benefit immensely from each other's work, but only if there is
enough commonality in some respect that the lessons of one group's work can be
reused in another group's system. A standard upper ontology is not the only
element needed to maximize this kind of result reuse, but it is a critical
central element and I doubt that serious progress will be made toward
human-level intelligence until a significant fraction of those doing research in
the field coalesce on the use of a common SUO. It doesn't even have to be the
same SUO used in industrial applications, but there's no reason why it can't be.
Cyc may not have made as much progress as they promised because they have
focused too narrowly on one type of reasoning. The community as a whole (much
larger than Cycorp) has not done any better than Cycorp because (in my
analysis), in spite of trying many different approaches, they don't even have
what Cyc has (internally) -- a common upper ontology to permit meaningful
comparison and reuse of the results of different approaches.
At this point I plead with all to quit talking about the "monolithic" upper
ontology red herring. In the past ten years, it has been reiterated over and
over that a community-wide "Standard Upper Ontology" will contain a mechanism to
include alternative theories, possible worlds, and alternative (but logically
consistent) views. The important feature will be that whatever representation
one group decides it must use will be included in such a way that:
  - identical concepts are represented only once;
  - alternative views have transforming axioms to convert from one view (used
    by one group or in one context) to another;
  - logically incompatible representations are identified and sequestered from
    each other; and
  - mappings from one view to another are already included and checked, so that
    they don't have to be laboriously and inaccurately attempted over and over.
And of course, synonyms and namespaces are seamlessly accommodated. With that
kind of SUO, research on optimization of reasoning methods can proceed with
greater hope that reuse of results between groups will focus on the differences
caused by the reasoning and not founder on the inability to make a comparative
interpretation of results that use different knowledge representation paradigms.
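To make the mechanism concrete, here is a minimal Python sketch of that kind of
registry. Everything in it is hypothetical illustration -- the class name, the
"Duration" concept, and the two group views are my own inventions, not drawn
from any actual SUO candidate -- but it shows the intended shape: each concept
defined once, transforming axioms converting between views through a canonical
form, and incompatible views sequestered rather than silently merged.

```python
class UpperOntology:
    """Toy registry illustrating the SUO mechanism (hypothetical names)."""

    def __init__(self):
        self.concepts = {}          # canonical concept -> definition text
        self.to_canon = {}          # (concept, view) -> view-to-canonical transform
        self.from_canon = {}        # (concept, view) -> canonical-to-view transform
        self.incompatible = set()   # views flagged as logically incompatible

    def define(self, concept, definition):
        # Identical concepts are represented only once.
        if concept in self.concepts:
            raise ValueError(f"{concept} is already defined")
        self.concepts[concept] = definition

    def add_view(self, concept, view, to_canonical, from_canonical):
        # A pair of "transforming axioms": view -> canonical and back.
        self.to_canon[(concept, view)] = to_canonical
        self.from_canon[(concept, view)] = from_canonical

    def translate(self, concept, value, source_view, target_view):
        # Convert between two groups' views via the canonical form,
        # so N views need N mappings rather than N^2 pairwise ones.
        canonical = self.to_canon[(concept, source_view)](value)
        return self.from_canon[(concept, target_view)](canonical)

    def sequester(self, view):
        # Incompatible representations are identified, not merged.
        self.incompatible.add(view)


suo = UpperOntology()
suo.define("Duration", "elapsed time between two instants")
# Group A records durations in hours, group B in seconds;
# the canonical unit here is seconds.
suo.add_view("Duration", "group-A-hours", lambda h: h * 3600, lambda s: s / 3600)
suo.add_view("Duration", "group-B-seconds", lambda s: s, lambda s: s)

print(suo.translate("Duration", 2, "group-A-hours", "group-B-seconds"))  # 7200
```

The point of the sketch is only that the mappings are registered once, checked
in one place, and reused by every group, instead of each pair of groups
re-deriving them by hand.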
Recently I have suggested in different fora that a widely used SUO is likely
to emerge only from an adequately funded development program (> $4 million) that
provides an opportunity for significant input from a large representative sample
of those working on ontologies -- more than 40 groups would probably have to be
represented. I still think that. But this group might still have impact, if it
could try once again to actually build an SUO as a community, and try to find
areas of agreement on some of the more fundamental issues.
We tried that before. The effort quickly petered out. As I interpret it,
the problem is that we would all like to see some kind of consensus before
giving a group blessing to any decision, but the nature of the beast is a
diverse community with only minimal spare-time effort to contribute, and in the
small amounts of valuable time available, we cannot exhaustively debate every
detail of every issue. It is imperative to define a goal and a timetable, and
to use voting as a means of resolving disputes quickly. Everyone fears that a
result generated by a hurried process will have residual flaws, but I seriously
doubt that they could be any worse than the flaws that already exist in the
current candidates for an SUO. Whether there are flaws in any product will only
be provable when the SUO is put to the test in practical applications (and,
preferably, compared with alternatives using the same reasoning system). We know
that even the high levels of an SUO will be open to change for at least an
initial period, and we have to be ready to change when the evidence shows that
a change is needed.
If there is some subset of those who participate in the IEEE-SUO who are
willing once again to try to develop agreement on some part of an SUO, I will be
willing to participate as we did before. I would suggest that such a project
be conducted as a working group, without the formality of IEEE voting procedures
to make decisions.
If anyone would like to try again, send a note to this list or to me
directly at: pcassidy at mitre.org.
> In those slides I mentioned our current work on the VivoMind
> Analogy Engine. I believe that's part of the solution, but
> there is a lot more work needed to address the "challenge of
> knowledge soup": analyze the "blooming buzzing confusion",
> as William James called the sensory overload on an infant
> (or an adult for that matter) and determine how to organize it.
> People call natural language texts "unstructured", but those
> texts have an enormous amount of structure built into them
> in comparison to the total sensory input that impinges on the
> human (or any animal) body. The most difficult problem is to
> analyze input of that kind (or input from a video camera, for
> that matter) and to relate it to words in any natural language.
> But as anyone who has been working with NLP knows, there is a
> lot of hard work needed even after that step has been done.
> Unlike Roger Schank, who believes that logic is totally
> irrelevant, I believe that logic is relevant. But I agree
> with Roger that the most important work is on the learning
> processes that enable an infant to learn language and enable
> an adult to analyze and organize any real-world input.
> John Sowa
MICRA, Inc. || (908) 561-3416
735 Belvidere Ave. || (908) 668-5252 (if no answer above)
Plainfield, NJ 07062-2054