Re: Interoperability and Vagueness
Many of the points made in this exchange seem to stem from
difficulties with an implicit assumption, namely that a word or
a word-sense has a simple, static mapping to a concept in an
ontology, or that an ontology is constructed from an arrangement
of words or word-senses. (Would you consider a WordNet to be
an ontology? Are you using "concept" and "word-sense" interchangeably?)
Contextualization, metonymy, the multiplicity of languages in
the world, non-compositionality, and the other difficulties pointed
out in this thread all point in the direction of complex and dynamic
mappings between words, as used in a particular utterance in a particular
language, and ontological concepts or an ontologically anchored
representation of the meaning of that utterance.
While we may or may not know what such a representation should look
like, at the limit, what are the arguments for using words or
word-senses from that language in attempting to build such a
representation, as some researchers do?
While one can argue about vagueness in the specification
of concepts in an ontology, word ambiguity appears to me to
be a separate issue.
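To make the "complex and dynamic mapping" point concrete, here is a minimal sketch of a lexicon keyed on (language, word, context) rather than on the word alone. The Concept and Lexicon names, and the context tags, are my own illustration, not any real ontology toolkit's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A language-independent node in an ontology."""
    ident: str

class Lexicon:
    """Maps (language, word, context) triples to concepts, not words alone."""
    def __init__(self):
        self._senses = {}  # (language, word, context) -> Concept

    def add(self, language, word, context, concept):
        self._senses[(language, word, context)] = concept

    def resolve(self, language, word, context):
        # A static mapping would key on (language, word) only; the
        # context key is what makes the mapping dynamic.
        return self._senses.get((language, word, context))

lex = Lexicon()
lex.add("en", "bank", "finance", Concept("FinancialInstitution"))
lex.add("en", "bank", "geography", Concept("RiverBank"))
lex.add("fr", "banque", "finance", Concept("FinancialInstitution"))

assert lex.resolve("en", "bank", "finance") == Concept("FinancialInstitution")
assert lex.resolve("en", "bank", "geography") == Concept("RiverBank")
# Two languages, one concept: the ontology anchors meaning, not the word.
assert lex.resolve("fr", "banque", "finance") == lex.resolve("en", "bank", "finance")
```

The point of the sketch is only that the word itself cannot serve as the key; the ontological concept is the anchor shared across utterances and languages.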
John F. Sowa wrote:
> Thanks for the reference to that paper by
> Adam Kilgarriff. There's also a lot of other
> interesting work on computational linguistics,
> lexicography, and related work at ITRI at the
> University of Brighton. (See note below).
> Some further comments:
> Alan Cruse>> "I am similarly pessimistic about the
> >> possibility of representing meaning by prototypes:
> >> what would seem to be called for is an indefinitely
> >> large set of prototypes-within-prototypes."
> RF> What would prototypes-within-prototypes look like?
> You would have to ask Cruse. My guess is that you
> can't just have one prototype for "dog", but a large
> collection of prototypes for beagle, dachshund, collie,
> mutt, etc. Then you would also need prototypes for
> parts of dogs, such as a typical dog nose, tail, ears,
> paws, etc., with special cases for each type of dog.
> You would have to do something similar for every kind
> of animal, car, truck, operating system, legal system,
> country, province, state, city, town, village, etc.
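To make the combinatorics of that guess concrete, here is a toy sketch (my construction, not Cruse's or Sowa's) of prototypes-within-prototypes for the dog example: one prototype per type, per part, and per part-of-subtype.

```python
from dataclasses import dataclass, field

@dataclass
class Prototype:
    name: str
    parts: dict = field(default_factory=dict)     # part name -> Prototype
    subtypes: dict = field(default_factory=dict)  # subtype name -> Prototype

def count(p):
    """Total prototypes needed, counting parts and subtypes recursively."""
    return 1 + sum(count(q) for q in p.parts.values()) \
             + sum(count(q) for q in p.subtypes.values())

PARTS = ("nose", "tail", "ears", "paws")
BREEDS = ("beagle", "dachshund", "collie", "mutt")

dog = Prototype(
    "dog",
    parts={k: Prototype(f"typical dog {k}") for k in PARTS},
    subtypes={b: Prototype(b, parts={k: Prototype(f"{b} {k}") for k in PARTS})
              for b in BREEDS},
)

# 1 (dog) + 4 parts + 4 breeds * (1 + 4 parts) = 25 prototypes
# for a single animal with only four breeds and four parts.
assert count(dog) == 25
```

Repeating this for every animal, artifact, and institution is what makes the set "indefinitely large."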
> There are two approaches to word senses that I like
> very much. They start from different points of view,
> but they reach similar conclusions:
> 1. Monosemy. That's the idea that each word has
> a single core meaning, and in different contexts,
> it can diverge from the core in several ways:
> metaphor, metonymy, and specialization.
> 2. Microsenses. Each word has an open-ended number
> of meanings that differ from one another by small
> degrees from one context to another. Each use
> could be distinguished as a different microsense.
> Philosophically, I relate microsenses to Wittgenstein's
> notion of language games, in which the meaning of each
> word changes with the game (or context) in which it
> is used. For a formal ontology, I would handle these
> approaches in terms of the lattice of theories:
> 1. Assign the common core to a very general concept
> type (or predicate) in the ontology.
> 2. Treat the metaphors and metonyms as systematic
> methods for generating new types in the hierarchy,
> which may or may not be subtypes of the common core.
> 3. Treat the specializations as a systematic way of
> generating new microsenses for each language game.
> 4. Formalize each game in terms of a theory within
> the lattice.
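For what it's worth, the four steps quoted above could be sketched in code roughly as follows; every identifier here is my own illustration, not anything from Sowa's lattice-of-theories work:

```python
class TypeHierarchy:
    def __init__(self, core):
        self.core = core              # step 1: the common core type
        self.parents = {core: None}   # type -> supertype (None = no supertype)
        self.theories = {}            # game name -> set of microsenses

    def derive(self, new_type, parent=None):
        # step 2: metaphor/metonymy as generators of new types, which
        # may (parent=self.core) or may not (parent=None) be subtypes
        # of the common core
        self.parents[new_type] = parent
        return new_type

    def microsense(self, game, base, label):
        # steps 3-4: a specialization of `base`, recorded as holding
        # within the theory named by `game`
        sense = f"{base}/{label}"
        self.parents[sense] = base
        self.theories.setdefault(game, set()).add(sense)
        return sense

    def is_subtype(self, t, ancestor):
        while t is not None:
            if t == ancestor:
                return True
            t = self.parents.get(t)
        return False

h = TypeHierarchy("Run")                 # core sense of "run"
h.derive("RunForOffice", parent=None)    # metonymic sense, outside the core tree
sprint = h.microsense("athletics", "Run", "sprint")

assert h.is_subtype(sprint, "Run")
assert not h.is_subtype("RunForOffice", "Run")
assert sprint in h.theories["athletics"]
```

This captures only the bookkeeping, of course; the real content of the proposal lies in the theories themselves, which are here reduced to bare names.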
> I do not claim that the lattice of theories corresponds
> to the way that people normally understand language. But
> what I do claim is that it provides a systematic way of
> formalizing and representing any particular use that anyone
> might want to relate to some formalized computer application.
> However, I also believe that most use of language is never
> going to be formalized, and there is no reason why it should be.
> But if, for some purpose, you want to do so, here's how.
> John Sowa
> Information Technology Research Institute
> The Information Technology Research Institute (ITRI) is a dedicated
> research department within the University of Brighton. The current focus
> of the Institute's work is on computational linguistics, language
> engineering, and human-computer interfaces. Our research addresses the
> following theoretical issues: architectures for natural language
> generation, constraint based reasoning, controlled languages, corpora,
> diagrammatic reasoning, dialog, discourse, integrating text and
> graphics, lexical knowledge bases, lexical representation, message
> understanding, multilinguality, natural language interfaces, text
> generation, underspecification, word sense disambiguation. The work of
> the Institute is funded primarily by grants from national and European
> research councils and contracts with commercial organizations. Much of
> our work is on highly theoretical issues, but the Institute is also
> strongly committed to strategic research, ensuring whenever possible
> that the results of our research can provide solutions to real-world
> problems.
> For ITRI publications see