SUO: Re: Enhancing Data Interoperability with Ontologies...
I agree with many of your comments, but I'd like to
clarify and extend some of the points. The quotations
marked HH come from both of your earlier notes.
HH> With XSLT in particular, the transformation is applied
> at the syntax level. What happens when untransformed
> assertions bump into each other inside the RDF engine?
> XSLT is not a tool which can be applied there. The symbols
> must be grounded in such a way that the engine recognises
> the relationships between them.
This is a very important point, and it reinforces my claim
that all the logical specifications used for an application
must be integrated and related to one another. A syntactic
transformer, such as XSLT, cannot check whether the web
semantics are consistent with the semantics specified
anywhere else. An ontology language such as OWL must not
be isolated from the constraints and specifications used
for the database or the application programs.
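To make the point concrete, consider a made-up illustration,
with hypothetical predicate names, written in KIF:

    Ontology:  (forall (?x) (=> (Employee ?x) (Person ?x)))
    Database:  (exists (?x) (and (Employee ?x) (not (Person ?x))))

An XSLT script can convert both statements to RDF syntax
without complaint, but only an engine that grounds the symbols
Employee and Person in a common logic can detect that the two
assertions contradict each other.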
HH> I'm trying to use RDF. It appeals for a number of reasons,
> not least among which is that there is a great deal of work
> going on in the RDF space, which means that tools are being
> developed which seem to be rather lacking for dealing with
> e.g. Conceptual Graphs.
Indeed, that is an extremely important reason for choosing
any tool. MS Windows is by any measure the worst of all major
operating systems available today. However, people use it
for one reason only: there is more software available for it.
HH> Perhaps it is unfortunate that RDF didn't take an
> existing notation of logic and "webize" it.
Some of us are developing the proposed Common Logic standard
to enable any logic-based specification in any notation to
be transformed automatically to any other. Among the notations
supported are predicate calculus, KIF, CGs, and even RDF and OWL.
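As a sketch (with hypothetical names), the same statement
can be written in any of the supported notations and
translated automatically among them:

    English:            Every cat is an animal.
    Predicate calculus: (forall x)(cat(x) -> animal(x))
    KIF:                (forall (?x) (=> (Cat ?x) (Animal ?x)))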
HH> Are there any resources which spell out how RDF falls short
> of FOL in its expressiveness?
HH> Has there been any further discussion of
> http://www.w3.org/DesignIssues/CG.html ?
As Tim Berners-Lee says in that article, there are no serious
obstacles to making CGs compatible with the ideas of the web.
That is why the Common Logic project can include CGs, KIF,
and predicate calculus within a common semantic foundation.
HH> Does "support for URIs" mean "using URIs as identifiers"?
Yes, a file that contains Common Logic statements can be
identified by a URI, and local names inside the file can be
qualified by concatenating them with the URI of the file.
CGs support contexts, which Tim says are "remarkably similar"
to contexts on the web.
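As an illustration of the naming convention (a hypothetical
URI, assuming the usual fragment-identifier style):

    File URI:    http://example.org/family.clif
    Local name:  Human
    Global name: http://example.org/family.clif#Human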
HH> Now apart from the problem that controlled NLs presumably
> use natural language words and thus appear to be useful
> in a Semantic Web context only at a user interface level
> (with some mapping defined from universal to local identifiers),
> I have some difficulty with this argument.
The semantics of the controlled NLs is defined by their mapping
to the base logic (which I assume is Common Logic in any of the
supported notations). See the CLCE draft for the details.
HH> Isn't there a problem that eg controlled English is not English,
> which raises the chances of misunderstanding?
Misunderstandings are always possible with any language of any
kind. Reading controlled English as English is less likely to
cause misunderstandings than reading an unfamiliar language:
CLCE: Every human has two parents.
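One possible reading in KIF (a sketch; the official CLCE
mapping may differ in detail, e.g. on whether "two" means
exactly two or at least two):

    (forall (?x)
      (=> (Human ?x)
          (exists (?y ?z)
            (and (parent ?y ?x) (parent ?z ?x)
                 (not (= ?y ?z))))))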
The fact that a language might be unambiguous does not
guarantee that what the author intended to say has any
relationship to what was said or to anything that the
reader might happen to glean from reading the text.
HH> While reading of controlled NL may be relatively
> straightforward for the uninitiated, why would writing
> it be any easier than writing any other formal language?
I suggest comparing the CLCE example above with an
equivalent statement in OWL, such as the sketch below.
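For comparison, here is a rough OWL rendition of the same
constraint (hypothetical class and property names; an actual
encoding might differ in detail):

    <owl:Class rdf:about="#Human">
      <rdfs:subClassOf>
        <owl:Restriction>
          <owl:onProperty rdf:resource="#hasParent"/>
          <owl:cardinality rdf:datatype=
            "http://www.w3.org/2001/XMLSchema#nonNegativeInteger"
            >2</owl:cardinality>
        </owl:Restriction>
      </rdfs:subClassOf>
    </owl:Class>

The one-line CLCE sentence can be read by anyone who knows
English; writing or reading the OWL version requires training
in both XML and description logics.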
HH> As for burying the ontology issue...
I agree with you completely. I wasn't burying the ontology
issue; I was deploring my own role in helping to popularize
the word "ontology". I think the word "model" would be less
confusing.
HH> ... pretending that a language is English seems likely
> to result in less, not more deliberation.
I make it clear that controlled English is not English.
Designing tools that punish any infraction of the rules of
CLCE syntax and semantics is extremely easy; the real
challenge is to make the tools helpful and forgiving.
HH> The current WWW is a pale imitation of Bush's memex,
> and is far from being its realisation....
I agree that Vannevar Bush suggested many ideas that are still
incompletely implemented and certainly not universally
available on all WWW platforms. That is one more
reason why I don't want to isolate web semantics from
the semantics of databases or application programs.
HH> ... But database technology as it stands is equally
> glaringly unsuitable for many tasks outside those for
> which it is already heavily used. It doesn't provide
> for a notion of a globally distributed but unified
> information store....
Yes indeed. That is why I deplore any attempt to address
WWW semantics while ignoring DB semantics and vice versa.
HH> I don't see how Java and .NET differ in any particular
> respect from the languages which existed before them....
It's not the languages by themselves, but the supporting
tools that integrate them with the browser and the server
and that address the security, development, and distribution
issues across multiple interacting platforms.
I was contrasting the attention that Sun devoted to the use
of Java in a network environment with the W3C's lack of
attention to the problems of integrating their proposals
with database and programming technology.
JS> The most glaring absence is the lack of any consideration
> for what has been happening in AI since the 1970s...
HH> Any references to these things?
Many important technologies have moved from AI into current
practice, and many more have been proposed. As one example,
I would cite my proposal for a Flexible Modular Framework
that can accommodate both legacy technology and new
innovations:
Architectures for Intelligent Systems
As another example, I would cite a paper that shows what can
be accomplished with AI technology. The following paragraph
from that paper discusses tools based on that technology
that can analyze and compare both the software and the
English documentation of legacy systems. All this processing
was done on unrestricted English:
    In one major application, VAE was used to analyze the programs and
    documentation of a large corporation, which had systems in daily use
    that were up to forty years old (LeClerc & Majumdar 2002). Although the
    documentation specified how the programs were supposed to work, nobody
    knew what errors, discrepancies, and obsolete business procedures might
    be buried in the code. The task required an analysis of 100 megabytes of
    English, 1.5 million lines of COBOL programs, and several hundred
    control-language scripts, which called the programs and specified the
    data files and formats. Over time, the English terminology, computer
    formats, and file names had changed. Some of the format changes were
    caused by new computer systems and business practices, and others were
    required by different versions of federal regulations. In three weeks of
    computation on a 750 MHz Pentium III, VAE combined with the Intellitex
    parser was able to analyze the documentation and programs, translate all
    statements that referred to files, data, or processes in any of the
    three languages (English, COBOL, and JCL) to conceptual graphs, and use
    the CGs to generate an English glossary of all processes and data, to
    define the specifications for a data dictionary, to create dataflow
    diagrams of all processes, and to detect inconsistencies between the
    documentation and the implementation.