SUO: Re: CG: Re: Link Grammar and Parser
Your point that parsing methodologies and linguistic theories are somewhat
independent is an important one. Some things that go by the name of
'grammars' are actually just ways of organizing constraints and rules
(e.g., 'unification grammars' are a particular way of handling rules,
features, and constraints, and can be used to implement dependency grammars
as well as phrase structure grammars).
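To make that concrete, here is a rough sketch (Python, purely illustrative;
the feature names and values are made up) of the unification operation such
formalisms are built around. Depending on how the features are interpreted,
the same operation can serve a dependency-style analysis or a
phrase-structure one.

# Illustrative only: a minimal unifier over attribute-value matrices (AVMs),
# represented here as nested dicts.  Real unification grammars add variables,
# re-entrancy (structure sharing), and typed feature structures on top of this.

def unify(a, b):
    """Return the unification of two AVMs, or None if they are incompatible."""
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, value in b.items():
            if key in result:
                merged = unify(result[key], value)
                if merged is None:          # feature clash
                    return None
                result[key] = merged
            else:
                result[key] = value
        return result
    return a if a == b else None            # atomic values must match exactly

# The feature names below are invented for the example.
np = {"cat": "NP", "agr": {"num": "sg", "per": 3}}
subj = {"agr": {"num": "sg"}}
print(unify(np, subj))   # {'cat': 'NP', 'agr': {'num': 'sg', 'per': 3}}
print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}))   # None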
Similarly, bottom-up (shift-reduce) parsing and rule-based top-down
parsing can both be used with many different linguistic theories, with
or without AVMs and unification.
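As a sketch of how little the control strategy cares about the linguistic
theory, here is a bare-bones shift-reduce loop (again Python, with an
invented toy grammar); swap the rule table for a different theory's
constructions and the same loop runs unchanged.

# A bare-bones shift-reduce recognizer (greedy, no backtracking), just to show
# that the control strategy is independent of the grammar written into RULES.
RULES = {                      # invented toy grammar
    ("Det", "N"): "NP",
    ("NP", "VP"): "S",
    ("V", "NP"): "VP",
}

def shift_reduce(tags):
    """tags: a list of part-of-speech tags for the input words."""
    stack, buffer = [], list(tags)
    while buffer or len(stack) > 1:
        # reduce whenever the top of the stack matches a rule's right-hand side
        if len(stack) >= 2 and tuple(stack[-2:]) in RULES:
            stack[-2:] = [RULES[tuple(stack[-2:])]]
        elif buffer:
            stack.append(buffer.pop(0))     # shift the next tag
        else:
            break                           # stuck: no rule applies
    return stack

print(shift_reduce(["Det", "N", "V", "Det", "N"]))   # ['S']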
It would be nice to have a taxonomy of (1) linguistic theories, (2) parsing
approaches (left-corner, shift-reduce, ...), and (3) ways of handling
constraints; or, for that matter, a simple list of the different
'dimensions' of the problem.
Does anyone know of such a list or taxonomy?
On Wed, Dec 03, 2003 at 08:27:16AM -0500, John F. Sowa wrote:
> For reasons similar to Yorick's, I believe that the theory
> or formalism behind a lot of acronyms ending in G is less
> relevant than the selection among a variety of computational
> techniques. Following is a nonexhaustive sample:
> - Top-down with backtracking or bottom-up, chart-style.
> - Form of output, such as parse tree, dependency graph,
> feature structure, or some combination or variation.
> - A basic context-free backbone (whether it is called
> context-free or whether it is written in a rule-like
> style is irrelevant) augmented with tests of various
> kinds, which may be called syntactic, semantic,
> pragmatic, or whatever.
> - Tradeoffs between the number of grammar rules (CF or
> otherwise) and the number of patterns or constructs
> associated with the words -- i.e., is it a complex
> grammar with many pages of rules or a simple grammar
> with most structural information in the lexicon?
> - Methods for keeping track of ambiguities and reducing
> or managing them by grouping, testing, marking, etc.
> - Use of background knowledge, corpora, statistics, etc.,
> and at what stage during the parse.
> - And most importantly, is the parser written by a
> superprogrammer or by a newbie who just learned
> language X? And did the author spend many long
> hours working and reworking it on lots of texts?
> There is certainly a connection between the choice of
> formalism and the choice of computational techniques,
> but it is not completely deterministic. Just look at
> the enormous number of published methods for parsing
> context-free grammars. Other formalisms might have
> as many techniques if they had as many people working
> on them.
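Just to underline that last point about context-free parsing methods: a
CKY-style chart recognizer is one of the many published techniques, and a
toy version (grammar and categories invented for the example) fits in a few
lines.

# Purely illustrative: a CKY-style chart recognizer, one of the many published
# techniques for parsing a context-free backbone.  The grammar must be in CNF;
# the rules and words here are invented for the example.
from itertools import product

UNARY = {"the": "Det", "dog": "N", "barks": "VP"}       # terminal rules
BINARY = {("Det", "N"): "NP", ("NP", "VP"): "S"}        # A -> B C rules

def cky_recognize(words, start="S"):
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1].add(UNARY[w])                   # fill the diagonal
    for span in range(2, n + 1):                        # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b, c in product(chart[i][k], chart[k][j]):
                    if (b, c) in BINARY:
                        chart[i][j].add(BINARY[(b, c)])
    return start in chart[0][n]

print(cky_recognize(["the", "dog", "barks"]))   # True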