

PRIORITY UNION AND GENERALIZATION IN DISCOURSE GRAMMARS

Claire Grover, Chris Brew, Suresh Manandhar, Marc Moens

HCRC Language Technology Group
The University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW, UK
Internet: C.Grover@ed.ac.uk

Abstract

We describe an implementation in Carpenter's typed feature formalism, ALE, of a discourse grammar of the kind proposed by Scha, Polanyi, et al. We examine their method for resolving parallelism-dependent anaphora and show that there is a coherent feature-structural rendition of this type of grammar which uses the operations of priority union and generalization. We describe an augmentation of the ALE system to encompass these operations and we show that an appropriate choice of definition for priority union gives the desired multiple output for examples of VP-ellipsis which exhibit a strict/sloppy ambiguity.

1 Discourse Grammar

Working broadly within the sign-based paradigm exemplified by HPSG (Pollard and Sag, in press), we have been exploring computational issues for a discourse-level grammar by using the ALE system (Carpenter 1993) to implement a discourse grammar. Our central model of a discourse grammar is the Linguistic Discourse Model (LDM) most often associated with Scha, Polanyi, and their co-workers (Polanyi and Scha 1984, Scha and Polanyi 1988, Prüst 1992, and most recently Prüst, Scha and van den Berg 1994). In LDM, rules are defined which are, in a broad sense, unification grammar rules and which combine discourse constituent units (DCUs). These are simple clauses whose syntax and underresolved semantics have been determined by a sentence grammar but whose fully resolved final form can only be calculated by their integration into the current discourse and its context. The rules of the discourse grammar act to establish the rhetorical relations between constituents and to perform resolution of those anaphors whose interpretation can be seen as a function of discourse coherence (as opposed to those whose interpretation relies on general knowledge).

For illustrative purposes, we focus here on Prüst's rules for building one particular type of rhetorical relation, labelled "list" (Prüst 1992). His central thesis is that for DCUs to be combined into a list they must exhibit a degree of syntactic-semantic parallelism and that this parallelism will strongly determine the way in which some kinds of anaphor are resolved. The clearest example of this is VP-ellipsis as in (1a), but Prüst also claims that the subject and object pronouns in (1b) and (1c) are parallelism-dependent anaphors when they occur in list structures and must therefore be resolved to the corresponding fully referential subject/object in the first member of the list.

(1) a. Hannah likes beetles. So does Thomas.
    b. Hannah likes beetles. She also likes caterpillars.
    c. Hannah likes beetles. Thomas hates them.

(2) is Prüst's list construction rule. It is intended to capture the idea that a list can be constructed out of two DCUs, combined by means of connectives such as and and or. The categories in Prüst's rules have features associated with them. In (2) these features are sem (the unresolved semantic interpretation of the category), consem (the contextually resolved semantic interpretation), and schema (the semantic information that is common between the daughter categories).

(2) list [ sem: C1 R ((C1 ⋄ S2) ⊓ S2), schema: C1 ⋄ S2 ] →
        DCU1 [ sem: S1, consem: C1 ] + DCU2 [ sem: S2, consem: ((C1 ⋄ S2) ⊓ S2) ]

    Conditions: C1 ⋄ S2 is a characteristic generalization of C1 and S2; R ∈ {and, or}

Prüst calls the operation used to calculate the value for schema the most specific common denominator (MSCD, indicated here by the symbol ⋄). The MSCD of C1 and S2 is defined as the most specific generalization of C1 that can unify with S2. It is essential that the result should be contentful to a degree that confirms that the list structure is an appropriate analysis, and to this end Prüst imposes the condition that the value of schema should


be a characteristic generalization of the information contributed by the two daughters. There is no formal definition of this notion; it would require knowledge from many sources to determine whether sufficient informativeness had been achieved. However, assuming that this condition is met, Prüst uses the common information as a source for resolution of underspecified elements in the second daughter by encoding as the value of the second daughter's consem the unification of the result of MSCD with its pre-resolved semantics (the formula ((C1 ⋄ S2) ⊓ S2)). So in Prüst's rule the MSCD operation plays two distinct roles, first as a test for parallelism (as the value of the mother's schema) and second as a basis for resolution (in the composite operation which is the value of the second daughter's consem). There are certain problems with MSCD which we claim stem from this attempt to use one operation for two purposes, and our primary concern is to find alternative means of achieving Prüst's intended analysis.

2 An ALE Discourse Grammar

For our initial exploration into using ALE for discourse grammars we have developed a small discourse grammar whose lexical items are complete sentences (to circumvent the need for a sentence grammar) and which represents the semantic content of sentences using feature structures of type event, whose subtypes are indicated in the following part of the type hierarchy:

(3)  event
       agentive
         plus_patient
           emot-att
             like   hate
           action
             kick   catch
         prop-att
           believe   assume

In addition we have a very simplified semantics of noun phrases where we encode them as of type entity with the subtypes indicated below:

(4)  entity
       animate
         hannah   jessy   thomas   sam   brother   beetle   bee   caterpillar
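For concreteness, the sketch below (Python, our own illustration rather than the authors' ALE type declarations) encodes one plausible reading of the hierarchies in (3) and (4), together with the subsumption, join and meet helpers that the operations discussed later rely on. The placement of the intermediate types human, female, male and insect, and the positions of sam and brother, are assumptions reconstructed from the generalizations reported later in the paper.

```python
# A minimal sketch of the toy type hierarchies in (3) and (4); the intermediate
# entity types (human, female, male, insect) and the attachment of sam and
# brother are assumptions, not taken from the original figure.

PARENT = {
    # event hierarchy (3)
    "agentive": "event", "plus_patient": "agentive", "prop_att": "agentive",
    "emot_att": "plus_patient", "action": "plus_patient",
    "believe": "prop_att", "assume": "prop_att",
    "like": "emot_att", "hate": "emot_att", "kick": "action", "catch": "action",
    # entity hierarchy (4); intermediate types below are assumed
    "animate": "entity", "human": "animate", "insect": "animate",
    "female": "human", "male": "human",
    "hannah": "female", "jessy": "female", "thomas": "male", "sam": "male",
    "brother": "human",
    "beetle": "insect", "bee": "insect", "caterpillar": "insect", "ant": "insect",
}

def ancestors(t):
    """Chain of supertypes from t up to the root, t included."""
    chain = [t]
    while t in PARENT:
        t = PARENT[t]
        chain.append(t)
    return chain

def subsumes(general, specific):
    """True if `general` is a (non-strict) supertype of `specific`."""
    return general in ancestors(specific)

def join(a, b):
    """Most specific common supertype: the type-level generalization."""
    up = set(ancestors(a))
    return next(t for t in ancestors(b) if t in up)

def meet(a, b):
    """Type-level unification: the more specific of two comparable types,
    or None if they are incompatible in this tree-shaped hierarchy."""
    if subsumes(a, b):
        return b
    if subsumes(b, a):
        return a
    return None

if __name__ == "__main__":
    print(join("hannah", "thomas"))   # human
    print(join("hannah", "jessy"))    # female
    print(join("ant", "bee"))         # insect
    print(meet("beetle", "entity"))   # beetle
    print(meet("hannah", "jessy"))    # None (disjoint atomic types)
```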

Specifications of which features are appropriate for which type give us the following representations of the semantic content of the discourse units in (1):

(5) a. Hannah likes beetles
       [like, AGENT hannah, PATIENT beetle]

    b. So does Thomas
       [agentive, AGENT thomas]

    c. She also likes caterpillars
       [like, AGENT female, PATIENT caterpillar]

    d. Thomas hates them
       [hate, AGENT thomas, PATIENT entity]

2.1 Calculating Common Ground

The SCHEMA feature encodes the information that is common between daughter DCUs, and Prüst uses MSCD to calculate this information. A feature-structural definition of MSCD would return as a result the most specific feature structure which is at least as general as its first argument but which is also unifiable with its second argument. For the example in (1c), the MSCD operation would be given the two arguments in (5a) and (5d), and (6) would be the result.

(6)  [emot_att, AGENT human, PATIENT beetle]

We can contrast the MSCD operation with an operation which is more commonly discussed in the context of feature-based unification systems, namely generalization. This takes two feature structures as input and returns a feature structure which represents the common information in them. Unlike MSCD, generalization is not asymmetric, i.e. the order in which the arguments are presented does not affect the result. The generalization of (5a) and (5d) is shown in (7).

(7)  [emot_att, AGENT human, PATIENT entity]
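The contrast between the two operations can be made concrete with a small sketch. The following Python fragment (our illustration, not the ALE implementation) computes both generalization and MSCD for the flat structures (5a) and (5d), over a trimmed version of the hierarchy assumed above; re-entrancy and differing feature sets are deliberately ignored.

```python
# Generalization versus MSCD on the flat structures (5a) and (5d); a sketch
# over an assumed toy hierarchy, not the ALE code.

PARENT = {"emot_att": "event", "like": "emot_att", "hate": "emot_att",
          "animate": "entity", "human": "animate", "insect": "animate",
          "hannah": "human", "thomas": "human", "beetle": "insect"}

def ancestors(t):
    chain = [t]
    while t in PARENT:
        t = PARENT[t]
        chain.append(t)
    return chain

def join(a, b):
    """Most specific common supertype of a and b."""
    up = set(ancestors(a))
    return next(t for t in ancestors(b) if t in up)

def meet(a, b):
    """The more specific of two comparable types, or None if incompatible."""
    if a in ancestors(b): return b
    if b in ancestors(a): return a
    return None

def generalization(fs1, fs2):
    """Symmetric: keep only what the two structures have in common (cf. (7))."""
    (t1, f1), (t2, f2) = fs1, fs2
    return (join(t1, t2), {f: join(v, f2[f]) for f, v in f1.items() if f in f2})

def mscd(c1, s2):
    """Asymmetric: the most specific generalization of c1 that still unifies
    with s2 (cf. (6)): keep c1's value where it is compatible with s2's, and
    otherwise generalize it just far enough to become compatible."""
    (t1, f1), (t2, f2) = c1, s2
    feats = {f: (v if f not in f2 or meet(v, f2[f]) else join(v, f2[f]))
             for f, v in f1.items()}
    return (t1 if meet(t1, t2) else join(t1, t2), feats)

fs_5a = ("like", {"AGENT": "hannah", "PATIENT": "beetle"})   # (5a)
fs_5d = ("hate", {"AGENT": "thomas", "PATIENT": "entity"})   # (5d)

print(mscd(fs_5a, fs_5d))
# ('emot_att', {'AGENT': 'human', 'PATIENT': 'beetle'})   -- cf. (6)
print(generalization(fs_5a, fs_5d))
# ('emot_att', {'AGENT': 'human', 'PATIENT': 'entity'})   -- cf. (7)
```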

It can be seen from this example that the MSCD result contains more information than the generalization result. Informally we can say that it seems to reflect the common information between the two inputs after the parallelism-dependent anaphor in the second sentence has been resolved. The reason it is safe to use MSCD in this context is precisely because its use in a list structure guarantees


that the pronoun in the second sentence will be resolved to beetle. In fact the result of MSCD in this case is exactly the result we would get if we were to perform the generalization of the resolved sentences and, as a representation of what the two have in common, it does seem that this is more desirable than the generalization of the pre-resolved forms.

If we turn to other examples, however, we discover that MSCD does not always give the best results. The discourse in (8) must receive a constituent structure where the second and third clauses are combined to form a contrast pair and then this contrast pair combines with the first sentence to form a list. (Prüst has a separate rule to build contrast pairs but the use of MSCD is the same as in the list rule.)

(8) Hannah likes ants. Thomas likes bees but Jessy hates them.

(9)  [like, AGENT hannah, PATIENT insect]
       [like, AGENT hannah, PATIENT ant]
       [emot_att, AGENT human, PATIENT bee]
         [like, AGENT thomas, PATIENT bee]
         [hate, AGENT jessy, PATIENT entity]

The tree in (9) demonstrates the required structure and also shows on the mother and intermediate nodes what the results of MSCD would be. As we can see, where elements of the first argument of MSCD are more specific than the corresponding elements in the second, then the more specific one occurs in the result. Here, this has the effect that the structure [like, AGENT hannah, PATIENT insect] is somehow claimed to be common ground between all three constituents even though this is clearly not the case.

Our solution to this problem is to dispense with the MSCD operation and to use generalization instead. However, we do propose that generalization should take inputs whose parallelism-dependent anaphors have already been resolved.¹ In the case of the combination of (5a) and (5d), this will give

exactly the same result as MSCD gave (i.e. (6)), but for the example in (8) we will get different results, as the tree in (10) shows. (Notice that the representation of the third sentence is one where the anaphor is resolved.) The resulting generalization, [emot_att, AGENT human, PATIENT insect], is a much more plausible representation of the common information between the three DCUs than the results of MSCD.

(10)  [emot_att, AGENT human, PATIENT insect]
        [like, AGENT hannah, PATIENT ant]
        [emot_att, AGENT human, PATIENT bee]
          [like, AGENT thomas, PATIENT bee]
          [hate, AGENT jessy, PATIENT bee]

¹ As described in the next section, we use priority union to resolve these anaphors in both lists and contrasts. The use of generalization as a step towards checking that there is sufficient common ground is subsequent to the use of priority union as the resolution mechanism.

2.2 Resolution of Parallel Anaphors

We have said that MSCD plays two roles in Prüst's rules and we have shown how its function in calculating the value of SCHEMA can be better served by using the generalization operation instead. We turn now to the composite operation indicated in (2) by the formula ((C1 ⋄ S2) ⊓ S2). This composite operation calculates MSCD and then unifies it back in with the second of its arguments in order to resolve any parallelism-dependent anaphors that might occur in the second DCU. In the discussion that follows, we will refer to the first DCU in the list rule as the source and to the second DCU as the target (because it contains a parallelism-dependent anaphor which is the target of our attempt to resolve that anaphor).

In our ALE implementation we replace Prüst's composite operation by an operation which has occasionally been proposed as an addition to feature-based unification systems and which is usually referred to either as default unification or as priority union.² Assumptions about the exact definition of this operation vary, but an intuitive description of it is that it is an operation which takes two feature structures and produces a result which is a merge of the information in the two inputs. However, the information in one of the feature structures is "strict" and cannot be lost or overridden while the information in the other is defeasible. The operation is a kind of union where the information in the strict structure takes priority over that in the

Trang 4

default structure, hence our preference to refer to it by the name priority union. Below we demonstrate the results of priority union for the examples in (1a)-(1c). Note that the target is the strict structure and the source is the defeasible one.

² See, for example, Bouma (1990), Calder (1990), Carpenter (1994), Kaplan (1987).

(11) Hannah likes beetles. So does Thomas.
     Source: 5a
     Target: 5b
     Priority Union: [like, AGENT thomas, PATIENT beetle]

(12) Hannah likes beetles. She also likes caterpillars.
     Source: 5a
     Target: 5c
     Priority Union: [like, AGENT hannah, PATIENT caterpillar]

(13) Hannah likes beetles. Thomas hates them.
     Source: 5a
     Target: 5d
     Priority Union: [hate, AGENT thomas, PATIENT beetle]
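Because (11)-(13) involve no structure sharing, their results can be reproduced with a deliberately simplified sketch in which, slot by slot, the strict target value prevails over the defeasible source value. This is our own illustration under the assumed toy hierarchy, not the ALE extension described in Section 3, and it does not handle the re-entrant cases discussed below.

```python
# A simplified priority union for flat, re-entrancy-free structures: the strict
# target wins wherever it conflicts with the defeasible source.  Hierarchy and
# helper functions are our own assumptions, as before.

PARENT = {"agentive": "event", "plus_patient": "agentive", "emot_att": "plus_patient",
          "like": "emot_att", "hate": "emot_att",
          "animate": "entity", "human": "animate", "insect": "animate",
          "female": "human", "hannah": "female", "thomas": "human",
          "beetle": "insect", "caterpillar": "insect"}

def ancestors(t):
    chain = [t]
    while t in PARENT:
        t = PARENT[t]
        chain.append(t)
    return chain

def meet(a, b):
    """Type unification: the more specific of two comparable types, else None."""
    if a in ancestors(b): return b
    if b in ancestors(a): return a
    return None

def priority_union_flat(target, source):
    """Merge the defeasible source into the strict target; on conflict the
    target value survives and the source value is simply discarded."""
    (t_type, t_feats), (s_type, s_feats) = target, source
    out = dict(s_feats)                                # start from the source
    for f, tv in t_feats.items():
        sv = s_feats.get(f)
        out[f] = (meet(tv, sv) or tv) if sv else tv    # strict value prevails
    return (meet(t_type, s_type) or t_type, out)

source_5a = ("like", {"AGENT": "hannah", "PATIENT": "beetle"})               # (5a)
targets = {"(11)": ("agentive", {"AGENT": "thomas"}),                        # (5b)
           "(12)": ("like", {"AGENT": "female", "PATIENT": "caterpillar"}),  # (5c)
           "(13)": ("hate", {"AGENT": "thomas", "PATIENT": "entity"})}       # (5d)
for name, target in targets.items():
    print(name, priority_union_flat(target, source_5a))
# (11) ('like', {'AGENT': 'thomas', 'PATIENT': 'beetle'})
# (12) ('like', {'AGENT': 'hannah', 'PATIENT': 'caterpillar'})
# (13) ('hate', {'AGENT': 'thomas', 'PATIENT': 'beetle'})
```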

For these examples priority union gives us exactly the same results as Prüst's composite operation. We use a definition of priority union provided by Carpenter (1994) (although note that his name for the operation is "credulous default unification"). It is discussed in more detail in Section 3. The priority union of a target T and a source S is defined as a two-step process: first calculate a maximal feature structure S' such that S' ⊑ S, and then unify the new feature structure with T.

This is very similar to Prüst's composite operation but there is a significant difference, however. For Prüst there is a requirement that there should always be a unique MSCD, since he also uses MSCD to calculate the common ground as a test for parallelism and there must only be one result for that purpose. By contrast, we have taken Carpenter's definition of credulous default unification and this can return more than one result. We have strong reasons for choosing this definition even though Carpenter does define a "skeptical default unification" operation which returns only one result. Our reasons for preferring the credulous version arise from examples of VP-ellipsis which exhibit an ambiguity whereby both a "strict" and a "sloppy" reading are possible. For example, the second sentence in (14) has two possible readings which can be glossed as "Hannah likes Jessy's brother" (the strict reading) and "Hannah likes her own brother" (the sloppy reading).

(14) Jessy likes her brother. So does Hannah.

The situations where the credulous version of the operation will return more than one result arise from structure sharing in the defeasible feature structure, and it turns out that these are exactly the places where we would need to get more than one result in order to get the strict/sloppy ambiguities. We illustrate below:

(15) Jessy likes her brother. So does Hannah.

     Source:  [like, AGENT [1] jessy, PATIENT [brother, BROTHER-OF [1]]]

     Target:  [agentive, AGENT hannah]

     Priority Union:
              [like, AGENT [1] hannah, PATIENT [brother, BROTHER-OF [1]]]
              [like, AGENT hannah, PATIENT [brother, BROTHER-OF jessy]]

Here priority union returns two results, one where the structure-sharing information in the source has been preserved and one where it has not. As the example demonstrates, this gives the two readings required. By contrast, Carpenter's skeptical default unification operation and Prüst's composite operation return only one result.

2.3 Higher Order Unification

There are similarities between our implementation of Prüst's grammar and the account of VP-ellipsis described by Dalrymple, Shieber and Pereira (1991) (henceforth DSP). DSP give an equational characterization of the problem of VP-ellipsis where the interpretation of the target phrase follows from an initial step of solving an equation with respect to the source phrase. If a function can be found such that applying that function to the source subject results in the source interpretation, then an application of that function to the target subject will yield the resolved interpretation for the target. The method for solving such equations is "higher-order unification". (16) shows all the components of the interpretation of the example in (11).


(16) Hannah likes beetles. So does Thomas.

     Source:      like(hannah, beetle)
     Target (T):  P(thomas)
     Equation:    P(hannah) = like(hannah, beetle)
     Solution:    P = λx.like(x, beetle)
     Apply to T:  like(thomas, beetle)

A prerequisite to the DSP procedure is the establishment of parallelism between source and target and the identification of parallel subparts. For example, for (16) it is necessary both that the two clauses Hannah likes beetles and So does Thomas should be parallel and that the element hannah should be identified as a parallel element. DSP indicate parallel elements in the source by means of underlines as shown in (16). An underlined element in the source is termed a 'primary occurrence' and DSP place a constraint on solutions to equations requiring that primary occurrences be abstracted. Without the identification of hannah as a primary occurrence in (16), other equations deriving from the source might be possible, for example (17):

(17) a. P(beetle) = like(hannah, beetle)
     b. P(like) = like(hannah, beetle)

The DSP analysis of our strict/sloppy example in (14) is shown in (18). The ambiguity follows from the fact that there are two possible solutions to the equation on the source: the first solution involves abstraction of just the primary occurrence of jessy, while the second solution involves abstraction of both the primary and the secondary occurrences. When applied to the target these solutions yield the two different interpretations:

(18) Jessy likes her brother. So does Hannah.

     Source:       like(jessy, brother-of(jessy))
     Target:       P(hannah)
     Equation:     P(jessy) = like(jessy, brother-of(jessy))
     Sol. 1 (S1):  P = λx.like(x, brother-of(jessy))
     Sol. 2 (S2):  P = λx.like(x, brother-of(x))
     Apply S1:     like(hannah, brother-of(jessy))
     Apply S2:     like(hannah, brother-of(hannah))
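The way the two solutions arise can be sketched mechanically: every solution must abstract the primary occurrence of the parallel element, and may additionally abstract any subset of its secondary occurrences. The following Python fragment is our own illustration under a simple nested-tuple term representation, not DSP's higher-order unification procedure.

```python
# A sketch of how the strict/sloppy readings of (18) fall out of equation
# solving under DSP's primary-occurrence constraint.

from itertools import chain, combinations

def occurrences(term, atom, path=()):
    """Paths at which `atom` occurs inside a nested-tuple term."""
    if term == atom:
        return [path]
    if isinstance(term, tuple):
        return [p for i, sub in enumerate(term)
                  for p in occurrences(sub, atom, path + (i,))]
    return []

def substitute(term, paths, value, path=()):
    """Replace the subterms at `paths` by `value`."""
    if path in paths:
        return value
    if isinstance(term, tuple):
        return tuple(substitute(sub, paths, value, path + (i,))
                     for i, sub in enumerate(term))
    return term

def solutions(source, parallel, primary):
    """All abstractions P with P(parallel) = source that abstract the primary
    occurrence; each is returned as a function ready to apply to a target."""
    secondary = [p for p in occurrences(source, parallel) if p != primary]
    for extra in chain.from_iterable(combinations(secondary, k)
                                     for k in range(len(secondary) + 1)):
        body = substitute(source, {primary} | set(extra), "x")
        yield lambda target, b=body: substitute(b, occurrences(b, "x"), target)

source = ("like", "jessy", ("brother-of", "jessy"))   # like(jessy, brother-of(jessy))
for apply_P in solutions(source, "jessy", primary=(1,)):
    print(apply_P("hannah"))
# ('like', 'hannah', ('brother-of', 'jessy'))    -- strict reading
# ('like', 'hannah', ('brother-of', 'hannah'))   -- sloppy reading
```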

DSP claim that a significant attribute of their account is that they can provide the two readings in strict/sloppy ambiguities without having to postulate ambiguity in the source. They claim this as a virtue which is matched by few other accounts of VP-ellipsis. We have shown here, however, that an account which uses priority union also has no need to treat the source as ambiguous.

Our results and DSP's also converge where the treatment of cascaded ellipsis is concerned. For the example in (19) both accounts find six readings although two of these are either extremely implausible or even impossible.

(19) John revised his paper before the teacher did, and Bill did too.

DSP consider ways of reducing the number of readings and, similarly, we are currently exploring a potential solution whereby some of the re-entrancies in the source are required to be transmitted to the result of priority union.

There are also similarities between our account and the DSP account with respect to the establishment of parallelism. In the DSP analysis the determination of parallelism is separate from and a prerequisite to the resolution of ellipsis. However, they do not actually formulate how parallelism is to be determined. In our modification of Prüst's account we have taken the same step as DSP in that we separate out the part of the feature structure used to determine parallelism from the part used to resolve ellipsis. In the general spirit of Prüst's analysis, however, we have taken one step further down the line towards determining parallelism by postulating that calculating the generalization of the source and target is a first step towards showing that parallelism exists. The further condition that Prüst imposes, that the common ground should be a characteristic generalization, would conclude the establishment of parallelism. We are currently not able to define the notion of characteristic generalization, so like DSP we do not have enough in our theory to fully implement the parallelism requirement. In contrast to the DSP account, however, our feature-structural approach does not involve us having to explicitly pair up the component parts of source and target, nor does it require us to distinguish primary from secondary occurrences.

2.4 Parallelism

In the DSP approach to VP-ellipsis, and in our approach too, the emphasis has been on semantic parallelism. It has often been pointed out, however, that there can be an additional requirement of syntactic parallelism (see for example Kehler 1993 and Asher 1993). Kehler (1993) provides a useful discussion of the issue and argues convincingly that whether syntactic parallelism is required depends on the coherence relation involved. As the examples in (20) and (21) demonstrate, semantic parallelism is sufficient to establish a relation like contrast but it is not sufficient for building a coherent list.

(20) The problem was looked into by John, but no-one else did.

(21) *This problem was looked into by John, and Bill did too.

For a list to be well-formed both syntactic and semantic parallelism are required:


(22) John looked into this problem, and Bill did too.

In the light of Kehler's claims, it would seem that a more far-reaching implementation of our priority union account would need to specify how the constraint of syntactic parallelism might be implemented for those constructions which require it. An HPSG-style sign, containing as it does all types of linguistic information within the same feature structure, would lend itself well to an account of syntactic parallelism. If we consider that the DTRS feature in the sign for the source clause contains the entire parse tree, including the node for the VP which is the syntactic antecedent, then ways to bring together the source VP and the target begin to suggest themselves. We have at our disposal both unification to achieve re-entrancy and the option to use priority union over syntactic subparts of the sign. In the light of this, we are confident that it would be possible to articulate a more elaborate account of VP-ellipsis within our framework and that priority union would remain the operation of choice to achieve the resolution.

3 Extensions to ALE

In the previous sections we showed that Prüst's MSCD operation would more appropriately be replaced by the related operations of generalization and priority union. We have added generalization and priority union to the ALE system and in this section we discuss our implementation. We have provided the new operations as a complement to the definite clause component of ALE. We chose this route because we wanted to give the grammar writer explicit control of the point at which the operations were invoked. ALE adopts a simple PROLOG-like execution strategy rather than the more sophisticated control schemes of systems like CUF and TFS (Manandhar 1993). In principle it might be preferable to allow the very general deduction strategies which these other systems support, since they have the potential to support a more declarative style of grammar-writing. Unfortunately, priority union is a non-monotonic operation, and the consequences of embedding such operations in a system providing for flexible execution strategies are largely unexplored. At least at the outset it seems preferable to work within a framework in which the grammar writer is required to take some of the responsibility for the order in which operations are carried out. Ultimately we would hope that much of this load could be taken by the system, but as a tool for exploration ALE certainly suffices.

3.1 Priority Union in ALE

We use the following definition of priority union, based on Carpenter's definition of credulous default unification:

(23) punion(T,S) = { unify(T,S') | S' ⊑ S is maximal such that unify(T,S') is defined }

punion(T,S) computes the priority union of T (target; the strict feature structure) with S (source; the defeasible feature structure). This definition relies on Moshier's (1988) definition of atomic feature structures, and on the technical result that any feature structure can be decomposed into a unification of a unique set of atomic feature structures. Our implementation is a simple proceduralization of Carpenter's declarative definition. First we decompose the default feature structure into a set of atomic feature structures, then we search for the maximal subsets required by the definition.

We illustrate our implementation of priority union in ALE with the example in (15): Source is the default input, and Target is the strict input. The hierarchy we assume is the same as shown in (3) and (4). Information about how features are associated with types is as follows:

• The type agentive introduces the feature AGENT with range type human.
• The type plus-patient introduces the feature PATIENT with range type entity.
• The type brother introduces the feature BROTHER-OF with range type human.
• The types jessy and hannah introduce no features.

In order to show the decomposition into atomic feature structures we need a notation to represent paths and types. We show paths like this: PATIENT|BROTHER-OF, and in order to stipulate that the PATIENT feature leads to a structure of type brother, we include type information in this way: (PATIENT/brother)|(BROTHER-OF/human). We introduce a special feature (*) to allow specification of the top-level type of the structure. The structures in (15) decompose into the following atomic components.

(24) Default input:
     (AGENT/jessy)                              (D1)
     (PATIENT/brother)|(BROTHER-OF/jessy)       (D2)
     AGENT ≐ PATIENT|BROTHER-OF                 (D3)
     (*/like)                                   (D4)

     Strict input:
     (AGENT/hannah)                             (S1)
     (*/agentive)                               (S2)

Given the type hierarchy, the expressions above expand to the following typed feature structures:


(25) Default input:

     (D1)  [agentive, AGENT jessy]

     (D2)  [plus-patient, AGENT human, PATIENT [brother, BROTHER-OF jessy]]

     (D3)  [plus-patient, AGENT [1] human, PATIENT [brother, BROTHER-OF [1]]]

     (D4)  [like, AGENT human, PATIENT entity]

     Strict input:

     [agentive, AGENT hannah]

We can now carry out the following steps in order to generate the priority union:

1. Add (D4) to the strict input. It cannot conflict.
2. Note that it is impossible to add (D1) to the strict input.
3. Non-deterministically add either (D2) or (D3) to the strict input.
4. Note that the results are maximal in each case because it is impossible to add both (D2) and (D3) without causing a clash between the disjoint atomic types hannah and jessy.
5. Assemble the results into feature structures. If we have added (D3) the result will be (26) and if we have added (D2) the result will be (27).

(26) Result 1:
     [like, AGENT [1] hannah, PATIENT [brother, BROTHER-OF [1]]]

(27) Result 2:
     [like, AGENT hannah, PATIENT [brother, BROTHER-OF jessy]]

In order to make this step-by-step description into an algorithm we have used a breadth-first search routine with the property that the largest sets are generated first. We collect answers in the order in which the search comes upon them and carry out subsumption checks to ensure that all the answers which will be returned are maximal. These checks reduce to checks on subset inclusion, which can be reasonably efficient with suitable set representations. Consistency checking is straightforward because the ALE system manages type information in a manner which is largely transparent to the user. Unification of ALE terms is defined in such a way that if adding a feature to a term results in a term of a new type, then the representation of the structure is specialized to reflect this. Since priority union is non-deterministic we will finish with a set of maximal consistent subsets. Each of these subsets can be converted directly into ALE terms using ALE's built-in predicate add_to/5. The resulting set of ALE terms is the (disjunctive) result of priority union.

In general we expect priority union to be a computationally expensive operation, since we cannot exclude pathological cases in which the system has to search an exponential number of subsets in the search for the maximal consistent elements which are required. In the light of this it is fortunate that our current discourse grammars do not require frequent use of priority union. Because of the inherent complexity of the task we have favoured correctness and clarity at the possible expense of efficiency. Once it becomes established that priority union is a useful operation we can begin to explore the possibilities for faster implementations.
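As an illustration of the procedure just described (and emphatically not the ALE code itself), the following Python sketch takes the atomic components of (24) as given, with (D4) shown in its expanded form from (25), checks consistency against a toy hierarchy and a small feature-introduction table of our own devising, and returns the maximal consistent extensions of the strict input, reproducing the two results in (26) and (27).

```python
# Credulous default unification over hand-supplied atomic components: a sketch,
# with an assumed hierarchy and feature-introduction table.

from itertools import combinations

PARENT = {"agentive": "event", "plus_patient": "agentive", "emot_att": "plus_patient",
          "like": "emot_att", "animate": "entity", "human": "animate",
          "female": "human", "hannah": "female", "jessy": "female", "brother": "human"}
INTRO = {"AGENT": "agentive", "PATIENT": "plus_patient", "BROTHER-OF": "brother"}

def ancestors(t):
    chain = [t]
    while t in PARENT:
        t = PARENT[t]
        chain.append(t)
    return chain

def meet(a, b):
    """Type unification; None acts as 'no information yet'."""
    if a is None: return b
    if b is None: return a
    if a in ancestors(b): return b
    if b in ancestors(a): return a
    raise ValueError("incompatible atomic types")

def solve(assignments, equations):
    """Merge path:type assignments under path equations; return a path -> type
    map (with feature-introduction specialization), or None on inconsistency."""
    rep = {}                                   # naive union-find over paths
    def find(p):
        while rep.get(p, p) != p:
            p = rep[p]
        return p
    for p, q in equations:
        rep[find(p)] = find(q)
    paths = {p for p, _ in assignments} | {p for eq in equations for p in eq}
    types = {}
    try:
        for p, t in assignments:
            types[find(p)] = meet(types.get(find(p)), t)
        for p in paths:                        # a feature specializes its mother
            if p:
                mother = find(p[:-1])
                types[mother] = meet(types.get(mother), INTRO[p[-1]])
        return {p: types.get(find(p)) for p in paths}
    except ValueError:
        return None

# Strict input (the target of (15)) and the defeasible atomic components (24),
# with D4 expanded as in (25).
STRICT = ([((), "agentive"), (("AGENT",), "hannah")], [])
D = {"D1": ([(("AGENT",), "jessy")], []),
     "D2": ([(("PATIENT",), "brother"), (("PATIENT", "BROTHER-OF"), "jessy")], []),
     "D3": ([], [(("AGENT",), ("PATIENT", "BROTHER-OF"))]),
     "D4": ([((), "like"), (("AGENT",), "human"), (("PATIENT",), "entity")], [])}

maximal = []
for k in range(len(D), -1, -1):                # largest candidate sets first
    for names in combinations(sorted(D), k):
        assigns = STRICT[0] + [a for n in names for a in D[n][0]]
        eqs = STRICT[1] + [e for n in names for e in D[n][1]]
        result = solve(assigns, eqs)
        if result is not None and not any(set(names) <= kept for kept, _ in maximal):
            maximal.append((set(names), result))

for names, result in maximal:
    print(sorted(names), {"|".join(p) or "*": t for p, t in sorted(result.items())})
# ['D2', 'D4'] {'*': 'like', 'AGENT': 'hannah',
#               'PATIENT': 'brother', 'PATIENT|BROTHER-OF': 'jessy'}    -- cf. (27)
# ['D3', 'D4'] {'*': 'like', 'AGENT': 'hannah',
#               'PATIENT': 'brother', 'PATIENT|BROTHER-OF': 'hannah'}   -- cf. (26)
```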

3.2 Generalization in ALE

The abstract definition of generalization stipulates that the generalization of two categories is the largest category which subsumes both of them. Moshier (1988) has shown that generalization can be defined as the intersection of sets of atomic feature structures. In the previous section we outlined how an ALE term can be broken up into atomic feature structures. All that is now required is the set intersection operation, with the addition that we also need to cater for the possibility that atomic types may have a consistent generalization.

1. For P and Q complex feature structures,
   Gen(P,Q) =def { Path : C | Path : A ∈ P and Path : B ∈ Q },
   where C is the most specific type which subsumes both A and B.

2. For A and B atomic types, Gen(A,B) =def C, where C is the most specific type which subsumes both A and B.

In ALE there is always a unique type for the generalization. We have made a small extension to the ALE compiler to generate a table of type generalizations to assist in the (relatively) efficient computation of generalization. To illustrate, we show how the generalization of the two feature structures in (28) and (29) is calculated.


(28) Hannah likes ants
     [like, AGENT hannah, PATIENT ant]

(29) Jessy laughs
     [laugh, AGENT jessy]

These decompose into the atomic components shown in (30) and (31) respectively.

(30) (*/like)
     (AGENT/hannah)
     (PATIENT/ant)

(31) (*/laugh)
     (AGENT/jessy)

These have only the AGENT path in common, although with different values, and therefore the generalization is the feature structure corresponding to this path but with the generalization of the atomic types hannah and jessy as value:

(32) [agentive, AGENT female]
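A minimal sketch of this set-intersection view of generalization (again ours, not the ALE extension) reproduces (32) from the atomic components in (30) and (31); the position of laugh in the hierarchy is an assumption.

```python
# Generalization as intersection of path:type atoms, joining types pathwise.
# The hierarchy, including the placement of laugh under agentive, is assumed.

PARENT = {"agentive": "event", "plus_patient": "agentive", "emot_att": "plus_patient",
          "like": "emot_att", "laugh": "agentive",
          "animate": "entity", "human": "animate", "female": "human",
          "hannah": "female", "jessy": "female", "insect": "animate", "ant": "insect"}

def ancestors(t):
    chain = [t]
    while t in PARENT:
        t = PARENT[t]
        chain.append(t)
    return chain

def join(a, b):
    """Most specific type subsuming both a and b (always unique in ALE)."""
    up = set(ancestors(a))
    return next(t for t in ancestors(b) if t in up)

def generalize(p_atoms, q_atoms):
    """Intersect two sets of path:type atoms, joining the types pathwise."""
    return {path: join(t, q_atoms[path]) for path, t in p_atoms.items()
            if path in q_atoms}

fs28 = {"*": "like", "AGENT": "hannah", "PATIENT": "ant"}    # (30)
fs29 = {"*": "laugh", "AGENT": "jessy"}                      # (31)
print(generalize(fs28, fs29))
# {'*': 'agentive', 'AGENT': 'female'}   -- cf. (32)
```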

4 Conclusion

In this paper we have reported on an implementation of a discourse grammar in a sign-based formalism, using Carpenter's Attribute Logic Engine (ALE). We extended the discourse grammar and ALE to incorporate the operations of priority union and generalization, operations which we use for resolving parallelism-dependent anaphoric expressions. We also reported on a resolution mechanism for verb phrase ellipsis which yields sloppy and strict readings through priority union, and we claimed some advantages of this approach over the use of higher-order unification.

The outstanding unsolved problem is that of establishing parallelism. While we believe that generalization is an appropriate formal operation to assist in this, we still stand in dire need of a convincing criterion for judging whether the generalization of two categories is sufficiently informative to successfully establish parallelism.

Acknowledgements

This work was supported by the EC-funded project LRE-61-062 "Towards a Declarative Theory of Discourse", and a longer version of the paper is available in Brew et al. (1994). We have profited from discussions with Jo Calder, Dick Crouch, Joke Dorrepaal, Claire Gardent, Janet Hitzeman, David Millward and Hub Prüst. Andreas Schöter helped with the implementation work. The Human Communication Research Centre (HCRC) is supported by the Economic and Social Research Council (UK).

References

Asher, N. (1993). Reference to Abstract Objects in Discourse. Dordrecht: Kluwer.

Bouma, G. (1990). Defaults in Unification Grammar. In Proceedings of the 28th ACL, pp. 165-172, University of Pittsburgh.

Brew, C. et al. (1994). Discourse Representation. Deliverable B+ of LRE-61-062: Towards a Declarative Theory of Discourse.

Calder, J. H. R. (1990). An Interpretation of Paradigmatic Morphology. PhD thesis, Centre for Cognitive Science, University of Edinburgh.

Carpenter, B. (1993). ALE: The Attribute Logic Engine User's Guide. Laboratory for Computational Linguistics, Carnegie Mellon University, Pittsburgh, Pa.

Carpenter, B. (1994). Skeptical and credulous default unification with applications to templates and inheritance. In T. Briscoe et al., eds., Inheritance, Defaults, and the Lexicon, pp. 13-37. Cambridge: Cambridge University Press.

Dalrymple, M., S. Shieber and F. Pereira (1991). Ellipsis and higher-order unification. Linguistics and Philosophy 14(4), 399-452.

Kaplan, R. M. (1987). Three seductions of computational psycholinguistics. In P. J. Whitelock et al., eds., Linguistic Theory and Computer Applications, pp. 149-188. London: Academic Press.

Kehler, A. (1993). The effect of establishing coherence in ellipsis and anaphora resolution. In Proceedings of the 31st ACL, pp. 62-69, Ohio State University.

Manandhar, S. (1993). CUF in context. In J. Dörre, ed., Computational Aspects of Constraint-Based Linguistics Description. DYANA-2 Deliverable.

Moshier, D. (1988). Extensions to Unification Grammar for the Description of Programming Languages. PhD thesis, Department of Mathematics, University of California, Los Angeles.

Polanyi, L. and R. Scha (1984). A syntactic approach to discourse semantics. In Proceedings of the 10th Coling and the 22nd ACL, pp. 413-419, Stanford University.

Pollard, C. and I. A. Sag (in press). Head-Driven Phrase Structure Grammar. Chicago, Ill.: University of Chicago Press and CSLI Publications.

Prüst, H. (1992). On Discourse Structuring, VP Anaphora and Gapping. PhD thesis, Universiteit van Amsterdam, Amsterdam.

Prüst, H., R. Scha and M. van den Berg (1994). Discourse grammar and verb phrase anaphora. Linguistics and Philosophy. To appear.

Scha, R. and L. Polanyi (1988). An augmented context free grammar for discourse. In Proceedings of the 12th Coling, pp. 573-577, Budapest.
