
HPSG-Style Underspecified Japanese Grammar with Wide Coverage

MITSUISHI Yutaka†, TORISAWA Kentaro†, TSUJII Jun'ichi†*

†Department of Information Science, Graduate School of Science, University of Tokyo
*CCL, UMIST, U.K.

Abstract

This paper describes a wide-coverage Japanese grammar based on HPSG. The aim of this work is to see the coverage and accuracy attainable using an underspecified grammar. Underspecification, allowed in a typed feature structure formalism, enables us to write down a wide-coverage grammar concisely. The grammar we have implemented consists of only 6 ID schemata, 68 lexical entries (assigned to functional words), and 63 lexical entry templates (assigned to parts of speech (POSs)). Furthermore, word-specific constraints such as subcategorization of verbs are not fixed in the grammar. However, this grammar can generate parse trees for 87% of the 10000 sentences in the Japanese EDR corpus. The dependency accuracy is 78% when a parser uses the heuristic that every bunsetsu¹ is attached to the nearest possible one.

1 Introduction

Our purpose is to design a practical Japanese grammar based on HPSG (Head-driven Phrase Structure Grammar) (Pollard and Sag, 1994), with wide coverage and reasonable accuracy for syntactic structures of real-world texts. In this paper, "coverage" refers to the percentage of input sentences for which the grammar returns at least one parse tree, and "accuracy" refers to the percentage of bunsetsus which are attached correctly.

To realize wide coverage and reasonable accuracy, the following steps were taken:

A) At first, we prepared a linguistically valid but coarse grammar with wide coverage.

B) We then refined the grammar in regard to accuracy, using practical heuristics which are not linguistically motivated.

As for A), the first grammar we have constructed actually consists of only 68 lexical entries (LEs) for some functional words², 63 lexical entry templates (LETs) for POSs³, and 6 ID schemata. Nevertheless, the coverage of our grammar was 92% for the Japanese corpus in the EDR Electronic Dictionary (EDR, 1996), mainly due to underspecification, which is allowed in HPSG and does not always require detailed grammar descriptions.

* This research is partially funded by the project of JSPS (JSPS-RFTF96P00502).
¹ A bunsetsu is a common unit when syntactic structures in Japanese are discussed.

As for B), in order to improve accuracy, the grammar should restrict ambiguity as much as possible. For this purpose, the grammar needs more constraints in itself. To reduce ambiguity, we added additional feature structures, which may not be linguistically valid but are empirically correct, as constraints to i) the original LEs and LETs, and ii) the ID schemata.

The rest of this paper describes the architecture of our Japanese grammar (Section 2), refinement of our grammar (Section 3), experimental results (Section 4), and discussion regarding errors (Section 5).

2 Architecture of Japanese Grammar

In this section, we describe the architecture of the HPSG-style Japanese grammar we have developed. In the HPSG framework, a grammar consists of (i) immediate dominance schemata (ID schemata), (ii) principles, and (iii) lexical entries (LEs). All of them are represented by typed feature structures (TFSs) (Carpenter, 1992), the fundamental data structures of HPSG. ID schemata, corresponding to rewriting rules in CFG, are significant for constructing syntactic structures. The details of our ID schemata are discussed in Section 2.1. Principles are constraints between mother and daughter feature structures⁴. LEs, which compose the lexicon, are detailed constraints on each word.
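To make the data structure concrete, the toy sketch below renders feature structures as plain nested Python dictionaries with a naive unification routine. This is only an illustration of the idea, not the LiLFeS encoding actually used in our implementation; the example entries and feature names are invented for the illustration.

```python
# Illustrative sketch only: feature structures as nested dicts, with naive unification.
# Type hierarchies, structure sharing, and principles are omitted.

def unify(a, b):
    """Unify two feature structures (dicts of feature -> atomic value or sub-structure).

    Returns the merged structure, or None if the two structures clash.
    Atomic values unify only if equal; dicts unify feature by feature.
    """
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for feat, b_val in b.items():
            if feat in result:
                u = unify(result[feat], b_val)
                if u is None:
                    return None          # clash on a shared feature
                result[feat] = u
            else:
                result[feat] = b_val     # feature constrained only by b
        return result
    return a if a == b else None         # atomic values must match exactly

# Hypothetical, heavily simplified lexical entry for a nominative postposition:
le_ga = {"HEAD": {"POS": "postp", "FORM": "ga"}, "MARKING": "ga"}
# An underspecified constraint contributed by a schema (no FORM requirement):
schema_constraint = {"HEAD": {"POS": "postp"}}

print(unify(le_ga, schema_constraint))   # compatible -> merged structure
print(unify(le_ga, {"MARKING": "wo"}))   # clash -> None
```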

In our grammar, we do not always assign LEs to each word. Instead, we assign lexical entry templates (LETs) to POSs. The details of our LEs and LETs are discussed in Section 2.2.

² A functional word is assigned one or more LEs.
³ A POS is also assigned one or more LETs.
⁴ We omit further explanation about principles here due to limited space.


Head-complement schema: applied when a predicate subcategorizes a phrase.
    Example: Kare ga hashiru (he-SUBJ run) 'He runs.'
Head-relative schema: applied when a relative clause modifies a phrase.
    Example: Aruku hitobito 'People who walk.'
Head-marker schema: applied when a marker like a postposition marks a phrase.
    Example: Kanojo ga 'She.'
Head-adjacent schema: applied when a suffix attaches to a word or a compound word.
    Example: Iku darou (go will) 'Will go.'
Head-compound schema: applied when a compound word is constructed.
    Example: (natural language) 'Natural language.'
Head-modifier schema: applied when a phrase modifies another or when a coordinate structure is constructed.
    Example: Yukkuri tobu (slowly fly) 'Fly slowly.'

Table 1: ID schemata in our grammar

2.1 ID Schemata

Our grammar includes the 6 ID schemata shown in Table 1. Although they are similar to the ones used for English in standard HPSG, there is a fundamental difference in the treatment of relative clauses. Our grammar adopts the head-relative schema to treat relative clauses instead of the head-filler schema. More specifically, our grammar does not have SLASH features and does not use traces. Informally speaking, this is because SLASH features and traces are really necessary only when there is more than one verb between the head and the filler (e.g., Sentence (1)). But such sentences are rare in real-world corpora in Japanese. Just using a head-relative schema makes our grammar simpler and thus less ambiguous.

(1) 'The woman who Taro says that he loves.'

2.2 Lexical Entries (LEs) and Lexical Entry Templates (LETs)

Basically, we assign LETs to POSs. For example, common nouns are assigned one LET, which has general constraints that they can be complements of predicates, that they can be a compound noun with other common nouns, and so on. However, we assign LEs to some single functional words which behave in a special way. For example, the verb 'suru' can be adjacent to some nouns, unlike other ordinary verbs. The solution we have adopted is that we assign a special LE to the verb 'suru'.

Our lexicon consists of 68 LEs for some functional words, and 63 LETs for POSs. A functional word is assigned one or more LEs, and a POS is also assigned one or more LETs.

3 Refinement of our Grammar

Our goal in this section is to improve accuracy without losing coverage. Constraints to improve accuracy can also be represented by TFSs and be added to the original grammar components such as ID schemata, LEs, and LETs.

The basic idea to improve accuracy is that including descriptions for rare linguistic phenomena might make it more difficult for our system to choose the right analyses. Thus, we abandon some rare linguistic phenomena. This approach is not always linguistically valid but at least is practical for real-world corpora.

In this section, we consider some frequent linguistic phenomena, and explain how we discarded the treatment of rare linguistic phenomena in favor of frequent ones, regarding three components: (i) the postposition 'wa', (ii) relative clauses and commas, and (iii) nominal suffixes representing time. The way we abandon the treatment of rare linguistic phenomena is by introducing additional constraints in feature structures. Regarding (i) and (ii), we introduce 'pseudo-principles', which are unified with ID schemata in the same way principles are unified. Regarding (iii), we add some feature structures to LEs/LETs.

3.1 Postposition 'Wa'

The main usage of the postposition 'wa' is divided into the following two patterns⁵:

• If two PPs with the postposition 'wa' appear consecutively, we treat the first PP as a complement of a predicate just before the second PP.

• Otherwise, a PP with the postposition 'wa' is treated as the complement of the last predicate in the sentence.

⁵ These patterns are almost similar to the ones in (Kurohashi and Nagao, 1994).

Figure 1: (a) Correct / (b) incorrect parse tree for Sentence (2); (c) correct / (d) incorrect parse tree for Sentence (3).

Sentences (2) and (3) are examples of these patterns, respectively. The parse tree for Sentence (2) corresponds to Figure 1(a) but not to Figure 1(b), and the parse tree for Sentence (3) corresponds to Figure 1(c) but not to Figure 1(d).

(2) Taro-TOPIC go but Jiro-TOPIC go-NEG
    'Though Taro goes, Jiro does not go.'

(3) Tokai wa hito ga ookute sawagashii
    city-TOPIC people-SUBJ many noisy
    'A city is noisy because there are many people.'

Although there are exceptions to the above patterns (e.g., Sentence (4) and Figure 2), they are rarely observed in real-world corpora. Thus, we abandon their treatment.

(4) ability-TOPIC missing but guts-SUBJ exist
    'Though he does not have ability, he has guts.'

For these patterns we introduced two features, WA and P_WA. Both of them are binary features, as follows:

WA +/-: the phrase contains a / no 'wa'.
P_WA +/-: the PP is / isn't marked by 'wa'.

We then introduced a 'pseudo-principle' for 'wa' in a disjunctive form as below⁶:

(A) When applying the head-complement schema, also apply a constraint that relates the head daughter's WA value, the complement daughter's P_WA value, and the mother's WA value through the DCP wa_hc, where

wa_hc(-, -, -). wa_hc(+, -, +). wa_hc(-, +, +).

(B) When applying the head-modifier schema, also apply a corresponding constraint that relates the two daughters' WA values through the DCP wa_hm, and so on.

⁶ wa_hc and wa_hm are DCPs, which are also executed when the pseudo-principle is applied.

Figure 2: Correct parse tree for Sentence (4).

This treatment prunes parse trees like those in Figure 1(b, d) as follows:

• Figure 1(b)
1) At (*), the head-complement schema should be applied, and (A) of the 'pseudo-principle' should also be applied.
2) Since the phrase 'iku kedo ashita wa ika nai' contains a 'wa', the head daughter's WA value is +.
3) Since the PP 'Kyou wa' is marked by 'wa', its P_WA value is +.
4) wa_hc(+, +, _) has no matching clause and fails.

• Figure 1(d)
1) At (#), the head-modifier schema should be applied, and (B) of the 'pseudo-principle' should also be applied.
2) Since the phrase 'Tokai wa hito ga ookute' contains a 'wa', its WA value is +.
3) Since the phrase 'sawagashii' contains no 'wa', its WA value is -.
4) wa_hm(+, -) fails.
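The effect of the pseudo-principle can be mimicked with ordinary boolean checks. The sketch below does so under two assumptions reconstructed from the partially garbled listing above: that wa_hc takes (head daughter's WA, complement daughter's P_WA, mother's WA), and that wa_hm rejects a 'wa'-containing modifier attached to a head without 'wa'.

```python
# Sketch of the 'wa' pseudo-principle as plain Python checks. The wa_hc clauses are
# taken from the listing above; the argument order and the single wa_hm condition are
# assumptions reconstructed from the pruning walkthrough.

WA_HC_CLAUSES = {          # wa_hc(head_WA, comp_P_WA, mother_WA)
    ("-", "-", "-"),
    ("+", "-", "+"),
    ("-", "+", "+"),
}

def wa_hc(head_wa, comp_p_wa, mother_wa):
    """Succeeds only for the listed combinations; fails e.g. when the head phrase
    already contains 'wa' and the complement PP is also wa-marked."""
    return (head_wa, comp_p_wa, mother_wa) in WA_HC_CLAUSES

def wa_hm(modifier_wa, head_wa):
    """Assumed condition: a modifier containing 'wa' may not modify a head without
    'wa' (this is what prunes Figure 1(d))."""
    return not (modifier_wa == "+" and head_wa == "-")

# Figure 1(b): head phrase contains 'wa' (+), complement PP is wa-marked (+)
print(any(wa_hc("+", "+", m) for m in ("+", "-")))   # False -> schema application pruned
# Figure 1(d): modifier contains 'wa' (+), head contains no 'wa' (-)
print(wa_hm("+", "-"))                                # False -> pruned
```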

3.2 Relative Clauses and Commas

Relative clauses have a tendency to contain no commas. In Sentence (5), the PP 'Nippon de,' is a complement of the main verb 'atta', not a complement of 'umareta' in the relative clause (Figure 3(a)), though 'Nippon de' would preferably attach to 'umareta' if the comma after 'de' did not exist (Figure 3(b)). We, therefore, abandon the treatment of relative clauses containing a comma.

(5) Japan-LOC recently be-born-PAST baby-GOAL meet-PAST
    'In Japan, I met a baby who was born recently.'

Figure 3: (a) Correct parse tree for Sentence (5); (b) correct parse tree for comma-removed Sentence (5).

To treat such a tendency of relative clauses, we first introduced the TOUTEN feature⁷. The TOUTEN feature is a binary feature which takes +/- if the phrase contains a/no comma. We then introduced a 'pseudo-principle' for relative clauses as follows:

(A) When applying the head-relative schema, also apply:

[DTRS|NH-DTR|TOUTEN -]

(B) When applying other ID schemata, this pseudo-principle has no effect.

This is to make sure that parse trees for relative clauses with a comma cannot be produced.

⁷ A touten stands for a comma in Japanese.
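A minimal sketch of this check, assuming the pseudo-principle simply inspects the TOUTEN value of the non-head (relative clause) daughter:

```python
# Minimal sketch of the relative-clause pseudo-principle: the head-relative schema may
# only apply if the relative-clause (non-head) daughter contains no comma, i.e. its
# TOUTEN feature is '-'. Other ID schemata ignore TOUTEN.

def schema_allowed(schema_name, non_head_daughter):
    if schema_name == "head-relative":
        return non_head_daughter.get("TOUTEN", "-") == "-"
    return True     # the pseudo-principle has no effect on other ID schemata

relative_clause_with_comma = {"TOUTEN": "+"}
print(schema_allowed("head-relative", relative_clause_with_comma))   # False -> no parse tree
print(schema_allowed("head-marker", relative_clause_with_comma))     # True
```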

3.3 Nominal Suffixes Representing Time and Commas

Noun phrases (NPs) with nominal suffixes such as nen (year), gatsu (month), and ji (hour) represent information about time. Such NPs are sometimes used adverbially, rather than nominally. Especially, NPs with such a nominal suffix and a comma are often used adverbially (Sentence (6) and Figure 4(a)), while general NPs with a comma are used in coordinate structures (Sentence (7) and Figure 4(b)).

(6) 1995-year earthquake-SUBJ occur-PAST
    'An earthquake occurred in 1995.'

(7) Kyoto, Nara ni itta
    Kyoto Nara-GOAL go-PAST
    'I went to Kyoto and Nara.'

Figure 4: (a, b) Correct parse trees for Sentences (6) and (7), respectively.

In order to restrict the behavior of NPs with nominal time suffixes and commas to adverbial usage only, we added the following constraint to the LE of a comma constructing a coordinate structure:

[MARK|SYN|LOCAL|N-SUFFIX -]

This prohibits an NP with a nominal suffix from being marked by a comma for coordination.

4 Experiments

We implemented our parser and grammar in LiLFeS (Makino et al., 1998)⁸, a feature-structure description language developed by our group. We tested randomly selected 10000 sentences from the Japanese EDR corpus (EDR, 1996). The EDR corpus is a Japanese version of a treebank with morphological, structural, and semantic information. In our experiments, we used only the structural information, that is, parse trees. Both the parse trees from our parser and the parse trees in the EDR corpus are first converted into bunsetsu dependencies, and they are compared when calculating accuracy. Note that the internal structures of bunsetsus, e.g., structures of compound nouns, are not considered in our evaluations.

We evaluated the following grammars: (a) the original underspecified grammar, (b) (a) + the constraint for wa-marked PPs, (c) (a) + the constraint for relative clauses with a comma, (d) (a) + the constraint for nominal time suffixes with a comma, and (e) (a) + all the three constraints. We evaluated those grammars by the following three measurements:

Coverage: the percentage of the sentences that generate at least one parse tree.

Partial Accuracy: the percentage of the correct dependencies between bunsetsus (excepting the last obvious dependency) for the parsable sentences.

Total Accuracy: the percentage of the correct dependencies between bunsetsus (excepting the last dependency) over all sentences.

⁸ LiLFeS will soon be published on its homepage.


Grammar    Coverage    Partial Accuracy    Total Accuracy
(a)                    74.20%              72.61%
(b)                    77.50%              74.65%
(c)                    74.98%              73.11%
(d)                    74.41%              72.80%
(e)                    77.77%              74.65%

Table 2: Experimental results for 10000 sentences from the Japanese EDR Corpus: (a-e) are grammars respectively corresponding to Section 2 (a), Section 2 + Subsection 3.1 (b), Section 2 + Subsection 3.2 (c), Section 2 + Subsection 3.3 (d), and Section 2 + Section 3 (e).

When calculating total accuracy, the dependencies for unparsable sentences are predicted so that every bunsetsu is attached to the nearest bunsetsu. In other words, total accuracy can be regarded as a weighted average of partial accuracy and baseline accuracy.
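The sketch below shows how the three measurements relate, with the nearest-bunsetsu baseline standing in for unparsable sentences. The exact bookkeeping (which dependency is dropped, how ties are handled) is an assumption made for the illustration, not a specification of the evaluation script.

```python
def nearest_baseline(gold):
    """Baseline dependencies: every bunsetsu attaches to the one right after it."""
    return [i + 1 for i in range(len(gold))]

def evaluate(corpus):
    """corpus: list of (gold_heads, predicted_heads or None); heads[i] is the index
    of the bunsetsu that bunsetsu i attaches to, for every bunsetsu but the last."""
    parsed = ok_partial = n_partial = ok_total = n_total = 0
    for gold, predicted in corpus:
        scored_gold = gold[:-1]                   # drop the last, obvious dependency
        if predicted is not None:
            parsed += 1
            used = predicted
            ok_partial += sum(g == p for g, p in zip(scored_gold, used[:-1]))
            n_partial += len(scored_gold)
        else:
            used = nearest_baseline(gold)         # fallback for unparsable sentences
        ok_total += sum(g == p for g, p in zip(scored_gold, used[:-1]))
        n_total += len(scored_gold)
    return {"coverage": parsed / len(corpus),
            "partial_accuracy": ok_partial / max(n_partial, 1),
            "total_accuracy": ok_total / n_total}

# Two toy 4-bunsetsu sentences: the first is parsed with one wrong attachment,
# the second gets no parse tree and is scored with the nearest-bunsetsu baseline.
print(evaluate([([1, 3, 3], [1, 2, 3]),
                ([2, 2, 3], None)]))
```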

Table 2 lists the results of our experiments. Comparison of the results between (a) and (b-d) shows that all the three constraints improve partial accuracy and total accuracy with little coverage loss. And grammar (e), using the combination of the three constraints, still works with no side effect.

We also measured average parsing time per sentence for the original grammar (a) and the fully augmented grammar (e). The parser we adopted is a naive CKY-style parser. Table 3 gives the average parsing time per sentence for those two grammars. Pseudo-principles and further constraints on LEs/LETs also make parsing more time-efficient. Even though they are sometimes considered to be slow in practical application because of their heavy feature structures, we actually found them to improve speed.

In (Torisawa and Tsujii, 1996), an efficient HPSG parser is proposed, and our preliminary experiments show that the parsing time of the efficient parser is about three times shorter than that of the naive one. Thus, the average parsing time per sentence will be about 300 msec, and we believe our grammar will achieve a practical speed. Other techniques to speed up the parser are proposed in (Makino et al., 1998).

5 Discussion

This section focuses on the behavior of commas. Out of randomly selected 119 errors in experiment (e), 34 errors are considered to have been caused by the insufficient treatment of commas.

Grammar    Average parsing time per sentence
(a)        1277 msec
(e)        838 msec

Table 3: The average parsing time per sentence.

Especially, the fatal errors (28 errors) occurred due to the nature of commas. To put it another way, a phrase with a comma is sometimes attached to a phrase farther than the nearest possible phrase. In (Kurohashi and Nagao, 1994), the parser always attaches a phrase with a comma to the second nearest possible phrase. We need to introduce such a constraint into our grammar.
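A sketch of such a heuristic, in the spirit of (Kurohashi and Nagao, 1994): comma-marked phrases skip the nearest candidate head. How "possible head" is determined is left as a placeholder here; in our setting it would come from the grammar's feature structures.

```python
# Sketch of the attachment heuristic discussed above: a bunsetsu that ends with a comma
# is attached to the second nearest possible head, any other bunsetsu to the nearest one.
# The candidate list is a placeholder; the real grammar decides it with feature structures.

def attach(bunsetsu_index, has_comma, candidate_heads):
    """candidate_heads: indices of possible heads to the right, nearest first."""
    if not candidate_heads:
        return None
    if has_comma and len(candidate_heads) >= 2:
        return candidate_heads[1]          # skip the nearest, take the second nearest
    return candidate_heads[0]              # default: nearest possible head

print(attach(0, has_comma=True,  candidate_heads=[2, 5]))   # -> 5
print(attach(1, has_comma=False, candidate_heads=[2, 5]))   # -> 2
```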

Though the grammar (e) had the pseudo-principle prohibiting relative clauses containing commas, there were still 6 relative clauses containing commas. This can be fixed by investigating the nature of relative clauses.

6 Conclusion and Future Work

We have introduced an underspecified Japanese grammar using the HPSG framework. The techniques for improving accuracy were easy to include into our grammar due to the HPSG framework. Experimental results have shown that our grammar has wide coverage with reasonable accuracy.

Though the pseudo-principles and further constraints on LEs/LETs that we have introduced contribute to accuracy, they are too strong and therefore cause some coverage loss. One way we could prevent coverage loss is by introducing preferences for feature structures.

References

Bob Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge University Press.

EDR (Japan Electronic Dictionary Research Institute, Ltd.). 1996. EDR electronic dictionary version 1.5 technical guide.

Sadao Kurohashi and Makoto Nagao. 1994. A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures. Computational Linguistics, 20(4):507-534.

Takaki Makino, Minoru Yoshida, Kentaro Torisawa, and Jun'ichi Tsujii. 1998. LiLFeS - towards a practical HPSG parser. In COLING-ACL '98, August.

Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. The University of Chicago Press.

Kentaro Torisawa and Jun'ichi Tsujii. 1996. Computing phrasal-signs in HPSG prior to parsing. In COLING-96, pages 949-955, August.
