
Word Grammar: New Perspectives on a Theory of Language Structure

edited by Kensei Sugayama and Richard Hudson

Continuum


The Tower Building, 11 York Road, London SE1 7NX

15 East 26th Street, New York, NY 10010

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

First published 2006

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library

ISBN: 0-8264-8645-2 (hardback)

Library of Congress Cataloguing-in-Publication Data

To come

Typeset by BookEns Ltd, Royston, Herts

Printed and bound in Great Britain by MPG Books Ltd, Bodmin, Cornwall

© Kensei Sugayama and Richard Hudson 2005


best part of a century.

- P. H. Matthews

Contents

5 The Language Network

6 The Utterance Network

Part I
Word Grammar Approaches to Linguistic Analysis: Its explanatory power and applications

2 Case Agreement in Ancient Greek: Implications for a theory of covert elements

Chet Creider and Richard Hudson

1 Introduction

2 The Data

3 The Analysis of Case Agreement

4 Non-Existent Entities in Cognition and in Language

5 Extensions to Other Parts of Grammar

6 Comparison with PRO and pro

7 Comparison with Other PRO-free Analyses

8 Conclusions

3 Understood Objects in English and Japanese with Reference to Eat and Taberu: A Word Grammar



6 Semantics of the Be To Construction

7 Should To be Counted as Part of the Lexical Item?

8 A Word Grammar Analysis of the Be To Construction

9 Conclusion

5 Linking in Word Grammar

Jasper Holmes

1 Linking in Word Grammar: The syntax-semantics principle

2 The Event Type Hierarchy: The framework; event types; roles and relations

3 A Word Grammar Approach to Code-Mixing

4 Word Order in Mixed and Monolingual 'Subordinate' Clauses

5 Summary and Conclusion

7 Word Grammar Surface Structures and HPSG Order Domains

Takafumi Maekawa

1 Introduction

2 A Word Grammar Approach

3 An Approach in Constructional HPSG: Ginzburg and Sag 2000

4 A Linearization HPSG Approach

5 Concluding Remarks

Part II

Towards a Better Word Grammar

8 Structural and Distributional Heads

Andrew Rosta

1 Introduction

2 Structural Heads



3 Distributional Heads

4 That-Clauses

5 Extent Operators

6 Surrogates versus Proxies

7 Focusing Subjuncts: just, only, even

3 The Locative Inversion Data

4 Factored Out Subjects



RICHARD HUDSON is Professor Emeritus of Linguistics at University College London. His research interest is the theory of language structure; his main publications in this area are about the theory of Word Grammar, including Word Grammar (1984, Oxford: Blackwell); English Word Grammar (1990, Oxford: Blackwell) and a large number of more recent articles. He has also taught sociolinguistics and has a practical interest in educational linguistics.

Website: www.phon.ucl.ac.uk/home/dick/home.htm

Email: dick@linguistics.ucl.ac.uk

KENSEI SUGAYAMA, Professor of English Linguistics at Kobe City University of Foreign Studies. Research interests: English Syntax, Word Grammar, Lexical Semantics and General Linguistics. Major publications: 'More on unaccusative Sino-Japanese complex predicates in Japanese' (1991) UCL Working Papers in Linguistics 3; 'A Word-Grammatic account of complements and adjuncts in Japanese' (1994) Proceedings of the 15th International Congress of Linguists; 'Speculations on unsolved problems in Word Grammar' (1999) The Kobe City University Journal 50.7; Scope of Modern Linguistics (2000, Tokyo: Eihosha); Studies in Word Grammar (2003, Kobe: Research Institute of Foreign Studies, KCUFS).

Email: ken@inst.kobe-cufs.ac.jp

CHET CREIDER, Professor and Chair, Department of Anthropology, University of Western Ontario, London, Ontario, Canada. Research interests: morphology, syntax, African languages. Major publications: Structural and Pragmatic Factors Influencing the Acceptability of Sentences with Extended Dependencies in Norwegian (1987, University of Trondheim Working Papers in Linguistics 4); The Syntax of the Nilotic Languages: Themes and variations (1989, Berlin: Dietrich Reimer); A Grammar of Nandi (1989, with J. T. Creider, Hamburg: Helmut Buske); A Grammar of Kenya Luo (1993, ed.); A Dictionary of the Nandi Language (2001, with J. T. Creider, Köln: Rüdiger Köppe).

Email: creider@uwo.ca

ANDREW ROSTA, Senior Lecturer, Department of Cultural Studies, University of Central Lancashire, UK. Research interests: all aspects of English grammar.

Email: a.rosta@v21.me.uk

NIKOLAS GISBORNE is a lecturer in the Department of Linguistics and English Language at the University of Edinburgh. His research interests are in lexical semantics and syntax, and their interaction in argument structure.


Website: www.englang.ed.ac.uk/people/nik.html

Email: n.gisborne@ed.ac.uk

JASPER W. HOLMES is a self-employed linguist who has worked with many large organizations on projects in lexicography, education and IT. Teaching and research interests include syntax and semantics, lexical structure, corpuses and other IT applications (linguistics in computing, computing in linguistics), language in education and in society, the history of English and English as a world language. His publications include 'Synonyms and syntax' (1996, with Richard Hudson, And Rosta, Nik Gisborne) Journal of Linguistics 32; 'The syntax and semantics of causative verbs' (1999) UCL Working Papers in Linguistics 11; 'Re-cycling in the encyclopedia' (2000, with Richard Hudson), in B. Peeters (ed.) The Lexicon-Encyclopedia Interface (Amsterdam: Elsevier); 'Constructions in Word Grammar' (2005, with Richard Hudson) in Jan-Ola Östman and Mirjam Fried (eds) Construction Grammars: Cognitive Grounding and Theoretical Extensions (Amsterdam: Benjamins).

Email: jasper.holmes@gmail.com

EVA EPPLER, Senior Lecturer in English Language and Linguistics, School of Arts, University of Roehampton, UK. Research interests: morpho-syntax of German and English, syntax-pragmatics interface, code-mixing, bilingual processing and production, sociolinguistics of multilingual communities. Recent main publication: '"... because dem Computer brauchst' es ja nicht zeigen ...": because + German main clause word order' International Journal of Bilingualism 8.2 (2004), pp. 127-44.

Email: evieppler@hotmail.com

TAKAFUMI MAEKAWA, PhD student, Department of Language and Linguistics, University of Essex. Research interests: Japanese and English syntax, Head-Driven Phrase Structure Grammar and lexical semantics. Major publication: 'Constituency, Word Order and Focus Projection' (2004) The Proceedings of the 11th International Conference on Head-Driven Phrase Structure Grammar, Center for Computational Linguistics, Katholieke Universiteit Leuven, August 3-6.

Email: maekawa@btinternet.com


This volume comes from a three-year (April 2002-March 2005) research project on Word Grammar supported by the Japan Society for the Promotion of Science, the goal of which is to bring together Word Grammar linguists whose research has been carried out in this framework but whose approaches to it reflect differing perspectives on Word Grammar (henceforth WG). I gratefully acknowledge support for my work in WG from the Japan Society for the Promotion of Science (grant-in-aid Kiban-Kenkyu C (2), no. 14510533, April 2002-March 2005). The collection of papers was planned so as to introduce readers to this theory and to include a diversity of languages to which the theory is shown to be applicable, along with critique from different theoretical orientations.

In September 1994 Professor Richard Hudson, the founder of Word Grammar, visited Kobe City University of Foreign Studies to give a lecture on WG as part of his lecturing trip to Japan. His talks were centred on advances in WG at that time, which refreshed our understanding of the theory. Professor Hudson has been writing in a very engaging and informative way for about two quarters of a century in the world linguistics scene.

Word Grammar is a theory of language structure which Richard Hudson, now Emeritus Professor of Linguistics at University College London, has been building since the early 1980s. It is still changing and improving in detail, yet the main ideas remain the same. These ideas themselves developed out of two other theories that he had tried: Systemic Grammar (now known as Systemic Functional Grammar), due to Michael Halliday, and then Daughter-Dependency Grammar, his own invention.

Word Grammar fills a gap in the study of dependency theory. Dependency theory may not belong to the mainstream in the Western world, especially not in America, but it is gaining more and more attention, which it certainly deserves. In Europe, dependency has been better known since the French linguist Lucien Tesnière's study in the 1950s (cf. Hudson, this volume); I mention here only France, Belgium, Germany and Finland. Dependency theory now also rules Japan in the shape of WG. Moreover, the notion of head, the central idea of dependency, has been introduced into virtually all modern linguistic theories. In most grammars, dependency and constituency are used simultaneously; however, this carries the risk of making these grammars too powerful. WG's challenge is to eliminate constituency from grammar except in coordinate structures, although certain dependency grammars, especially the German ones, refuse to accept constituency for coordination.

Richard Hudson's first book was the first attempt to write a generative (explicit) version of Systemic Grammar (English Complex Sentences: An Introduction to Systemic Grammar, North Holland, 1971); and his second book was about Daughter-Dependency Grammar (Arguments for a Non-transformational Grammar, University of Chicago Press, 1976). As the latter title indicates, Chomsky's transformational grammar was very much 'in the air', and both books accepted his goal of generative grammar but offered other ideas about sentence structure as alternatives to his mixture of function-free phrase structure plus transformations. In the late 1970s, when Transformational Grammar was immensely influential, Richard Hudson abandoned Daughter-Dependency Grammar (in spite of its drawing a rave review by Paul Schachter in Language 54, 348-76). His exploration of various general ideas that hadn't come together became an alternative coherent theory called Word Grammar, first described in the 1984 book Word Grammar and subsequently improved and revised in the 1990 book English Word Grammar. Since then the details have been worked out much better and there is now a workable notation and an encyclopedia available on the internet (cf. Hudson 2004). The newest version of Word Grammar is now on its way (Hudson, in preparation).

The time span between the publication of Richard Hudson's Word Grammar (1984) and this volume is more than two decades (21 years to be precise). The intervening years have seen impressive developments in this theory by WG grammarians, as well as developments in other competing linguistic theories such as the Minimalist Programme, Head-driven Phrase Structure Grammar (HPSG), Generalized Phrase Structure Grammar (GPSG), Lexical Functional Grammar (LFG), Construction Grammar and Cognitive Grammar.

Here are the main ideas, most of which come from the latest version of the WG homepage (Hudson 2004), together with an indication of where they came from:

• It is monostratal - only one structure per sentence, no transformations. (From Systemic Grammar.)

• It uses word-word dependencies - e.g. a noun is the subject of a verb. (From John Anderson and other users of Dependency Grammar, via Daughter-Dependency Grammar; a reaction against Systemic Grammar, where word-word dependencies are mediated by the features of the mother phrase.)

• It does not use phrase structure - e.g. it does not recognize a noun phrase as the subject of a clause, though these phrases are implicit in the dependency structure. (This is the main difference between Daughter-Dependency Grammar and Word Grammar.)

• It shows grammatical relations/functions by explicit labels - e.g. 'subject' and 'object'. (From Systemic Grammar.)

• It uses features only for inflectional contrasts - e.g. tense and number, but not transitivity. (A reaction against excessive use of features in both Systemic Grammar and current Transformational Grammar.)

• It uses default inheritance, as a very general way of capturing the contrast between 'basic' or 'underlying' patterns and 'exceptions' or 'transformations' - e.g. by default, English words follow the word they depend on, but exceptionally subjects precede it; particular cases 'inherit' the default pattern unless it is explicitly overridden by a contradictory rule. (From Artificial Intelligence.)

• It views concepts as prototypes rather than 'classical' categories that can be defined by necessary and sufficient conditions. All characteristics (i.e. all links in the network) have equal status, though some may for pragmatic reasons be harder to override than others. (From Lakoff and early Cognitive Linguistics, supported by work in sociolinguistics.)

• It presents language as a network of knowledge, linking concepts about words, their meanings, etc. - e.g. twig is linked to the meaning 'twig', to the form /twig/, to the word-class 'noun', etc. (From Lamb's Stratificational Grammar, now known as Neurocognitive Linguistics.)

• In this network there are no clear boundaries between different areas of knowledge - e.g. between 'lexicon' and 'grammar', or between 'linguistic meaning' and 'encyclopedic knowledge'. (From early Cognitive Linguistics - and the facts.)

• In particular, there is no clear boundary between 'internal' and 'external' facts about words, so a grammar should be able to incorporate sociolinguistic facts - e.g. the speaker of jazzed is an American. (From Sociolinguistics.)

In this theory, word-word dependency is a key concept, upon which the syntax and semantics of a sentence build. Dependents of a word are subcategorized into two types, i.e. complements and adjuncts. These two types of dependents play an important role in this theory of grammar.

Let me give you a flavour of the syntax and semantics in WG, as shown in Figure 1:

Figure 1


Contributors to this volume are primarily WG grammarians across the world who participated in the research organized by myself, and I am also grateful for being able to include critical work by Maekawa of the University of Essex, who is working in a different paradigm.

All the papers here manifest what I would characterize as the theoretical potentialities of WG, exploring how powerful WG is in offering analyses of linguistic phenomena in various languages. The papers we have collected come from varying perspectives (formal, lexical-semantic, morphological, syntactic, semantic) and include work on a number of languages, including English, Ancient Greek, Japanese and German. Phenomena studied include verbal inflection, case agreement, extraction, constructions, code-mixing, etc.

The papers in this volume span a variety of topics, but there is a common thread running through them: the claim that word-word dependency is fundamental to our analysis and understanding of language. The collection starts with a chapter on WG by Richard Hudson which serves to introduce the newest version of WG. The subsequent chapters are organized into two sections:

Part I: Word Grammar Approaches to Linguistic Analysis: its explanatory power and applications

Part II: Towards a Better Word Grammar

Part I contains seven chapters, which contribute to recent developments in WG and explore how powerful WG is in analyzing linguistic phenomena in various languages. They deal with formal, lexical, morphological, syntactic and semantic matters. In this way, these papers give a varied picture of the possibilities of WG.

In Chapter 2, Creider and Hudson provide a theory of covert elements, which is a hot issue in linguistics. Since WG has hitherto denied the existence of any covert elements in syntax, it has to deal with claims such as the one that covert case-bearing subjects are possible in Ancient Greek. As the authors say themselves, their solution is tantamount to an acceptance of some covert elements in syntax, though in every case the covert element can be predicted from the word on which it depends. The analysis given is interesting because the argument is linked to dependency. It is more sophisticated than the simple and undefined Chomskyan notion of the PRO element.

In Chapter 3, Sugayama joins Creider and Hudson in detailing an analysis of understood objects in English and Japanese, albeit at the level of semantics rather than syntax. He studies an interesting contrast between English and Japanese concerning understood objects. Unlike English and most other European languages, Japanese is quite unique in allowing its verbs to miss out their complements on the condition that the speaker assumes that they are known to the addressee. The reason seems to be that in the semantic structure of the sentences, there has to be a semantic argument which should be, but is not, mapped onto syntax as a syntactic complement. The author adduces a WG solution that is an improvement on Hudson's (1990) account.

Sugayama shares with the preceding chapter an in-depth lexical-semantic analysis, in order to address the relation between a word and the construction. In Chapter 4, he attempts to characterize the be to construction within the WG framework. He has shown that a morphological, syntactic and semantic analysis of be in the be to construction provides evidence for the category of be in this construction: namely, be is an instance of a modal verb in terms of morphology and syntax, while the sense of the whole construction is determined by the sense of 'to'.

In Chapter 5, Holmes, in a very original approach, develops an account of the linking of syntactic and semantic arguments in WG. Under the WG account, both thematic and linking properties are determined at both the specific and the general level. This is obviously an advantage.

In Chapter 6, Eppler draws on experimental studies concerning code-mixing and successfully extends WG to an original and interesting area of research. Constituent-based models have difficulties accounting for mixing between SVO and SOV languages like English and German; a dependency (WG) approach is imperative here, since a word's requirements do not project to larger units like phrasal constituents. The Null-Hypothesis, then, formulated in WG terms, assumes that each word in a switched dependency satisfies the constraints imposed on it by its own language. The material is taken from English/German conversations of Jewish refugees in London.

Maekawa continues the sequence in this collection towards more purely theoretical studies. In Chapter 7, he looks at three different approaches to the asymmetries between main and embedded clauses with respect to the elements in the left periphery of a clause: the dependency-based approach within WG, the Constructional HPSG approach, and the Linearization HPSG analysis. Maekawa, an HPSG linguist, argues that the approaches within WG and Constructional HPSG have some problems in dealing with the relevant facts, but that Linearization HPSG provides a straightforward account of them. Maekawa's analysis suggests that linear order should be independent, to a considerable extent, of combinatorial structure, such as dependency or phrase structure.

Following these chapters are more theoretical chapters which help to improve the theory and clarify which research questions must be addressed next.

Part II contains two chapters that examine two key theoretical concepts in WG, head and dependency. They are intended to help us progress a few steps forward in revising and improving current WG, together with Hudson (in preparation).

The notion of head is a central one in most grammars, so it is natural that it is discussed and challenged by WG and other theorists. In Chapter 8, Rosta distinguishes between two kinds of head and claims that every phrase has both a distributional head and a structural head, although he agrees that normally the same word is both the distributional and the structural head of a phrase. Finally, Gisborne's Chapter 9 challenges Hudson's classification of dependencies; the diversification of heads (different kinds of dependency) plays a role in WG as well. Gisborne is in favour of a more fine-grained account of dependencies than Hudson's 1990 model. He focuses on a review of the subject-of dependency, distinguishing between two kinds of subjects, which seems promising. Gisborne's thesis is that word order is governed not only by syntactic information but also by discourse-presentational facts.

I hope this short overview will suggest to the prospective reader that our attempt at introducing a dependency-based grammar was successful.

By means of this volume, we hope to contribute to the continuing cooperation between linguists working in WG and those working in other theoretical frameworks. We look forward to future volumes that will further develop this cooperation.

The editors gratefully acknowledge the work and assistance of all those contributors whose papers are incorporated in this volume, including one non-WG linguist who contributed a paper from his own theoretical viewpoint and helped shape the volume you see here.

Last but not least, neither the research in WG nor the present volume would have been possible without the general support of both the Japan Society for the Promotion of Science and the Daiwa Anglo-Japanese Foundation, whose assistance we gratefully acknowledge here. In addition, we owe a special debt of gratitude to Jenny Lovel for assisting with the preparation of this volume in her normal professional manner. We alone accept responsibility for all errors in the presentation of data and analyses in this volume.

Kensei Sugayama

References

Hudson, R. A. (1971), English Complex Sentences: An Introduction to Systemic Grammar. Amsterdam: North Holland.

— (1976), Arguments for a Non-transformational Grammar. Chicago: University of Chicago Press.

— (1984), Word Grammar. Oxford: Blackwell.

— (1990), English Word Grammar. Oxford: Blackwell.

— (2004, July 1 - last update), 'Word Grammar'. Available: www.phon.ucl.ac.uk/home/dick/wg.htm (Accessed: 18 April 2005).

— (in preparation), Advances in Word Grammar. Oxford: Oxford University Press.

Pollard, C. and Sag, I. A. (1987), Information-Based Syntax and Semantics. Stanford: CSLI.

Schachter, P. (1978), 'Review of Arguments for a Non-Transformational Grammar'. Language, 54, 348-76.

Sugayama, K. (ed.) (2003), Studies in Word Grammar. Kobe: Research Institute of Foreign Studies, KCUFS.

Tesnière, Lucien (1959), Éléments de Syntaxe Structurale. Paris: Klincksieck.


RICHARD HUDSON

Abstract

The chapter summarizes the Word Grammar (WG) theory of language structure under the following headings: 1. A brief overview of the theory; 2. Historical background; 3. The cognitive network: 3.1 Language as part of a general network; 3.2 Labelled links; 3.3 Modularity; 4. Default inheritance; 5. The language network; 6. The utterance network; 7. Morphology; 8. Syntax; 9. Semantics; 10. Processing; and 11. Conclusions.

1 A Brief Overview of the Theory

Word Grammar (WG) is a general theory of language structure. Most of the work to date has dealt with syntax, but there has also been serious work in semantics and some more tentative explorations of morphology, sociolinguistics, historical linguistics and language processing. The only areas of linguistics that have not been addressed at all are phonology and language acquisition (but even here see van Langendonck 1987). The aim of this article is breadth rather than depth, in the hope of showing how far-reaching the theory's tenets are.

Although the roots of WG lie firmly in linguistics, and more specifically in grammar, it can also be seen as a contribution to cognitive psychology; in terms of a widely used classification of linguistic theories, it is a branch of cognitive linguistics (Lakoff 1987; Langacker 1987; 1990; Taylor 1989). The theory has been developed from the start with the aim of integrating all aspects of language into a single theory which is also compatible with what is known about general cognition. This may turn out not to be possible, but to the extent that it is possible it will have explained the general characteristics of language as 'merely' one instantiation of more general cognitive characteristics.

The overriding consideration, of course, is the same as for any other linguistic theory: to be true to the facts of language structure. However, our assumptions make a great deal of difference when approaching these facts, so it is possible to arrive at radically different analyses according to whether we assume that language is a unique module of the mind, or that it is similar to other parts of cognition. The WG assumption is that language can be analysed and explained in the same way as other kinds of knowledge or behaviour unless there is clear evidence to the contrary. So far this strategy has proved productive and largely successful, as we shall see below.


As the theory's name suggests, the central unit of analysis is the word, which is central to all kinds of analysis:

• Grammar. Words are the only units of syntax (section 8), as sentence structure consists entirely of dependencies between individual words; WG is thus clearly part of the tradition of dependency grammar dating from Tesnière (1959; Fraser 1994). Phrases are implicit in the dependencies, but play no part in the grammar. Moreover, words are not only the largest units of syntax, but also the smallest. In contrast with Chomskyan linguistics, syntactic structures do not, and cannot, separate stems and inflections, so WG is an example of morphology-free syntax (Zwicky 1992: 354). Unlike syntax, morphology (section 7) is based on constituent-structure, and the two kinds of structure are different in other ways too.

• Semantics. As in other theories, words are also the basic lexical units where sound meets syntax and semantics; but in the absence of phrases, words also provide the only point of contact between syntax and semantics, giving a radically 'lexical' semantics. As will appear in section 9, a rather unexpected effect of basing semantic structure on single words is a kind of phrase structure in the semantics.

• Situation. We shall see in section 6 that words are the basic units for contextual analysis (in terms of deictic semantics, discourse or sociolinguistics).

Words, in short, are the nodes that hold the 'language' part of the human network together. This is illustrated by the word cycled in the sentence I cycled to UCL, which is diagrammed in Figure 1.

Figure 1


Table 1 Relationships in cycled

related concept C           relationship of C to cycled   notation in diagram
the word I                  subject                       's'
the word to                 post-adjunct                  '>a'
the morpheme {cycle}        stem                          straight downward line
the word-form {cycle+ed}    whole                         curved downward line
the concept 'ride-bike'     sense                         straight upward line
the concept 'event e'       referent                      curved upward line
the lexeme CYCLE            cycled isa CYCLE              triangle resting on CYCLE
the inflection 'past'       cycled isa 'past'             triangle resting on 'past'
me                          speaker                       'speaker'
now                         time                          'time'

As can be seen in this diagram, cycled is the meeting point for ten relationships, which are detailed in Table 1. These relationships are all quite traditional (syntactic, morphological, semantic, lexical and contextual), and traditional names are used where they exist, but the diagram uses notation which is peculiar to WG. It should be easy to imagine how such relationships can multiply to produce a rich network in which words are related to one another as well as to other kinds of element, including morphemes and various kinds of meaning. All these elements, including the words themselves, are 'concepts' in the standard sense; thus a WG diagram is an attempt to model a small part of the total conceptual network of a typical speaker.

2 Historical Background

The theory described in this article is the latest in a family of theories which have been called 'Word Grammar' since the early 1980s (Hudson 1984). The present theory is very different in some respects from the earliest one, but the continued use of the same name is justified because we have preserved some of the most fundamental ideas - the central place of the word, the idea that language is a network, the role of default inheritance, the clear separation of syntax and semantics, the integration of sentence and utterance structure. The theory is still changing and further changes are already identifiable (Hudson, in preparation).

As in other theories, the changes have been driven by various forces - new data, new ideas, new alternative theories, new personal interests - and by the influence of teachers, colleagues and students. The following brief history may be helpful in showing how the ideas that are now called 'Word Grammar' developed during my academic life.

The 1960s. My PhD analysis of Beja used the theory being developed by Halliday (1961) under the name 'Scale-and-Category' grammar, which later turned into Systemic Functional Grammar (Butler 1985; Halliday 1985). I spent the next six years working with Halliday, whose brilliantly wide-ranging analyses impressed me a lot. Under the influence of Chomsky's generative grammar (1957, 1965), reinterpreted by McCawley (1968) as well-formedness conditions, I published the first generative version of Halliday's Systemic Grammar (Hudson 1970). This theory has a very large network (the 'system network') at its heart, and networks also loomed large at that time in the Stratificational Grammar of Lamb (1966; Bennett 1994). Another reason why stratificational grammar was important was that it aimed to be a model of human language processing - a cognitive model.

The 1970s. Seeing the attractions of both valency theory and Chomsky's subcategorization, I produced a hybrid theory which was basically Systemic Grammar, but with the addition of word-word dependencies under the influence of Anderson (1971); the theory was called 'Daughter-Dependency Grammar' (Hudson 1976). Meanwhile I was teaching sociolinguistics and becoming increasingly interested in cognitive science (especially default inheritance systems and frames) and the closely related field of lexical semantics (especially Fillmore's Frame Semantics 1975, 1976). The result was a very 'cognitive' textbook on sociolinguistics (Hudson 1980a, 1996a). I was also deeply influenced by Chomsky's 'Remarks on nominalization' paper (1970), and in exploring the possibilities of a radically lexicalist approach I toyed with the idea of 'pan-lexicalism' (1980b, 1981): everything in the grammar is 'lexical' in the sense that it is tied to word-sized units (including word classes).

The 1980s. All these influences combined in the first version of Word Grammar (Hudson 1984), a cognitive theory of language as a network which contains both 'the grammar' and 'the lexicon' and which integrates language with the rest of cognition. The semantics follows Lyons (1977), Halliday (1967-8) and Fillmore (1976) rather than formal logic; but even more controversially, the syntax no longer uses phrase structure at all in describing sentence structure, because everything that needs to be said can be said in terms of dependencies between single words. The influence of continental dependency theory is evident, but the dependency structures were richer than those allowed in 'classical' dependency grammar (Robinson 1970) - more like the functional structures of Lexical Functional Grammar (Kaplan and Bresnan 1982). Bresnan's earlier argument (1978) that grammar should be compatible with a psychologically plausible parser also suggested the need for a parsing algorithm, which has led to a number of modest Natural Language Processing (NLP) systems using WG (Fraser 1985, 1989, 1993; Hudson 1989; Shaumyan 1995). These developments provided the basis for the next book-length description of WG, English Word Grammar (EWG, Hudson 1990). This attempts to provide a formal basis for the theory as well as a detailed application to large areas of English morphology, syntax and semantics.

The 1990s. Since the publication of EWG there have been some important changes in the theory, ranging from the general theory of default inheritance, through matters of syntactic theory (with the addition of 'surface structure', the virtual abolition of features and the acceptance of 'unreal' words) and morphological theory (where 'shape', 'whole' and 'inflection' are new), to details of analysis, terminology and notation. These changes will be described below. WG has also been applied to a wider range of topics than previously:


• lexical semantics (Gisborne 1993, 1996, 2000, 2001; Holmes 2004; Hudson and Holmes 2000; Hudson 1992, 1995, forthcoming; Sugayama 1993, 1996, 1998);

• morphology (Creider 1999; Creider and Hudson 1999);

• sociolinguistics (Hudson 1996a, 1997b; Eppler 2005);

• language processing (Hudson 1993a, b, 1996b; Hiranuma 1999, 2001).

Most of the work done since the start of WG has applied the theory to English, but it has also been applied to the following languages: Tunisian Arabic (Chekili 1982); Greek (Tzanidaki 1995, 1996a, b); Italian (Volino 1990); Japanese (Sugayama 1991, 1992, 1993, 1996; Hiranuma 1999, 2001); Serbo-Croatian (Camdzic and Hudson 2002); and Polish (Gorayska 1985).

The theory continues to evolve, and at the time of writing a 'Word Grammar Encyclopedia', which can be downloaded via the WG website (www.phon.ucl.ac.uk/home/dick/wg.htm), is updated in alternate years.

3 The Cognitive Network

3.1 Language as part of a general network

The basis for WG is an idea which is quite uncontroversial in cognitive science:

The idea is that memory connections provide the basic building blocks through which our knowledge is represented in memory. For example, you obviously know your mother's name; this fact is recorded in your memory. The proposal to be considered is that this memory is literally represented by a memory connection. That connection isn't some appendage to the memory. Instead, the connection is the memory. All of knowledge is represented via a sprawling network of these connections, a vast set of associations. (Reisberg 1997: 257-8)

In short, knowledge is held in memory as an associative network (though we shall see below that the links are much more precisely defined than the unlabelled 'associations' of early psychology and modern connectionist psychology). What is more controversial is that, according to WG, the same is true of our knowledge of words, so the sub-network responsible for words is just a part of the total 'vast set of associations'. Our knowledge of words is our language, so our language is a network of associations which is closely integrated with the rest of our knowledge.

However uncontroversial (and obvious) this view of knowledge may be in general, it is very controversial in relation to language. The only part of language which is widely viewed as a network is the lexicon (Aitchison 1987: 72), and a fashionable view is that even here only lexical irregularities are stored in an associative network, in contrast with regularities which are stored in a fundamentally different way, as 'rules' (Pinker and Prince 1988). For example, we have a network which shows for the verb come not only that its meaning is 'come' but that its past tense is the irregular came, whereas regular past tenses are handled by a general rule and not stored in the network. The WG view is that exceptional and general patterns are indeed different, but that they can both be accommodated in the same network, because it is an 'inheritance network' in which general patterns and their exceptions are related by default inheritance (which is discussed in more detail in section 4). To pursue the last example, both patterns can be expressed in exactly the same prose:

(1) The shape of the past tense of a verb consists of its stem followed by -ed.
(2) The shape of the past tense of come consists of came.

The only difference between these rules lies in two places: 'a verb' versus come, and 'its stem followed by -ed' versus came. Similarly, they can both be incorporated into the same network, as shown in Figure 2 (where the triangle once again shows the 'isa' relationship by linking the general concept at its base to the specific example connected to its apex):

Figure 2
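The same two facts can be written down as a toy data structure. The following Python sketch is illustrative only: the node names, the 'shape' and 'stem' attributes echo the prose above, but the encoding and the inherit function are not WG notation, and WALK:past is added purely for contrast with the irregular verb.

```python
# Illustrative sketch of the network in Figure 2. Each node stores its own
# facts; an 'isa' link points to the more general concept it inherits from.
network = {
    "verb":      {},
    "verb:past": {"isa": "verb", "shape": "stem + ed"},                 # rule (1), the default
    "WALK:past": {"isa": "verb:past", "stem": "walk"},                  # a regular verb (invented example)
    "COME:past": {"isa": "verb:past", "stem": "come", "shape": "came"}, # rule (2), the stored exception
}

def inherit(node, attribute):
    """Default inheritance: use the fact stored nearest to the node,
    climbing 'isa' links only when the node itself says nothing."""
    while node is not None:
        facts = network[node]
        if attribute in facts:
            return facts[attribute]
        node = facts.get("isa")
    return None

print(inherit("WALK:past", "shape"))  # -> 'stem + ed' (inherited default)
print(inherit("COME:past", "shape"))  # -> 'came' (stored exception overrides)
```

The point of the sketch is that the exception wins not by a special stipulation but simply because it is stored on a more specific node, which is exactly how section 4 below describes the mechanism.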

Once the possibility is accepted that some generalizations may be expressed in a network, it is easy to extend the same treatment to the whole grammar, as we shall see in later examples. One consequence, of course, is that we lose the formal distinction between 'the lexicon' and 'the rules' (or 'the grammar'), but this conclusion is also accepted outside WG in Cognitive Grammar (Langacker 1987) and Construction Grammar (Goldberg 1995). The only parts of linguistic analysis that cannot be included in the network are the few general theoretical principles (such as the principle of default inheritance).

3.2 Labelled links

It is easy to misunderstand the network view because (in cognitive psychology) there is a long tradition of 'associative network' theories in which all links have just the same status: simple 'association'. This is not the WG view, nor is it the view of any of the other theories mentioned above, because links are classified and labelled - 'stem', 'shape', 'sense', 'referent', 'subject', 'adjunct' and so on. The classifying categories range from the most general - the 'isa' link - to categories which may be specific to a handful of concepts, such as 'goods' in the framework of commercial transactions (Hudson forthcoming). This is a far cry from the idea of a network of mere 'associations' (such as underlies connectionist models). One of the immediate benefits of this approach is that it allows named links to be used as functions, in the mathematical sense of Kaplan and Bresnan (1982: 182), which yield a unique value - e.g. 'the referent of the subject of the verb' defines one unique concept for each verb. In order to distinguish this approach from the traditional associative networks we can call these networks 'labelled'.
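A small sketch may make 'links as functions' concrete. The toy network below is illustrative only (it borrows the John saw Mary example discussed later in this section, and the follow helper is invented): each labelled link maps a concept to a unique value, so labels compose into paths.

```python
# Illustrative: labelled links as functions in the Kaplan-and-Bresnan sense.
# Each (concept, label) pair yields exactly one value, so paths compose.
links = {
    ("saw",  "subject"):  "John",
    ("saw",  "object"):   "Mary",
    ("John", "referent"): "the person John",
    ("saw",  "sense"):    "seeing",
}

def follow(concept, *labels):
    """Compose labelled links: follow('saw', 'subject', 'referent')
    yields the unique 'referent of the subject of saw'."""
    for label in labels:
        concept = links[(concept, label)]
    return concept

print(follow("saw", "subject", "referent"))  # -> 'the person John'
```

An unlabelled associative network could not support this: if the links from "saw" to the two nouns were indistinguishable, there would be no function to apply.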

Even within linguistics, labelled networks are controversial because the labels themselves need an explanation or analysis. Because of this problem some theories avoid labelled relationships, or reduce labelling to something more primitive: for example, Chomsky has always avoided functional labels for constituents such as 'subject' by using configurational definitions, and the predicate calculus avoids semantic role labels by distinguishing arguments in terms of order.

There is no doubt that labels on links are puzzlingly different from the labels that we give to the concepts that they link. Take the small network in Figure 2 for past tenses. One of the nodes is labelled 'COME: past', but this label could in fact be removed without any effect, because 'COME: past' is the only concept which isa 'verb: past' and which has came as its shape. Every concept is uniquely defined by its links to other concepts, so labels are redundant (Lamb 1996, 1999: 59). But the same is not true of the labels on links, because a network with unlabelled links is a mere associative network which would be useless in analysis. For example, it is no help to know that in John saw Mary the verb is linked, in some way or other, to the two nouns and that its meaning is linked, again in unspecified ways, to the concepts 'John' and 'Mary'; we need to know which noun is the subject, and which person is the see-er. The same label may be found on many different links - for example, every word that has a sense (i.e. virtually every word) has a link labelled 'sense', every verb that has a subject has a 'subject' link, and so on. Therefore the function of the labels is to classify the links as same or different, so if we remove the label we lose information. It makes no difference whether we show these similarities and differences by means of verbal labels (e.g. 'sense') or some other notational device (e.g. straight upward lines); all that counts is whether or not our notation classifies links as same or different. Figure 3 shows how this can be done using, first, conventional attribute-value matrices and, second, the WG notation used so far.

This peculiarity of the labels on links brings us to an important characteristic of the network approach, which allows the links themselves to be treated like the concepts which they link - as 'second-order concepts', in fact. The essence of a network is that each concept should be represented just once, and its multiple links to other concepts should be shown as multiple links, not as multiple copies of the concept itself. Although the same principle applies generally to attribute-value matrices, it does not apply to the attributes themselves.


Figure 3

Thus there is a single matrix for each concept, and if two attributes have the same value this is shown (at least in one notation) by an arc that connects the two value-slots. But when it comes to the attributes themselves, their labels are repeated across matrices (or even within a single complex matrix). For example, the matrix for a raising verb contains within it the matrix for its complement verb; an arc can show that the two subject slots share the same filler, but the only way to show that these two slots belong to the same (kind of) attribute is to repeat the label 'subject'.

In a network approach it is possible to show both kinds of identity in the same way: by means of a single node with multiple 'isa' links. If two words are both nouns, we show this by an isa link from each to the concept 'noun'; and if two links are both 'subject' links, we put an isa link from each link to a single general 'subject' link. Thus labelled links and other notational tricks are just abbreviations for a more complex diagram with second-order links between links. These second-order links are illustrated in Figure 4 for car and bicycle, as well as for the sentence Jo snores.

Figure 4


This kind of analysis is too cumbersome to present explicitly in most diagrams, but it is important to be clear that it underlies the usual notation, because it allows the kind of analysis which we apply to ordinary concepts to be extended to the links between them. If ordinary concepts can be grouped into larger classes, so can links; if ordinary concepts can be learned, so can links. And if the labels on ordinary concepts are just mnemonics which could, in principle, be removed, the same is true of the labels on almost all kinds of link. The one exception is the 'isa' relationship itself, which reflects its fundamental character.

3.3 Modularity

The view of language as a labelled network has interesting consequences for the debate about modularity: is there a distinct 'module' of the mind dedicated exclusively to language (or to some part of language such as syntax or inflectional morphology)? Presumably not, if a module is defined as a separate 'part' of our mind and if the language network is just a small part of a much larger network. One alternative to this strong version of modularity is no modularity at all, with the mind viewed as a single undifferentiated whole; this seems just as wrong as a really strict version of modularity. However, there is a third possibility. If we focus on the links, any such network is inevitably 'modular' in the much weaker (and less controversial) sense that links between concepts tend to cluster into relatively dense sub-networks separated by relatively sparse boundary areas.

Perhaps the clearest evidence for some kind of modularity comes from language pathology, where abilities are impaired selectively. Take the case of Pure Word Deafness (Altmann 1997: 186), for example. Why should a person be able to speak and read normally, and to hear and classify ordinary noises, but not be able to understand the speech of other people? In terms of a WG network, this looks like an inability to follow one particular link-type ('sense') in one particular direction (from word to sense). Whatever the reason for this strange disability, at least the WG analysis suggests how it might apply to just this one aspect of language, while also applying to every single word: what is damaged is the general relationship 'sense', from which all particular sense relationships are inherited. A different kind of problem is illustrated by patients who can name everything except one category - e.g. body-parts or things typically found indoors (Pinker 1994: 314). Orthodox views on modularity seem to be of little help in such cases, but a network approach at least explains how the non-linguistic concepts concerned could form a mental cluster of closely-linked and mutually defining concepts with a single super-category. It is easy to imagine reasons why such a cluster of concepts might be impaired selectively (e.g. that closely related concepts are stored close to each other, so a single injury could sever all their sense links), but the main point is to have provided a way of unifying them in preparation for the explanation.

In short, a network with classified relations allows an injury to apply to specific relation types, so that these relations are disabled across the board. The approach also allows damage to specific areas of language which form clusters with strong internal links and weak external links. Any such cluster or shared linkage defines a kind of 'module' which may be impaired selectively, but the module need not be innate: it may be 'emergent', a cognitive pattern which emerges through experience (Karmiloff-Smith 1992; Bates et al. 1998).

4 Default Inheritance

Default inheritance is just a formal version of the logic that linguists have always used: true generalizations may have exceptions. We allow ourselves to say that verbs form their past tense by adding -ed to the stem, even if some verbs don't, because the specific provision made for these exceptional cases will automatically override the general pattern. In short, characteristics of a general category are 'inherited' by instances of that category only 'by default' - only if they are not overridden by a known characteristic of the specific case. Common sense tells us that this is how ordinary inference works, but default inheritance only works when used sensibly. Although it is widely used in artificial intelligence, researchers treat it with great caution (Luger and Stubblefield 1993: 386-8). The classic formal treatment is Touretzky (1986).

Inheritance is carried by the 'isa' relation, which is another reason for considering this relation to be fundamental. For example, because snores isa 'verb' it automatically inherits all the known characteristics of 'verb' (i.e. of 'the typical verb'), including, for example, the fact that it has a subject; similarly, because the link between Jo and snores in Jo snores isa 'subject', it inherits the characteristics of 'subject'. As we have already seen, the notation for 'isa' consists of a small triangle with a line from its apex to the instance. The base of the triangle, which rests on the general category, reminds us that this category is larger than the instance, but it can also be imagined as the mouth of a hopper into which information is poured so that it can flow along the link to the instance.

The mechanism whereby default values are overridden has changed during the last few years. In EWG, and also in Fraser and Hudson (1992), the mechanism was 'stipulated overriding', a system peculiar to WG; but since then this system has been abandoned. WG now uses a conventional system in which a fact is automatically blocked by any other fact which conflicts and is more specific. Thus the fact that the past tense of COME is came automatically blocks the inheritance of the default pattern for past tense verbs. One of the advantages of a network notation is that this is easy to define formally: we always prefer the value for 'R of C' (where R is some relationship, possibly complex, and C is a concept) which is nearest to C (in terms of intervening links). For example, if we want to find the shape of the past tense of COME, we have a choice between came and comed, but the route to came is shorter than that to comed, because the latter passes through the concept 'past tense of a verb'. (For detailed discussions of default inheritance in WG, see Hudson 2000a, 2003b.)

Probably the most important question for any system that uses default inheritance concerns multiple inheritance, in which one concept inherits


from two different concepts simultaneously - as 'dog' inherits, for example, both from 'mammal' and from 'pet'. Multiple inheritance is allowed in WG, as in unification-based systems and the programming language DATR (Evans and Gazdar 1996); it is true that it opens up the possibility of conflicting information being inherited, but this is a problem only if the conflict is an artefact of the analysis. There seem to be some examples in language where a form is ungrammatical precisely because there is an irresoluble conflict between two characteristics; for example, in many varieties of standard English the combination *I amn't is predictable, but ungrammatical. One explanation for this strange gap is that the putative form amn't has to inherit simultaneously from aren't (the negative present of BE) and am (the I-form of BE); but these models offer conflicting shapes (aren't, am) without any way for either to override the other (Hudson 2000a). In short, WG does allow multiple inheritance, and indeed uses it a great deal (as we shall see in later sections).
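The amn't gap can be made concrete with a small extension of the earlier inheritance sketch. The encoding below is again illustrative, not WG notation (the node names are invented): candidate values are collected together with their distance from the starting node, the nearest value wins, and a tie between conflicting nearest values is exactly the irresoluble conflict just described.

```python
# Illustrative: multiple inheritance with 'nearest value wins', and the
# irresoluble conflict behind *amn't. Node names are invented labels.
network = {
    "BE:neg:pres":    {"shape": "aren't"},
    "BE:pres:I-form": {"shape": "am"},
    "amn't?":         {"isa": ["BE:neg:pres", "BE:pres:I-form"]},
}

def candidates(node, attribute, depth=0):
    """All stored values for attribute, each with its isa-distance from node."""
    facts = network[node]
    if attribute in facts:
        return [(depth, facts[attribute])]
    found = []
    for parent in facts.get("isa", []):
        found += candidates(parent, attribute, depth + 1)
    return found

values = candidates("amn't?", "shape")
nearest = min(depth for depth, _ in values)
best = {value for depth, value in values if depth == nearest}
print(sorted(best))  # -> ['am', "aren't"]: two equally near shapes, neither overrides
```

For came versus comed the same procedure gives a unique winner, because the exception sits one link closer than the default; here the two models sit at the same distance, so inheritance fails and the form has no grammatical shape.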

5 The Language Network

According to WG, then, language is a network of concepts. The following more specific claims flesh out this general idea.

First, language is part of the same general conceptual network which contains many concepts that are not part of language. What distinguishes the language area of this network from the rest is that the concepts concerned are words and their immediate characteristics. This is simply a matter of definition: concepts which are not directly related to words would not be considered to be part of language. As explained in section 3.3, language probably qualifies as a module in the weak sense that the links among words are denser than those between words and other kinds of concept, but this does not mean that language is a module in the stronger sense of being 'encapsulated' or having its own special formal characteristics. This is still a matter of debate, but we can be sure that at least some of the characteristics of language are also found elsewhere - the mechanism of default inheritance and the isa relation, the notion of linear order, and many other formal properties and principles.

As we saw in Table 1, words may have a variety of links to each other and to other concepts. This is uncontroversial, and so are most of the links that are recognized. Even the traditional notions of 'levels of language' are respected, inasmuch as each level is defined by a distinct kind of link: a word is linked to its morphological structure via the 'stem' and 'shape' links, to its semantics by the 'sense' and 'referent' links, and to its syntax by dependencies and word classes. Figure 5 shows how clearly the traditional levels can be separated from one another. In WG there is total commitment to the 'autonomy' of levels, in the sense that the levels are formally distinct.

The most controversial characteristic of WG, at this level of generality, is probably the central role played by inheritance (isa) hierarchies. Inheritance hierarchies are the sole means available for classifying concepts, which means that there is no place for feature-descriptions. In most other theories, feature-descriptions are used to name concepts, so that instead of


Figure 5

'verb' we have '[+V, -N]' or (changing notation) '[Verb: +, Noun: -, SUBCAT: <NP>]' or even 'S/NP'. This is a fundamental difference because, as we saw earlier, the labels on WG nodes are simply mnemonics, and the analysis would not be changed at all if they were all removed. The same is clearly not true where feature-descriptions are used, as the name itself contains crucial information which is not shown in any other way. In order to classify a word as a verb in WG we give it an isa link to 'verb'; we do not give it a feature-description which contains that of 'verb'.

The most obviously classifiable elements in language are words, so in addition to specific, unique words we recognize general 'word-types'; but we can refer to both simply as 'words' because (as we shall see in the next section) their status is just the same. Multiple inheritance allows words to be classified on two different 'dimensions': as lexemes (DOG, LIKE, IF, etc.) and as inflections (plural, past, etc.). Figure 6 shows how this cross-classification can be incorporated into an isa hierarchy. The traditional word classes are shown on the lexeme dimension as classifications of lexemes, but they interact in complex ways with inflections. Cross-classification is possible even among word-classes; for example, English gerunds (e.g. writing in Writing articles is fun) are both nouns and verbs (Hudson 2000b), and in many languages participles are probably both adjectives and verbs.
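The cross-classification can be sketched as an isa graph. The encoding below is illustrative only (node names such as "DOG:plural" are invented labels, not WG notation): a fully inflected word inherits along both dimensions at once, and a gerund inherits from two word classes.

```python
# Illustrative: Figure 6's two-dimensional classification as an isa graph.
isa = {
    "noun":       ["word"],
    "verb":       ["word"],
    "DOG":        ["noun"],              # lexeme dimension
    "plural":     ["inflection"],        # inflection dimension
    "DOG:plural": ["DOG", "plural"],     # inherits on both dimensions at once
    "gerund":     ["noun", "verb"],      # cross-classification among word classes
}

def ancestors(node):
    """Every category a concept inherits from, via transitive 'isa' links."""
    result = set()
    for parent in isa.get(node, []):
        result.add(parent)
        result |= ancestors(parent)
    return result

print(ancestors("DOG:plural"))  # -> {'DOG', 'noun', 'word', 'plural', 'inflection'}
print(ancestors("gerund"))      # -> {'noun', 'verb', 'word'}
```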

Unlike other theories, the classification does not take words as the highest category of concepts - indeed, it cannot do so if language is part of a larger network. WG allows us to show the similarities between words and other kinds


Figure 6

of communicative behaviour by virtue of an isa link from 'word' to 'communication', and similar links show that words are actions and events. This is important in the analysis of deictic meanings, which have to relate to the participants and circumstances of the word as an action.

This hierarchy of words is not the only isa hierarchy in language. There are two more for speech sounds ('phonemes') and for letters ('graphemes'), and a fourth for morphemes and larger 'forms' (Hudson 1997b; Creider and Hudson 1999), but most important is the one for relationships - 'sense', 'subject' and so on. Some of these relationships belong to the hierarchy of dependents which we shall discuss in the section on syntax, but there are many others which do not seem to comprise a single coherent hierarchy peculiar to language (in contrast with the 'word' hierarchy). What seems much more likely is that relationships needed in other areas of thought (e.g. 'before', 'part-of') are put to use in language.

To summarize, the language network is a collection of words and word-parts (speech-sounds, letters and morphemes) which are linked to each other and to the rest of cognition in a variety of ways, of which the most important is the 'isa' relationship, which classifies them and allows default inheritance.

6 The Utterance Network

A WG analysis of an utterance is also a network; in fact, it is simply an extension of the permanent cognitive network, in which the relevant word tokens comprise a 'fringe' of temporary concepts attached by 'isa' links, so the utterance network has just the same formal characteristics as the permanent network. For example, suppose you say to me 'I agree.' My task, as hearer, is to segment your utterance into the two words I and agree, and then to classify each of these as an example of some word in my permanent network (my grammar). This is possible to the extent that default inheritance can apply smoothly; so, for example, if my grammar says that I must be the subject of a tensed verb, the same must be true of this token, though as we shall see below, exceptions can be tolerated. In short, a WG grammar can generate representations of actual utterances, warts and all, in contrast with most other kinds of grammar, which generate only idealized utterances or 'sentences'. This blurring of the boundary between grammar and utterance is very controversial, but it follows inevitably from the cognitive orientation of WG.

The status of utterances has a number of theoretical consequences both for the structures generated and for the grammar that generates them. The most obvious consequence is that word tokens must have different names from the types of which they are tokens; in our example, the first word must not be shown as I if this is also used as the name for the word-type in the grammar. This follows from the fact that identical labels imply identity of concept, whereas tokens and types are clearly distinct concepts. The WG convention is to reserve conventional names for types, with tokens labelled 'w1', 'w2' and so on through the utterance. Thus our example consists of w1 and w2, which isa 'I' and 'AGREE: pres' respectively. This system allows two tokens of the same type to be distinguished; so in I agree I made a mistake, w1 and w3 both isa 'I'. (For simplicity WG diagrams in this chapter only respect this convention when it is important to distinguish tokens from types.)

Another consequence of integrating utterances into the grammar is that word types and tokens must have characteristics such that a token can inherit them from its type. Obviously the token must have the familiar characteristics of types - it must belong to a lexeme and a word class, it must have a sense and a stem, and so on. But the implication goes in the other direction as well: the type may mention some of the token's characteristics that are normally excluded from grammar, such as characteristics of the speaker, the addressee and the situation. This allows a principled account of deictic meaning (e.g. I refers to the speaker, you to the addressee and now to the time of speaking), as shown in Figure 1 and Table 1. Perhaps even more importantly, it is possible to incorporate sociolinguistic information into the grammar, by indicating the kind of person who is a typical speaker or addressee, or the typical situation of use.
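Deictic senses of this kind can be pictured as facts that consult the token's context of use. The little sketch below is mine; the context attributes and sample values are invented for illustration.

# Deixis as a mapping from the utterance's context to a referent (Python).
CONTEXT = {'speaker': 'Jo', 'addressee': 'Sam', 'time': 'now-ish'}

DEIXIS = {
    'I': lambda context: context['speaker'],
    'you': lambda context: context['addressee'],
    'now': lambda context: context['time'],
}

print(DEIXIS['I'](CONTEXT))    # Jo  - the token's referent is its speaker
print(DEIXIS['you'](CONTEXT))  # Sam - and 'you' picks out the addressee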

Treating utterances as part of the grammar has two further effects which are important for the psycholinguistics of processing and of acquisition. As far as processing is concerned, the main point is that WG accommodates deviant input because the link between tokens and types is guided by the rather liberal 'Best Fit Principle' (Hudson 1990: 45ff): assume that the current token isa the type that provides the best fit with everything that is known. The default inheritance process which this triggers allows known characteristics of the token to override those of the type; for example, a misspelled word such as mispelled can isa its type, just like any other exception, though it will also be shown as a deviant example. There is no need for the analysis to crash because of an error. (Of course a WG grammar is not in itself a model of either production or perception, but simply provides a network of knowledge which the processor can exploit.) Turning to learning, the similarity between tokens and types means that learning can consist of nothing but the permanent storage of tokens minus their utterance-specific content.
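The Best Fit Principle itself can be caricatured in a few lines. In the sketch below, which is mine rather than Hudson's formulation, an invented string-similarity score stands in for 'the best fit with everything that is known', and a tiny table of stored types stands in for the grammar; the point is only that classification succeeds, with the token's own facts overriding the inherited ones and the deviance recorded.

# Classifying a deviant token by best fit plus default inheritance (Python).
from difflib import SequenceMatcher

TYPES = {
    'MISSPELL: past': {'spelling': 'misspelled', 'word-class': 'verb'},
    'SPELL: past':    {'spelling': 'spelled',    'word-class': 'verb'},
}

def fit(observed, stored):
    # Crude stand-in for 'best fit with everything that is known'.
    return SequenceMatcher(None, observed['spelling'], stored['spelling']).ratio()

def analyse(observed):
    type_name = max(TYPES, key=lambda t: fit(observed, TYPES[t]))
    token = dict(TYPES[type_name])  # inherit the type's characteristics...
    token.update(observed)          # ...but the token's own facts override
    token['deviant'] = observed['spelling'] != TYPES[type_name]['spelling']
    return type_name, token

# The deviant token 'mispelled' still isa 'MISSPELL: past'; no crash.
print(analyse({'spelling': 'mispelled'}))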

These remarks about utterances are summarized in Figure 7, which speculates about my mental representation for the (written) 'utterance' Yous mispelled it. According to this diagram, the grammar supplies two kinds of utterance-based information about w1:

• that its referent is a set whose members include its addressee;

• that its speaker is a 'northerner' (which may be inaccurate factually, but is roughly what I believe to be the case).

It also shows that w2 is a deviant token of the type 'MISSPELL: past'. (The horizontal line below 'parts' is short-hand for a series of lines connecting the individual letters directly to the morpheme, each with a distinct part name: part 1, part 2 and so on.)

Figure 7


7 Morphology

As explained earlier, the central role of the word automatically means that the syntax is 'morphology-free'. Consequently it would be fundamentally against the spirit of WG to follow transformational analyses in taking Jo snores as Jo 'tense' snore. A morpheme for tense is not a word in any sense, so it cannot be a syntactic node. The internal structure of words is handled almost entirely by morphology. (The exception is the pattern found in clitics, which we return to at the end of this section.)

The WG theory of inflectional morphology has developed considerably in the last few years (Creider and Hudson 1998; Hudson 2000a) and is still evolving. In contrast with the views expressed in EWG, I now distinguish sharply between words, which are abstract, and forms, which are their concrete (visible or audible) shapes; so I now accept the distinction between syntactic words and phonological words (Rosta 1997) in all but terminology. The logic behind this distinction is simple: if two words can share the same form, the form must be a unit distinct from both. For example, we must recognize a morpheme {bear} which is distinct from both the noun and the verb that share it (BEARnoun and BEARverb). This means that a word can never be directly related to phonemes and letters, in contrast with the EWG account where this was possible (e.g. Hudson 1990: 90: 'whole of THEM = <them>'). Instead, words are mapped to forms, and forms to phonemes and letters. A form is the 'shape' of a word, and a phoneme or letter is a 'pronunciation' or 'spelling' of a form. In Figure 7, for example, the verb MISSPELL has the form {misspell} as its stem (a kind of shape), and the spelling of {misspell} is <misspell>.

In traditional terms, syntax, form and phonology define different 'levels of language'. As in traditional structuralism, their basic units are distinct words, morphemes and phoneme-type segments; and as in the European tradition, morphemes combine to define larger units of form which are still distinct from words. For example, {misspell} is clearly not a single morpheme, but it exists as a unit of form which might be written {mis+spell} - two morphemes combining to make a complex form - and similarly for {mis+spell+ed}, the shape of the past tense of this verb. Notice that in this analysis { } indicates forms, not morphemes; morpheme boundaries are shown by '+'.

Where does morphology, as a part of the grammar, fit in? Inflectional morphology is responsible for any differences between a word's stem - the shape of its lexeme - and its whole - the complete shape. For example, the stem of misspelled is {misspell}, so inflectional morphology explains the extra suffix. Derivational morphology, on the other hand, explains the relations between the stems of distinct lexemes - in this case, between the lexemes SPELL and MISSPELL, whereby the stem of one is contained in the stem of the other. The grammar therefore contains the following 'facts' (sketched in code after the list):

• the stem of SPELL is {spell};

• the stem of MISSPELL is {mis+spell};
• the 'mis-verb' of a verb has a stem which contains {mis} + the stem of this verb;
• the whole of MISSPELL: past is {mis+spell+ed};
• the past tense of a verb has a whole which contains its stem + {ed}.
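These facts lend themselves to a direct rendering in code. The sketch below is an illustration of mine, not a WG notation: stems are strings with '+' marking morpheme boundaries, and the two rules are ordinary functions.

# The five morphological 'facts' above, as data and functions (Python).
STEMS = {'SPELL': 'spell'}

def mis_verb_stem(verb):
    # The 'mis-verb' of a verb: {mis} + the stem of that verb.
    return 'mis+' + STEMS[verb]

STEMS['MISSPELL'] = mis_verb_stem('SPELL')

def past_whole(lexeme):
    # The past tense of a verb: a whole containing its stem + {ed}.
    return STEMS[lexeme] + '+ed'

print(STEMS['MISSPELL'])       # mis+spell
print(past_whole('MISSPELL'))  # mis+spell+ed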

In more complex cases (which we cannot consider here) the morphological rules can handle vowel alternations and other departures from simple combination of morphemes.

A small sample of a network for inflectional morphology is shown in Figure 8. This diagram shows the default identity of whole and stem, and the default rule for plural nouns: their shape consists of their stem followed by {s}. No plural need be stored for regular nouns like DUCK, but for GOOSE the irregularity must be stored. According to the analysis shown here, geese is doubly irregular, having no suffix and having an irregular stem whose vowel positions (labelled here simply '1' and '2') are filled by (examples of) <e> instead of the expected <o>. In spite of the vowel change the stem of geese isa the stem of GOOSE, so it inherits all the other letters, but had it been suppletive a completely new stem would have been supplied.

Figure 8
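The logic of Figure 8 can be caricatured the same way: nothing is stored for DUCK, whose plural follows from the default rule, while the stored entry for GOOSE overrides it. The lookup below is my simplification of default inheritance, not the network itself (in particular it stores the whole form geese rather than just the vowel change).

# Default plural rule with a stored irregular (Python).
STEM = {'DUCK': 'duck', 'GOOSE': 'goose'}
STORED_PLURALS = {'GOOSE': 'geese'}  # doubly irregular: no {s}, new vowels

def plural_whole(lexeme):
    # A stored irregularity wins over the default 'stem + {s}' rule.
    if lexeme in STORED_PLURALS:
        return STORED_PLURALS[lexeme]
    return STEM[lexeme] + '+s'

print(plural_whole('DUCK'))   # duck+s - nothing stored, default applies
print(plural_whole('GOOSE'))  # geese  - stored exception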


This analysis is very similar to those which can be expressed in terms of 'network morphology' (Brown et al. 1996), which is also based on multiple default inheritance. One important difference lies in the treatment of syncretism, illustrated by the English verb's past participle and passive participle, which are invariably the same. In network morphology the identity is shown by specifying one and cross-referring to it from the other, but this involves an arbitrary choice: which is the 'basic' one? In WG morphology, in contrast, the syncretic generalizations are expressed in terms of 'variant' relations between forms; for example, the past participle and passive participle both have as their whole the 'en-variant' of their stem, where the en-variant of {take} is {taken} and that of {walk} is {walked}. The en-variant is a 'morphological function' which relates one form (the word's stem) to another, allowing the required combination of generalization (by default a form's en-variant adds {ed} to a copy of the form) and exceptionality.
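A morphological function of this kind is naturally modelled as a function with a default and stored exceptions. The sketch below is mine; what matters is that the past participle and the passive participle both call the same en-variant, so their identity follows without choosing either as 'basic'.

# The 'en-variant' as a morphological function (Python).
EN_VARIANT_EXCEPTIONS = {'take': 'taken'}

def en_variant(form):
    # By default the en-variant adds {ed} to a copy of the form.
    return EN_VARIANT_EXCEPTIONS.get(form, form + '+ed')

def past_participle_whole(stem):
    return en_variant(stem)

def passive_participle_whole(stem):
    return en_variant(stem)  # same function, so always the same shape

print(en_variant('walk'))  # walk+ed
print(en_variant('take'))  # taken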

As derivational morphology is responsible for relationships between lexemes, it relates one lexeme's stem to that of another by means of exactly the same apparatus of morphological functions as is used in inflectional morphology - indeed, some morphological functions may be used both in inflection and in derivation (for example, the one which is responsible for adding {ing} is responsible not only for present participles but also for nominalizations such as flooring). Derivational morphology is not well developed in WG, but the outlines of a system are clear. It will be based on abstract lexical relationships such as 'mis-verb' (relating SPELL to MISSPELL) and 'nominalization' (relating it to SPELLING); these abstract relations between words are realized, by default, by (relatively) concrete morphological functions, so, for example, a verb's nominalization is typically realized by the ing-variant of that verb's stem. Of course, not all lexical relationships are realized by derivational morphology, in which related lexemes are partly similar in morphology; the grammar must also relate lexemes where morphology is opaque (e.g. DIE - KILL, BROTHER - SISTER). The network approach allows us to integrate all these relationships into a single grammar without worrying about boundaries between traditional sub-disciplines such as derivational morphology and lexical semantics.

I said at the start of this section that clitics are an exception to the generally clear distinction between morphology and syntax. A clitic is a word whose realization is an affix within a larger word. For example, in He's gone, the clitic 's is a word in terms of syntax, but its realization is a mere affix in terms of morphology. They are atypical because typical words are realized by an entire word-form; but the exceptionality is just a matter of morphology. In the case of 's, I suggest that it isa the word 'BE: present, singular' with the one exceptional feature that its whole isa the morpheme {s} - exactly the same morpheme as we find in plural nouns, other singular verbs and possessives. As in other uses, {s} needs to be part of a complete word-form, so it creates a special form called a 'host-form' to combine it with a suitable word-form to the left.

In more complex cases ('special clitics' - Zwicky 1977) the position of the clitic is fixed by the morphology of the host-form and conflicts with the demands of syntax, as in the French example (3), where en would follow deux (*Paul mange deux en) if it were not attached by cliticization to mange, giving a single word-form en mange.

(3) Paul en mange deux.
    Paul of-them eats two
    'Paul eats two of them.'

Once again we can explain this special behaviour if we analyze en as an ordinary word EN whose shape (whole) is the affix {en}. There is a great deal more to be said about clitics, but not here. For more detail see Hudson (2001) and Camdzic and Hudson (2002).

8 Syntax

As in most other theories, syntax is the best developed part of WG, which offers explanations for most of the 'standard' complexities of syntax such as extraction, raising, control, coordination, gapping and agreement. However, the WG view of syntax is particularly controversial because of its rejection of phrase structure. WG belongs to the family of 'dependency-based' theories, in which syntactic structure consists of dependencies between pairs of single words. As we shall see below, WG also recognizes 'word-strings', but even these are not the same as conventional phrases.

A syntactic dependency is a relationship between two words that are connected by a syntactic rule. Every syntactic rule (except for those involved in coordination) is 'carried' by a dependency, and every dependency carries at least one rule that applies to both the dependent and its 'parent' (the word on which it depends). These word-word dependencies form chains which link every word ultimately to the word which is the head of the phrase or sentence; consequently the individual links are asymmetrical, with one word depending on the other for its link to the rest of the sentence. Of course in some cases the direction of dependency is controversial; in particular, published WG analyses of noun phrases have taken the determiner as head of the phrase, though this analysis has been disputed and may turn out to be wrong (Van Langendonck 1994; Hudson 2004). The example in Figure 9 illustrates all these characteristics of WG syntax.

A dependency analysis has many advantages over one based on phrase structure. For example, it is easy to relate a verb to a lexically selected preposition if they are directly connected by a dependency, as in the pair consists of in Figure 9; but it is much less easy (and natural) to do so if the preposition is part of a prepositional phrase. Such lexical interdependencies are commonplace in language, so dependency analysis is particularly well suited to descriptions which focus on 'constructions' - idiosyncratic patterns not covered by the most general rules (Holmes and Hudson 2005). A surface dependency analysis (explained below) can always be translated into a phrase structure by building a phrase for each word consisting of that word plus the phrases of all the words that depend on it (e.g. a sentence; of a sentence; and so on); but
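The translation just described is simple enough to sketch. In the few lines below (my own illustration, with an invented example sentence rather than the one in Figure 9), each word records its parent, and a phrase is built for each word out of that word plus the phrases of its dependents, keeping surface order.

# From a surface dependency analysis to phrases (Python).
WORDS = ['Syntax', 'consists', 'of', 'dependencies']
PARENTS = {'Syntax': 'consists', 'of': 'consists', 'dependencies': 'of'}
# 'consists' has no parent: it is the head of the sentence.

def phrase(word):
    # The phrase of a word: the word itself plus the phrases of all
    # the words that depend on it, in surface order.
    dependents = {w for w in WORDS if PARENTS.get(w) == word}
    parts = []
    for w in WORDS:
        if w == word:
            parts.append(w)
        elif w in dependents:
            parts.extend(phrase(w))
    return parts

for w in WORDS:
    print(w, '->', ' '.join(phrase(w)))
# dependencies -> dependencies
# of -> of dependencies
# consists -> Syntax consists of dependencies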

