
Acquiring a Lexicon from Unsegmented Speech

Carl de Marcken
MIT Artificial Intelligence Laboratory
545 Technology Square, NE43-804
Cambridge, MA 02139, USA
cgdemarc@ai.mit.edu

Abstract

We present work-in-progress on the machine acquisition of a lexicon from sentences that are each an unsegmented phone sequence paired with a primitive representation of meaning. A simple exploratory algorithm is described, along with the direction of current work and a discussion of the relevance of the problem for child language acquisition and computer speech recognition.

1 Introduction

We are interested in how a lexicon of discrete words can be acquired from continuous speech, a problem fundamental both to child language acquisition and to the automated induction of computer speech recognition systems; see (Olivier, 1968; Wolff, 1982; Cartwright and Brent, 1994) for previous computational work in this area. For the time being, we approximate the problem as induction from phone sequences rather than acoustic pressure, and assume that learning takes place in an environment where simple semantic representations of the speech intent are available to the acquisition mechanism.

For example, we approximate the greater problem as that of learning from inputs like

    Phon. Input: /ðəræbItsInəbot/
    Sem. Input:  { BOAT A IN RABBIT THE BE }
    (The rabbit's in a boat.)

where the semantic input is an unordered set of identifiers corresponding to word paradigms. Obviously the artificial pseudo-semantic representations make the problem much easier: we experiment with them as a first step, somewhere between learning language "from a radio" and providing an unambiguous textual transcription, as might be used for training a speech recognition system.

Our goal is to create a program that, after training on many such pairs, can segment a new phonetic utterance into a sequence of morpheme identifiers. Such output could be used as input to many grammar acquisition programs.

2 A Simple Prototype

We have implemented a simple algorithm as an exploratory effort. It maintains a single dictionary, a set of words. Each word consists of a phone sequence and a set of sememes (semantic symbols). Initially, the dictionary is empty. When presented with an utterance, the algorithm goes through the following sequence of actions (a code sketch follows the list):

• It attempts to cover ("parse") the utterance phones and semantic symbols with a sequence of words from the dictionary, each word offset a certain distance into the phone sequence, with words potentially overlapping.

• It then creates new words that account for uncovered portions of the utterance, and adjusts words from the parse to better fit the utterance.

• Finally, it reparses the utterance with the old dictionary and the new words, and adds the new words to the dictionary if the resulting parse covers the utterance well.
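
The following is a minimal sketch of this parse/create/reparse loop, under simplifying assumptions the paper does not make: exact substring matching, no adjustment of existing words, and all-or-nothing acceptance of new words. The names Word, parse, and process are ours, not the paper's.

    # A minimal sketch of the parse / create / reparse loop described above.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Word:
        phones: str          # e.g. "nina"
        sememes: frozenset   # e.g. frozenset({"NINA"})

    def parse(phones, sememes, dictionary):
        """Greedily cover the utterance with dictionary words, longest first.
        Returns the words used, uncovered phone spans, uncovered sememes."""
        used, covered, remaining = [], [False] * len(phones), set(sememes)
        for word in sorted(dictionary, key=lambda w: -len(w.phones)):
            i = phones.find(word.phones)
            if i >= 0 and word.sememes <= remaining:
                used.append(word)
                remaining -= word.sememes
                covered[i:i + len(word.phones)] = [True] * len(word.phones)
        gaps, start = [], None          # maximal uncovered phone spans
        for i, c in enumerate(covered + [True]):
            if not c and start is None:
                start = i
            elif c and start is not None:
                gaps.append(phones[start:i])
                start = None
        return used, gaps, remaining

    def process(phones, sememes, dictionary):
        used, gaps, leftover = parse(phones, sememes, dictionary)
        # hypothesize one new word per uncovered span, crudely assigning it
        # all uncovered sememes (the paper's credit assignment is unspecified)
        new_words = {Word(g, frozenset(leftover)) for g in gaps}
        used, gaps, leftover = parse(phones, sememes, dictionary | new_words)
        if not gaps and not leftover:   # reparse covers the utterance well
            dictionary |= new_words
        return dictionary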

Occasionally, the program removes rarely-used words from the dictionary, and removes words which can themselves be parsed. The general operation of the program should be made clearer by the following two examples. In the first, the program starts with an empty dictionary, early in the acquisition process, and receives the simple utterance /nina/ { NINA } (a child's name). Naturally, it is unable to parse the input:

             Phones    Sememes
Utterance:   /nina/    { NINA }
Words:
Unparsed:    /nina/    { NINA }
Mismatched:

From the unparsed portion of the sentence, the program creates a new word, /nina/ { NINA }. It then reparses:

             Phones    Sememes
Utterance:   /nina/    { NINA }
Words:       /nina/    { NINA }
Unparsed:
Mismatched:


Having successfully parsed the input, it adds the new word to the dictionary. Later in the acquisition process, it encounters the sentence you kicked off the sock, when the dictionary contains (among other words) /yu/ { YOU }, /ðə/ { THE }, and /rsak/ { SOCK }.

             Phones             Sememes
Utterance:   /yukIktɔfðəsak/    { KICK YOU OFF SOCK THE }
Words:       /yu/               { YOU }
             /ðə/               { THE }
             /rsak/             { SOCK }
Unparsed:    /kIktɔf/           { KICK OFF }
Mismatched:  /r/

The program creates the new word /kIktɔf/ { KICK OFF } to account for the unparsed portion of the input, and /sak/ { SOCK } to fix the mismatched phone. It reparses:

             Phones             Sememes
Utterance:   /yukIktɔfðəsak/    { KICK YOU OFF SOCK THE }
Words:       /yu/               { YOU }
             /kIktɔf/           { KICK OFF }
             /ðə/               { THE }
             /sak/              { SOCK }
             /rsak/ (unused)    { SOCK }
Unparsed:
Mismatched:

On this basis, it adds /kIktɔf/ { KICK OFF } and /sak/ { SOCK } to the dictionary. /rsak/ { SOCK }, not used in this analysis, is eventually discarded from the dictionary for lack of use. /kIktɔf/ { KICK OFF } is later found to be parsable into two subwords, and also discarded.

One can view this procedure as a variant of the expectation-maximization (Dempster et al., 1977) procedure, with the parse of each utterance as the hidden variables. There is currently no preference for which words are used in a parse, save to minimize mismatches and unparsed portions of the input, but obviously a word grammar could be learned in conjunction with this acquisition process, and used as a disambiguation step.
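
As a concrete reading of that preference (the equal weighting is our assumption; the paper gives no numbers), the cost of a candidate parse might simply count its errors:

    # Hypothetical cost for ranking candidate parses: fewer mismatched
    # phones and less unparsed material is better.
    def parse_cost(n_mismatched, n_unparsed_phones, n_uncovered_sememes):
        return n_mismatched + n_unparsed_phones + n_uncovered_sememes

    # The two parses of "you kicked off the sock" above: the first has one
    # mismatched phone (/r/), six unparsed phones (/kIktɔf/), and two
    # uncovered sememes; the reparse has none, and so is preferred.
    assert parse_cost(0, 0, 0) < parse_cost(1, 6, 2)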

3 Tests and Results

To test the algorithm, we used 34438 utterances from the Childes database of mothers' speech to children (MacWhinney and Snow, 1985; Suppes, 1973). These text utterances were run through a publicly available text-to-phone engine. A semantic dictionary was created by hand, in which each root word from the utterances was mapped to a corresponding sememe. Various forms of a root ("see", "saw", "seeing") all map to the same sememe, e.g., SEE.

Semantic representations for a given utterance are merely unordered sets of sememes generated by taking the union of the sememe for each word in the utterance.
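
As an illustration of this construction, here is a sketch in code; the STEMS table is a tiny hypothetical stand-in for the hand-built semantic dictionary.

    # Map each word form to the sememe of its root and take the unordered
    # union.  The STEMS fragment below is illustrative only.
    STEMS = {"this": "THIS", "is": "BE", "a": "A", "book": "BOOK",
             "see": "SEE", "saw": "SEE", "seeing": "SEE",
             "rabbit": "RABBIT", "rabbits": "RABBIT"}

    def sememes(transcript):
        return {STEMS[w] for w in transcript.lower().split() if w in STEMS}

    print(sememes("this is a book"))  # the set { THIS BE A BOOK }, unordered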

Figure 1 contains the first 6 utterances from the database.

Sentence                       Phones                  Sememes
this is a book                 /ðIsIzəbuk/             { THIS BE A BOOK }
what do you see in the book?   /watduyusiInðəbuk/      { WHAT DO YOU SEE IN THE BOOK }
how many rabbits?              /haʊmɛniræbIts/         { HOW MANY RABBIT }
how many?                      /haʊmɛni/               { HOW MANY }
one rabbit                     /wʌnræbIt/              { ONE RABBIT }
what is the rabbit doing?      /watIzðəræbItduIŋ/      { WHAT BE THE RABBIT DO }

Figure 1: The first 6 utterances from the Childes database used to test the algorithm.

We describe the results of a single run of the algorithm, trained on one exposure to each of the 34438 utterances, containing a total of 2158 different stems. The final dictionary contains 1182 words, where some entries are different forms of a common stem. 82 of the words in the dictionary have never been used in a good parse. We eliminate these words, leaving 1100. Figure 2 presents some entries in the final dictionary, and figure 3 presents all 21 (2%) of the dictionary entries that might be reasonably considered mistakes.

Phones   Sememes     Phones     Sememes
/yu/     { YOU }     /we/       { WAY }
/ðə/     { THE }     /hi/       { HEY }
/wat/    { WHAT }    /brik/     { BREAK }
/tu/     { TO }      /fIŋgər/   { FINGER }
/du/     { DO }      /kIs/      { KISS }
/e/      { A }       /tap/      { TOP }
/It/     { IT }      /kɔld/     { CALL }
/aI/     { I }       /egz/      { EGG }
/Iz/     { IS }      /θIŋ/      { THING }
/wi/     { WE }

Figure 2: Dictionary entries. The left 10 are the 10 words used most frequently in good parses. The right 10 were selected randomly from the 1100 entries.

/iv/ { BE }               /sɛlf/ { YOU }
/zə/ { YOU }              /ə/ { BE }
/iv/ { DO }               /smʌd/ { MUD }
/ʃiz/ { SHE BE }          /ər/ { BE }
/shæpIn/ { HAPPEN }       /dont/ { DO NOT }
/t/ { NOT }               /watərðiz/ { WHAT BE THESE }
/skat/ { BOB SCOTT }      /wathæpInd/ { WHAT HAPPEN }
/nidəlz/ { NEEDLE BE }    /dranʌðərwiz/ { DROWN OTHERWISE }
/sʌmθ/ { SOMETHING }
/snupi/ { SNOOPY }
/ætðəzu/ { AT ZOO }

Figure 3: All of the significant dictionary errors. Some of them, like /ʃiz/, are conglomerations that should have been divided. Others, like /t/, /wo/, and /don/, demonstrate how the system compensates for the morphological irregularity of English contractions. The /Iŋ/ problem is discussed in the text; misanalysis of the role of /Iŋ/ also manifests itself on something.

The most obvious error visible in figure 3 is the suffix -ing (/Iŋ/), which should have an empty sememe set. Indeed, such a word is properly hypothesized, but a special mechanism prevents semantically empty words from being added to the dictionary. Without this mechanism, the system would chance



upon a new word like ring, /rIŋ/, use the /Iŋ/ { } to account for most of the sound, and build a new word /r/ { RING } to cover the rest; witness something in figure 3. Most other semantically-empty affixes (the plural /s/, for instance) are also properly hypothesized and disallowed, but the dictionary learns multiple entries to account for them (/eg/ "egg" and /egz/ "eggs"). The system learns synonyms ("is", "was", "am", ...) and homonyms ("read", "red"; "know", "no") without difficulty.

Removing the restriction on empty semantics, and also setting the semantics of the function words a, an, the, that and of to {}, the most common empty words learned are given in figure 4. The ring problem surfaces: among other words learned are now /k/ { CAR } and /br/ { BRING }. To fix such problems, it is obvious that more constraint on morpheme order must be incorporated into the parsing process, perhaps in the form of a statistical grammar acquired simultaneously with the dictionary (a sketch of one such grammar follows figure 4).

Word       Source         Word       Source
/ðə/ { }   the            /wo/ { }   ?
/o/ { }    ?              /e/ { }    a
/r/ { }    your/you       /əv/ { }   of
/s/ { }    plural -s      /z/ { }    plural -s
/t/ { }    is/'s

Figure 4: The most common semantically empty words in the final dictionary.
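
One possible shape for that statistical grammar (an assumption on our part; the paper only proposes the idea) is a bigram model over morpheme identifiers, estimated from the learner's own accepted parses and used to rerank parses that the cost function cannot distinguish:

    # Sketch of a bigram grammar over morpheme identifiers for parse
    # disambiguation.  The smoothing scheme and vocabulary size are
    # illustrative assumptions, not the paper's method.
    from collections import Counter
    import math

    bigrams, contexts = Counter(), Counter()

    def observe(parse):
        # parse: a sequence of morpheme identifiers from an accepted parse
        for a, b in zip(("<s>",) + tuple(parse), tuple(parse) + ("</s>",)):
            bigrams[(a, b)] += 1
            contexts[a] += 1

    def log_prob(parse, alpha=1.0, vocab=1100):
        # add-alpha smoothed bigram log-probability of a candidate parse
        lp = 0.0
        for a, b in zip(("<s>",) + tuple(parse), tuple(parse) + ("</s>",)):
            lp += math.log((bigrams[(a, b)] + alpha) / (contexts[a] + alpha * vocab))
        return lp

    observe(["YOU", "KICK OFF", "THE", "SOCK"])
    print(log_prob(["YOU", "KICK OFF", "THE", "SOCK"]))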

4 Current Directions

The algorithm described above is extremely simple, as was the input fed to it. In particular,

• The input was phonetically oversimplified, each word pronounced the same way each time it occurred, regardless of environment. There was no phonological noise and no cross-word effects.

• The semantic representations were not only noise free and unambiguous, but corresponded directly to the words in the utterance.

To better investigate more realistic formulations of the acquisition problem, we are extending our coverage to actual phonetic transcriptions of speech, by allowing for various phonological processes and noise, and by building in probabilistic models of morphology and syntax. We are further reducing the information present in the semantic input by removing all function word symbols and merging various content symbols to encompass several word paradigms. We hope to transition to phonemic input produced by a phoneme-based speech recognizer in the near future.

Finally, we are instituting an objective test measure: rather than examining the dictionary directly, we will compare segmentation and morpheme-labeling to textual transcripts of the input speech.

5 Acknowledgements

This research is supported by NSF grant 9217041-ASC and ARPA under the HPCC program.

References

Timothy Andrew Cartwright and Michael R. Brent. 1994. Segmenting speech without a lexicon: Evidence for a bootstrapping model of lexical acquisition. In Proc. of the 16th Annual Meeting of the Cognitive Science Society, Hillsdale, New Jersey.

A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B(39):1-38.

B. MacWhinney and C. Snow. 1985. The child language data exchange system. Journal of Child Language, 12:271-296.

Donald Cort Olivier. 1968. Stochastic Grammars and Language Acquisition Mechanisms. Ph.D. thesis, Harvard University, Cambridge, Massachusetts.

Patrick Suppes. 1973. The semantics of children's language. American Psychologist.

J. Gerald Wolff. 1982. Language acquisition, data compression and generalization. Language and Communication, 2(1):57-89.
