
DOCUMENT INFORMATION

Basic information

Title: Automated text summarization
Author: Do Thuy Duong
Supervisor: Dr. Nguyen Xuan Hoai
University: Hanoi University
Major: Computer Science
Document type: Graduation thesis
Year: 2011
City: Hanoi

Format

Pages: 64
File size: 6.96 MB


Structure

  • 1.1. Objectives
  • 1.2. Motivation
  • 1.3. Methodology
  • 1.4. An overview of the rest of the document
  • 2.1. Definition
  • 2.2. Genres of summary
  • 2.3. Some methods of summarization
    • 2.3.1. Stage 1: Topic extraction
    • 2.3.2. Stage 2: Interpretation
    • 2.3.3. Stage 3: Summary generation
    • 2.3.4. Multi-document summarization
  • 2.4. Summarization evaluation
  • 2.5. Two typical methods studied
    • 2.5.1. Discourse parsing
    • 2.5.2. Lexical chain and related knowledge
  • 3.1. Reasons of choosing lexical chain method
  • 3.2. Lexical Chain method overview
    • 3.2.1. Preprocessing
    • 3.2.2. Noun filtering
    • 3.2.3. Lexical chaining
    • 3.2.4. Sentence extracting
  • 3.3. Algorithm 1: Regina Barzilay and Michael Elhadad (1997)
  • 3.4. Algorithm 2: H. Gregory Silber and Kathleen F. McCoy (2003)
  • 3.5. Algorithm 3: Michel Galley and Kathleen McKeown (2003)
  • 4.1. Algorithm
  • 4.2. Our summarizer system
  • 4.3. Experiments and evaluation
  • 4.4. Limitations and future work
  • Chapter 5: Conclusion

Contents

Automated text summarization means detecting important content in one or more documents. This is a very challenging problem, relating to many scientific areas.

1.1 Objectives

This thesis addresses the challenging artificial intelligence problem of automated text summarization and aims to develop an automatic summarization system. Specifically, I combine selected algorithms proposed by previous researchers to create a system capable of producing concise, informative summaries from input texts.

Text summarization is a large and diverse field. Source texts span many genres, including short stories, poetry, journals, magazines, newspapers, and scientific articles. Each genre has its own presentation style, which makes building a single, universal summarization system extremely challenging. In this study, we constrain the scope to single-document summarization of news articles, focusing on the linguistic characteristics of the text. The summarization approach examined here is extractive: it selects key sentences from the original document rather than generating new content.

1.2 Motivation

Today, information technology is growing dramatically, earning this era the title of the digital age. Every day, an enormous volume of articles, news, and journals is posted and published, contributing to a flood of information. We are witnessing a rapid expansion of content as new material appears continually online. According to Lyman and Varian, by 2003 there were about four billion websites indexed by Google and roughly 200 terabytes of material on the Web. By 2007, these figures were even bigger, up to 10 billion websites indexed.

As a consequence, the problem of how to absorb as much information as possible emerges.

How can we put a book on a scanner, turn the dial to '2 pages', and just read the 2-page result? Or download thousands of documents from the web, send them to the summarizer, and finally read only the best ones instead of all of them? All of those abilities lead to an exciting challenge: AUTOMATED TEXT SUMMARIZATION. The practical applications of summarization help us use an enormous amount of information efficiently. Search tools such as google.com, yahoo.com, timnhanh.com, etc. return hundreds of thousands of results. We need automated summarization as we need headline news for informing, TV guides for decision making, abstracts of papers for time saving, and graphical maps (which show us the shortest path) for orienting. Instead of reading all the documents returned, users just need to read their summaries, then filter and find the most suitable information more easily and quickly than ever before.

Early experiments in the late 1950s and early 1960s suggested that computer-generated text summarization held real promise. After a hiatus, the gradual development of natural language processing (NLP), together with increases in computer memory and processing speed and the rising volume of online texts, has renewed interest in automated text summarization.

The present state of text summarization and the advantages it offers motivate us to pursue research in this challenging field. Our goal is to investigate different summarization methods and to develop an automatic system capable of efficiently summarizing text.

1.3 Methodology

To achieve my goals in text summarization, I gathered a wide range of resources: e-books, journals, and reputable websites in the field. At first, the work was quite challenging, since text summarization is a tough area in which many researchers have spent long periods and still encounter difficulties and limitations. As I progressed, the pieces began to fall into place. By combining the strongest aspects of several existing algorithms, I developed a summarization system that analyzes lexical features to generate concise, coherent summaries.

Using the lexical chain method, the system was implemented in Python, a dynamic, general-purpose, high-level language with a large standard library and strong object-oriented support. In natural language processing, researchers widely rely on Python and the Natural Language Toolkit (NLTK) for robust text processing. I also collected a diverse set of news articles to test the system, and used Microsoft Word's AutoSummarize tool to compare its results with those of the system I developed.
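As a concrete illustration, the preprocessing and noun-filtering steps that such a pipeline relies on can be sketched with NLTK. This is a minimal sketch under stated assumptions: the function name and exact pipeline are illustrative, not the thesis's actual code.

```python
# Minimal sketch of NLTK-based preprocessing for a lexical chain
# summarizer: sentence splitting, tokenization, POS tagging, and noun
# filtering. Requires nltk.download('punkt') and
# nltk.download('averaged_perceptron_tagger') on first use.
import nltk

def extract_nouns(text):
    """Return the sentences and, per sentence, its nouns (candidate chain members)."""
    sentences = nltk.sent_tokenize(text)      # split text into sentences
    nouns_per_sentence = []
    for sent in sentences:
        tokens = nltk.word_tokenize(sent)     # split sentence into tokens
        tagged = nltk.pos_tag(tokens)         # Penn Treebank POS tags
        nouns = [w.lower() for w, tag in tagged if tag.startswith("NN")]
        nouns_per_sentence.append(nouns)
    return sentences, nouns_per_sentence

sents, nouns = extract_nouns("The cat sat on the mat. A dog chased the cat.")
print(nouns)   # e.g. [['cat', 'mat'], ['dog', 'cat']]
```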

1.4 An overview of the rest of the document

This thesis is organized as follows:

Chapter 1 introduces the objectives, the motivation, and the main content of the thesis.

Chapter 2 provides all the essential background information related to summarization: its types, the evaluation measures, as well as the background knowledge of WordNet.

A key element of the lexical chain method is emphasized so that the following chapter on this summarization approach is easily understood. This chapter presents two summarization methods in greater detail: the discourse parsing method and the lexical chain method. Of the two, the lexical chain method is selected for deeper exploration to build an effective summarization system.

Chapter 3: Lexical Chain algorithms

This chapter thoroughly describes some popular algorithms of the lexical chain method which previous researchers have proposed.

Chapter 4: Building a summarization system

This chapter describes how our system is implemented, the accompanying experiment, and the resulting evaluation. We built a corpus consisting of human summaries and Microsoft Word 2007 summaries to enable a direct comparison with our system. Finally, we identify several limitations and outline opportunities for improvement in future work.

Chapter 5: Conclusion

This last chapter briefly reviews the knowledge that the thesis has presented. Furthermore, it mentions a number of problems I encountered and the experience I gained during the completion of the thesis.

Definition

Automated text summarization is the generation of a shorter version of a text by a computer program while still keeping the most important points of the original text.

Automated text summarization takes a source text, extracts its most significant content, and presents it in a condensed form tailored to the user's or application's needs. It emphasizes essential ideas while eliminating extraneous details, enabling quick, accurate understanding and efficient decision-making across diverse contexts.

Genres of summary

An extractive summary uses only units, ranging from single words to whole paragraphs, that are taken verbatim from the original text. By contrast, an abstractive summary is newly generated text that covers the content of the source, requiring the summarizer to have prior knowledge of the topic in order to produce coherent, original wording.

Intended audience divides summaries into two types: generic and query-oriented. A generic summary presents the author's point of view on the source text and gives even attention to all parts of the text, producing a broad, balanced overview. A query-oriented (or user-oriented) summary, by contrast, is tailored to the reader's needs, highlighting the specific aspects of the text that the user wants to learn about.

Usage: an indicative summary identifies the main subject matter or domain of the input text without conveying its contents; after reading it, you can explain what the text was about, but not necessarily what was in it. An informative summary covers portions of the content and lets you describe specific parts of what appeared in the input text.

- Expansiveness:
  o Background: assumes readers do not have prior knowledge about the source text topic.
  o Just-the-news: supposes readers' prior knowledge is up-to-date.

- Monolingual vs. cross-lingual: summarizes in the same language only, vs. summarizes and also translates into another language.

- Single-document vs. multi-document source: summarizes only one source text, vs. fuses together many source texts.

Some methods of summarization

To generate a summary, a system must evaluate every unit of the text (paragraphs, sentences, even individual words), decide which elements to keep or discard, and then reconstruct the selected material into a coherent, condensed version that preserves the core ideas. Conceptually, the method unfolds in three stages:

- Topic identification: identify the material to keep.

- Interpretation/compaction: combine and compress it as much as possible.

- Generation: output it in the format required.

Extraction systems perform only the first step, while abstraction systems carry out the first two, and usually also the third.

Stage 1: Topic extraction

This stage computes an importance score for each text unit, typically a sentence, ranks the units by these scores, and outputs the top N percent. To preserve coherence, the selected sentences are returned in document order rather than strictly by rank, so the resulting extract remains readable while highlighting the most significant content.

During this stage, most systems use a modular architecture with several independent modules. Each module assigns a score to every input unit, whether a word, a sentence, or a longer passage. A combination module then aggregates these scores into a single integrated score per unit. Finally, based on the requested summary length, the system selects the top-scoring units for the final output. A minimal sketch of this scheme follows.
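The sketch below assumes two toy scoring modules (position and length); both are illustrative stand-ins, not the modules of any particular published system.

```python
# Sketch: several modules score every sentence, a combination module
# sums the scores, and the top N% of sentences are returned in their
# original document order to keep the extract readable.

def position_score(i):
    return 1.0 / (i + 1)                           # earlier sentences score higher

def length_score(sentence):
    return min(len(sentence.split()) / 20.0, 1.0)  # mildly favor fuller sentences

def extract(sentences, ratio=0.2):
    scores = [position_score(i) + length_score(s)  # combination module: sum
              for i, s in enumerate(sentences)]
    k = max(1, int(len(sentences) * ratio))
    top = sorted(range(len(sentences)),
                 key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]     # restore document order
```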

Determining the optimal unit size for scoring remains an open issue. Most systems evaluate text one sentence at a time; however, Fukushima, Ehara, and Shirai (1999) argued that sub-sentence units produce shorter summaries that carry more information, while other work prefers larger units that better preserve context and coherence.

Strzalkowski et al. (1999) showed that including sentences adjacent to important sentences helps to increase coherence.

In nearly any text, key information tends to appear in strategic spots: headings, titles, and especially the lead paragraph. The simplest, and often most effective, method is therefore to treat the lead paragraph as a summary of the piece. Generalizing this observation about where the most significant sentences occur leads to the concept of the Optimal Position Policy.

The Optimal Position Policy (OPP) is a ranked list of sentence positions, learned from a training corpus, indicating the ordinal positions in a text where high-topic-bearing sentences tend to occur (Luhn, 1959; Lin and Hovy, 1997). It is built as follows:

• Step 1: For each article, index sentence positions.

• Step 2: For each sentence, determine the yield (= overlap between the sentence and the index terms for the article).

• Step 3: Create a partial ordering over the locations where sentences containing important words occur.

For example, for the Ziff-Davis corpus (13,000 newspaper articles), they found the list ordering from the highest-topic-bearing sentence position to the lowest to be: T1 > (P2, S1) > (P3, S1) > ...

This means the title (T1) is the most likely to bear topics, followed by the first sentence of paragraph 2, the first sentence of paragraph 3, etc.
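A small sketch of this training procedure follows, assuming each article is given as its sentence list plus its index terms, and simplifying positions to plain sentence ordinals (the published OPP uses title/paragraph positions such as T1 or (P2, S1)).

```python
# Sketch of OPP training: measure each position's average yield, i.e.
# the overlap between the sentence at that position and the article's
# index terms, then rank positions from highest to lowest yield.
from collections import defaultdict

def opp_ranking(articles):
    """articles: list of (sentences, index_terms) pairs."""
    yields = defaultdict(list)
    for sentences, index_terms in articles:
        terms = {t.lower() for t in index_terms}
        for pos, sent in enumerate(sentences):
            words = {w.lower() for w in sent.split()}
            yields[pos].append(len(terms & words))    # Step 2: the yield
    avg = {pos: sum(ys) / len(ys) for pos, ys in yields.items()}
    return sorted(avg, key=avg.get, reverse=True)     # Step 3: ordering
```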

Sentence extraction has known weaknesses:

• Lack of coherence: pronouns such as he, she, it, this, that, etc. may refer back to sentences that are not included in the extract.

• Unintended implications may arise when two non-contiguous sentences are put together:

Many people were fooled by the salesman's smooth talk. [But not everyone.] Mr. Barker ...

In this case, Mr. Barker is not the salesman, but omitting the bracketed sentence from the extract makes him appear to be.

Claim: Words in titles, headings, or boldfaced units of a text are important; therefore, increase the scores of sentences that contain them (Luhn 1959).

Claim 1: Important sentences contain 'bonus phrases', such as significantly, in this paper we show, and in conclusion, while unimportant sentences contain 'stigma phrases' such as hardly and impossible.

Claim 2: These phrases can be figured out automatically (Kupiec et al. 1995; Teufel and Moens 1997).

Claim: Important sentences contain words that occur frequently, and their scores increase for each frequent word they contain. A small sketch combining the frequency and phrase claims follows.
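In the sketch below, the phrase lists and the weight of 5 are illustrative assumptions.

```python
# Sketch: score each sentence by summed word frequency (Luhn-style),
# then reward bonus phrases and penalize stigma phrases.
from collections import Counter

BONUS = ("significantly", "in this paper we show", "in conclusion")
STIGMA = ("hardly", "impossible")

def score_sentences(sentences):
    freq = Counter(w.lower() for s in sentences for w in s.split())
    scores = []
    for s in sentences:
        score = sum(freq[w.lower()] for w in s.split())  # frequent words add up
        low = s.lower()
        score += 5 * sum(p in low for p in BONUS)        # bonus phrases
        score -= 5 * sum(p in low for p in STIGMA)       # stigma phrases
        scores.append(score)
    return scores
```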

These methods focus on cohesion across sentences in the text.

Construct lexical chains by linking semantically related words through identity, hypernym/hyponym, and synonym/antonym relationships. These chains reveal the thematic structure of the text and help identify the most highly connected sentences or paragraphs, which tend to carry the central ideas (Morris and Hirst, 1991).
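A greedy sketch of such chaining over a noun sequence, using WordNet through NLTK (requires nltk.download('wordnet')). Real chaining algorithms, including those in Chapter 3, are far more careful about word-sense disambiguation; this toy version simply attaches a noun to the first chain containing a related word.

```python
# Sketch: a noun joins an existing chain if it is identical to, a
# synonym of, or a direct hypernym/hyponym of a chain member;
# otherwise it starts a new chain.
from nltk.corpus import wordnet as wn

def related(w1, w2):
    if w1 == w2:                                   # identity
        return True
    for s1 in wn.synsets(w1, pos=wn.NOUN):
        for s2 in wn.synsets(w2, pos=wn.NOUN):
            if s1 == s2:                           # synonyms share a synset
                return True
            if s1 in s2.hypernyms() or s2 in s1.hypernyms():
                return True                        # direct hypernym/hyponym
    return False

def build_chains(nouns):
    chains = []
    for noun in nouns:
        for chain in chains:
            if any(related(noun, member) for member in chain):
                chain.append(noun)
                break
        else:                                      # no related chain found
            chains.append([noun])
    return chains

print(build_chains(["car", "automobile", "wheel", "poem"]))
```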

Link related words based on coreference using pronouns, elision/nominal substitution, conjunctions, etc.; also count the connectedness of each sentence and score accordingly.

Perform a shallow parse of each sentence to indicate at least some head information, and then use it in conjunction with the rest.


Coherence in a multi-sentence text can be modeled as a discourse structure, in which centrality (being the nucleus of a textual unit) reflects the importance of each unit. Building this discourse tree involves applying relations from Rhetorical Structure Theory (RST), developed by Mann and Thompson (1988), such as Elaboration, Contrast, Concession, Antithesis, Example, etc. Each (or most) RST relation has two branches: the dominant one, called the nucleus, and the other, the satellite.

By representing the text as a hierarchical tree, one can prune away satellite branches and retain the nucleus sentences or clauses near the root. From this core, the units can be ranked by importance, so the resulting summary is a concise collection of the most important units. A toy sketch of this pruning follows.
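The sketch assumes the discourse tree is given as nested tuples; this node format is an illustration, not a standard RST toolkit API.

```python
# Sketch: walk the discourse tree and keep only the units reachable
# through nucleus branches; satellite subtrees are pruned away.

def nucleus_units(node):
    """node: a text unit (str) or a pair (relation, [(role, child), ...])."""
    if isinstance(node, str):
        return [node]
    _relation, children = node
    units = []
    for role, child in children:
        if role == "nucleus":              # satellites are discarded
            units.extend(nucleus_units(child))
    return units

tree = ("Elaboration", [
    ("nucleus", "Mars has two small moons."),
    ("satellite", ("Example", [
        ("nucleus", "Phobos orbits very close to the planet."),
        ("satellite", "It rises in the west."),
    ])),
])
print(nucleus_units(tree))   # ['Mars has two small moons.']
```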

Riloff and Lehnert (1994) defined the goal of information extraction as extracting specific kinds of information from a source document. Its steps are:

- Define a template, which specifies what is of interest.

- Use a canonical IE system to extract the relevant information from the source document; the information is then filled into the template.

- Create the content of the template as the summary.

For example (Costantino M., Collingham R.J., Morgan R.G., 1995):

FLORHAM PARK, N.J. (AP) - Generic drug maker Schein Pharmaceutical Inc. will acquire Marsam Pharmaceuticals Inc. for 240 million dollars, the two companies said.

The agreement calls for Schein to acquire all outstanding stock of Marsam at about ...

Marsam, a maker of injectable drug products, received unsolicited takeover offers in May at around $19 per share. On Friday, its shares closed at $19.3125 on the Nasdaq Stock Market, down 6.25 cents.

The summary is then the following:

Company target: Marsam Pharmaceuticals Inc.

Company predator: Schein Pharmaceutical Inc.
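The template-filling idea can be sketched as follows; the single regular expression stands in for the far richer pattern base of a real IE system.

```python
# Sketch: a fixed template of slots, a trivial pattern-based extractor,
# and the filled template serving as the summary.
import re

def fill_template(text):
    template = {"Company target": None, "Company predator": None}
    m = re.search(r"([\w .]+?) will acquire ([\w .]+?) for", text)
    if m:
        template["Company predator"] = m.group(1).strip()
        template["Company target"] = m.group(2).strip()
    return template

story = ("Generic drug maker Schein Pharmaceutical Inc. will acquire "
         "Marsam Pharmaceuticals Inc. for 240 million dollars.")
print(fill_template(story))
```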

Stage 2: Interpretation

Consider this example (Hovy and Lin, 1999, p. 6):

Word counting reveals that skimming and guns are the central topics of this text; however, the piece is ultimately about a robbery, and any summary of it must mention this fact. Recognizing that the text centers on a robbery is the task of the interpretation phase.

During this step, two or more extracted topics are fused into one or more underlying concepts. The interpretation stage is what turns extractive systems into abstractive ones: the topics identified in the topic extraction stage are re-represented in new terms, paraphrased and expressed using concepts or words that need not occur in the source text, so as to capture the underlying meaning.

Fusing topics into one or more unifying concepts is the most difficult step in automated text summarization. To perform it, a summarization system must have prior domain knowledge, so that it can interpret the input in terms of concepts lying beyond the text itself. Without such knowledge, the system cannot relate the input to the relevant frames and produces summaries that lack coherence or context.

E.g. He ate apples, oranges and pears -> He ate fruit.

E.g. Roofs, walls, ceilings, floors -> The house.
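This kind of concept generalization can be sketched with WordNet's lowest common hypernym; taking the first synset of each word is a simplification, since real interpretation needs sense disambiguation.

```python
# Sketch: generalize a list of nouns ("apple, orange, pear") to their
# lowest common hypernym in WordNet. Requires nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def generalize(words):
    synsets = [wn.synsets(w, pos=wn.NOUN)[0] for w in words]  # naive sense choice
    common = synsets[0]
    for s in synsets[1:]:
        common = common.lowest_common_hypernyms(s)[0]
    return common.lemma_names()[0]

print(generalize(["apple", "orange", "pear"]))   # e.g. 'edible_fruit'
```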

- Script identification (based on the notion of 'word family') (Hovy and Lin, 98)

E.g. He opened his books, listened to his teacher, closed all the books and left -> He studied at school.

E.g. A spokesperson for the US Government announced that ... -> Washington announced that ...

Interpretation is still held back by the obstacle of domain knowledge acquisition. Before summarization systems can produce abstracts, this root problem has to be solved.

Stage 3: Summary generation

The third stage of summarization is generation. After content has been selected through abstraction or information extraction, it must be turned into fluent, coherent text using natural language generation techniques such as text planning, sentence planning, and sentence realization. A further step can "smooth" the summary and make it more readable. Such a smoothing process was first introduced by Hirst et al. (1997) to identify and fix typical disfluencies, which might be:

- Repetition of clauses or noun phrases -> solution: aggregate the material into a conjunction.

- Repetition of named entities -> solution: substitute them with pronouns (see the sketch after this list).

- Unimportant material such as parentheticals or discourse markers -> solution: remove it.
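A toy, string-level sketch of the second fix follows; real smoothing operates on linguistic structure, and the entity/pronoun pairing here is supplied by hand.

```python
# Sketch: keep the first mention of a named entity, replace later
# mentions with a pronoun.

def pronominalize(sentences, entity, pronoun):
    out, seen = [], False
    for s in sentences:
        if entity in s:
            if seen:
                s = s.replace(entity, pronoun, 1)  # later mention -> pronoun
            seen = True
        out.append(s)
    return out

print(pronominalize(
    ["Mr. Barker bought a car.", "Mr. Barker drove it home."],
    "Mr. Barker", "He"))
# ['Mr. Barker bought a car.', 'He drove it home.']
```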

Text compression is another promising approach. Knight and Marcu (2000) used the EM algorithm to compress the syntactic parse tree of a sentence, producing a shorter version, with the eventual goal of abridging whole texts, for example reducing two sentences into one, or three into two or one. The sketch below illustrates the underlying idea of compression as parse-tree pruning.
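Knight and Marcu's actual model learns which constituents to drop; this toy version simply deletes PP and SBAR subtrees from a hand-built tree.

```python
# Sketch: prune selected constituent types from an nltk.Tree and read
# off the shorter sentence that remains.
from nltk import Tree

def compress(tree, drop=("PP", "SBAR")):
    if not isinstance(tree, Tree):
        return tree                                 # a leaf (a word)
    kept = [compress(child, drop) for child in tree
            if not (isinstance(child, Tree) and child.label() in drop)]
    return Tree(tree.label(), kept)

sent = Tree.fromstring(
    "(S (NP (DT The) (NN committee)) "
    "(VP (VBD approved) (NP (DT the) (NN plan)) "
    "(PP (IN after) (NP (DT a) (JJ long) (NN debate)))))")
print(" ".join(compress(sent).leaves()))   # The committee approved the plan
```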

Jing and McKeown (1999) argued that many human summaries are produced from the source text by a cut-and-paste process, in which fragments of sentences are recombined into summary sentences. An extractive summarization system therefore primarily needs to identify the important fragments of sentences and then assemble them grammatically into coherent, concise summaries.

Multi-document summarization

Summarizing a single text is difficult enough, but summarizing a set of thematically related documents is even more challenging, because it faces some hindrances:

- Repetition: To avoid this, thematic overlaps have to be identified and located. A variety of methods have been proposed to identify redundancy across documents.

- Inconsistencies: To overcome these, one has to decide what to include from the remaining material and sometimes arrange events from various sources along a single timeline.

For example, the most outstanding system is SUMMONS, which uses an information extraction based approach: all source documents are parsed into templates, the templates are grouped by content, and rules are applied to extract the most important items. By contrast, Barzilay, McKeown, and Elhadad (1999) parse each sentence into a syntactic dependency structure (a simple parse tree) and then match those trees across documents, using paraphrase rules to modify the tree when needed.
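Redundancy identification across documents can be sketched with a simple bag-of-words cosine similarity between sentence pairs; the 0.6 threshold is an arbitrary assumption.

```python
# Sketch: flag sentence pairs from two documents whose cosine
# similarity exceeds a threshold as thematic overlap.
import math
from collections import Counter

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_overlaps(doc1, doc2, threshold=0.6):
    return [(s1, s2) for s1 in doc1 for s2 in doc2
            if cosine(s1, s2) >= threshold]
```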

Two typical methods studied

Lexical Chain method overview
