
An Investigation on the Written Test of the National English Certificate – Level B at An Giang University Center for Foreign Languages (M.A. 60 14 10)


DOCUMENT INFORMATION

Basic information

Title: An Investigation on the Written Test of the National English Certificate – Level B at An Giang University Center for Foreign Languages
Author: Nguyễn Hoàng Phương Trang
Supervisor: Assoc. Prof. Đinh Đền
Institution: Vietnam National University – Ho Chi Minh City University of Social Sciences and Humanities
Major: TESOL
Document type: Thesis
Year of publication: 2010
City: Ho Chi Minh City
Format:
Pages: 192
File size: 1.47 MB


Structure

  • Chapter 1: INTRODUCTION (12)
    • 1.1. Rationales (12)
    • 1.2. Practical background of the study (16)
    • 1.3. Aims and scope of the study (0)
    • 1.4. Research questions (17)
    • 1.5. Significance of the study (17)
    • 1.6. Overview of the thesis (18)
  • Chapter 2: LITERATURE REVIEW (19)
    • 2.1. A review of Readability and use of Readability Analyzer (19)
      • 2.1.1. The Development of Readability Formulas in Short (19)
      • 2.1.2. Advantages and Limitations of Readability Formulas (20)
      • 2.1.3. A Focus on Two Readability Formulas (23)
        • 2.1.3.1. Flesch Reading Ease (23)
        • 2.1.3.2 Flesch-Kincaid Readability Formula (25)
      • 2.2.1. Vocabulary size, levels and lexical coverage (28)
      • 2.2.2. The relationship between the vocabulary knowledge and success in (32)
  • Chapter 3: METHODOLOGY (36)
    • 3.1. Research method (36)
      • 3.1.1. Materials (36)
      • 3.1.2. The instruments (36)
        • 3.1.2.1. Readability formulas (36)
        • 3.1.2.2. Software tools (38)
      • 3.1.3. Procedure (41)
    • 3.2. Summary (41)
  • Chapter 4: RESULTS AND DISCUSSIONS (42)
    • 4.1. Results (42)
      • 4.1.1. Readability Progress of texts within years (42)
      • 4.1.2. The relationship between the word frequencies and text difficulty (45)
        • 4.1.2.1. The frequency of words in the test texts within years (45)
      • 4.1.3. The relationship between the difficulty of the reading texts and the students’ reading comprehension scores (0)
    • 4.2. Discussion (59)
  • Chapter 5: RECOMMENDATIONS AND IMPLICATIONS (60)
    • 5.1. Recommendations (60)
      • 5.1.1. Recommendations for AGU CFL “Reading comprehension” (60)
        • 5.1.1.1. Recommendations for AGU development Process of Reading (61)
      • 5.1.2. Recommendations for AGU CFL staff (63)
    • 5.2. Implications (64)
      • 5.2.1. For learners (64)
      • 5.2.2. For teachers (65)
    • 5.3. Limitations (67)
    • 5.4. Conclusion (68)

Contents

INTRODUCTION

Rationales

Readability of text refers to how easily a reading passage can be understood, making it essential for effective communication. According to Webster's Dictionary, "readable" describes text that is "fit to be read, interesting, agreeable, and attractive in style," ensuring it is enjoyable for readers. The Literacy Dictionary (Harris & Hodges, 1995) defines readability as "the ease of comprehension because of style of writing," highlighting its importance for engaging and accessible content. In essence, high readability improves reader engagement and ensures that the message is effectively communicated.

Readability, in the context of education, refers to the ease with which words and sentences can be read and understood (Hargis et al., 1998). In classroom settings, readability is often measured using objective numerical scores derived from readability formulas, providing a quantifiable assessment of text accessibility. Furthermore, readability also pertains to language comprehensibility, focusing on factors that influence students’ ability to successfully read and comprehend the material, ultimately supporting their learning and academic achievement.

Readability is a key factor in reading comprehension, focusing on how easily readers can understand the text. It involves matching instructional materials to the student's reading level to enhance understanding, as emphasized by Fry. Ensuring that texts are accessible and appropriately challenging helps improve overall reading success.

Selecting the appropriate textbooks for a student group is essential, focusing on matching the reading level of the texts to students’ reading abilities. This approach ensures that educational materials are accessible and engaging, promoting better learning outcomes. Understanding how to relate reading difficulty to students' skills is a core aspect of effective teaching strategies and curriculum development.

Readability, also known as "text difficulty," is a crucial concern for educators and syllabus designers when selecting appropriate reading materials for learners across different ability levels. Accurate assessment of text difficulty helps teachers and test developers choose suitable texts for pedagogic purposes and examination sub-tests, respectively. Writers targeting diverse audiences require guidance on factors influencing text accessibility to ensure materials are suitable and engaging. Currently, decisions about text difficulty are often made based on intuition, which can risk selecting texts that are either too challenging, potentially hindering learning, or too easy, reducing student motivation.

Readability formulas are commonly used to assess text difficulty, aiming to match texts more accurately to readers' abilities (Jones, 1995). However, reliance on these formulas can lead to simplistic and naive applications in advising novice writers. Additionally, readability measures serve as controls for determining text levels in reading assessments, highlighting their role in testing contexts (Davies and Irvine, 1996). Consequently, interpreting text difficulty should involve a comprehensive examination beyond just these formulas for more effective evaluation.

Most readability studies focus on a single contributing factor, such as the use of rare words or technical terminology, which can hinder comprehension for certain audiences (Collins-Thompson and Callan, 2004; Schwarm and Ostendorf, 2005). Additionally, syntactic complexity is linked to increased processing time and can negatively impact readability (Gibson, 1998). Effective text organization also plays a crucial role in enhancing overall readability for diverse audiences.

Linguistic characteristics such as vocabulary, sentence structure, and variety, along with concepts presented, text organization, and background knowledge required of readers, are essential factors in determining a text's appropriateness for a specific grade level. However, readability formulas primarily assess specific features like word difficulty based on length and sentence complexity, using mathematical calculations. Since these formulas focus on limited text features, they cannot evaluate all aspects that influence readability or directly measure comprehension. Consequently, readability formulas serve as predictions of reading ease rather than definitive assessments of a reader's understanding.

Vocabulary and sentence structure are key factors in readability formulas and are commonly used to assess text difficulty. These elements are essential for predicting how challenging a text might be for readers. According to Armbruster (1984), readability or text difficulty is measured by factors such as the number of syllables in words and the length of sentences, and the presence or absence of these factors helps determine whether a text is 'considerate' (enabling readers to understand with minimal effort) or 'inconsiderate' (requiring much greater effort).

Vocabulary is a key indicator of text difficulty, as research shows that texts with many difficult words tend to be more challenging. However, simply replacing complex words with easier synonyms does not always make a text more accessible, since vocabulary often reflects the complexity of the topic itself. For example, substituting "petrified" with "afraid" or "scared" can fail to capture the original meaning, especially when a word conveys a specific intensity or context. Therefore, efforts to simplify texts by replacing difficult words must consider the precise meaning and contextual fit, as inappropriate substitutions can inadvertently increase difficulty.

Research indicates that a small number of difficult words is unlikely to significantly hinder students' understanding. Studies such as those by Freebody et al. demonstrate that a substantial presence of challenging vocabulary is required before comprehension is affected, emphasizing the importance of context and overall text difficulty.

Wide reading involving diverse and unfamiliar words is the primary method for enhancing vocabulary, as relying solely on familiar texts limits students’ opportunities for lexical expansion (Anderson, 1983).

Practical background of the study

Readability formulae estimate text difficulty by analyzing quantifiable features such as word length and sentence length, based on foundational studies by Dale and Chall (1945). With the advent of computer-based text analysis, newer factors like word frequency—derived from large reference corpora—have been incorporated, as more common words tend to be more familiar and thus make texts more readable. This focus on word frequency also extends to sequences of words, which influence overall text comprehension. Our research examines the impact of both individual word frequency and word sequences on readability, using texts from the reading sub-tests of the National Language Certificate Examinations at An Giang University. The findings aim to inform guidelines for text designers, helping them create more accessible materials by understanding the relationship between word frequency and text difficulty.

Aims and scope of the study

This study evaluates the readability of texts extracted from the "reading sub-tests" of the National English Certificate Level B examinations. These tests are part of the written paper assessments developed by the Center for Foreign Languages at An Giang University. The research aims to analyze how accessible and comprehensible these reading materials are for test-takers. By examining the readability levels, the study provides insights into the effectiveness of the test design and its suitability for learners at this proficiency level. Improving readability in exam texts can enhance student performance and ensure fair assessment of language skills.

The population of my study is the set of written paper tests designed for the AGU CFL National English Certificate Level B examinations.

The AGU CFL National English Certificate Level B exam comprises two parts: a speaking task and a written paper with three sections—listening comprehension, reading comprehension and vocabulary, and use of language. However, between 2003 and 2007, only the reading passages in the second section of the written exam were utilized for analysis. The readability of these texts was assessed using the Flesch Reading Ease and Flesch-Kincaid Readability formulas, which primarily consider word length and sentence length. These formulas estimate text difficulty by examining quantifiable features, with recent developments incorporating word frequency data from large corpora to enhance accuracy; nonetheless, other factors influencing comprehension were not included in this study.


Research questions

This study addresses the following research question:

“What are the difficulty levels of the Level B reading texts published by AGU CFL in terms of vocabulary?”

Significance of the study

The study emphasizes the importance of readability analysis in identifying texts that match a suitable difficulty level for readers. It highlights that vocabulary complexity is a fundamental factor in evaluating reading difficulty. Understanding both overall readability and vocabulary demands is essential for selecting appropriate reading materials and for ensuring that the level and complexity of different texts used in parallel tests are of equivalent difficulty.

Overview of the thesis

This thesis aims to analyze the current practices of the Reading Comprehension Test for English Certificate Level B at AGU and provide practical recommendations for improvement. It advocates for the implementation of a standardized scale to accurately assess the difficulty level of reading texts in the Level B certificates, ensuring more consistent and reliable testing standards. These insights are intended to serve as guidelines for developing reading comprehension assessments across various proficiency levels at AGU CFL, enhancing the overall quality and effectiveness of language testing.

The thesis consists of five chapters as follows:

Chapter 1: identifies the problem and provides an overview of the thesis

Chapter 2: reviews the literature related to major issues in the vocabulary – reading connection and text readability

Chapter 3: describes the research method employed in the study

Chapter 4: presents the results of the study and analyses them to point out the findings observed in the study

Chapter 5: makes some practical recommendations for standardizing the reading comprehension tests of the English Certificate Level B at AGU, and provides a summary of the main points of the whole thesis, with a conclusion ending the thesis.

LITERATURE REVIEW

A review of Readability and use of Readability Analyzer

2.1.1 The Development of Readability Formulas

Readability studies began in the late 19th century, marking the start of efforts to measure and improve text clarity (DuBay, 2004). The first formal readability formula was introduced in 1923, revolutionizing how we assess written content (Fry, 2002; Klare, 1988). Since then, over 200 different readability formulas have been developed, supported by more than 1,000 research studies dedicated to enhancing text comprehension and readability metrics (DuBay, 2004: 2). However, of these formulas, only 12, at the most, are widely used (Gunning, 2003).

Readability formulas analyze linguistic features such as word length, frequency, and sentence length to assess text difficulty. They often utilize one or more of these features, particularly words or syllables, to create an index of readability. This index is validated by student testing, where the proportion of students successfully completing related questions determines the text's appropriateness for specific age groups or grade levels. Some formulas assign a score on a 0-100 scale, ranging from extremely difficult to very easy, to clearly indicate the text's readability level.

Starting in the late 1920s, research shifted towards identifying variables that influence text difficulty, ultimately focusing on semantic and syntactic factors while excluding stylistic aspects (Chall, 1988). Today's readability formulas primarily assess comprehension through these two components, using metrics such as average sentence length to gauge syntactic difficulty and word length or frequency of unfamiliar words to measure semantic complexity (Davison & Green, 2002; Fry, 2002; Gilliland, 1972; Gunning, 2003). Sherman initially proposed these variables as key predictors of text difficulty, and numerous studies have demonstrated that syntactic and semantic factors most strongly correlate with readers' understanding, making them essential in readability assessments (DuBay, 2006; Gray & Leary, 1972; Gunning, 2003).

In the past decade, educational focus has shifted toward leveling systems that consider multiple aspects of texts beyond just language, as highlighted by Stein Dzaldov & Peterson (2005). Despite this, readability formulas remain relevant, providing an objective and easily calculable alternative through computer analysis, as noted by Fry (2002).

2.1.2 Advantages and Limitations of Readability Formulas

Readability formulas are valued for their simplicity and ease of use, especially with the advent of computerized programs, which has expanded their applicability (Burns, 2006). They are also highly validated through numerous studies, confirming their reliability in assessing text complexity (Fry, 1977, cited in Fry, 2002: 291). Additionally, these formulas provide objective measurements of readability (DuBay, 2004; Fry, 2002). However, discrepancies can occur because different computer programs may use varying methods to count sentences, words, and syllables, even when applying the same formula (DuBay, 2004: 56).

Readability formulas, while generally correlating well with each other, can sometimes disagree by up to three grade levels due to their different starting points (Gunning, 2003; Klare, 1988). Although these formulas may not accurately determine the difficulty level of individual texts, they are useful for assessing the relative progression of difficulty across multiple texts (Gunning, 2003). Consequently, some researchers suggest that readability formulas are more effective for broad comparisons rather than precise matching of specific texts to particular readability levels (Anderson & Davison, 1988; Bruce & Rubin, 1988).

Readability formulas cannot capture all the factors essential for comprehension, since such factors tend to be too complex, subjective, and difficult to operationalize (Gilliland, 1972). Critics argue that increasing the number of attributes in these formulas reduces their accuracy and practicality, with Gilliland emphasizing that ease of application can come at the cost of measurement precision. Likewise, some studies suggest that adding more attributes does not enhance the reliability of readability formulas (Binkley, 1988). Klare notes that formulas with more than two variables generally require more effort without significantly improving predictive power, indicating that two-variable formulas are typically sufficient for effective reading-level screening.

Readability formulas have long been recognized as inherently limited, as acknowledged by numerous researchers including Davison & Green (1988), Fry (2002), and Gunning (2003). L.A. Sherman, a pioneer of classic readability studies from the late 19th century, emphasized that a text's readability ultimately depends on the reader. Experts like Bruce & Rubin (1988) concur that the true measure of readability lies with the reader's comprehension rather than with formulas alone.

Readability formulas cannot, nor are they designed to, assign exact values of comprehensibility; instead, they offer numerical approximations of text difficulty that are best used alongside other methods in the process of choosing appropriate texts (Gunning, 2003: 182; Klare, 1988).

Readability formulas tend to become less accurate predictors of text difficulty at higher grade levels, particularly in college content where comprehension challenges are greater (Klare, 1988). Additionally, these formulas assume that a reader's ability to handle difficult words increases proportionally with their skill in understanding complex sentences, which is not always true (Gilliland, 1972).

Formulas in reading comprehension primarily focus on analyzing text features but often neglect the cognitive processes involved in understanding (Zakaluk & Samuels, 1988). Additionally, they overlook crucial internal factors such as the reader's social and cultural background, motivation, interests, and prior knowledge, which significantly influence comprehension (Bruce & Rubin, 1988; Afflerbach & Johnston, 1986). Integrating these internal and contextual factors into formulas remains a challenge, as they are complex and difficult to quantify (Klare, 1988).

External factors such as text layout, visual aids, writing style, organization, exposition, and typographical elements significantly influence written communication but are challenging to quantify in formulas (Burns, 2006; Davison, 1988; Gilliland, 1972). While these factors might be easier to incorporate into analytical models, their subjective nature makes statistical measurement difficult, potentially compromising the objectivity of such formulas.

Readability formulas often overlook the deeper textual structures that influence comprehension. It is important to note that a low readability score does not necessarily ensure true ease of reading, as highlighted by Bruce and Rubin (1988).

Complex formulas can produce the same readability score regardless of sentence word order, potentially masking true comprehension issues (Chall, 1988). Additionally, overusing short words and sentences may lower readability scores but can lead to incoherent and fragmented text, ultimately hindering effective communication (Bruce & Rubin, 1988: 12-13). Moreover, the lack of connectives may very well result in a confusing text, but this would not be shown in the readability scores (Anderson & Davison, 1988: 32-33).

Longer words do not necessarily make text more difficult to understand, as highlighted by Anderson and Davison (1988) and others. While there is a correlation between longer sentences and sentence complexity in English, shorter sentences can sometimes be more complex and harder to comprehend than longer ones, emphasizing that sentence length alone does not determine readability.

2.1.3 A Focus on Two Readability Formulas

Readability formulas for measuring reading difficulty can be categorized into three main groups based on their variables. The first group uses only words, exemplified by the US FORCAST and McLaughlin "SMOG" formulas. The second group incorporates both difficult words and sentence structures, such as the Dale-Chall and New Dale-Chall formulas. The third group evaluates readability through word length (syllables) and sentence length, represented by indices like the Gunning FOG, Coleman-Liau, and Flesch-Kincaid formulas. These indices typically express reading level as a school grade or a score from 0 to 100, where higher scores indicate easier comprehension, helping to determine the education level needed to understand the material.
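To make the third, length-based group concrete, the sketch below computes one widely documented index from it, the Coleman-Liau Index, which approximates word length with letter counts rather than syllables. This is an illustrative minimal implementation and not part of the thesis itself; the regular-expression tokenizer and the sample sentence are assumptions for demonstration only.

```python
import re

def coleman_liau_index(text: str) -> float:
    """Coleman-Liau Index: grade-level estimate from letters per 100 words
    and sentences per 100 words (no syllable counting required)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    letters = sum(len(re.sub(r"[^A-Za-z]", "", w)) for w in words)
    L = letters / len(words) * 100         # average letters per 100 words
    S = len(sentences) / len(words) * 100  # average sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8

if __name__ == "__main__":
    sample = ("Readability formulas estimate how hard a passage is to read. "
              "They rely on surface features such as word and sentence length.")
    print(round(coleman_liau_index(sample), 1))
```

The same skeleton (tokenize, count a surface feature, plug into a linear equation) underlies the other indices named above; only the counted feature and the coefficients change.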

METHODOLOGY

Research method

This study analyzed twenty-four reading passages from the English Written Test of the National English Certificate Level B, administered at AGU CFL between 2003 and 2007. These passages underwent item analysis to determine their levels of difficulty, as outlined in Table 1. Since the passages lacked titles, a short form was assigned to each for identification purposes, such as P-1 for the first passage, P-2 for the second, and so on. A complete list of all analyzed texts can be found in Appendix 3.

Readability scores like the Flesch Reading Ease and Flesch-Kincaid are influenced by their underlying programming, which can lead different applications to generate varying results on the same text; a study by Mailloux et al. (1995), cited in Hall (2006), demonstrated these inconsistencies. Despite minor implementation differences, it was expected that these formulas would produce consistent readability assessments. Today, both formulas are integrated into Microsoft Word, providing users with easy access to readability analysis tools.

In Microsoft Word, the readability statistics are displayed as part of the spelling and grammar check. For detailed instructions on enabling readability statistics in Word 2003, please refer to Appendix 4; for other versions of Microsoft Word, see Microsoft Office Online for comprehensive guidance.

It was also assumed that the application in Word is more accurate than most other applications available on the Internet

Flesch Reading Ease is a readability metric that evaluates how easy a text is to understand by analyzing the average number of syllables per word and words per sentence. This formula can be applied to entire texts or to specific samples to assess their readability (Flesch, 2006). Using Flesch Reading Ease scores helps writers gauge content clarity and accessibility, making their writing more comprehensible for a broader audience.

Reading Ease Score = 206.835 - (1.015 x ASL) - (84.6 x ASW)

ASL = average sentence length (number of words divided by number of sentences)

ASW = average number of syllables per word (number of syllables divided by number of words)

Readability scores are interpreted according to the guidelines shown in Table 1, where the Reading Ease scale ranges from 0 (hard) to 100 (easy). Standard English documents typically have an average score between 60 and 70, indicating a level that is accessible to most readers. Higher scores mean that the text is easier to understand and comprehensible to a wider audience (Flesch, 2006: 107).

Table 1: Interpretation Table for Flesch Reading Ease Scores

(Freely from Flesch, cited in Klare, 1988: 21)

The Flesch–Kincaid Grade Scale assesses reading difficulty by analyzing the number of syllables per word and the average sentence length. This scoring system indicates the educational grade level required to understand a text; for instance, a score of 10.0 suggests that a tenth-grade student should be able to read and comprehend the material. Using this readability metric helps writers create content that matches their target audience's reading ability, improving accessibility and engagement.
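As a rough illustration of how these two measures can be computed outside Microsoft Word, the sketch below applies the Reading Ease equation given above together with the standard Flesch-Kincaid Grade Level equation (0.39 × ASL + 11.8 × ASW − 15.59). The vowel-group syllable counter and the sample passage are simplifying assumptions; tools such as Word's statistics use more elaborate counting rules, so scores will differ slightly.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count vowel groups (an assumption;
    dedicated readability tools use more refined counting rules)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0, 0.0
    asl = len(words) / len(sentences)                           # average sentence length
    asw = sum(count_syllables(w) for w in words) / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * asl - 84.6 * asw
    fk_grade = 0.39 * asl + 11.8 * asw - 15.59
    return fre, fk_grade

if __name__ == "__main__":
    passage = ("The cat sat on the mat. It was warm and it slept. "
               "Comprehensibility nevertheless depends on more than surface features.")
    fre, grade = flesch_scores(passage)
    print(f"Reading Ease: {fre:.1f}, Flesch-Kincaid Grade: {grade:.1f}")
```

A higher Reading Ease score and a lower grade level both indicate an easier passage, which is the convention used when the results are reported in Chapter 4.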

3.1.2.2 Software tools

3.1.2.2.1 The Vocabulary Statistic Worksheet

This study utilized a Microsoft Excel-based worksheet to assess vocabulary levels by comparing the target texts with the University Word List (UWL, 1,000 word families) and the Brown Corpus High-Frequency Word List (BC HFWL, 1st-5th 1,000 words). The worksheet identifies which words in a text are included in these reference word lists, allowing analysis of the proportion of low- and high-frequency words. By ranking words from the 1,000 to the 5,000 most common, the method provides insight into the vocabulary coverage and lexical complexity of each text. Two kinds of lists were used: the purely frequency-based list (BC HFWL), and the 1,000 most frequent word families plus 1,000 academic word families (Paul Nation's Word List – PNWL).

This study focuses on English vocabulary at the tertiary education level, which predominantly exceeds 3,000 words. It examines texts containing vocabulary in the 3,000 to 5,000-word range. To analyze vocabulary learning sequencing based on word frequency, ranked word lists—such as 1,000-word, 2,000-word, and 3,000-word lists—were created, and the frequency with which these words appear across the different texts was measured. This approach helps to show the importance of high-frequency vocabulary in academic reading and language acquisition.
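A simplified, spreadsheet-free version of this coverage calculation might look like the sketch below. The file names for the passage and the Brown Corpus list are hypothetical placeholders, and the tokenization is deliberately naive; the study's actual worksheet works with word families and several lists rather than raw tokens.

```python
import re
from pathlib import Path

def load_word_list(path: str) -> set[str]:
    """Load a reference word list (one word per line) into a set.
    The file names used below are hypothetical stand-ins for the
    Brown Corpus / Paul Nation lists described in the study."""
    return {line.strip().lower()
            for line in Path(path).read_text().splitlines() if line.strip()}

def coverage(text: str, reference: set[str]) -> float:
    """Percentage of word tokens in `text` that appear in `reference`."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if t in reference)
    return 100.0 * known / len(tokens)

if __name__ == "__main__":
    passage = Path("P-1.txt").read_text()                     # one reading passage (hypothetical file)
    bc_3000 = load_word_list("brown_corpus_first_3000.txt")   # hypothetical list file
    print(f"Coverage by first 3,000 MFWs: {coverage(passage, bc_3000):.1f}%")
```

The resulting percentage is the kind of figure reported later in Chapter 4 (for example, coverage by the first 3,000 most frequent words of the Brown Corpus).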

Figure 3.1: The Vocabulary Statistic Worksheet

A concordance program is software used to analyze corpora and generate detailed lists of word occurrences. According to Daniel Krieger (2003), it can identify a selected word and present all sentences containing that word, known as key words in context (KWIC). This tool is essential for linguistic research, enabling users to study word usage and patterns within large text collections.

Concordance tools can identify common collocations and words frequently found together with a key term, offering students valuable insights into language patterns based on authentic sample sentences (Garry N Dyckn, 1999). This approach helps learners understand how words are naturally used in context, enhancing their language skills. Additionally, researchers in linguistics and applied linguistics use concordances to explore what words mean and how they are used in real-world language (Aston, 1997; Tribble, 1997). Language teachers can also use a concordance program to support their teaching of the lexical, syntactic, semantic, and stylistic patterns of a language.

To carry out these analyses, Concordance V3.2 was used to process the monolingual corpus and to export the resulting data in a form compatible with Microsoft Excel.

This software efficiently generates a comprehensive word list, presenting the investigated words as concordance lines alongside their contextual occurrences. It displays a detailed "word list" for each text, organized in descending order of word frequency. This feature enables users to quickly identify the most common words and analyze their contextual usage within the text.

Concordance software (version 3.2) allows users to efficiently identify and analyze word occurrences within a text. Using this tool, users can easily cut and paste the relevant data into a worksheet column, facilitating the calculation of the percentage of difficult words present in each text. This streamlined process enhances accuracy and saves time in lexical analysis and readability assessment.
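The sketch below reproduces, in miniature, the two outputs described here—a frequency-ordered word list and key-word-in-context lines—without relying on Concordance V3.2 itself. It is a hedged stand-in for demonstration only; the real program offers far richer sorting, display, and export options.

```python
import re
from collections import Counter

def word_list(text: str) -> list[tuple[str, int]]:
    """Word list in descending order of frequency, in the spirit of the
    'word list' view of concordance tools (a simplified stand-in)."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    return Counter(tokens).most_common()

def kwic(text: str, keyword: str, width: int = 4) -> list[str]:
    """Key-word-in-context lines: `width` words on each side of the keyword."""
    tokens = re.findall(r"[A-Za-z']+", text)
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left:>40}  [{tok}]  {right}")
    return lines

if __name__ == "__main__":
    sample = ("Readability depends on the reader. The reader brings knowledge "
              "to the text, and the text rewards the reader differently.")
    print(word_list(sample)[:5])        # five most frequent words
    print("\n".join(kwic(sample, "reader")))
```

Output in this form can be pasted into a worksheet column, which mirrors the cut-and-paste step described in the procedure below.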

This study involved a systematic process beginning with measuring the readability of the selected reading sub-test passages using two established readability formulas. It was essential to ensure that each document was uniformly marked for the same language in Microsoft Word before analysis, allowing accurate readability scoring with Word's statistics and online tools. These scores helped predict the difficulty level of the passages compiled by AGU CFL. Additionally, Concordance software was employed to extract and list all words from the texts, displaying concordance lines to analyze their context within each paper. Wordlists for each reading text were generated to identify common terms, and, using the Vocabulary Statistic worksheet, the most frequent words were automatically processed based on the Brown Corpus and Paul Nation datasets. This approach streamlined the analysis, making it effective and efficient in identifying key vocabulary features.

Summary

Chapter 3 has described the research methods employed: the subjects of the study, the data collection instruments used to serve the purposes of the study, and the way the study was conducted. The data collection instruments include the corpus-aided analysis (the Brown Corpus and Paul Nation word lists, processed with the Concordance V3.2 software program), the readability formulas, and the Vocabulary Statistic Worksheet.

RESULTS AND DISCUSSIONS

Results

The study results demonstrate that the analyzed text passages show a clear progression in difficulty levels from grade 4 to grade 12. Figure 4.1 illustrates the average grade levels of these texts based on the Flesch Reading Ease (FRE) and Flesch-Kincaid formulas. Higher FRE scores indicate texts that are easier to read, highlighting the correlation between readability scores and text complexity across different grade levels.

Figure 4.1 Average Flesch Reading Ease Grade Levels of the texts

In 2003, analysis showed a clear progression in readability throughout the school years from grade 7 to 12, indicating increasing complexity as students advance. The first two selected texts were identified as particularly challenging to understand, highlighting the complexity of material at certain educational levels. On average, the Flesch-Kincaid Grade level within each year reflects this trend, providing insights into the evolving readability standards for students across different grades.

The readability analysis revealed that individual texts ranged from a Flesch-Kincaid Grade level of 6.9 to 12, indicating that the difficulty of the reading passages at AGU CFL's Level B examinations corresponds to between nearly seven and twelve academic years of education. Passage 3.4 (08/2003) was identified as the easiest, while passage 3.6 (12/2003) was considered the most difficult, highlighting variations in text complexity for test-takers.

In 2004, the text styles across the three test terms showed minimal variation, indicating consistency in readability. The grade levels increased progressively from Year 4 to Year 9, showing a gradual rise in text complexity. The three readability indexes revealed an average grade level of 9.0 across all terms, meaning the reading materials were equivalent to a ninth-grade academic level. During this year, the selected reading texts were notably readable, aligning with standard readability benchmarks.

In 2005, the average Reading Ease scores were 36 in Term 1, 42 in Term 2, and 24 in Term 3. Despite some variation, most reading passages across the three terms were regarded as quite difficult to comprehend, reflecting a high reading grade level. The readability index indicated that the texts tested an average reading level equivalent to grade 12. The second testing phase in August 2005 featured the easiest passage, whereas the final phase presented the most challenging material.

The year 2006 was notably the most unusual year, featuring more testing than other years. During this period, the average Flesch-Kincaid Grade levels varied significantly across individual reading texts, ranging from a high of 12 to a low of 5.0. Specifically, the readings for Term 1 and Term 2 both averaged at Grade 12, while Term 3 averaged 8.5, and Term 5 dropped to Grade 5.0, reflecting a wide disparity in text complexity throughout the year.

In 2007, all the texts exhibited similar writing styles and were notably challenging to read, with an average Flesch-Kincaid Grade level of approximately 12. Individual texts ranged from grades 11 to 12, indicating a consistently high reading difficulty. Among these, the passage from December 2008 was identified as the easiest to understand within that year.

The analysis of texts used in Level B examinations at AGU CFL reveals significant differences in text selection, with some texts demanding more advanced reading skills than the learners' grade levels, while others align with lower grade levels. This inconsistency indicates that the texts are not uniformly appropriate or of equivalent difficulty, which can affect the reliability of the assessments and influence test results. Ensuring consistent difficulty levels in test materials is crucial for accurate measurement of learners' reading abilities.

Vocabulary is often considered a crucial factor influencing the readability of texts, with many researchers emphasizing its importance. However, the Flesch Readability Formula primarily assesses text complexity based on sentence length and word difficulty. According to Fulcher (1997, p. 501), the formula is simply a function of sentence length and word size, suggesting that factors beyond vocabulary may have limited impact on readability scores.

Factors influencing text difficulty are not solely responsible for making a text easy or hard, but they serve as important predictors of overall difficulty. Estimating lexical load—by analyzing how many words in a sample appear in established word frequency lists—provides valuable insights into text complexity. The development of corpus-based approaches enables researchers to determine word frequency more accurately within specific reading texts, allowing for reliable rough estimates of their accessibility and difficulty levels.

4.1.2 The relationship between the word frequencies and text difficulty

This study investigates the difficulty of the reading texts used in the Level B examination by analyzing word frequency. The research highlights that word frequency is a major factor influencing the readability and complexity of a text. Understanding how often words appear can help determine the overall accessibility of exam texts for test-takers. By examining the relationship between word frequency and text difficulty, this study aims to improve assessment design and enhance language learning strategies.

4.1.2.1 The frequency of words in the test texts within years

Word frequency and range are crucial factors influencing text readability, with studies showing that high-frequency vocabulary offers the greatest learning benefits. While vocabulary alone does not determine an ESL learner's reading ability, research indicates that vocabulary knowledge is one of the most vital components in mastering reading in a second language (Nation, 1990). Frequency data provides a strategic foundation for optimizing vocabulary learning, ensuring learners acquire the most impactful words efficiently. Incorporating vocabulary frequency lists that consider both frequency and range enhances language acquisition and improves overall reading comprehension.

A specific word list contains frequently occurring words within a particular corpus, making them essential for accurate recognition when reading. These key words are crucial for understanding the text's meaning and are important for developing reading comprehension skills. Matching written texts to students' individual levels is especially vital for targeted reading instruction and improving overall comprehension (Gunning, 2003).

When selecting appropriate texts for students, it is essential to consider the readability of the text and how different test text series vary in their complexity. Readability scores can serve as a useful foundation for determining whether a written text matches students' reading levels. However, to ensure the most suitable material, readability formulas should be supplemented with other assessment methods, as emphasized by scholars like Gunning (2003) and Klare (1988).

(a) The first 3000 MFWs in the Brown Corpus wordlist

Figure 4.2 illustrates the percentage of the most frequent words from the first 3,000 common words in the Brown Corpus for the texts of 2003, alongside the Flesch-Kincaid grade level index. Typically, during the initial two years, each term tended to have two passages, and the word percentages in each passage often differed considerably (see Appendix 3). Overall, the data indicate that these texts contained a relatively high proportion of words from this first list, making them easier to understand compared to texts from subsequent years.

Discussion

The readability indexes in this study reveal that the average reading level of the Level B texts at AGU CFL aligns with materials typically used by ninth- or tenth-grade native speakers. With the exception of 2004 and 2007, most of these texts could not be classified as simplified based on readability scores alone. This gap between students' reading abilities and the complexity of the provided materials may contribute to teachers' subjective selection of reading tests. Addressing this discrepancy is essential for improving reading comprehension and instructional effectiveness at AGU CFL.

The selection of Level B reading comprehension texts produced by AGU CFL teachers, on the other hand, seems to be decided by no clear criteria.

The B-level reading texts offer readers little support with difficult vocabulary, which limits comprehension. On average, these texts incorporate only about 65% of the most frequent words from the top 3,000 and 5,000 words of the Brown Corpus List, and about 55% from the Paul Nation list, falling significantly short of the recommended 95-97% coverage for effective comprehension. Most AGU CFL reading materials used in this study are not classified as simplified readers, and vocabulary coverage below the optimal level does not guarantee the quality or readability of the texts.

RECOMMENDATIONS AND IMPLICATIONS
