UNIVERSITY OF LANGUAGES AND INTERNATIONAL STUDIES
VIETNAM NATIONAL UNIVERSITY
-
LANGUAGE ASSESSMENT
Evaluative Essay
Pair 12:
Nguyễn Minh Ngọc – 19040154
Phạm Đỗ Nguyên Hương – 19040101
Lecturer: Cao Thúy Hồng
Hanoi, December 2021
● Analyze the targets of all the questions in the exam paper, the tasks in all the skills
assessment, and the question writing techniques
● Evaluate the given exam paper based on prescribed criteria in the rating scale below
● Estimate the match between the exam paper (targets, tasks) and the contents (targets, tasks) that students have learned in the 9th grade, second-semester English textbook
RESPONSE
In this essay, we will analyze and evaluate the chosen test, an end-of-term II exam paper for Vietnamese 9th graders. The essay explores the assessment targets of the test tasks and analyzes the test in terms of five language assessment principles and question-writing techniques.
I. Assessment targets
There are six units included in the coursebook (Tieng Anh 9 - Volume 2), namely Recipes and eating habits, Tourism, English around the world, Space travel, Changing roles in society, and My future career. For each unit, six components are covered: vocabulary, grammar, listening, reading, writing, and speaking.
The table below demonstrates the assessment targets of the test according to the six components mentioned above.
Table 1. Assessment targets (performance level, target contents, topics, and conditions)

VOCABULARY (covered in II. Reading - Exercise 1; III. Writing - Exercise 1; IV. Speaking - Exercise 1)
● Recognize the meanings of a range of words and phrases about the difficulty of learning English, in a text of around 150 words in length, in which about 15% of the words are above A2 level (CEFR).
● Apply the forms, meanings and uses of a range of words and phrases about tourism, in constructing sentences, provided that all the lexical items have been taught in the course and parts of the sentences are given.
● Apply the forms, meanings and uses of a range of words and phrases about personal eating habits, in responses to open-ended questions, provided that all the lexical items have been taught in the course.

LISTENING (covered in I. Listening)
● Identify specific information in 2 talks about the number of English speakers and the teaching career, provided the audio is around 150-200 words in length, in which about 10% of the words are above A2 level (CEFR), and speech is delivered relatively slowly and clearly in a standard dialect.
● Identify general ideas in a talk about the teaching career, provided the audio is around 200 words in length, in which about 15% of the words are above A2 level (CEFR), and speech is delivered relatively slowly and clearly in a standard dialect.

READING (covered in II. Reading)
● Identify general ideas and specific information in a paragraph about the difficulty of learning English, provided the text is around 150 words in length, in which about 15% of the words are above A2 level (CEFR).
● Identify specific information and make lexical inferences in a paragraph about space travel, provided the text is around 200 words in length, in which about 37% of the words are above A2 level (CEFR).

WRITING (covered in III. Writing - Exercise 2)
● Construct a paragraph about a trip that the student remembers the most, in 100-120 words, provided that some cues are given.

SPEAKING (covered in IV. Speaking)
● Produce responses to five open-ended questions about personal eating habits, provided that related lexical items have been taught in the course.
● Produce a conversation about the information of cooking clubs, provided that the context is made clear and adequate prompts are given.
In summary, from the given test tasks, it can be concluded that this test aims at assessing one language component - vocabulary - and four language skills; pronunciation can be deduced as more or less incorporated in the speaking assessment. The abovementioned assessment targets cover some important learning targets such as vocabulary, speaking, listening, reading, and writing (See Appendix A), but noticeably miss out on assessing grammar. Specifically, as an achievement test, assessing the depth of vocabulary (Units 7, 8, 9) is integrated in the assessment of reading, speaking and writing skills. In addition, the listening tasks touch upon the assessment of two major sub-skills in the syllabus, which are listening for general and specific information (Units 9, 12). Regarding reading skills, the tasks display a comprehensive coverage of three crucial sub-skills for 9th graders: reading for general and specific information, and making lexical inferences (Units 9, 10). Concerning writing skills, the assessment target is questionable given the choice of a topic irrelevant to the learning targets. This, accompanied by the exclusion of the subject matter of Unit 11, shows a marked discrepancy between the assessment targets of the test and the learning targets of the textbook. The effects of this will be discussed in the following sections.
II. Qualities of the test
To deduce the quality of this assessment, this section will analyze the test tasks against five benchmarks of a language assessment, namely reliability, validity, authenticity, washback, and practicality.
1. Reliability
A test is considered reliable when a consistent result is recorded on different occasions of administration (Brown, 2004). While the factors of test administration and the students themselves cannot be measured, test/retest and rater reliability can be examined based on the given test tasks. Specifically, this test showcases a considerable level of unreliability.
Firstly, the 45-minute time allowance seems too constricted considering the coverage of four language skills, which total 13 selected-response items, 12 limited-response items and 3 extended-response tasks. Furthermore, the mismatch between learning targets and assessment targets can cause test unreliability, as students are expected to revise according to the predetermined lesson objectives only. In addition, poorly written test items, such as writing task 2 (discussed later), can interfere with the interpretation of students' performances, leading to test unreliability.
Besides, the reliability of the test is influenced by human error and subjectivity in the scoring process (Brown, 2004). While inter-rater reliability is not an issue, since this type of test is rarely graded by more than one teacher, problems might arise within the scoring process itself, also known as intra-rater reliability. In this test, the inclusion of 3 selected-response tasks and 3 limited-response tasks (See Table 2) entails higher intra-rater reliability. However, the objectivity in scoring the latter can be compromised, as alternative answers might occur. Additionally, the 3 performance tasks in the writing and speaking sections are subject to scoring subjectivity if marking rubrics are not well-constructed. The grading of students' competencies then lies at the sole mercy of the scorer, impeding the impartial assessment of the students. In this case, since only a sample writing is provided for reference, and no marking rubrics, we have limited power to further assess the test's rater reliability.
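If a second rater were available, rater reliability of the kind discussed above could be quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below uses hypothetical band scores, not data from this test, purely to illustrate the computation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' score lists."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: share of scripts given the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters scored independently at random.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical band scores (1-5) for ten paragraph-writing scripts.
teacher_1 = [3, 4, 2, 5, 3, 3, 4, 2, 5, 4]
teacher_2 = [3, 4, 3, 5, 3, 2, 4, 2, 5, 4]
print(round(cohens_kappa(teacher_1, teacher_2), 2))  # → 0.73
```

Values near 1 would suggest that a well-constructed rubric is constraining the scorers consistently, while values near 0 would confirm the scoring subjectivity described above.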
2. Validity
A test is considered valid if it successfully measures the targets it sets out to measure (Hughes, 2003).
Regarding construct validity, which refers to the extent to which a test score can be interpreted to assess the target language proficiency (Bachman & Palmer, 1996), the test tasks are partly construct-invalid. To begin with, the reading and speaking tasks, which are selected-response and performance tasks, are relatively well-suited to assess the learning targets clarified in the textbook (See Appendix A), such as identifying general and specific information, and delivering a talk or conversation about the given topics. Meanwhile, the listening and writing tasks show substantial room for improvement. For the listening tasks, the writing of the gap-filling items in Q1-2 of task 2 underrepresents the target skill of listening for specific information. Simultaneously, the recording, which supposedly consists of 2 talks, is modified into scripted monologues, losing the natural characteristics of the target situations. As for the writing tasks, while the sentence completion task matches the target of assessing vocabulary, the paragraph writing task fails to clearly communicate the expected outcome to the students regarding the genre and topic of the writing. These shortcomings prove the test to be construct-invalid, which might hinder the interpretation of the test scores in evaluating students' performances.
In addition, the test is also partially content-invalid. While sufficient tasks are provided to assess vocabulary and four language skills on a range of topics, grammar - a crucial language component - is not assessed in any task. Besides, the content of Unit 11 is not covered in any part of the test. This lends itself to inadequate representativeness of the learning targets, which reduces the content validity of the test. Furthermore, the sentence completion task can be seen as an indirect testing of vocabulary, which might lower the content validity as well (Brown, 2004). At the same time, Q4 of this task seems out of place, since it touches on a grammatical point that is not included in the curriculum and does not serve any meaningful assessment target in the entirety of the task. Writing task 2 also touches on a topic that is not a target of writing clarified in the textbook (See Appendix A). Generally speaking, content validity is a serious weakness of this test.
3. Authenticity
Authenticity is the degree to which test materials and test conditions represent what happens in the real target situation (Brown, 2004). Brown (2004) suggested several criteria to precisely evaluate the authenticity of a test, namely language use, items, topics, thematic organization, and resemblance to real-life situations. Taking these into consideration, the listening recording shows an appropriate use of language; however, it is adapted into scripted monologues with little intonation and a relatively slow speaking pace. For the reading section, although Task 1 Q2-3 are contextualized to measure students' sub-skill of inferring the meaning of unknown words from context, neither of the texts is provided with an authentic source. Regarding the speaking part, it successfully resembles real-world tasks in which students have to perform their learned knowledge and skills. In addition, one advantage that the four parts have in common is the topics of the tasks, which are meaningful and relevant to the course.
4. Washback
Washback is the effect of testing on how students prepare for the test (Brown, 2004). There are two types of washback, positive and negative, based on whether it has beneficial or undesirable effects on educational practices (Hughes, 2003). In this case, the analyzed test has a somewhat positive washback, as it thoroughly covers a sizable portion of the learning targets identified in the textbook. However, the test fails to assess grammar as well as the knowledge learned in Unit 11: Changing roles in society. Therefore, students might be perplexed when their preparation is not reflected in the test, potentially resulting in demotivation in later assessments.
5. Practicality
Practicality is the relationship between the resources that will be required and the resources that will be available (Bachman & Palmer, 1996). The chosen test requires reasonably priced printing materials and well-prepared equipment such as speakers and exam papers, which cannot be practically measured here. The only criterion that can be evaluated is the impracticality of the time allowance: it is challenging for students to successfully complete the test within the set time frame of 45 minutes. To elaborate, the test includes a 7-minute recording, 150-200-word texts, and 3 performance tasks, which might make it impractical for teachers to administer and for students to finish within the time limit. Moreover, as the test does not specify an evaluation system or the procedure by which teachers can administer the two speaking tasks, it is difficult to evaluate the practicality of this part in particular and of the test in general.
III. Question writing techniques
The analyzed test encompasses two types of assessment methods: selected-response assessment and constructed-response assessment. The specifics of the question types are presented in the following table:
Table 2. Assessment methods

Selected-response
● Multiple Choice (covered in II. Reading - Exercise 1): 1 task
● True/False Statements (covered in II. Reading - Exercise 2): 1 task
● Sequencing (covered in I. Listening - Exercise 2 - Q3-6): 1 task

Constructed-response
● Limited-response
- Gap-filling (covered in I. Listening - Exercise 1; Exercise 2 - Q1-2): 2 tasks
- Sentence Completion (covered in III. Writing - Exercise 1): 1 task
● Extended-response (Performance): 3 tasks
- Paragraph Writing (covered in III. Writing - Exercise 2)
- Interview: responses to open-ended questions (covered in IV. Speaking - Exercise 1)
- Paired test (covered in IV. Speaking - Exercise 2)
Accordingly, all questions in the analyzed test will be closely examined in terms of the characteristics of their corresponding assessment methods, the construction of the tasks' instructions, input and vocabulary level, and their achievement of the expected assessment targets.
1. Listening
The listening tasks are constructed in the forms of gap-filling (Task 1, Q1-5; Task 2, Q1-2) and sequencing tasks (Task 2, Q3-6). The former's items are designed with little modification from the tapescript, while the latter are synthesized to fit the aim of listening for general ideas. With this construct, both tasks allow minimal guessing probability and highly objective scoring. Besides, the instructions for both tasks are written clearly with a brief description of the context. However, there is no direct instruction on how to note the order of events in the sequencing task, which might pose an unnecessary challenge for students in achieving the task requirement. Regarding the input, the gap-filling items of task 1 (Q1-5) are efficiently designed in a table with reasonable intervals in between. This allows students to track the items easily and have enough time to fill in one gap before the next item is mentioned. Meanwhile, the writing of the gap-filling items in task 2 (Q1-2) is of little service to the purpose of listening for specific information; rather, it merely tests the recognition of words in use. Vocabulary-wise, all words and phrases in the tasks are largely within the A2 level (roughly 90% in task 1 and 85% in task 2). Those of higher levels were mostly taught previously, such as although (B1) and combine (B2). Advanced-level words (C1, C2), accounting for 5% in task 2, might hinder the comprehensibility of the recording.
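Above-A2 percentages of the kind cited throughout this analysis can be estimated mechanically by checking each token of a text against a CEFR word list. The sketch below is illustrative only: the tiny A2_WORDS set is a hypothetical stand-in for a full A2 inventory, not the list used in this analysis.

```python
import re

# Hypothetical stand-in for a complete CEFR A2 word list.
A2_WORDS = {"many", "people", "speak", "english", "and", "it",
            "with", "a", "good", "job"}

def share_above_a2(text, a2_words):
    """Proportion of word tokens not found in the A2 list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    above = [t for t in tokens if t not in a2_words]
    return len(above) / len(tokens)

sample = "Many people speak English and combine it with a good job"
# Here "combine" is the only off-list token (1 of 11 ≈ 9%).
print(f"{share_above_a2(sample, A2_WORDS):.0%}")  # → 9%
```

In practice, the tokens would also be lemmatized first (e.g. launched → launch) before lookup, since CEFR lists record base forms.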
2. Reading
Both instructions for these two tasks are written clearly and briefly, which ensures the effectiveness of the instructions. The test designers also make great use of action verbs (read, complete, circle) as well as clarify the task requirements (decide if the statements are true or false). In terms of the topics chosen, both are relevant to the content of the course, as they demand the knowledge learned in Unit 9 and Unit 10.
Besides, there are noticeable differences between the two reading tasks. Reading task 1 is a multiple-choice assessment, which contributes to the objectivity of the rating process. However, as the answers for Q4-5 are taken directly from the passage without being paraphrased, there is a fair chance that students can guess the answers correctly without comprehending the text. In terms of the input, approximately 20% of the words in passage 1 are above A2 level, with only 1 word above B2 level (See Appendix C). Given that the majority of those are included in the syllabus, the chosen passage is generally comprehensible to the targeted learners. Meanwhile, reading task 2 takes the form of a true/false assessment, thus requiring students' understanding of the text and enabling objective grading. Regarding the input of text 2, although approximately 23% and 12% of the words are at B1 and B2 level respectively (See Appendix C), the majority of the words at B2 level have already been taught to students in Unit 10, such as launched, missions, telescopes (See Appendix B), which improves the comprehensibility of the text. However, several spelling and grammatical mistakes appear in the text, such as accommodate, flybies, serve as space environment… In terms of layout, task 2 is effective, as it requires students to circle the correct answers instead of writing them down, which creates uniformity and objectivity in the answers.
3. Writing
Writing task 1 is designed as a sentence completion task with given prompts. This type of task can assess writing ability only to a narrow extent; instead, it concerns assessing the depth of vocabulary, which is often the focus of assessing writing for students of a low level (Brown, 2004). The instruction and the wording of the items are also clear and straightforward, with an example provided. However, Q4 seems redundant, as it tests a grammatical point (the present perfect) not included in the textbook (See Appendix A). Besides, all the tested lexical items (breathtaking, affordable, break the bank, full board) have been taught in Unit 8 (See Appendix B), and no grammatical mistakes are recorded.
Writing task 2 is an extended-response assessment in the form of paragraph writing. This type of task is suitable for evaluating higher cognitive skills such as analysis and production (Florida Center for Instructional Technology, n.d.). However, the construction of this task reveals several problems. First, the instructions fail to clarify the specific genre of the writing task, such as a blog or a descriptive paragraph. Moreover, the topic of the writing does not represent any learning targets identified in the textbook. This raises a question regarding the content validity and reliability of the test tasks. It is highly likely that students will struggle to deliver the requirements of the task, since the task specifications are not clearly communicated. Apart from that, the cues provided are sufficient and contain simple language appropriate for students of A2 level.
4. Speaking
There are several problems that need adjusting in the instructions of the speaking section. Both of them are unclear, informal, and lengthy, which might cause confusion for the test-takers. A clear context for the role-play in task 2 is also missing. In response, task 2 should include an example, as its requirement is rather complicated and might cause students difficulty in performing it. Regarding the assessment method, task 1 is designed as an interview, requiring students to respond to open-ended questions, while task 2 is a paired test in which two students have to carry out a conversation with given prompts. These two formats are particularly well-suited for assessing the grasp of vocabulary and the speaking skills set out in the targets. Nevertheless, the test designers should elaborate more on how the examiners can successfully conduct the tasks given the tight time limit of the test. Besides, both speaking tasks contain simple vocabulary, which is suitable for students of A2 level. In terms of topic, the focus is placed upon Unit 7: Recipes and eating habits, which is relevant to the course and interesting to students.
REFERENCES

Bachman, L. F., & Palmer, A. S. (1996). Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford University Press.
Brown, H. D. (2004). Principles of language assessment. In Language Assessment: Principles and Classroom Practices (pp. 19-41). Pearson Education, Inc.
Florida Center for Instructional Technology. (n.d.). Classroom Assessment - Constructed Response. Retrieved from https://fcit.usf.edu/assessment/constructed/constructb.html
Hughes, A. (2003). Validity. In Testing for Language Teachers (2nd ed., pp. 26-35). Cambridge University Press.