AN INVESTIGATION INTO TEACHERS' WRITING ASSESSMENT LITERACY AND WRITING ASSESSMENT PRACTICES AT A UNIVERSITY IN THAI NGUYEN
INTRODUCTION
Statement of research problem and rationale for the study
Teaching and assessment have long been considered inseparable in actual educational practice. The assessment of students' performances is of great importance to teaching activities (Mertler, 2009). Moreover, positive improvements during the teaching process, including proper instruction, greater motivation and better learner performance, can only be made once sufficient assessment and marking techniques are applied (Mellati & Khademi, 2018). Fueled by this growing concern for assessment practices, numerous studies have also examined the relationship between teachers' beliefs and their actual classroom practices, assessment practices included. This link has been found by various researchers to be incompatible, incongruent, or mismatched (Karavas-Doukas, 1996; Phipps & Borg, 2009).
Vietnam has experienced a growing concern for language testing and assessment in recent decades. Efficient teaching methods are no longer the only primary concern of scholars, educators and stakeholders; instead, greater attention has shifted to assessing students' performances and achievements. With the introduction of the national project "Teaching and Learning Foreign Languages in the National Formal Educational System in the Period of 2008–2020", or Project 2020, the curriculum for teaching English underwent a significant change with a shift to the communicative teaching approach. As a result, the practice of teaching and learning writing skills has received greater attention from scholars. However, in contrast to the rich body of studies carried out in high school contexts, teachers' writing assessment literacy and their actual practices at tertiary level have not received sufficient attention.
Being aware of this issue, this study, entitled "An investigation into teachers' writing assessment literacy and writing assessment practices at a university in Thai Nguyen", expects to bridge the afore-mentioned research gap. This research aims to answer the three following questions:
1. What is English language teachers' perception of writing assessment literacy at the Faculty of English Teacher Education of a university in Thai Nguyen?
2. What are the current practices of classroom writing assessment conducted by these teachers?
3. What factors affect these teachers' practices of classroom writing assessment?
Scope of the research
This study is limited to writing assessment literacy, current practices of writing assessment, and factors influencing the assessment decisions of teachers in the Faculty of English Teacher Education of a university in Thai Nguyen.
Concerning the matter of assessment literacy, the research aims at investigating teachers' understanding of writing abilities, their knowledge of writing assessment procedures and measurements, and their skills and abilities to carry out writing assessment.
With regard to teachers' practice of writing assessment, this study focuses on their actual writing assessment procedure, including purposes, construct definition, assessment methods, designing writing items and tasks, writing assessment administration, assessing the quality of the writing assessment procedure, interpreting results, and grading and feedback.
Finally, factors affecting teachers' decisions during their actual writing assessment procedure are also targeted in this research. However, the study focuses on external factors and classroom realities.
Significance of the study
This study wishes to provide in-service university English teachers and educators with more in-depth information about writing assessment literacy and actual practices of writing assessment. This would consequently support them in building suitable plans as well as making appropriate adjustments for teaching writing at university level. Significantly, this research might also serve as a reference for establishing training programs and courses on writing assessment literacy and teaching methodology for pre-service teachers.
Methods of the study
This research adopts a mixed-methods design. To be more specific, a quantitative procedure is applied to analyze the data collected from the questionnaire, while a qualitative one is employed to analyze the data from the interview.
The questionnaire is utilized to investigate two aspects, namely writing assessment literacy and writing assessment practice. Accordingly, it is divided into two parts to collect information on these two issues. Additionally, it includes a section on the problems that writing teachers encounter during their current practice as well as the factors influencing their actual decisions. Meanwhile, the interview enables the researcher to discover further factors with great impact on participants' writing assessment procedures.
LITERATURE REVIEW
The definitions of assessment for language education
For language education, Bachman (2004) considered assessment to be the procedures by which data on the aspects of concern are gathered in a systematic way. These data might come from either formal tests or other informal methods such as teachers' checklists of student performances (Purpura, 2016). Assessment is also defined as a procedure providing the foundation for judgments about students' skills and knowledge (Bachman, 1990; Lynch, 1996; McNamara, 1996, as cited in Shohamy & Inbar, 2006).
At the smaller scale of the classroom, assessment is also considered a process which "begins with the identification of a purpose for gathering the information, proceeds to selection of an appropriate way to gather information, and concludes with use of the results to enhance the quality of teachers' decisions" (McMillan & Workman, 1998, p.10). However, it is later believed to include not only procedures but also sets of strategies and techniques employed by both teachers and students with the aim of gathering, analyzing and reporting student achievements (McMillan, 2015).
Language assessment procedure
As mentioned in the previous section, language assessment is by definition a procedure. One of the most popular conceptions of the language assessment process is proposed by Bachman and Palmer (1996), in which assessment is described as a conceptual three-stage process comprising Design, Operationalization and Administration. In the Design stage, six activities are carried out, each corresponding to an expected product: a description of test purposes, the TLU domain and task types, a description of test takers, a construct definition, a plan for evaluating test usefulness, and an inventory of resources. The Operationalization phase mainly focuses on the development of task specifications and a blueprint; this is when the actual test tasks and instructions are written and the procedures for the test are specified. In the last phase, Administration, the test is delivered to test takers and the necessary information is collected for further analysis.
Figure 1 Language assessment process by Bachman and Palmer (1996)
Another description of the assessment process is proposed by Shohamy and Inbar (2006), who believe that despite differences in assessment instruments, the assessment process still includes particular phases. The language assessment procedure starts with setting assessment purposes, defining the language knowledge to be judged and selecting appropriate assessment processes. After that, actual tasks and items are developed and generated. In the next step, the assessment instrument is administered, and its quality is then assessed along with validity and reliability; in particular, expected difficulties during the administration need to be listed. Lastly, results from the assessment are interpreted and reported to relevant bodies.
Figure 2 The language assessment process (Shohamy and Inbar, 2006)
A simpler version of the test development process is suggested by Cheng and Fox (2017), consisting of six major steps. According to this, the initial point of an assessment procedure is not identifying the purpose but rather a mandate. The mandate might emerge internally (e.g., teachers' desire to evaluate students at a particular point during the learning course) or externally (e.g., the interest of other stakeholders, ministries or departments of education). This mandate later motivates the purpose of the test.
Figure 3 Overview of a test development process (Cheng & Fox, 2017)
The other stages, in comparison, can be seen as a shorter and simpler version of the detailed steps described in the two previously mentioned procedures. However, a noticeable point in this process is its emphasis on continuous evidence collection, review and revision as an on-going act during the life of the test.
Based on the literature on the language assessment process mentioned above, the major steps that need to be included in an assessment procedure, despite the variety in detailed descriptions, can be summarized as follows: identifying purposes, defining the construct, selecting methods, designing items and tasks, administering the assessment, assessing the quality of the assessment procedure, interpreting the results, and reporting and giving feedback on the results.
Important points of these stages will be discussed further in the next section of this study. Furthermore, this summary of the language assessment procedure is later utilized as the foundation for constructing the questionnaire items investigating teachers' assessment literacy and their practice of it. The next sections will present each of the steps in depth.
2.2.1 Identifying purposes of language assessment
Identifying the specific purposes of assessment is so important that it ranks first in the assessment process proposed by the scholars above. According to Mertler (2003), there are five major purposes of assessment: planning, conducting and evaluating instruction; diagnosing student difficulties; placing students; providing feedback to students; and grading and evaluating academic learning. In terms of instructional decisions, Mertler classifies them in a chronological order of before instruction, during instruction and after instruction. Decisions made while planning for instruction mainly relate to drafting upcoming material, designing lesson plans, selecting supplemental materials, and deciding on students' assignments or projects as well as formats for assessment. Noticeably, they are constructed on the foundation of various sources, including teachers' knowledge, the district's curriculum, and students' preferences and abilities. Afterwards, decisions concerning the efficiency of the teaching process are made on a regular basis while instruction is occurring; it is necessary for teachers to be aware of students' difficulties when approaching a new concept or of their misbehaving and disrupting instruction in the classroom. Lastly, after the completion of instruction, teachers also need to evaluate it for further alteration in the future. Regarding investigating students' difficulties, Mertler points out that this activity is a kind of diagnostic assessment which helps teachers highlight the kinds of challenges that students are encountering; this functions as the basis for generating remedies for particular problems. Placing students is another significant purpose of assessment. Airasian (2000) contends that the drives behind placement decisions might be either academic or social, such as dividing classrooms into groups of similar ability, organizing students for group assignments or recommending suitable courses. The fourth aim of assessment is providing feedback to students, which is regarded as a form of formative evaluation. In contrast, grading and evaluating academic learning, the fifth purpose, is considered summative evaluation. This usually takes place at the end of a unit or other defined period, and grades are a method of communication between teachers and related bodies at this time.
Earlier than Mertler, Bachman (1981) indicates that the overall purpose of assessment is making inferences about language proficiency for later decision making. As a result, the specific purposes of assessment correspond to the types of decisions to be made. These decisions are divided into two levels, micro-evaluation and macro-evaluation: while the first group relates to individuals, the second pertains to the program. A summary of the particular purposes is presented in the table below.
Table 1 Purposes of assessment (Bachman, 1981)
Decisions about students: selection (entrance, readiness); placement; diagnosis; progress and grading
Decisions about teachers: identifying proficiency level; hiring a given individual as a teacher
Decisions about programs: evaluating appropriateness, effectiveness and efficiency; summative evaluation of programs
In addition to purposes similar to those mentioned by Mertler in the previous part, there are several special points to consider. Firstly, it is important to make a distinction between selection and placement, as both judge test-takers against certain sets of criteria; however, while selection might require a single overall score, placement decisions pertain to various levels. Secondly, while Mertler considers students' difficulties mainly when it comes to diagnosis, Bachman (1990) contends that its fundamental concern is identifying students' strengths and weaknesses. Significantly, progress and grading are carried out to satisfy the demand for judging learners' language improvement; the operationalisation might target either overall performance or component skills.
However, the emergence of constructivist theory has led to a shift towards three purposes of assessment which are inter-related despite their differences (Gonzales & Aliponga, 2012). They include assessment FOR learning, assessment AS learning and assessment OF learning. A brief comparison of them, based on Gonzales and Aliponga (2012), is presented in Table 2 below.
Table 2 Comparison of three assessment types (Gonzales & Aliponga, 2012)
Assessment for learning: provides teachers with sufficient data to adjust and differentiate teaching and learning activities; with the aim of ensuring students' utmost benefit from the instructional process, teachers usually carry it out continuously over the course of instruction; its results provide teachers with a chance to improve their instructional process.
Assessment as learning: improves and reinforces learners' metacognition, that is, their knowledge of their own thinking processes; it is carried out by students themselves when they actively regulate their learning and utilize teachers' feedback; its results are students' adjustments and adaptations based on teachers' feedback, through which they might achieve important changes in their comprehension and studying.
Assessment of learning: provides the foundation for instructional and educational decisions; it is usually carried out at the end of the instructional process; its results indicate learners' proficiency.
It can be seen that, despite a heavier focus on instruction, the assessment purposes mentioned by earlier research are still reflected in this categorization. In this proposal of assessment purposes, students' role in changing their own understanding and learning is acknowledged instead of the focus resting solely on that of assessors.
In summary, based on the above-mentioned literature, the most significant purposes of assessment relate to the instructional process and to learners. Among the variety of assessment purposes, this study is most concerned with the following ones due to its focus on teachers' assessment literacy and practice. The list of assessment purposes that is later utilized to construct several questions in this research is as follows:
1. Planning, conducting and evaluating instruction
2. Diagnosis (students' difficulties, strengths and weaknesses)
5. Evaluating (academic learning; appropriateness, effectiveness and efficiency of teaching programs)
7. Improving and reinforcing students' metacognition
2.2.2 Defining the construct
As the general purpose of language assessment is providing information for further inferences about learners' language ability (Bachman, 1981), identifying the language ability to be assessed, or the construct, is of great importance. This process is known as construct definition, and the way it is carried out is mostly driven by the types of inferences that assessors intend to make based on the results of language assessment (Bachman & Palmer, 1996). In other words, the definition of the construct needs to be in alignment with the purposes of the test and the target language use situation (Weigle, 2002). Furthermore, the construct definition might be based on different perspectives, ranging from the content of specific courses to a theoretical model of language ability (Bachman & Palmer, 1996). This leads to the categorization of two types of construct definitions: syllabus-based and theory-based. Bachman and Palmer (1996, p.118) point out that the former type "distinguish[es] among the specific components of language ability that are included in an instructional syllabus". As a result, if the intention of assessors is to collect information on "students' mastery of specific areas of language ability", the syllabus-based construct would be most appropriate. Meanwhile, theory-based construct definitions originate from models of language ability. Bachman (1990) provided a description of language competence, according to which the components of language competence can be divided into organizational competence and pragmatic competence. In particular, organizational competence refers to abilities involved in "controlling the formal structure of language for producing or recognizing grammatically correct sentences, comprehending their propositional content, and ordering them to form texts" (Bachman, 1990, p.87). Meanwhile, pragmatic competence emphasizes the connection between "language users and the context of communication" (Bachman, 1990, p.89).
Figure 4 Components of language Competence (Bachman, 1990)
2.2.3 Selecting assessment methods
Assessment methods can be categorized in a number of ways; the foundation for classification might be the types of questions or items (Cheng & Fox, 2017). Brown and Hudson (1998) propose three major categories of assessment methods: selected-response assessments, constructed-response assessments and personal-response assessments.
Factors affecting teachers’ classroom assessment practices
Numerous studies have investigated the relationship between teachers' beliefs and their actual classroom practices, assessment practices included. This relationship has often been concluded to be incompatible (Karavas-Doukas, 1996, as cited in Phipps & Borg, 2009). Such a distinction is reflected in the different terms used to express this negative evaluation, including "incongruence, mismatch, inconsistency, and discrepancy" (Phipps & Borg, 2009).
In an attempt to illustrate this relationship as well as the factors affecting it, Borg (1997) proposed a model of teacher cognition, schooling, professional education and classroom practice. According to this, classroom practice is directly influenced by teacher cognition, which in turn conforms to the impact of social, professional and personal factors. In other words, although experienced teachers deliver their teaching and instruction on the basis of their constructed knowledge of teaching, alterations still occur due to the intervention of contextual influences such as class size, students' performance or examinations (Borg, 2006). However, as the priority of this model is teacher cognition, classroom practice is mainly analyzed in relation to this factor. As a result, more profound research is needed for further understanding of the factors driving classroom practice in general and classroom assessment in particular.
Figure 6 Teacher cognition, schooling, professional education, and classroom practice (Borg, 1997)
With a specific focus on assessment practices at classroom level, Tierney (2006) proposed a model illustrating the impact of six educational sources of information on this aspect. These six influences are divided into two main groups: knowledge-generating sources and mediating sources. The former category includes evaluative inquiry, educational research and large-scale assessment. While the aims of educational research are building up a knowledge foundation and developing comprehension of practical and theoretical aspects of certain academic interests (Tierney, 2006), evaluative inquiry mainly relates to understanding the merit or significance of the concerned subjects (Cousins et al., 2004). Large-scale assessment, despite being considered a form of educational inquiry, targets the identification and description of students' academic achievement at a specific time. Information from these three sources is believed to be mediated through educational policy, professional development and teachers' beliefs, which form the second category.
Figure 7 Framework model showing hypothetical flow of influence between sources (Tierney, 2006)
However, it is noticeable that although the final destination of this model is specifically classroom assessment practices, its main focus lies in the sources of information that affect assessment. The underlying reason is the research context, in which a reform of assessment practices away from the conventional summative orientation was taking place. Therefore, this model still fails to indicate specific factors influencing classroom assessment practices in a general context.
Figure 8 Teachers’ classroom assessment decision making
With a focus on classroom assessment and grading decisions, McMillan and Nash (2000) introduced a model demonstrating the factors affecting teachers' classroom assessment decision making. According to this model, there are three major influences: (1) teachers' knowledge, beliefs, expectations and values, (2) external factors and (3) classroom realities. It is noticeable that "assessment decision making is characterized by tension between the internal beliefs and values of the teachers and external influences that are imposed on them" (McMillan, 2005, p.35). In other words, pressures arising from external factors such as large-scale testing programs, district policies and parents, as well as the real situations of the classroom, create an obligation for teachers to employ assessment practices that contradict their beliefs and values.
Based on the above-mentioned models, it can be concluded that the factors influencing assessment practices can be divided into three major categories: (1) internal factors, or teachers' knowledge, beliefs, expectations and values, (2) external factors emerging from outside the classroom context, and (3) classroom realities. Due to its clarity in describing each factor as well as its apparent illustration of the relationships among them, the model of McMillan and Nash (2000) is employed as the theoretical foundation for constructing the interview questions in this study.
2.3.1 Teacher knowledge, beliefs, expectations, and values
Teachers' cognition has been acknowledged as "a powerful influence on their practices" (Borg, 2003, p.91). McMillan (2003) described this in terms of five specific manifestations: pulling for students, philosophy, promoting understanding, accommodating individual differences, and motivation.
Firstly, pulling for students is explained as teachers' attempt to provide students with chances to succeed. This, accordingly, results in teachers adapting their assessment practices; typical examples might be designing various test forms, accepting substitutions in testing methods, allowing late submission or giving extra credit.
Secondly, teachers' philosophy of education also exerts a profound impact on assessment decision making. Participants in McMillan's (2003) study referred to their fundamental educational beliefs and values as well as their general goals for students (noncognitive outcomes included) when explaining the driving force behind their assessment decisions. Noticeably, teachers' beliefs and values, or their cognition, are also "shaped by the experiences teachers accumulate" (Borg, 2003, p.95). Findings from a longitudinal study carried out by Woods (1996) indicate changes in a participant's perception of purpose in language learning; it was students' failure to respond to his pedagogical method that stimulated him to enlarge his notion of purpose, which ultimately led to the modification of his instructional approach.
Promoting students' understanding also plays an important role in shaping teachers' assessment practices. Assessment of students' progress is based on analysis of their in-depth comprehension, their ability to use knowledge and its application in resolving issues or making decisions (McMillan, 2003).
Fourthly, the demand for assessment variety also affects teachers' decisions in class. In his research, McMillan (2003) observed that teachers applied their general beliefs about the role of accommodating individual differences when making further changes to assessment types.
The fifth influence in McMillan and Nash's (2000) model is teachers' beliefs regarding students' engagement. In particular, the underlying perception of the importance of students' active participation in class prompts teachers to carry out assessment activities which enhance students' motivation and engagement as much as possible (McMillan, 2003).
In summary, based on the analysis presented above, I would like to categorize the aspect of teachers' knowledge, beliefs, expectations, and values into two smaller groups: (1) teachers' philosophy (general beliefs and values from their own experience and education or training) and (2) specific beliefs pertaining to enhancing students' academic performance (increasing opportunities to succeed, maximizing understanding, promoting engagement and motivation).
2.3.2 External factors
External factors are those beyond teachers' control (McMillan, 2003). According to the model, there are three major external influences on teachers' assessment decisions: state-mandated high-stakes tests, district policies and requirements, and parents.
Writing assessment
In section 2.2, the process of language assessment in general was presented. This process includes eight major steps: identifying purposes, defining the construct, selecting methods, designing items and tasks, administering the assessment, assessing the quality of the assessment procedure, interpreting the results, and reporting and giving feedback on the results.
The process of assessing writing skills specifically follows the same procedure. However, it contains certain differences due to the typical features of writing skills. The following sections will present the four steps with significant differences for this particular skill: defining the writing construct, selecting writing assessment methods, designing writing tasks, and crafting the rating scales.
2.4.1 Defining the writing construct
An important beginning for assessing a skill is defining it (Weigle, 2002), and the case of writing assessment is no exception. An early definition was developed by Hayes, which emphasizes that writing is a social act. Yi (2009), however, expressed concern about the difficulties of defining the construct of writing ability, as she found that pedagogical approaches to teaching this skill are a major contributor to its formation. She proposed three main approaches to teaching writing, namely product/text-oriented, process/cognitive-oriented and reader/genre-oriented, as the foundation for generating three corresponding definitions of writing ability. In the light of the product/text-oriented teaching approach, writing ability is defined as the competency to generate "contextually correct forms of language, following prescribed patterns at either sentence or discourse level" (Yi, 2009, p.58). Meanwhile, writing competency from the viewpoint of the process/cognitive-oriented approach is defined as writers' capacity, firstly, to trigger and develop ideas and, secondly, to improve and edit them to perfection in a provided context. Lastly, from the perspective of the reader/genre-oriented approach, the definition of writing ability pertains to the capacity to carry out writing tasks that meet a provided demand and "satisfy a given discourse community with regard to the structure and content of the discourse, and communicate functionally" (Yi, 2009, p.60).
An aspect worth considering when defining writing ability is its components as suggested by models of language knowledge related to writing. Building upon a series of previously published models by Hymes (1972), Canale and Swain (1980) and Bachman (1990), Grabe and Kaplan's model outlines a general view of the various factors of language competence. According to this model, language knowledge is categorized into linguistic knowledge, discourse knowledge and sociolinguistic knowledge. Linguistic knowledge pertains to the fundamental elements of language, discourse knowledge relates to ways of building cohesive text, while sociolinguistic knowledge involves the appropriate use of language in various social contexts. This description is "useful in outlining considerations for designing and scoring of writing tasks for assessment" (Weigle, 2002, p.29).
In this section, attempts have been made to define writing ability. This provides a foundation for defining constructs in the writing assessment process and is employed to build questions regarding this issue later in this study.
2.4.2 Selecting the methods to assess writing
Various methods applicable to language assessment have been presented in section 2.2.3. These methods include: selected-response (true-false, matching and multiple-choice), constructed-response (fill-in, short-answer and performance), personal-response (conferences, portfolios, and self- and peer assessments), observations (group discussions, independent work, rehearsals, daily work), teacher-made assessments (conferences, interviews, group/class discussions, class meetings) and student-conducted assessments (presentations, tests, portfolios, artwork, critiques/reviews, self- and peer reflections). All of these methods can be employed for both formal and informal writing assessment. However, the following sections will present the three most frequently exploited methods for assessing writing skills: essays, timed impromptu writing tests and portfolios.
As the typical feature of writing assessment is that it demands language production, Genesee and Upshur (1996) contend that open-ended tests are highly suitable. The most frequently applied form of open-ended task in writing is the essay. Nitko (2001) divided essay items into two major groups, restricted and extended response items; however, for the aim of testing general writing ability, the latter type is more suitable. With extended response items, writers are granted the freedom to illustrate ideas and the relationships between them as well as to decide on the organization of the essay.
The direct test of writing, or timed impromptu writing test, is argued by Weigle (2002) to be the most popular method for testing writing. Typically, test takers are requested to produce a sample of writing. There are five major features of this writing assessment method (Hamp-Lyons, 1991, p.5): 1/ test takers are required to produce at least one piece of continuous text; 2/ a set of instructions is provided and requires time for responding to the prompt; 3/ judgements need to follow certain principles and should be demonstrated in numerical forms; 4/ the production of text is restricted within a limited amount of time; and 5/ the topic is not revealed to test takers before test time.
Portfolio assessment is considered an alternative approach to writing assessment, providing more profound inferences regarding writing ability (Weigle, 2002). In terms of writing assessment, a portfolio is a collection of written works produced for various aims during a certain period of time (Weigle, 2002). Therefore, it allows writers to exhibit a wide range of writing performances (Hamp-Lyons & Condon, 2000). An outstanding feature of employing portfolios in assessing writing is that they make room for students' reflection and self-assessment: students have to reflect on their own writing performances as well as make decisions regarding the arrangement of the portfolio.
2.4.3 Designing writing tasks
As mentioned in the earlier section, designing assessment items and tasks is a task of great complexity, as it requires considerations of a range of different aspects. Similarly, designing writing tasks also relates to a variety of issues.
The four most basic requirements for assessors to consider were mentioned by White (1994): clarity, validity, reliability and interest. Clarity contributes to students' quick and easy understanding of the task. Meanwhile, validity indicates the ability of the prompt to draw out written products that cover the targeted range of abilities from test takers; a good prompt is supposed to provide better writers with opportunities to exhibit their best writing while simultaneously enabling weaker ones to perform at their own levels, so the scores of skilled writers on writing tasks with a high level of validity should be higher than those of weaker ones. Reliability refers to the consistent application of scoring criteria to all collected responses; as a result, the same papers should receive similar scores regardless of readers. A high level of reliability also requires that the writing prompt grant sufficient flexibility so that test takers with various backgrounds can deal with at least a part of it. However, should too much flexibility be included, the variety of responses would increase, thus causing difficulties in comparing different answers.
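As a simple illustration of the reliability requirement described above, the consistency of scoring can be checked by comparing the scores that independent raters give to the same set of essays. The following is a minimal sketch, not part of this study's procedure, using hypothetical scores; a high correlation between the two raters would suggest the scoring criteria are being applied consistently.

```python
# A minimal sketch of an inter-rater consistency check: if the same papers
# receive similar scores regardless of the reader, scores from two independent
# raters should correlate highly. The scores below are hypothetical.
from math import sqrt

rater_a = [7.0, 5.5, 8.0, 6.0, 4.5, 9.0]   # scores given by rater A
rater_b = [6.5, 5.0, 8.5, 6.0, 5.0, 8.5]   # scores given by rater B for the same essays

def pearson(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"Inter-rater correlation: {pearson(rater_a, rater_b):.2f}")
```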
Apart from these four fundamental requirements, Weigle (2002) also adds other factors for test developers to consider when designing writing tasks: subject matter, stimulus material, genre, time allotment, and the use of dictionaries and other reference materials.
2.4.4 Crafting the rating scales
According to Hyland (2003), there are three major types of writing scales used to assess writing: holistic, analytic and trait-based.
These scale types differ in whether 1/ the scale targets broadly or narrowly defined tasks and 2/ the score provided is single or multiple (Weigle, 2002). The selection of a suitable rating scale, therefore, also relies on these typical features.
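To illustrate the single-score versus multiple-score distinction, the sketch below contrasts a holistic judgement with an analytic rating of the same essay. The criteria, scale and weights are hypothetical examples, not taken from this study or from any official rubric.

```python
# A minimal sketch (hypothetical criteria and weights) contrasting a holistic
# score with an analytic rating scale: the holistic approach reports one
# overall score, while the analytic approach reports a score per criterion,
# optionally combined into a weighted total.

holistic_score = 7  # single overall judgement of the essay

analytic_scores = {          # one score per criterion (0-10 scale assumed)
    "task fulfilment": 7,
    "organization": 6,
    "vocabulary": 8,
    "grammar": 5,
}
weights = {                  # hypothetical weighting of each criterion
    "task fulfilment": 0.3,
    "organization": 0.3,
    "vocabulary": 0.2,
    "grammar": 0.2,
}

weighted_total = sum(analytic_scores[c] * weights[c] for c in analytic_scores)
print("Holistic score:", holistic_score)
print("Analytic profile:", analytic_scores)
print(f"Weighted analytic total: {weighted_total:.1f}")
```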
In summary, this section has presented some major issues regarding writing assessment. In the light of the literature introduced in the earlier sections, it is obvious that writing assessment is a subset of language assessment; in other words, knowledge of writing assessment is compatible with that of language assessment in general. However, there still exist several particular differences for writing test developers to consider.
In this study, the knowledge presented in sections 2.2 and 2.4 provides the foundation for constructing the questionnaire items relating to teachers' practice of writing assessment.
Writing assessment literacy
The term "assessment literacy" was first introduced by Richard Stiggins (1991). He contends that assessment literacy includes a fundamental understanding of assessment and of the relevant techniques to measure students' performances. Another simple definition of assessment literacy is suggested by Davies (2008), in which assessment literacy is supposed to comprise skills and knowledge: skills relate to the ability to analyze and construct a test, while knowledge emphasizes the "relevant background in measurement and language description" (Davies, 2008, p.328). Significantly, Fulcher (2012) proposes an expanded definition of language assessment literacy which points out three dimensions of knowledge included in it: practice, principles and contexts.
Figure 9 Fulcher’s expanded definition of assessment literacy
A particular explanation of this figure is also provided by Fulcher (2012) as follows: "The knowledge, skills and abilities required to design, develop, maintain or evaluate, large-scale standardized and/or classroom based tests, familiarity with test processes, and awareness of principles and concepts that guide and underpin practice, including ethics and codes of practice. The ability to place knowledge, skills, processes, principles and concepts within wider historical, social, political and philosophical frameworks in order to understand why practices have arisen as they have, and to evaluate the role and impact of testing on society, institutions, and individuals."
Despite differences in the wording of the above-mentioned definitions of assessment literacy, significant similarities are noticeable. As a result, in this study, assessment literacy is described as a combination of an understanding of language abilities, knowledge of assessment procedures and measurements, and the skills and abilities to carry out assessment.
Various attempts have been made by many researchers to describe writing assessment literacy. According to Weigle (2007), it is necessary for writing teachers to possess skills relating to developing, administering and scoring writing tasks. Besides, the abilities to recognize good assessment and its application in the classroom context, to identify the components of a good paper, and to comprehend both formative and summative assessment are all crucial for writing teachers. Meanwhile, Crusan, Plakans and Gebril (2016) contend that writing assessment literacy includes three major components: knowledge, beliefs and practices. In particular, Crusan (2010) lists compulsory abilities for writing teachers, including differentiating between formative and summative assessment, generating writing prompts which elicit the targeted data for different aims, comprehending the important role of fronting criteria, and understanding the uses and abuses of writing assessment. Based on these descriptions, writing assessment literacy can be understood as a combination of knowledge about formative and summative assessment and the qualities of good assessment, together with the skills and abilities to conduct writing tasks.
However, a comparison of this conclusion about writing assessment literacy with the definition of assessment literacy in general reveals two important issues: 1/ there is a shortage of research which focuses specifically on defining writing assessment literacy, and 2/ the above-mentioned definitions of writing assessment literacy pay attention to overly specific details instead of providing a broad overview. Therefore, this study prefers to consider writing assessment literacy a subset of assessment literacy in general and thus applies and adjusts the definition of assessment literacy presented earlier to define writing assessment literacy in particular. In conclusion, writing assessment literacy can be conceptualized as a combination of an understanding of writing abilities, knowledge of writing assessment procedures and measurements, and the skills and abilities to carry out writing assessment.
Studies and official documents on writing assessment literacy and practices at tertiary level in the context of Vietnam
2.6.1 Current issues of writing assessment practice at tertiary level
Current methods of assessing writing skills at tertiary level in some institutions mainly involve multiple-choice questions, essay writing and interviews, while assessment of the learning process accounts for only a minor part (Tran, 2017). Besides, summative tests and indirect testing methods still dominate in non-specialized contexts (Ngo, 2018).
Another alarming issue discovered by Le (2014) is that many teachers lack guidelines on methods to assess students in class. As the only available source for this is the syllabus, the choice of assessment techniques belongs to teachers alone.
Significantly, international standardized tests such as TOEFL, TOEIC or IELTS are exploited as the major yardsticks to assess students' skills (writing included) in many institutions (Hoang, 2010).
2.6.1.2 Rating scale and Scoring practice
Teachers' discretion plays a major role in the choice of rating scale (Ngo, 2018). This conclusion matches Tran's (2017) comment on the scoring practice at her school. She points out that scoring rubrics for assessing writing skills depend greatly on individual teachers instead of following the standards of the Vietnam 6-level system of testing English proficiency (VSTEP). She also adds that, due to the large number of students in each class, teachers fail to allocate an appropriate amount of time for correcting students' essays.
Another noticeable problem is the focus of teachers during the scoring of writing performances. Trinh and Nguyen (2014) point out that Vietnamese EFL teachers have a preference for grammatical correction instead of paying attention to communicative discourse; writings with grammar mistakes tend to receive lower scores in comparison with those that are free of grammatical errors.
At tertiary level, Le (2014) indicates that teachers face difficulties in giving feedback, as this is only possible for mid-term tests. They are unable to comment on end-of-term tests, especially when these are carried out on computers. More seriously, teachers' time for marking tests is so limited that they sometimes fail to correct students' mistakes.
2.6.2 Official requirements of writing assessment literacy
When it comes to developing Vietnamese English teachers' proficiency, it would be an omission not to mention the Vietnam National Foreign Language Project 2020 (Project 2020 in short). One of its significant documents is the Vietnam English Teacher Competency Framework, approved in 2012. However, the major aims of this framework relate to reinforcing teaching capacity via "syllabus/programme/subject evaluation, teacher's self-evaluation, teacher training and professional development" (Vu, 2013, p.5). It can therefore be concluded that there is a lack of focus on the assessment field in general and writing assessment in particular.
At tertiary level, the Regulations on Standards for University Teachers published by MOET (Vietnam's Ministry of Education and Training) in 2018 provide general requirements for university teachers as well as how to assess and rate them. According to the second standard for professional competencies presented in these regulations, teachers should be able to design and employ assessment tools and use the results of assessment to develop training programs as well as adjust their teaching activities. However, this is the only requirement concerning the aspect of assessment; apart from it, there are no other current official regulations regarding assessment competencies in general and writing assessment in particular. It can be concluded that there is a shortage of frameworks for assessing university teachers' writing assessment literacy.
The information presented above reveals that the practice of writing assessment and writing assessment literacy are both under the influence of regulations and requirements from the Ministry of Education and Training, which mainly provides major guidelines and criteria. Besides, the Vietnam 6-level system of testing English proficiency (VSTEP) and Project 2020 also affect the writing assessment practice of teachers at tertiary level to a certain extent. However, it should be noted that, as each university has the right to develop its own learning curriculum and relevant assessment practices, it is important to take this factor into serious consideration when carrying out research concerning assessment practice and literacy. This leads to a revisit of Fulcher's expanded definition of assessment literacy mentioned in the previous section.
Figure 9 Fulcher’s expanded definition of assessment literacy
Fulcher emphasizes context as one of the three components of assessment literacy. In the case of Vietnam, and tertiary education in particular, the aspects of context that should be considered include regulations and policies from the Ministry of Education and Training, the Vietnam 6-level system of testing English proficiency, and the actual requirements of the teaching environment of each university. Therefore, the definition of writing assessment literacy concluded in the earlier section should be supplemented with considerations of the Vietnamese teaching context.
In conclusion, writing assessment literacy can be conceptualized as a combination of three components: 1/ understanding of writing abilities, 2/ knowledge of writing assessment procedures and measurements, and 3/ the skills and abilities to carry out writing assessment in the teaching context of Vietnam. This leads to the inclusion of items concerning the teaching context in this study's questionnaire.
2.6.3 Factors affecting Vietnamese teachers’ classroom assessment practice
In Vietnam, a considerable volume of research has investigated the factors influencing teachers' classroom practices, assessment practices included. Nguyen (2021), studying the factors influencing high school teachers' classroom practices, indicated that the greatest impacts came from contextual factors, including the language curriculum, assessment resources, time, and workload. She also pointed out learner-related factors, namely students' characteristics, linguistic proficiency, and expectations of learning outcomes. Teachers' experience and assessment abilities, however, led to positive alterations in their assessment practice. Sharing several similar ideas, Vu (2017) also considered heavy workloads and limited teaching hours to be negative influences on teachers' classroom assessment practices. Besides, results from her study emphasized the dominance and harmful effects of both final and high-stakes exams, including the National High School Graduation Exam and gate-keeping proficiency tests for university students. However, the importance of teachers' assessment literacy was clearly recognized due to its contribution to designing and constructing high-quality assessment tasks and class tests.
With a specific focus on the university level, Phan (2018) investigated the factors influencing teachers' actual classroom practices and came to a similar conclusion to the previously mentioned studies. She also listed the syllabus time-frame and class size as negative influences. In particular, participants in this research reported experiencing a shortage of time when teaching large classes of around 50 students; this large number of students prevented them from explaining essential knowledge to students and maintaining a high attention span. Phan's (2018) findings also conform to those of Nguyen (2021) regarding student variables. Specifically, students' limited motivation and low linguistic ability forced participants in Phan's (2018) study to follow the provided teachers' guidelines and use mostly Vietnamese in class to ensure pupils' comprehension of the lesson content.
A significant factor in this research, however, is the impact of cultural and educational norms. The deep-rooted old teaching practices with which many teachers and students have long been familiar discouraged participants in Phan's (2018) study from applying interactive teaching practices. This was fueled further by the widespread Vietnamese achievement-oriented culture: under its profound impact, many parents regularly expect their children to pass exams with flying colors, leading to an unavoidable shift of teaching methods towards exam preparation. Without doubt, consideration of cultural factors, or the Vietnamese context, is imperative to gain an in-depth understanding of the factors influencing teachers' classroom assessment practices. The importance of the Vietnamese context has also been mentioned earlier in section 2.5 of this study.
METHODOLOGY
Settings of the study
This research was carried out at the Faculty of English Teacher Education of a university in Thai Nguyen province. The investigated university is a public one whose primary aim is training pre-service teachers for a variety of academic subjects, English included.
The average class size in this university is about 30 students per class. Writing skills are taught to students from the first year to the third year: during the first two years, general writing skills are the major focus, while more profession-related ones are the priority of the third year.
Sampling and participants
The targeted participants of this study are 17 English teachers from the Faculty of English Teacher Education of a university in Thai Nguyen province. The number of female teachers dominates that of males, at 14 and 3 respectively. Most of the participants hold a master's degree, while 4 of them possess a doctoral degree.
Significantly, the majority of the surveyed teachers are highly experienced, with 16 of them having taught English for at least 10 years; only one participant has 8 years of experience in teaching English in general. Regarding the number of years teaching writing skills specifically, 12 of them have spent at least 8 years. In contrast, 4 teachers have only taught this skill for 2 or 3 years, and 1 of them for 5 years.
It is worth noting that many participants have not had the chance to attend courses specifically focusing on writing assessment; only 4 of them have taken such training.
In terms of the questionnaire, the research population involved all 17 teachers. To ensure a thorough understanding of all the delivered questions, participants were provided with a Vietnamese version of the questionnaire. The questions in this questionnaire relate to two major aspects: teachers' perception of writing assessment literacy and their current writing assessment practices.
Meanwhile, three teachers participated in the interview. A brief introduction of their information is presented in the table below.
Table 6 Participants' profiles (Author's data): the table summarizes the three interviewees' years of general teaching and writing teaching experience, their highest degrees (one PhD and two Master's degrees) and their professional status, one of them being the Head of the Faculty.
Although an effort was made to diversify participants' genders in this research, the male teachers declined to attend the interview. Therefore, all the willing interview participants were female.
All participants in the interview possess rich teaching experience in general; however, their experience in teaching writing specifically varies. This fact can be explained by the policy of their faculty: each teacher is responsible for one subject each year, and this assignment may change annually. Therefore, participants' number of years teaching writing skills might not correspond to their overall teaching experience. Significantly, as one of the participants was the Head of the Faculty, the data collected from the interview also include a broader opinion from a managerial perspective.
Data collection
This research employs a questionnaire and an interview as data collection instruments. The primary reason for this is the efficiency of the questionnaire in helping the researcher collect a large amount of necessary information from all participants within a short time, while the interview enables the researcher to discover and collect more detailed information.
The questionnaire was designed with great care. Firstly, the literature review was employed as the foundation for building the questions in the survey. Secondly, the author consulted the supervisor, an experienced specialist in the field of language testing and assessment, for further comments; based on these, refinements and adjustments were made to complete the questionnaire. Simultaneously, the researcher also asked for opinions from two other English teachers, both graduates of an English teacher education program and currently teaching English in Hanoi. Lastly, the final version of the questionnaire was translated into Vietnamese before being delivered to participants, to ensure that no misunderstanding interrupted their answering process.
At the beginning of the questionnaire, a brief overview of the research title and its purpose is presented. This part also includes a confirmation of data confidentiality as well as a commitment to use the information collected for research purposes only.
The questionnaire consists of two parts. Part 1 aims at investigating teachers' perception of writing assessment literacy with 11 questions in total: the first two questions target participants' understanding of writing abilities, while questions 3 to 11 collect information about their knowledge of writing assessment procedures and measurements and their skills and abilities to carry out writing assessment. Part 2 contains 10 questions gathering information about participants' current writing assessment practice. Questions 1 to 9 dig deeply into the writing assessment procedure, including purposes, construct definition, writing assessment methods, designing writing items and tasks, writing assessment administration, assessing the quality of the writing assessment procedure, interpreting results, and grading and feedback. The last question of this section investigates issues of writing assessment at tertiary level in the context of Vietnam. The detailed framework used to construct this questionnaire is presented in Table 7 below.
Based on the literature review, the author designed a framework for the interview questions. After that, the first draft of the interview questions was constructed and sent to the supervisor, an experienced specialist in the field of language testing and assessment, for comments and revision. After several refinements and adaptations, the final version of the interview questions was established. In addition, to prevent any misunderstanding from occurring during the course of the interview, all questions were then translated into Vietnamese.
At the beginning of each official interview, the researcher gave a brief introduction to the aims of the study; after that, the researcher and participant went through each question together. Due to the serious impact of the Covid-19 pandemic, it was impossible for the researcher and participants to meet in person to carry out the interviews; therefore, the three interviews took place via phone calls. Each interview lasted about 15 to 20 minutes and was audio-recorded, and these recordings were then transcribed for further analysis and discussion.
Table 7 Framework for the questionnaire on English teachers' writing assessment literacy and writing assessment practice: the table maps each questionnaire component (understanding of writing skills and the language knowledge relating to writing; knowledge of the writing assessment procedures and measurements; skills and abilities to carry out writing assessment, including designing writing items and tasks and assessing the quality of the writing assessment procedure; and issues of writing assessment at tertiary level in Vietnam) to its description, the corresponding part of the questionnaire, the items and the source of adaptation.
The interview includes 8 questions targeting the different factors affecting actual writing assessment practices. An additional question about the most influential factor was also included after a general opinion of each factor's impact had been gained. The detailed framework used to construct the interview questions is presented in the table below.
Table 8 Framework for the interview on factors affecting teachers' writing assessment practices: the factors and the interview items assigned to them are adapted from Nguyen (2021), Phan (2018) and Pennington (1998); for example, the achievement-oriented culture (item 2) is drawn from Phan (2018) and students' linguistic proficiency (item 8) from Pennington (1998).
The process of collecting data for this research is described in detail below.
Step 1: Preparing official documents and asking for consent
In accordance with the requirements of the researched faculty, the researcher prepared an introduction letter from the University of Languages and International Studies as official confirmation of this research. This letter was then submitted to the Head of the Faculty of English Teacher Education, where the survey was carried out, to ask for permission to collect data. Lastly, the teachers received an invitation to participate in the research and were asked for their consent.
Step 2: Delivering and collecting the questionnaires
At a meeting of all teachers of the faculty, the questionnaires were handed out directly to all of them. Before they began to answer, careful instructions and further explanation of any question were provided. After participants finished filling in the questionnaires, the researcher collected them all and thanked the teachers for their cooperation.
The process of carrying out the interviews for this research took place in the following sequence.
Step 1: Arranging the time and online tool for the interview
Before the official interviews were carried out, the researcher contacted each interviewee to agree upon a suitable and specific time. Besides, as the interviews were carried out online, the researcher needed to suggest several tools, including Zoom video calls or phone calls, for participants to choose from, and to discuss the benefits and difficulties of each option. A reminder of the time and a check of the devices used for the interview were sent to the participants to ensure their attendance.
Step 2: Conducting the interviews
At the previously arranged time, the interviews were carried out. The researcher restated the aim of the interview and asked the questions in sequence; some additional questions were also added to clarify ambiguous information.
The interviews were audio-recorded with the participants' consent for later analysis and discussion. After the transcriptions of these recordings were generated, they were sent to participants via email for confirmation, to ensure that their ideas and opinions were depicted as precisely as possible.
Data analysis
For the quantitative component of this research, descriptive statistics were applied to analyze the questionnaire data.
Firstly, the collected data were carefully examined: after collecting the completed questionnaires from participants, the researcher checked whether all questions had been answered, and their validity and clarity were also examined.
Secondly, the collected data were processed: the researcher used the software SPSS to analyze and synthesize the data from the surveys.
Finally, the researcher used both numerical forms and graphs (bar charts) to illustrate the data.
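For illustration only, the sketch below shows the kind of descriptive summary produced at this stage. The study itself used SPSS; this equivalent is written in Python/pandas with hypothetical 5-point Likert responses, so the item names and values are not from the actual data.

```python
# A minimal sketch of a descriptive summary for Likert-scale questionnaire
# items: means, standard deviations and frequency counts per item.
import pandas as pd

responses = pd.DataFrame({
    "item_1": [4, 5, 3, 4, 5],   # hypothetical ratings for questionnaire item 1
    "item_2": [2, 3, 3, 4, 2],   # hypothetical ratings for questionnaire item 2
    "item_3": [5, 5, 4, 4, 5],   # hypothetical ratings for questionnaire item 3
})

# Mean and standard deviation per item
summary = responses.agg(["mean", "std"]).round(2)
print(summary)

# Frequency of each rating per item (useful for bar charts)
for item in responses.columns:
    print(item, responses[item].value_counts().sort_index().to_dict())
```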
The thematic analysis proposed by Braun and Clarke (2006) was applied to analyze the information collected from the interviews. This process includes six phases: data familiarization, initial code generation, searching for themes, reviewing themes, defining and naming themes, and writing the report.
Phase 1: Familiarizing myself with data
At the beginning of this phase, all the recordings were transcribed into written form. The researcher tried to do this as accurately as possible to avoid any alteration of the messages or ideas that participants wanted to convey.
After that, the researcher read all the transcriptions several times. This repeated reading provided the researcher with a chance to become familiar with the collected data, leading to an immersion in it.
While reading the transcriptions, the researcher also paid attention to repeated patterns appearing in the data set, and all the outstanding features were highlighted and noted down. It was at this phase that a rough comparison of the data provided by participants was made to search for any potential themes that might emerge. Microsoft Word and its review function were used to support this phase; an example is illustrated in the figure below.
Figure 10 Example of initial notes
Phase 2: Generating initial codes
Based on these notes and the theoretical framework presented in section 3.3.1.2, initial themes and codes were produced. The figure below illustrates an example of this.
Figure 11 Example of initial codes
Phase 3: Searching for themes
After generating the initial codes, the researcher focused on finding out "the relationship between codes, between themes and between different levels of themes" (Braun & Clarke, 2006, p.89). During this stage, the researcher drew mind-maps for a better understanding of the relationships among the different data sets. As a result, an initial thematic map was generated, showing two main themes: Significant impact and Small impact.
Phase 4: Reviewing themes
At this phase, the researcher first compared the two themes generated from Phase 3 with the themes in the original framework in Section 3.3.2.1. According to the theoretical framework, all factors are divided into two major themes, External factors and Classroom realities; however, based on the data collected, the extent of influence of the factors in these two groups was inconsistent. Therefore, the researcher decided to follow the division emerging from Phase 3, namely Significant impact and Small impact.
Additionally, all the initial codes generated in the previous phases were revised to search again for coherent patterns. For example, when considering the reasons mentioned by participants, the researcher realized they could be arranged under two core purposes: ensuring assessment quality and motivating students.
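Purely as an illustration of the structure that emerged from Phases 3 and 4, the sketch below groups hypothetical initial codes under the two themes and tallies how many of the three interviewees mentioned each code; the code labels and counts are invented, not the study's actual coding.

```python
# A minimal sketch (hypothetical codes, not the study's actual coding) of how
# initial codes can be grouped under the two emerging themes and tallied
# across the three interview transcripts.
from collections import Counter

# code assigned -> list of interviewees who mentioned it
coded_segments = {
    "large class size": ["T1", "T2", "T3"],
    "syllabus time-frame": ["T1", "T3"],
    "students' motivation": ["T2"],
    "parents' expectations": ["T3"],
}

themes = {   # grouping decided during Phases 3-4
    "Significant impact": ["large class size", "syllabus time-frame"],
    "Small impact": ["students' motivation", "parents' expectations"],
}

for theme, codes in themes.items():
    counts = Counter({code: len(coded_segments[code]) for code in codes})
    print(theme, dict(counts))
```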
Phase 5: Defining and naming themes
At this stage, all the themes continued to be refined, defined and named. The result of this phase is a thematic map visualizing the relationships between themes for later data analysis.
Phase 6: Writing the report
Based on the findings from the five previous phases, a report was produced to answer the third question of this research.
In summary, this chapter has presented the research methodology, starting with a description of the setting of the study and the participants in the questionnaire and interview sections. The two frameworks used to build the questionnaire and the semi-structured interview were also provided, along with a detailed description of the data collection procedure and data analysis.