The Routledge Encyclopedia of Research Methods in Applied Linguistics
The Routledge Encyclopedia of Research Methods in Applied Linguistics provides accessible
and concise explanations of key concepts and terms related to research methods in applied linguistics. Encompassing the three research paradigms of quantitative, qualitative, and mixed methods, this volume is an essential reference for any student or researcher working in this area.
This volume provides:
• A–Z coverage of 570 key methodological terms
• detailed analysis of each entry that includes an explanation of the head word, a visual illustration, cross-references, and further references for readers
• an index of core concepts for quick reference
Comprehensively covering research method terminology used across all strands of applied linguistics, this encyclopedia is a must-have reference for the applied linguistics community.
A. Mehdi Riazi is associate professor in the Department of Linguistics, Macquarie University. His areas of interest include research methodology, second-language writing, language learning strategies, and test validation.
Editorial Advisory Board:
Keith Richards, Warwick University, UK
Steven Ross, University of Maryland, US
“This book is an invaluable resource for applied linguistics researchers, beginning and experienced alike. The book, further, is an extraordinary achievement for a single author, revealing Riazi’s strong command and understanding of issues relating to applied linguistics research. I will most certainly be recommending it to my students.”
Brian Paltridge, University of Sydney, Australia
“The Routledge Encyclopedia of Research Methods in Applied Linguistics is an important resource for researchers in this field of study. It provides an up-to-date and comprehensive reference guide to core constructs and covers key concepts in quantitative, qualitative, and mixed-method approaches. This volume will serve as a useful tool for both novice and experienced researchers in applied linguistics and related subject areas.”
Jane Jackson, The Chinese University of Hong Kong
First published 2016
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2016 A. Mehdi Riazi
The right of A. Mehdi Riazi to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form
or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or registered trademarks,
and are used only for identification and explanation without intent to infringe.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
Riazi, A. Mehdi, author.
The Routledge Encyclopedia of research methods in applied linguistics : quantitative, qualitative, and mixed-methods research / A. Mehdi Riazi.
pages cm
Includes bibliographical references and index.
1. Language and languages––Study and teaching––Research––Methodology––Encyclopedias. 2. Second language acquisition––Research––Methodology––Encyclopedias. 3. Applied linguistics––Research––Methodology––Encyclopedias. I. Title.
To my wife, Mahvash, who has been a sincere companion for me all through my life!
To my sons, Ali, Amin, and Hamed, and to all my teachers and students, past, present, and future
L.4 Key questions when planning literature review
M.1 Meta-inference in mixed-methods research
M.2 Methodological stage in the research process
M.3 Sequential mixed-methods design with development purpose
M.4 Monostrand conversion mixed design
M.5 Multilevel mixed data analysis
N.1 An example of a negatively skewed distribution
N.2 Normal distribution
O.1 The process of operationalisation
P.1 Parallel mixed-methods designs
P.2 Partial correlation
P.3 An example of a path analysis diagram
P.4 An example of a pie chart
P.5 Population in quantitative research
P.6 An example of a positively skewed distribution
P.7 Graphical representation of a posttest-only design
P.8 Graphical representation of a pre(non)-experimental design
Q.1 Stages in qualitative data analysis
R.1 A graphical representation of a repeated measurement
S.1 An example of a scatterplot
S.2 Schematic diagram of the scientific method
S.3 An example of a semantic differential scale
S.4 Another example of a semantic differential scale
S.5 An example of pattern of responses in a semantic differential scale
T.1 Thematic analysis
W.1 The role of warrant in an argument
Tables
C.1 Contingency table
C.2 Correlation matrix
C.3 Cross-tabulation
M.1 Matrix of data collection in mixed-methods research
M.2 Correlation matrix for the multitrait-multimethod approach
P.1 Response pattern of students’ performance on an imaginary test
P.2 Contingency table for response patterns on an imaginary test
S.1 A three-group Solomon experimental design
S.2 An example of ratings in Spearman correlation
S.3 An example of stratified sampling
T.1 Type I and Type II errors in hypothesis testing
Y.1 An example of a contingency table for gender and personality type
My systematic familiarity with research methodology goes back to the 1990s, when I was doing my Ph.D. at the University of Toronto in Canada and when the paradigm war debates were a hot topic of discussion among researchers and scholars. My enrolment in both quantitative and qualitative research methods courses in my doctoral program gave me the opportunity to become familiar with different research paradigms and methodological debates. Moreover, I was working as a research assistant for a second-language writing research project in which both quantitative and qualitative data were collected and analysed. In addition to collaborating with the project team in data collection, I was responsible for performing the quantitative data analysis of the project and preparing preliminary reports. At the same time, despite my familiarity and experience with quantitative data analysis, I used a qualitative methodology for my own doctoral dissertation. Both the theoretical and practical experiences I gained during my doctoral studies gave me an impetus to pursue research methodology as one of my areas of interest and to teach research methods courses ever since. I have now been teaching research methodology for over 20 years and have always been fascinated by the way researchers in applied linguistics design their studies in unique ways to answer specific research questions. Although the basic principles and fundamental underlying assumptions of quantitative and qualitative research methodologies have remained the same since my first encounter with research methodology, the scope
of each methodology has expanded enormously, providing applied linguist researchers with a wide variety of methods with which to design their research projects. Moreover, with the formal advent of mixed-methods research in 2007, marked by the inception of the Journal of Mixed Methods Research and systematic publications, applied linguist researchers are now able to address more complex problems using more sophisticated research designs that draw on and integrate both quantitative and qualitative methods. There are now numerous valuable titles on research methodology in applied linguistics in general, as well as specific book titles on quantitative and qualitative methods. However, there seems to be no encyclopedia-type reference that can provide applied linguist researchers with synoptic and concise overviews of the many research concepts and methods developed and used over the past decades.
The current project was therefore initiated to fill this gap by collecting and explaining the key research concepts and methods pertaining to quantitative, qualitative, and mixed-methods research methodology. To compile a comprehensive list of the key research methods and concepts, I first consulted the index sections of almost all the existing titles on research methods in applied linguistics, and even in neighbouring disciplines such as education, psychology, and the social sciences, to create an initial list of key research methods and concepts. The initial list was then sent to the advisory and editorial board for their review and suggestions. They contributed to the list by suggesting some further key concepts which had slipped past my radar.
We therefore ended up with a total of 620 key research methods and concepts related to the three research methodologies, namely, quantitative, qualitative, and mixed methods. I have done my best to base my descriptions and explanations of these key concepts on reliable and valid sources of knowledge on research methodology. Moreover, the advisory board’s comments and feedback on each individual entry have added quality assurance to the content of the book.
In presenting the final product, I had two options: either dividing the encyclopedia into three different sections – providing the key concepts related to each research methodology in a separate section – or putting all the entries in alphabetical order and presenting them as a whole. I have chosen the second approach for two reasons. The first reason was that I found some key concepts to be general and applicable to all research paradigms rather than specific to a particular research methodology, which made it difficult to decide which section such concepts should go in. Second, it occurred to me that when the whole encyclopedia is arranged in alphabetical order, users can find any term much more quickly than by checking one section and, if the term is not found there, looking for it in another. I therefore decided to present the key concepts and methods in alphabetical order, as it now is. To further help readers and users, and although the whole encyclopedia is arranged in alphabetical order, an index is also provided at the end of the book for ease of finding certain key words and their relevant pages.
I hope the encyclopedia is useful to applied linguistics postgraduate students, early career researchers, and even seasoned researchers. Although the title of the book and the examples used throughout are drawn from my experience as an applied linguist, I think the whole work should be useful to researchers in other disciplines too, given the commonalities and the similarity of research methods across different strands of the social sciences. There might be some key concepts that users do not find in this volume, basically because they either were not listed in the indices of the research methods books I consulted or I simply overlooked them. I would therefore invite readers, and would appreciate it very much, if they could send the terms they find missing to me so that I can include them in subsequent editions of the encyclopedia, along with any other suggestions for improvement.
Mehdi Riazi
Sydney, Australia
June 2015
I am grateful to a number of people, without whose support and encouragement I could not have produced this book. I am thankful to Nadia Seemungal, commissioning editor of the English language and linguistics section at Routledge, who welcomed and supported my proposal for this encyclopedia. I am also grateful for the useful comments provided by the three reviewers for Routledge on the initial proposal for this encyclopedia.
My special thanks are due to Keith Richards and Steven Ross, who consented to be on the advisory and editorial board of this project. Christopher Candlin had agreed to be on the panel too, but unfortunately his illness deprived me of the benefit of his scholarly comments. Both Keith and Steven have been a pleasure to work with. They both read each entry very carefully and provided me with their insightful and useful comments. Keith shared his support and expertise on the qualitative and mixed-methods parts, and Steven provided his knowledgeable and efficient comments on the quantitative part. The work has therefore benefited from excellent comments from the advisory board, though any remaining problems and mistakes are, of course, mine.
My thanks also go to my son, Hamed, who helped me with the production of the normal and skewed distributions. It was not easy to produce these graphs, and his help was essential.
I would like to express my thanks to the Routledge team who provided all the necessary support throughout the project, including Helen Tredget, Nadia’s editorial assistant, whose timely reminders kept me on track so that I did not forget the due date for submitting the work. The copy editor, Lisa McCoy, meticulously read the manuscript and provided me with useful suggestions. I am thankful to all.
Finally, I am thankful to Sheri Sipka, the project manager of the book
ANCOVA Analysis of covariance
ANOVA Analysis of variance
BAWE British Academic Written English
BNC British National Corpus
CA Conversation analysis
CAQDAS Computer-assisted qualitative data analysis software
CIT Critical incident technique
COCAE Corpus of Contemporary American English
DCT Discourse completion task (test)
df Degrees of freedom
ERIC Educational Resources Information Centre
H0 Null hypothesis
H1 Alternative (research) hypothesis
LLBA Linguistics and Language Behaviour Abstracts
MANOVA Multivariate analysis of variance
MICASE Michigan Corpus of Academic Spoken English
MLA Modern Language Association
MMR Mixed-methods research
QUAL Qualitative
QUAN Quantitative
rbi Biserial correlation
rpbi Point-biserial correlation
SPSS Statistical Package for Social Sciences
SSCI Social Sciences Citation Index
A-B-A designs
A-B-A designs refer to those research designs in which a single case is measured repeatedly at three points in time. The first and the third measurement points are called the baseline, and the second measurement point is referred to as the treatment point because the case receives some treatment at this point. The case is measured repeatedly at baseline point (A) on an outcome measure, receives a treatment at time point (B) while being measured repeatedly on the same outcome measure, and is again measured repeatedly when the treatment is lifted at a second baseline point (A) on the same outcome measure. A-B-A designs are a combination of time-series and experimental designs. Figure A.1 presents an A-B-A design.
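A minimal, hypothetical sketch of what A-B-A data might look like is given below (the Python code, scores, and phase labels are invented for illustration and are not part of the original entry); it simply arranges one case's repeated measurements by phase and compares phase means, the reversion of the second baseline towards the first being what the design looks for.

```python
# Hypothetical A-B-A data: one learner's weekly scores on an outcome measure
# across a first baseline (A1), a treatment phase (B), and a second baseline (A2).
from statistics import mean

phases = {
    "A1 (baseline)":        [52, 54, 53, 55, 54],  # repeated measurements, no treatment
    "B (treatment)":        [58, 61, 64, 66, 67],  # repeated measurements during treatment
    "A2 (second baseline)": [57, 55, 54, 53, 53],  # treatment withdrawn
}

for phase, scores in phases.items():
    print(f"{phase}: scores={scores}, mean={mean(scores):.1f}")

# If the A2 scores revert towards the A1 level while the B scores are clearly
# higher, the change during B is more plausibly attributable to the treatment.
```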
A-B-A designs are similar to time-series designs because the case is measured repeatedly over time, so that the different measurements can be compared for any short-term and long-term changes in the outcome measure due to the effect of the treatment provided to the case. These designs are also similar to experimental designs because the case receives some treatment and the outcome measure is compared before and after treatment and against the baseline measurements. The treatment period is usually continued for the same length of time as the original baseline or until some stable changes occur in the outcome measure. After the treatment period, the case is measured again in the same way as in the first baseline (A), while the treatment is withdrawn. The assumption is that the outcome measures of the second baseline must revert to the original baseline measures to rule out rival explanations. Another version of the A-B-A design is A-B-A-B, in which a second phase of introducing the treatment is incorporated. There are some problems with A-B-A designs. The first problem is that the process ends with the baseline condition. Pedagogically speaking, educators expect that the effect of a treatment will continue over time rather than stop when the treatment is withdrawn. This problem can indeed be remedied by an A-B-A-B design in which a second round of treatment is added. The second problem with A-B-A designs is that if the effect of the treatment continues over the second baseline measurements, it is hard to infer whether the continued effect is due to the treatment or to other intervening or extraneous variables. Because of the problems with A-B-A designs and unless there is a particular reason for using this type of design,
Abduction
In other words, abduction allows for relating case observations to theories or vice versa, which can result in more plausible interpretations and explanations of the phenomenon. Broadly speaking, in quantitative research, researchers usually aim at testing hypotheses pertaining to specific theories using a deductive logic or deductive approach, whereas in qualitative research the purpose is usually to generate hypotheses or theories using an inductive logic or inductive approach. In mixed-methods research, because the purpose is to integrate both quantitative and qualitative approaches, researchers can move back and forth between theory and data using an abductive logic or abductive approach in favour of drawing more rigorous and comprehensive inferences about the issue being studied. A basic example of this back-and-forth movement between inductive and deductive approaches in mixed-methods research would be the case of a sequential mixed-methods design. In such a study, the researcher first generates some hypothetical explanations about a phenomenon from interviews with a small cohort of, for example, language teachers through an inductive and contextually based qualitative study. In the process of analysing the qualitative interview data, the researcher may look back at relevant theories when generating explanations or hypotheses from the interviews, thus establishing a back-and-forth movement between the data and theory. The results of the
qualitative study, achieved through an inductive data-driven approach but in light of relevant theories, can then be used in a deductive, theory-driven, larger-scale study. The hypothetical or theoretical constructs developed from the interviews in the first phase are used as the underlying constructs for creating a survey questionnaire that can be administered to a larger sample of participants. The survey research thus aims at investigating the credibility of the inductively developed, data-driven hypothetical constructs with a larger sample of participants. The results of the quantitative phase of the study can then be used for generalisability purposes. Depending on the design of the mixed-methods research, different configurations of qualitative (inductive) and quantitative (deductive) combinations are possible. The abductive approach thus provides a context for researchers to make inferences that best explain the phenomenon by integrating exploratory and explanatory insights in a single project. It provides MMR researchers with the possibility of expanding the scope of their inquiry by systematically and interactively using both inductive and deductive approaches in a single study.
Further reading → Danermark et al (2002), Haig (2005), Josephson & Josephson (1996), Locke (2007), Morgan (2008), Rozeboom (1999), Special issue of Semiotica (2005), Teddlie & Tashakkori (2009)
See also → credibility, data, deductive approach, hypotheses, inference, inductive approach, interviews, mixed-methods research (MMR), participants, qualitative research, quantitative research, questionnaires, research design, sample, sequential mixed-methods designs, survey research, theoretical framework
Absolutism
Questions such as whether there is universal truth, whether truth and knowledge are relative, and what constitutes knowledge have engaged the human mind from the early stages of history and have resulted in absolutist versus relativist perspectives about truth and knowledge. Absolutism is used as a contrast to relativism, each representing a school of thought and denoting a different worldview or doctrine. From the perspective of absolutism, the physical and the social world are formed by many natural laws, which yield unchanging truths. A relativist perspective, on the other hand, posits that there is not much difference between knowledge and belief and that, as such, knowledge is a kind of belief constrained by individual and socio-cultural norms and thus changing across time and place. The debate between absolutism and relativism has implications for researchers and research methodologies and has nurtured the quantitative and qualitative debate. An absolutist researcher in both the natural and social sciences may thus attempt to discover the natural laws and seek truth-like propositions, which are time and context free and generalisable as universal laws. Logical positivism is a school of thought that can be matched with an absolutist worldview. Relativist researchers, on the other hand, recognise multiple realities and thus multiple truths and seek to understand how the same phenomenon may be represented and construed differently by different individuals or social groups. These two perspectives have for a long time led to the quantitative and qualitative paradigm wars and the incompatibility thesis. A pragmatic approach toward research recognises both approaches and seeks warranted claims. Warranted claims are conclusions drawn or inferences made from quantitative or qualitative data and analysis, and an integration of them, for a better understanding of the research problem. As such, the role of experts and of intersubjectivity (consensus among thoughtful members of a discourse community) becomes very important.
Further reading → Berger & Luckmann (1966), Dewey (1929), Foucault (1977), Kuhn (1962), Teddlie & Tashakkori (2009)
See also → incompatibility thesis, intersubjectivity, (post)positivism
Abstract section of reports
Each research report usually comes with a summary of the entire project to help readers get an overview of the study being reported on. The abstract is, however, quintessential and a requirement for journal articles. Even though different journals may have different standards for how the abstract should be prepared, there is usually a consensus that the abstracts of papers submitted for publication should have four moves, or include four parts. The first move usually states the research problem and the purpose of the study; the second move provides some brief explanation of the methods of the study, including the participants and the data collection and analysis procedures. The third move highlights the main findings of the study, and the final, fourth move presents brief theoretical and/or pedagogical implications of the study’s findings. Because the abstract provides considerable information in a short space, it is crucial to write it clearly and succinctly. The editors’ and reviewers’ first encounter with and judgment of the paper will be based on the quality of the abstract. If the paper gets published, the abstract is indexed by indexing centres and will reach a wider audience searching for any of the keywords related to the topic of the paper. The length of the abstract varies depending on the purpose for which it is prepared, but journals usually urge writers to prepare an abstract within a range of 120 to 250 words. For a thesis, an abstract may have up to 300 words, since there is more room for each section and chapter. There is usually a summary or an extended abstract of the study in the final chapter of the thesis, too, wherein the researcher/author has a chance to provide more details and write a more complete summary of the study. Apart from theses and journal articles, in which authors must write an abstract, conferences also seek abstracts from potential presenters. The role of an abstract in conferences may be even more important because it is the only document on which reviewers decide whether the paper should be accepted for presentation at the conference. Like journals, some conferences provide useful instructions on how to prepare abstracts, as well as the criteria for judging the quality of the submitted abstracts. It is therefore a good idea to read carefully the instructions for preparing conference abstracts.
Further reading → APA (2010), Brown (1988), Brown & Rodgers (2002), Mackey & Gass (2005), Porte (2002)
See also → conclusions section of reports, discussion section of reports, procedures section of research reports, results section of research reports
Action research
What distinguishes action research is the intervention in the functioning of an aspect of the real world (e.g., a classroom) to bring about a positive change, hence the term “action research”. The main purpose of action research is thus taking action to solve a local problem or to improve practice, and not to generalise research findings. Action research is usually conducted by following certain steps in a cyclical or spiral way. The steps include problem identification, planning, acting, observing, and reflecting, as depicted in Figure A.2. The process can lead to new cycles of action research depending on the reflections the researcher makes. As such, action research is also referred to as “reflection-in-action”, especially when it comes to making changes in one’s teaching approach.
The observation step includes collecting data through different techniques such as, but not limited to, interviews, documents, diaries, field notes, and recordings of participants acting on the intervention. Results of the data analysis lead to a better understanding of the problem, which might be used as a basis for further reflection and new cycles of action research. There are different types of action research, namely, collaborative action research, critical action research, classroom action research, and participatory action research. Action research in schools is also called teacher inquiry or practitioner research. The use of action research in teaching and learning creates a culture of reflection and change by encouraging practitioners to question and reflect on their own approach to teaching and to make necessary changes based on research evidence. Some of the benefits of action research for teachers include professionalisation through the professional development involved in doing action research, developing teachers’ scholarship of teaching and learning, promoting reflection and the use of research-based evidence for making changes, encouraging research collaboration, and empowering teachers as researchers. One of the key benefits of action research is its capacity to bridge the gap between theory and practice in different fields by using research methods to inform practice and bring about positive changes. Like any other research, action research has its challenges too.
Further reading → Ahmadian & Tavakoli (2011), Alber (2011), Ary et al (2014), Burns (2010), Heigham & Croker (2009), Howell (2013), Mertler (2009), Paltridge & Phakiti (2010), Pelton (2010), Richards (2003)
Figure A.2 Action research
See also → action research, classroom-based research, collaborative action research, field notes, participants, transformative-emancipatory design
Alternative hypothesis (H1)
The alternative or research hypothesis (H1) is the researcher’s informed assumption about possible relationships between variables or differences among groups, based on the researcher’s observations or theoretical background. The alternative or research hypothesis thus states the relationship between certain variables that the researcher expects to find in a population of concern (high school students, for example) when testing a hypothesis of relationship. It may also be used to state that there is a significant difference between group means when testing a hypothesis of difference. For example, a research hypothesis about the relationship between use of reading strategies and reading performance could be stated: “There is a relationship between students’ use of reading strategies and their reading performance” or “There is a significant difference in students’ reading performance when exposed to strategy-based instruction compared with traditional reading instruction.” The alternative or research hypothesis is paired with the null hypothesis (H0), which states the negation of the alternative hypothesis. Providing evidence for the research or alternative hypothesis is not usually easy, whereas providing evidence that a null hypothesis cannot be true is more feasible. This is usually done through hypothesis testing, which provides statistical evidence as to whether the null hypothesis could be true or not. The assumption is that if we are able to find evidence to reject the null hypothesis, we can conclude that the alternative hypothesis, which states that there is a relationship between variables or a difference between groups, is a possibility.
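As a hedged illustration of this logic (the Python code and the scores below are invented and are not part of the original entry), the reading-instruction example can be expressed as a simple two-group significance test in which a small p value counts as evidence against the null hypothesis.

```python
# Invented scores for the difference hypothesis: strategy-based vs traditional
# reading instruction. H0: no difference between group means; H1: a difference exists.
from scipy import stats

strategy_group    = [72, 75, 78, 80, 74, 77, 79, 76]
traditional_group = [68, 70, 73, 69, 71, 72, 70, 74]

t_stat, p_value = stats.ttest_ind(strategy_group, traditional_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p value (e.g., below .05) is evidence against H0, which makes the
# alternative hypothesis (a real difference between the groups) tenable.
```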
Further reading → Brown & Rodgers (2002), Kumar (2011), Richards, Ross, & Seedhouse (2012)
See also → hypotheses, hypothesis testing, null hypothesis (H0), population, variables
Analysis of covariance (ANCOVA)
Analysis of covariance is one of the solutions for controlling pre-existing differences between two comparison groups in experimental designs. It is a method of control used to equate comparison groups that may differ on a variable, such as language proficiency, at the pretest stage. The initial difference between groups might affect the dependent variable and must therefore be accounted for. In other words, ANCOVA is a statistical procedure to control for a known extraneous variable that may correlate with the dependent variable. Analysis of covariance is used when random assignment of the participants is not possible, as when the researcher is involved in a quasi-experimental design in which accidental or convenience sampling is used. In such cases, results of the experiment (any significant difference in the posttest of the two groups) cannot be confidently attributed to the treatment or the independent variable introduced in the experiment. Analysis of covariance can help adjust the groups for pretreatment differences when posttests are run, to statistically equate the participants in the comparison groups. If a study is to compare the reading comprehension of students in two intact classes exposed to two different reading instruction methods, then any observed significant difference between the two groups’ posttest results might be due to initial differences between the two groups’ levels of language proficiency and not necessarily to the effect of the teaching method. In such cases, analysis of covariance could be used to adjust posttest results for the differences in the groups’
initial levels of language proficiency, creating two groups that are equated on the covariate (language proficiency) variable. After groups are assigned to experimental and control conditions, a pretest must be administered to both groups. At the end of the experiment, ANCOVA statistically adjusts the mean reading posttest scores for any initial difference between the two groups in terms of their language proficiency. The variable (language proficiency) used to adjust the reading performance scores (the dependent variable) is called the covariate. The covariate can be any variable known to correlate with the dependent variable. In the previous example, the covariate could be students’ initial reading ability, and so instead of administering a general language proficiency test as the pretest, the researcher may administer a reading comprehension test to both groups. ANCOVA can be run using statistical software packages like SPSS.
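As a tentative illustration (the data, group labels, and the use of Python’s statsmodels rather than SPSS are all assumptions added here, not part of the original entry), ANCOVA can be run as a linear model in which the posttest is regressed on the grouping variable plus the pretest covariate.

```python
# Invented data: posttest reading scores compared across two intact groups,
# adjusting for a pretest covariate (initial reading ability).
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "group":    ["method_A"] * 5 + ["method_B"] * 5,
    "pretest":  [48, 52, 55, 60, 63, 47, 53, 56, 61, 64],   # covariate
    "posttest": [66, 70, 74, 79, 83, 60, 65, 69, 72, 76],   # dependent variable
})

# C(group) codes the categorical independent variable; pretest is the covariate.
model = smf.ols("posttest ~ C(group) + pretest", data=data).fit()
print(model.summary())   # the group effect is now adjusted for pretest differences
```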
Further reading → Ary et al (2014), Hatch & Lazaraton (1991), Mackey & Gass (2005), Nunan (1992), Paltridge & Phakiti (2010), Salkind (2004), Trochim & Donnelly (2008)
See also → control group, convenience sampling, dependent variable, experimental designs, experimental group, extraneous variables, independent variables, participants, quasi-experimental research design, SPSS
Analysis of variance (ANOVA)
Whereas correlation is used to check relationships between two or more variables, t-tests and analysis of variance are used to test differences between two or more means from the same or different groups. ANOVA, therefore, is used to test hypotheses about group mean differences on one factor or dependent variable. The observed differences are called main effects. ANOVA has an advantage over the t-test since it can compare two or more groups, whereas t-tests are limited to comparing the mean difference of only two groups. ANOVA tests the hypothesis that the observed mean difference among the groups is not merely due to chance or sampling error. This is because samples selected from a population are likely to differ due to sampling error, and so ANOVA tests whether the observed differences or main effects among the groups are systematic and the effect of the treatment, or independent variable, used in the study, or are merely due to sampling error. The independent variables in analysis of variance are categorical (they refer to and identify certain groups), and the dependent variable is interval or continuous. For example, if a researcher investigates students’ writing improvement under three feedback conditions of no feedback, marked feedback, and comments feedback, the independent variable is feedback type, with three categories, and the dependent variable is students’ scores on their final essays. Depending on the number of independent variables (one, two, or three), one-way, two-way, or three-way analysis of variance will be used. The difference between one-way and two-way analysis of variance is that in two-way ANOVA not only are the main effects (the effect of each treatment or independent variable) provided, but any interaction between the independent variables is also shown, something that is not possible with multiple one-way ANOVAs. Analysis of variance techniques usually use an F test to assess whether group differences are systematic or random. Analysis of variance is a parametric test and requires certain assumptions to be met. Researchers should therefore check that the assumptions for ANOVA are fulfilled by their data before they run the test. ANOVA can be run using statistical software packages like SPSS.
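A minimal sketch of a one-way ANOVA for the feedback example is given below (the essay scores are invented and the use of Python’s scipy is an assumption added here; SPSS or any comparable package runs the same test).

```python
# Invented essay scores under three feedback conditions: the categorical
# independent variable is feedback type; the dependent variable is the essay score.
from scipy import stats

no_feedback      = [61, 64, 59, 63, 60, 62]
marked_feedback  = [68, 71, 66, 70, 69, 72]
comment_feedback = [74, 77, 73, 78, 75, 76]

f_stat, p_value = stats.f_oneway(no_feedback, marked_feedback, comment_feedback)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# A significant F suggests that at least one group mean differs; post hoc
# comparisons would then be needed to locate which groups differ.
```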
Further reading → Brown (1988), Hatch & Lazaraton (1991), Larson-Hall (2010), Mackey & Gass (2005), Nunan (1992), Paltridge & Phakiti (2010), Salkind (2004), Trochim & Donnelly (2008)
See also → dependent variable, hypotheses, independent variables, parametric tests, population, sampling error, SPSS, t-test, variables
Analytic adequacy
Objective or analytic adequacy refers to the appropriateness and adequacy of the data analysis techniques for answering research questions. As such, it is a feature on which the validity of the findings depends and thus a principle that must be observed and attended to by researchers in any research study, regardless of what research approach is followed. Another level of analytic adequacy refers to the interpretation of the findings and the logic behind those interpretations, that is, whether the interpretations of the findings are clearly stated and whether there is a logical link between the findings and the interpretations. For example, the three stages of coding in grounded theory, namely, open coding, axial coding, and selective coding, allow the researcher to produce categories and themes and make logical inferences from the data. The grounded theory researcher, therefore, needs to show how the succession of codings, the refinements of the data in favour of more abstract themes and categories, and finally the generated hypotheses or theories are analytically adequate. In other words, readers of grounded theory reports should be able to follow the researcher’s logic in making interpretations and inferences and the succession of codings and analysis from one level to the next. In grounded theory, as well as in other qualitative and quantitative methods, the researcher should be able to show that alternative interpretations are less plausible in the light of the data. In mixed-methods research (MMR), in particular, analytic adequacy is especially critical. This is because researchers must not only show the adequacy of each of the data analysis and interpretation procedures used in each of the two strands (the quantitative and the qualitative), but must also provide evidence for the links they make between the two analytical procedures when making more comprehensive inferences, or meta-inferences, which requires the researcher to mix the two types of interpretations. This requires of MMR researchers that the mixed analytic strategies be implemented effectively. Considering that analytic adequacy is a major component of the validity of interpretations in any study, it is critical for researchers to ensure they have accounted for it and have provided enough evidence in their studies.
Further reading → Bergman (2008), Teddlie & Tashakkori (2009)
See also → axial coding, grounded theory, hypotheses, meta-inference, mixed-methods research (MMR), open coding, research questions, selective coding, theory, validity
Analytic induction
Induction, as contrasted with deduction, refers to drawing conclusions from the analysis of particular instances or cases. In qualitative research, induction refers to generating hypotheses or theories from the analysis of the data. Analytic induction, however, refers to a systematic procedure in which the researcher first generates hypotheses or theories and then attempts to falsify those hypotheses or theories by searching for negative or deviant cases in the data. This procedure allows researchers to refine the generated hypotheses, models, or frameworks and draw finer conclusions. The core of the analytic induction procedure is thus the analysis of the deviant or negative cases, which allows researchers to test the generated models against the evidence obtained from the negative cases. The procedures for analytic induction include collecting appropriate data, analysing the data inductively, generating hypotheses or theoretical explanations about the phenomenon, searching the data for deviant cases, testing the theoretical explanation against the deviant cases and either accounting for the cases found or refining the theoretical explanation in light of them, and finalising the theoretical explanation of the phenomenon. In other words, part of the data (the positive cases) is used to develop a theoretical explanation in the form of concepts, hypotheses, models, or frameworks, whereas the other part of the data (the negative or deviant cases) is used to check and refine the theoretical explanation, including the relationships between the developed concepts. In mixed-methods research (MMR), the concept of analytic induction can be extended to include the development of theoretical explanations about the phenomenon from the qualitative part and the testing of the generated explanations using the quantitative data and analysis from the quantitative phase of the study. The cross-checking of the emergent patterns or hypotheses from the qualitative part of the study with the quantitative data and analysis may result in corroboration, complementarity, or initiation explanations in mixed-methods research.
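Although analytic induction is a qualitative procedure rather than a computational one, its cycle can be sketched schematically; in the illustration below the cases, variables, and working hypothesis are all invented and are not drawn from the original entry.

```python
# Schematic sketch of the analytic induction cycle: test a working explanation
# against every case, collect deviant cases, and refine the explanation.
cases = [
    {"id": 1, "uses_L1": True,  "high_anxiety": True},
    {"id": 2, "uses_L1": True,  "high_anxiety": True},
    {"id": 3, "uses_L1": False, "high_anxiety": True},   # deviant for the hypothesis below
]

def hypothesis(case):
    # Working hypothesis: learners who report high anxiety fall back on their L1.
    return (not case["high_anxiety"]) or case["uses_L1"]

def deviant_cases(cases, explanation):
    """Return the cases the current explanation fails to account for."""
    return [c for c in cases if not explanation(c)]

print("Deviant cases:", deviant_cases(cases, hypothesis))
# The deviant case prompts a refined hypothesis, which is then re-tested against
# the full set of cases until no deviant cases remain.
```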
Further reading → Denzin (2006), Hammersley (1989), Katz (2001), Teddlie & Tashakkori (2009)
See also → deviant case analysis, hypotheses, mixed-methods research (MMR), negative case analysis, qualitative research, theory
Analytic memos
See memo writing
Anonymity
Anonymity is a methodological principle and a code of practice enforced in the ethics guidelines of different research institutions. Essentially, anonymity in research requires researchers to anonymise the settings and participants in their research documents, including publications. Concerning participants, it deals with respecting the privacy of the research participants. There are two ways a researcher may attempt to ensure participants’ privacy: anonymity or confidentiality. Usually, anonymity is practiced by making the participants and the places unknown to those who might have access to the collected information or research outputs. For example, in survey research, participants may be advised not to write their names or provide any other information that might identify them. Pseudonyms, numbers, or other intricate coding systems are often used to mask participants’ identity, though the use of pseudonyms is more common in qualitative research. There might be circumstances where participants would like to be represented by their real identities; in that case, researchers may face a dilemma: on the one hand they must abide by the ethical guidelines, and on the other they must consider their participants’ autonomy by giving them a choice about the ways in which their data are used. Researchers can manage such quandaries by gaining consent from their participants and presenting documents to ethics committees. There are, indeed, counter-arguments against the reliability of anonymisation in qualitative inquiry, as sometimes the characteristics of qualitative inquiry make it easy to discover where the research was conducted and who the participants were. For example, the research site or participants of a study could be identified when the administration of an organisation or a school is approached to permit the conduct of the research project in those institutions. Although anonymity in research is maintained as an ethical norm, its unreliability poses a more serious ethical issue. Another problem discussed in the literature is that by removing the
names, the researcher risks removing those aspects of the research that make it believable, vital, and unique. In other words, with anonymisation, the link between places, participants, writers, and readers is lost. Finally, in some qualitative research studies, participants may have the dual role of co-conducting the research, and their contribution requires disclosing their names. Until these issues are resolved in ethics committees, researchers need to manage them by raising and negotiating them with the different stakeholders involved in the research process.
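As a simple, hypothetical illustration of such coding systems (the names, the excerpt, and the code below are invented), pseudonyms can be assigned and applied consistently before excerpts are stored or quoted.

```python
# Replace participant names with stable pseudonyms before quoting interview data.
import itertools

participants = ["Mina Karimi", "John Smith", "Li Wei"]
counter = itertools.count(1)

# A simple coding system: Participant 1, Participant 2, ... The key linking real
# names to pseudonyms would be stored separately and securely, or not kept at all.
pseudonyms = {name: f"Participant {next(counter)}" for name in participants}

excerpt = "John Smith said the writing class at Greenfield College was helpful."
for real_name, pseudonym in pseudonyms.items():
    excerpt = excerpt.replace(real_name, pseudonym)
print(excerpt)   # place names such as the college would be masked in the same way
```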
Further reading → Nespor (2000), Ritchie & Lewis (2003), Walford (2005), Wiles, Crow, Heath, & Charles (2008)
See also → confidentiality, ethics
Aparadigmatic stance
The pragmatic nature of mixed-methods research (MMR) lends itself to an aparadigmatic stance in the sense that it is the context and the research questions, rather than paradigmatic affiliation, that lead the researcher to select and apply appropriate methods. In other words, the context and the research questions can guide the researcher to make practical decisions about the design of the study and to mix and match attributes from the seemingly opposing research paradigms. The aparadigmatic stance is based on the premise that research paradigms are distinct from research methods and that they can be used independently, so as not to be distracted by the epistemology–methods relationship. It is argued that this is especially applicable in applied fields, such as applied linguistics, where researchers use whatever methods they find appropriate for answering research questions. This is in contrast to a paradigmatic stance, which recognises the incompatibility thesis and asserts that the two research methodologies (quantitative and qualitative) are different at the ontological, epistemological, and axiological levels and are therefore incompatible. The aparadigmatic stance, on the other hand, gives priority to the research context and research questions and mixes the two methodologies in favour of making more comprehensive inferences. The aparadigmatic stance is not without its critics. There are scholars who believe there is no paradigm-free research and that it is inappropriate to mix paradigms in one study. However, from a pragmatic stance and for applied fields like applied linguistics, whose goals are more the practical ones of resolving real-life problems than necessarily producing and disseminating theoretical knowledge, it is popular to approach research more pragmatically and mix the two methodologies in one study. This will, of course, involve theoretical and paradigmatic discussions at some stages in the research process, though this is not a major concern at the initial stages of designing the study. The main point of an aparadigmatic stance is therefore to put aside paradigmatic affiliation in favour of allowing the two research paradigms to collaborate in producing more comprehensive explanations of the phenomenon under study.
Further reading → Morgan (2007), Patton (2002), Tashakkori & Teddlie (2003, 2010), Teddlie & Tashakkori (2009)
See also → incompatibility thesis, mixed-methods research (MMR), research paradigm, research questions
Applied vs basic research
Research may be categorised into applied versus basic. Broadly, applied research is conducted to solve an immediate practical problem in a particular context. It is used by researchers in
applied fields to address actual problems under the circumstances in which they emerge in practice. As such, the outcomes of applied research may not be generalisable to other contexts. In contrast, basic research aims at collecting empirical data to expand theory, not necessarily at solving immediate practical problems. This is why basic research is also called theoretical research and is conducted to expand the frontiers of knowledge. Many research studies in the field of first- and second-language acquisition can be classified as basic research because the researchers aim at developing theories related to language acquisition. The developed theories provide an explanation for how first- and second-language learners may acquire a language by explaining the different stages involved in the process of language acquisition. Moreover, based on the developed theories, researchers may be able to make some predictions about different stages in the process of language acquisition. The outcomes of basic research might be used to develop some guidelines for addressing practical problems; however, this is usually not a primary concern of the researchers involved in basic research. Although a majority of research projects conducted in linguistics can be categorised as basic or theoretical research, the main concern of researchers in applied linguistics is to tackle practical problems related to language as it is used in everyday communication in different contexts. There is quite a wide variety of contexts of language use in everyday life, including, but not limited to, the use of language in education, counselling, courts, and health. The use of language in any of these contexts, although quintessential, can critically affect the profession and the outcome of the practice and decision-making process. Applied linguists, therefore, are concerned with how language is used in any of these professions and how language-related practices can be improved to benefit different stakeholders in different professions. The findings of applied research may indeed help basic researchers complete theoretical explanations of different phenomena. Basic researchers who intend to develop a theory of second-language learning, for example, will benefit from the findings of classroom-based research, which is applied research in nature. The classification of research into applied and basic as mutually exclusive is becoming blurred, and the tendency is to consider them along a continuum with different permutations. This is especially true in light of the emergence of mixed-methods research (MMR), which recognises and aims at addressing both practical problems and theory development in a single study.
Further reading → Ary et al (2014), Johnson & Christensen (2012), Kumar (2014), Patton (2002)
See also → classroom-based research, mixed-methods research (MMR), theory
A priori themes
Themes, or categories, are the more general and abstract concepts qualitative researchers develop based on the specific codes they extract from their qualitative data. In other words, a theme or a category includes several substantiations of that theme as represented by specific codes. The purpose of qualitative research is to provide a theoretical explanation of the phenomenon by bringing together and making links between the themes or categories developed from the coded data. Themes may sometimes be chosen a priori, or in advance, when the researcher intends to apply a theoretical framework to empirical data and to check the degree to which the data fit the theoretical framework and its related themes. For example, in mixed-methods research (MMR), the researcher may first collect survey data from a sample of participants, investigating their motivational orientations using currently developed motivational frameworks and questionnaires. Subsequently, the researcher may interview participants about their motivational orientations and use the same themes or categories as used in the survey questionnaire, which can result in triangulation of the quantitative and qualitative data and analysis. When using a priori themes, the qualitative analysis is, in fact, approached from a deductive or top-down perspective, looking for the fit of the data to theory. This is in contrast to a grounded theory approach, in which an inductive or bottom-up approach is followed so that themes emerge from the coding process and are then linked in systematic ways to produce more abstract theoretical explanations. Using a priori themes may cause the researcher to overlook aspects of the data and ignore some important concepts and constructs in the data. Researchers can therefore prevent pitfalls like this in their data analysis by considering a priori themes as tentative and thus keeping themselves ready and open to possible emerging themes as well.
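A schematic sketch of such deductive, top-down coding is given below; the themes, cue words, and interview segments are invented for illustration, and in practice coding depends on researcher judgment rather than simple keyword matching.

```python
# Tag interview segments against a priori themes taken from a motivational framework.
a_priori_themes = {
    "instrumental": ["job", "career", "exam", "salary"],
    "integrative":  ["culture", "friends", "community", "belong"],
}

segments = [
    "I study English mainly to pass the entrance exam and get a better job.",
    "I want to make friends abroad and feel part of the community.",
    "I just enjoy how the language sounds.",   # may not fit any a priori theme
]

for segment in segments:
    matched = [theme for theme, cues in a_priori_themes.items()
               if any(cue in segment.lower() for cue in cues)]
    # Segments matching no a priori theme are flagged so emergent themes can be added.
    print(matched or ["unassigned: candidate emergent theme"], "->", segment)
```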
Further reading → Boyatzis (1998), Crabtree & Miller (1999), Fereday & Muir-Cochrane (2006), Tashakkori & Teddlie (2003, 2010), Teddlie & Tashakkori (2009)
See also → grounded theory, mixed-methods research (MMR), participants, qualitative data analysis, quantitative data analysis, theme, theoretical framework
Archival records
In the process of their research, applied linguists may need to refer to some language-related non-current records (e.g., particular letters, unique speeches, or newspapers of a certain period) that have symbolic meanings. Such records are not publicly accessible, but are usually archived in certain places in libraries or other organisations (e.g., the National Archives of Australia and the Open Language Archives Community) because of their historical, cultural, or evidentiary value. Because of that value, archival records are usually appraised, retained, organised, and preserved under certain conditions. Archival records are very useful sources of knowledge and information, especially in diachronic research projects with a focus on the development of language over time. However, depending on the investigator’s research orientation and the purpose of the research project, archival records can be used and subjected to different types of analyses. An example of archival records that may be used by applied linguists is the collection of American English dialect recordings.
Further reading → Christian (1986), Teddlie & Tashakkori (2009)
Arithmetic average
See mean
Association, strength of
See correlation
Attributes in quantitative research
In quantitative research, an attribute is a characteristic that human beings possess and that cannot be manipulated. Gender, motivation, learning style, and ethnicity are some examples of attribute variables. For example, the gender variable can be categorised using the attributes of male and female, or the level-of-education variable can be categorised using the attributes of freshman, sophomore, junior, and senior. In this sense, attributes are labels that represent different categories of a variable. In statistical analysis software packages such as SPSS, attributes are defined by numbers (for example, 1 = male, 2 = female) so that the characteristic can be codified and defined for the program. Obviously, the numbers used to label the categories do not have arithmetic values; rather, they are used just to categorise a variable such as gender or level of education into exclusive groups. Attributes are important in categorical variables because it is important to define them accurately so that the categories are mutually exclusive. Independent variables might be defined as attribute or active. Attribute independent variables cannot be manipulated because the participants already possess them, whereas active independent variables can be directly manipulated. As such, attribute independent variables are usually used in ex post facto or causal-comparative research, and active independent variables are used in experimental designs, in which manipulation of an independent variable is a requirement. An example of ex post facto research would be when a researcher collects data on a variety of demographic attribute variables to investigate how these variables collectively and individually can account for and predict the variance in a dependent variable. For example, the researcher may collect data from a sample of college students on their gender, motivational orientation, learning style preference, and ethnicity to investigate how these variables might collectively and individually account for and predict variance in students’ language proficiency. An example of an experimental design would be one where a researcher manipulates the language teaching methodology applied to experimental and control groups and compares them on an outcome measure or dependent variable.
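As a brief illustration of this kind of coding (the records and numeric codes below are invented, and pandas is used here merely as a stand-in for the SPSS variable view), attribute variables can be mapped onto numeric labels whose values carry no arithmetic meaning.

```python
# Map mutually exclusive category labels of attribute variables onto numeric codes.
import pandas as pd

data = pd.DataFrame({
    "gender":      ["male", "female", "female", "male"],
    "year":        ["freshman", "junior", "sophomore", "senior"],
    "proficiency": [54, 71, 63, 68],          # dependent variable (interval)
})

gender_codes = {"male": 1, "female": 2}
year_codes   = {"freshman": 1, "sophomore": 2, "junior": 3, "senior": 4}

data["gender_code"] = data["gender"].map(gender_codes)
data["year_code"]   = data["year"].map(year_codes)
print(data)

# The numbers only label categories; computing their mean would be meaningless.
```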
Further reading → Ary et al (2014), Kumar (2011), Trochim & Donnelly (2008)
See also → categorical variables, causal-comparative research, control group, dependent variable, experimental designs, experimental group, independent variables, moderator variables, participants, quantitative research, sample, SPSS, variables, variance
Attrition
Attrition, or experimental mortality, refers to the loss of participants during an experiment and is one of the potential threats to internal validity in experimental designs. During the course of an experiment, some participants may decide not to complete the experiment, leaving the researcher with missing data. Given that participants take part in research studies voluntarily and are given the right to withdraw at any stage in the process, attrition can occur for different reasons, including students’ other commitments or even just failing to show up. When some participants withdraw from the study, it may affect the experimental and control groups in a way that could produce differences on the dependent variable. This is based on the fact that when an experiment is set up, the comparison groups are made comparable either through randomisation (random selection and random assignment) or through matching techniques. Accordingly, apart from the variables of the study, other variables are presumably evenly distributed across the groups, so that any differences in the dependent variable can be attributed to the effect of the treatment or independent variable. Even in a quasi-experimental design, where convenience sampling and intact groups are used, researchers attempt to make the comparison groups as comparable as possible to neutralise the effect of other variables. When some participants quit the study, especially if they are mostly from one of the groups, this may cause the groups to become different, so that any observed difference in the dependent
variable may not be solely due to the effect of the treatment. Attrition usually happens when the experiment goes on for a long time, causing participants to become tired, or in cases where the treatment is so demanding that it leads some participants to drop out, especially the low-performing ones. When there is attrition in an experiment, the researcher needs to examine the characteristics of those who have dropped out to make sure that the drop-outs have been sporadic and not due to characteristics of the participants or the demanding nature of the experiment. If certain types of participants drop out, then the remaining sample may not represent the target population. In this case, attrition will threaten both the internal and external validity of the experiment.
Further reading → Ary et al. (2014), Brown (1988), Duff (2008), Trochim & Donnelly (2008)
See also → control group, convenience sampling, dependent variable, experimental designs, experimental group, external validity, independent variables, internal validity, participants, population, quasi-experimental research design, randomisation, variables
Audit trail
An audit trail is a strategy used by qualitative researchers to enhance the trustworthiness of a qualitative inquiry by establishing the confirmability and dependability of the research in terms of the transparency of the key decisions made throughout the research process. Confirmability is the qualitative equivalent of objectivity, and dependability is the qualitative equivalent of reliability in quantitative research. Qualitative researchers thus attempt to show that their research is confirmable and dependable, and therefore trustworthy, if the study were replicated. Unlike quantitative research, which strives for tight controls to enhance replicability, qualitative research recognises some variation across contexts, and thus consistency in qualitative research deals with the extent to which variation can be tracked or explained. Through reflexive methodological accounting, the researcher demonstrates that a study was conducted with due care. One of the methods of enhancing confirmability and dependability in qualitative research is an audit trail, a mechanism through which the researcher opens up the processes of decision making at different stages of the research process. Through the audit trail, qualitative researchers document how, why, and in what context the study was conducted. They record all the key stages in the research process and explain the key theoretical, methodological, and analytical decisions. This allows readers to make judgments about the replicability of the study within the constraints of a new context. Moreover, to add to the quality of the research study, the researcher explicitly outlines how his or her thinking evolved throughout the different phases of the study. This is usually done through memo writing, in which the researcher continually writes analytical descriptions of the procedural decisions made in the process of research. Qualitative data analysis software packages usually include memoing facilities, which help the researcher keep a log of his or her reflections. Maintaining a log of all research activities, including data collection and analysis procedures, will help the researcher develop a detailed audit trail and will help readers envisage the role of the researcher in the research process. An audit trail therefore provides a context for the researcher and the readers to authenticate research results.
Further reading → Carcary (2009), Creswell & Miller (2000), Lincoln & Guba (1985)
See also → confirmability, dependability, memo writing, objectivity, qualitative research, quantitative research, reliability, trustworthiness
Axial coding
Axial coding is the intermediate stage of coding in grounded theory, in which the researcher groups open codes into categories and relates the categories based on some discernible relationships to arrive at themes, usually related to the research questions and purpose of the study. The purpose is to identify the key features of the phenomenon under study by organising the open codes into categories with axial coding and then using the categories and their relationships to develop themes at the selective coding stage to interpret the data and analysis. The codes in a category may be hierarchical, comprising sub-categories, or they may be non-hierarchical, forming a list of codes in a category. The main purpose of axial coding is therefore to sort the large number of codes produced at the open coding stage into more significant, focused categories. Writing memos through the open coding stage helps the researcher observe the similarities between the initial concepts for generating higher-order categories around certain axes. Some grounded theorists have recently used "focused coding" instead of axial coding. Once the categories are formed at the axial coding stage, they are linked so that more abstract themes are produced in favour of a theoretical explanation of the phenomenon. Given the iterative nature of the grounded theory approach and through the constant comparative method, data collection and analysis continue through theoretical sampling to refine the codes, categories, and themes until the researcher reaches data saturation and a systematic explanatory framework about the phenomenon under study is developed. The recursive nature of data collection and analysis ensures that the three different stages of initial (open), intermediate (axial), and advanced (selective) coding occur simultaneously as the researcher codes the data iteratively through the constant comparison method.
Further reading → Bazeley (2013), Charmaz (2006), Corbin (2009), Corbin & Strauss (2008), Draucker et al. (2007), Glaser & Strauss (2012), Heigham & Croker (2009), Miles & Huberman (1994), Richards (2003), Strauss & Corbin (1998)
See also → constant comparative method, data saturation, grounded theory, open coding, research questions, selective coding, theme, theoretical sampling
Axiology
Axiology is the branch of philosophy that deals with values and ethics, but is also used as
a distinctive feature of research paradigms. One of the main differences between the positivist and constructivist research paradigms and their affiliated research approaches, both quantitative and qualitative, is the place of values in the process of inquiry. Pragmatism and the transformative paradigm, two underlying paradigms for mixed-methods research (MMR), acknowledge both the value-ladenness and theory-ladenness of research and thus attempt to recognise the researcher's subjective interpretations in the process of the inquiry. Quantitatively oriented researchers subscribing to a positivist worldview believe that research is value-free in the sense that the researcher's values should not affect the production of empirical knowledge. From their perspective, valid knowledge is objective and free from the researcher's values in so far as the researcher attempts to detach himself or herself from the object of the study. On the contrary, qualitative researchers subscribing to a constructivist worldview contend that research is value-laden, meaning that it is not possible for the researcher to detach himself or herself, and their values, from the object of the study. They therefore recognise subjective knowledge, which may entail the involvement of the researcher's and researchees' values in the production of empirical knowledge. For pragmatists, the topic of study is congruent with the researcher's value system, and thus researchers are aware of the social consequences of their projects, which will, in turn, lead them to involve their values in the process of the inquiry. The role of values is even more significant in the transformative paradigm. From a transformative perspective, the values that guide the research are derived from and aim at enhancing social justice, not the researcher's personal interest. Along with identifying the characteristics of a research project, such as its methodology, the researcher should communicate with their audience regarding the axiology of their research project. This is particularly important in mixed-methods research because not only are quantitative and qualitative approaches used in a single study, but different paradigms or worldviews may also inform the design and implementation of an MMR study.
Further reading → Guba (1990), Heigham & Croker (2009), Reichardt & Rallis (1994), Richards (2003)
See also → mixed-methods research (MMR), pragmatism, qualitative research, quantitative research, research paradigm
Bar chart or bar graph
A bar chart or bar graph is a visual representation of a frequency table that summarises categorical or nominal data. Bar charts are thus used to display variables measured on a nominal or ordinal scale. The bars in a bar chart may be presented vertically or horizontally, but vertical bar charts are more common in research reports for displaying the quantity or frequency of variables. Horizontal bar charts may be more appropriate for showing timelines, such as in a Gantt chart, which is used to illustrate a project schedule. In Gantt charts, the bars are used to show the start and finish of the project elements. In any case, the bars should be uniform in terms of their width and the space between them. The height of each bar corresponds to the value of the category it represents. The frequency of occurrence of the different categories of a variable (e.g., gender or level of education) is presented as spaced columns or bars so that they can be compared with each other. Figure B.1 is an example of a bar graph, which presents the number of participants in each of three proficiency-level categories.
The categories (low, intermediate, high) are called categorical variables, or the attributes of the language proficiency variable. Bar charts are very popular for presenting information about nominal and ordinal data in research reports. Histograms can be said to be another type of bar chart, used to display the distribution of variables that are measured on an interval or ratio scale. Other graphical representations used to provide a visual representation of data and information in research reports are pie charts and line graphs.
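As a hedged illustration of the kind of bar chart described above (not a reproduction of Figure B.1), the following sketch plots participant counts for the three proficiency-level categories with matplotlib; the counts are invented for the example.

```python
# Hypothetical counts for the three proficiency-level categories
import matplotlib.pyplot as plt

categories = ["Low", "Intermediate", "High"]   # attributes of the proficiency variable
counts = [18, 34, 12]                          # invented frequencies for illustration

plt.bar(categories, counts, width=0.6)         # vertical bars of uniform width
plt.xlabel("Proficiency level")
plt.ylabel("Number of participants")
plt.title("Participants by proficiency level")
plt.show()
```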
Further reading → Brown (1988), Cleveland & McGill (1985), Hatch & Lazaraton (1991),
Kumar (2011), Rasinger (2013), Salkind (2004), Shah & Hoeffner (2002)
See also → attributes in quantitative research, categorical variables, histogram, interval scale, line graph, nominal scale, ordinal scale, participants, pie chart, ratio scale, variables
Bartlett test
One of the underlying assumptions in parametric statistical tests of significance like analysis
of variance (ANOVA) is the equality or homogeneity of variance of the outcome measures or dependent variables across different groups. Equal variance across different samples is also called homoscedasticity or homogeneity of variance and can be tested through different procedures. One of these procedures is the Bartlett test, which is also referred to as the Bartlett-box F test. Another underlying assumption for parametric tests is normality, and the Bartlett test is quite sensitive to departures from normality; the outcome measures should therefore be checked for normality before being subjected to the Bartlett test. If the normality assumption is violated, the recommendation is to use Levene's test instead to check homogeneity of variance.
Further reading → Bartlett (1937), Hinkle, Wiersma, & Jurs (2003), Pallant (2007), Snedecor & Cochran (1989), Stevens (1986, 2002)
See also → analysis of variance (ANOVA), dependent variable, parametric tests, statistical tests of significance, variance
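A minimal sketch of the workflow described in this entry, using SciPy with invented group scores: check each group for normality, then run the Bartlett test, falling back to Levene's test if normality looks doubtful. The data and the .05 threshold are assumptions for illustration.

```python
# Hypothetical outcome scores for three groups (invented for illustration)
from scipy import stats

group1 = [62, 70, 68, 75, 66, 71, 64, 69]
group2 = [58, 73, 61, 77, 65, 59, 72, 70]
group3 = [80, 74, 79, 83, 76, 81, 78, 75]

# Check normality of each group first (Bartlett is sensitive to non-normality)
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group1, group2, group3))

if normal:
    stat, p = stats.bartlett(group1, group2, group3)
    test = "Bartlett"
else:
    stat, p = stats.levene(group1, group2, group3)
    test = "Levene"

print(f"{test} test: statistic = {stat:.3f}, p = {p:.3f}")
# A non-significant p-value (> .05) is consistent with homogeneity of variance.
```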
Baseline
Baseline refers to the observation of a given performance (e.g., reading ability) of particular
participants before they are exposed to any treatment designed to affect that performance. The baseline observation of the participants' performance serves as the point of comparison for any subsequent observation made after the introduction of a treatment. In pre-post test designs, any difference observed between the baseline (pretest) and the measurement taken after the treatment is applied (posttest)
can be attributed to the effect of the treatment, provided the effect of control or extraneous variables is neutralised. It is usually not easy to control all the extraneous variables in pre-post designs, especially if only one group is involved. Interrupted time-series designs are suggested as an alternative to pre-post one-group designs. In interrupted time-series designs, a single group of participants is pretested a number of times during the baseline phase. They are then exposed to a treatment (a new writing activity, for example) and are post-tested a number of times after the treatment. Any discontinuity between the pretest and posttest measures can be attributed to the effect of the treatment. Baselines are also used in A-B-A designs. For example, a teacher researcher may be interested in studying whether group work affects a student's writing performance in a writing class. The researcher can use an A-B-A design with a number of pretests and posttests to investigate whether group work affects the student's writing performance. Accordingly, the student's writing will be assessed several times before he or she is involved in group work. The student will then take part in group work for some time while his or her writing performance is measured repeatedly during the treatment period. The treatment (group work) will then be stopped, and the student's writing performance will again be measured repeatedly for some time during the post-treatment period to form another baseline measurement. The measures of the student's writing performance during the first baseline, the treatment period, and the second baseline will be analysed and represented graphically to show both shorter- and longer-term changes in the student's writing performance. If group work is supposed to improve the student's writing performance, then the measures during the treatment period should show significant differences from both baseline measurements. This cycle may be repeated several times: each time, after the baseline writing performance is recorded, a treatment (group work) is implemented, and the student's writing performance is recorded during and after the treatment. The periods during which the treatment is not present are called baseline periods, and the periods during which the treatment is provided are called treatment periods. The basis of comparison is the difference in the student's performance before and after implementation of the treatment. The baseline performance recordings serve as a control with which treatment effects are compared. Any discrepancy between the baseline and treatment measures may be attributed to the effect of the treatment. In experimental designs, the control group acts as the baseline against which the results of the experimental groups are compared. In repeated measures designs, the same group of participants acts as both the control (baseline) and the experimental group.
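The A-B-A logic described above can be represented graphically with a simple plot of repeated writing scores across the baseline, treatment, and return-to-baseline phases; the scores below are invented purely to illustrate the layout of such a graph.

```python
# Invented writing scores for one student across A-B-A phases (illustrative only)
import matplotlib.pyplot as plt

baseline1 = [52, 54, 53, 55]        # phase A: repeated pretests
treatment = [58, 61, 63, 66, 65]    # phase B: group work in place
baseline2 = [60, 59, 58, 57]        # phase A: treatment withdrawn

scores = baseline1 + treatment + baseline2
sessions = range(1, len(scores) + 1)

plt.plot(sessions, scores, marker="o")
# Vertical lines mark the phase changes (start and end of the treatment period)
plt.axvline(len(baseline1) + 0.5, linestyle="--")
plt.axvline(len(baseline1) + len(treatment) + 0.5, linestyle="--")
plt.xlabel("Assessment session")
plt.ylabel("Writing score")
plt.title("A-B-A design: baseline, treatment, baseline")
plt.show()
```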
Further reading → Ary et al. (2014), Johnson & Christensen (2012), Salkind (2004)
See also → A-B-A designs, control group, experimental designs, experimental group, extraneous variables, participants, pre-post test designs, repeated measures design, time-series design
Bell-shaped curve
See normal distribution
Between-groups designs
Between-subjects or between-groups designs are contrasted with within-subject or within-group designs, which are also called repeated measures designs. In within-group designs, the same group of participants is measured at different time intervals so that their performance can be compared over time and any change can be shown. In between-groups designs, the researcher compares different groups of participants on a factor or treatment. This is usually done through
experimental designs in which the researcher assigns randomly selected participants to the experimental and control groups. In such cases, participants are randomly assigned to different task conditions with the justification that any uncontrolled characteristics of the participants are distributed randomly among the groups. This procedure helps neutralise the effect of unknown confounding variables on the experiment's results. The key issue in experimental between-groups designs is thus random assignment to create equal groups, even though we can never expect the groups to be exactly the same. Since the groups formed in this way are independent of each other, the procedure used to test hypotheses about differences between groups is called a between-groups design or between-groups study. Each group in the experiment is exposed to one level of the independent variable. After the course of the treatment, the results of the two groups are compared for any possible differences in the outcome measure or dependent variable, which can be attributed to the effect of the treatment or independent variable. For example, a researcher may assign one group of students to conventional reading instruction and an equal group of students to strategy-based reading instruction. After 3 or 4 months, the researcher can compare the reading performance of the two groups using a reading test as the dependent variable. The two levels of the independent variable (conventional vs. strategy-based reading instruction) will be compared for possible differences in the dependent variable.
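For the two-group reading example above, one common way to test the between-groups difference is an independent-samples t-test. The sketch below uses SciPy with invented reading test scores; the data and the choice of test are illustrative assumptions rather than part of the entry.

```python
# Invented posttest reading scores for the two instruction groups
from scipy import stats

conventional = [61, 55, 67, 59, 64, 58, 62, 60, 57, 65]
strategy_based = [68, 72, 63, 70, 75, 66, 71, 69, 64, 73]

# Independent-samples t-test comparing the two levels of the independent variable
t_stat, p_value = stats.ttest_ind(conventional, strategy_based)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant p-value would suggest a difference attributable to the treatment,
# provided the groups were comparable at the outset (e.g., via random assignment).
```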
Further reading → Ary et al. (2014), Mackey & Gass (2005), Maxwell & Delaney (2000), Rasinger (2008), Trochim & Donnelly (2008)
See also → control group, dependent variable, experimental designs, experimental group, hypotheses, independent variables, participants, repeated measures design, variables
Between-strategies mixed-method data collection
In mono-method quantitative or qualitative research, researchers use data collection instruments appropriate to the research design. In quantitative methods, for example, researchers may use tests or closed-ended questionnaire items. In qualitative methods, on the other hand, researchers may collect the required data using observations, unstructured interviews, or focus group interviews. The collected data in each of these two conventional approaches are analysed appropriately, using statistical or thematic analysis, to help researchers draw plausible conclusions. In mixed-methods research (MMR), however, since both quantitative and qualitative data and analysis are used in favour of making more effective inferences, between-strategies mixed-method data collection is used. This means mixed-methods researchers use more than one data collection strategy in a single study to collect both types of data. Data collection in mixed methods can be either within-strategy mixed data collection or between-strategies mixed data collection. In within-strategy mixed data collection, both quantitative and qualitative data can be collected through one instrument (an observation, a questionnaire, etc.). The questionnaire, for example, may include both closed- and open-ended items, allowing researchers to collect both quantitative and qualitative data for mixed-methods purposes. Another example would be a classroom observation scheme with pre-identified categories for the observer to tick, as well as spaces for recording extensive narrative notes about the patterns of learning and teaching activities. The first type of categorical data lends itself to frequency analysis, whereas the narrative data allow for thematic analysis – both types of data and analysis to be dealt with within a single mixed-methods study. In between-strategies mixed-methods data collection, quantitative and qualitative data are collected through separate data collection instruments. In classroom-based research, for instance, teachers may be
interviewed (qualitative data) before and after their classes to discuss the learning and teaching
activities they had planned for and those they actually implemented in their classes. Structured observation schemes with pre-determined categories may, on the other hand, be used to record patterns of learning and teaching as they occur in the observed classes. The mixed-methods researcher can then use both quantitative and qualitative data and analyses to draw plausible conclusions about the research problem. Both within- and between-strategies mixed-methods data collection allow mixed-methods researchers to collect and analyse quantitative and qualitative data in a single study for a particular purpose.
Further reading → Tashakkori & Teddlie (2010), Teddlie & Tashakkori (2009)
See also → classroom-based research, closed-ended questions, focus group interviews, instruments, interviews, mixed-methods research (MMR), open-ended questions, qualitative research, quantitative research, questionnaires, research design, thematic analysis
Biased sample
A good sample is one that is representative of the population from which it is derived. When selecting a sample, two types of error can occur: random and systematic. Some random error is always involved in selecting samples, even when random sampling procedures are used. However, systematic error occurs when there is a flaw in the procedure used to select the sample, and this results in a biased sample. A sample selected in such a way that all possible elements of the target population do not have an equal chance of being chosen is a biased sample. In other words, certain elements in the population are systematically under- or overrepresented. Such a sample does not represent the population, due to the systematic sampling error it induces, and findings from this sample cannot be generalised to the target population. For example, if some language minorities are underrepresented in a sample selected from a highly multilingual population, the findings from the sample can hardly be generalised to the target population. In quantitative research, non-random samples are usually biased samples because they are systematically different from their target population on certain characteristics, and this is why researchers are cautioned about generalising non-random sample findings to the target populations. This is particularly important in survey research, in which the goal is to understand population characteristics based on the findings from a sample. The only means of correcting the systematic error that results in a biased sample is a revision of the procedures used to select the sample.
Further reading → Ary et al. (2014), Johnson & Christensen (2012), Kumar (2011), Paltridge & Phakiti (2010)
See also → population, quantitative research, random sampling, sample, sampling procedure, survey research
Biserial correlation (rbi)
Biserial correlation (rbi) is used to estimate the relationship between two continuous variables when one of them has been artificially transformed into a dichotomous variable. A criterion or cut-score is used to form the dichotomy for the second variable, and cases are assigned to
the two dichotomies based on the criterion or cut-score. This type of transformation has also been called dummy coding. An example of dummy coding, or transforming an interval variable into a dichotomous variable, is when students are divided into high- and low-proficiency groups based on the median of their proficiency scores. The criterion or cut-score used here to divide students into high and low groups is the median of the students' scores on a language proficiency test. In such cases, the biserial correlation (rbi) is considered an estimate of the Pearson correlation. It is rarely advisable to dichotomise continuous measures, because some information will be lost, unless transformation to a normal distribution is not possible. When one of the two variables is naturally dichotomous, the point-biserial correlation is used instead of the biserial correlation. Examples of naturally occurring dichotomous variables are gender, or correct and incorrect answers in a multiple-choice test. Like the Pearson correlation, the biserial correlation is reported as a correlation coefficient that varies between –1 and +1 and is presented and interpreted in terms of significance and magnitude. The closer the correlation coefficient is to 1, the stronger the relationship. Moreover, significant correlation coefficients show that the relationship between the variables is not due to chance.
Further reading → Brown (1988), Glass & Hopkins (1984)
See also → continuous variables, correlation, correlation coefficient, dichotomous variables, median, normal distribution, Pearson product-moment correlation, point-biserial correlation, variables
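A hedged sketch of the dummy-coding example in this entry, using invented data. SciPy provides a point-biserial function (appropriate for a naturally dichotomous variable, and equivalent to Pearson r with 0/1 coding); SciPy does not provide the biserial correction directly, so one common conversion formula is applied by hand and should be treated as an illustrative assumption.

```python
# Invented scores for illustrating dummy coding and biserial-type correlations
import numpy as np
from scipy import stats

proficiency = np.array([45, 67, 52, 80, 61, 73, 58, 90, 49, 66])
writing = np.array([50, 70, 55, 85, 60, 75, 62, 88, 48, 68])

# Dummy coding: median split of proficiency into low (0) and high (1)
high = (proficiency > np.median(proficiency)).astype(int)

# Point-biserial correlation (equivalent to Pearson r with the 0/1 coding)
r_pb, p_value = stats.pointbiserialr(high, writing)

# One common conversion to the biserial estimate for an artificial dichotomy:
# r_bi = r_pb * sqrt(p*q) / h, where h is the normal ordinate at the cut point
p_high = high.mean()
h = stats.norm.pdf(stats.norm.ppf(p_high))
r_bi = r_pb * np.sqrt(p_high * (1 - p_high)) / h

print(f"point-biserial r = {r_pb:.2f} (p = {p_value:.3f}), biserial estimate = {r_bi:.2f}")
# Note: dichotomising a continuous measure discards information, as the entry cautions.
```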
Bivariate analysis
See univariate analyses
Bonferroni procedure
The Bonferroni procedure, named after its designer, is a statistical procedure that is used to
adjust the alpha level, or level of significance, of a hypothesis test when multiple tests of
significance are being used. It is based on the assumption that when several tests of significance, also called a family of tests, are used to compare groups in a study, it is more likely that the researcher will be able to reject the null hypothesis when it is true. The Bonferroni procedure calculates a new level of significance (alpha) by dividing the original alpha level by the number of statistical tests used. To be statistically significant, a test result must be below this newly calculated level, not the original alpha level. This indeed yields a very conservative level of significance, especially when several tests are being used. The Bonferroni test is used in post hoc tests where several groups are compared for statistically significant differences. For example, a researcher may compare three groups exposed to three different instructional procedures through an analysis of variance (ANOVA) to check whether there is any significant difference among the three groups in terms of the outcome measure. If the result of the ANOVA test shows significant differences among the three groups, the researcher needs to use post hoc tests to find out where exactly the difference lies. Since three comparisons will be made (group 1 and 2, group 1 and 3, and group 2 and 3), the original level of significance (0.05, for example) must be divided by three (0.05/3 ≈ 0.017). The pairwise comparisons will
be checked against the new alpha level (0.017) to decide whether the difference in the
outcome measure is significant, rather than against the original alpha level (0.05). The SPSS Options dialogue box in ANOVA gives three choices for the adjustment of p-values in pairwise comparisons. These options are least significant difference (LSD), Bonferroni, and Sidak. The LSD option means no adjustments are made, whereas Bonferroni means that the level of significance (0.05, for example) is divided by the total number of comparisons made. Sidak is like Bonferroni except that it is less conservative. The use of the Bonferroni procedure helps guard against Type I error, that is, it reduces the probability of identifying significant results where none exist. The disadvantage of the Bonferroni procedure, however, is that it often overcorrects the Type I error and thus decreases the statistical power of the test, that is, it increases the Type II error. An alternative to Bonferroni, which still keeps the Type I error under control in multiple comparisons, is Holm's sequential procedure. In Holm's sequential procedure, all the comparisons are ordered from the smallest to the largest p-value. The comparison with the lowest p-value is tested first with a Bonferroni correction involving all the tests. Next, the second comparison is tested with a Bonferroni correction involving one fewer test (excluding the first test), and this procedure continues for the remaining comparisons. Like Sidak, Holm's procedure provides a less conservative correction. The use of the Bonferroni procedure therefore depends on which type of error, Type I or Type II, is more critical in a particular research project. In medical research, for example, the researcher may be more conservative and so opt to reduce the Type I error by using the Bonferroni procedure when multiple tests of significance are used to make group comparisons. On the other hand, in situations where the researcher is more interested in detecting significant differences between groups (increasing the test's power) and the Type I error is not as crucial, the Bonferroni procedure may be ignored.
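A small sketch of the adjustments discussed above, implemented directly in plain Python rather than through a statistics package; the three p-values are invented for illustration.

```python
# Invented p-values from three pairwise comparisons (groups 1-2, 1-3, 2-3)
p_values = [0.020, 0.012, 0.300]
alpha = 0.05

# Bonferroni: compare each p-value against alpha divided by the number of tests
bonferroni_alpha = alpha / len(p_values)            # 0.05 / 3 ≈ 0.017
bonferroni = [p < bonferroni_alpha for p in p_values]

# Holm's sequential procedure: order p-values, then use a shrinking divisor
order = sorted(range(len(p_values)), key=lambda i: p_values[i])
holm = [False] * len(p_values)
for rank, i in enumerate(order):
    if p_values[i] < alpha / (len(p_values) - rank):
        holm[i] = True
    else:
        break   # once a test fails, all larger p-values are also non-significant

print("Bonferroni decisions:", bonferroni)   # only the smallest p-value survives
print("Holm decisions:      ", holm)         # less conservative: two survive here
```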
Further reading → Aickin & Gensler (1996), Larson-Hall (2010), Shaffer (1995), Stevens (1986, 2002), Tabachnick & Fidell (2007), Tacq (1997)
See also → analysis of variance (ANOVA), level of significance, null hypothesis, post hoc tests, SPSS, Type I error, Type II error
Bootstrapping
Bootstrapping is a nonparametric statistical procedure that is used to make statistical inferences about a target population when the assumptions for parametric tests are violated. The procedure is used when the variability within a sample is used to estimate the sampling distribution, rather than making assumptions about it. This is done by randomly resampling the original sample, treating the original sample as a population from which different samples are derived. The derived samples are similar to, but slightly different from, the original sample because some cases in the original sample may appear once, twice or more, or not at all in the resampling procedure. The resampling procedure must, however, follow the same guidelines used in the original sampling so that the random variation present in the original sample is introduced into the resamples in the same way. The sample statistics (for example, the mean and standard deviation) of each of the resamples are close to, but slightly different from, those of the original sample. The distribution of these resampled statistics is the bootstrap estimate of the population parameters, which can then be used to make inferences about the population. As such, bootstrapping is, in fact, a statistical procedure that is used to estimate standard errors and confidence intervals within which statistical inferences about the target population can be made. Bootstrapping is useful when the assumptions for parametric tests are violated or when there is no strong parametric theory related to a particular sample statistic. An example of the use of bootstrapping is checking the difference between two medians, in which resampling is used to produce a distribution of medians for testing hypotheses about the target population. Two steps are involved in bootstrapping: estimating the distribution of the statistic (for example, the mean or median) through resampling, and using the estimated sampling distribution to produce confidence intervals for making inferences about the population's parameters.
Further reading → Chernick (1999), Davison & Hinkley (1997), Mooney & Duval (1993), Stevens (1986, 2002), Tabachnick & Fidell (2007), Tacq (1997)
See also → hypotheses, mean, median, nonparametric tests, parameters, parametric tests, population, sample, standard deviation, variability
Bracketing
Bracketing is a technique in qualitative research methods in general and in phenomenology
in particular. The concept of bracketing refers to separating the researchers' own experiences and beliefs from what can be explored in the data and from the perspective of those who have been involved in and experienced the phenomenon. The concept and procedure of researchers suspending their beliefs and experiences from those of the participants were first developed in phenomenology and are also referred to as epoche. Bracketing is considered a rigorous process through which both internal (assumptions, beliefs, theories) and external (context, culture, time) suppositions are put aside so that the researcher can focus on the specific phenomenon and see it as it is. There is no consensus on when and how bracketing should be applied in qualitative research. Some researchers have suggested that the process includes four main elements: the actual brackets that the researcher places around the phenomenon; the nature of the theories, experiences, and beliefs suspended by the researcher; the temporal structure in which the bracketing is applied; and the reintegration of the data generated from the bracketing process. The strength of bracketing is believed to depend on how the researcher operationalises these four elements. Some researchers have limited bracketing to the data analysis phase only, and not the data collection phase. Other researchers suggest that an awareness of preconceptions is developed at the beginning of the research, when the project is at the conceptualisation stage, and that bracketing then continues throughout the research. One practical method of bracketing is writing memos throughout the process of data collection and analysis, which can include reflections on the processes of conducting the research, methodological decision making, and observational notes. Reflexive journaling is another procedure for bracketing, which can begin prior to the research, with the researcher recording his or her preconceptions of the research problem, and continue throughout the research process.
Further reading → Ashworth (1996), Chan, Fung, & Chien (2013), Gearing (2004), Glaser (1992), LeVasseur (2003), Rolls & Relf (2006), Schutt (2006), Tufford & Newman (2010)
See also → participants, phenomenology, qualitative research
Canonical correlation
Canonical correlation is a multivariate statistical procedure, which is an expansion or
generalization of multiple regression. It adds more than one dependent variable to the multiple regression equation. In other words, canonical correlation is a regression analysis with several independent variables and several dependent variables. Instead of running several multiple regressions to investigate the predictability of the independent variables for each of the dependent variables, canonical correlation computes the best combination of variables in both sets. The result is a canonical correlation coefficient that represents the maximum correlation possible between the set of independent variables and the set of dependent variables. It also indicates the relative contributions of the separate independent and dependent variables to the canonical correlation, so one can see which variables are most important to the relationships between the two sets. Like other correlations, the results of canonical correlation vary between –1 and +1 and are presented as Rc and interpreted in terms of significance and magnitude. The researcher therefore looks for significant relationships among the sets of variables and then interprets their magnitude. Canonical correlation is indeed increasingly superseded by other sophisticated
statistical analyses such as structural equation modeling (SEM) and multivariate analysis of variance (MANOVA).
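A brief sketch using scikit-learn's CCA with invented data, showing how the first canonical correlation coefficient (Rc) can be obtained from the paired canonical variates; the variable names, data, and the use of scikit-learn are assumptions for illustration.

```python
# Invented data: a set of independent variables and a set of dependent variables
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 80
X = rng.normal(size=(n, 3))                       # set of independent variables
Y = 0.6 * X[:, :2] + rng.normal(size=(n, 2))      # set of dependent variables related to X

cca = CCA(n_components=2).fit(X, Y)
X_c, Y_c = cca.transform(X, Y)

# The canonical correlation (Rc) is the correlation between paired canonical variates
first_rc = np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1]
print(f"first canonical correlation Rc = {first_rc:.2f}")
```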
Further reading → Hair, Anderson, Tatham, & Black (1998), Stevens (1986), Tacq (1997)
See also → dependent variable, independent variables, multiple regression, multivariate ysis of variance (MANOVA), regression analysis, structural equation modeling (SEM)
anal-Case study
One of the widely used qualitative methods in applied linguistics is case study research, in which the researcher concentrates on a single case. The case can be a single person (e.g., a language learner), a group of people (e.g., the teachers of a particular course), or a phenomenon (e.g., providing feedback to students) in a particular context with which the researcher has developed some ties or interests. The unit of analysis will be the case, which needs to be defined