Language Testing Bytes is a podcast to accompany the SAGE journal Language Testing. Three or four times per year, we will release a podcast in which we discuss topics related to a particular issue of the journal. This may be an interview with a contributor to the journal or with another expert in the field. You can download the podcast from this website or from ltj.sagepub.com, or you can subscribe to the podcast through iTunes.
Coming Soon: The next podcast will be released in August 2014, and will feature Folkert Kuiken and Ineke Vedder on rater behaviour in performance testing.
Despite the growing popularity of paired-format speaking assessments, the effects of pre-task planning time on performance in these formats are not yet well understood: some studies have revealed benefits of planning, while others have not. Using a multifaceted approach that includes analysis of the process of speaking performance, this paper investigates the effect of pre-task planning in a paired format. Data were collected from 32 students who carried out two decision-making tasks in pairs, under planned and unplanned conditions. The study used analyses of rating scores, discourse analytic measures, and conversation analysis (CA) of test-taker discourse to gain insight into processes of co-construction. A post-test questionnaire was also administered to explore the participants’ perceptions of planned and unplanned interactions. The results from rating scores and discourse analytic measures revealed that planning had a limited effect on performance, and analysis of the questionnaires did not indicate clear differences between the two conditions. CA, however, identified the possibility of contrasting modes of discourse under the two planning conditions, raising the concern that planning might actually deprive test-takers of the chance to demonstrate their ability to interact collaboratively.
This paper reports an investigation into how the prompt may influence the discourse of group oral tests. The group oral test, in which three or four participants are rated on their ability to discuss a prompt, is a format for assessing the spoken ability of language learners. In this study, 141 Japanese university students were videoed in 41 group orals of three or four test-takers. Although the four different prompts written for the test were supposed to be of equal difficulty, they were found to differ substantially in the type and number of questions that comprised them. Analysis of the transcribed interactions revealed significant differences in the turns taken and in the syntactic complexity and fluency of the interactions the prompts elicited. A qualitative examination revealed that the two prompts that elicited longer, more complex turns did so by encouraging test-takers to explain their family circumstances or speculate about their future. Prompts with more factual content elicited shorter, less complex turns, and the prompt that test-takers responded to with the least fluency required them to talk about a more personal subject. The implications for rating and for creating prompts, including the need to tailor them to the purpose of the test, are discussed.
Psychometric properties of the Phonological Awareness Literacy Screening for Kindergarten (PALS-K) instrument were investigated in a sample of 2844 first-time public school kindergarteners. PALS-K is a widely used English literacy screening assessment. Exploratory factor analysis revealed a theoretically defensible measurement structure that was found to replicate in a randomly selected hold-out sample when examined through the lens of confirmatory factor analytic methods. Multigroup latent variable comparisons between Spanish-speaking English-language learners (ELLs) and non-ELL students largely demonstrated the PALS-K to yield configural and metric invariance with respect to associations between subtests and latent dimensions. In combination, these results support the educational utility of the PALS-K as a tool for assessing important reading constructs and informing early interventions across groups of Spanish-speaking ELL and non-ELL students.
A common use of language tests is to support decisions about examinees, such as placement into appropriate classes. Research on placement testing has focused mainly on English for Academic Purposes (EAP) in higher education contexts. However, there is little research exploring the use of language tests to place students in English as a Second Language (ESL) support classes in secondary education. The present study examined the relationship between secondary school students’ scores on a standardized English-language test and the placement of these students into ESL classes by their language teachers. Ninety-two ESL students in two English-medium schools took TOEFL® Junior™ Standard. For the same students, data collection also included teachers’ judgments regarding the ESL classes the students should attend. Strong correlations were found between test scores and the teacher-assigned ESL levels. Moreover, the results of a logistic regression analysis indicated a high degree of overlap between the teacher-assigned ESL levels and the levels predicted from the TOEFL Junior Standard scores. The findings of this study provide some preliminary evidence to support the use of TOEFL Junior Standard as an initial screening tool for ESL placement. The limitations and implications of these findings for ESL placement decisions in secondary education are also discussed.
A major concern with computer-based (CB) tests of second-language (L2) writing is that performance on such tests may be influenced by test-takers’ keyboarding skills. Poor keyboarding skills may force test-takers to focus their attention and cognitive resources on motor activities (i.e., keyboarding), so that other processes and aspects of writing (e.g., planning, revising) are left unattended, which can lead to poorer text quality and lower test scores. Such effects might be more pronounced for L2 test-takers. This study investigated the impact of keyboarding skills on test-takers’ scores in the context of the TOEFL-iBT Writing Section. Ninety-seven test-takers, with differing levels of English language proficiency (low vs. high) and keyboarding skills (low vs. high), each responded to two TOEFL-iBT writing tasks (independent and integrated) on the computer. Test scores were statistically compared across tasks and test-taker groups. The findings indicated that overall English language proficiency and writing ability in English contributed substantially to variance in task scores, while keyboarding skill had a significant but weak effect on task scores. Additionally, the effect of keyboarding skills depended on task type. While these findings support the claim that performance on TOEFL-iBT writing tasks depends mainly on test-takers’ English language proficiency, they also raise important questions about the relationships between keyboarding skills, L2 writing ability, and performance on CB L2 writing tests, as well as the factors affecting these relationships.
Language Testing is an international peer-reviewed journal that publishes original research on language testing and assessment. Since 1984 it has featured high-impact papers covering theoretical issues, empirical studies, and reviews. The journal's wide scope encompasses first and second language testing and assessment of English and other languages, and the use of tests and assessments as research and evaluation tools. Many articles also contribute to methodological innovation and the practical improvement of testing and assessment internationally. In addition, the journal publishes submissions that deal with policy issues, including the use of language tests and assessments for high-stakes decision making in fields as diverse as education, employment and international mobility. The journal welcomes the submission of papers that deal with ethical and philosophical issues in language testing, as well as technical matters. Also of concern is research into the washback and impact of language test use, and ground-breaking uses of assessments for learning. Additionally, the journal wishes to publish replication studies that help to embed and extend our knowledge of generalisable findings in the field. Language Testing is committed to encouraging interdisciplinary research, and is keen to receive submissions which draw on theory and methodology from different fields of applied linguistics, as well as educational measurement and other relevant disciplines.
How to put the podcast onto your iPod
Decide which of the podcasts below you would like to listen to. Right-click on the link and select 'save target as' to download it into a folder on your computer.
Open iTunes. Click on 'file' and then 'new playlist'. Name your playlist 'Language Testing Bytes'.
Click on the playlist from the iTunes menu.
Open the folder in which you saved the podcast, then drag the podcast from the folder and drop it into the playlist.
Synchronize your iPod.
When you next access your iPod, go to the Language Testing Bytes playlist to play the podcast.
Alternatively, just pop it onto whichever MP3 player you currently use, or subscribe to the SAGE Podcast on iTunes.
Issue 17: Ryo Nitta and Fumiyo Nakatsuhara on pre-task planning in paired speaking tests
The authors of our first paper in 31(2) are concerned with a very practical question. What is the effect of giving test-takers planning time prior to a paired-format speaking task? Does it affect what they say? Does it change the scores they get? The answers will inform the design of speaking tests not only in high-stakes assessment contexts, but probably in classrooms as well.
Issue 16: Jodi Tommerdahl and Cynthia Kilpatrick on the reliability of morphological analyses in language samples
How large a language sample do we need in order to draw reliable conclusions about what we wish to assess? In issue 31(1) of Language Testing we are delighted to publish a paper by Jodi Tommerdahl and Cynthia Kilpatrick that addresses this important issue.
Issue 15: Stephen Bax on Eye-Tracking in Reading Tests
Issue 30(4) of the journal contains the first paper to use eye-tracking to investigate the cognitive processes of learners taking reading tests. Stephen Bax joins us to explain the methodology and what it can tell us about how successful readers go about processing items and texts in reading tests.
Issue 14: Ofra Inbar on Assessment Literacy
Issue 30(3) commemorates the 30th anniversary of the founding of the journal. We mark this milestone in the journal's history with a special issue on the topic of Assessment Literacy, guest edited by Ofra Inbar. A concern for the literacy needs of the wide range of stakeholders beyond the experts who use tests and test scores is a sign of a maturing profession. This issue takes the debate forward in new and exciting ways, some of which Ofra Inbar discusses on this podcast.
Issue 13: Paula Winke and Susan Gass on Rater Bias
Rater bias is something that language testers have known about for a long time, and have tried to control through training and the use of rating scales. But investigations into the source and nature of bias are relatively recent. In issue 30(2) of the journal, Paula Winke, Susan Gass, and Carol Myford share their research in this field, and the first two authors, from Michigan State University, join us on Language Testing Bytes to discuss rater bias.
Issue 12: Alan Davies on Assessing Academic English
In 2008 Alan Davies' book Assessing Academic English was published by Cambridge University Press. In issue 30(1) of Language Testing it is reviewed by Christine Coombe. With a strong historical narrative, the book raises many of the enduring issues in assessing English for study in English-medium institutions. In this podcast we explore some of these with Professor Davies.
Issue 11: Ana Pellicer-Sanchez and Norbert Schmitt on Yes-No Vocabulary Tests
In this issue of the podcast we return to vocabulary testing, after the great introduction provided by John Read in Issue 5. This time, we welcome Ana Pellicer-Sanchez and Norbert Schmitt to talk about the popular Yes-No Vocabulary Test. Their recent research looks at scoring issues and potential solutions to problems that have plagued the test for years. Their paper in issue 29(4) of the journal contains the details, but in the podcast we discuss the key issues for vocabulary assessment.
Issue 10: Kathryn Hill on Classroom Based Assessment
Classroom Based Assessment is an increasingly important topic in language education, and in issue 29(3) of Language Testing we publish a paper by Kathryn Hill and Tim McNamara entitled "Developing a comprehensive, empirically based research framework for classroom-based assessment". The research in this paper is based on the first author's PhD dissertation, and so we asked Kathryn Hill to join us on Language Testing Bytes to talk about developments in the field.
Issue 9: Luke Harding on Accent in Listening Assessment
Issue 29(2) of the journal contains a paper entitled "Accent, listening assessment and the potential for a shared-L1 advantage: A DIF perspective", by Luke Harding. In this podcast we explore why it is that most listening tests use a very narrow range of standard accents, rather than the many varieties that we are likely to encounter in real-world communication.
Issue 8: Tan Jin and Barley Mak on Confidence Scoring
In Issue 29(1) of the journal, three authors from the Chinese University of Hong Kong have a paper on the application of fuzzy logic to scoring speaking tests. This is termed 'confidence scoring', and the first two authors join us on Language Testing Bytes to explain a little more about their novel approach.
Issue 7: Mark Wilson on Measurement Models
Mark Wilson delivered the Messick Memorial Lecture at the Language Testing Research Colloquium in Melbourne, 2006, on new developments in measurement models that take into account the complexity of language testing. In Language Testing 28(4) we publish the paper based on this lecture, and Mark joins us on Language Testing Bytes to talk about his work in this area.
Issue 6: Craig Deville and Micheline Chalhoub-Deville on Standards-Based Testing
Standards-Based Testing is highly controversial, both for its social and educational impact on schools and bilingual communities and for its technical aspects, which rely to a significant extent on expert judgment. In issue 28(3) we discuss the issues surrounding Standards-Based Testing in the United States with the guest editors of a special issue on this topic. The collection of papers that they have brought together, along with reviews of recent books on the topic and a test review, constitutes a state-of-the-art volume for the field.
Issue 5: John Read on Vocabulary Assessment
The journal has seen a flurry of articles on vocabulary testing in recent months, and issue 28(2) is no exception, with Marta Fairclough's paper on the lexical recognition task. It seemed like an appropriate moment to consider why vocabulary is receiving so much attention, and so we turned to Professor John Read of the University of Auckland, New Zealand, to give us an overview of current research and activity within the field.
Issue 4: Khaled Barkaoui and Melissa Bowles on Think Aloud Protocols
In Language Testing 28(1), 2011, Khaled Barkaoui has an article on the use of think-alouds to investigate rater processes and decisions as they rate essay samples. The focus is not on the raters, but on whether the research method is a useful tool for the purpose. In this podcast he explains his findings, and their importance. We are then joined by Melissa Bowles who has recently published The Think-Aloud Controversy in Second Language Research, to explain precisely what the problems and possibilities of think-alouds are in language testing research.
Issue 3: Jim Purpura on Assessing Grammar
Language Testing 27(4), 2010, contains an article by Carol Chapelle and colleagues on testing productive grammatical ability. We thought this would be an excellent opportunity to look at what is going on in the field of assessing grammar, and what issues currently face the field. Jim Purpura agreed to talk to us on Language Testing Bytes.
Issue 2: Xiaoming Xi on Automated Scoring
Language Testing 27(3), 2010, is a special issue guest edited by Xiaoming Xi on the automated scoring of writing and speaking tests. In this podcast she talks about why the automated scoring of speaking and writing tests is such a hot topic, and explains the possibilities, limitations, and current research issues in the field.
Issue 1: Mike Kane on Validity and Fairness
In Language Testing 27(2), 2010, Mike Kane contributed a response to an article on fairness in language testing. We thought this was an excellent opportunity to ask him about his approach to validation, and how he sees 'fairness' fitting into the picture.