Language Testing Bytes is a podcast to accompany the SAGE journal Language Testing. Three or four times per year, we will release a podcast in which we discuss topics related to a particular issue of the journal. This may be an interview with a contributor to the journal or with another expert in the field. You can download the podcast from this website or from ltj.sagepub.com, or you can subscribe to it through iTunes.
Coming Soon: The next podcast, featuring Hyejeong Kim of Melbourne University talking about the assessment of aviation English, will be released in April 2015.
News: Issue 22 of Language Testing Bytes will be the last. I am standing down as Editor of Language Testing at the end of 2015, and the publisher does not have funds to pay for the continuation of the podcast. I am considering whether I am able to set up a new podcast in its place, but without sponsorship this may be highly problematic. I will, however, maintain an archive of the 22 issues that I've had the pleasure of hosting since 2010.
The rise in the affordability of quality video production equipment has resulted in increased interest in video-mediated tests of foreign language listening comprehension. Although research on such tests has continued fairly steadily since the early 1980s, studies have relied on analyses of raw scores, despite the growing prevalence of item response theory in the field of language testing as a whole. The present study addresses this gap by comparing data from identical, counter-balanced multiple-choice listening test forms employing three text types (monologue, conversation, and lecture) administered to 164 university students of English in Japan. Data were analyzed via many-facet Rasch modeling to compare the difficulties of the audio and video formats; to investigate interactions between format and text-type, and format and proficiency level; and to identify specific items biased toward one or the other format. Finally, items displaying such differences were subjected to differential distractor functioning analyses. No interactions between format and text-type, or format and proficiency level, were observed. Four items were discovered displaying format-based differences in difficulty, two of which were found to correspond to possible acting anomalies in the videos. The author argues for further work focusing on item-level interactions with test format.
In this study I examined the dimensionality of the latent ability underlying the language use needed to fulfill the demands young learners face in English-medium instructional environments, where English is used as the means of instruction for teaching subject matter. Previous research on English language use by school-age children provided evidence that language proficiency for academic studies is related to, yet differs from, the language ability needed for social communication. Focusing on learners of English as a foreign language (EFL), I investigated the nature of the language proficiency of school-age EFL learners in light of their learning experience.
Analyses were based on test performance from the TOEFL Junior® Comprehensive test, a proficiency assessment of English as a foreign language for young learners between the ages of 11 and 15, developed by the Educational Testing Service. The results showed that the two ability constructs (i.e., academic and social language), although theoretically distinct and educationally relevant, were statistically indistinguishable based on EFL learners’ test performance. It was also found that the test performance could be explained best by a higher-order model, indicating that the language ability of these young EFL learners was structurally similar to that usually found with adult learners in a foreign language environment.
The outcomes highlight the interrelatedness of learning environment, age and language proficiency. On the one hand, the nature of the ability construct can vary across groups of learners due to differences in learning environments. On the other hand, learners of different ages who share similar learning environments could be similar in terms of the latent representation of their language proficiency. The study concludes that the interpretation of young EFL learners’ language proficiency needs to take into consideration how language components are developmentally related to each other as a function of learning experience in a foreign language environment.
The purpose of this study was to determine the extent to which performance on the TOEFL iBT speaking section is associated with other indicators of Japanese university students’ abilities to communicate orally in an academic English environment and to determine which components of oral ability for these tasks are best assessed by TOEFL iBT. To achieve this aim, TOEFL iBT speaking scores were compared to performances on group oral discussion, picture and graph description, and prepared oral presentation tasks, and to their component scores of pronunciation, fluency, grammar/vocabulary, interactional competence, descriptive skill, delivery skill, and question answering. Participants were Japanese university students (N = 222) majoring in English. Pearson product–moment correlations, corrected for attenuation, between scores on the speaking section of TOEFL iBT and the three university tasks indicated strong relationships between the TOEFL iBT speaking scores and the three university tasks, and high or moderate correlations between the TOEFL iBT speaking scores and the components of oral ability. For the components of oral ability, pronunciation, fluency, and vocabulary/grammar were highly associated with TOEFL iBT speaking scores, while interactional competence, descriptive skill, and delivery skill were moderately associated with TOEFL iBT speaking scores. The findings suggest that TOEFL iBT speaking scores are good overall indicators of academic oral ability and that they are better measures of pronunciation, fluency and vocabulary/grammar than they are of interactional competence, descriptive skill, and presentation delivery skill.
Self-assessment has been used to assess second language proficiency; however, varying sources of measurement error may threaten the validity and reliability of such tools. The present paper investigated the role of experience using Japanese as a second language in a naturalistic acquisition context in the accuracy of self-assessment. Results revealed that experiential factors played a significant role in the measurement errors introduced by the self-assessment. An asymmetrical pattern emerged, whereby less experienced second language speakers appeared to overestimate their ability, whereas those with more experience underestimated their language skills. The Rasch analysis identified poorly fitting items in the self-assessment survey, and the subsequent qualitative analysis of these items indicated that greater misalignment was related to items involving more difficult tasks, with which participants had relatively little experience. Implications for the development and use of self-assessment are discussed in relation to experiential factors.
In the present study, aspects of the measurement of writing are disentangled in order to investigate the validity of inferences made on the basis of writing performance and to describe implications for the assessment of writing. To include genre as a facet in the measurement, we obtained writing scores of 12 texts in four different genres for each participating student. Results indicate that across raters, tasks and genres, only 10% of the variance in writing scores is related to individual writing skill. In order to draw conclusions about writing proficiency, students should therefore write at least three different texts in each of four genres rated by at least two raters. Moreover, when writing scores are obtained through highly similar tasks, generalization across genres is not warranted. Inferences based on text quality scores should, in this case, be limited to genre-specific writing. These findings replicate the large task variance in writing assessment as consistently found in earlier research and emphasize the effect of genre on the generalizability of writing scores. This research has important implications for writing research and writing education, in which writing proficiency is quite often assessed by only one task rated by one rater.
Implementing assessment reform can be challenging. Proposed new assessments must be seen by stakeholders to be fit for purpose, and sometimes the perceptions of key stakeholders, such as teachers and students, may differ from those of the assessment developers. This article considers the recent introduction of a new high-stakes assessment of spoken proficiency for students of foreign languages in New Zealand high schools. The new assessment aims to measure spoken proficiency through the recording of a range of unstaged peer-to-peer interactions as they take place throughout the year. It contrasts with an earlier assessment that drew on a summative teacher-led interview. The article presents findings from a survey of teachers (n = 152), completed two years into the assessment reform, in which teachers were asked to consider the relative usefulness of the two assessment formats. Findings suggest that teachers consider the new assessment to be, in most respects, significantly more useful than the earlier model, and that the new assessment is working relatively well. Some challenges emerge, however, particularly around the feasibility and fairness of collecting ongoing evidence of spontaneous peer-to-peer performance. Findings raise issues to be considered if the new assessment is to work more successfully.
Language Testing is an international peer-reviewed journal that publishes original research on language testing and assessment. Since 1984 it has featured high-impact papers covering theoretical issues, empirical studies, and reviews. The journal's wide scope encompasses first and second language testing and assessment of English and other languages, and the use of tests and assessments as research and evaluation tools. Many articles also contribute to methodological innovation and the practical improvement of testing and assessment internationally. In addition, the journal publishes submissions that deal with policy issues, including the use of language tests and assessments for high-stakes decision making in fields as diverse as education, employment and international mobility. The journal welcomes the submission of papers that deal with ethical and philosophical issues in language testing, as well as technical matters. Also of concern is research into the washback and impact of language test use, and ground-breaking uses of assessments for learning. Additionally, the journal wishes to publish replication studies that help to embed and extend our knowledge of generalisable findings in the field. Language Testing is committed to encouraging interdisciplinary research, and is keen to receive submissions which draw on theory and methodology from different fields of applied linguistics, as well as educational measurement, and other relevant disciplines.
How to put the podcast onto your iPod
Decide which of the podcasts below you would like to listen to. Right-click on the link and select 'Save target as' to download it to a folder on your computer.
Open iTunes. Click on 'file' and then 'new playlist'. Name your playlist 'Language Testing Bytes'.
Click on the playlist from the iTunes menu.
Open the folder in which you saved the podcast, then drag the podcast from the folder and drop it into the playlist.
Synchronize your iPod.
When you next access your iPod go to the Language Testing Bytes playlist to play the podcast.
Alternatively, just pop it on whichever mp3 player you currently use, or subscribe to the SAGE Podcast on iTunes.
Issue 20: Martin East on Assessment Reform.
In this issue of the podcast Martin East describes an assessment reform project in New Zealand. We're reminded very forcefully that when assessment and testing procedures within educational systems are changed, there are many complex factors to take into account. All stakeholders are going to take a view on the proposed reforms, and they aren't necessarily going to agree.
Issue 19: Fred Davidson and Cary Lin of the University of Illinois at Urbana-Champaign discuss the role of statistics in language testing.
The last issue of volume 31 contains a review of Rita Green's new book on statistics in language testing. We take the opportunity to talk about how things have changed in teaching statistics for students of language testing since Fred Davidson's The language tester's statistical toolbox was published in 2000.
Issue 18: Folkert Kuiken and Ineke Vedder from the University of Amsterdam discuss rater variability in the assessment of speaking and writing in a second language.
The third issue of the journal this year is a special issue on the scoring of performance tests. In this podcast the guest editors talk about some of the issues surrounding the rating of speaking and writing samples.
Issue 17: Ryo Nitta and Fumiyo Nakatsuhara on pre-task planning in paired speaking tests
The authors of our first paper in 31(2) are concerned with a very practical question. What is the effect of giving test-takers planning time prior to a paired-format speaking task? Does it affect what they say? Does it change the scores they get? The answers will inform the design of speaking tests not only in high stakes assessment contexts, but probably in classrooms as well.
Issue 16: Jodi Tommerdahl and Cynthia Kilpatrick on the reliability of morphological analyses in language samples
How large a language sample do we need in order to draw reliable conclusions about what we wish to assess? In issue 31(1) of Language Testing we are delighted to publish a paper by Jodi Tommerdahl and Cynthia Kilpatrick that addresses this important issue.
Issue 15: Stephen Bax on Eye-Tracking in Reading Tests
Issue 30(4) of the journal contains the first paper to use eye-tracking to investigate the cognitive processes of learners taking reading tests. Stephen Bax joins us to explain the methodology and what it can tell us about how successful readers go about processing items and texts in reading tests.
Issue 14: Ofra Inbar on Assessment Literacy
Issue 30(3) commemorates the 30th Anniversary of the founding of the journal. We mark this milestone in the journal's history with a special issue on the topic of Assessment Literacy, guest edited by Ofra Inbar. A concern for the assessment literacy needs of the wide range of stakeholders, beyond the experts, who use tests and test scores is a sign of a maturing profession. This issue takes the debate forward in new and exciting ways, some of which Ofra Inbar discusses on this podcast.
Issue 13: Paula Winke and Susan Gass on Rater Bias
Rater bias is something that language testers have known about for a long time, and have tried to control through training and the use of rating scales. But investigations into the source and nature of bias are relatively recent. In issue 30(2) of the journal Paula Winke, Susan Gass, and Carol Myford share their research in this field, and the first two authors from Michigan State University join us on Language Testing Bytes to discuss rater bias.
Issue 12: Alan Davies on Assessing Academic English
In 2008 Alan Davies' book Assessing Academic English was published by Cambridge University Press. In issue 30(1) of Language Testing it is reviewed by Christine Coombe. With a strong historical narrative, the book raises many of the enduring issues in assessing English for study in English-medium institutions. In this podcast we explore some of these with Professor Davies.
Issue 11: Ana Pellicer-Sanchez and Norbert Schmitt on Yes-No Vocabulary Tests
In this issue of the podcast we return to vocabulary testing, after the great introduction provided by John Read in Issue 5. This time, we welcome Ana Pellicer-Sanchez and Norbert Schmitt to talk about the popular Yes-No Vocabulary Test. Their recent research looks at scoring issues and potential solutions to problems that have plagued the test for years. Their paper in issue 29(4) of the journal contains the details, but in the podcast we discuss the key issues for vocabulary assessment.
Issue 10: Kathryn Hill on Classroom Based Assessment
Classroom Based Assessment is an increasingly important topic in language education, and in issue 29(3) of Language Testing we publish a paper by Kathryn Hill and Tim McNamara entitled "Developing a comprehensive, empirically based research framework for classroom-based assessment". The research in this paper is based on the first author's PhD dissertation, and so we asked Kathryn Hill to join us on Language Testing Bytes to talk about developments in the field.
Issue 9: Luke Harding on Accent in Listening Assessment
Issue 29(2) of the journal contains a paper entitled "Accent, listening assessment and the potential for a shared-L1 advantage: A DIF perspective", by Luke Harding. In this podcast we explore why it is that most listening tests use a very narrow range of standard accents, rather than the many varieties that we are likely to encounter in real-world communication.
Issue 8: Tan Jin and Barley Mak on Confidence Scoring
In Issue 29(1) of the journal three authors from the Chinese University of Hong Kong have a paper on the application of fuzzy logic to scoring speaking tests. This is termed 'confidence scoring', and the first two authors join us on Language Testing Bytes to explain a little more about their novel approach.
Issue 7: Mark Wilson on Measurement Models
Mark Wilson delivered the Messick Memorial Lecture at the Language Testing Research Colloquium in Melbourne, 2006, on new developments in measurement models that take into account the complexity of language testing. In Language Testing 28(4) we publish the paper based on this lecture, and Mark joins us on Language Testing Bytes to talk about his work in this area.
Issue 6: Craig Deville and Micheline Chalhoub-Deville on Standards-Based Testing
Standards-Based Testing is highly controversial, both for its social and educational impact on schools and bilingual communities and for its technical aspects, which rely to a significant extent on expert judgment. In issue 28(3) we discuss the issues surrounding Standards-Based Testing in the United States with the guest editors of a special issue on this topic. The collection of papers that they have brought together, along with reviews of recent books on the topic and a test review, constitutes a state-of-the-art volume for the field.
Issue 5: John Read on Vocabulary Testing
The journal has seen a flurry of articles on vocabulary testing in recent months, and issue 28(2) is no exception, with Marta Fairclough's paper on the lexical recognition task. It seemed like an appropriate moment to consider why vocabulary is receiving so much attention, and so we turned to Professor John Read of the University of Auckland, New Zealand, to give us an overview of current research and activity within the field.
Issue 4: Khaled Barkaoui and Melissa Bowles on Think Aloud Protocols
In Language Testing 28(1), 2011, Khaled Barkaoui has an article on the use of think-alouds to investigate rater processes and decisions as they rate essay samples. The focus is not on the raters, but on whether the research method is a useful tool for the purpose. In this podcast he explains his findings, and their importance. We are then joined by Melissa Bowles who has recently published The Think-Aloud Controversy in Second Language Research, to explain precisely what the problems and possibilities of think-alouds are in language testing research.
Issue 3: Jim Purpura on Assessing Grammar
Language Testing 27(4), 2010, contains an article by Carol Chapelle and colleagues on testing productive grammatical ability. We thought this would be an excellent opportunity to look at what is going on in the field of assessing grammar, and what issues currently face the field. Jim Purpura agreed to talk to us on Language Testing Bytes.
Issue 2: Xiaoming Xi on Automated Scoring
Language Testing 27(3), 2010, is a special issue guest edited by Xiaoming Xi on the automated scoring of writing and speaking tests. In this podcast she talks about why the automated scoring of speaking and writing tests is such a hot topic, and explains the possibilities, limitations and current research issues in the field.
Issue 1: Mike Kane on Validity and Fairness
In Language Testing 27(2), 2010, Mike Kane contributed a response to an article on fairness in language testing. We thought this was an excellent opportunity to ask him about his approach to validation, and how he sees 'fairness' fitting into the picture.