Language Testing Bytes

Edited by:

Glenn Fulcher (University of Leicester, UK)

 
Podcasts to accompany the journal Language Testing from SAGE

Language Testing Bytes is a podcast to accompany the SAGE journal Language Testing. Three or four times per year, we will release a podcast in which we discuss topics related to a particular issue of the journal. This may be an interview with a contributor to the journal, or another expert in the field. You can download the podcast from this website, from ltj.sagepub.com, or you can subscribe to the podcast through iTunes.

Coming Soon: The next podcast will be released in late January 2015. Details of content will be available here before Christmas.




Current Journal Content

Dynamic assessment of elicited imitation: A case analysis of an advanced L2 English speaker
by van Compernolle, R. A., Zhang, H.

The focus of this paper is on the design, administration, and scoring of a dynamically administered elicited imitation test of L2 English morphology. Drawing on Vygotskian sociocultural psychology,...


Investigating correspondence between language proficiency standards and academic content standards: A generalizability theory study
by Lin, C.-K., Zhang, J.

Research on the relationship between English language proficiency standards and academic content standards serves to provide information about the extent to which English language learners (ELLs) a...


Strategies for testing statistical and practical significance in detecting DIF with logistic regression models
by Fidalgo, A. M., Alavi, S. M., Amirian, S. M. R.

This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strate...


Applying unidimensional and multidimensional item response theory models in testlet-based reading assessment
by Min, S., He, L.

This study examined the relative effectiveness of the multidimensional bi-factor model and multidimensional testlet response theory (TRT) model in accommodating local dependence in testlet-based re...


A study on the impact of fatigue on human raters when scoring speaking responses
by Ling, G., Mollaun, P., Xi, X.

The scoring of constructed responses may introduce construct-irrelevant factors to a test score and affect its validity and fairness. Fatigue is one of the factors that could negatively affect huma...


An examination of rater performance on a local oral English proficiency test: A mixed-methods approach
by Yan, X.

This paper reports on a mixed-methods approach to evaluate rater performance on a local oral English proficiency test. Three types of reliability estimates were reported to examine rater performanc...


Book Review: Statistical Analyses for Language Testers
by Geranpayeh, A.

Book Review: Classroom-based Language Assessment
by Hill, K.

Diagnostic English Language Needs Assessment (DELNA)
by Doe, C.

List of Reviewers




Language Testing is an international peer-reviewed journal that publishes original research on language testing and assessment. Since 1984 it has featured high-impact papers covering theoretical issues, empirical studies, and reviews. The journal's wide scope encompasses first and second language testing and assessment of English and other languages, and the use of tests and assessments as research and evaluation tools. Many articles also contribute to methodological innovation and the practical improvement of testing and assessment internationally.

In addition, the journal publishes submissions that deal with policy issues, including the use of language tests and assessments for high-stakes decision making in fields as diverse as education, employment, and international mobility. The journal welcomes papers that deal with ethical and philosophical issues in language testing, as well as technical matters. Also of concern is research into the washback and impact of language test use, and ground-breaking uses of assessments for learning. Additionally, the journal wishes to publish replication studies that help to embed and extend our knowledge of generalisable findings in the field.

Language Testing is committed to encouraging interdisciplinary research, and is keen to receive submissions that draw on theory and methodology from different fields of applied linguistics, as well as educational measurement and other relevant disciplines.




How to put the podcast onto your iPod

  1. Decide which of the podcasts below you would like to listen to. Right-click the link and select 'Save target as' to download it to a folder on your computer.
  2. Open iTunes. Click 'File' and then 'New Playlist'. Name your playlist 'Language Testing Bytes'.
  3. Click on the playlist in the iTunes menu.
  4. Open the folder in which you saved the podcast, then drag the podcast from the folder and drop it into the playlist.
  5. Synchronize your iPod.
  6. When you next use your iPod, go to the Language Testing Bytes playlist to play the podcast.

Alternatively, just pop it on whichever mp3 player you currently use, or subscribe to the SAGE Podcast on iTunes.

Current Issue

Issue 19: Fred Davidson and Cary Lin of the University of Illinois at Urbana-Champaign discuss the role of statistics in language testing.

The last issue of volume 31 contains a review of Rita Green's new book on statistics in language testing. We take the opportunity to talk about how things have changed in teaching statistics for students of language testing since Fred Davidson's The language tester's statistical toolbox was published in 2000.

Download:

 Statistics in Language Testing

Or Listen Now:





Previous Issues

Issue 18: Folkert Kuiken and Ineke Vedder from the University of Amsterdam discuss rater variability in the assessment of speaking and writing in a second language.

The third issue of the journal this year is a special on the scoring of performance tests. In this podcast the guest editors talk about some of the issues surrounding the rating of speaking and writing samples.

Download:

 Rating Performance Assessments

Or Listen Now:


Issue 17: Ryo Nitta and Fumiyo Nakatsuhara on pre-task planning in paired speaking tests

The authors of our first paper in 31(2) are concerned with a very practical question. What is the effect of giving test-takers planning time prior to a paired-format speaking task? Does it affect what they say? Does it change the scores they get? The answers will inform the design of speaking tests not only in high stakes assessment contexts, but probably in classrooms as well.

Download:

 Pre-task planning in paired speaking tests

Transcript


Or Listen Now:


Issue 16: Jodi Tommerdahl and Cynthia Kilpatrick on the reliability of morphological analyses in language samples

How large a language sample do we need in order to draw reliable conclusions about what we wish to assess? In issue 31(1) of Language Testing we are delighted to publish a paper by Jodi Tommerdahl and Cynthia Kilpatrick that addresses this important issue.

Download:

 The Reliability of Morphological Analyses in Language Samples

Transcript


Or Listen Now:


Issue 15: Stephen Bax on Eye Tracking Studies

Issue 30(4) of the journal contains the first paper on eye-tracking studies to investigate the cognitive processes of learners taking reading tests. Stephen Bax joins us to explain the methodology and what it can tell us about how successful readers go about processing items and texts in reading tests.

Download:

 Eye Tracking Studies

Transcript


Or Listen Now:


Issue 14: Ofra Inbar on Assessment Literacy

Issue 30(3) commemorates the 30th Anniversary of the founding of the journal. We mark this milestone in the journal's history with a special issue on the topic of Assessment Literacy, guest edited by Ofra Inbar. A concern for the literacy needs of a wide range of stakeholders who use tests and test scores beyond the experts is a sign of a maturing profession. This issue takes the debate forward in new and exciting ways, some of which Ofra Inbar discusses on this podcast.

Download:

 Assessment Literacy

Transcript


Or Listen Now:


Issue 13: Paula Winke and Susan Gass on Rater Bias

Rater bias is something that language testers have known about for a long time, and have tried to control through training and the use of rating scales. But investigations into the source and nature of bias are relatively recent. In issue 30(2) of the journal Paula Winke, Susan Gass, and Carol Myford share their research in this field, and the first two authors, from Michigan State University, join us on Language Testing Bytes to discuss rater bias.

Download:

 Rater Bias

Transcript


Or Listen Now:


Issue 12: Alan Davies on Assessing Academic English

In 2008 Alan Davies' book Assessing Academic English was published by Cambridge University Press. In issue 30(1) of Language Testing it is reviewed by Christine Coombe. With a strong historical narrative, the book raises many of the enduring issues in assessing English for study in English medium institutions. In this podcast we explore some of these with Professor Davies.

Download:

 Assessing Academic English

Transcript


Or Listen Now:



Issue 11: Ana Pellicer-Sanchez and Norbert Schmitt on Yes-No Vocabulary Tests

In this issue of the podcast we return to vocabulary testing, after the great introduction provided by John Read in Issue 5. This time we welcome Ana Pellicer-Sanchez and Norbert Schmitt to talk about the popular Yes-No Vocabulary Test. Their recent research looks at scoring issues and potential solutions to problems that have plagued the test for years. Their paper in issue 29(4) of the journal contains the details, but in the podcast we discuss the key issues for vocabulary assessment.

Download:

 Yes-No Vocabulary Tests

Transcript


Or Listen Now:



Issue 10: Kathryn Hill on Classroom Based Assessment

Classroom Based Assessment is an increasingly important topic in language education, and in issue 29(3) of Language Testing we publish a paper by Kathryn Hill and Tim McNamara entitled "Developing a comprehensive, empirically based research framework for classroom-based assessment". The research in this paper is based on the first author's PhD dissertation, and so we asked Kathryn Hill to join us on Language Testing Bytes to talk about developments in the field.

Download:

 Classroom Based Assessment

Transcript


Or Listen Now:



Issue 9: Luke Harding on Accent in Listening Assessment

Issue 29(2) of the journal contains a paper entitled "Accent, listening assessment and the potential for a shared-L1 advantage: A DIF perspective", by Luke Harding. In this podcast we explore why it is that most listening tests use a very narrow range of standard accents, rather than the many varieties that we are likely to encounter in real-world communication.

Download:

 Accents in Listening Assessment

Transcript


Or Listen Now:



Issue 8: Tan Jin and Barley Mak on Confidence Scoring

In Issue 29(1) of the journal three authors from the Chinese University of Hong Kong have a paper on the application of fuzzy logic to scoring speaking tests. This is termed 'confidence scoring', and the first two authors join us on Language Testing Bytes to explain a little more about their novel approach.

Download:

 Confidence Scoring

Transcript


Or Listen Now:



Issue 7: Mark Wilson on Measurement Models

Mark Wilson delivered the Messick Memorial Lecture at the Language Testing Research Colloquium in Melbourne, 2006, on new developments in measurement models to take into account the complexity of language testing. In Language Testing 28(4) we publish the paper based on this lecture, and Mark joins us on Language Testing Bytes to talk about his work in this area.

Download:

 Measurement Models

Transcript


Or Listen Now:



Issue 6: Craig Deville and Micheline Chalhoub-Deville on Standards-Based Testing

Standards-Based Testing is highly controversial, both for its social and educational impact on schools and bilingual communities and for technical aspects that rely to a significant extent on expert judgment. In issue 28(3) we discuss the issues surrounding Standards-Based Testing in the United States with the guest editors of a special issue on this topic. The collection of papers that they have brought together, along with reviews of recent books on the topic and a test review, constitutes a state-of-the-art volume for the field.

Download:

 Standards-Based Testing

Transcript


Or Listen Now:



Issue 5: John Read on Vocabulary

The journal has seen a flurry of articles on vocabulary testing in recent months, and issue 28(2) is no exception, with Marta Fairclough's paper on the lexical recognition task. It seemed like an appropriate moment to consider why vocabulary is receiving so much attention, and so we turned to Professor John Read of the University of Auckland, New Zealand, to give us an overview of current research and activity within the field.

Download:

 John Read on Vocabulary

Transcript


Or Listen Now:



Issue 4: Khaled Barkaoui and Melissa Bowles on Think Aloud Protocols

In Language Testing 28(1), 2011, Khaled Barkaoui has an article on the use of think-alouds to investigate rater processes and decisions as they rate essay samples. The focus is not on the raters, but on whether the research method is a useful tool for the purpose. In this podcast he explains his findings, and their importance. We are then joined by Melissa Bowles who has recently published The Think-Aloud Controversy in Second Language Research, to explain precisely what the problems and possibilities of think-alouds are in language testing research.

Download:

 Khaled Barkaoui and Melissa Bowles on Think Aloud Protocols

Transcript


Or Listen Now:



Issue 3: Jim Purpura on Grammar

Language Testing 27(4), 2010, contains an article by Carol Chapelle and colleagues on testing productive grammatical ability. We thought this would be an excellent opportunity to look at what is going on in the field of assessing grammar, and what issues currently face the field. Jim Purpura agreed to talk to us on Language Testing Bytes.

Download:

 Jim Purpura on Testing Grammar

Transcript


Or Listen Now:



Issue 2: Xiaoming Xi on Automated Scoring

Language Testing 27(3), 2010, is a special issue guest edited by Xiaoming Xi on the automated scoring of writing and speaking tests. In this podcast she talks about why the automated scoring of speaking and writing tests is such a hot topic, and explains the possibilities, limitations and current research issues in the field.

Download:

 Xiaoming Xi on Automated Scoring

Transcript


Or Listen Now:



Issue 1: Mike Kane on Validation

In Language Testing 27(2), 2010, Mike Kane contributed a response to an article on fairness in language testing. We thought this was an excellent opportunity to ask him about his approach to validation, and how he sees 'fairness' fitting into the picture.

Download:

 Mike Kane on Validation

Transcript


Or Listen Now: